Posts

Show HN: Valknut – static analysis to tame agent tech debt https://ift.tt/qmNJ7rA

Hi y'all, In my work to reduce the time I spend in the agentic development loop, I observed that code structure was one of the biggest determinants of agent task success. Ironically, agents aren't good at structuring code for their own consumption, so left to their own devices, purely vibe-coded projects tend toward dumpster-fire status. Agents aren't great at refactoring out of the box either, so rather than resign myself to babysitting refactors to maintain agent performance, I wrote a tool that puts agents on rails while refactoring. Another big problem I encountered while trying to remove myself from the loop was knowing where to spend my time efficiently when I did dive into the codebase. To combat this I implemented an HTML report that simplifies identifying high-level problems. In many cases you can click from an issue in the report directly to the code via VS Code links. I hope you find this tool as usef...
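Valknut's report code isn't shown here, but the VS Code deep links it mentions rest on a documented URL scheme: `vscode://file/{path}:{line}:{column}` opens that location in an installed VS Code. A minimal sketch of the technique (the `vscode_link`/`issue_row` helpers and the sample issue are hypothetical, not Valknut's API):

```python
import html

def vscode_link(path, line=1, col=1):
    """Build a vscode:// deep link that opens `path` at line:col in VS Code."""
    return f"vscode://file/{path}:{line}:{col}"

def issue_row(issue):
    """Render one report issue as a clickable HTML list item."""
    href = vscode_link(issue["path"], issue["line"])
    return (f'<li><a href="{html.escape(href)}">'
            f'{html.escape(issue["message"])}</a></li>')

# Hypothetical issue record, just to exercise the rendering.
issues = [{"path": "/repo/src/app.py", "line": 42,
           "message": "function too complex (cyclomatic 23)"}]
report = "<ul>\n" + "\n".join(issue_row(i) for i in issues) + "\n</ul>"
print(report)
```

Any browser rendering the report hands the `vscode://` URL to the OS, which dispatches it to VS Code's registered handler.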

Show HN: RunMat – runtime with auto CPU/GPU routing for dense math https://ift.tt/dqnQ4xB

Hi, I'm Nabeel. In August I released RunMat as an open-source runtime for MATLAB code that was already much faster than GNU Octave on the workloads I tried. https://ift.tt/yQbMwHx Since then, I've taken it further with RunMat Accelerate: the runtime now automatically fuses operations and routes work between CPU and GPU. You write MATLAB-style code, and RunMat runs your computation across CPUs and GPUs for speed. No CUDA, no kernel code. Under the hood, it builds a graph of your array math, fuses long chains into a few kernels, keeps data on the GPU when that helps, and falls back to CPU JIT / BLAS for small cases. On an Apple M2 Max (32 GB), here are some current benchmarks (median of several runs):

* 5M-path Monte Carlo: RunMat ≈ 0.61 s, PyTorch ≈ 1.70 s, NumPy ≈ 79.9 s → ~2.8× faster than PyTorch and ~130× faster than NumPy on this test.
* 64 × 4K image preprocessing pipeline (mean/std, normalize, gain/bias, gamma,...
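RunMat's internals aren't shown in the post, but the "fuses long chains into a few kernels" idea can be sketched in miniature: instead of running each elementwise op eagerly with its own temporary array, collect the chain and treat it as one composed operation. The `fuse` helper below is purely illustrative (in plain Python the composed chain still runs op by op; a real runtime would compile the grouped chain into a single GPU kernel or JIT-ed loop):

```python
import numpy as np

def fuse(ops):
    """Group a chain of elementwise ops into one callable.

    This models the fusion *decision*: the runtime picks the chain to
    treat as a unit.  Here we just apply the ops in turn; a real
    backend would emit one kernel for the whole composed function.
    """
    def fused(x):
        for op in ops:
            x = op(x)
        return x
    return fused

# Elementwise chain: sqrt(2*x + 1), expressed as three ops but
# handed to the backend as a single fused unit.
chain = fuse([lambda a: a * 2.0, lambda a: a + 1.0, np.sqrt])
out = chain(np.array([0.0, 4.0, 12.0]))
print(out)
```

The payoff of real fusion is fewer passes over memory and no intermediate arrays, which is where most of the speedup on bandwidth-bound elementwise chains comes from.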

Show HN: An AI zettelkasten that extracts ideas from articles, videos, and PDFs https://ift.tt/Zjubwnk

Hey HN! Over the weekend (leaning heavily on Opus 4.5) I wrote Jargon - an AI-managed zettelkasten that reads articles, papers, and YouTube videos, extracts the key ideas, and automatically links related concepts together. Demo video: https://youtu.be/W7ejMqZ6EUQ Repo: https://ift.tt/tdTBXwP You can paste an article, PDF link, or YouTube video to parse, or ask questions directly and it'll find its own content. Sources get summarized, broken into insight cards, and embedded for semantic search. Similar ideas automatically cluster together. Each insight can spawn research threads - questions that trigger web searches to pull in related content, which flows through the same pipeline. You can explore the graph of linked ideas directly, or ask questions and it'll RAG over your whole library plus fresh web results. Jargon uses Rails + Hotwire with Falcon for async processing, pgvector for embeddings, Exa ...
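The post doesn't show how "similar ideas automatically cluster together"; a common approach, and a plausible reading given the pgvector mention, is cosine similarity over insight embeddings with a threshold. A toy sketch with hand-made 3-d vectors (the `link_insights` helper and its threshold are assumptions for illustration, not Jargon's code):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def link_insights(cards, threshold=0.8):
    """Link every pair of insight cards whose embeddings are similar enough."""
    links = []
    for i in range(len(cards)):
        for j in range(i + 1, len(cards)):
            if cosine(cards[i]["vec"], cards[j]["vec"]) >= threshold:
                links.append((cards[i]["id"], cards[j]["id"]))
    return links

# Toy 3-d embeddings; a real system would use a text-embedding model
# and store/query the vectors with pgvector instead of comparing in Python.
cards = [
    {"id": "zettel-1", "vec": np.array([1.0, 0.1, 0.0])},
    {"id": "zettel-2", "vec": np.array([0.9, 0.2, 0.0])},
    {"id": "zettel-3", "vec": np.array([0.0, 0.0, 1.0])},
]
print(link_insights(cards))
```

At scale the pairwise loop is replaced by a nearest-neighbor index (pgvector supports this server-side), but the linking rule is the same.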

Show HN: Rust-based ultra-low latency streaming framework – Wingfoil https://ift.tt/ADnY4uZ

https://ift.tt/KvdWyMX December 1, 2025 at 11:56PM

Show HN: FFmpeg Engineering Handbook https://ift.tt/ODbaz07

https://ift.tt/jkUAReg December 1, 2025 at 11:31PM

Show HN: Memory Lane – bootstrap your naive Claude instances with their history https://ift.tt/hPgenk5

https://ift.tt/GqlfewJ December 1, 2025 at 04:04AM