Posts

Show HN: Stage – Putting humans back in control of code review https://ift.tt/jYkKsr7

Show HN: Stage – Putting humans back in control of code review

Hey HN! We're Charles and Dean, and we're building Stage: a code review tool that guides you through reading a PR step by step, instead of piecing together a giant diff. Here's a demo video: https://ift.tt/anDedrW. You can play around with some example PRs here: https://ift.tt/OHvftMp.

Teams are moving faster than ever with AI these days, but more and more engineers are merging changes that they don't really understand. The bottleneck isn't writing code anymore; it's reviewing it.

We're two engineers who got frustrated with GitHub's UI for code review. As coding agents took off, we saw our PR backlog pile up faster than we could handle. Not only that, the PRs themselves were getting larger and harder to understand, and we found ourselves spending most of our time trying to build a mental model of what a PR was actually doing. We built Stage to make reviewing a PR feel more like reading chap...

Show HN: Arrow. Point your phone, walk where it says, find out where https://ift.tt/sKAgU1X

Show HN: Arrow. Point your phone, walk where it says, find out where https://kouh.me/arrow April 16, 2026 at 11:36PM

Show HN: I built a music theory course with games and spaced repetition https://ift.tt/X6UxKS5

Show HN: I built a music theory course with games and spaced repetition

I’ve spent a year building a theory learning path that starts from scratch and goes all the way up to topics like Secondary Dominants and Borrowed Chords. It uses a combination of games, interactive lessons, and spaced repetition to help you understand and remember concepts, rather than learning something new and forgetting it in a few days.

I’m trying to figure out:

1. Is the progression logical?
2. What am I missing that you’d like to see in there?
3. Where does it get confusing and could use more clarification?

https://ift.tt/X0dpWjt April 16, 2026 at 10:29PM

Show HN: CodeBurn – Analyze Claude Code token usage by task https://ift.tt/e8IT0Uj

Show HN: CodeBurn – Analyze Claude Code token usage by task

Built this after realizing I was spending ~$1400/week on Claude Code with almost no visibility into what was actually consuming tokens. Tools like ccusage give a cost breakdown per model and per day, but I wanted to understand usage at the task level.

CodeBurn reads the JSONL session transcripts that Claude Code stores locally (~/.claude/projects/) and classifies each turn into 13 categories based on tool usage patterns (no LLM calls involved). One surprising result: about 56% of my spend was on conversation turns with no tool usage. Actual coding (edits/writes) was only ~21%.

The interface is an interactive terminal UI built with Ink (React for terminals), with gradient bar charts, responsive panels, and keyboard navigation. There’s also a SwiftBar menu bar integration for macOS. Happy to hear feedback or ideas.

https://ift.tt/x2bXt6I April 14, 2026 at 05:57AM
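A minimal sketch of the turn-classification idea described above. The transcript field names (`type`, `message`, `content`, `tool_use`) and the category rules here are illustrative assumptions about the JSONL schema, not CodeBurn's actual code or its 13 categories:

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical classifier: field names and tool names below are
# assumptions about the transcript schema, not CodeBurn's real logic.
def classify_turn(turn: dict) -> str:
    content = turn.get("message", {}).get("content", [])
    tools = [b.get("name", "") for b in content
             if isinstance(b, dict) and b.get("type") == "tool_use"]
    if not tools:
        return "conversation"          # turn with no tool usage at all
    if any(t in ("Edit", "Write") for t in tools):
        return "coding"                # actual edits/writes
    if any(t in ("Read", "Grep", "Glob") for t in tools):
        return "exploration"
    return "other"

def summarize(transcript_path: Path) -> Counter:
    """Count assistant turns per category in one JSONL transcript."""
    counts = Counter()
    with transcript_path.open() as f:
        for line in f:
            turn = json.loads(line)
            if turn.get("type") == "assistant":
                counts[classify_turn(turn)] += 1
    return counts
```

Tallying categories this way, with no LLM in the loop, is what makes the "56% of spend had no tool usage" style of breakdown cheap to compute over every stored session.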

Show HN: MCP server gives your agent a budget (save tokens, get smarter results) https://ift.tt/ykMrxQ7

Show HN: MCP server gives your agent a budget (save tokens, get smarter results)

As a consultant I foot my own Cursor bills, and last month's was $1,263. Opus is too good not to use, but there's no way to cap spending per session. After blowing through my Ultra limit, I realized how token-hungry Cursor + Opus really is: it spins up sub-agents, balloons the context window, and suddenly a task I expected to cost $2 comes back at $8. My bill kept going up, but was I really going to switch to a worse model? No.

So I built l6e: an MCP server that gives your agent the ability to budget. It works with Cursor, Claude Code, Windsurf, Openclaw, and every MCP-compatible application.

Saving money was why I built it, but what surprised me was that the process of budgeting changed the agent's behavior. An agent that understands the limits of its resources doesn't speculatively pull extra files into the context window, and it doesn't try to reach every possible API. The age...
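A minimal sketch of the budgeting idea, not l6e's actual implementation: a per-session tracker whose status an MCP server could expose as a tool, so the agent can check remaining budget before expanding context. The token price and cap are illustrative numbers:

```python
# Illustrative sketch only; prices, caps, and method names are assumptions.
class SessionBudget:
    """Tracks estimated spend for one agent session against a hard cap."""

    def __init__(self, cap_usd: float, price_per_1k_tokens: float = 0.015):
        self.cap_usd = cap_usd
        self.price = price_per_1k_tokens
        self.spent_usd = 0.0

    def record(self, tokens: int) -> None:
        # Called after each model turn with the tokens it consumed.
        self.spent_usd += tokens / 1000 * self.price

    def remaining(self) -> float:
        return max(self.cap_usd - self.spent_usd, 0.0)

    def status(self) -> str:
        # An MCP server could expose this as a tool result, letting the
        # agent decide whether to read another file or wrap up.
        if self.remaining() == 0.0:
            return "budget exhausted: finish with what you have"
        return f"${self.remaining():.2f} of ${self.cap_usd:.2f} remaining"
```

The behavioral effect described above falls out of the status string: once the agent can see a shrinking number, "read every file just in case" stops looking free.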

Show HN: A Claude Code–driven tutor for learning algorithms in Go https://ift.tt/V5J8HSi

Show HN: A Claude Code–driven tutor for learning algorithms in Go https://ift.tt/yRlDxCJ April 15, 2026 at 12:41AM

Show HN: LangAlpha – what if Claude Code was built for Wall Street? https://ift.tt/QVt7SUP

Show HN: LangAlpha – what if Claude Code was built for Wall Street?

Some technical context on what we ran into building this. MCP tools don't really work for financial data at scale: one tool call for five years of daily prices dumps tens of thousands of tokens into the context window. And data vendors pack dozens of tools into a single MCP server; schemas alone can eat 50k+ tokens before the agent does anything useful.

So we auto-generate typed Python modules from the MCP schemas at workspace init and upload them into the sandbox. The agent just imports them like a normal library, and only a one-line summary per server stays in the prompt. We have around 80 tools across our servers, and the prompt cost is the same whether a server has 3 tools or 30. This part isn't finance-specific; it works with any MCP server.

The other big thing was making research actually persist across sessions. Most agents treat a single deliverable (a PDF, a spreadsheet) as the end goal. In investing that...
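A sketch of the schema-to-module idea: turning one MCP tool definition (its `name`, `description`, and JSON Schema `inputSchema`) into a typed Python function stub the agent can import. The example tool, the `_call_mcp` helper, and the type mapping are all hypothetical, not LangAlpha's generator:

```python
# Illustrative sketch; the tool schema and _call_mcp helper are made up.
TYPE_MAP = {"string": "str", "number": "float", "integer": "int", "boolean": "bool"}

def generate_stub(tool: dict) -> str:
    """Turn one MCP tool schema into a typed Python function stub."""
    props = tool["inputSchema"].get("properties", {})
    required = set(tool["inputSchema"].get("required", []))
    params = []
    for name, spec in props.items():
        py_type = TYPE_MAP.get(spec.get("type", "string"), "object")
        default = "" if name in required else " = None"
        params.append(f"{name}: {py_type}{default}")
    sig = ", ".join(params)
    doc = tool.get("description", "").strip()
    return (
        f"def {tool['name']}({sig}):\n"
        f'    """{doc}"""\n'
        f"    return _call_mcp({tool['name']!r}, locals())\n"
    )

# Hypothetical vendor tool, standing in for a real MCP server's schema.
tool = {
    "name": "get_daily_prices",
    "description": "Fetch daily OHLCV bars for a ticker.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "ticker": {"type": "string"},
            "years": {"type": "integer"},
        },
        "required": ["ticker"],
    },
}
print(generate_stub(tool))
```

Writing stubs like these into a module at workspace init is what keeps the prompt cost flat: the agent sees one summary line per server, and the full schemas live in importable code instead of the context window.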