Posts

Show HN: Skir – A schema language I built after 15 years of Protobuf friction https://ift.tt/E4cykGa

Why I built Skir: https://ift.tt/WbyDcpS...

Quick start: npx skir init. All the config lives in one YML file.

Website: https://skir.build
GitHub: https://ift.tt/bQTgXac

Would love feedback, especially from teams running mixed-language stacks.

https://skir.build/ March 9, 2026 at 12:17AM

Show HN: Astro MD Editor – Schema-aware editor for Astro content collections https://ift.tt/MrTCSgj

I built this for my own Astro projects, where I got tired of hand-editing YAML frontmatter and switching between files.

astro-md-editor reads your collection schemas and gives you a local editor UI with typed frontmatter controls (including image/color/icon pickers) alongside a markdown/MDX editor.

Run it with: npx astro-md-editor

Would love feedback on schema edge cases or missing field types.

https://ift.tt/jbkmTY5 March 8, 2026 at 11:44PM
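The editor's own code isn't published in the post, but the core round-trip it automates, splitting typed YAML frontmatter from the markdown body and writing edits back, can be sketched like this. This is a toy flat-key parser for illustration, not astro-md-editor's implementation:

```python
import re

def split_frontmatter(source: str) -> tuple[dict, str]:
    """Split a markdown document into (frontmatter dict, body).

    Minimal sketch: handles only flat `key: value` pairs, which is
    enough to show the round-trip a frontmatter editor performs.
    """
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", source, re.DOTALL)
    if not match:
        return {}, source
    fields = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields, match.group(2)

def join_frontmatter(fields: dict, body: str) -> str:
    """Serialize edited fields back above the markdown body."""
    block = "\n".join(f"{k}: {v}" for k, v in fields.items())
    return f"---\n{block}\n---\n{body}"
```

A schema-aware editor would additionally validate each field against the collection schema before writing; that step is omitted here.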

Show HN: I built a simple book tracker because I kept buying books I owned https://ift.tt/IFanU5m

I'm Maureen, a senior and self-taught developer. I love browsing second-hand book markets but kept coming home with books I already owned. I couldn't find a simple enough app to track my library: everything required an account, had ads, or pushed a subscription. So I built one myself.

SeniorEase Library (Android): scan an ISBN and the book is added instantly. No account, no ads, one-time €2.99. First 10 books free.

Would love any feedback!

https://ift.tt/JZTs2ci March 8, 2026 at 11:17PM
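For anyone building a similar scanner: the app's internals aren't published, but the usual first step after reading a barcode is validating the ISBN-13 check digit before any lookup. A minimal sketch:

```python
def isbn13_is_valid(isbn: str) -> bool:
    """Validate an ISBN-13 via its check digit.

    ISBN-13 weights its digits alternately by 1 and 3; the sum of
    all 13 weighted digits must be a multiple of 10.
    """
    digits = [c for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits))
    return total % 10 == 0
```

Stripping non-digits first lets the same check accept both hyphenated and bare barcode forms.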

Show HN: Prompt Armour – Real-time PII detection for AI chatbots, 100% local https://ift.tt/ukAwgUi

https://prompt-armour.vercel.app/ March 8, 2026 at 12:34AM

Show HN: Aegis – Open-source pre-execution firewall for AI agents https://ift.tt/jIQAlbg

Every agent framework lets the LLM decide which tools to call at machine speed. There's nothing between the decision and execution: no check, no confirmation.

AEGIS intercepts tool calls before they execute: it classifies them (SQL, file, shell, network), evaluates them against policies, and either allows, blocks, or holds them for human approval.

One line of code, zero changes to your agent:

    import agentguard
    agentguard.auto("http://localhost:8080")

Built-in detection for SQL injection, path traversal, command injection, prompt injection, data exfiltration, and PII leakage. Every trace is Ed25519-signed and SHA-256 hash-chained.

Supports 9 Python frameworks (Anthropic, OpenAI, LangChain, CrewAI, Gemini, Bedrock, Mistral, LlamaIndex, smolagents), plus JS/TS and Go SDKs. Self-hosted, MIT-licensed, Docker Compose one-liner.

https://ift.tt/mzbD7ZH March 7, 2026 at 11:47PM
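The post doesn't show Aegis's policy engine, but the intercept-classify-decide loop it describes can be sketched roughly like this. All class names, categories, and patterns below are hypothetical stand-ins, not the agentguard API:

```python
import re
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "allow", "block", or "hold"
    reason: str

# Toy classifier: map a tool call's arguments to a category.
CATEGORIES = {
    "sql":     re.compile(r"\b(select|insert|update|delete|drop)\b", re.I),
    "shell":   re.compile(r"\b(rm|curl|wget|chmod|bash)\b"),
    "file":    re.compile(r"(\.\./|/etc/|~/)"),
    "network": re.compile(r"https?://"),
}

def classify(arguments: str) -> str:
    for category, pattern in CATEGORIES.items():
        if pattern.search(arguments):
            return category
    return "other"

def evaluate(tool_name: str, arguments: str, policy: dict) -> Decision:
    """Intercept a tool call before execution and apply a policy.

    `policy` maps category -> "allow" | "block" | "hold"; anything
    uncategorized is allowed. A toy stand-in for a real policy engine.
    """
    category = classify(arguments)
    action = policy.get(category, "allow")
    return Decision(action, f"{tool_name} classified as {category}")
```

The key property, which matches the post's pitch, is that the decision happens between the LLM's choice and the tool's execution, so a "hold" can pause for human approval.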

Show HN: OpenGraviton – Run 500B+ parameter models on a consumer Mac Mini https://ift.tt/UwQsutb

Hi HN, I built OpenGraviton, an open-source AI inference engine designed to push the limits of running extremely large models on consumer hardware.

The system combines several techniques to drastically reduce memory and compute requirements:

• 1.58-bit ternary quantization ({-1, 0, +1}) for ~10x compression
• dynamic sparsity with Top-K pruning and MoE routing
• mmap-based layer streaming to load weights directly from NVMe SSDs
• speculative decoding to improve generation throughput

These allow models far larger than system RAM to run locally. In early benchmarks, OpenGraviton reduced TinyLlama-1.1B from ~2.05GB (FP16) to ~0.24GB using ternary quantization. Synthetic stress tests at the 140B scale show that models which would normally require ~280GB in FP16 can fit within ~35GB when packed in the ternary format.

The project is optimized for Apple Silicon and currently uses custom Metal + C++ tensor unpacking. Benc...
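The ternary arithmetic behind those numbers is easy to sketch. This is not OpenGraviton's code; it is a minimal Python illustration assuming absmean-style thresholding and a 5-trits-per-byte packing (3^5 = 243 <= 256), which costs 1.6 bits per weight, close to the information-theoretic 1.58 bits and roughly 10x smaller than FP16's 16 bits:

```python
def quantize_ternary(weights: list[float]) -> list[int]:
    """Map floats to {-1, 0, +1}: values near zero become 0, the
    rest keep only their sign. The 0.5x-absmean threshold here is
    an assumption, not OpenGraviton's published scheme."""
    scale = sum(abs(w) for w in weights) / len(weights)
    return [0 if abs(w) < 0.5 * scale else (1 if w > 0 else -1)
            for w in weights]

def pack_trits(trits: list[int]) -> bytes:
    """Pack 5 ternary values per byte by treating them as a base-3
    number (3**5 = 243 fits in one byte)."""
    out = bytearray()
    for i in range(0, len(trits), 5):
        value = 0
        for t in reversed(trits[i:i + 5]):
            value = value * 3 + (t + 1)   # map {-1,0,1} -> {0,1,2}
        out.append(value)
    return bytes(out)

def unpack_trits(packed: bytes, count: int) -> list[int]:
    """Inverse of pack_trits: decode base-3 digits back to trits."""
    trits = []
    for byte in packed:
        for _ in range(5):
            trits.append(byte % 3 - 1)
            byte //= 3
    return trits[:count]
```

At 1.6 bits per weight plus per-block scale factors, a 140B-parameter model lands in the ~30GB range, consistent with the ~280GB-to-~35GB figure quoted above.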

Show HN: Claude-replay – A video-like player for Claude Code sessions https://ift.tt/QzVhpex

I got tired of sharing AI demos with terminal screenshots or screen recordings. Claude Code already stores full session transcripts locally as JSONL files. Those logs contain everything: prompts, tool calls, thinking blocks, and timestamps.

I built a small CLI tool that converts those logs into an interactive HTML replay. You can step through the session, jump through the timeline, expand tool calls, and inspect the full conversation.

The output is a single self-contained HTML file with no dependencies. You can email it, host it anywhere, embed it in a blog post, and it works on mobile.

Repo: https://ift.tt/vlLaQVH
Example replay: https://es617.github.io/assets/demos/peripheral-uart-demo.ht...

March 6, 2026 at 10:57PM
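The approach rests on each JSONL line being an independent JSON event. A minimal sketch of the first step such a CLI performs; the field names here ("type", "timestamp") are illustrative, not the actual Claude Code log schema:

```python
import json

def load_session(path: str) -> list[dict]:
    """Read a JSONL transcript: one JSON object per non-empty line."""
    events = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                events.append(json.loads(line))
    return events

def timeline(events: list[dict]) -> list[tuple[str, str]]:
    """Reduce events to (timestamp, type) pairs, the kind of index a
    replay scrubber would seek through."""
    return [(e.get("timestamp", ""), e.get("type", "unknown"))
            for e in events]
```

From there, rendering each event into a collapsible HTML node and inlining the result into one file gives the self-contained replay described above.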