Posts

Show HN: AI bot that automatically processes unstructured documents https://ift.tt/2SL06un

Hi HN! We're excited to share what we've been working on: a bot that automates the tedious task of processing unstructured documents from emails and entering them into ERPs. After many iterations, we've achieved 99.8% accuracy in extracting and mapping data from invoices, POs, and other documents.

One surprising takeaway from this journey: building the AI was only 10% of the challenge! The real work came from handling edge cases, integrating seamlessly with various ERPs, and creating a reliable pipeline for real-world documents with messy formats.

We'd love your feedback, thoughts, or questions about how we built this, the challenges we faced, or anything else. Let us know what you think! Thanks for checking it out! https://ift.tt/auiZswU

November 23, 2024 at 08:20AM
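The post doesn't describe the extraction internals, but a minimal sketch of one step in such a pipeline, pulling structured fields out of invoice text, might look like this. The field names, patterns, and function name are illustrative assumptions, not the product's actual method (which presumably uses ML/LLMs rather than plain regexes):

```python
import re

def extract_invoice_fields(text: str) -> dict:
    # Hypothetical patterns for a few common invoice fields; a production
    # system would need far more robust handling of messy real-world formats.
    patterns = {
        "invoice_number": r"Invoice\s*(?:No\.?|#)\s*:?\s*([A-Z0-9-]+)",
        "po_number": r"PO\s*(?:No\.?|#)\s*:?\s*([A-Z0-9-]+)",
        "total": r"Total\s*(?:Due)?\s*:?\s*\$?([\d,]+\.\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            fields[name] = m.group(1)
    return fields

sample = "Invoice #INV-2041\nPO # 88213\nTotal Due: $1,249.50"
print(extract_invoice_fields(sample))
```

The interesting (and hard) part the authors mention is everything after this step: validating the extracted values and mapping them into each ERP's schema.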

Show HN: Open-Source Pull Request AI Reviewer https://ift.tt/tWK4LYX

Hey HN,

Over the last year, I've reviewed more than 1,000 code changes. Most of the time was spent catching obvious mistakes rather than debating complex design decisions. If we estimate ~10 minutes per review, that's 160+ hours spent reviewing code in just one year. So I thought: could I get some of that time back using LLMs?

That's why I spent the last few weekends building Presubmit.ai, an open-source AI reviewer that runs as a GitHub Action right when you open a pull request. The results so far are promising: I estimate it can cut review time by 50%, which in my case would mean saving 80 hours (~10 working days) per year.

Unlike similar SaaS solutions, the goal is not to replace the human reviewer but to highlight obvious mistakes early, spot security vulnerabilities, and give more context about the change. I like to think of it as a "pre-reviewer". Some of its features are:

* Line-by-line comments
* PR summarization
* Title gen
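Presubmit.ai's internals aren't shown here, but the "line-by-line comments" feature implies one concrete bit of bookkeeping any diff-based reviewer needs: mapping added lines in a unified diff to (file, new line number) pairs so model findings can be posted as review comments. A rough sketch of that mapping, with an illustrative function name of our own:

```python
import re

def added_lines(diff: str):
    # Walk a unified diff and record each added line with its file and
    # line number in the new version of the file.
    result = []
    current_file, new_line = None, 0
    for line in diff.splitlines():
        if line.startswith("+++ b/"):
            current_file = line[len("+++ b/"):]
        elif line.startswith("@@"):
            # Hunk header like "@@ -1,2 +1,3 @@": grab the new-file start line.
            m = re.search(r"\+(\d+)", line)
            new_line = int(m.group(1))
        elif line.startswith("+") and not line.startswith("+++"):
            result.append((current_file, new_line, line[1:]))
            new_line += 1
        elif not line.startswith("-"):
            # Context lines advance the new-file line counter too.
            new_line += 1
    return result

diff = """--- a/app.py
+++ b/app.py
@@ -1,2 +1,3 @@
 import os
+import sys
 print(os.name)"""
print(added_lines(diff))  # [('app.py', 2, 'import sys')]
```

With that mapping in hand, an LLM's comment on a given added line can be attached to the right position via the code-review API of the hosting platform.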

Show HN: Pull Request Reviewed by LLM https://ift.tt/dOHS8DX

This year I've reviewed more than 1,000 code changes. Most of the time was spent catching obvious mistakes rather than debating complex design decisions. If we estimate ~10 minutes per review, that's 160+ hours spent reviewing code in just one year. So I thought: could I get some of that time back using LLMs?

That's why I spent the last few weekends building an LLM-based pre-reviewer that takes a first pass before the actual human reviewer. The results so far are promising: I estimate it can cut review time by 50%, which in my case would mean saving 80 hours (~10 working days) per year.

Linked above is an example of a PR where I'm testing the AI reviewer; it showcases how it can detect bugs, suggest best practices about token validity, generate a summary and title, and even chat with me in review comments. The AI reviewer is a simple GitHub Action that runs every time I open or synchronize a pull request, and you can see the source

Show HN: Shop on Amazon with Crypto https://ift.tt/HxzZSIy

Hey folks! I'm building an app to let anyone shop on Amazon with crypto.

How it works:
[1] Add "baggins.ai/" before any Amazon product URL
[2] Fill in your shipping details
[3] Pay with your preferred wallet (BTC, ETH, DOGE, USDC, and more)

Features:
- Supports 1-2 day Prime shipping
- Uses Coinbase Commerce for secure transactions

Here's an early version for y'all to test: https://www.baggins.ai/

Keen to get feedback on the UX and feature requests! My LinkedIn, to show that it's not a scam :) https://ift.tt/LYeyWg5

https://ift.tt/qQc6A39

November 22, 2024 at 11:29PM
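Step [1] above is just a URL prefix. A tiny sketch of that transform (baggins.ai is the real site; the function name and the exact handling of the scheme are our assumptions, since the post only says to put "baggins.ai/" before the product URL):

```python
def baggins_url(amazon_url: str) -> str:
    # Drop the scheme so the result reads baggins.ai/www.amazon.com/...
    # (assumption: the site expects the bare host/path after its prefix).
    for prefix in ("https://", "http://"):
        if amazon_url.startswith(prefix):
            amazon_url = amazon_url[len(prefix):]
            break
    return "https://baggins.ai/" + amazon_url

print(baggins_url("https://www.amazon.com/dp/B0EXAMPLE"))
# https://baggins.ai/www.amazon.com/dp/B0EXAMPLE
```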

Show HN: VS Code extensions that display CGM blood glucose levels in status bar https://ift.tt/bG5PEO0

As a Type 1 diabetic, I need to continuously monitor my blood glucose levels. I've implemented a couple of Visual Studio Code extensions that retrieve the latest blood glucose readings from your CGM and display them in your VS Code status bar.

One extension uses Nightscout to retrieve the blood glucose readings. It requires users to run the Nightscout application on a hosted server. A nice benefit of Nightscout is that it works with all the major CGM devices; a slight drawback is that it depends on hosted third-party software (Nightscout) to function.

I've also implemented a Visual Studio Code extension for those (like myself) who use the Freestyle Libre CGM. This version connects directly to LibreLinkUp to retrieve the latest blood glucose readings and display them in your VS Code status bar. This removes the dependency on the intermediary
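A sketch of the formatting step such an extension performs: turning the most recent Nightscout entry (its /api/v1/entries endpoint reports glucose as "sgv", in mg/dL, plus a trend "direction") into a status-bar string. The arrow mapping and function name are our own illustrative choices, not the extension's actual code:

```python
# Nightscout trend names mapped to display arrows (illustrative subset).
ARROWS = {
    "Flat": "→",
    "FortyFiveUp": "↗",
    "FortyFiveDown": "↘",
    "SingleUp": "↑",
    "SingleDown": "↓",
}

def status_text(entry: dict) -> str:
    # Format one Nightscout entry for a status bar; tolerate a missing trend.
    arrow = ARROWS.get(entry.get("direction", ""), "")
    return f"{entry['sgv']} mg/dL {arrow}".strip()

print(status_text({"sgv": 104, "direction": "Flat"}))  # 104 mg/dL →
```

The extension itself would fetch the entry on a timer and push this string into a VS Code status bar item.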

Show HN: My Remote Teaching Station (Mk IV) https://ift.tt/80lNrnQ

The remote teaching station has been evolving for the last four years. The Mk IV is the most advanced and most attractive version so far. https://ift.tt/wgxiCEb

November 21, 2024 at 11:37PM

Show HN: An AI that reliably builds full-stack apps by preventing LLM mistakes https://ift.tt/7ISCiHR

Hey HN! Previous CERN physicist turned hacker here. We've developed a way to make AI coding actually work by systematically identifying and fixing the places where LLMs typically fail in full-stack development. Today we're launching as Lovable (previously gptengineer.app), since it's such a big change.

The problem? AI-written code typically contains small mistakes, and then the model gets stuck. Those who have tried know the frustration. We fixed most of this by mapping out where LLMs fail in full-stack dev and engineering around those pitfalls with prompt chains. Thanks to this, in every comparison I've found against v0, Replit, Bolt, etc., we are actually winning, often by a wide margin.

What we have been working on since my last post ( https://ift.tt/1exYjmr ):

> Handling larger codebases. We actually found that using small LLMs works much better than traditional RAG for this.
> Infra work to enable instant preview (it sp
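The post credits "prompt chains" that catch LLM mistakes before the user sees them, without detailing them. A generic sketch of that pattern (generate, validate, feed the error back, retry), with stubbed functions standing in for the real model call and validator; everything here is our own illustration of the general technique, not Lovable's implementation:

```python
def generate(prompt: str, attempt: int) -> str:
    # Stub: a real chain would call an LLM here. Simulates a model that
    # fixes its typo once the validator's error is fed back into the prompt.
    return "retrn x" if attempt == 0 else "return x"

def validate(code: str):
    # Stub: a real chain might run a linter, type-checker, or build step.
    return None if "return" in code else "SyntaxError: unknown keyword"

def prompt_chain(task: str, max_attempts: int = 3) -> str:
    # Generate-validate-retry loop: each failure is appended to the prompt
    # so the next attempt can correct it.
    prompt = task
    for attempt in range(max_attempts):
        code = generate(prompt, attempt)
        error = validate(code)
        if error is None:
            return code
        prompt = f"{task}\nPrevious attempt failed: {error}\nFix it."
    raise RuntimeError("could not produce valid code")

print(prompt_chain("write a function body"))  # return x
```

The point of the pattern is that the user only ever sees output that survived validation, which matches the "preventing LLM mistakes" framing in the title.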