Show HN: Arch GW – Distributed gateway for agents, engineered with small LLMs https://ift.tt/Gu0mQ6E
Hi HN, my name is Salman and I work on Arch GW, an intelligent gateway designed to protect, observe, and personalize LLM applications with your APIs: https://ift.tt/ZmNwFrs

Our team built Envoy Proxy at Lyft, and we re-imagined it with the belief that prompts are nuanced and opaque user requests which require the same capabilities as traditional HTTP requests: secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization, all outside business logic.

Engineered with purpose-built LLMs, Arch handles the critical but undifferentiated work of processing prompts: detecting and rejecting jailbreak attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.

Core Features:

* Built on Envoy: Arch runs alongside application servers and builds on Envoy's proven HTTP management and scalability features to handle ingress and egress traffic related to prompts and LLMs.

* Function Calling for fast Agentic and RAG apps: engineered with purpose-built LLMs to handle fast, cost-effective, and accurate prompt-based tasks like function/API calling and parameter extraction from prompts (a sketch of a backend target follows this list).

* Prompt Guard: Arch centralizes prompt guardrails to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.

* Traffic Management: Arch manages LLM calls, offering smart retries, automatic cutover, and resilient upstream connections for continuous availability.

* Standards-based Observability: Arch uses the W3C Trace Context standard to enable complete request tracing across applications, ensuring compatibility with observability tools, and provides metrics to monitor latency, token usage, and error rates (see the trace-propagation sketch after this list).

We are just getting started and would love feedback and contributions from the community: https://ift.tt/EKrOmf4
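To make the function-calling idea concrete, here is a minimal sketch (not taken from Arch's docs) of the kind of backend API endpoint a gateway like Arch could call once it has extracted parameters from a user's prompt. The route, parameter names, and port are illustrative assumptions, not part of Arch's actual configuration.

```python
# Hypothetical backend target for a prompt like "what's the weather in
# Seattle for the next 3 days?". The gateway is assumed to have parsed
# "city" and "days" out of the prompt and to forward them as JSON.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/weather")
def weather():
    body = request.get_json(force=True)
    city = body.get("city", "unknown")
    days = int(body.get("days", 1))
    # Return structured data the LLM app can summarize for the user.
    return jsonify({"city": city, "days": days, "forecast": "sunny"})

if __name__ == "__main__":
    app.run(port=8080)  # port is an arbitrary choice for this sketch
```

The point of the design is that parameter extraction and dispatch happen in the gateway, so this endpoint stays an ordinary HTTP API with no prompt-handling logic of its own.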
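And here is a hedged sketch of how an application might send a prompt through the gateway while propagating a W3C Trace Context traceparent header, which is the standard the observability feature builds on. The gateway address, path, and payload shape are assumptions for illustration; only the traceparent format follows the actual W3C spec.

```python
# Sketch: route a chat request through an assumed local gateway listener
# while attaching a W3C traceparent header so the hop shows up in traces.
import os

import requests

GATEWAY_URL = "http://localhost:10000/v1/chat/completions"  # assumed address

def make_traceparent() -> str:
    # W3C Trace Context format: version-traceid-spanid-flags
    trace_id = os.urandom(16).hex()  # 32 hex chars
    span_id = os.urandom(8).hex()    # 16 hex chars
    return f"00-{trace_id}-{span_id}-01"

resp = requests.post(
    GATEWAY_URL,
    headers={"traceparent": make_traceparent()},
    json={"messages": [{"role": "user", "content": "What's the weather in Seattle?"}]},
    timeout=30,
)
print(resp.status_code, resp.json())
```

Because the header follows the W3C standard, any tracing backend that understands Trace Context can stitch the application's span together with the gateway's and the upstream LLM's.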