Show HN: No one cares about observability costs https://ift.tt/WuxDNLF
Morning HN. I have a quick story about building a startup and misreading the market, and an ask. (The ask: try out our product >> https://ift.tt/sdKpLBq )

My name is Julian Giuca, and I was an early employee at New Relic, where I led their Logging product until 2022. It's safe to say I have opinions about logs. Sometimes I think they are great. Too often I think we can do much better.

When people talk to me about logs, they reliably complain about observability costs. Engineering leaders have told me it feels "like a protection racket": you're forced to capture and send everything with no real levers of control. So I set out to address this and built the ideal streaming pipeline to shape and route data. Basically Cribl for Datadog. If you can shape, filter, and route logs intelligently, you can slash your observability bill; sending less data means spending less money. Coinbase had just spent $65M on Datadog (in 2023)! Surely people wanted this... right?

Wrong. Everyone talks about how much their observability bill is and what they'd like to do about it, but we found it's never a high enough priority. It was treated as the cost of doing business: a problem with no priority and no owner.

- SRE / DevOps are mostly concerned with keeping the lights on, not data management. There was some interest, but minimal.

- Platform engineers were interested, but were juggling other priorities. This also felt like an emerging segment of engineering, so it was hard to home in on.

- VPs of Engineering were focused on execution and growing the top line, which meant observability costs sat below the line until uptime slipped or the CFO started screaming. A nice-to-have, but again, not an urgent enough problem.

So, what gives? Cribl is crushing it, we see similar companies raising Series A's, and we think we've got a better product. Turns out, they're all security-led.
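To make the "shape, filter, and route" idea concrete, here is a minimal sketch of what such a pipeline stage does. Everything in it (the `LogEvent` class, the destination names, the routing rules) is hypothetical and illustrative, not the product's actual API:

```python
# Hypothetical sketch of a shape/filter/route pipeline stage.
# All names here are illustrative, not a real product API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogEvent:
    level: str    # e.g. "debug", "info", "error"
    source: str   # emitting service, e.g. "auth", "billing"
    message: str

def shape(event: LogEvent) -> LogEvent:
    # "Shaping": normalize fields before filtering and routing.
    return LogEvent(event.level.lower(), event.source, event.message.strip())

def route(event: LogEvent) -> Optional[str]:
    # Return a destination name, or None to drop the event entirely.
    if event.level == "debug":
        return None                 # shed low-value volume: less data, less spend
    if event.source == "auth":
        return "siem"               # security-relevant events keep full fidelity
    return "observability"          # everything else goes to the o11y backend

events = [
    LogEvent("DEBUG", "billing", "cache miss "),
    LogEvent("ERROR", "billing", "charge failed"),
    LogEvent("INFO", "auth", "login from new device"),
]
routed = {e.message: route(e) for e in map(shape, events)}
# debug noise is dropped (None); auth events land in the "siem" destination
```

The cost lever is the `None` branch: every event dropped or downsampled before it reaches the vendor is volume you never pay for, while routing lets security-relevant streams keep full fidelity.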
We thought "reducing observability data to reduce observability spend" was the play, but no one owned the problem, so no one was incentivized to fix it.

Security teams don't have that luxury. They need logs urgently, cleanly, ideally structured and noise-free. They need signal out of all these different sources and want to stop wasting time sifting through garbage data and alerts. SIEM vendors often serve the Dev market (I'm looking at you, Splunk and Sumo), and I just spent too long looking at it the same way, from the Dev lens. Couldn't see the forest for the trees. Security teams are usually the ones driving adoption of these tools; Devs already have a wider selection of tools and can take advantage of a logging platform.

I still believe that a pipeline is the missing layer, the third way, where you can structure, enrich, and route data before there's a flood of alerts and you're paying for data you don't need. But I defer to you all. Thoughts from folks here?

We have a really useful product for both Sec and DevOps, but cost isn't a reason to try it. So my ask is: try it out for 2 minutes: https://ift.tt/sdKpLBq It has all the features you'd expect, while being easier to get started with and easier to use than any other pipeline on the market. We are actively working on automatic security detection and are always adding more data sources/destinations.

Tl;dr: Built a product to address an observability need. I misread the market; not enough urgency there. Discovered the real driver was SecOps.

March 27, 2025 at 10:57PM