Show HN: APIRank.dev – we crawled and ranked 5651 public APIs from the Internet https://ift.tt/IzVl2Q4
tl;dr: At Escape (YC W23), we scanned 5651+ public APIs on the internet with our in-house feedback-driven API exploration tech, and ranked them using security, performance, reliability, and design criteria. The results are public on https://apirank.dev . You can request that we index your own API for free and see how it compares to others.

Why we did this

During a YC meetup I spoke with a fellow founder who told me how hard it is to pick the right external APIs to use within your own projects. I realized that most of what we build relies on public APIs from external vendors, but there was no benchmark to help developers compare and evaluate public APIs before picking one. So we decided to build one ourselves. Say hi to apirank.dev.

Why ranking public APIs is hard

Automating the technical assessment of public APIs is a tough problem. First, we needed to find all the public APIs and their specifications - mostly OpenAPI files. We used several strategies to find those:

- Crawl API repositories like apis.guru
- Crawl GitHub for openapi.json and openapi.yaml files
- A cool Google dork

These strategies enabled us to gather around ~20,000 OpenAPI specs.

Then comes the hard part of the problem: we want to dynamically evaluate those APIs' security, performance, and reliability. But APIs take parameters that are tightly coupled to the underlying business logic. A naive automated approach would not work: putting random data in parameters would likely not pass the API's validation layer, giving us little insight into the real API behavior. Manually creating tests for each API is not sustainable either: it would take years for our 10-person team. We needed to do it in an automated way. Fortunately, our main R&D effort at Escape is aimed at generating legitimate traffic against any API efficiently.
That's how we developed feedback-driven API exploration, a new technique that quickly assesses the underlying business logic of an API by analyzing responses and the dependencies between requests (see https://ift.tt/TOynesD ). We originally developed this technology for advanced API security testing, but from there it was easy to also test the performance and reliability of APIs.

How we ranked APIs

Now that we had a scalable way to gather data from public APIs, we needed a way to rank them that is meaningful to developers choosing an API. We rank APIs on the following five criteria:

- Security
- Performance
- Reliability
- Design
- Popularity

The security score combines the number of OWASP Top 10 vulnerabilities and the number of sensitive information leaks detected by our scanner.
The performance score is derived from the median response time of the API, aka the P50.
The reliability score is derived from the number of inconsistent server responses: either 500 errors or responses that do not conform to the specification.
The design score reflects the quality of the OpenAPI specification file; comments, examples, a license, and contact information improve it.
The popularity score is computed from the number of references to the API found online.

If you are curious about how your API ranks, you can ask us to index it for free at https://ift.tt/qDuTnMc

https://apirank.dev/ March 10, 2023 at 12:43AM
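To make the five criteria above concrete, here is a minimal sketch of how per-criterion scores like these could be computed and combined. Escape has not published its formulas: the 1000 ms performance scale, the weighting scheme, and every function name here are illustrative assumptions, not apirank.dev's actual implementation.

```python
from statistics import median

def performance_score(response_times_ms):
    # Derived from the P50 (median) response time: faster is better.
    # The 1000 ms reference scale is an assumption for illustration.
    p50 = median(response_times_ms)
    return max(0.0, 100.0 * (1.0 - min(p50, 1000.0) / 1000.0))

def reliability_score(total_requests, inconsistent_responses):
    # Derived from the share of inconsistent responses
    # (500 errors or responses that violate the OpenAPI spec).
    if total_requests == 0:
        return 0.0
    return 100.0 * (1.0 - inconsistent_responses / total_requests)

def overall_score(scores, weights=None):
    # Weighted average of the criteria; equal weights by default
    # (the real ranking's weights are unknown).
    weights = weights or {k: 1.0 for k in scores}
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

scores = {
    "security": 80.0,    # e.g. from OWASP Top 10 findings + leaks
    "performance": performance_score([120, 150, 200, 95, 3000]),
    "reliability": reliability_score(100, 3),
    "design": 70.0,      # e.g. from spec completeness checks
    "popularity": 40.0,  # e.g. from reference counts found online
}
print(round(overall_score(scores), 1))  # → 74.4
```

Note that the P50 makes the performance score robust to a few outlier requests: the single 3000 ms response above barely moves the median, whereas it would dominate a mean.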