Show HN: Open-Source SDK for AI Knowledge Work

GitHub: https://ift.tt/VtIiRZC

Most AI agent frameworks target code: write code, run tests, fix errors, repeat. That works because code has a natural verification signal. It works or it doesn't.

This SDK treats knowledge work like an engineering problem:

Task → Brief → Rubric (hidden from executor) → Work → Verify → Fail? → Retry → Pass → Submit

The orchestrator coordinates subagents, web search, code execution, and file I/O, then checks its own work against criteria it can't game: the rubric is generated in a separate call, and the executor never sees it directly. (A minimal sketch of this loop is at the end of the post.)

We originally built this as a harness for RL training on knowledge tasks. The rubric is the reward function. If you're training models on knowledge work, the brief → rubric → execute → verify loop gives you a structured reward signal for tasks that normally don't have one.

What makes knowledge work different from code, apart from the feedback loop? I believe there is some...
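
To make the loop concrete, here is a minimal sketch in Python. It is not the SDK's actual API: the function names (generate_brief, generate_rubric, execute, verify, run_task) and the generic llm callable are placeholders I'm using for illustration. Only the shape of the loop comes from the description above: brief and rubric are produced in separate calls, the rubric is never shown to the executor, and failed attempts are retried with the verifier's feedback.

    # Minimal sketch of the brief -> rubric -> execute -> verify loop.
    # All names here are placeholders, not the SDK's real API.
    from dataclasses import dataclass
    from typing import Callable

    LLM = Callable[[str], str]  # any text-in / text-out model call

    @dataclass
    class Attempt:
        work: str
        passed: bool
        feedback: str

    def generate_brief(llm: LLM, task: str) -> str:
        # Separate call: turn the raw task into an explicit brief.
        return llm(f"Write a concise brief for this task:\n{task}")

    def generate_rubric(llm: LLM, brief: str) -> str:
        # Separate call: the executor never sees this output directly.
        return llm(f"Write pass/fail criteria for work on this brief:\n{brief}")

    def execute(llm: LLM, brief: str, feedback: str = "") -> str:
        # The executor only sees the brief (plus verifier feedback on retries).
        prompt = f"Do the work described in this brief:\n{brief}"
        if feedback:
            prompt += f"\n\nA previous attempt failed verification:\n{feedback}"
        return llm(prompt)

    def verify(llm: LLM, rubric: str, work: str) -> tuple[bool, str]:
        # Grade the work against the hidden rubric; return pass/fail plus feedback.
        verdict = llm(
            f"Rubric:\n{rubric}\n\nWork:\n{work}\n\nReply PASS or FAIL with reasons."
        )
        return verdict.strip().upper().startswith("PASS"), verdict

    def run_task(llm: LLM, task: str, max_retries: int = 3) -> Attempt:
        brief = generate_brief(llm, task)
        rubric = generate_rubric(llm, brief)  # hidden from the executor
        work, feedback = "", ""
        for _ in range(max_retries):
            work = execute(llm, brief, feedback)
            passed, feedback = verify(llm, rubric, work)
            if passed:
                return Attempt(work, True, feedback)   # submit
        return Attempt(work, False, feedback)          # give up after retries

In the RL-training use case above, the pass/fail verdict (or a graded rubric score in its place) is what you'd feed back as the reward signal.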