Why 'Autonomous Agents' Are the Wrong Abstraction for Business Logic
Building business systems that can justify decisions, enforce constraints, and halt safely
Correct me if I am wrong, but aren't we trying to build business systems on top of tools that were never designed to carry business authority? Agent frameworks are getting better at generating behaviour.
But...
Businesses need systems that can justify decisions, enforce constraints and halt safely under ambiguity.
Over the past few months, I've been exploring how to build agent-driven business systems for:
- strategic alignment
- CRM and growth decision logic
- campaign orchestration
- long-running business reasoning
The kind of systems that need to be correct, auditable and stable at scale. I assumed existing agent frameworks would be a good starting point. They weren't: they're optimised for prompt choreography, emergent behaviour and convenience. That's fine if you're exploring behaviour. It's not fine if you're building business infrastructure.
The Systems Thinking Mindset
My background leans heavily toward systems thinking and hard optimisation problems (think scheduling, routing, constraint solving). If you've worked with tools like Google's OR-Tools, you'll recognise the mental model:
- define state
- define constraints
- let the system converge
- inspect why it converged
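To make the mental model concrete, here is a minimal, hypothetical sketch of that loop in Rust. None of these names come from OR-Tools or any real framework; `State`, `Constraint` and `converge` are stand-ins for the four steps above, with a trace kept so you can inspect why the system converged.

```rust
// Hypothetical sketch: define state, define constraints, iterate to a
// fixed point, and record a trace explaining why convergence happened.

#[derive(Debug, Clone, PartialEq)]
struct State {
    budget: i64,
    spend: i64,
}

// A constraint either proposes a repaired state, or None if satisfied.
type Constraint = fn(&State) -> Option<State>;

fn cap_spend(s: &State) -> Option<State> {
    if s.spend > s.budget {
        Some(State { spend: s.budget, ..s.clone() })
    } else {
        None
    }
}

// Apply constraints until none fires (a fixed point) or we give up.
fn converge(
    mut state: State,
    constraints: &[Constraint],
    max_steps: usize,
) -> (State, Vec<String>) {
    let mut trace = Vec::new();
    for _ in 0..max_steps {
        let mut changed = false;
        for (i, c) in constraints.iter().enumerate() {
            if let Some(next) = c(&state) {
                trace.push(format!("constraint {} repaired {:?} -> {:?}", i, state, next));
                state = next;
                changed = true;
            }
        }
        if !changed {
            trace.push("fixed point reached".to_string());
            return (state, trace);
        }
    }
    trace.push("halted: no convergence within step budget".to_string());
    (state, trace)
}

fn main() {
    let start = State { budget: 100, spend: 250 };
    let (final_state, trace) = converge(start, &[cap_spend], 10);
    println!("{:?}", final_state);
    for line in &trace {
        println!("{}", line);
    }
}
```

The point of the trace is the fourth step: you don't just get a converged state, you get a record of which constraint fired and why, which is exactly what an audit needs.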
That mindset turns out to matter a lot when "agents" start influencing real business outcomes. I also didn't want to compromise on fundamentals, so I started building something from first principles in Rust, where the type system is part of the semantic model.
Introducing Converge
The internal project I'm working on is called Converge. It's not an agent framework. It's a semantic engine:
- Agents never talk to each other — they observe and propose changes to shared context
- The engine enforces invariants, authority and convergence
- LLMs suggest. They never decide
- Humans are explicit authorities
Everything either reaches a fixed point or halts explicitly. No queues. No hidden workflows. No systems held together by vibes.
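Since I'm not sharing the real code, here is a deliberately tiny sketch of what those rules can look like when the type system carries them. Every type here (`Authority`, `Context`, `Proposal`, `Outcome`) is hypothetical, not a Converge API: agents submit proposals against shared context, and the engine either applies them or halts explicitly with a reason.

```rust
// Hypothetical sketch of the proposal model: agents never talk to each
// other; they propose changes to shared context, and the engine enforces
// invariants and authority. Rejection is an explicit halt, not a silent drop.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Authority {
    Llm,   // may suggest, never decide
    Human, // explicit authority
}

#[derive(Debug, Clone)]
struct Context {
    discount_pct: u8,
}

#[derive(Debug)]
struct Proposal {
    by: Authority,
    new_discount_pct: u8,
}

#[derive(Debug, PartialEq)]
enum Outcome {
    Applied,
    Halted(&'static str),
}

fn apply(ctx: &mut Context, p: &Proposal) -> Outcome {
    // Authority rule: discounts above 20% require a human decision.
    if p.new_discount_pct > 20 && p.by != Authority::Human {
        return Outcome::Halted("needs human authority for discount > 20%");
    }
    // Hard invariant: discounts never exceed 50%, no matter who asks.
    if p.new_discount_pct > 50 {
        return Outcome::Halted("violates hard invariant: discount <= 50%");
    }
    ctx.discount_pct = p.new_discount_pct;
    Outcome::Applied
}

fn main() {
    let mut ctx = Context { discount_pct: 10 };
    let llm = Proposal { by: Authority::Llm, new_discount_pct: 30 };
    println!("{:?}", apply(&mut ctx, &llm)); // halts: LLMs suggest, never decide
    let human = Proposal { by: Authority::Human, new_discount_pct: 30 };
    println!("{:?}", apply(&mut ctx, &human)); // applied by explicit authority
}
```

The design point is that the LLM's proposal is rejected with a named reason rather than silently ignored; the halt itself is part of the system's observable behaviour.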
The Problem Space
I'm not sharing code yet, but if you're building serious GenAI products at fast-growing companies and you've felt the gap between reasoning demos and reliable systems...
...this is the problem space I'm spending most of my spare time in.