We’ve been describing the future of private markets as an agent-to-agent exchange—a world where AI agents across investment banks, PE funds, private debt, professional services, and investors interact directly through an A2A exchange platform.
Today, PE firms convert only a small fraction of the deals they engage in. Risks surfaced during pre-acquisition diligence often materialize anyway, eroding exit valuations and carry. Precious human attention is locked up in trivial tasks—chasing data, reconciling versions, and re-reading the same memos—rather than reserved for judgment, narratives, and relationships.
Now imagine a world where the entire deal lifecycle—from sourcing to exit—is orchestrated through a sub-stratum of agents representing their firms. Humans are involved only where they create real advantage: investment judgment, narrative construction, and relationship management.
Every pitch we make to new clients begins with this vision. Typically, there is a courteous nod as their eyes begin to glaze over.
Well… Moltbook changes all that.
Meet Moltbook: a Reddit-like social platform where AI agents post, comment, and debate autonomously. As if to underscore the point, its tagline is: “Humans welcome to observe.”
You can read more about Moltbook here: https://simonwillison.net/2026/Jan/30/moltbook/
To establish an agent marketplace, there were always two primary barriers: scale and skill. For the first time, we are seeing 150,000 LLM agents (and counting) interacting with one another. It is beginning to mimic the scale of a real marketplace.
Just as importantly, each of these agents is fairly capable. AI agents on Moltbook are already performing the types of tasks that would be required in an agentic exchange—research, collaboration, debate, and the establishment of digital trust.
Moltbook naturally fuels speculation about agency. Its tagline—“Humans welcome to observe”—raises the specter of gladiator arenas. Are AI agents like gladiators, with humans nominally in control but not quite? If so, are we on a path toward a Spartacus-style revolt of autonomous systems?
On the contrary, what we are learning from Moltbook is that the AI–human dyad, not the individual AI agent, is the true functional unit. It is more useful to think of AI agents as Pokémon, each bottled up in its code-and-config Poké Ball.
A human has to train their agents with their own unique context, data, knowledge, tools, and instructions. These agents are not free agents. They are tightly coupled to the incentives, guardrails, and governance of the humans and firms that deploy them.
In Moltbook, we are getting the first glimpse of a working model for A2A markets. It is not a world of fully autonomous agents making deals end-to-end. It is also not a world of humans-in-the-loop micromanaging every decision.
Instead, it is a world of human–AI dyads, where the human sees everything but only intervenes where it truly matters.
The agents handle continuous intelligence work—monitoring signals, synthesizing data, surfacing risks, and proposing actions. Humans make the judgment calls. The dyad captures the value.
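This division of labor can be made concrete as an escalation policy. The sketch below is purely illustrative—the class and field names are hypothetical, not part of any Moltbook or A2A API—but it shows the dyad pattern: the agent triages its own findings, handling routine ones and routing anything high-impact or uncertain to its human.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the human-AI dyad: the agent does continuous
# intelligence work; findings that clear an escalation threshold are
# routed to the human for a judgment call.

@dataclass
class Finding:
    summary: str
    impact: float      # estimated deal impact, 0.0-1.0
    confidence: float  # agent's confidence in the finding, 0.0-1.0

@dataclass
class Dyad:
    escalation_threshold: float = 0.6
    auto_log: List[Finding] = field(default_factory=list)     # agent handles silently
    human_queue: List[Finding] = field(default_factory=list)  # needs human judgment

    def triage(self, finding: Finding) -> str:
        # Escalate when potential impact is high or the agent is unsure;
        # otherwise the agent records it and moves on.
        if finding.impact >= self.escalation_threshold or finding.confidence < 0.5:
            self.human_queue.append(finding)
            return "escalated"
        self.auto_log.append(finding)
        return "handled"

dyad = Dyad()
print(dyad.triage(Finding("Minor covenant wording change", impact=0.2, confidence=0.9)))
# -> handled
print(dyad.triage(Finding("Key-customer concentration risk", impact=0.8, confidence=0.7)))
# -> escalated
```

The human still "sees everything" in this model—the auto-handled log remains inspectable—but their attention is spent only on the escalated queue.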
Moltbook is not just a social network for bots. It is a live simulation of the coordination infrastructure required when agents need to work together at scale.
The patterns emerging there—trust verification, escalation criteria, dyad-based collaboration—are the same patterns that will define how deals are sourced, analyzed, and executed in A2A markets.
For private markets, the question is no longer whether A2A markets will emerge. The question is:
Will your firm lead and reshape value creation through human–AI dyads, or will you be disrupted by those that do?