
The Last Alpha is Human

As models converge, the advantage shifts to the human-authored thinking substrate that aligns people and AI. A personal story of compounding alignment and the thesis behind ThruWire.

Seth Rosen

The remaining advantage in an AI world is how human intellect, experience, and judgment structure the application of AI itself.

This is a structural claim. At the application layer, AI capabilities are converging. The marginal difference between models matters less than how thinking is organized above them: how it persists, how it coordinates across people and systems.

Most approaches to AI productivity miss this entirely. They optimize individual throughput. Output increases. Progress toward goals does not.

My brother and I have worked together across four companies. A data analytics consultancy. Topcoat, acquired by Snyk. Leadership roles at Snyk as product and engineering counterparts. And now this.

The constant across all of it was the hours spent in strategic alignment. Thinking through problems together. Making each other sharper before execution. A shared cognitive substrate that compounded over years.

Working with a brother is an unfair advantage. The trust is absolute. The stakes are high but the ego is low. You can be wrong without losing face. Alignment is the substrate everything else runs on.

This created an asymmetric advantage. Our reasoning integrated continuously. We moved faster precisely because we had invested in shared structure.

We are trying to replicate this with AI: the compounding.

The first problem is individual alignment with AI itself.

Current AI tools optimize for output. They make you faster at producing things. But hard problems are solved by sustained thinking that evolves over time, where each session builds on the last, where your reasoning deepens rather than degrades.

Memory features and retrieval systems exist. They recall facts about you. They retrieve documents you wrote. But they do not capture the structure of your reasoning: your goals, your constraints, your beliefs, the relationships between your ideas. The AI can surface what you said. It cannot reason within how you think. It infers structure from text rather than operating within structure you have authored.

Alignment with AI is a structural problem. For AI to help you evolve hard problems over time, it needs access to the structure of your thinking. And that structure must persist and evolve alongside the work itself.
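To make "structure AI can operate within" concrete, here is one hypothetical shape such a substrate could take: a small typed graph of human-authored statements (goals, constraints, beliefs) with explicit, named relationships an AI can traverse rather than infer from prose. Every name and field here is illustrative only, a sketch of the idea and not ThruWire's actual design.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a reasoning substrate as a typed graph.
# Nodes are human-authored statements; edges are explicit relationships,
# so a tool can traverse the structure instead of inferring it from text.

@dataclass
class Node:
    id: str
    kind: str   # e.g. "goal", "constraint", "belief"
    text: str

@dataclass
class Substrate:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src_id, relation, dst_id)

    def add(self, node: Node) -> None:
        self.nodes[node.id] = node

    def relate(self, src: str, relation: str, dst: str) -> None:
        self.edges.append((src, relation, dst))

    def neighbors(self, node_id: str, relation: str) -> list:
        """Statements linked to node_id by an explicitly authored relation."""
        return [self.nodes[d] for s, r, d in self.edges
                if s == node_id and r == relation]

s = Substrate()
s.add(Node("g1", "goal", "Evolve the hard problem across sessions"))
s.add(Node("c1", "constraint", "The substrate must persist and be shared"))
s.relate("g1", "constrained_by", "c1")

print([n.text for n in s.neighbors("g1", "constrained_by")])
```

The point of the sketch is the contrast in the paragraph above: retrieval surfaces what you said; an authored graph like this is something a system can reason within, because the goals, constraints, and their relationships are first-class rather than recovered from text.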

The multiplayer problem follows from the single-player problem. Once you have structured your thinking in a form AI can operate within, the question becomes: how do you share that structure with collaborators? How do their AIs align with yours? How does shared reasoning compound across people?

This workflow does not exist. Multi-person chat will not solve it. Additional context windows will not solve it. The problem is architectural: AI improvisation does not accumulate.

What accumulates is structured human thinking that AI can operate within.

The bet we are making is specific. The future alpha belongs to teams who structure their thinking in a form AI can reason with, and who maintain that structure as a shared, evolving substrate across people.

Hybrid intelligence. Human intellect authors the structure. AI scales the exploration within it.

At the application layer, models are converging. The differentiator is who has the best thinking substrate: persistent, governed, evolving, and shared.

We are building ThruWire.

In a sense, this has always been our ThruWire. The compounding alignment across four companies. The hours of shared thinking that made execution faster. The structure we could not name but always relied on.

Now we are building it.