AI Has an Authority Problem
AI systems encode a hierarchy of control that shapes decisions, often without humans participating in the reasoning that produces confident answers.
Thoughts on Human and AI Collaboration.
Human memory preserves structure more than content; that procedural logic is what current AI memory systems still cannot replay.
As users move their thinking into AI chats, software companies risk becoming mere infrastructure. The durable moat is structured domain judgment and customer context that keep professional thinking inside the product.
Lencioni’s alignment playbook assumed every actor was human. With AI in the system, cultural clarity isn’t enough; organizations need explicit, structural reasoning to keep humans and AI aligned.
As models converge, the advantage shifts to the human-authored thinking substrate that aligns people and AI. A personal story of compounding alignment and the thesis behind ThruWire.
AI is brilliant in the moment, but the hardest work is the arc of reasoning that compounds over time. This essay explains why that arc is still human and what tools are missing.