<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>ThruWire Blog</title>
    <link>https://thruwire.ai</link>
    <description>Thoughts on context, leverage, and building with AI.</description>
    <language>en</language>
    <atom:link href="https://thruwire.ai/rss.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title>AI Has an Authority Problem</title>
      <link>https://thruwire.ai/blog/ai-has-an-authority-problem</link>
      <description>AI systems encode a hierarchy of control that shapes decisions, often with no human participating in the reasoning behind their confident answers.</description>
      <pubDate>Thu, 26 Feb 2026 00:00:00 GMT</pubDate>
      <guid>https://thruwire.ai/blog/ai-has-an-authority-problem</guid>
    </item>
    <item>
      <title>Our Memories Are Executable Code</title>
      <link>https://thruwire.ai/blog/our-memories-are-executable-code</link>
      <description>Human memory preserves structure more than content, and that procedural logic is what current AI memory systems still cannot replay.</description>
      <pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate>
      <guid>https://thruwire.ai/blog/our-memories-are-executable-code</guid>
    </item>
    <item>
      <title>It&apos;s 11am: Do You Know Where Your (Human) Users Are?</title>
      <link>https://thruwire.ai/blog/its-11am-do-you-know-where-your-human-users-are</link>
      <description>As users move their thinking into AI chats, software companies risk becoming mere infrastructure. The durable moat is structured domain judgment and customer context that keeps professional thinking inside the product.</description>
      <pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate>
      <guid>https://thruwire.ai/blog/its-11am-do-you-know-where-your-human-users-are</guid>
    </item>
    <item>
      <title>The Organization Was Aligned. Until AI Assistants Showed Up</title>
      <link>https://thruwire.ai/blog/the-organization-was-aligned-the-system-is-not</link>
      <description>Lencioni’s alignment playbook assumed every actor was human. With AI in the system, cultural clarity isn’t enough; organizations need explicit, structural reasoning to keep humans and AI aligned.</description>
      <pubDate>Sat, 07 Feb 2026 00:00:00 GMT</pubDate>
      <guid>https://thruwire.ai/blog/the-organization-was-aligned-the-system-is-not</guid>
    </item>
    <item>
      <title>The Last Alpha is Human</title>
      <link>https://thruwire.ai/blog/the-last-alpha-is-human</link>
      <description>As models converge, the advantage shifts to the human-authored thinking substrate that aligns people and AI. A personal story of compounding alignment and the thesis behind ThruWire.</description>
      <pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate>
      <guid>https://thruwire.ai/blog/the-last-alpha-is-human</guid>
    </item>
    <item>
      <title>The Harder the Problem, the More Human It Gets</title>
      <link>https://thruwire.ai/blog/the-harder-the-problem-the-more-human-it-gets</link>
      <description>AI is brilliant in the moment, but the hardest work is the arc of reasoning that compounds over time. This essay explains why that arc is still human and what tools are missing.</description>
      <pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
      <guid>https://thruwire.ai/blog/the-harder-the-problem-the-more-human-it-gets</guid>
    </item>
  </channel>
</rss>