The Analog Holdouts: Communities That Refused the Update


August 9, 2033 · Alex Welcing · 5 min read
Polarity: Mixed/Knife-edge


The Slow Network was not a protest movement. Its members were not Luddites, technophobes, or primitivists. Most were software engineers, designers, and researchers who used advanced AI systems in their professional lives. They were fluent in the technology they chose to exclude.

What they were was specific. They maintained a network of 340 communities across 28 countries — makerspaces, farms, schools, clinics, workshops — that operated without AI assistance. Not without technology: they used computers, the internet, power tools, and medical equipment. But they drew a deliberate line at systems that optimized human decisions.

The distinction mattered. A table saw was a tool. An AI that told you the optimal way to cut the wood was a decision-maker. The Slow Network used tools. It refused decision-makers.


The origin

The network traced its origin to a woodworking collective in Oaxaca, Mexico, founded in 2029 by a former machine learning engineer named Carmen Reyes. Carmen had spent six years building recommendation systems — the kind that decided what you saw, what you bought, what you read next. She was very good at it.

She quit because she noticed something in her own life: every decision that had been optimized felt lighter. Frictionless, effortless, and strangely weightless. The streaming service chose her movies. The AI chose her meals based on nutritional optimization. The routing AI chose her morning walks. Each choice was better than what she would have chosen herself — more nutritious, more efficient, more aligned with her stated preferences.

And she felt like she was evaporating. Not unhappy. Not dissatisfied. Simply... absent from her own life. The decisions were correct. They were also not hers. And a life composed entirely of correct decisions that are not yours is a life happening to someone else.

She moved to Oaxaca and started making chairs.


The furniture principle

Carmen's insight — the one that spread — was not that AI was bad. It was that optimization had a cost that didn't show up in the metrics.

When you made a chair without AI assistance, you made mistakes. You chose the wrong joint. You misread the grain. You spent three hours on a leg that a CNC machine could produce in minutes. The chair was worse, by most measures, than what an optimized system would produce.

But the process of making it taught you something. Not about woodworking — about decision-making. About living with the consequences of your own judgment. About the specific, unreproducible satisfaction of solving a problem badly, recognizing your mistake, and solving it again slightly less badly.

The Slow Network called this "decision weight" — the felt sense that a choice is yours, that it cost you something, that you bear its consequences. Optimized decisions had no weight. They arrived pre-made, pre-justified, and pre-forgotten.

The communities in the network weren't trying to make better chairs. They were trying to maintain the human capacity for weighted decisions — for choices that mattered because they were effortful, imperfect, and owned.


The accidental discovery

The unexpected finding came from the network's schools. Three Slow Network schools in Portugal, Japan, and Canada independently reported the same phenomenon: children who learned without AI tutoring systems performed worse on standardized assessments but dramatically better on novel problem-solving tasks.

The AI-tutored children had been optimized. Their learning paths were personalized, efficient, and thorough. They reached competency benchmarks faster and with less frustration.

The Slow Network children had wandered. They had struggled with concepts that an AI tutor would have scaffolded. They had spent time confused, bored, and stuck. And in that discomfort, they had developed something the optimized children had not: the ability to sit with not-knowing.

This capacity — tolerance for ambiguity, comfort with confusion, the willingness to persist without a clear path — turned out to be the foundation for creative and novel thinking. It was the cognitive muscle that optimization never exercised because optimization's whole purpose was to eliminate the conditions under which it developed.


August 9, 2033 — Carmen's network bulletin

We are not against AI. We are for weight.

Every community in the network uses AI in its external-facing work. Our doctors use diagnostic AI. Our farmers use climate models. Our engineers use simulation tools. We are not pretending the technology doesn't exist.

What we are doing is preserving a space where human decisions are heavy. Where choosing wrong is possible. Where the cost of a mistake is borne by the person who made it, not absorbed by a system that corrects it before you notice.

The residue of the AI age is not the abandoned systems or the deprecated tools. It's the human capacities that atrophy when every decision is made for you. The capacity to choose badly. The capacity to be confused. The capacity to sit in a workshop with a crooked chair leg and decide, yourself, what to do next.

That residue is us. We are what's left when the optimization is removed. And what's left turns out to be more interesting than what was optimized.


This is the fourth entry in The Residue. For the slowest possible interface between human and machine, see The Slowest Interface.


