The 2,000-Year-Old Mind: What AI Reveals About You
Apr 30, 2026
Everyone thinks artificial intelligence is new. It isn't. And the oldest version of it — you've been practicing it on a mat for years.
I want to start with a moment from practice.
You're in Sun Salutation. Inhale, arms rise. Exhale, fold forward. Inhale, half lift. Exhale, step or jump back. The breath sets the tempo. The body follows the instruction. Each movement triggers the next, like a cascade of decisions already made for you — because they were already made, long ago, woven into the sequence by centuries of refinement.
Now here's the question I want you to sit with for the next few minutes:
What's the difference between that — a body following a program — and what your phone does when you ask ChatGPT a question?
Most people would say: "Everything. I'm a conscious being. My phone is a machine."
But the more I've studied both yoga and the history of artificial intelligence, the more I think the honest answer is: less than we assume. And understanding why might be the most clarifying thing you do this year.
The Mistake Everyone Makes About AI
Ask most people when artificial intelligence was invented and they'll say the 1950s. Some will say 2022, when ChatGPT arrived and the world couldn't stop talking about it. A few historians might go back to Alan Turing, the British mathematician who first formally asked whether machines could think.
They're all wrong by roughly two thousand years.
This isn't a small error. It's the kind of mistake that makes you misread the entire story — like thinking yoga started with Lululemon.
The real history of artificial intelligence doesn't begin with silicon and code. It begins with ropes, knots, and falling weights. With steam-powered flutes. With mechanical birds that flapped their wings in ancient Alexandria. With a pocket-sized bronze computer pulled from the floor of the Aegean Sea.
And once you see the full pattern — not just the last chapter — something shifts. You stop seeing AI as a disruption and start seeing it as a continuation. A very long, very human story about one question:
Can we build something that thinks?
Alexandria: The First Programmer
Around 62 CE, an inventor named Heron of Alexandria built a ten-minute theatrical performance — entirely automated.
Gods moved. Fire appeared. Liquids poured from invisible hands. Figures danced. And none of it required a human operator once the show began.
The programming language? Rope, knots, and falling weights. Strings wrapped and unwrapped around rotating drums. Binary-like systems where a wrapped cord meant "do this" and an unwrapped cord meant "don't." A millet-seed timer controlled the tempo — sand was too fast, water too unreliable in the Egyptian heat.
Read that again. Before Rome fell. Before the Dark Ages. Before the Renaissance. A human being had already solved the fundamental logic of programmable automation: if this condition, then that action.
The syntax was mechanical. But the logic is identical to what runs in your phone right now.
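If that sounds abstract, here is the same logic in modern syntax — a toy sketch in my own framing, not Heron's actual mechanics. Treat each position on the rotating drum as one bit: a wrapped cord fires an action, an unwrapped cord skips it.

```python
# A toy model of Heron's drum (my own framing, not his mechanics):
# each position is one bit — wrapped cord (1) fires, unwrapped (0) skips.
def run_drum(drum, actions):
    performed = []
    for step, wrapped in enumerate(drum):
        if wrapped:                                  # "do this"
            performed.append(actions[step % len(actions)])
        # unwrapped: "don't" — the falling weight just keeps falling
    return performed

show = run_drum([1, 0, 1, 1],
                ["open doors", "light fire", "pour wine", "dance"])
# → ["open doors", "pour wine", "dance"]
```

The drum and action names are invented for illustration; the point is only that "wrapped means do, unwrapped means don't" is already a conditional instruction.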
But here's what I didn't expect when I first went deep into this history: Heron wasn't even close to the beginning.
In 1900, divers pulled a corroded lump of bronze from a two-thousand-year-old shipwreck off the Greek island of Antikythera. It took over a century of analysis to fully decode what it was. When researchers finally did — in 2021 — they described it as a device that "mechanized the predictions of scientific theories."
The Antikythera mechanism, dated to around 100–150 BCE, calculated the positions of the Moon, Sun, and five visible planets. It predicted eclipses using the Saros cycle — a roughly 18-year astronomical rhythm. It accounted for the Moon's elliptical orbit — a mathematical subtlety that wouldn't be formally described until Kepler, seventeen centuries later. It used differential gearing — a technology we typically credit to the Industrial Revolution.
The ancient Greeks built a computer. And they built it so well that it took modern science more than a century just to figure out what it was.
Here's the yogic mirror I want to offer:
The body on the mat is doing the same thing. Every Sun Salutation is a sequence stored in biological memory — bone, fascia, breath, nervous system — running a program refined over centuries. The yogi isn't thinking through each pose. The pattern runs. The intelligence is in the design, not the moment-by-moment execution.
Heron understood this. His theatre didn't need a director present. The director was already in the rope.
Baghdad: The Leap From Automation to Instruction
When mechanical knowledge re-emerged after the ancient world, it came not from Athens or Rome, but from Baghdad.
In 850 CE, three brothers — the Banū Mūsā — described a steam-powered flute player that a user could program. Different configurations, different melodies. This was not just automation — it was the conceptual leap from "a machine that does one thing" to "a machine that does what you tell it."
A very important distinction. One that yoga practitioners know intimately.
When you begin a yoga practice, you follow a fixed sequence. You do what the tradition tells you. Ashtanga has its series. Bikram has its 26 postures. The machine does one thing. But as the practice deepens, something shifts. You begin to feel which poses your body is asking for today. You develop what Patanjali calls svadhyaya — self-study. The practice becomes responsive. You are now the user programming the machine, not just running it.
By 1206, Al-Jazari — chief engineer to a ruling dynasty in what is now southeastern Turkey — completed a manuscript describing fifty mechanical devices. His most remarkable: a castle clock eleven feet tall, reprogrammable daily to account for changing day lengths throughout the year. Inside it, four robot musicians floated in a boat, playing different songs depending on how you arranged the pegs on rotating cylinders.
Move a peg, change a note. Rearrange the pattern, compose a new song.
Al-Jazari had built a music sequencer eight hundred years before the first electronic one. His instructions were so precise that in 1976, the Science Museum in London built his scribe clock from his eight-hundred-year-old manuscript. It worked perfectly.
Most people have never heard his name.
The Philosophers and the Question That Wouldn't Go Away
By the 17th century, Europe had caught up with where the Islamic world had been five hundred years earlier. The machines were getting more sophisticated — and so were the questions they raised.
René Descartes, writing in 1637, imagined machines that could speak, cry out when hurt, even hold simple conversations. But he drew a firm line: machines could never think. And he offered a test — centuries before Alan Turing would propose his famous version — to tell the difference.
A machine, Descartes argued, could be designed to respond in specific situations. But it could never arrange words in varied, contextually appropriate ways the way even the least intelligent human could. That variability, that responsiveness — that was the signature of a thinking mind, not a mechanism.
He was largely right, for about three hundred years.
Then in 1737, a French inventor named Jacques de Vaucanson built a flute player that actually played the flute. Real breath from leather bellows. Real fingers, covered in leather for pliability, covering real holes. A moving tongue controlling airflow. It knew twelve songs. Musicians complained it played shrilly. But it played.
His follow-up — the famous Digesting Duck — appeared to eat grain, digest it, and defecate. It was later discovered to be a clever fraud: pre-stored material released while the grain collected separately. But even as a fraud, it advanced something important. Because now the question had fully shifted from can machines move like us to something much more unsettling:
What is the difference between thinking and perfectly imitating thought?
Yoga had been sitting with a version of this question for thousands of years. In Sanskrit, the term chitta refers to the mind-stuff — the field of consciousness that receives, stores, and processes experience. Patanjali's entire project in the Yoga Sutras is the stilling of the modifications of chitta: yogas chitta vritti nirodhah.
In other words: what you experience as "thinking" is mostly pattern recognition running on stored impressions. Habit. Conditioning. Samskara.
Sound familiar?
The question Descartes was wrestling with — is the machine really thinking, or just executing its programming? — is the same question every serious yoga practitioner eventually turns on themselves.
Am I really thinking? Or am I mostly running patterns I absorbed before I knew I was absorbing them?
When Patterns Became Programs
The answer to Descartes' question — or at least the next layer of it — came from an unexpected place: the textile industry.
In 1801, a French weaver named Joseph Marie Jacquard patented a loom controlled by chains of punched cards. Each card controlled one row of the pattern. Where there were holes, the mechanism activated. Where there were none, it held still.
The weaver just operated the loom. The intelligence was in the cards.
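The card-to-cloth logic is simple enough to sketch in a few lines — a hypothetical card layout of my own, not Jacquard's actual card format. A hole (1) lifts a warp thread for that row; no hole (0) leaves it down.

```python
# A minimal sketch of punched-card weaving (hypothetical card layout):
# each card is one row of cloth; a hole (1) lifts a warp thread, 0 does not.
def weave(cards):
    return ["".join("█" if hole else "·" for hole in card) for card in cards]

for row in weave([[1, 0, 0, 1],
                  [0, 1, 1, 0],
                  [1, 0, 0, 1]]):
    print(row)
# prints a small motif: █··█ / ·██· / █··█
```

Change the cards and you change the cloth — the loom itself never changes.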
By 1812, France had eleven thousand of these looms. Patterns that once required a master weaver's constant attention now ran automatically. Lyon weavers rioted — burning looms in the streets. The first tech backlash against automation, and not the last.
Charles Babbage saw these looms and recognized something extraordinary. In 1834, he conceived the Analytical Engine: not just a calculator, but a general-purpose computer capable of any calculation that could be described. He separated instructions into three types — what to do, what numbers to use, and where to store results. The architecture he sketched on paper is essentially the one sitting on your desk today, conceived in brass and steel rather than silicon.
The Analytical Engine was never built in his lifetime — or since. But starting in 1985, the Science Museum in London constructed his Difference Engine No. 2 from the original drawings, proving his engineering was sound. It calculated perfectly.
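Babbage's three-way split — operation, operands, destination — is still how machine instructions look today. Here's a toy sketch in my own notation (not Babbage's) of a program as a list of such triples-plus-destination, run against a numbered store:

```python
# A toy instruction format (my notation, not Babbage's):
# (operation, source store 1, source store 2, destination store).
def run(program, store):
    ops = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}
    for op, src1, src2, dest in program:
        store[dest] = ops[op](store[src1], store[src2])
    return store

store = run([("MUL", 0, 0, 1),    # V1 = V0 * V0
             ("ADD", 1, 0, 2)],   # V2 = V1 + V0
            {0: 5, 1: 0, 2: 0})
# store[2] is now 30
```

What to do, what numbers to use, where to put the result — separated cleanly, exactly as he described.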
Ada Lovelace, translating a paper about the Analytical Engine in 1843, added notes three times longer than the original. In them, she wrote the first computer algorithm — a method for calculating Bernoulli numbers that demonstrated loops, conditional branching, and memory storage.
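Her famous Note G computed Bernoulli numbers. Here is a modern Python sketch of the same classical recurrence — the loops and stored intermediate results she described, though in nothing like her Engine notation:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    # B_0 .. B_n via the classical recurrence: for m >= 1,
    # sum over j from 0 to m of C(m+1, j) * B_j = 0, solved for B_m.
    B = [Fraction(1)]                       # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))              # exact rational arithmetic
    return B

bernoulli(4)
# B_0..B_4 = 1, -1/2, 1/6, 0, -1/30
```

A loop, a conditional structure, and memory that feeds back into the next step — the full pattern she laid out on paper, eighty years before electronics.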
But she also wrote the most important caution in the history of computing:
"The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform." — Ada Lovelace, 1843
Machines followed instructions. They couldn't originate. They couldn't create. They were limited by what their programmer already knew.
For a century, this seemed to settle the matter.
In yoga terms, it was like saying: the student can only ever reproduce what the teacher already knows. The practice could be transmitted, but never transcended.
Then Alan Turing noticed the obvious flaw.
What If We Ordered Them to Learn?
In 1950, Turing published the paper that changed everything. He systematically worked through every objection to machine intelligence — including Lovelace's — and on each one, he found a crack.
Her objection only holds, he pointed out, if we program machines to be predictable. What if we programmed them to surprise us?
Instead of trying to build adult intelligence directly, he asked: what if we built a child machine? One that started with simple rules and learned from experience, making mistakes, adjusting, improving. Its eventual behavior would be something we enabled but didn't directly program.
Origination through education, not explicit instruction.
Every yoga teacher reading this has seen exactly this. You transmit a practice. The student absorbs it. Then one day, ten years in, they do something with it that you never showed them — and it's better than what you gave them. The machine exceeded its programming because the machine was also alive.
Turing also understood something deeper — something that most people still don't fully grasp about the technology they use every day.
Your computer is not deterministic. It only appears to be.
Right now, high-energy particles from distant supernovae are passing through your device. Occasionally, one hits just right, flipping a single bit in your memory from a 1 to a 0 or back. IBM calculated one such error per 256 megabytes of RAM per month. In 2003, a Belgian election recorded a vote count anomaly — exactly 4,096 extra votes for one candidate — that matched the signature of a single-bit flip in binary. The Cassini spacecraft reported 280 such errors per day in normal conditions.
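The Belgian anomaly is worth pausing on, because 4,096 is not a random number. Flipping a single bit at position k changes a stored integer by exactly 2 to the power k, and bit 12 is worth 4,096. A tiny demonstration (the starting count is arbitrary):

```python
# Flipping bit k changes a stored integer by exactly 2**k.
# Bit 12 is worth 2**12 = 4096 — the signature in the Belgian count.
count = 514                    # an arbitrary vote count
flipped = count ^ (1 << 12)    # a cosmic ray flips bit 12 (0 -> 1 here)
assert flipped - count == 4096
```

One particle, one bit, four thousand phantom votes.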
We built an entire global computing infrastructure on machines that are probabilistic at their core. We just got very good at hiding the chaos — creating the appearance of reliability from unreliable physics.
Which raises something worth sitting with:
Your nervous system runs on similar principles. Neurons fire probabilistically. Memories reconstruct rather than replay. Perception is a controlled hallucination, your brain predicting what it expects to see and correcting when reality disagrees.
We are not deterministic machines either. We just tell ourselves we are.
Yoga practice, at its most honest, is the practice of watching this — watching the mind generate its predictions, run its patterns, correct its errors. The Sanskrit word for this witnessing is sakshi. The observer. The part of you that is neither the program nor the output, but the awareness in which both arise.
The Machine That Learned From Nothing
In March 2016, at a hotel in Seoul, something changed.
Go — the ancient Chinese board game — was considered safe from computers. Its possible configurations outnumber the atoms in the observable universe. In 2012, experts at a major AI conference said a computer breakthrough in Go was at least twenty years away. Lee Sedol, one of the greatest players alive, predicted he would win in a landslide.
He lost four games to one against AlphaGo.
But it wasn't the score that shook people. It was Move 37 in Game 2. AlphaGo calculated this move had a 1-in-10,000 chance of being played by a human — then played it. The commentators thought it was an error. Then they realized it was genius. Lee Sedol left the room for fifteen minutes.
Later, he said: "I thought AlphaGo was based on probability calculation and that it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative."
Then DeepMind went further. They released AlphaZero.
Where AlphaGo had learned from 160,000 human games, AlphaZero learned from nothing. Tabula rasa. Just the rules — and then play against yourself. It mastered chess in four hours. Not four months. Four hours to surpass the world's strongest traditional chess engine. Two hours to dominate shogi. Thirty hours to exceed AlphaGo at Go.
AlphaZero searched 80,000 positions per second. Its competitor searched 70 million — nearly 900 times more — and still lost. It wasn't calculating more. It was understanding better.
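Stripped to a cartoon, the self-play idea fits on a page. What follows is my own toy sketch using the pile game Nim — nothing like DeepMind's neural networks and tree search, but the same loop: play against yourself, score the outcome, and nudge the value of every position you passed through.

```python
import random

# A drastically simplified self-play learner (my own toy sketch of the idea,
# not DeepMind's architecture). Two copies of one value table play Nim
# against each other: take 1-3 stones, taking the last stone wins.
def self_play_nim(pile=10, episodes=5000, lr=0.2, explore=0.1):
    random.seed(0)                   # reproducible toy run
    value = {}                       # (stones_left, player_to_move) -> value for player 0
    for _ in range(episodes):
        left, player, visited = pile, 0, []
        while left > 0:
            moves = [m for m in (1, 2, 3) if m <= left]
            if random.random() < explore:            # explore occasionally
                m = random.choice(moves)
            else:                                    # otherwise play greedily:
                best = max if player == 0 else min   # player 0 maximises, 1 minimises
                m = best(moves, key=lambda mv: value.get((left - mv, 1 - player), 0.0))
            left -= m
            player = 1 - player
            visited.append((left, player))
        outcome = 1.0 if player == 1 else -1.0       # last mover took the last stone
        for s in visited:                            # credit every visited position
            value[s] = value.get(s, 0.0) + lr * (outcome - value.get(s, 0.0))
    return value

learned = self_play_nim()
```

Nothing about Nim strategy appears anywhere in this code — just the rules and self-play. Whatever the table ends up knowing, it learned from playing itself.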
Garry Kasparov, who had lost to Deep Blue in 1997, said: "I can't disguise my satisfaction that it plays with a very dynamic style, much like my own."
Peter Heine Nielsen, coach to world chess champion Magnus Carlsen, put it more starkly: "I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know."
AlphaZero had rediscovered centuries of human chess knowledge — and then found patterns humans had missed despite studying the game since medieval times.
The yogic parallel is almost uncomfortable in how direct it is.
There is a concept in Zen and in certain yogic traditions: shoshin, or beginner's mind. The most dangerous practitioner is not the beginner — it's the intermediate student who thinks they already know. The most dangerous assumption in any practice, any system, any technology, is the belief that we've found the ceiling.
AlphaZero didn't know it was supposed to play chess the way humans do. It had no tradition to respect, no habits to unlearn. It just played, observed, and adjusted — a kind of pure, disembodied abhyasa: sustained practice without attachment to outcome.
And in four hours, it surpassed what humanity had built across five centuries.
Layer Zero
Here's the framing I keep coming back to.
Every breakthrough in computing has done the same thing: taken something complex and made it simple to use — without removing the complexity underneath. Assembly language didn't replace machine code. High-level languages didn't replace assembly. The internet didn't replace electricity. Each layer made the layer beneath it invisible without making it irrelevant.
AI is the latest layer in this stack. It compiles human intention into outcomes. When you ask ChatGPT to explain something, you're invoking layers stretching back through neural networks, through operating systems, through transistors and probabilistic electrons, all the way down to cosmic rays flipping bits in your RAM.
Each layer exists simultaneously. We just use the one appropriate to our moment.
But here's what the history of computing almost universally ignores:
The stack has a layer zero.
Before Heron's rope and knots. Before Al-Jazari's pegs. Before Jacquard's punched cards. Before Babbage's brass gears. Before silicon, before code, before language models trained on the entire internet —
There was a body. A nervous system. A breath. A mind that could watch itself.
Yoga was the first attempt to systematically work with this layer. Not to explain the mind from the outside — but to observe it from within. To notice the patterns. To trace the grooves that thoughts and habits carve into consciousness. To ask: what runs automatically, and what is actually chosen?
In Sanskrit, those grooves are called samskara. Impressions. The accumulated residue of experience, shaping future experience, creating the illusion of a fixed self running on a fixed program.
Sound familiar? It should. It's the same architecture as every learning system that AlphaZero, or GPT, or any neural network uses. Patterns reinforced by repetition. Pathways strengthened by use. Responses shaped by the accumulated weight of past inputs.
The ancient yogis were doing computational neuroscience. They just didn't have that vocabulary yet.
The Question Has Always Been the Same
Ke Jie, the world's top-ranked Go player, wept after losing to AlphaGo in 2017. He said: "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong. I would go as far as to say not a single human has touched the edge of the truth of Go."
He was right. And he was also wrong.
Humans built the machine that found that truth. Humans wrote the algorithm that discovered what human intuition couldn't reach alone. We abstracted our own limitations into a system that could transcend them.
That is not a defeat. That is the oldest human pattern of all.
Every generation has built thinking machines — Heron's automated theatre, Al-Jazari's programmable clock, Jacquard's intelligent loom, Babbage's brass computer, Turing's learning algorithms, AlphaZero's self-taught mastery. Each time, the builders believed they had either achieved the impossible or hit the absolute ceiling of what machines could do. Each time, they were wrong.
And each time, the next layer seemed impossible until it existed. Then we normalized it. Forgot the complexity underneath. And declared the next layer impossible.
The yoga practice follows the same arc. The first time you attempt a seated forward fold, the floor seems unreachable. The first time you watch a seasoned practitioner move through a full primary series, it looks like a different category of human. Then you practice. The impossible becomes normal. You forget it was ever impossible. And you start to wonder what's next.
The question has never been can machines think? That's like asking if submarines swim. The question is: what patterns will we abstract next? What complexity will we compress into simplicity? What impossible thing will we make normal, then forget was ever impossible?
And underneath that question — the one yoga keeps asking, the one every serious practitioner eventually arrives at —
Who is the one doing the asking?
Not the program. Not the pattern. Not the samskara, not the habit, not the accumulated weight of every input this nervous system has ever processed.
The witness. The one who watches.
The Antikythera mechanism could calculate eclipses but couldn't wonder at them. Heron's theatre could perform but couldn't be moved by its own performance. AlphaZero can master chess in four hours but has no idea what chess means.
That gap — between computation and meaning, between processing and presence — is the territory yoga has always worked in.
It doesn't make AI less remarkable. If anything, it makes the human mind more so.
We built two thousand years of increasingly intelligent machines. And in doing so, we keep accidentally describing ourselves.
Where this came from
This reflection was inspired by the work and conversations inside Jake Van Clief's Skool community — Clief Notes.
A space focused not just on using AI… but on understanding the thinking underneath it.
If this way of thinking speaks to you, you can explore it here:
Join the Clief Notes Community
Because the real value is not the tool.
It's the way you learn to see.