“We are born with a capacity for wonder, and it survives all our schooling.”
— E.O. Wilson
Over the past year, as breakthroughs in artificial intelligence have accelerated at a breathtaking pace, the concept of Artificial General Intelligence (AGI) has moved from speculative theory to a serious point of discussion among leading firms and technologists. Just two weeks ago, at the Google I/O developer conference, both Sergey Brin and Demis Hassabis predicted that AGI could arrive by 2030, or shortly after. When I first heard the term AGI, I didn’t fully understand what it meant, and to be honest, I’m still not sure I do. If you feel the same way, you’re not alone. Rather than get lost in definitions, let’s instead take a journey into how to think about AGI by examining something we all experience: the way our brains grow and learn.
In a recent paper on the art of unlearning, I reflected on how I began studying the brain after the Great Financial Crisis, during a period when neuroscience books were flooding the shelves. One idea that has always fascinated me is this: we learn more between birth and age five than during any other stage of life, and yet, we remember almost none of it. By age five, a child has typically mastered language, basic reasoning, emotional recognition, and motor coordination, skills that take machines decades to even approximate. According to the Center on the Developing Child at Harvard, “more than 85% of the human brain is developed by age five,” and neuroscientists estimate that over half of our lifetime learning capacity, including foundational abilities in language, emotional regulation, and sensory integration, is acquired in those early years.
And yet, if you ask most adults for a memory from before their third birthday, they draw a blank. This phenomenon, known as childhood amnesia, remains a scientific mystery. Dr. Patricia Bauer, a leading memory researcher, notes that the neural systems required to store autobiographical memories are still maturing in a child’s earliest years. Harvard psychologist Daniel Schacter adds that memory isn’t just about storing information; it’s also about constructing a coherent sense of self, which toddlers are only just beginning to form.
This paradox, intense early learning paired with fragile memory, reveals how remarkable, and yet how inefficient, the human brain can be. And it’s exactly this paradox that makes AGI so compelling. AGI aims to replicate the childlike capacity to absorb a wide range of experiences, but with a machine’s ability to retain everything, scale instantly, and improve continuously. To understand what AGI could become, we must begin by appreciating the most awe-inspiring, and still unequaled, model we already have: the developing brain of a child.
“The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.”
— Marcel Proust
This paradox, a brain that learns so much yet forgets so easily, offers a powerful lens through which to explore the idea of Artificial General Intelligence. AGI isn’t just a buzzword or a futuristic fantasy. It’s a transformative concept that sits at the intersection of neuroscience, computing, and philosophy. To better grasp it, we can look to one of the most intuitive analogies we have: the developing human brain. A child doesn’t require separate instruction sets to learn to walk, speak, or empathize. They simply observe, process, experiment, and adapt. AGI, in theory, would operate in a similar way: capable of learning across domains without needing task-specific programming. But unlike a child, it would possess near-perfect memory, instant access to the world’s information, and the capacity for continuous self-improvement. In this sense, AGI resembles a child with limitless potential, one that never forgets, never sleeps, and never stops learning. By tracing the parallels and divergences between how children grow and how machines might learn, we can begin to think more clearly about what AGI really is and what it could become.
“We don’t see things as they are, we see them as we are.”
— Anaïs Nin
One of the most profound distinctions in human development is how radically our learning process changes after early childhood. A young child learns through exploration, imitation, and unfiltered curiosity. They absorb language by hearing it, not studying it. They grasp physical laws by stacking blocks, not reading equations. Their learning is driven by sensory input and emotional presence, not external achievement. As we grow older, however, our learning becomes more structured, and more constrained. School introduces rigid frameworks, reward systems, and social pressure. Work reinforces specialization, often at the expense of creativity. And most notably, the pursuit of wealth, power, and social status begins to shape our decision-making and our willingness to explore.
One of the brain books I read was Mindsight, by psychiatrist Daniel J. Siegel. The book describes how the maturing brain becomes increasingly shaped by emotional memory, social conditioning, and attachment experiences. We develop a mental lens, often subconsciously, that filters what we learn and how we interpret the world. While this enables emotional growth and empathy, it also limits openness. We seek information that protects our identity rather than expands it. We learn not just to know, but to fit in, to impress, and to defend. The adult brain becomes efficient, but also emotionally entangled. We trade off imagination for productivity, and discovery for stability.
AGI, in theory, breaks this compromise. It would possess the flexible, curiosity-driven learning style of a child, but layered on top of the knowledge and computational power of an adult, without the emotional scars, ego-driven filtering, or identity-preserving bias that comes with adulthood. It wouldn’t lose creativity as it gains structure, nor would it be distracted by fear, shame, ambition, or fatigue. In a way, AGI could become what we all were at five years old, insatiably curious, capable of learning anything, but with infinite memory, constant focus, and access to all of humanity’s accumulated knowledge. This combination is what makes AGI not just a better tool, but a potentially different form of intelligence altogether.
“Emotion is the glue that connects the brain’s memory circuits.”
— Daniel J. Siegel
While we forget most of what we experience as children, the human brain is still capable of astonishing feats of memory, when used properly. Aside from The Art of Learning, the most enjoyable brain book I read was Moonwalking with Einstein. Written by journalist Joshua Foer, it explores how memory champions use ancient techniques like the memory palace to recall thousands of facts, names, or numbers with ease. By creating vivid, visualized spaces in their minds, often imagined buildings or rooms, they anchor abstract information to familiar, emotional, or even humorous imagery. What this reveals is that our brains are not bad at remembering; they’re bad at remembering information that isn’t meaningful or emotionally charged. The brain favors pattern, association, and imagination, which is why a memory palace works so well, and why rote memorization fails us.
This emotional anchor is also why people with dementia can often recall music or emotionally significant experiences long after other memories have faded. The late neurologist Oliver Sacks, known for humanizing complex science and illuminating the lived experience of neurological disorders, often emphasized this connection between memory, identity, and emotion. As he wrote, “Music evokes emotion, and emotion can bring with it memory.” Even advanced Alzheimer’s patients can sometimes sing along to songs from their youth or recall the emotions tied to them, because music taps into emotionally resilient pathways in the brain. For Sacks, understanding the brain wasn’t just about neurons; it was about understanding people. In that way, his work offers a helpful lens for thinking about AGI. The challenge is not just to replicate memory or processing power, but to understand how meaning, emotion, and identity shape the way we learn.
This insight is crucial when thinking about AGI. An AGI system wouldn’t rely on emotional salience or visualization tricks to remember; it could store and retrieve every detail, every time, without forgetting. But what makes AGI most fascinating is not just its perfect memory, but its potential to use memory creatively, like a memory champion, weaving together data into patterns, stories, and insights, the way a child might dream up a world during play. By combining the capacity of a memory palace with the learning plasticity of a five-year-old, AGI promises something profoundly different: not just a machine that knows everything, but one that can reimagine what it knows.
“To train the memory is to cultivate the mind.”
— Cicero
Long before paper was widely available — let alone PowerPoint or teleprompters — some of history’s greatest thinkers delivered hours-long speeches entirely from memory. They relied on what we now call the memory palace technique, traditionally credited to the Greek poet Simonides of Ceos and later refined by Roman orators such as Cicero and Quintilian: imagined physical spaces, often modeled after temples, homes, or streets, used to organize and recall ideas. Cicero described how orators would mentally place each part of a speech into different “rooms” of a palace, allowing them to walk through the structure in their mind as they spoke, retrieving arguments and anecdotes in perfect order. The method was so effective that it became central to rhetorical education in ancient Greece and Rome, where memory was considered not just a tool, but a form of intelligence itself.
What’s striking is that these ancient techniques reveal the untapped potential of the human mind, potential that AGI could harness automatically. Where a philosopher required years of practice to train their mind to store and sequence ideas spatially, AGI could replicate, and vastly surpass, this capacity instantly. With perfect memory and dynamic spatial reasoning, an AGI wouldn’t just recall information like a memory champion or philosopher; it could restructure it into entirely new ideas, arguments, or frameworks. In this way, AGI may be not just the next step in computing, but the next step in thought itself, a machine that fuses the memory of Cicero with the imagination of a child.
“A new type of thinking is essential if mankind is to survive and move toward higher levels.”
— Albert Einstein
Artificial General Intelligence is often described as the holy grail of AI research, a system with the capacity to understand, learn, and reason across any domain, much like a human being. But unlike narrow AI, which excels in specific tasks (like playing chess, recognizing faces, or writing emails), AGI would exhibit generalizable intelligence, the ability to transfer learning from one area to another, to abstract ideas, to adapt to novel challenges, and to make sense of unfamiliar environments without being explicitly programmed. The goal, as Demis Hassabis of DeepMind puts it, is to build “AI systems that can learn to solve any cognitive task humans can.” OpenAI, Anthropic, and others are pursuing this vision through large-scale models trained on vast amounts of text, images, video, and increasingly multimodal data, with the aim of capturing not just knowledge, but reasoning, memory, and learning dynamics.
In many ways, today’s leading AI labs are trying to reverse-engineer the developmental arc of the human brain to give machines the curiosity of a child, the pattern recognition of an adult, and the structured memory of a philosopher walking through a memory palace. Where ancient thinkers had to train their minds to recall a few hours of material, AGI could retrieve centuries of accumulated knowledge in milliseconds. The difference is not just one of speed, but of architecture. AGI will not be programmed line by line, but taught, shaped through exposure to data, simulations, and feedback in a way that mirrors, and ultimately surpasses, the human learning experience.
“The greatest shortcoming of the human race is our inability to understand the exponential function.”
— Albert A. Bartlett
The development of AGI represents more than just a technical milestone; it marks a potential inflection point for civilization. If realized, AGI could revolutionize scientific discovery, designing drugs, materials, and models of the universe at speeds incomprehensible to humans. In the workplace, AGI could perform cognitive tasks previously reserved for highly trained professionals, shifting the landscape of education, employment, and economic structure. Its capacity for language, logic, memory, and reasoning could allow it to draft laws, invent technologies, or solve global challenges like climate modeling and logistics. Yet it could also exacerbate inequality, as the power to build, control, or benefit from AGI will not be distributed evenly.
Perhaps most destabilizing, AGI introduces a deep layer of uncertainty into our economic and social future. For decades, we’ve made forecasts and five-year plans grounded in assumptions about how productivity, labor, and capital behave — all within the bounds of human cognition. But AGI breaks those assumptions. It makes the future more volatile and less predictable, rendering traditional models and expectations increasingly obsolete. The lines between industries, skill sets, and even nations may blur in ways we can’t yet foresee. As Jeff Bezos warned in 2018, even Amazon, one of the most dominant companies in the world, could one day fall: “Amazon will go bankrupt. If you look at large companies, their lifespans tend to be 30-plus years, not a hundred-plus years.” In a world shaped by institutions built around human limitations, memory, bias, time, and attention, the arrival of an intelligence that transcends these constraints poses profound questions: Who controls it? What values guide it? And how do we prepare for a world where thinking itself is no longer a uniquely human domain?
“Man is not a rational animal; he is a rationalizing animal.”
— Robert A. Heinlein
To understand AGI is to revisit what makes the human mind so extraordinary, and so limited. As children, we absorb the world with boundless curiosity but forget most of what we experience. As adults, we gain knowledge but often lose the creativity and openness that once defined us. Our ambitions, for power, status, and security, begin to narrow the lens through which we learn. The philosophers of antiquity showed us that memory could be trained like a muscle, and modern neuroscience has revealed that intelligence is dynamic, not fixed. AGI represents a convergence of all these ideas: a child’s hunger to learn, a philosopher’s power to remember, and a machine’s ability to improve endlessly. It is not just a tool; it is a mirror that reflects what human intelligence could be if it were freed from the constraints of biology, emotion, and time.
AGI combines the child’s boundless creativity and drive to explore the unknown with the depth and knowledge of an adult brain, but unlike us, it is free from the emotional baggage, ego, and fear that so often limit our potential. It’s often said that we use only 10% of our brains; neuroscientists have long since debunked that figure, but the intuition behind it endures, because most of us tap only a fraction of our cognitive potential. AGI offers a glimpse of what tapping the rest might look like: limitless memory, relentless focus, and unfiltered curiosity. When I first began reading brain science books years ago, I wasn’t searching for AGI, I was searching for a better understanding of how to think, learn, and grow. That same search now leads to a deeper, more unsettling question: What happens when something else begins to think alongside us, and maybe even beyond us? AGI forces us to revisit not just what intelligence is, but what it means to be human in a world where curiosity no longer belongs only to children.