This month’s Tech Talk was presented by Glen; it was a follow-on to the “warm-up” Tech Talk he presented in June.
This month’s topic is a review of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett.
It’s a great book, BTW. If you enjoy science books, this is a great read.

I’m interested in this topic because, as AI becomes more human-like, I’m curious to see the differences and where it’s going…there’s so much difference between how human brains actually work and what AI is doing.
At the beginning of the book, one of the things Bennett mentions is that The Jetsons, in 1962, predicted much of today’s technology: flat-screen TVs…cell phones…3D printers…but the one thing they got wrong was Rosie the Robot, who was very human-like, understood emotion, and was almost a human in a robot body. We don’t have anything like that now. LLMs sound sort of like people when you have a conversation with them, but they are nowhere near Rosie-level.
THE FIVE BREAKTHROUGHS THAT MADE OUR BRAINS:
– Steering and the First Bilaterians
– Reinforcing and the First Vertebrates
– Simulating and the First Mammals
– Mentalizing and the First Primates
– Speaking and the First Humans
NOTE: I’ve listed the sections on all of these breakthroughs in my handouts, but we don’t have time to go through every one, so a lot of them are simply headings.
– STEERING AND THE FIRST BILATERIANS:
The first commercially successful home robot was the small autonomous vacuum cleaner, iRobot’s Roomba. Almost the only thing the Roomba did with its intelligence was steer: go straight ahead or turn. And that’s what brains were initially used for.
When animals started hunting (unlike corals, for instance, which just sit and grab what passes), they needed to turn this way and that, or keep going straight…it doesn’t sound very complex, but it’s what evolution came up with, and it was enough to let the Roomba do its job.
The thing about steering is that the world had to be categorized, basically, into good and bad: something you needed to approach, like food, and something you needed to avoid, like predators. Brains needed a way to tell the organism to go one way or the other: either toward this thing or away from it. And that brought up emotion. In other words, you need some emotion, some reason, to go one way or another. I don’t think nematodes [eelworms] feel emotion, but they had to have the first glimmers that led to emotion: being afraid, or whatever.
Audience Member: What’s a nematode?
Basically, a small worm [C. elegans] that is famous for having exactly 302 neurons…in comparison, we have roughly 85 billion. With only those 302 neurons, they are able to move around, find food, move away from predators, and react to hot and cold. They’ve survived pretty much unchanged for the past 500 million years. Amazing. And we basically evolved from them, or from their predecessors.
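The steering idea, approach what seems good and avoid what seems bad, can be sketched as a toy program in the spirit of a Braitenberg vehicle. The sensor values, gains, and valence numbers below are invented for illustration; they are not from the book.

```python
# Toy "steering brain": turn toward attractive stimuli, away from aversive ones.
# Left/right sensor strengths and valences are made-up illustrative values.

def turn_command(left_sense: float, right_sense: float, valence: float) -> float:
    """Return a turn rate: positive = turn left, negative = turn right.

    For a positive valence (food), steer toward the stronger signal;
    for a negative valence (predator), steer away from it.
    """
    gradient = left_sense - right_sense   # which side is the stimulus stronger on?
    return valence * gradient             # the valence flips approach vs. avoidance

# Food smelled more strongly on the left: turn left (positive).
print(turn_command(1.0, 0.25, valence=+1.0))   # 0.75
# Predator sensed more strongly on the left: turn right (negative).
print(turn_command(1.0, 0.25, valence=-1.0))   # -0.75
```

The entire “decision” is one sign flip, which is roughly the point: categorizing the world into good and bad is enough to steer.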
– REINFORCING AND THE FIRST VERTEBRATES
The chapter “Why Life Got Curious” is sort of fun…
How do you make an AI tackle the exploration-exploitation dilemma? You do that by making the AI curious. You make learning itself reinforcing, so the AI doesn’t just sit there. It’s like certain neurological deficits you might have after a particular kind of stroke (it’s fairly rare) where people just don’t do anything because they’re not motivated to do anything. They see. They feel. They hear. But they don’t do anything.
So AI needs some reason to do things to kind of get things going and moving in a particular kind of direction.
Curiosity and Reinforcement Learning coevolved because curiosity is a requirement for reinforcement learning to work.
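One common way to make learning itself reinforcing is to add a novelty bonus to an action’s estimated value, so rarely tried actions look attractive until they’ve been explored. This is a minimal sketch of that idea; the 1/√(visits) bonus and the class design are one standard choice, used here purely as an illustration, not anything specified in the book.

```python
# Minimal count-based curiosity: pick the action whose estimated value PLUS a
# novelty bonus is highest, so untried actions get explored before the agent
# settles on whatever paid off first.
import math
from collections import defaultdict

class CuriousAgent:
    def __init__(self, actions, bonus_scale=1.0):
        self.actions = list(actions)
        self.value = defaultdict(float)   # running average reward per action
        self.visits = defaultdict(int)    # how often each action was tried
        self.bonus_scale = bonus_scale

    def choose(self):
        def score(action):
            novelty = self.bonus_scale / math.sqrt(self.visits[action] + 1)
            return self.value[action] + novelty
        return max(self.actions, key=score)

    def learn(self, action, reward):
        self.visits[action] += 1
        n = self.visits[action]
        self.value[action] += (reward - self.value[action]) / n  # running mean

agent = CuriousAgent(["left", "right"])
agent.learn("left", 0.2)      # "left" gave a small reward...
print(agent.choose())         # ...but untried "right" has a bigger novelty bonus
```

With no curiosity term, the agent would repeat “left” forever after its first small payoff; the bonus is what gets it moving, which is the coevolution point above.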
The First Model of the World
Nematodes and other non-vertebrates don’t have world models. For example, when a C. elegans worm moves from one place to another, it just senses something it needs to go toward or move away from.
Audience Member: The current stimulus.

Yes! It doesn’t have any idea of what’s around it. It doesn’t have a world map at all. It just goes toward whatever smells (or otherwise seems) good and away from whatever seems bad. Vertebrates, though, do map the world. One reason they can do so is that they have semicircular canals in the inner ear (the vestibular system), which tell you how you’re moving and so on. Without that, it’s hard to build a world map.
Audience Member: And keep your balance.

Yes. Especially if you’re two-legged.
– SIMULATING AND THE FIRST MAMMALS
Mammals were the first vertebrates that actually started simulating the environment, which, of course, is computationally intensive. That’s one reason why only mammals and birds do this. Everything else (fish, for example) is cold-blooded, and cold-blooded animals don’t have the computational horsepower to do it, because neurons don’t work as well when they get cold. Mammals keep their body temperature constant and high (around 100 °F, or about 38 °C).
Far-ranging vision was also necessary for something early mammals could do with the simulation: imagine what was going to happen as one went out into the world. Could you get to that food item before the predator bird got to you? Would changing your path make a difference?
Being able to learn from your own simulation of the world, from what would happen if you did X or Y, is very useful.
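That kind of mental trial run can be sketched as a tiny lookahead: “run” each candidate route before moving and keep the one where you reach the food before the predator reaches you. All the speeds, distances, and function names here are invented for illustration.

```python
# Toy mental simulation: evaluate candidate routes to food without actually
# moving, then pick the shortest one that the hawk can't beat.

def survives(route_length, my_speed, hawk_distance, hawk_speed):
    """True if we would reach the food before the hawk reaches us."""
    return route_length / my_speed < hawk_distance / hawk_speed

def best_route(routes, my_speed, hawk_distance, hawk_speed):
    """Shortest route that survives the simulation, or None (stay put)."""
    safe = [r for r in routes if survives(r, my_speed, hawk_distance, hawk_speed)]
    return min(safe) if safe else None

# Two candidate routes to the food (12 m or 30 m); the hawk is 20 m away.
print(best_route([12, 30], my_speed=2.0, hawk_distance=20, hawk_speed=3.0))  # 12
```

The payoff of simulation is exactly this: the bad outcome (taking the 30 m route) happens only in the imagined run, not in the world.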
Mammals needed to eat a lot more food to support their higher-powered brains, but it was clearly worth the trade-off.
The secret to dishwashing robots
Basically, humans are deeply in touch with the world, literally. There are about 17,000 nerve endings in your hand alone, hundreds in just the tip of your finger. That’s one reason reptiles aren’t very precise when they move: when you see a reptile running, it’s sort of flip-flopping, whereas when you see a cat running, it’s very precise. Again, it’s the difference between mammalian brains and reptilian brains. There’s a lot more computation involved and a lot more information being taken in through the senses, which allows for very fine motor control.
One reason we don’t have dishwashing robots is that they don’t have that very fine motor control. Basically, they’re using vision and other senses but they don’t have nearly as much tactile sensation, if any, and that’s one reason they don’t move like humans do…although, that’s going to change.
Audience Member: It is.

– MENTALIZING AND THE FIRST PRIMATES
The Arms Race of Political Savvy…that kind of says it all…
How To Model Other Minds.
Primates generally are political animals, in many ways, which means they’ve got a whole other batch of simulations to run. They need to simulate not only what’s going on outside but also the mental states of the individuals they’re working with, so they know what this one is thinking, what that one is thinking, and whether it would be good to do X or Y because those two are working together…it’s a lot more complex.
Why Rats Can’t Go Grocery Shopping.
“How can your neocortex want something that your amygdala and hypothalamus do not?”
“There is another situation we have already seen where brains need to infer an intent—a ‘want’—of which it does not currently share: when they’re trying to infer the wants of other people … Put another way: Is imagining the mind of someone else really any different from imagining the mind of your future self?”
Primates gained the ability to basically imagine themselves in the future, given what they think is going to happen. Earlier mammals could not do that. They could learn from what’s going on; they could see, maybe, what a fellow mouse wanted to do, but they never learned the trick of imagining what they themselves were going to want. For example, when squirrels gather nuts for the coming winter, it’s strictly instinctual, DNA-driven. A human can use mental simulations to see what they will want in the future.
– SPEAKING AND THE FIRST HUMANS
Speaking is interesting because it’s a uniquely human activity that no other animal has, even though we don’t have any new brain regions that do it. Basically, we’ve got a DNA-based program that gets infants to learn speech, and two areas that other primates already have get repurposed in humans to serve speech functions (Broca’s and Wernicke’s areas).
We’ve also got a larynx, or voice box, different from other primates’, and better control of our breath, all of which allows us to speak and use declarative language. What we can do with that is say something like, “Remember the dog we saw yesterday?” What you’re doing is providing a mental image to the people you’re talking to, triggering their mental image, so that everybody in the group knows what you’re talking about. Monkeys can’t do that. They’ve got emotional responses they can communicate, like “I’m scared!”, but they can’t use declarative language or grammar; they don’t even have the physical apparatus that could express speech in sufficient detail for that.
Audience Member: I understand that even primates raised from birth have never asked a question. They’ve only responded to questions. They’ve never actually said, “Why am I here?” or “What am I doing?” or “How are you feeling?” It’s always been responsive.
– CONCLUSION: THE SIXTH BREAKTHROUGH
Then Max Bennett talks about AI…of course, the sixth breakthrough is thought to be machine superintelligence, or AGI (Artificial General Intelligence). We can’t know if or when it will happen or what form it will take, but AGI or something similar does appear to be on the horizon.
There’s a strong contingent of very smart people who think that super-intelligent AI will end the human race. Interestingly, there are people in the AI industry, such as CEOs, who say both “Yes, there’s a chance that AGI will make humans go extinct” and “We need more money to build more centers to make these things smarter.”
***BONUS MATERIAL (in the handout)***
ON AI SAFETY
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares. Worth reading.
It makes very strong arguments about the lack of AI safety.
Then there are a number of links you can follow, related to AI and AI Safety.
There’s a news story about iRobot’s founder saying that he won’t go within 10 feet of today’s walking robots, and why, as well as a link to the story about the company going bankrupt.
And, on the last couple of pages, there’s a brief section on a previous talk I did about the left and right hemispheres of the human brain. I wish more people in the AI industry understood this because, when you spend some time looking at the left and right hemispheres, you realize how different the two blueprints and their manifestations are.
The left hemisphere basically is a tool that the right hemisphere needs to manipulate the world and build things and so on. But, when the left hemisphere is running the show, it’s really bad news. And that, more and more, is the case.
Hemisphere differences are MUCH more subtle and complex than I can describe here, and almost everything the brain does uses both hemispheres in one way or another, but the two hemispheres experience the world in very different ways — and we need both.
But we need the right hemisphere to be the dominant one. Interestingly, it has historically been called the “minor” hemisphere, despite being physically larger and the one that sees holistically. The right hemisphere sees the world as a whole, a gestalt; it understands the big picture and senses a connection to the world. The left hemisphere sees things in terms of parts; it is a useful and necessary tool, but it has many dangerous characteristics.
Like the left hemisphere, AI has no empathy. AI has no reason to have empathy; it’s not DNA-based, doesn’t need water, doesn’t need anything, basically, that humans or other life needs.
AI sees everything much the way the left hemisphere does: as dead, lifeless matter to be manipulated.
General discussion followed…
You are invited to join us by adding your email address to the BCC List used to generate the monthly invitation to dinner.
Author: Karen
Researched & Written: November, 2025
Published: 11/18/25
Copyright © 2025, FPP. All rights reserved.