Michael Hiltzik: A leading roboticist punctures the hype about self-driving cars, AI chatbots and humanoid robots
It may have come to your attention that we are inundated with technological hype. Self-driving cars, human-like robots and AI chatbots have all been the subjects of sometimes outlandishly exaggerated predictions and promises.
So we should be thankful for Rodney Brooks, an Australian-born technologist who has made it one of his missions in life to deflate the hyperbole about these and other supposedly world-changing technologies offered by promoters, marketers and true believers.
As I've written before, Brooks is nothing like a Luddite. Quite the contrary: He was a co-founder of iRobot, the maker of the Roomba robotic vacuum cleaner, though he stepped down as the company's chief technology officer in 2008 and left its board in 2011. He's a co-founder and chief technology officer of Robust.AI, which makes robots for factories and warehouses, and the former director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.
In 2018, Brooks published a post of dated predictions about the course of major technologies and promised to revisit them annually for 32 years, by which point he would be 95. He focused on technologies that were then — and still are — the cynosures of public discussion, including self-driving cars, human space travel, AI bots and humanoid robots.
"Having ideas is easy," he wrote in that introductory post. "Turning them into reality is hard. Turning them into being deployed at scale is even harder."
Brooks slotted his predictions into three pigeonholes: NIML, for "not in my lifetime"; NET, for "no earlier than" some specified date; and "by" some specified date.
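To make the bookkeeping concrete, here is one way a scorecard entry might be represented in code. This is a hypothetical sketch, not Brooks' actual format; the class and field names are invented, and the example entry paraphrases a prediction discussed below.

```python
from dataclasses import dataclass
from enum import Enum

# Brooks' three prediction pigeonholes, as described above.
class Category(Enum):
    NIML = "not in my lifetime"
    NET = "no earlier than"
    BY = "by"

# A hypothetical record for one scorecard entry (field names invented).
@dataclass
class Prediction:
    claim: str
    category: Category
    year: int | None = None  # target year for NET and BY; None for NIML

entry = Prediction(
    claim="robot that provides physical assistance to the elderly",
    category=Category.NET,
    year=2028,
)
print(f"{entry.claim}: {entry.category.value} {entry.year}")
```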
On Jan. 1 he published his eighth annual predictions scorecard. He found that over the years "my predictions held up pretty well, though overall I was a little too optimistic."
For example, in 2018 he predicted "a robot that can provide physical assistance to the elderly over multiple tasks [e.g., getting into and out of bed, washing, using the toilet, etc.]" wouldn't appear earlier than 2028; as of New Year's Day, he writes, "no general purpose solution is in sight."
The first "permanent" human colony on Mars would come no earlier than 2036, he wrote then, which he now calls "way too optimistic." He now envisions a human landing on Mars no earlier than 2040, and the settlement no earlier than 2050.
A robot that seems "as intelligent, as attentive, and as faithful, as a dog" — no earlier than 2048, he conjectured in 2018. "This is so much harder than most people imagine it to be," he writes now. "Many think we are already there; I say we are not at all there." His verdict on a robot that has "any real idea about its own existence, or the existence of humans in the way that a 6-year-old understands humans" — "Not in my lifetime."
Brooks points out that one way high-tech promoters finesse their exaggerated promises is through subtle redefinition. That has been the case with "self-driving cars," he writes. Originally the term referred to "any sort of car that could operate without a driver on board, and without a remote driver offering control inputs ... where no person needed to drive, but simply communicated to the car where it should take them."
Waymo, the largest purveyor of self-driven transport, says on its website that its robotaxis are "the embodiment of fully autonomous technology that is always in control from pickup to destination." Passengers "can sit in the back seat, relax, and enjoy the ride with the Waymo Driver getting them to their destination safely."
Brooks challenges this claim. One hole in the fabric of full autonomy, he observes, became clear Dec. 20, when a power blackout blanketing San Francisco stranded much of Waymo's robotaxi fleet on the streets. The robotaxis, which navigate intersections by reading traffic lights, clogged those intersections when the signals went dark.
The company later acknowledged its vehicles occasionally "require a confirmation check" from humans when they encounter blacked-out traffic signals or other confounding situations. The Dec. 20 blackout, Waymo said, "created a concentrated spike in these requests," resulting in "a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets."
It's also known that Waymo pays humans to deal physically with vehicles immobilized by, for example, a passenger's failure to fully close a car door when exiting. They can be summoned via the third-party app Honk, which is chiefly used by tow truck operators to find stranded customers.
"Current generation Waymos need a lot of human help to operate as they do, from people in the remote operations center to intervene and provide human advice for when something goes wrong, to Honk gig workers scampering around the city," Brooks observes.
Waymo told me its claim of "fully autonomous" operation is based on the fact that the onboard technology is always in control of its vehicles. In confusing situations the car will call on Waymo's "fleet response" team of humans, asking them to choose which of several optional paths is the best one. "Control of the vehicle is always with the Waymo Driver" — that is, the onboard technology, spokesman Mark Lewis told me. "A human cannot tele-operate a Waymo vehicle."
As a pioneering robot designer, Brooks is particularly skeptical about the tech industry's fascination with humanoid robots. He writes from experience: In 1998 he was building humanoid robots with his graduate students at MIT. Back then he asserted that people would be naturally comfortable with "robots with humanoid form that act like humans; the interface is hardwired in our brains," and that "humans and robots can cooperate on tasks in close quarters in ways heretofore imaginable only in science fiction."
Since then it has become clear that general-purpose robots that look and act like humans are chimerical. In fact, in many contexts they're dangerous. Among the unsolved problems in robot design is that no one has created a robot with "human-like dexterity," he writes. Robotics companies promoting their designs haven't shown that their proposed products have "multi-fingered dexterity where humans can and do grasp things that are unseen, and grasp and simultaneously manipulate multiple small objects with one hand."
Two-legged robots have a tendency to fall over and "need human intervention to get back up," like tortoises fallen on their backs. Because they're heavy and unstable, they are "currently unsafe for humans to be close to when they are walking."
(Brooks doesn't mention this, but even in the 1960s the creators of "The Jetsons" understood that domestic robots wouldn't rely on legs: their robot maid, Rosie, tooled around their household on wheels, an insight that came as second nature to animators 60 years ago but seems to have been forgotten by today's engineers.)
As Brooks observes, "even children aged 3 or 4 can navigate around cluttered houses without damaging them. ... By age 4 they can open doors with door handles and mechanisms they have never seen before, and safely close those doors behind them. They can do this when they enter a particular house for the first time. They can wander around and up and down and find their way.
"But wait, you say, 'I've seen them dance and somersault, and even bounce off walls.' Yes, you have seen humanoid robot theater. "
Brooks' experience with artificial intelligence gives him important insights into the shortcomings of today's crop of large language models, the technology underlying contemporary chatbots: what they can and can't do, and why.
"The underlying mechanism for Large Language Models does not answer questions directly," he writes. "Instead, it gives something that sounds like an answer to the question. That is very different from saying something that is accurate. What they have learned is not facts about the world but instead a probability distribution of what word is most likely to come next given the question and the words so far produced in response. Thus the results of using them, uncaged, is lots and lots of confabulations that sound like real things, whether they are or not."
The solution, Brooks argues, is not to "train" LLM bots on more and more data in the hope that they will eventually absorb enough to make their fabrications unnecessary. The better option is to purpose-build LLMs to fulfill specific needs in specific fields: bots specialized for software coding, for instance, or for hardware design.
"We need guardrails around LLMs to make them useful, and that is where there will be lot of action over the next 10 years," he writes. "They cannot be simply released into the wild as they come straight from training. ... More training doesn't make things better necessarily. Boxing things in does."
Brooks' all-encompassing theme is that we tend to overestimate what new technologies can do and underestimate how long it takes for any new technology to scale up to usefulness. The hardest problems are almost always the last ones to be solved; people tend to think that new technologies will continue to develop at the speed that they did in their earliest stages.
That's why the march to full self-driving cars has stalled. It's one thing to equip cars with lane-change warnings or cruise control that adjusts to the presence of a slower car in front; it's quite another to reach Level 5 autonomy as defined by the Society of Automotive Engineers, in which the vehicle can drive itself in all conditions without a human ever required to take the wheel. That milestone may be decades away at least; no Level 5 vehicles are in general use today.
Believing the claims of technology promoters that one or another nirvana is just around the corner is a mug's game. "It always takes longer than you think," Brooks wrote in his original prediction post. "It just does."
____
©2026 Los Angeles Times. Visit at latimes.com. Distributed by Tribune Content Agency, LLC.