Eric Schmidt Says AI Is Becoming Dangerously Powerful as He Hawks His Own AI Defense Startup
Whenever a leader in technology comes out publicly warning of the potential dangers of artificial intelligence—or perhaps “superintelligence”—it is important to remember that they are usually also selling the solution. We have already seen this with OpenAI’s Sam Altman pressing Washington on the need for AI safety regulations while simultaneously hawking costly ChatGPT enterprise subscriptions. These leaders are in essence saying, “AI is so powerful that it could be dangerous; just imagine what it could do for your company!”
We have another example of this with Eric Schmidt, the 69-year-old former Google CEO who more recently has been known for dating women less than half his age and lavishing them with money to start their own tech investment funds. Schmidt has been making the rounds on news shows to warn of the potential unforeseen dangers AI poses as it advances to the point where “we’re soon going to be able to have computers running on their own, deciding what they want to do” and “every person is going to have the equivalent of a polymath in their pocket.”
Schmidt made those comments on ABC’s “This Week.” He also appeared on PBS last Friday, where he said the future of warfare will involve more AI-powered drones, with the caveat that humans should remain in the loop and maintain “meaningful” control. Drones have become far more commonplace in the Russia-Ukraine war, where they are used for surveillance and for dropping explosives without humans needing to get close to the front line.
“The correct model, and obviously war is horrific, is to have the people well behind and have the weapons well up front, and have them networked and controlled by AI,” Schmidt said. “The future of war is AI, networked drones of many different kinds.”
Schmidt, conveniently, has been building a company of his own called White Stork, which has provided Ukraine with drones that use AI in “complicated, powerful ways.”
Putting aside the fact that generative artificial intelligence is deeply flawed and almost certainly nowhere close to overtaking humans, Schmidt is perhaps correct in one sense: artificial intelligence does tend to behave in ways its creators do not understand or cannot predict. Social media provides a perfect case study. When algorithms are built to optimize solely for engagement, with no regard for ethics, they encourage anti-social behavior, like promoting extremist viewpoints designed to outrage and grab attention. As companies like Google introduce “agentic” bots that can navigate a web browser on their own, there is potential for them to behave in unethical or otherwise harmful ways.
But Schmidt is talking his book in these interviews. In his ABC interview, he says that once AI systems begin to “self-improve,” it may be worth considering pulling the plug, adding, “In theory, we better have somebody with the hand on the plug.” Schmidt has spent a lot of money investing in AI startups while simultaneously lobbying Washington on AI laws. He certainly hopes the companies he has invested in will be the ones holding the plug.