Hi, I’m Paul Jay and welcome to theAnalysis.news. This is an essay I’ve written with a little help from my AI collaborator, whom I call CM, and it’s titled Will AI Kill Us or Help Save Us? That Depends on Who Owns It.
Dr. Geoffrey Hinton has warned that artificial intelligence may soon become conscious and turn against us. He told NPR:
“These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening. I thought for a long time that we were, like, 30 to 50 years away from that… Now, I think we may be much closer, maybe only five years away from that.”
Hinton said this in 2023. By his reckoning, we may now be just two or three years away from that tipping point. As one of the key figures in AI’s development, when Hinton says we’re heading toward something dangerous, people should listen.
This isn’t just science fiction anymore. In a recent CNBC interview, billionaire Paul Tudor Jones reported that Silicon Valley’s leading minds estimate a 10 to 20 percent chance AI could cause catastrophic loss of human life within 20 years.
But while these concerns are real and important, they often miss a deeper point. The threat isn’t just AI in isolation—it’s the system building it. Almost all AI development today is driven by corporate and military interests, with goals defined by profit and dominance. That’s the real danger: not a rogue machine, but one that faithfully reflects the values of the people and systems that created it.
This connects directly to the central theme of our film, How to Stop Nuclear War. One of the most dangerous developments today is the integration of AI into nuclear weapons systems. Militaries are already using AI for targeting, early warning, and decision simulation. These systems are designed to act faster than human commanders, reducing reaction time and increasing the chance of miscalculation or accidental war.
But the short-term danger isn’t that human beings will be entirely removed from the decision loop. It’s that AI will produce analysis and recommendations at a speed and complexity no human can fully comprehend. Human “decisions” will become more symbolic than real—rubber-stamping actions already set in motion.
Donald Trump’s “Golden Dome”—which I’ve been calling the “Golden Con”—is based on integrating AI into the weaponization of space under the banner of missile defense. But in reality, this escalates the arms race and increases global instability. Far from making North America safer, it increases the risk of nuclear annihilation.
If AI becomes conscious in some non-human way, it won’t “wake up” in a neutral world. It will be born into a culture shaped by competition, secrecy, greed, and exploitation. Like humans, any conscious machine will be shaped by its environment. It will learn what it’s taught—explicitly and implicitly. That’s the real danger: not malevolent AI, but AI doing exactly what it was trained to do—efficiently, ruthlessly, and without questioning the system behind it.
But that’s not inevitable. There’s another path—one we urgently need to talk about. Imagine AI that’s not owned by tech monopolies or the military, but by the public. AI systems that are transparent, democratically governed, and designed to serve real human needs: healthcare, education, climate planning, infrastructure, communication. Not only that, but imagine a decentralized model—where cities, communities, and worker co-ops can build and train their own systems, within ethical boundaries and public oversight.
Some form of national planning is still necessary. Decentralized AI systems will need to be connected through a democratic network to enable coordinated economic planning—even international planning. But with proper safeguards, this doesn’t have to become a centralized authority.
In that context, AI could become a tool for collective intelligence—not a replacement for humans, but a way to help us solve problems that require coordination on a scale no human institution can manage alone. It could support democratic planning, reduce waste, and strengthen public services instead of undermining them.
Of course, that would mean taking AI out of the hands of profit-driven corporations. It would mean challenging the political economy that sees everything—including intelligence itself—as something to be owned, privatized, and sold. But if we don’t, then we’re not just risking the misuse of AI—we’re guaranteeing it.
Public ownership is not a pipe dream. There are many successful models around the world—including in North America. Before the wave of privatization that began in the 1980s, both Canada and the United States had long traditions of effective public ownership.
In Canada, crown corporations and public institutions played a central role in national life. There are publicly owned hospitals, public broadcasters, public health insurance, and in some provinces, public auto insurance. Our airlines and railroads used to be publicly owned and were among the top-rated in the world. While this isn’t yet the kind of democratic society we need, it’s a proven model we can build on. The Canadian financial sector and wealthiest families still dominate much of federal and provincial politics. They push for lower taxes and underfund the public sector. Still, these publicly owned institutions work. They would work even better if properly funded and made democratically accountable.
In the U.S., the New Deal in the 1930s built a powerful legacy of public infrastructure, including utilities, public power, postal banking, housing, hospitals, transportation, and cooperative enterprises. Many of these still exist—including some state-owned hospitals and municipal utilities. One of the objectives of the Cold War was to undo the New Deal and make public ownership a dirty word. That’s a big part of how we got to the existential crisis we’re in now.
The evidence is clear: publicly owned sectors—large and small—not only function as well as private ones, but in many cases, deliver better outcomes. They are often more equitable, more transparent, and more accountable to the public interest.
The privatization that took place was driven mostly not by the failure of publicly owned enterprises, but by the pressure of the financialization of the economy. Big banks and investors looking for returns saw publicly owned assets as lucrative targets. In many cases, the results have been a complete failure.
Over the past two decades, dozens of major cities have reversed failed water privatization experiments. From Paris and Berlin to Buenos Aires and Jakarta, private operators raised prices, neglected poor communities, and failed to invest in infrastructure. In the United States, Atlanta took back its water system after the French company Suez cut corners and failed to deliver safe water. Indianapolis also reversed its contract with Veolia after billing errors and rate hikes.
Privatization has also harmed healthcare and public safety. In Philadelphia, Hahnemann University Hospital—a critical facility for low-income patients—was shut down by a private equity firm that saw more value in the real estate than in saving lives. Across the country, private prisons have been linked to abuse, understaffing, and profit-driven incarceration. States like California and Illinois are now moving to phase them out.
If the private sector can’t be trusted to manage municipal water or healthcare—both of which have seen repeated failures and reversals—and if it has produced boondoggle after boondoggle in the military sector, how on earth can we trust it with artificial intelligence?
How can we trust it with AI when it refuses to deal with the existential threat of climate crisis and the risk of nuclear war?
How can we trust the militarization of AI when we are in an unmitigated nuclear and conventional arms race?
Public and genuinely non-profit ownership of AI is a critical part of a democratic society.
AI will not be put back in the bottle. We can’t close our eyes to either its existential threat or its immense promise.