Artificial Intelligence, Energy, Electricity, Singularity and John von Neumann

When Meta CEO Mark Zuckerberg talked about how energy, not compute, could become the ultimate bottleneck for AI progress, it sounded like a technical observation. But the deeper you go, the more you realize: this is a warning that could reshape the entire future of technology, infrastructure, and even geopolitics.

Today’s AI models are growing bigger, smarter, and more capable. OpenAI’s GPT-4 Turbo, Anthropic’s Claude, and Google’s Gemini all demand staggering amounts of computational power. But chips alone won’t determine AI’s future. A single training run of a large model already consumes as much energy as hundreds of American homes use in a year. As AI systems grow more complex, their hunger for power grows even faster than their need for better chips.
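To get a feel for that scale, here is a minimal back-of-envelope sketch in Python. The GPU count, per-GPU power draw, run duration, and datacenter overhead (PUE) are illustrative assumptions, not published figures for any particular model; the household figure is the rough U.S. average of about 10,700 kWh per year.

```python
# Back-of-envelope energy estimate for one large training run.
# All hardware inputs below are illustrative assumptions, not
# published figures for any real model.

NUM_GPUS = 5_000            # assumed accelerator count for the run
GPU_POWER_KW = 0.7          # assumed average draw per GPU, in kilowatts
TRAINING_DAYS = 60          # assumed wall-clock duration of the run
PUE = 1.2                   # assumed power usage effectiveness (cooling etc.)

US_HOME_KWH_PER_YEAR = 10_700   # rough average annual U.S. household use

hours = TRAINING_DAYS * 24
energy_kwh = NUM_GPUS * GPU_POWER_KW * hours * PUE
homes = energy_kwh / US_HOME_KWH_PER_YEAR

print(f"Training energy: {energy_kwh / 1e6:.1f} GWh")          # ~6.0 GWh
print(f"Roughly {homes:,.0f} U.S. homes' annual consumption")  # ~565 homes
```

Under these assumptions a single run lands in the hundreds-of-homes range; push the GPU count or duration up and it climbs into the thousands.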

Microsoft expects its energy consumption to double by 2026, largely driven by the growth of AI. OpenAI is reportedly exploring nuclear power to sustain its long-term ambitions. The AI race is quickly becoming an energy race.

Historically, technological revolutions were bottlenecked by materials (e.g., silicon in early computing) or cost. Today, it may be raw energy itself that limits innovation. And this energy isn’t easily scalable: grids are aging, renewable deployment is uneven, and geopolitical conflicts threaten supply chains for both fossil fuels and critical green technologies.

The situation gets even more interesting when viewed through the lens of thinkers like Jeff Hawkins, the neuroscientist and entrepreneur behind Numenta. Hawkins argues that human intelligence is astonishingly energy-efficient because the brain relies on sparse, context-driven predictions from memory instead of brute-force computation. Your brain, running on about 20 watts (less than a typical light bulb), easily outperforms today’s most advanced AI systems in general reasoning.

This efficiency gap highlights a profound technological misalignment. Current AI systems are powerful but extremely energy-inefficient: they brute-force billions of calculations instead of relying on sparse, predictive processing the way biological brains do. Without a radical shift, energy costs will keep climbing with model scale, creating not only technical bottlenecks but environmental and economic ones as well.
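A toy sketch of why sparsity matters, using multiply-accumulate (MAC) counts as a crude proxy for energy: a layer that activates only a small fraction of its inputs does proportionally less work. The layer sizes and the 10% activity level below are illustrative assumptions, not parameters of any real model.

```python
import numpy as np

# Dense vs. sparse (top-k) computation in a single layer, with the
# multiply-accumulate (MAC) count as a crude stand-in for energy.
# Sizes and the 10% activity level are illustrative assumptions.

rng = np.random.default_rng(0)
n_in, n_out = 4096, 4096
W = rng.standard_normal((n_out, n_in))
x = rng.standard_normal(n_in)

# Dense: every input contributes to every output.
dense_y = W @ x
dense_macs = n_in * n_out

# Sparse: keep only the top ~10% of inputs by magnitude, skip the rest.
k = n_in // 10
active = np.argsort(np.abs(x))[-k:]      # indices of the k strongest inputs
sparse_y = W[:, active] @ x[active]      # compute using active inputs only
sparse_macs = k * n_out

print(f"Dense MACs:  {dense_macs:,}")
print(f"Sparse MACs: {sparse_macs:,} "
      f"(~{sparse_macs / dense_macs:.0%} of the dense cost)")
# For scale: the human brain runs on ~20 W, while a single modern AI
# accelerator can draw several hundred watts by itself.
```

Real sparse accelerators are far subtler than this, but the proportionality is the point: compute you never perform is energy you never spend.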

The broader idea of the technological singularity, the moment when machine intelligence surpasses human intelligence and growth becomes uncontrollable, assumes that compute can scale without limit. But if energy becomes the hard limit, the singularity itself might be delayed, diverted, or achieved in ways we don’t expect.

The concept of the singularity dates back to the mid-20th century, with its roots in the ideas of mathematician and computer scientist John von Neumann. As his colleague Stanislaw Ulam recounted, von Neumann observed that ever-accelerating technological progress seemed to be approaching “some essential singularity” beyond which human affairs, as we know them, could not continue. The notion was later popularized by science fiction writer Vernor Vinge, whose 1993 essay envisioned a moment when artificial intelligence would evolve beyond human comprehension, reshaping civilization. Over the years, figures like Ray Kurzweil, a prominent futurist, have developed the idea further, suggesting that the singularity could occur as soon as 2045.

Leading figures like Yann LeCun and Geoffrey Hinton have also voiced concerns recently. While Hinton has warned about AI’s potential dangers if unchecked, LeCun stresses that true intelligence will not come from scaling up today’s transformer models but will require fundamentally new, energy-efficient designs.

This conversation is gaining momentum among serious researchers. Scientists like Jürgen Schmidhuber, often called the “father of modern AI,” have long emphasized the need for self-improving, low-energy neural networks that operate closer to biological principles. His work on Gödel Machines envisions AI systems that could recursively rewrite their own code for greater efficiency and intelligence, without exponential compute growth.

Meanwhile, neuromorphic computing, a field pioneered by researchers at institutions like Stanford’s Brains in Silicon Lab and Intel’s Loihi project, aims to build chips that mimic the sparse, spiking behavior of neurons. These chips use event-driven architectures, consuming power only when something actually happens, a radical departure from today’s AI accelerators, which churn through data continuously.
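To illustrate the event-driven idea, here is a minimal leaky integrate-and-fire (LIF) neuron in Python. It is a pedagogical sketch, not the programming model of Loihi or any real neuromorphic chip, and the leak, threshold, and input pattern are arbitrary choices.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a pedagogical sketch
# of event-driven computation, not the programming model of Loihi or
# any real chip. Leak, threshold, and inputs are arbitrary choices.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Integrate input current over time; emit spike events on crossings."""
    v = 0.0                          # membrane potential
    spikes = []
    for t, current in enumerate(inputs):
        v = leak * v + current       # leaky integration of input
        if v >= threshold:           # threshold crossing -> spike event
            spikes.append(t)
            v = 0.0                  # reset after firing
    return spikes

# A mostly-silent input stream: downstream work is triggered only at
# spike events, which is where event-driven hardware saves energy.
inputs = [0.0] * 20
inputs[3] = inputs[4] = inputs[11] = 0.8
print("Spike events at timesteps:", lif_neuron(inputs))  # -> [4]
```

In the quiet stretches the neuron does essentially nothing, which is exactly the behavior neuromorphic hardware exploits: no events, no switching, almost no power.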

The Human Brain Project in Europe, though controversial in its ambitions, also highlighted a striking possibility: simulating the brain’s power-efficient architecture at full scale might require less energy than training a single large transformer model does today. Yet the gap between brain-like architectures and current AI models remains vast.

Another underappreciated angle is the work on predictive coding models, where energy efficiency comes from predicting sensory inputs and processing only the “surprise” signals. Researchers like Karl Friston, best known for the free-energy principle, have argued that much of cognition could be explained, and perhaps recreated, through energy-minimizing predictive systems rather than data-maximizing transformers.
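Here is a toy version of that idea: a simple predictor tracks the incoming signal, and expensive downstream processing fires only when the prediction error, the “surprise,” exceeds a threshold. This is a heavy simplification of predictive coding, with the smoothing factor and threshold chosen arbitrarily for illustration.

```python
# Toy predictive coding loop: expensive processing runs only on
# "surprise" samples whose prediction error exceeds a threshold.
# A heavy simplification; alpha and threshold are arbitrary choices.

def surprise_filter(stream, alpha=0.5, threshold=0.3):
    prediction = 0.0
    processed = 0
    for x in stream:
        error = x - prediction           # prediction error ("surprise")
        if abs(error) > threshold:
            processed += 1               # costly processing happens here
        prediction += alpha * error      # cheap predictor update every step
    return processed

# Slowly varying signal with one abrupt jump: most samples are
# predictable, so only the transition triggers full processing.
stream = [0.1] * 10 + [1.0] * 10 + [1.05] * 10
n = surprise_filter(stream)
print(f"Fully processed {n} of {len(stream)} samples")  # -> 2 of 30
```

The predictor absorbs everything it already expects, so almost nothing reaches the expensive path; that same logic is what makes predictive architectures attractive for low-power AI.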

Even DARPA, the U.S. Defense Advanced Research Projects Agency, has quietly shifted part of its AI research funding toward energy-efficient AI systems through programs run by its Microsystems Technology Office (MTO). The agency recognizes that on future battlefields and in critical infrastructure, energy constraints could make current AI architectures impractical.

From these scattered efforts a clear pattern emerges: AI’s future won’t just be bigger models, but smarter architectures. Smarter not only in how they reason, but in how they use energy at a fundamental level.

The technological singularity might not arrive because we invented superintelligence too fast.
It might stall because we ran out of energy first.

