Researchers at Tufts University have developed an artificial intelligence approach that significantly reduces energy consumption while improving performance in robotics tasks, potentially offering a path toward more sustainable AI systems as the technology’s power demands continue growing.

The research addresses mounting concerns about AI’s environmental impact. According to the International Energy Agency, global data center electricity consumption—including AI systems—reached approximately 460 terawatt hours in 2022, representing about 2% of total electricity demand. The agency projects this could more than double by 2026 as AI adoption accelerates.

The breakthrough comes from the laboratory of Matthias Scheutz, a professor in Tufts’ Department of Computer Science. His team developed what they call a neuro-symbolic AI system that combines traditional neural networks with symbolic reasoning—an approach that mirrors human problem-solving by breaking tasks into logical steps and categories.

Unlike consumer-facing AI systems such as ChatGPT, the Tufts research focuses on robotics applications. The team worked with vision-language-action (VLA) models, which extend large language model capabilities by incorporating vision and physical movement. These systems process visual data from cameras alongside language instructions, then translate that information into real-world robotic actions.
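The data flow described above can be sketched in a few lines of Python. This is purely illustrative of the VLA interface (observation in, motor command out); the types, names, and toy decision logic are assumptions for the example, not the Tufts system.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    image: list        # camera frame (stand-in for pixel data)
    instruction: str   # natural-language command

def vla_policy(obs: Observation) -> str:
    """Illustrative VLA interface: perception plus language in, action out."""
    # A real VLA model would jointly encode the image and instruction with a
    # learned network and decode a motor action; here we only show the shape
    # of the mapping.
    if "pick" in obs.instruction.lower():
        return "close_gripper"
    return "hold"

print(vla_policy(Observation(image=[], instruction="Pick up the red block")))
```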

The researchers tested their approach using the Tower of Hanoi puzzle, a classic problem-solving task that requires strategic planning. In their experiments, the neuro-symbolic system achieved a 95% success rate compared to 34% for conventional approaches. When tested on more complex variations of the puzzle that the system hadn’t previously encountered, it maintained a 78% success rate while traditional models consistently failed.
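The Tower of Hanoi is a standard planning benchmark because its optimal solution is known exactly: moving n disks takes 2^n − 1 moves. A minimal recursive solver in Python (a textbook reference solution, not the learned system under test):

```python
def hanoi(n, source, target, spare):
    """Return the optimal move sequence for n disks (2**n - 1 moves)."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

moves = hanoi(3, "A", "C", "B")
print(len(moves))  # 7 moves, i.e. 2**3 - 1
```

Success on "more complex variations" in the study means generalizing beyond configurations like this one seen during training.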

The energy efficiency gains proved equally significant. The neuro-symbolic model consumed only 1% of the training energy of standard VLA systems and just 5% of their operational energy. Training time also dropped dramatically: the new system learned tasks in 34 minutes versus more than 36 hours for traditional models.

“Like an LLM, VLA models act on statistical results from large training sets of similar scenarios, but that can lead to errors,” Scheutz explained. “A neuro-symbolic VLA can apply rules that limit the amount of trial and error during learning and get to a solution much faster.”

The professor emphasized the efficiency problems inherent in current AI systems, which primarily work by predicting the next word or action in a sequence. “Their energy expense is often disproportionate to the task,” he said, noting that AI-powered search summaries can consume substantially more energy than generating traditional search results.

The neuro-symbolic approach represents a fusion of two distinct AI methodologies. Traditional neural networks excel at pattern recognition by learning from vast datasets, but this process can be energy-intensive and prone to errors or “hallucinations.” Symbolic reasoning, by contrast, uses predefined rules and abstract concepts such as shape and balance, allowing systems to plan more methodically and avoid unnecessary trial-and-error processes.
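One common way to realize this hybrid, sketched below in Python, is to let symbolic rules define the legal action set and let the learned network only rank within it. All names here are hypothetical illustrations of the general technique, not the Tufts implementation; the rule encoded is the Tower of Hanoi constraint that a disk may only rest on a larger disk or an empty peg.

```python
def legal_moves(pegs):
    """Symbolic rule: a disk may only go onto a larger disk or an empty peg."""
    moves = []
    for src, stack in pegs.items():
        if not stack:
            continue
        disk = stack[-1]  # top disk on the source peg
        for dst, other in pegs.items():
            if dst != src and (not other or other[-1] > disk):
                moves.append((src, dst))
    return moves

def choose_action(neural_scores, pegs):
    """Neuro-symbolic selection: rank by learned score, but only legal moves."""
    allowed = legal_moves(pegs)
    # The symbolic filter prunes rule-violating actions before the learned
    # policy's scores are consulted, cutting out wasted trial and error.
    return max(allowed, key=lambda m: neural_scores.get(m, 0.0))

pegs = {"A": [3, 2, 1], "B": [], "C": []}
scores = {("A", "B"): 0.9, ("A", "C"): 0.4}  # stand-in for a trained model
print(choose_action(scores, pegs))  # highest-scoring legal move: ('A', 'B')
```

The design point is that the network never needs to learn the rules from data; they are enforced by construction, which is one plausible source of the reported training-time and energy savings.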

In the United States, data centers consumed an estimated 130-150 TWh of electricity in 2022, according to Lawrence Berkeley National Laboratory—roughly 1-1.3% of total U.S. electricity consumption. However, this figure is projected to grow substantially as AI workloads expand and more companies deploy energy-intensive machine learning systems.

The sustainability concerns extend beyond current consumption levels. Major technology companies are constructing increasingly large data centers, some requiring hundreds of megawatts of electricity—equivalent to the power needs of small cities. This expansion has intensified focus on developing more efficient AI architectures.

The Tufts research suggests that combining neural networks with symbolic reasoning could provide a more sustainable path forward. By reducing the computational overhead required for training and operation, such hybrid approaches might enable AI deployment in resource-constrained environments while reducing infrastructure demands on electrical grids.

The team’s findings indicate that current approaches based solely on large language models and VLA systems may face scalability challenges as energy costs and environmental concerns grow. The dramatic efficiency improvements demonstrated in their robotics experiments could potentially translate to other AI applications, though further research would be needed to validate broader applicability.

Industry observers note that energy efficiency has become a critical factor in AI development, particularly as companies face increasing pressure to meet sustainability goals while expanding AI capabilities. The Tufts approach offers one potential solution, though widespread adoption would likely require additional validation across diverse applications and deployment scenarios.

The research contributes to a growing body of work exploring more efficient AI architectures. As the technology becomes increasingly central to various industries, from autonomous vehicles to manufacturing, developing sustainable approaches to AI computation has become essential for long-term viability.

The timing appears significant as policymakers and industry leaders grapple with balancing AI innovation against environmental considerations and grid capacity constraints.