The rapid rise of artificial intelligence has sparked transformative advances across industries, but it has also introduced a major challenge: soaring energy consumption. As more companies integrate AI technologies, the energy demands of these systems are climbing just as fast. While major players like Nvidia, Microsoft, and OpenAI have downplayed these concerns, one company, BitEnergy AI, believes it has a solution.

Researchers at BitEnergy AI have developed a new algorithm, Linear-Complexity Multiplication (L-Mul), which could reduce AI energy usage by up to 95% without compromising performance. The technique has the potential to reshape the AI landscape by offering a more sustainable approach to AI processing.

A key factor behind AI’s high energy demand is the use of floating-point numbers in computations. These numbers can represent both the enormous and the minuscule values that arise in complex AI tasks such as natural language processing and computer vision, but the arithmetic performed on them, multiplication above all, is expensive in hardware and therefore in energy. With AI adoption accelerating, analysts from the Cambridge Centre for Alternative Finance project that the industry could consume between 85 and 134 terawatt-hours annually by 2027, roughly equivalent to the annual electricity usage of a medium-sized country.
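To see where that cost sits, recall that a floating-point number is stored as a sign, an exponent, and a mantissa; the notation below, with mantissa $x_m$ and exponent $x_e$, is generic rather than taken from BitEnergy AI's paper. Multiplying two such numbers already handles the exponents with a cheap addition; the expensive part is the product of the mantissas:

$$
x \cdot y = (1 + x_m)\,2^{x_e} \cdot (1 + y_m)\,2^{y_e} = \bigl(1 + x_m + y_m + x_m y_m\bigr)\,2^{x_e + y_e}
$$

Approximating away the $x_m y_m$ term is what opens the door to replacing the whole operation with additions.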

The L-Mul algorithm tackles this energy-intensive step by approximating floating-point multiplications with far less demanding integer additions, which draw significantly less power per operation. In the researchers' tests, L-Mul cut the energy cost of tensor multiplications by 95% and of dot products by 80%, two operations that are foundational to AI computations.
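As a rough illustration of that substitution, here is a minimal Python sketch of the idea as described above: the mantissa product in a floating-point multiply is replaced by a small constant offset, so only additions of the decomposed parts remain. The function name l_mul_approx, the mantissa_bits parameter, and the offset rule are assumptions made for this sketch, not BitEnergy AI's implementation, which would operate directly on integer bit patterns rather than on Python floats.

```python
import math


def l_mul_approx(x: float, y: float, mantissa_bits: int = 8) -> float:
    """Approximate x * y using only additions on the decomposed sign,
    exponent, and mantissa parts (an illustrative sketch of the L-Mul idea)."""
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = math.copysign(1.0, x) * math.copysign(1.0, y)

    # Decompose |x| = (1 + xm) * 2**xe with xm in [0, 1), and likewise for y.
    fx, ex = math.frexp(abs(x))          # frexp returns fx in [0.5, 1)
    fy, ey = math.frexp(abs(y))
    xm, xe = 2.0 * fx - 1.0, ex - 1
    ym, ye = 2.0 * fy - 1.0, ey - 1

    # Offset exponent chosen from the mantissa width (assumed rule).
    offset_exp = mantissa_bits if mantissa_bits <= 3 else (3 if mantissa_bits == 4 else 4)

    # The exact mantissa product would be
    #   (1 + xm) * (1 + ym) = 1 + xm + ym + xm * ym.
    # The approximation drops the xm * ym multiplication and adds the
    # constant 2**-offset_exp in its place, so only additions remain.
    approx_mantissa = 1.0 + xm + ym + 2.0 ** (-offset_exp)

    # Recombined with ordinary Python arithmetic for readability; a hardware
    # version would assemble the result's bit pattern directly.
    return sign * approx_mantissa * 2.0 ** (xe + ye)


if __name__ == "__main__":
    for a, b in [(3.7, -1.2), (0.015, 42.0), (1.5, 1.5)]:
        exact = a * b
        approx = l_mul_approx(a, b)
        rel_err = abs(approx - exact) / abs(exact)
        print(f"{a} * {b}: exact={exact:.5f}  approx={approx:.5f}  "
              f"relative error={rel_err:.2%}")
```

Run as a script, the sketch prints each pair's exact product, its approximation, and the relative error; the energy savings the researchers report come from the fact that an integer addition costs far less power in hardware than a floating-point multiplication.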

This development could mark a major step toward sustainable AI, enabling companies to adopt advanced AI technologies without a massive environmental footprint.

By Impact Lab