27 April 2026
Accelerating innovation with smarter AI hardware
Can specialized chips speed up your AI results, save costs and help the environment?
Specialized artificial intelligence (AI) hardware – particularly large-scale, power-efficient solutions – can redefine innovation. Let’s look at a transformative example in the life sciences industry.
To accelerate drug discovery, global pharmaceutical giant GlaxoSmithKline (GSK) used a wafer-scale engine from Cerebras to compress a machine learning training schedule from weeks to hours. The 900,000 processing cores on a single chip enabled the GSK team to accelerate complex molecular modeling algorithms, paving the way for faster insights into potential vaccines and novel therapeutics. (Molecular modeling uses theoretical and computational techniques to represent and predict the behavior of atoms and molecules.)
Next, Cerebras collaborated with several US National Laboratories to best its own record, modeling molecular dynamics at 1.1 million simulation steps per second. As a result, scientists performed two years' worth of graphics processing unit (GPU)-based simulations in a single day – 748 times faster than the world's leading supercomputer, Frontier, and 20% faster than Anton, a supercomputer designed specifically for molecular dynamics. Plus, the wafer-scale run consumed just 7% of the power that the GPU-based computations required.
Such startling improvements are not isolated. They point to a broader paradigm shift in AI hardware. While GPUs have dominated machine learning for the last decade, mounting concerns about energy consumption, rising cloud costs and data privacy are spurring the adoption of next-generation chips designed for specific AI workloads. These chips promise far greater performance per watt than general-purpose hardware.
Cloud-based GPU clusters – once the default for many AI applications – are becoming prohibitively expensive for large, continuous workloads. Moreover, concerns around data sovereignty and environmental footprints are leading to a renewed interest in on-premises and specialized hardware solutions.
Unlike traditional processors – GPUs and central processing units (CPUs) – which store and process information as discrete bits (1s and 0s), analog processors store and manipulate signals as continuously variable electrical properties, such as voltage, resistance or current. For certain workloads, this allows extremely fast, energy-efficient processing, and it underpins both neuromorphic and in-memory computing.
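To make this concrete, here is a minimal Python sketch of how an analog, in-memory crossbar array could perform a neural network's core operation – a matrix-vector multiply – in a single step. The array shape, values and noise level are illustrative assumptions, not specifications of any real chip.

```python
import numpy as np

def analog_matvec(conductances, voltages, noise_std=0.02, rng=None):
    """Simulate a matrix-vector multiply on an analog crossbar array.

    conductances -- weight matrix stored physically as device conductances
    voltages     -- input vector applied as row voltages
    Ohm's law (I = V * G) and Kirchhoff's current law (currents on a
    column add up) perform the entire multiply in place, where the
    weights already live. Real analog devices are noisy, so Gaussian
    read noise is added to each output current.
    """
    rng = rng or np.random.default_rng(0)
    ideal_currents = voltages @ conductances          # summed per column
    noise = rng.normal(0.0, noise_std, ideal_currents.shape)
    return ideal_currents * (1.0 + noise)

# Hypothetical 4x3 crossbar holding a small layer's weights.
G = np.array([[0.8, 0.1, 0.3],
              [0.2, 0.9, 0.5],
              [0.4, 0.3, 0.7],
              [0.6, 0.2, 0.1]])
V = np.array([1.0, 0.5, 0.25, 0.0])   # input activations as voltages

print("analog result :", analog_matvec(G, V))
print("digital result:", V @ G)       # exact, bit-based reference
```

The digital reference computes the same product exactly; the analog version trades a small, device-dependent error for the ability to do the whole multiply inside memory, without shuttling data between a processor and separate memory banks.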
Within this context, a new generation of AI processors is entering the market, including:
- Analog and neuromorphic chips promise drastically lower energy consumption and, for some workloads, higher performance than traditional CPU/GPU stacks. They are best suited to specialized tasks rather than the general-purpose roles CPUs and GPUs fill (a minimal spiking-neuron sketch follows this list).
- Quantum processors are still at a relatively early stage but promise exponential speedups for certain classes of problems. They may eventually complement or even converge with neuromorphic and in-memory architectures to form a broader ecosystem of specialized computing.
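To illustrate the neuromorphic side of that list, the sketch below implements a single leaky integrate-and-fire neuron, the basic unit of most spiking chips. The point is the computing model: work happens only when spikes occur, which is where the energy savings on sparse, event-driven data come from. All constants are illustrative, not drawn from any particular chip.

```python
def lif_neuron(input_currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron over a sequence of inputs.

    The membrane potential accumulates input and leaks toward zero
    each step; when it crosses the threshold, the neuron emits a
    spike (1) and resets. Between spikes the neuron is effectively
    idle, which is why spiking hardware is cheap on sparse signals.
    """
    potential, spikes = 0.0, []
    for current in input_currents:
        potential = leak * potential + current   # integrate with leak
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

# A mostly quiet signal with one brief burst: the neuron fires only
# during the burst and stays silent (and cheap) the rest of the time.
inputs = [0.1, 0.0, 0.1, 0.6, 0.7, 0.8, 0.1, 0.0, 0.0, 0.1]
print(lif_neuron(inputs))   # [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
```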
For organizations willing to adopt a portfolio approach – matching specific AI tasks to specialized hardware – the potential rewards include reduced operating costs, stronger environmental, social and governance (ESG) credentials, and a hardware-based competitive moat that can be hard for rivals to replicate.
Why is the shift happening?
AI systems in their current form and at their current growth rate are not sustainable for multiple reasons.
- Energy intensity. By 2028, data center electricity use is projected to rise two to three times, from its current 4.4% of total US electricity consumption to as much as 12%, according to a US Department of Energy report. At the same time, tech giants like Microsoft, Google and Amazon are keen to meet their carbon-neutrality pledges, underscoring the need for more energy-efficient AI.
- Transparency and explainability. AI can raise black-box concerns because many of its calculations are buried in the layers of deep learning models. Newer analog and in-memory architectures can provide better visibility into signal pathways, potentially easing compliance and regulatory hurdles that demand more robust transparency.
- Cloud. While the cloud offers elasticity, costs for on-demand capacity can fluctuate significantly, and long-term costs for running large-scale AI can be staggering (see the back-of-the-envelope comparison after this list). Cloud deployments also raise questions of data custody. In regulated sectors such as finance or health care, controlling the entire hardware stack provides additional security assurances and helps address specific compliance requirements.
- Application-specific integrated circuits (ASICs) and quantum computing. Google’s Tensor Processing Units (TPUs), for instance, remain a leading example of how a purpose-built chip can outperform more general-purpose hardware. The wave of emerging quantum, analog, neuromorphic and in-memory chips promises to continue the push toward more energy- and cost-efficient computation.
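To see why long-running cloud bills drive this rethink, here is a back-of-the-envelope sketch comparing continuous cloud GPU rental with an amortized on-premises purchase. Every figure is a hypothetical placeholder, not a quote from any provider.

```python
# All prices below are illustrative assumptions, not vendor quotes.
CLOUD_RATE_PER_GPU_HOUR = 2.50     # assumed on-demand rate ($/GPU-hour)
N_GPUS = 8                         # size of the training cluster
ONPREM_HARDWARE_COST = 250_000.0   # assumed purchase price ($)
ONPREM_POWER_KW = 6.0              # assumed draw of the full system (kW)
POWER_PRICE_PER_KWH = 0.12         # assumed electricity price ($/kWh)
HOURS_PER_MONTH = 730

cloud_monthly = CLOUD_RATE_PER_GPU_HOUR * N_GPUS * HOURS_PER_MONTH
power_monthly = ONPREM_POWER_KW * HOURS_PER_MONTH * POWER_PRICE_PER_KWH

# Months of continuous use before cumulative cloud spend exceeds the
# purchase price plus the electricity consumed in the meantime.
breakeven_months = ONPREM_HARDWARE_COST / (cloud_monthly - power_monthly)

print(f"cloud rental : ${cloud_monthly:,.0f}/month")     # $14,600
print(f"on-prem power: ${power_monthly:,.0f}/month")     # $526
print(f"break-even   : ~{breakeven_months:.0f} months")  # ~18 months
```

Under these assumptions, a continuously busy cluster pays for itself in under two years; bursty or idle workloads shift the math back toward the cloud, which is exactly why a portfolio approach matters.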
What are the challenges for implementation?
Although the benefits of these new AI chips are significant, they come with certain challenges, starting with ecosystem maturity. Many analog and neuromorphic chips are in early commercialization stages or still in research labs, making supply chains and product roadmaps unpredictable, and the software needed to integrate them is often not readily available.
Quantum suffers from an even less mature ecosystem. Early adopters face the challenge of developing specialized algorithms, which entails coping with qubit error rates and navigating immature software environments. Organizations that adopt too early risk dealing with unforeseen hardware bugs and minimal developer or ecosystem support.
A second hurdle is the software toolchain and skills gap. Most AI practitioners are accustomed to GPU-based development in frameworks like TensorFlow or PyTorch. In contrast, specialized architectures often require custom compilers, dataflow graph optimizations or even a rethinking of fundamental aspects of model design. Overcoming this learning curve often requires strategic hires or meaningful training initiatives, so executive teams should treat the transition as a strategic investment in capacity-building for next-generation AI development.
Beyond these concerns, there is the regulatory and compliance landscape. Analog, neuromorphic or wafer-scale chips must still meet relevant certifications for data handling, encryption or reliability. Highly regulated sectors such as health care, financial services or insurance may need additional due diligence before deploying unfamiliar hardware architectures.
Crafting a forward-looking strategy
A coherent plan for next-generation AI hardware should align with corporate goals and the evolving competitive environment. Rather than opting for a single solution, prudent organizations will adopt a portfolio approach, selecting different architectures for various workloads.
For example, low-power ReRAM solutions could excel at on-device analytics, wafer-scale engines like Cerebras might handle large-scale training in a centralized data center, and a BrainChip solution might power IoT decisions at the edge. A robust hardware roadmap may also incorporate partnerships with quantum service providers for those seeking breakthrough improvements in optimization, cryptography and complex simulation.
Partnerships with chip manufacturers can be another strategic lever. Early adopters can often influence product roadmaps, secure customizations that align with specific AI tasks and negotiate preferential pricing. Such collaborations not only accelerate internal AI innovation but also ensure that hardware providers address key pain points in integration, toolchain support and performance optimization.
Boards, too, should encourage a culture of experimentation, ensuring that infrastructure teams explore specialized hardware for targeted workloads. By phasing deployments through proof-of-concept (PoC) and pilot projects, organizations can mitigate risk while still moving more quickly than competitors that remain reliant on older, less efficient technologies.
The Jevons paradox is worth considering here, too: When a technology becomes more efficient, people often end up using it even more. As AI hardware improves and consumes less energy per task, overall demand for AI workloads may increase, potentially offsetting efficiency and sustainability gains. For example, if chips become twice as energy-efficient but AI usage triples, total energy consumption still rises by 50%.
Continuous reevaluation is vital. The AI hardware landscape is evolving rapidly, with breakthroughs emerging from both large established companies and nimble startups. Regularly updating the hardware roadmap – through strategic technology assessments and input from cross-functional AI governance teams – can prevent the organization from being locked into outdated or suboptimal solutions.
Recommended reading
Explainer page
AI governance: What it is and why it matters
Explainer page
AI Agents: What they are and why they matter
Explainer page
Cloud Computing: What it is and why it matters
