Mythic’s AI Processors Could Reduce US AI Power Demand 10%

Mythic has secured $125M in funding led by DCVC to address the escalating energy consumption of artificial intelligence through its novel analog processing units (APUs). These APUs promise a 100x improvement in energy efficiency over industry-standard GPUs and competing AI ASICs, representing a fundamental shift in accelerated computing for both data centers and edge applications. By eliminating the physical separation between memory and compute, a limitation of the Von Neumann architecture, Mythic's design aims to drastically reduce wasted energy, currently accounting for 90% of AI power usage, and potentially alleviate a crisis projected to consume one-tenth of US electricity by the end of the decade.

Mythic's APUs Address AI Energy Consumption

Mythic's analog processing units (APUs) aim to address the growing energy consumption of AI, offering a potential 100x improvement in energy efficiency compared to standard GPUs. This efficiency stems from a computer architecture that, unlike the Von Neumann design used in GPUs, does not separate memory and compute. By treating them as one, Mythic's APUs reduce energy waste, currently estimated at 90% in typical AI systems, and deliver 120 trillion operations per second (TOPS) per watt. Mythic APUs excel at matrix multiplications, the core of most AI workloads, performing these operations in analog, mirroring the human brain. A single multiply-accumulate (MAC) operation consumes only 17 femtojoules, a thousand times less energy than today's GPUs require for the same task. This allows for easier scaling to 1T-parameter models, and internal benchmarks demonstrate up to 750x more tokens per second per watt when running these large language models compared to NVIDIA's GPUs.

Cost benefits are also significant. Mythic's chips use mature silicon memory cells and are manufactured in the US and allied countries. Internal benchmarks show up to 80x cost advantages per million tokens compared to the latest GPUs, reaching as low as half a cent per million tokens for 100B LLMs and 4 cents for 1T-parameter models. These advances, combined with a new sensing category called "Starlight," position Mythic as a leader in performance per watt.
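The architecture described above relies on analog compute-in-memory. In the general form of this technique, weights are stored as conductances in the memory array itself, inputs are applied as voltages, and each output current is the physical sum of the per-cell products (Ohm's law for the multiply, Kirchhoff's current law for the accumulate), so a matrix multiplication happens where the weights already live instead of shuttling them to a separate processor. The sketch below is a minimal, hypothetical numpy model of that general idea, not Mythic's actual circuit; the array size, 8-bit weight quantization, and noise level are illustrative assumptions.

```python
import numpy as np

# Conceptual model of analog compute-in-memory matrix-vector multiplication.
# Weights live in the memory array as conductances (G), inputs arrive as
# voltages (v), and each column's output current is sum_i G[i, j] * v[i]
# (Ohm's law per cell, Kirchhoff's current law summing the column).
# Sizes, conductance range, and noise level below are illustrative only.

rng = np.random.default_rng(0)

def analog_matvec(weights, x, g_max=1e-6, noise_sigma=0.01):
    """Approximate y = x @ weights as it would occur in an analog array.

    weights     : (n_in, n_out) trained weight matrix
    x           : (n_in,) input activation vector
    g_max       : full-scale cell conductance in siemens (illustrative value)
    noise_sigma : relative read noise (illustrative value)
    """
    w_scale = np.max(np.abs(weights))
    # Program weights into cell conductances, quantized to 8 bits here.
    g = np.round(weights / w_scale * 127) / 127 * g_max
    # Encode inputs as voltages, 1 V full scale for simplicity.
    v = x / np.max(np.abs(x))
    # Column currents: the multiply-accumulate happens in the array itself.
    i_out = v @ g
    # Analog non-idealities: additive read noise before digitization.
    i_out += noise_sigma * g_max * rng.standard_normal(i_out.shape)
    # Rescale back to the digital domain.
    return i_out / g_max * w_scale * np.max(np.abs(x))

# Compare against an exact digital matrix-vector product.
W = rng.standard_normal((256, 64))
x = rng.standard_normal(256)
y_analog = analog_matvec(W, x)
y_exact = x @ W
print("relative error:", np.linalg.norm(y_analog - y_exact) / np.linalg.norm(y_exact))
```

The comparison at the end is only meant to show that the in-place analog product tracks the exact digital result to within quantization and read noise, which is the trade analog compute-in-memory makes for avoiding weight movement.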
Analog Architecture Rivals Human Brain Efficiency

Mythic's analog processing units (APUs) are designed to rival the energy efficiency of the human brain, offering a significant advantage over traditional GPUs. The company's architecture treats processor and memory as one, mirroring the brain's functionality, and performs matrix multiplications in analog. This results in 120 trillion operations per second (TOPS) per watt, 100 times better than current GPUs, while maintaining accuracy. Mythic aims to address the unsustainable energy consumption of AI computing.

Mythic's APUs achieve this efficiency by consuming only 17 femtojoules per multiply-accumulate (MAC) operation, a core element of AI compute, roughly 1,000x less energy than today's GPUs require for the same operation. Internal benchmarks demonstrate up to 750x more tokens per second per watt when running 1T-parameter large language models (LLMs) on Mythic APUs compared to NVIDIA's highest-end GPUs.

The design also allows for significant cost savings. Benchmarks show up to 80x lower cost per million tokens compared to the latest GPUs, reaching as low as half a cent per million tokens for 100B LLMs and 4 cents per million tokens for 1T-parameter models. The APUs use mature silicon memory cells, with 150 billion units already shipped, and are manufactured in the United States and allied countries.

New Funding and Key Industry Partnerships

Mythic recently secured $125 million in an oversubscribed round led by DCVC, intended to address the growing energy consumption problem of AI. This funding follows a complete rebuild of the company's architecture, roadmap, software, and strategy. Key investors also include NEA, Atreides, Honda Motors, and Lockheed Martin, bolstering Mythic's position in the automotive and defense industries, two trillion-dollar markets undergoing AI transformation. The company aims to become a leader in performance per watt.

Mythic's APUs consume only 17 femtojoules per multiply-accumulate operation, 1,000x less than today's GPUs, and the current APU architecture achieves 120 trillion operations per second (TOPS) per watt, a 100x improvement over existing GPUs. Internal benchmarks demonstrate up to 750x more tokens per second per watt when running 1T-parameter large language models (LLMs) compared to NVIDIA's GPUs, along with a potential cost advantage of up to 80x per million tokens compared to the latest GPUs, reaching as low as half a cent per million tokens for 100B LLMs. The APUs are built from highly mature silicon memory cells, with 150 billion units already shipped, and Mythic's manufacturing strategy focuses on production within the United States and allied countries.

"Their APU platform is a clean-sheet reinvention of the AI infrastructure—one that collapses today's limits on energy and cost."
Aaron Jacobson, Partner at NEA

APU Performance Advantages Over GPUs

Mythic's analog processing units (APUs) offer a significant energy advantage over traditional GPUs, achieving 100x greater energy efficiency. This is due to a fundamentally different architecture in which computation and memory are not separated, mirroring the human brain. Current APUs deliver 120 trillion operations per second (TOPS) per watt, while a single multiply-accumulate operation consumes only 17 femtojoules, 1,000x less energy than today's GPUs require for the same task.
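As a rough consistency check on the paragraph above, the quoted 17 femtojoules per MAC and 120 TOPS per watt line up if one MAC is counted as two operations, a common convention when quoting TOPS that is assumed here rather than stated in the article:

```python
# Rough arithmetic linking the quoted energy-per-MAC and TOPS-per-watt figures.
# Assumption: one multiply-accumulate (MAC) counts as two operations,
# a common convention when quoting TOPS.

energy_per_mac_j = 17e-15          # 17 femtojoules per MAC (quoted figure)
ops_per_mac = 2                    # multiply + accumulate (assumed convention)

ops_per_joule = ops_per_mac / energy_per_mac_j   # operations per joule = per watt-second
tops_per_watt = ops_per_joule / 1e12
print(f"{tops_per_watt:.0f} TOPS per watt")      # ~118, close to the quoted 120

# The quoted 1,000x advantage would put a GPU MAC near 17 picojoules.
gpu_energy_per_mac_j = energy_per_mac_j * 1000
print(f"implied GPU energy per MAC: {gpu_energy_per_mac_j * 1e12:.0f} pJ")
```

The figures agree to within rounding, which is all this check is meant to show.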
This efficiency is crucial as AI workloads are projected to consume one-tenth of US electricity by the end of the decade. Beyond energy savings, Mythic APUs demonstrate substantial cost benefits. Internal benchmarks reveal up to 80x lower cost per million tokens compared to the latest GPUs, reaching as low as half a cent per million tokens for 100B LLMs and 4 cents for 1T-parameter models. This advantage grows as AI models become more complex: APUs scale to 1T-parameter models without the high-speed inter-chip connections that GPUs require. Mythic APUs also outperform GPUs in LLM processing speed, with benchmarks showing up to 750x more tokens per second per watt when running 1T-parameter LLMs compared to NVIDIA's highest-end GPUs. This performance, combined with the use of mature silicon memory cells and domestic manufacturing, positions Mythic as a leader in AI infrastructure, aiming to augment GPUs and help solve the escalating energy crisis.

Manufacturing and Scalability of Mythic Chips

Mythic's analog processing units (APUs) aim to address the growing energy consumption of AI, currently projected to use one-tenth of US electricity by the end of the decade. Unlike traditional GPUs built on the 1945-era Von Neumann architecture, Mythic's chips integrate computation and memory, mirroring the human brain. This design yields 120 trillion operations per second (TOPS) per watt, a 100x improvement over today's GPUs, and significantly reduces wasted energy, currently at 90% for typical AI workloads.

Mythic APUs offer advantages in scaling AI models, particularly large language models (LLMs). Internal benchmarks demonstrate up to 750x more tokens per second per watt when running 1T-parameter LLMs compared to NVIDIA's GPUs. The architecture simplifies scaling to 1T-parameter models, eliminating the need for high-speed inter-chip connections such as NVLink. Mythic's chips also offer significant cost benefits, achieving up to 80x lower cost per million tokens compared to current GPUs, reaching as low as half a cent for 100B LLMs.

Manufactured using mature silicon memory cells, with 150 billion units shipped to date, Mythic's APUs are produced in the United States and allied countries, providing a strategic advantage over competitors. A single multiply-accumulate (MAC) operation, the core of AI computation, consumes only 17 femtojoules on Mythic APUs, a 1,000x improvement in energy efficiency over GPUs, achieved by unifying computation and memory like the human brain.
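The scaling and cost claims in this section can be made concrete with back-of-the-envelope arithmetic. The half-cent figure and the 80x advantage are the article's; the 8-bit weight format and per-GPU memory capacities below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope arithmetic for the scaling and cost claims.
# Assumptions (not from the article): weights stored at 8 bits each, and a
# typical high-end GPU carrying on the order of 80-192 GB of HBM.

params = 1e12                       # 1T-parameter model
bytes_per_weight = 1                # assumed 8-bit weights
weight_bytes = params * bytes_per_weight
print(f"weights alone: {weight_bytes / 1e12:.1f} TB")

for hbm_gb in (80, 192):            # assumed per-GPU memory capacities
    gpus_needed = weight_bytes / (hbm_gb * 1e9)
    print(f"at {hbm_gb} GB per GPU: ~{gpus_needed:.0f} GPUs just to hold the weights")
# Splitting one model across that many GPUs is what drives the need for
# high-bandwidth inter-chip links such as NVLink.

# Implied GPU cost from the article's figures: an 80x advantage at half a cent
# per million tokens for a 100B-parameter model.
apu_cost_per_m_tokens = 0.005       # dollars, quoted for 100B LLMs
gpu_cost_per_m_tokens = apu_cost_per_m_tokens * 80
print(f"implied GPU cost: ${gpu_cost_per_m_tokens:.2f} per million tokens")
```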
"Mythic will win based on this fundamental insight: energy efficiency will define the future of AI computing everywhere."
Taner Ozcelik

Source: https://mythic.ai/whats-new/mythic-to-challenge-ais-gpu-pantheon-with-100x-energy-advantage-and-oversubscribed-125m-raise/
