
Zettascale
Energy efficient chips for AI
About the Company
Building energy-efficient chips ("XPUs") for AI training and inference.
Our XPUs are reconfigurable: they can optimize the dataflow of each model, making them faster and more energy-efficient than the current state-of-the-art GPUs on the market. This saves data centers billions in cooling and energy costs.
Tech Stack
The major bottleneck in computing is memory, often referred to as the "von Neumann bottleneck": moving data from point A to point B is expensive in both time and energy. A good rule of thumb is that moving one bit costs about 1 pJ, and modern AI models exceed 100 GB of parameters.
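To make the rule of thumb concrete, here is a back-of-envelope sketch (illustrative figures only, assuming the ~1 pJ/bit estimate and a 100 GB model from the text):

```python
# Back-of-envelope: energy spent just moving model weights off-chip,
# using the ~1 pJ/bit rule of thumb. All figures are illustrative.
PJ_PER_BIT = 1e-12   # joules per bit moved (rule of thumb)
MODEL_BYTES = 100e9  # 100 GB of parameters

bits_moved = MODEL_BYTES * 8
joules_per_pass = bits_moved * PJ_PER_BIT  # one full read of the weights
print(f"Energy per weight pass: {joules_per_pass:.1f} J")

# At 1,000 passes per second, weight movement alone draws:
watts = joules_per_pass * 1000
print(f"Power at 1k passes/s:  {watts:.0f} W")
```

So even at 1 pJ/bit, simply streaming a 100 GB model through memory once costs about 0.8 J, before any arithmetic is done.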
Our polymorphic architecture reduces data movement: by composing functions and keeping intermediate data localized, we can execute larger portions of a model's computation in one pass. Because we can also perform non-linear tensor operations in parallel, throughput, speed, and energy efficiency all improve substantially.
Depending on the model architecture, we can achieve speedups from 2.5x to 10,000x (or more).
Open Positions at Zettascale (3 Jobs)



Ready to start your career at Zettascale?