
Advanced Technology: R&D Engineer - AI/ML, HPC
Job Description
Cerebras Systems builds the world's largest AI chip, 56 times larger than a GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach lets Cerebras deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications effortlessly, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute, transforming key workloads with ultra-high-speed inference.
Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, more than ten times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude speedup is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About The Team
Cerebras builds wafer-scale AI processors—single chips delivering tens of PB/s of memory bandwidth and a dataflow architecture that accelerates at a granularity no multi-device system can match. The Advanced Technology Group (ATG) is Cerebras’ pathfinding organization. We work ahead of product to explore new architectures, demonstrate breakthrough performance on scientific and AI workloads, and shape the technical roadmap for future Cerebras hardware and software. Our work regularly appears at top-tier venues (Supercomputing, SIAM, IEEE, and NeurIPS) and directly influences the design of next-generation wafer-scale systems.
About The Role
We are seeking R&D Engineers to join Cerebras' Advanced Technology Group. You will design and implement workloads that establish new performance benchmarks on wafer-scale hardware, leveraging architectural features that no traditional platform offers. The scope ranges from large-scale scientific simulations to emerging AI/ML models, and the work sits at the intersection of algorithm design, compiler co-optimization, and hardware architecture. You will collaborate closely with Cerebras' ASIC, compiler, kernel, and AI teams, as well as external partners at universities and national laboratories.
What You Will Do
- Design and implement challenging scientific computing and AI workloads on Cerebras’ Wafer-Scale Engine, targeting performance results that advance the state of the art.
- Lead algorithm–hardware co-design efforts with internal R&D teams and external research partners, turning architectural capabilities into measurable application-level advantages.
- Build analytical performance models that quantify bottlenecks, guide optimization, and inform future chip and compiler design decisions.
- Contribute to Cerebras’ multi-year technology roadmap by identifying high-impact workloads, proposing architectural experiments, and validating them on silicon.
- Publish findings and present at top-tier conferences and journals; represent Cerebras in the broader HPC and AI research communities.
What We Are Looking For
- PhD in Computer Science, Engineering, Applied Mathematics, Physics, or a related quantitative field preferred. Exceptional candidates without a graduate degree who demonstrate equivalent depth through published research, significant open-source contributions, or a strong industry track record are encouraged to apply.
- Deep experience in at least one of the following: computer architecture and accelerator design; parallel, distributed, or high-performance computing; numerical methods and scientific simulation; AI/ML theory and model design at a mathematical level.
- Strong ability to analytically model and optimize the performance of complex systems and algorithms.
- Track record of published research or patents in relevant venues.
- Proficiency in C and Python; comfort working close to hardware.
- Excellent communication and interpersonal skills: able to present complex technical material to both specialist and cross-functional audiences, and to collaborate effectively in a fast-paced, small-team environment.
Areas Of Particular Interest
We are hiring across several focus areas. Exceptional depth in one or more of the following is a strong signal:
- Computational science: researchers who can bring insights from numerical methods and simulation into AI, or couple simulation and learning into joint computational workflows. Depth in hydrodynamics, solid mechanics, electromagnetics, molecular dynamics, or related PDE-based fields.
- AI/ML foundations: deep understanding of model architecture, optimization methods, and their statistical underpinnings—the ability to design from first principles, not just apply established recipes.
- Computer architecture: microarchitecture design, computing paradigms at the circuit and datapath level, memory hierarchy design.
- Performance engineering: roofline modeling, bandwidth analysis, kernel optimization, communication-computation overlap, and compiler-level tuning for novel hardware.
Why This Opportunity Is Exciting And Unique
- Build on a fundamentally different architecture, unconstrained by GPU assumptions.
- Publish and open-source your research. We present at Supercomputing, SIAM, IEEE, NeurIPS, and beyond.
- Work on the fastest AI system in the world, with direct access to the hardware your code targets.
- Join at a pivotal moment: Cerebras is pre-IPO with strong commercial traction and rapid growth.
- Be part of a small, technical team with high autonomy, minimal bureaucracy, and a culture that values depth over hierarchy.
We are hiring for multiple positions across experience levels. If this work resonates, we encourage you to apply.
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Enjoy a simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2026.
Apply today and join us at the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
Job Details
- Department: Software
- Category: Software
- Employment Type: Full Time
- Location: Sunnyvale, CA
- Posted: Apr 6, 2026, 05:11 PM
- Listed: Apr 6, 2026, 05:11 PM