Job Description
About the Team
The Foundations team focuses on how model behavior changes as we scale models, data, and compute. The team studies the interactions between model architecture, optimization, and training data, and uses those insights to guide how new models are designed and trained.
About the Role
In this role, you will build the systems that enable advanced AI models to run efficiently at scale. You will operate at the intersection of model research and systems engineering, translating new architectural ideas into high-performance inference systems that surface real tradeoffs in performance, memory, and scalability.
Your work will directly influence how models are designed, evaluated, and iterated on across the research organization. By developing and evolving high-performance inference infrastructure, you will enable researchers to explore new ideas with a clear understanding of their computational and systems implications.
This is not a product-serving role. Instead, it is a research-enabling systems role focused on performance, correctness, and realism, ensuring that AI research is grounded in what can actually scale.
In this role, you will:
- Design and build high-performance inference runtimes for large-scale AI models, with a focus on efficiency, reliability, and scalability.
- Own and optimize core execution paths, including model execution, memory management, batching, and scheduling.
- Develop and improve distributed inference across multiple GPUs, including parallelism strategies, communication patterns, and runtime coordination.
- Implement and optimize inference-critical operators and kernels informed by real-world workloads.
- Partner closely with research teams to ensure new model architectures are supported accurately and efficiently in inference systems.
- Diagnose and resolve performance bottlenecks through profiling, benchmarking, and low-level debugging.
- Contribute to observability, correctness, and reliability of large-scale AI systems.
You might thrive in this role if you:
- Have experience building production inference systems, not just training or running models.
- Are comfortable with GPU-centric performance engineering, including memory behavior and latency/throughput tradeoffs.
- Have worked on multi-GPU or distributed systems involving batching, scheduling, or runtime coordination.
- Can reason end-to-end about inference pipelines, from request handling through execution and output streaming.
- Are able to understand research ideas and implement them within real system and performance constraints.
- Enjoy solving hard, ambiguous systems problems that only emerge at scale.
- Prefer hands-on technical ownership and execution over abstract design work.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Job Details
- Category: Research
- Employment Type: Contract
- Location: San Francisco
- Posted: Mar 19, 2026, 03:51 PM
- Listed: Mar 19, 2026, 04:35 PM