
ML Infra Engineer (TPU/Jax/Optimization)

Posted 1 month ago · Software

Job Description

In this role you will help scale and optimize our training systems and core model code. You’ll own critical infrastructure for large-scale training, from managing GPU/TPU compute and job orchestration to building reusable and efficient JAX training pipelines. You’ll work closely with researchers and model engineers to translate ideas into experiments—and those experiments into production training runs.

This is a hands-on, high-leverage role at the intersection of ML, software engineering, and scalable infrastructure.

The Team

The ML Infrastructure team supports and accelerates PI’s core modeling efforts by building the systems that make large-scale training reliable, reproducible, and fast. The team works closely with research, data, and platform engineers to ensure models can scale from prototype to production-grade training runs.

In This Role You Will

- Own training/inference infrastructure: Design, implement, and maintain systems for large-scale model training, including scheduling, job management, checkpointing, and metrics/logging.

- Scale distributed training: Work with researchers to scale JAX-based training across TPU and GPU clusters with minimal friction.

- Optimize performance: Profile and improve memory usage, device utilization, throughput, and distributed synchronization.

- Enable rapid iteration: Build abstractions for launching, monitoring, debugging, and reproducing experiments.

- Manage compute resources: Ensure efficient allocation and utilization of cloud-based GPU/TPU compute while controlling cost.

- Partner with researchers: Translate research needs into infra capabilities and guide best practices for training at scale.

- Contribute to core training code: Evolve JAX model and training code to support new architectures, modalities, and evaluation metrics.
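To give a flavor of the kind of JAX training code the responsibilities above refer to, here is a minimal illustrative sketch of a jitted training step with a gradient update. It is a toy example only; the model, parameter names, and hyperparameters are hypothetical and not drawn from PI's actual codebase.

```python
import jax
import jax.numpy as jnp

# Toy linear model; loss is mean squared error.
def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

# jax.jit compiles the step; jax.grad derives the backward pass.
@jax.jit
def train_step(params, x, y, lr=0.1):
    grads = jax.grad(loss_fn)(params, x, y)
    # Plain SGD update applied across the parameter pytree.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros((3,)), "b": jnp.zeros(())}
x = jnp.ones((8, 3))
y = jnp.ones((8,))
for _ in range(100):
    params = train_step(params, x, y)
```

In practice the role would involve scaling steps like this across hosts and devices (e.g. with sharding annotations or `shard_map`) rather than single-device toys.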

What We Hope You’ll Bring

- Strong software engineering fundamentals and experience building ML training infrastructure or internal platforms.

- Hands-on experience with large-scale training in JAX (preferred) or PyTorch.

- Familiarity with distributed training, multi-host setups, data loaders, and evaluation pipelines.

- Experience managing training workloads on cloud platforms (e.g., SLURM, Kubernetes, GCP TPU/GKE, AWS).

- Ability to debug and optimize performance bottlenecks across the training stack.

- Strong cross-functional communication and ownership mindset.

Bonus Points If You Have

- Deep ML systems background (e.g., training compilers, runtime optimization, custom kernels).

- Experience operating close to hardware (GPU/TPU performance tuning).

- Background in robotics, multimodal models, or large-scale foundation models.

- Experience designing abstractions that balance researcher flexibility with system reliability.



Job Details

Category: Software
Employment Type: Full Time
Location: San Francisco
Posted: Jan 23, 2026, 03:49 PM
Listed: Mar 11, 2026, 08:35 PM

