
Atlas VLA Research Intern
Job Description
Are you passionate about the intersection of foundation models and physical AI? As an intern on the Atlas VLA Research Team, you will join a world-class group of engineers and scientists dedicated to giving Atlas the intelligence to perceive, reason, and act.
Our team focuses on scaling Vision-Language-Action (VLA) models, leveraging diverse data sources, and building accurate spatial perception that keeps Atlas grounded and capable of performing challenging dexterous tasks. We are looking for PhD-level interns to lead research projects that push the boundaries of what humanoid robots can do. You won't work on pure theory; you'll deploy your ideas directly onto one of the most sophisticated pieces of hardware on the planet.
How you will make an impact:
Lead a high-stakes research project focused on either VLA scaling/training or spatial perception (SLAM/Calibration).
Prototype and deploy your algorithms directly on Atlas, moving from simulation to hardware.
Architect data pipelines that ingest alternative data sources to improve robot robustness.
Write production-grade code (Python/C++) that integrates with our existing systems.
Collaborate across teams to integrate learned policies with low-level robot control.
We are looking for:
Actively pursuing a PhD (preferred) or a research-heavy Master’s in Computer Science, Robotics, Machine Learning, or a related field.
Expertise in one or more of the following tracks:
Training large-scale multimodal models (VLMs/LLMs), imitation learning, or generative world models for robotics.
Classical and learned SLAM, visual odometry, or extrinsic/intrinsic camera calibration at scale.
Nice to have:
Experience troubleshooting and deploying algorithms on physical robot platforms, especially mobile and humanoid form factors.
Experience working with and contributing to large-scale datasets (e.g., Open X-Embodiment) or specialized data collection approaches like Universal Manipulation Interface.
Experience with large-scale cluster training (SLURM, distributed GPU training) and maintaining high-quality codebases.
Strong grasp of Lie groups, optimization, or transformer architectures.
The hourly pay range for this position is $30-$45/hour. Base pay will depend on multiple individualized factors including, but not limited to, internal equity, job-related knowledge, skills, education, and experience. This range represents a good faith estimate of compensation at the time of posting.
Job Details
- Category: Research
- Employment Type: Internship
- Location: Waltham Office (POST)
- Posted: Mar 4, 2026, 07:00 PM
- Listed: Mar 11, 2026, 11:48 PM
- Compensation: $30 - $45 per hour