Security Labs Engineer

Compensation
$320,000–$405,000/year

Job Description

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.

Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we'll treat that as a useful signal. The questions these projects are designed to answer include:

  • Can our core research workflows survive extreme isolation?
  • Can we get cryptographic guarantees where we currently rely on trust?
  • Can AI become our most effective security control?

As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.

Current Project Areas

The portfolio evolves based on what we learn. Current areas include:

  • Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact
  • Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production (a baseline sketch follows this list)
  • Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)
  • Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring
  • Prototyping API-only access regimes where even internal research workflows never touch raw model weights
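
For concreteness (and purely as an illustration, not Anthropic's actual tooling), here is a minimal Python sketch of the weakest baseline the cryptographic-verification work would improve on: streaming a checkpoint through SHA-256 and checking it against a trusted manifest in constant time. A digest check only attests that the bytes on disk match the manifest; zero-knowledge and attestation techniques aim at the stronger claim about what is actually running in production. All file names and digests below are hypothetical.

    import hashlib
    import hmac
    from pathlib import Path

    def weights_digest(path: Path, chunk_size: int = 1 << 20) -> str:
        """Stream the checkpoint through SHA-256 so multi-GB files never sit in memory."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            while chunk := f.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()

    def verify_checkpoint(path: Path, expected_hex: str) -> bool:
        """Compare against the manifest entry in constant time to avoid timing leaks."""
        return hmac.compare_digest(weights_digest(path), expected_hex)

    # Hypothetical usage: a signed manifest is distributed out-of-band, and the
    # serving stack refuses to load any checkpoint whose digest has drifted.
    MANIFEST = {"checkpoint-0042.bin": "9f2b..."}  # placeholder digest
    # if not verify_checkpoint(Path("checkpoint-0042.bin"), MANIFEST["checkpoint-0042.bin"]):
    #     raise RuntimeError("integrity check failed; refusing to serve")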

Part of your job is helping shape what comes next based on gaps uncovered in the current round.

What You'll Do:

  • Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results
  • Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast
  • Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.
  • Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break
  • Evaluate and integrate emerging security technologies through coordination with external vendors and research groups
  • Turn experimental results into clear, decision-ready writeups that inform Anthropic's long-term security architecture and RSP commitments
  • Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments
  • Help scope and prioritize the next wave of Labs projects based on what the current round uncovers

You May Be a Good Fit If You Have:

  • 7+ years of software or security engineering experience, with a solid foundation in production systems
  • Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal
  • Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)
  • Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly
  • A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan
  • Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on
  • Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward
  • Genuine curiosity about what it would actually take to defend against a nation-state-level adversary
  • Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well
  • A Bachelor's degree in Computer Science or a related field, or equivalent industry experience

Strong Candidates May Also Have:

  • Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter
  • Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them
  • Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives
  • Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need
  • Background building or operating security systems in environments that demand rapid iteration rather than rigid change control
  • Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal

Location

This role is based in our San Francisco office (500 Howard St). Several Labs projects involve secure physical facilities on-site, so expect to be in the office more frequently than Anthropic's standard 25% hybrid baseline.

We Encourage You to Apply

Not all strong candidates will meet every qualification listed above. Research shows that people from underrepresented groups are more likely to talk themselves out of applying. If this work interests you and you have most of what we're looking for, we'd like to hear from you.

We believe AI systems have profound social and ethical implications, and we think diverse perspectives make our work better. We actively work to build a team that reflects a range of backgrounds and experiences.

Deadline to Apply: None; applications are accepted on a rolling basis.

The annual compensation range for this role is listed below.

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary:
$320,000 – $405,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.


Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.


Job Details

Department
Security
Category
Security
Employment Type
Full Time
Location
San Francisco, California, United States
Posted
Mar 16, 2026, 05:06 PM
Compensation
$320,000 - $405,000 per year

