About the role:

Anthropic is working on frontier AI research that has the potential to transform how humans and machines interact. As we rapidly advance foundational LLMs, application security is paramount. In this role, you will apply security patterns built for high-risk environments to safeguard model weights as we scale new capabilities. Working closely with software engineers, you will institute controls around access, infrastructure, and data to proactively minimize risks from bad actors. This is an opportunity to join a team of experts building AI for social good, while advancing the frontier of safe and ethical AI development.

Responsibilities:
- Lead "shift left" security efforts to build security into the software development lifecycle.
- Conduct secure design reviews and threat modeling. Identify and prioritize risks, attack surfaces, and vulnerabilities.
- Perform security code reviews of source code changes and advise developers on remediating vulnerabilities and following secure coding practices.
- Manage Anthropic's vulnerability management program. Triage and prioritize vulnerabilities from scans, audits, and bug bounty submissions. Track remediation and validate fixes.
- Oversee Anthropic's bug bounty program. Set scope, triage submissions, coordinate disclosure with engineering teams, and reward bounties. Cultivate relationships with the ethical hacker community.
- Research and recommend security tools and technologies to strengthen defenses against emerging threats targeting machine learning systems.
- Develop and document security policies, standards, and playbooks. Conduct security awareness training for engineers.
- Collaborate closely with product engineers and researchers to instill security best practices. Advocate for secure architecture, design, and development.
You may be a good fit if you:
- Have 5+ years of hands-on experience in application and infrastructure security, including securing cloud-based and containerized environments.
- Have empathy, collaboration skills, and a learning mindset to work cross-functionally with engineers of all levels to build security into the product life cycle.
- Can use creative and strategic thinking to reduce risk through secure design and simplicity, not just controls.
- Possess broad security knowledge to connect the dots across domains and identify holistic ways to lower the overall threat surface.
- Have the ability to distill complex security concepts into clear actions and drive consensus without direct authority.
- Have a proactive mindset to thread security throughout the product lifecycle through activities like threat modeling, secure code review, and education.
- Have a strong grasp of offensive security to anticipate risks from an adversary's perspective, not just check compliance boxes.
- Have experience with modern application stacks, infrastructure, and security tools to implement pragmatic defenses.
- Are passionate about security fundamentals like least privilege, defense-in-depth, and eliminating complexity, so that smart design lets security scale sub-linearly.
Strong candidates may also:
- Have hands-on technical expertise securing complex cloud environments and microservices architectures leveraging technologies like Kubernetes, Docker, and AWS/GCP.
- Have experience with offensive security techniques like vulnerability testing, pen testing, and red team exercises.
- Have familiarity with AI/ML security risks such as data poisoning, model extraction, and adversarial examples, along with their mitigations.
- Have experience building security tools, scripts, and automations.
- Have a solid foundational knowledge of security engineering principles and technologies, and a keenness to keep learning.
- Possess excellent communication skills, able to distill complex security topics for broad audiences.
- Have a passion for security and protecting users, and a willingness to constructively challenge assumptions to drive security.
Candidates need not have:
- 100% of the skills needed to perform the job
- Formal certifications or education credentials