Staff AI Security Researcher
SandboxAQ
Job Description
Location
United States, Canada, United Kingdom, Switzerland
Employment Type
Full time
Location Type
Remote
Department
Cybersecurity
Compensation
- Staff - Tier 1: $213K – $298K
- Staff - Tier 2: $181K – $256.1K
At SandboxAQ, we are committed to competitive, equitable, and transparent compensation; we continuously benchmark our salaries and total compensation to premium markets to ensure our competitiveness. Individual pay within the above range is determined by job-related skills, experience, education, and geographic location.
With a focus on pay equity and ensuring opportunity for future salary progression, our typical practice is to hire within the first half of the base salary range for a given role and level. This approach allows us to reward performance and increasing expertise consistently as your career develops with us.
We use two geographic pay tiers to reflect the pay differences in local markets:
- Tier 1: Applies to candidates located within 75 miles of San Francisco, Los Angeles, Seattle, and New York.
- Tier 2: Applies to candidates located in all other locations in the US.
About SandboxAQ
SandboxAQ is a high-growth company delivering AI solutions that address some of the world's greatest challenges. The company’s Large Quantitative Models (LQMs) power advances in life sciences, financial services, navigation, cybersecurity, and other sectors.
We are a global team that is tech-focused and includes experts in AI, chemistry, cybersecurity, physics, mathematics, medicine, engineering, and other specialties. The company emerged from Alphabet Inc. as an independent, growth capital-backed company in 2022, funded by leading investors and supported by a braintrust of industry leaders.
At SandboxAQ, we’ve cultivated an environment that encourages creativity, collaboration, and impact. By investing deeply in our people, we’re building a thriving, global workforce poised to tackle the world's epic challenges. Join us to advance your career in pursuit of an inspiring mission, in a community of like-minded people who value entrepreneurialism, ownership, and transformative impact.
About the Role
The SandboxAQ Cybersecurity R&D team is looking for an AI Security Researcher to help us build the future of AI security, where the world’s most advanced AI systems are tested, protected, and hardened against the next generation of threats.
A successful candidate thrives at the intersection of machine learning, security, and software engineering. You’ll lead investigations into how AI systems can fail, and build the tools, rules, and frameworks that keep them secure. This is a hands-on role where you’ll break things, fix them, and then harden them for good. You’ll have extensive freedom to explore, collaborate, publish, and deploy, shaping the field of AI security from both the offensive and defensive sides.
We’re looking for somebody with the curiosity of a researcher, the rigor of an engineer, and the creativity of a hacker.
You will be part of a diverse team of ML experts, cryptographers, mathematicians, and physicists, where you will play a key role in the efficient and effective enablement of the cutting-edge technologies being developed at SandboxAQ. We’re not another security vendor chasing patch cycles; we want to make an impact, and we want to do it fast.
Core Responsibilities
- Conduct original research into vulnerabilities, exploits, and adversarial behaviors in LLMs, LQMs, agents, and related AI frameworks
- Build and operationalize AI security frameworks, evaluations and red teaming tools, and defensive mechanisms to protect models and data
- Partner with engineering and product teams to integrate your findings into real-world systems
- Lead or contribute to responsible disclosure and research publications that advance the state of the art
- Stay on the edge of the latest in AI interpretability, alignment, and adversarial robustness, and use that knowledge to make AI safer for everyone
Required Qualifications
- PhD or Masters in Computer Science or related field with a focus on Machine Learning or Cybersecurity
- Deep expertise in AI/ML, security research, or both, with a proven ability to find and fix real vulnerabilities
- Hands-on experience with at least one of the following: adversarial LLM red teaming, model extraction or prompt injection, data poisoning or evasion attacks, secure model deployment or sandboxing, detection and monitoring for AI misuse
- Strong programming skills in Python, and relevant ML and/or agentic frameworks
Desirable Qualifications
- Experience contributing to open source projects
- Experience in the broader cybersecurity domain is a plus, but not essential
SandboxAQ Welcomes All
We are committed to fostering a culture of belonging and respect, where diverse perspectives are actively sought and valued. Our multidisciplinary environment provides ample opportunity for continuous growth, working alongside humble, empowered, and ambitious colleagues ready to tackle epic challenges.
Equal Employment Opportunity: All qualified applicants will receive consideration regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status.
Accommodations: We provide reasonable accommodations for individuals with disabilities in job application procedures for open roles. If you need such an accommodation, please let a member of our Recruiting team know.