Location: SF Bay Area/Remote
Type: Full-Time
About the Role:
LMArena is seeking an experienced Security Engineer to lead the design and implementation of secure-by-default infrastructure across our platform. In this role, you’ll be responsible for safeguarding the systems that power real-time AI evaluation. Your work will ensure LMArena remains a trusted and resilient platform for the users, developers, researchers, and organizations that rely on transparent, community-driven AI benchmarks. This role is ideal for someone who thinks like both a builder and a breaker: someone who thrives on identifying attack surfaces, designing strong defenses, and building systems that scale trust as the platform grows.
Responsibilities:
Design and implement foundational security infrastructure (IT systems, 2FA, VPNs, login controls, suspicious account filtering) while integrating tools to detect supply chain vulnerabilities
Audit and improve access patterns across compute/storage/databases to prevent credential leaks and PII exposure while enforcing Principle of Least Privilege (PoLP)
Develop and enforce security policies, standards and procedures across the organization, including regular vulnerability assessments, penetration testing and code reviews
Build and maintain real-time monitoring systems for threat detection/response while collaborating with engineering, product and operations teams on security integration
Ensure compliance with relevant regulations (GDPR, CCPA) and industry standards through regular audits and updates
Foster company-wide security awareness through training programs, documentation and cross-team guidance
Scale security operations and build/lead security teams to address evolving challenges as the company grows
Who is LMArena?
Trusted by organizations like Google, OpenAI, Meta, xAI, and more, LMArena is rapidly becoming essential infrastructure for transparent, human-centered AI evaluation at scale. With over one million monthly users and growing developer adoption, our platform is helping guide the next generation of safe, aligned AI systems, grounded in open access and collective feedback.
Our work is regularly referenced by industry leaders pushing the frontier of safe and reliable AI, including Sundar Pichai, Jeff Dean, Elon Musk, and Sam Altman.
Why Join Us?
High Impact: Your work will be used daily by the world’s most advanced AI labs.
Global Reach: Develop data infrastructure powering millions of real-world evaluations and influencing AI reliability across industries.
Exceptional Team: We are a small team of top talent from Google, DeepMind, Discord, Vercel, UC Berkeley, and Stanford.
Requirements:
5+ years of experience in software engineering or security engineering, with a focus on building secure, scalable systems
Proven experience in securing IT systems, networks, and cloud environments (e.g., AWS, Azure, GCP)
Strong knowledge of firewalls, IDS/IPS, endpoint protection, and vulnerability management tools
Strong knowledge of threat modeling, risk assessment, and designing mitigation strategies for real-world attack scenarios
Experience implementing common security tools and practices, including VPNs, MFA/2FA, intrusion detection, secrets management, and secure deployment pipelines
Experience designing and deploying infrastructure security measures across cloud environments, identity systems, and access controls (e.g., Okta, OneLogin)
Experience maintaining company hardware
Clear communicator who can collaborate cross-functionally and articulate risks and tradeoffs to technical and non-technical stakeholders
Bonus: Experience in adversarial ML, trust & safety systems, or securing user-driven platforms with voting or reputation systems
Our Tech Stack:
NextJS
Tailwind
ShadCN
HonoJS
Postgres
Vitest
What We Offer:
$210K–$231K + equity. Actual compensation will depend on job-related knowledge, skills, experience, and candidate location.
Competitive salary and meaningful equity
Comprehensive healthcare coverage (medical, dental, vision)
The opportunity to work on cutting-edge AI with a small, mission-driven team
A culture that values transparency, trust, and community impact
Come help build the space where anyone can explore and help shape the future of AI.