We are seeking a highly skilled and innovative AI Security Scientist to join our RDSec team, a global team responsible for the governance, risk assessment, and compliance of Trend Micro software products and services. This unique role bridges the gap between theoretical AI security research and practical enterprise security implementation. The successful candidate will be responsible for defining the acceptable security posture for new AI applications and services, and for leading the design and implementation of related mitigations and remediations to address risks arising from the adoption of AI services.
You will act as the organization's expert on AI risks, proactively identifying, assessing, and mitigating emerging threats across the entire AI pipeline, and translating complex security requirements into scalable, deployable security services.
Key Responsibilities
AI Security Risk Assessment
- AI Threat Modeling & Risk Assessment: Lead the analysis and assessment of security and privacy risks inherent in large language models (LLMs), Generative AI services, and other machine learning applications (e.g., prompt injection, data poisoning, model extraction, and privacy breaches).
- Policy and Compliance: Collaborate with legal and compliance teams to ensure AI applications adhere to internal policies and external regulations (e.g., NIST AI RMF, EU AI Act, emerging AI-specific laws).
- Vulnerability Analysis: Study and document potential vulnerabilities within AI ecosystems and pipelines, including model integrity, training data exposure/leakage, and inference endpoints.
- Remediation Design: Design and recommend effective security controls and mitigations to address identified risks, translating security requirements into actionable engineering plans.
AI Security Implementation and Engineering Leadership
- Security Service Implementation: Lead the technical implementation of specific AI-related security services, such as input/output content filters, adversarial defense mechanisms, and secure model serving architectures.
- Coding Assistant Security: Specialize in securing coding assistant tools, analyzing the risks associated with code generation and code suggestion, and designing guardrails to prevent the introduction of insecure code or intellectual property leakage.
- MLSecOps Integration: Partner with ML/AI Engineers and Data Scientists to embed security practices into the MLOps pipeline, championing a "security-by-design" approach for all AI initiatives.
- Research & Advisory: Stay current with the rapidly evolving field of AI security and attack vectors. Provide expert consultation to product and engineering teams on best practices for secure AI development.
Required Qualifications
- Education: Bachelor’s or Master’s degree in Computer Science, Cybersecurity, Data Science, or a related technical field.
- Experience: 5+ years of experience in Software Security, with at least 2 years focused specifically on AI/ML security, MLOps security, or cloud-native security.
- Deep AI/ML Understanding: Strong foundational knowledge of Machine Learning algorithms, Large Language Models (LLMs), and Generative AI architectures. Must be able to reason about a model’s vulnerabilities at a conceptual level.
- Technical Proficiency:
  - Proficiency in Python and experience with ML frameworks (e.g., TensorFlow, PyTorch).
  - Familiarity with Cloud Security principles (AWS, Azure, or GCP) and containerization technologies (Docker, Kubernetes).
  - Experience with CI/CD/MLOps pipelines and implementing security automation within them.
  - Hands-on experience with AI coding assistants and agent frameworks such as GitHub Copilot, Claude Code, Cursor, Dify/n8n, and so on.
- GRC Expertise: Solid understanding of Governance, Risk, and Compliance principles, risk assessment methodologies, and industry security frameworks (e.g., NIST CSF, ISO 27001, ISO 42001).
Preferred Qualifications
- Experience in a role directly involving the security of coding assistants or developer tools.
- Advanced degree (Ph.D. or Master’s) in a relevant field.
- Relevant certifications such as CISSP, CISM, or CRISC, or specialized certifications like Certified AI Security Professional (CAISP).
- Experience publishing research or presenting on topics related to AI Security or Adversarial Machine Learning.
- Familiarity with software supply chain security as it applies to dependencies in ML models and training data.