AI Security Scientist

Cyber Security



Join Trend ‧ Join New Generation

We are seeking a highly skilled and innovative AI Security Scientist to join our RDSec team, a global team responsible for governance, risk assessment, and compliance across Trend Micro software products and services. This unique role bridges theoretical AI security research and practical enterprise security implementation. The successful candidate will define acceptable security postures for new AI applications and services, and will lead the design and implementation of mitigations and remediations to prevent potential risks arising from the adoption of AI services.

You will act as the organization’s expert on AI risks, proactively identifying, assessing, and mitigating emerging threats across the entire AI pipeline, and translating complex security requirements into scalable, deployable security services.

Key Responsibilities

AI Security Risk Assessment

  • AI Threat Modeling & Risk Assessment: Lead the analysis and assessment of security and privacy risks inherent in large language models (LLMs), Generative AI services, and other machine learning applications (e.g., prompt injection, data poisoning, model extraction, and privacy breaches).
  • Policy and Compliance: Collaborate with legal and compliance teams to ensure AI applications adhere to internal policies and external regulations (e.g., NIST AI RMF, EU AI Act, emerging AI-specific laws).
  • Vulnerability Analysis: Study and document potential vulnerabilities within AI ecosystems and pipelines, including model integrity, training data exposure/leakage, and inference endpoints.
  • Remediation Design: Design and recommend effective security controls and mitigations to address identified risks, translating security requirements into actionable engineering plans.
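As a rough illustration of the kind of mitigation this role would design, the sketch below shows a heuristic input filter that flags common prompt-injection phrasings before a request reaches an LLM. The pattern list and function name are hypothetical examples, not part of any Trend Micro service; a production control would pair such heuristics with a trained classifier and output-side checks.

```python
import re

# Hypothetical deny-list of phrases commonly seen in prompt-injection attempts.
# Illustrative only -- real filters combine heuristics with learned classifiers.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"reveal (the|your) system prompt",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection heuristic."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A filter like this would sit at the inference endpoint, rejecting or quarantining flagged requests before model invocation.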

AI Security Implementation and Engineering Leadership

  • Security Service Implementation: Lead the technical implementation of specific AI-related security services, such as input/output content filters, adversarial defense mechanisms, and secure model serving architectures.
  • Coding Assistant Security: Specialize in securing coding assistant tools, analyzing the risks of code generation and suggestion, and designing guardrails to prevent the introduction of insecure code or intellectual property leakage.
  • MLSecOps Integration: Partner with ML/AI Engineers and Data Scientists to embed security practices into the MLOps pipeline, championing a "security-by-design" approach for all AI initiatives.
  • Research & Advisory: Stay current with the rapidly evolving field of AI security and attack vectors. Provide expert consultation to product and engineering teams on best practices for secure AI development.
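A coding-assistant guardrail of the kind described above might, at its simplest, screen generated code for obviously insecure constructs before it reaches the developer. The rule set and names below are a hedged, illustrative sketch under the assumption of Python snippets; they are not an actual Trend Micro implementation.

```python
import re

# Illustrative rules for screening AI-generated Python snippets.
# A real guardrail would use AST analysis and secret-entropy checks, not just regex.
INSECURE_RULES = {
    "use of eval/exec": r"\b(eval|exec)\s*\(",
    "hardcoded secret": r"(api_key|password|secret)\s*=\s*['\"][^'\"]+['\"]",
    "shell=True subprocess": r"subprocess\.\w+\([^)]*shell\s*=\s*True",
}

def review_generated_code(snippet: str) -> list[str]:
    """Return the names of all insecure rules the snippet triggers."""
    return [name for name, pattern in INSECURE_RULES.items()
            if re.search(pattern, snippet)]
```

Suggestions that trigger any rule could be blocked, rewritten, or surfaced to the developer with a warning, depending on policy.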

Required Qualifications

  • Education: Bachelor’s or Master’s degree in Computer Science, Cybersecurity, Data Science, or a related technical field.
  • Experience: 5+ years of experience in Software Security, with at least 2 years focused specifically on AI/ML security, MLOps security, or cloud-native security.
  • Deep AI/ML Understanding: Strong foundational knowledge of Machine Learning algorithms, Large Language Models (LLMs), and Generative AI architectures. Must be able to reason about a model’s vulnerabilities at a conceptual level.
  • Technical Proficiency:
  • Proficiency in Python and experience with ML frameworks (e.g., TensorFlow, PyTorch).
  • Familiarity with Cloud Security principles (AWS, Azure, or GCP) and containerization technologies (Docker, Kubernetes).
  • Experience with CI/CD/MLOps pipelines and implementing security automation within them.
  • Hands-on experience with AI coding assistant frameworks such as GitHub Copilot, Claude Code, Cursor, or Dify/n8n.
  • GRC Expertise: Solid understanding of Governance, Risk, and Compliance principles, risk assessment methodologies, and industry security frameworks (e.g., NIST CSF, ISO 27001, ISO 42001).

Preferred Qualifications

  • Experience in a role directly involving the security of coding assistant or developer tools.
  • Advanced degree (Ph.D. or Master’s) in a relevant field.
  • Relevant certifications such as CISSP, CISM, or CRISC, or specialized certifications like Certified AI Security Professional (CAISP).
  • Experience publishing research or presenting on topics related to AI Security or Adversarial Machine Learning.
  • Familiarity with software supply chain security as it applies to dependencies in ML models and training data.

