Role Overview
LILT is seeking freelance AI Red Team experts to collaborate on projects focused on adversarial testing of AI systems — LLMs, multimodal models, inference services, RAG/embeddings, and product integrations. Your work will involve crafting prompts and scenarios to test model guardrails, exploring creative ways to bypass restrictions, and systematically documenting outcomes. You’ll think like an adversary to uncover weaknesses, while collaborating with engineers and safety researchers to share findings and improve system defenses.
Key Criteria
1. Generative AI Expertise: Deep understanding of the major generative models, including their underlying architectures, training processes, and potential failure modes. This includes knowledge of concepts like prompt engineering, fine-tuning, and reinforcement learning from human feedback (RLHF).
2. Cybersecurity & Threat Modeling: Experience in cybersecurity principles, including threat modeling, vulnerability assessment, and penetration testing. Ability to identify attack vectors, simulate real-world threats, and understand the potential impact of an attack.
3. Data Analysis & NLP: Strong analytical skills to dissect model outputs, identify subtle biases or factual errors, and recognize patterns in how the model responds to different inputs. A background in Natural Language Processing (NLP) would be highly beneficial.
4. Ethical Hacking Mindset: A commitment to using your skills for defensive and security-focused purposes, adhering to a strict ethical code, and understanding the importance of responsible disclosure.
Core Requirements
- You hold a Bachelor’s or Master’s degree in Computer Science, Software Engineering, Cybersecurity, Digital Forensics, or a related field.
- Your level of English is advanced (C1) or above.
- Knowledge of common model vulnerabilities (prompt injection, prompt-history leakage, data exfiltration via RAG).
- Experience in AI/ML security, evaluation, and red teaming, particularly with LLMs, AI agents, and RAG pipelines.
- You are ready to learn new methods, can switch between tasks and topics quickly, and can work with challenging, complex guidelines.
- Proficient in scripting and automation using Python, Bash, or PowerShell.
- Familiar with AI red-teaming frameworks such as garak or PyRIT.
Preferred Requirements
- Experience with physical-world adversarial testing.
- Experienced with containerization and CI/CD security tools, especially Docker.
- Proficient in offensive exploitation and exploit development.
- Skilled in reverse engineering using tools like Ghidra or equivalents.
- Expertise in network and application security, including web application security.
- Knowledge of operating system security concepts such as Linux privilege escalation and Windows internals.
- Familiar with secure coding practices for full-stack development.
Benefits
Why this freelance opportunity might be a great fit for you:
- Get paid for your expertise, with rates that can go up to $55/hour depending on your skills, experience, and project needs.
- Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments.
- Work on advanced AI projects and gain valuable experience that enhances your portfolio.
- Influence how future AI models understand and communicate in your field of expertise.