LILT is seeking freelance AI Red Team experts to collaborate on projects focused on adversarial testing of AI systems — LLMs, multimodal models, inference services, RAG/embeddings, and product integrations. Your work will involve crafting prompts and scenarios to test model guardrails, exploring creative ways to bypass restrictions, and systematically documenting outcomes. You’ll think like an adversary to uncover weaknesses, while collaborating with engineers and safety researchers to share findings and improve system defenses.
1. Generative AI Expertise: Deep understanding of generative AI and leading models, including their underlying architectures, training processes, and potential failure modes. This includes knowledge of concepts like prompt engineering, fine-tuning, and reinforcement learning from human feedback (RLHF).
2. Cybersecurity & Threat Modeling: Experience in cybersecurity principles, including threat modeling, vulnerability assessment, and penetration testing. Ability to identify attack vectors, simulate real-world threats, and understand the potential impact of an attack.
3. Data Analysis & NLP: Strong analytical skills to dissect model outputs, identify subtle biases or factual errors, and recognize patterns in how the model responds to different inputs. A background in Natural Language Processing (NLP) would be highly beneficial.
4. Ethical Hacking Mindset: A commitment to using their skills for defensive and security-focused purposes, adhering to a strict ethical code, and understanding the importance of responsible disclosure.
Benefits
Why might this freelance opportunity be a great fit for you?
Our Story
Our founders, Spence and John, met at Google while working on Google Translate. As researchers at Stanford and Berkeley, they both worked on language technology to make information accessible to everyone. While together at Google, they were amazed to learn that Google Translate wasn’t used for enterprise products and services inside the company. The quality just wasn’t there. So they set out to build something better. LILT was born.
LILT has been a machine learning company since its founding in 2015. At the time, machine translation didn’t meet the quality standard for enterprise translations, so LILT assembled a cutting-edge research team tasked with closing that gap. While meeting customer demand for translation services, LILT has prioritized investments in Large Language Models, human-in-the-loop systems, and now agentic AI.
With AI innovation accelerating and enterprise demand growing, the next phase of LILT’s journey is just beginning.
Our Tech
What sets our platform apart:
LILT in the News
Information collected and processed as part of your application process, including any job applications you choose to submit, is subject to LILT's Privacy Policy at https://lilt.com/legal/privacy.
At LILT, we are committed to a fair, inclusive, and transparent hiring process. As part of our recruitment efforts, we may use artificial intelligence (AI) and automated tools to assist in the evaluation of applications, including résumé screening, assessment scoring, and interview analysis. These tools are designed to support human decision-making and help us identify qualified candidates efficiently and objectively. All final hiring decisions are made by people. If you have any concerns, require accommodations, or would like to opt out of the use of AI in our hiring process, please let us know at recruiting@lilt.com.
LILT is an equal opportunity employer. We extend equal opportunity to all individuals without regard to an individual’s race, religion, color, national origin, ancestry, sex, sexual orientation, gender identity, age, physical or mental disability, medical condition, genetic characteristics, veteran or marital status, pregnancy, or any other classification protected by applicable local, state or federal laws. We are committed to the principles of fair employment and the elimination of all discriminatory practices.