Software Engineer, AI/ML GenAI
Instrumentl
Job Summary
As a Software Engineer, AI/ML GenAI at Instrumentl, you will own the full lifecycle of AI features, from rapid prototyping to production deployment and ongoing evaluation. This includes building agentic LLM systems that can plan and use tools, implementing RAG pipelines over domain data, managing and evolving embeddings and indices, running fine-tuning, and standing up evaluation and observability to ensure AI is grounded, safe, and cost-effective. You will collaborate closely with Product and Design teams.
Must Have
- 5+ years professional software engineering experience, 2+ years with modern LLMs (as an IC)
- Proven production impact with LLM/RAG systems from prototype to production
- Experience building LLM agentic systems (tool/function-calling workflows, planning/execution loops)
- Strong RAG expertise (document ingestion, chunking, embeddings, hybrid search, re-ranking, citations)
- Hands-on with embedding model selection/versioning and vector DBs
- Comfort designing eval suites (RAG/QA, extraction, summarization)
- Proficiency in Python (FastAPI, Celery) and TypeScript/Node
- Familiarity with Ruby on Rails or willingness to learn
- Experience with AWS/GCP, Docker, CI/CD, and observability
- Comfortable with SQL, schema design, and data pipelines
Good to Have
- Startup experience and comfort operating in fast, scrappy environments
- Practical experience with SFT/LoRA or instruction-tuning
- Exposure to open-source LLMs (e.g., Llama) and providers (e.g., OpenAI, Anthropic, Google, Mistral)
- Familiarity with responsible AI, red-teaming, and domain-specific safety policies
Perks & Benefits
- 100% covered health, dental, and vision insurance for employees
- 50% covered health, dental, and vision insurance for dependents
- Generous PTO policy, including parental leave
- 401(k)
- Company laptop + stipend to set up your home workstation
- Company retreats for in-person time with colleagues
- Opportunity to work with awesome nonprofits
Job Description
Hello, we're Instrumentl. We're a mission-driven startup helping the nonprofit sector to drive impact, and we're well on our way to becoming the #1 most-loved grant discovery and management tool.
About us: Instrumentl is a hyper-growth YC-backed startup with over 4,000 nonprofit clients, from local homeless shelters to larger organizations like the San Diego Zoo and the University of Alaska. We are building the future of fundraising automation, helping nonprofits to discover, track, and manage grants efficiently through our SaaS platform. Our charts are dramatically up-and-to-the-right: we're cash flow positive and doubling year-over-year, with customers who love us (NPS is 65+ and Ellis PMF survey is 60+). Join us on this rocket ship to Mars!
About the Role: As a Software Engineer, AI/ML GenAI at Instrumentl, you'll own the full lifecycle of AI features, from rapid prototyping to production deployment and ongoing evaluation. You will build agentic LLM systems that can plan and use tools, implement RAG pipelines over our domain data, manage and evolve embeddings and indices, run fine-tuning where it's the right lever, and stand up evaluation/observability so our AI is grounded, safe, and cost-effective. You'll embed with a product group in a hands-on role, collaborating closely with Product and Design, while partnering with DTI on platform-level AI capabilities.
The Instrumentl team is fully distributed (though if you'd like to work from our Oakland office, we would love to see you there). For this position, we are looking for someone who has significant overlap with Pacific Time Zone working hours.
What you will do
- Design agentic systems & ship AI to production: Turn prototypes into resilient, observable services with clear SLAs, rollback/fallback strategies, and cost/latency budgets. Build tool-using LLM agents (task planning, function/tool calling, multi-step workflows, guardrails) for tasks like grant discovery, application drafting, and research assistance (a minimal sketch of such a loop follows this list).
- Own RAG end-to-end: Ingest and normalize content, choose chunking/embedding strategies, implement hybrid retrieval, re-ranking, citations, and grounding. Continuously improve recall/precision while managing index health.
- Manage embeddings at scale: Select, evaluate, and migrate embedding models; maintain vector stores (e.g., pgvector/FAISS/Pinecone/Weaviate/Milvus/Qdrant); monitor drift and rebuild strategies.
- Fine-tune & build evaluation: Run SFT/LoRA or instruction-tuning on curated datasets; evaluate the ROI vs. prompt engineering/model selection; manage data versioning and reproducibility. Create offline and online eval harnesses (helpfulness, groundedness, hallucination, toxicity, latency, cost), synthetic test sets, red-teaming, and human-in-the-loop review.
- Collaborate cross-functionally while raising engineering standards: Work side by side with Product, Design, and GTM on scoping, UX, and measurement; run experiments (A/B, canaries), interpret results, and iterate. Write clear, maintainable code, add tests and docs, and contribute to reliability practices (alerts, dashboards, incident response).
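To give a flavor of the agentic work above, here is a minimal sketch of a bounded plan/act tool-calling loop. It assumes the OpenAI Python SDK as one possible provider; `search_grants`, the model name, and the return payloads are hypothetical stand-ins for a real hybrid-retrieval tool, not a description of our production system.

```python
# Illustrative sketch only: a bounded tool-calling loop of the kind described above.
# `search_grants` is a hypothetical tool; a real version would query a hybrid
# (keyword + vector) index and return re-ranked, citable chunks.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_grants(query: str) -> str:
    """Stub retrieval tool returning JSON the model can cite from."""
    return json.dumps([{"title": "Example Grant", "deadline": "2025-03-01", "id": "doc-17"}])

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_grants",
        "description": "Search the grant index for relevant opportunities.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def run_agent(user_msg: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):  # bounded loop as a simple guardrail
        resp = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS
        )
        msg = resp.choices[0].message
        if not msg.tool_calls:
            return msg.content  # model answered without needing a tool
        messages.append(msg)
        for call in msg.tool_calls:  # execute each requested tool call
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": search_grants(**args),
            })
    return "Stopped after max_steps without a final answer."
```

In production, this loop would also carry the fallbacks, cost/latency budgets, and observability called out above.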
What we're looking for
- Software engineering background: 5+ years of professional software engineering experience, including 2+ years working with modern LLMs (as an IC). Startup experience and comfort operating in fast, scrappy environments are a plus.
- Proven production impact: You've taken LLM/RAG systems from prototype to production, owned reliability/observability, and iterated post-launch based on evals and user feedback.
- LLM agentic systems: Experience building tool/function-calling workflows, planning/execution loops, and safe tool integrations (e.g., with LangChain/LangGraph, LlamaIndex, Semantic Kernel, or custom orchestration).
- RAG expertise: Strong grasp of document ingestion, chunking/windowing, embeddings, hybrid search (keyword + vector), re-ranking, and grounded citations. Experience with re-rankers/cross-encoders, hybrid retrieval tuning, or search/recommendation systems.
- Embeddings & vector stores: Hands-on with embedding model selection/versioning and vector DBs (e.g., pgvector, FAISS, Pinecone, Weaviate, Milvus, Qdrant).
- Document processing & structured extraction: Document processing at scale (PDF parsing/OCR), structured extraction with JSON schemas, and schema-guided generation.
- Evaluation mindset: Comfort designing eval suites (RAG/QA, extraction, summarization) using automated and human-in-the-loop methods; familiarity with frameworks like Ragas/DeepEval/OpenAI Evals or equivalent (a minimal harness sketch follows this list).
- Infrastructure & languages: Proficiency in Python (FastAPI, Celery) and TypeScript/Node; familiarity with Ruby on Rails (our core platform) or willingness to learn. Experience with AWS/GCP, Docker, CI/CD, and observability (logs/metrics/traces).
- Data chops: Comfortable with SQL, schema design, and building/maintaining data pipelines that power retrieval and evaluation.
- Collaborative approach: You thrive in a cross-functional environment and can translate researchy ideas into shippable, user-friendly features.
- Results-driven: Bias for action and ownership with an eye for speed, quality, and simplicity.
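As a deliberately tiny illustration of the evaluation mindset above, here is a sketch of an offline eval harness. The test case, scoring heuristics, and `answer_question` stub are hypothetical placeholders; a real suite would add LLM-judge metrics, latency and cost tracking, and framework support such as Ragas or DeepEval.

```python
# Illustrative sketch only: a minimal offline eval harness for a RAG feature.
# Cases, scoring, and the system-under-test stub are hypothetical.
from dataclasses import dataclass

@dataclass
class Case:
    question: str
    must_mention: list[str]   # facts a grounded answer should contain
    source_ids: list[str]     # documents valid citations may come from

def answer_question(question: str) -> tuple[str, list[str]]:
    """Stub for the RAG system under test: returns (answer, cited_doc_ids)."""
    return "The application deadline is March 1 [doc-17].", ["doc-17"]

def score(case: Case, answer: str, citations: list[str]) -> dict:
    coverage = sum(f.lower() in answer.lower() for f in case.must_mention) / len(case.must_mention)
    grounded = bool(citations) and all(c in case.source_ids for c in citations)
    return {"coverage": coverage, "grounded": grounded}

CASES = [
    Case("When is the Example Grant deadline?", ["March 1"], ["doc-17"]),
]

if __name__ == "__main__":
    results = [score(c, *answer_question(c.question)) for c in CASES]
    print(f"mean coverage: {sum(r['coverage'] for r in results) / len(results):.2f}")
    print(f"grounded rate: {sum(r['grounded'] for r in results) / len(results):.2f}")
```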
Nice to have
- Fine-tuning: Practical experience with SFT/LoRA or instruction-tuning (and good intuition for when fine-tuning vs. prompting vs. model choice is the right lever); a minimal LoRA sketch follows this list.
- Exposure to open-source LLMs (e.g., Llama) and providers (e.g., OpenAI, Anthropic, Google, Mistral).
- Familiarity with responsible AI, red-teaming, and domain-specific safety policies.
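For reference, here is a minimal sketch of what the LoRA side of that bullet looks like in practice, assuming Hugging Face transformers and peft; the base model, rank, and target modules are illustrative assumptions rather than a description of our stack.

```python
# Illustrative sketch only: attaching LoRA adapters to an open-weights model.
# Base model and hyperparameters are assumptions; a real run would add a
# curated dataset, a trainer, and before/after evals to measure ROI vs. prompting.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B-Instruct"  # hypothetical base model choice
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # sanity check: only a small fraction should be trainable
```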
Compensation & Benefits
- Salary ranges are based on market data, relative to our size, industry, and stage of growth. Salary is one part of total compensation, which also includes equity, perks, and competitive benefits.
- For US-based candidates, our target salary band is $175,000 - $220,000/year + equity. Salary decisions will be based on multiple factors including geographic location, qualifications for the role, skillset, proficiency, and experience level.
- 100% covered health, dental, and vision insurance for employees, 50% for dependents
- Generous PTO policy, including parental leave
- 401(k)
- Company laptop + stipend to set up your home workstation
- Company retreats for in-person time with your colleagues
- Work with awesome nonprofits around the US. We partner with incredible organizations doing meaningful work, and you get to help power their success.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.