Who we are
Mindtickle is the market-leading revenue productivity platform that combines on-the-job learning and deal execution to get more revenue per rep. Mindtickle is recognized as a market leader by top industry analysts and is ranked by G2 as the #1 sales onboarding and training product. We’re honoured to be recognized as a Leader in the first-ever Forrester Wave™: Revenue Enablement Platforms, Q3 2024!
As an SDE-3 in AI/ML, you will:
- Translate business asks and requirements into technical requirements, solutions, architectures, and implementations.
- Define clear problem statements and technical requirements by aligning business goals with AI research objectives.
- Lead the end-to-end design, prototyping, and implementation of AI systems, ensuring they meet performance, scalability, and reliability targets.
- Architect solutions for GenAI and LLM integrations, including prompt engineering, context management, and agentic workflows.
- Develop and maintain production-grade code with high test coverage and robust CI/CD pipelines on AWS, Kubernetes, and cloud-native infrastructures.
- Establish and maintain post-deployment monitoring, performance testing, and alerting frameworks to ensure performance and quality SLAs are met.
- Conduct thorough design and code reviews, uphold best practices, and drive technical excellence across the team.
- Mentor and guide junior engineers and interns, fostering a culture of continuous learning and innovation.
- Collaborate closely with product management, QA, data engineering, DevOps, and customer-facing teams to deliver cohesive AI-powered product features.
Key Responsibilities
- Problem Definition & Requirements
1. Translate business use cases into detailed AI/ML problem statements and success metrics.
2. Gather and document functional and non-functional requirements, ensuring traceability throughout the development lifecycle.
- Architecture & Prototyping
1. Design end-to-end architectures for GenAI and LLM solutions, including context orchestration, memory modules, and tool integrations.
2. Build rapid prototypes to validate feasibility, iterate on model choices, and benchmark different frameworks and vendors.
- Development & Productionization
1. Write clean, maintainable code in Python, Java, or Go, following software engineering best practices.
2. Implement automated testing (unit, integration, and performance tests) and CI/CD pipelines for seamless deployments.
3. Optimize model inference performance and scale services using containerization (Docker) and orchestration (Kubernetes).
- Post-Deployment Monitoring
1. Define and implement monitoring dashboards and alerting for model drift, latency, and throughput.
2. Conduct regular performance tuning and cost analysis to maintain operational efficiency.
- Mentorship & Collaboration
1. Mentor SDE-1/SDE-2 engineers and interns, providing technical guidance and career development support.
2. Lead design discussions, pair-programming sessions, and brown-bag talks on emerging AI/ML topics.
3. Work cross-functionally with product, QA, data engineering, and DevOps to align on delivery timelines and quality goals.
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 8+ years of professional software development experience, with at least 3 years focused on AI/ML systems.
- Proven track record of architecting and deploying production AI applications at scale.
- Strong programming skills in Python and one or more of Java, Go, or C++.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and containerized deployments.
- Deep understanding of machine learning algorithms, LLM architectures, and prompt engineering.
- Expertise in CI/CD, automated testing frameworks, and MLOps best practices.
- Excellent written and verbal communication skills, with the ability to distill complex AI concepts for diverse audiences.
Preferred Experience
- Prior experience building agentic AI or multi-step workflow systems (using tools like LangGraph, CrewAI, or similar).
- Familiarity with open-source LLMs (e.g., models hosted on Hugging Face) and custom fine-tuning.
- Familiarity with ASR (Speech to Text) and TTS (Text to Speech), and other multi-modal systems.
- Experience with monitoring and observability tools (e.g., Datadog, Prometheus, Grafana).
- Publications or patents in AI/ML or related conference presentations.
- Knowledge of GenAI evaluation frameworks (e.g., Weights & Biases, CometML).
- Proven experience designing, implementing, and rigorously testing AI-driven voice agents that integrate with platforms such as Google Dialogflow, Amazon Lex, and Twilio Autopilot, while ensuring high performance and reliability.
What we offer
- Opportunity to work at the forefront of GenAI, LLMs, and Agentic AI in a fast-growing SaaS environment.
- Collaborative, inclusive culture focused on innovation, continuous learning, and professional growth.
- Competitive compensation, comprehensive benefits, and equity options.
- Flexible work arrangements and support for professional development.
Our culture & accolades
As an organization, it’s our priority to create a highly engaging and rewarding workplace. We offer tons of awesome perks and many opportunities for growth.
Our culture reflects our employees' globally diverse backgrounds, our commitment to our customers and to each other, and a passion for excellence. We live up to our values, DAB: Delight your customers, Act as a Founder, and Better Together.
Mindtickle is proud to be an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
Your Right to Work - In compliance with applicable laws, all persons hired will be required to verify their identity and eligibility to work in the respective work locations and to complete the required employment eligibility verification form upon hire.