Software Engineer, Performance Optimization

5+ years experience • Software Development & Engineering • $175,000 - $220,000 per annum

Job Summary

Fireworks AI is seeking a Software Engineer focused on Performance Optimization to enhance the speed and efficiency of its generative AI infrastructure. The role spans the entire stack, from low-level GPU kernels to large-scale distributed systems, with an emphasis on maximizing the performance of demanding workloads such as large language models, vision-language models, and video models. The engineer will collaborate with research, infrastructure, and systems teams to identify bottlenecks, implement optimizations, and scale AI systems for real-world production use cases, directly impacting the speed, scalability, and cost-effectiveness of advanced generative AI models.

Job Details

About Us:

Here at Fireworks, we’re building the future of generative AI infrastructure. Fireworks offers a generative AI platform with the highest-quality models and the fastest, most scalable inference. We’ve been independently benchmarked to have the fastest LLM inference, and we’ve been getting great traction with innovative research projects, like our own function calling and multi-modal models. Fireworks is funded by top investors, like Benchmark and Sequoia, and we’re an ambitious, fun team composed primarily of veterans from PyTorch and Google Vertex AI.

The Role: 

We're looking for a Software Engineer focused on Performance Optimization to help push the boundaries of speed and efficiency across our AI infrastructure. In this role, you'll take ownership of optimizing performance at every layer of the stack—from low-level GPU kernels to large-scale distributed systems. A key focus will be maximizing the performance of our most demanding workloads, including large language models (LLMs), vision-language models (VLMs), and next-generation video models.

You’ll work closely with teams across research, infrastructure, and systems to identify performance bottlenecks, implement cutting-edge optimizations, and scale our AI systems to meet the demands of real-world production use cases. Your work will directly impact the speed, scalability, and cost-effectiveness of some of the most advanced generative AI models in the world.

Key Responsibilities:

  • Optimize system and GPU performance for high-throughput AI workloads across training and inference
  • Analyze and improve latency, throughput, memory usage, and compute efficiency
  • Profile system performance to detect and resolve GPU- and kernel-level bottlenecks
  • Implement low-level optimizations using CUDA, Triton, and other performance tooling
  • Drive improvements in execution speed and resource utilization for large-scale model workloads (LLMs, VLMs, and video models)
  • Collaborate with ML researchers to co-design and tune model architectures for hardware efficiency
  • Improve support for mixed precision, quantization, and model graph optimization
  • Build and maintain performance benchmarking and monitoring infrastructure
  • Scale inference and training systems across multi-GPU, multi-node environments
  • Evaluate and integrate optimizations for emerging hardware accelerators and specialized runtimes
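To give a flavor of the benchmarking work above, here is a minimal, illustrative sketch of a latency/throughput harness. All names are invented for this sketch, and a real harness would measure GPU workloads (kernels, model forward passes) rather than a stubbed CPU function:

```python
import statistics
import time
from typing import Callable

def benchmark(fn: Callable[[], None], iters: int = 100) -> dict:
    """Run fn repeatedly; report latency percentiles (ms) and throughput (req/s)."""
    latencies = []
    start = time.perf_counter()
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        latencies.append(time.perf_counter() - t0)
    wall = time.perf_counter() - start
    # statistics.quantiles with n=100 yields 99 cut points: index 49 is p50, 98 is p99.
    qs = statistics.quantiles(latencies, n=100)
    return {
        "p50_ms": qs[49] * 1e3,
        "p99_ms": qs[98] * 1e3,
        "throughput_rps": iters / wall,
    }

# Stand-in workload; in practice this would wrap an inference call.
result = benchmark(lambda: sum(range(10_000)))
```

In production, a harness like this would also pin warm-up iterations, synchronize the GPU before timestamps, and export results to monitoring dashboards.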

Minimum Qualifications:

  • Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience
  • 5+ years of experience working on performance optimization or high-performance computing systems
  • Proficiency in CUDA or ROCm and experience with GPU profiling tools (e.g., Nsight, nvprof, CUPTI)
  • Familiarity with PyTorch and performance-critical model execution
  • Experience with distributed system debugging and optimization in multi-GPU environments
  • Deep understanding of GPU architecture, parallel programming models, and compute kernels

Preferred Qualifications:

  • Master’s or PhD in Computer Science, Electrical Engineering, or a related field
  • Experience optimizing large models for training and inference (LLMs, VLMs, or video models)
  • Knowledge of compiler stacks or ML compilers (e.g., torch.compile, Triton, XLA)
  • Contributions to open-source ML or HPC infrastructure
  • Familiarity with cloud-scale AI infrastructure and orchestration tools (e.g., Kubernetes, Ray)
  • Background in ML systems engineering or hardware-aware model design

Example projects:

  • Implement fully asynchronous low-latency sampling for large language models integrated with structured outputs
  • Implement GPU kernels for the new low-precision scheme and run experiments to find optimal speed-quality tradeoff
  • Build a distributed router with a custom load-balancing algorithm to optimize LLM cache efficiency
  • Define metrics and build harness for finding optimal performance configuration (e.g. sharding, precision) for a given class of model
  • Determine and implement in PyTorch an optimal sharding scheme for a novel attention variant
  • Optimize communication patterns in RDMA networks (Infiniband, RoCE)
  • Debug numerical instabilities for a given model for a small portion of requests when deployed at scale
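As a rough illustration of the distributed-router project above, here is a toy prefix-affinity router: requests that share a leading prompt prefix hash to the same replica, so that replica's KV cache is more likely to already hold the prefix. Replica names, the word-level prefix heuristic, and the hash scheme are all invented for this sketch; a production router would operate on tokens and track live replica load:

```python
import hashlib

class PrefixAffinityRouter:
    """Toy cache-aware router: prompts sharing a prefix map to the same replica."""

    def __init__(self, replicas: list[str], prefix_tokens: int = 32):
        self.replicas = replicas
        self.prefix_tokens = prefix_tokens

    def route(self, prompt: str) -> str:
        # Hash only the leading words so requests sharing a system prompt
        # or few-shot prefix land on the same replica (better cache reuse).
        prefix = " ".join(prompt.split()[: self.prefix_tokens])
        digest = hashlib.sha256(prefix.encode()).digest()
        idx = int.from_bytes(digest[:8], "big") % len(self.replicas)
        return self.replicas[idx]

router = PrefixAffinityRouter(["gpu-0", "gpu-1", "gpu-2"])
shared = "You are a helpful assistant. " * 8  # long shared system prompt
a = router.route(shared + "Question one?")
b = router.route(shared + "Question two?")  # same prefix -> same replica
```

The design trade-off the real project would tackle: pure affinity maximizes cache hits but can hot-spot a replica, so the load balancer must blend prefix affinity with queue-depth signals.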

Total compensation for this role also includes meaningful equity in a fast-growing startup, along with a competitive salary and comprehensive benefits package. Base salary is determined by a range of factors including individual qualifications, experience, skills, interview performance, market data, and work location. The listed salary range is intended as a guideline and may be adjusted.

Base Pay Range (Plus Equity)

$175,000 - $220,000 USD

Why Fireworks AI?

  • Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
  • Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
  • Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
  • Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.

Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.


About The Company

Redwood City, California, United States (Hybrid)

