Member Of Technical Staff (Winter Intern) at Wafer (S25)

Wafer

Job Summary

Join our team to build the future of inference, GPU optimization, and AI infrastructure. You'll work directly with the team to define our technical direction and build the core systems that power our GPU optimization platform. This role involves building scalable infrastructure for AI model training and inference, and leading technical and architectural decisions.

Must Have

  • Build scalable infrastructure for AI model training and inference
  • Lead technical decisions and architecture choices
  • Deep understanding of GPU architectures, CUDA programming, and parallel computing patterns
  • Proficiency in PyTorch, TensorFlow, or JAX, particularly for GPU-accelerated workloads
  • Strong grounding in large language models (training, fine-tuning, prompting, evaluation)
  • Proficiency in C++ and Python, and ideally Rust or Go, for building tooling around CUDA

Good to Have

  • Publications or open-source contributions in GPU computing for inference, or in ML/AI for code, are a plus
  • Hands-on experience with large-scale experiments, benchmarking, and performance tuning

Perks & Benefits

  • Visa sponsorship available

Job Description

Join our team to build the future of inference, GPU optimization, and AI infrastructure. You'll work directly with the team to define our technical direction and build the core systems that power our GPU optimization platform.

What You'll Do

  • Build scalable infrastructure for AI model training and inference
  • Lead technical decisions and architecture choices

What We Look For

Core Technical Expertise

  • GPU Fundamentals: Deep understanding of GPU architectures, CUDA programming, and parallel computing patterns.
  • Deep Learning Frameworks: Proficiency in PyTorch, TensorFlow, or JAX, particularly for GPU-accelerated workloads.
  • LLM/AI Knowledge: Strong grounding in large language models (training, fine-tuning, prompting, evaluation).
  • Systems Engineering: Proficiency in C++ and Python, and ideally Rust or Go, for building tooling around CUDA.

Ideal Background

  • Publications or open-source contributions in GPU computing for inference, or in ML/AI for code, are a plus.
  • Hands-on experience with large-scale experiments, benchmarking, and performance tuning.

Skills Required For This Role

C++, CUDA, Rust, PyTorch, Deep Learning, Python, TensorFlow