This role involves shaping the future of AI/ML hardware acceleration by developing cutting-edge TPU technology. You'll be part of a team creating custom silicon for Google's TPUs, contributing to products used by millions. Responsibilities include defining power management schemes, writing power-intent specifications in UPF (Unified Power Format), estimating and tracking power throughout project phases, running power optimization tools, and collaborating with cross-functional teams. The work focuses on ASIC development for machine learning computation in data centers, applying micro-architecture knowledge to solve design problems and evaluating design options against performance, power, and area trade-offs. You will also work on ASIC design verification, synthesis, and timing analysis, with a particular focus on the TPU architecture and its integration within AI/ML systems.
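
For context on the power estimation and tracking mentioned above, early-phase estimates typically start from the standard dynamic-power relation P_dyn ≈ α·C·V²·f plus leakage. The sketch below illustrates that kind of back-of-envelope, block-level roll-up; the block names, capacitances, activity factors, and leakage values are illustrative assumptions only, not figures from any TPU design.

```python
# Illustrative block-level power roll-up using P_dyn = alpha * C * V^2 * f.
# All block names and numbers are made-up placeholders for illustration.

def dynamic_power_watts(cap_f, voltage_v, freq_hz, alpha):
    """Dynamic power from switched capacitance (F), supply (V), clock (Hz), activity factor."""
    return alpha * cap_f * voltage_v ** 2 * freq_hz

def total_power_watts(blocks, voltage_v, freq_hz):
    """Sum dynamic plus leakage power across a list of per-block estimates."""
    total = 0.0
    for block in blocks:
        total += dynamic_power_watts(block["cap_f"], voltage_v, freq_hz, block["alpha"])
        total += block["leakage_w"]
    return total

if __name__ == "__main__":
    # Hypothetical blocks: switched capacitance (F), activity factor, leakage (W).
    blocks = [
        {"name": "mac_array", "cap_f": 2.0e-9, "alpha": 0.30, "leakage_w": 0.5},
        {"name": "sram",      "cap_f": 1.2e-9, "alpha": 0.10, "leakage_w": 0.8},
        {"name": "noc",       "cap_f": 0.6e-9, "alpha": 0.20, "leakage_w": 0.2},
    ]
    estimate = total_power_watts(blocks, voltage_v=0.75, freq_hz=1.0e9)
    print(f"Estimated power: {estimate:.2f} W")
```

In practice these spreadsheet-style estimates are refined over the project as RTL, UPF, and gate-level data become available, which is what "tracking power throughout project phases" refers to.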