Software Engineer, Tensor Processing Units Compiler

Google, City of Westminster

Salary Not Specified

  • Full time
  • Permanent
  • Onsite working

Posted 7 Nov

Closing date: Not specified

Job Ref: 4a7453f7c66f4abf853d3ac5e8514530

Full Job Description

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic about taking on new problems across the full stack as we continue to push technology forward.

Our team builds the compiler that enables Tensor Processing Units (TPUs), Google's in-house custom-designed processors, to accelerate machine learning and other scientific computing workloads for both internal Google customers and external Cloud customers. The team offers opportunities up and down the compiler stack, working on Low Level Virtual Machine (LLVM) as well as the Multi-Level Intermediate Representation (MLIR) middle-end. In this role, you will work on the MLIR/LLVM-based TPU compiler: supporting new workloads, optimizing for new models and new characteristics, and supporting new TPU hardware across multiple generations.

Google Cloud accelerates organizations' ability to digitally transform their business with the best infrastructure, platform, industry solutions, and expertise. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, all on the cleanest cloud in the industry. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

  • Contribute to a compiler for a novel processor designed to accelerate machine learning workloads. Compile high-performance implementations of operations at a distributed scale.
  • Work closely with users of TPUs to improve performance/efficiency and hardware designers to co-design future processors.
  • Investigate high-level representations to effectively program large-scale, distributed, and heterogeneous systems.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also https://careers.google.com/eeo/ and https://careers.google.com/jobs/dist/legal/OFCCPEEOPost.pdf. If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form: https://goo.gl/forms/aBt6Pu71i1kzpLHe2.

  • Bachelor's degree or equivalent practical experience.
  • 3 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.
  • 2 years of experience working with CUDA C++ application development and 1 year of experience with Native Code, Just-In-Time (JIT), Cross, Source-to-Source, or any other type of compiler.
  • 2 years of experience with data structures or algorithms, with experience with machine learning algorithms and tools (e.g., TensorFlow), artificial intelligence, deep learning, or natural language processing.
  • Master's degree or PhD in Computer Science or related technical fields.
  • Experience with performance, large-scale systems data analysis, visualization tools, or debugging.
  • Experience with debugging correctness and performance issues at all levels of the stack.
  • Experience with optimizations in mid-level and low-level architecture.
  • Experience with hardware/software co-design.
  • Experience integrating low-level GPU/CUDA work into higher-level frameworks (e.g., TF, JAX, PyTorch).