Note: By applying to this position, you will have an opportunity to share your preferred working location from the following: Tel Aviv, Israel; Haifa, Israel.
Be part of a team that pushes boundaries, developing custom silicon solutions that power the future of Google's direct-to-consumer products. You'll contribute to the innovation behind products loved by millions worldwide. Your expertise will shape the next generation of hardware experiences, delivering unparalleled performance, efficiency, and integration.
In this role, you will work with internal system teams and the System-on-Chip (SoC) Architecture team to develop an understanding of the SoC, its performance metrics, benchmarks and measurement tools, and the available optimization knobs. You will define methods and technologies to model SoC performance at different accuracy levels in order to support architectural exploration and decision-making. In addition, you will correlate performance projections with measured post-silicon data.
The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers, and the billions of people who use Google services around the world.
We prioritize security, efficiency, and reliability across everything we do, from developing our latest TPUs to running a global network, as we shape the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.