The Amazon Search team owns the software that powers Search - a critical customer-focused feature of Amazon.com. Whenever you visit an Amazon site anywhere in the world, it's our technology that delivers you outstanding search results. Our services are used by millions of Amazon customers every day.
The Search Engine Infrastructure team is responsible for the large-scale distributed systems that power those results. We design, build, and operate high-performance, fault-tolerant services that apply the latest technologies to solve customer problems. As part of this vision, we are building the infrastructure to enable next-generation deep-learning-based relevance ranking that can be deployed quickly and reliably, with the ability to analyze model and system performance in production. We focus on high availability, frugally serving billions of requests per day at low latency. We work alongside applied scientists and ML engineers to make this happen.
Joining this team, you’ll experience the benefits of working in a dynamic, entrepreneurial environment, while leveraging the resources of Amazon.com (AMZN), one of the world's leading internet companies. We provide a highly customer-centric, team-oriented environment in our offices located in Palo Alto, California, with a team in San Francisco, California.
Key job responsibilities
As a senior engineer on this team, you will:
1. Evolve a sophisticated deep-learning ranking system and feature store deployed across thousands of machines in AWS, serving billions of queries at tens of millisecond latencies.
2. Immerse yourself in imagining and providing cutting-edge solutions to large-scale information retrieval and machine learning (ML/DL) problems.
3. Have a relentless focus on scalability, latency, performance, robustness, and cost trade-offs, especially those present in highly virtualized, elastic, cloud-based environments.
4. Conduct and automate performance testing of the model serving system to evaluate different hardware options (including CPUs, GPUs, and specialized accelerators such as AWS Inferentia2), model architectures, and serving configurations.
5. Lead implementation and enhancement of a rapid experimentation framework to test ranking hypotheses.
6. Create mechanisms to ensure models work as expected in production.
7. Work closely with applied scientists to determine the requirements for deploying ranking models in production environments.
8. Work closely with Principal Engineers in Amazon Search to set the technical vision for this team.
We are open to hiring candidates to work out of one of the following locations:
Palo Alto, CA, USA | San Francisco, CA, USA
Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $134,500/year in our lowest geographic market up to $261,500/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.