Solutions Architect, Deep Learning Inference

NVIDIA

NVIDIA is seeking outstanding AI Solution Architects to assist and support customers who are building solutions with our newest AI technology. At NVIDIA, our solution architects work across many teams and enjoy helping customers with the latest Accelerated Computing and Deep Learning software and hardware platforms. We're looking to grow our company and build our teams with the smartest people in the world. Would you like to join us at the forefront of technological advancement? You will become a trusted technical advisor to our customers and work on exciting projects and proofs of concept focused on inference for Deep Learning (DL) and Generative AI (GenAI). You will also collaborate with a diverse set of internal teams on performance analysis and modeling of inference software. You should be comfortable working in a dynamic environment and have experience with GenAI, DL, and GPU technologies. This role is an excellent opportunity to work in an interdisciplinary team with the latest technologies at NVIDIA!

What You Will Be Doing

  • Partnering with other solution architects and with engineering, product, and business teams to understand their strategies and technical needs and to help define high-value solutions
  • Engaging dynamically with developers, scientific researchers, and data scientists, which will give you experience across a range of technical areas
  • Strategically partnering with lighthouse customers and industry-specific solution partners targeting our computing platform
  • Analyzing the performance and power efficiency of deep learning inference workloads
  • Some travel to conferences and customers may be required

What We Need To See

  • BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, other Engineering or related fields (or equivalent experience)
  • 5+ years of hands-on experience with Deep Learning frameworks such as PyTorch and TensorFlow
  • Strong fundamentals in programming, optimizations and software design, especially in Python
  • Excellent knowledge of theory and practice of DL and GenAI inference
  • Excellent presentation, communication and collaboration skills
  • Desire to be involved in multiple diverse and creative projects

Ways To Stand Out From The Crowd

  • Prior experience with DL training at scale, or with deploying or optimizing DL inference in production
  • Experience with NVIDIA GPUs and software libraries such as NVIDIA Triton Inference Server, TensorRT, and TensorRT-LLM
  • Excellent C/C++ programming skills, including debugging, profiling, code optimization, performance analysis, and test design
  • Familiarity with parallel programming and distributed computing platforms

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most hard-working and talented people in the world working for us. If you're creative and passionate about developing cloud services, we want to hear from you!

The base salary range is 144,000 USD - 270,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
