NVIDIA is seeking a sharp, innovative, and hands-on Architect to help shape the future of LLM inference at scale. Join our dynamic E2E Architecture group, where we build cutting-edge systems powering the next generation of generative AI workloads. In this role, you will work across software and hardware domains to design and optimize inference infrastructure for large language models running on some of the most advanced GPU clusters in the world.
You’ll help define how AI models are deployed and scaled in production, driving decisions on everything from memory orchestration and compute scheduling to inter-node communication and system-level optimizations. This is an opportunity to work with top engineers, researchers, and partners across NVIDIA and to shape how generative AI reaches real-world applications.
What You’ll Be Doing:
What We Need to See:
Ways to Stand Out from the Crowd:
NVIDIA is widely considered one of the most desirable places to work in tech. We are passionate about what we do and committed to fostering a culture of excellence, innovation, and collaboration. If you’re excited to help define how the world runs AI at scale, this role is for you.