Software Engineer, SystemML - Scaling / Performance

Meta

In this role, you will be a member of the Network.AI Software team within the larger DC networking organization. The team develops and owns the software stack around NCCL (NVIDIA Collective Communications Library), which enables multi-GPU, multi-node data communication through HPC-style collectives. NCCL is integrated into PyTorch and sits on the critical path of multi-GPU distributed training; in other words, nearly every distributed GPU-based ML workload in Meta production goes through the software stack this team owns. At a high level, the team aims to let Meta-wide ML products and innovations leverage our large-scale GPU training and inference fleet through an observable, reliable, and high-performance distributed AI/GPU communication stack. One current focus is building benchmarks, performance tuners, and software layers around NCCL and PyTorch that improve full-stack distributed ML performance (e.g., large-scale GenAI/LLM training) from the trainer down to the inter-GPU and network communication layer. We are seeking engineers to tech-lead GenAI/LLM scaling and performance.
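
For context, a minimal sketch of how PyTorch typically drives NCCL for a collective operation. The launch workflow (torchrun and its environment variables) is the standard open-source one and is assumed here; this is illustrative only, not the team's internal stack.

    # Minimal sketch: PyTorch distributed training over the NCCL backend.
    # Assumes a multi-GPU host launched via `torchrun --nproc_per_node=N`,
    # which sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    import os
    import torch
    import torch.distributed as dist

    def main():
        # NCCL provides the inter-GPU collectives behind PyTorch distributed.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # One all-reduce: the HPC-style collective used for data-parallel
        # gradient synchronization.
        t = torch.ones(1, device="cuda")
        dist.all_reduce(t, op=dist.ReduceOp.SUM)  # t now holds WORLD_SIZE

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()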

Software Engineer, SystemML - Scaling / Performance Responsibilities

  • Tech-leading overall distributed ML enablement and performance on Meta's large-scale GPU training infrastructure, with a focus on GenAI/LLM scaling

Minimum Qualifications

  • Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.
  • Proven C/C++ and Python programming skills
  • Proven track record of leading successful projects
  • Effective leadership and communication skills

Preferred Qualifications

  • PhD in Computer Science, Computer Engineering, or relevant technical field
  • Experience with NCCL and distributed GPU performance analysis on RoCE/InfiniBand
  • Experience working with DL frameworks such as PyTorch, Caffe2, or TensorFlow
  • Experience with both data-parallel and model-parallel training, such as Distributed Data Parallel (DDP), Fully Sharded Data Parallel (FSDP), Tensor Parallel, and Pipeline Parallel (see the sketch after this list)
  • Experience developing AI frameworks and trainers to accelerate large-scale distributed deep learning models
  • Experience in HPC and parallel computing
  • Knowledge of GPU architectures and CUDA programming
  • Knowledge of ML, deep learning, and LLMs
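
To make the parallelism qualification above concrete, here is a minimal sketch of FSDP, one of the strategies named in the list. It assumes an NCCL process group is already initialized (as in the earlier sketch) and uses a toy model; it is illustrative, not a reference implementation.

    # Minimal sketch: Fully Sharded Data Parallel (FSDP). Assumes an NCCL
    # process group is already initialized via torchrun (see earlier sketch).
    import torch
    import torch.nn as nn
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    model = nn.Sequential(
        nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # all-gathering weights just in time for each forward/backward pass.
    sharded = FSDP(model)
    opt = torch.optim.AdamW(sharded.parameters(), lr=1e-4)

    x = torch.randn(8, 1024, device="cuda")
    loss = sharded(x).sum()
    loss.backward()  # gradients are reduce-scattered across ranks
    opt.step()

By contrast, DDP replicates the full model on every rank and all-reduces gradients, while tensor and pipeline parallelism split individual layers or layer groups across devices; FSDP trades extra communication for a much smaller per-GPU memory footprint.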

About Meta

Meta builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology. People who choose to build their careers by building with us at Meta help shape a future that will take us beyond what digital connection makes possible today—beyond the constraints of screens, the limits of distance, and even the rules of physics.

Meta is committed to providing reasonable support (called accommodations) in our recruiting processes for candidates with disabilities, long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support. If you need support, please reach out to accommodations-ext@fb.com.

$177,008/year to $251,000/year + bonus + equity + benefits

Individual pay is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base salary, Meta offers benefits. Learn more about benefits at Meta.
