Research Scientist Graduate (High-Performance Computing / Inference Optimization - Vision AI Platform - San Jose) - 2025 Start (PhD)

ByteDance


Team Introduction

The Doubao (Seed) Vision AI Platform team builds the end-to-end infrastructure for Seed's vision-based large model development and drives efficiency across the full model lifecycle: data pipeline construction, training, and evaluation data delivery for visual large models such as VLM, VGFM, and T2I. The team's scope also covers large-scale training stability and acceleration, as well as large-model inference and multi-machine, multi-GPU deployment.

We are looking for talented individuals to join our team in 2025. As a graduate, you will have unparalleled opportunities to kickstart your career, pursue bold ideas, and explore limitless growth. Co-create a future driven by your inspiration with ByteDance.

Successful candidates must be able to commit to an onboarding date by end of year 2025.

We will prioritize candidates who are able to commit to the company start dates. Please state your availability and graduation date clearly in your resume.

Applications will be reviewed on a rolling basis. We encourage you to apply early.

Candidates can apply for a maximum of TWO positions and will be considered for jobs in the order in which they applied. The application limit applies to ByteDance and its affiliates' jobs globally.

Responsibilities:

1. Design and develop next-generation large model inference engines, optimizing GPU cluster performance for image/video generation and multimodal models to achieve industrial-grade low-latency & high-throughput deployment.

2. Lead inference optimization including CUDA/Triton kernel development, TensorRT/TRT-LLM graph optimization, distributed inference strategies, quantization techniques, and PyTorch-based compilation (torch.compile).

3. Build GPU inference acceleration stack with multi-GPU collaboration, PCIe optimization, and high-concurrency service architecture design.

4. Collaborate with algorithm teams on performance bottleneck analysis, software-hardware co-design for vision model deployment, and AI infrastructure ecosystem development.
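To make the quantization work mentioned above concrete, here is a minimal, framework-free sketch of symmetric per-tensor int8 post-training quantization (PTQ). The function names and the plain-list representation are illustrative assumptions, not part of any ByteDance stack; production PTQ operates on tensors with calibration data and per-channel scales.

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 PTQ: scale chosen from the absolute max."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax > 0 else 1.0  # guard against all-zero input
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Map int8 codes back to approximate float values."""
    return [v * scale for v in q]
```

The round-trip error of this scheme is bounded by half the scale, which is why calibrating the scale on representative activations matters in practice.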

Qualifications

Minimum Qualifications:

1. Bachelor's/Master's degree or above in Computer Science, EE, or related fields.

2. Proficient in C++/Python and high-performance coding.

3. Expertise in ≥1 domains: GPU programming (CUDA/Triton/TensorRT), model quantization (PTQ/QAT), parallel computing (multi-GPU/multi-node inference), or compiler optimization (TVM/MLIR/XLA/torch.compile).

4. Deep understanding of Transformer architectures and LLM/VLM/Diffusion model optimization.
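As a toy illustration of the multi-GPU parallel-computing item above, the following CPU-only sketch simulates column-parallel (tensor-parallel) execution of a linear layer: the weight matrix's columns are sharded across simulated devices, each computes its output slice, and an all-gather concatenates the results. All names and the list-of-lists representation are illustrative assumptions.

```python
def matvec(x, w):
    """Dense x @ w for x: [m], w: m-by-n given as a list of rows."""
    return [sum(x[i] * w[i][j] for i in range(len(x))) for j in range(len(w[0]))]

def shard_columns(w, n_dev):
    """Split w's columns evenly across n_dev simulated devices."""
    per = len(w[0]) // n_dev
    return [[row[d * per:(d + 1) * per] for row in w] for d in range(n_dev)]

def column_parallel_matvec(x, w, n_dev):
    """Each 'device' computes its column slice of the output independently."""
    partials = [matvec(x, shard) for shard in shard_columns(w, n_dev)]
    out = []
    for p in partials:
        out.extend(p)  # all-gather: concatenate slices in device order
    return out
```

Column sharding needs no reduction, only a concatenation; the complementary row-sharded scheme instead requires an all-reduce sum of partial outputs, which is the communication pattern real tensor-parallel inference engines overlap with compute.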

Preferred Qualifications:

1. Experience with large-scale inference systems, vLLM/TGI customization, or advanced quantization/sparsity techniques.
