Team Introduction
The Seed Vision AI Platform team builds end-to-end infrastructure and drives efficiency improvements for Seed's vision-based large model development. This covers data pipeline construction, training and evaluation data delivery, and full-lifecycle efficiency enhancement for visual large models such as VLM, VGFM, and T2I, as well as large-scale training stability and acceleration, large model inference, and multi-machine, multi-GPU deployment.
Responsibilities:
1. Design and develop next-generation large model inference engines, optimizing GPU cluster performance for image/video generation and multimodal models to achieve industrial-grade low-latency, high-throughput deployment.
2. Lead inference optimization including CUDA/Triton kernel development, TensorRT/TRT-LLM graph optimization, distributed inference strategies, quantization techniques, and PyTorch-based compilation (torch.compile).
3. Build a GPU inference acceleration stack, covering multi-GPU collaboration, PCIe optimization, and high-concurrency service architecture design.
4. Collaborate with algorithm teams on performance bottleneck analysis, software-hardware co-design for vision model deployment, and AI infrastructure ecosystem development.
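As a flavor of the high-concurrency serving work described above, the sketch below shows a minimal asyncio dynamic-batching loop: concurrent requests are collected into a single batch per model step. The `run_model_batch` coroutine is a hypothetical stand-in for a real batched GPU forward pass, and the batch-size/wait parameters are illustrative, not values used by the team.

```python
import asyncio

async def run_model_batch(prompts):
    # Hypothetical stand-in for a real batched GPU forward pass.
    await asyncio.sleep(0.01)
    return [p.upper() for p in prompts]

class DynamicBatcher:
    """Collect concurrent requests into one batch per model step."""

    def __init__(self, max_batch=8, max_wait=0.005):
        self.queue = asyncio.Queue()
        self.max_batch = max_batch   # cap on requests per batch
        self.max_wait = max_wait     # how long to wait to fill a batch

    async def infer(self, prompt):
        # Each request gets a future resolved when its batch finishes.
        fut = asyncio.get_running_loop().create_future()
        await self.queue.put((prompt, fut))
        return await fut

    async def serve(self):
        while True:
            prompt, fut = await self.queue.get()
            batch = [(prompt, fut)]
            # Drain the queue briefly to fill the batch.
            try:
                while len(batch) < self.max_batch:
                    item = await asyncio.wait_for(self.queue.get(), self.max_wait)
                    batch.append(item)
            except asyncio.TimeoutError:
                pass
            outputs = await run_model_batch([p for p, _ in batch])
            for (_, f), out in zip(batch, outputs):
                f.set_result(out)

async def main():
    batcher = DynamicBatcher()
    server = asyncio.create_task(batcher.serve())
    # Four concurrent requests end up coalesced into one model step.
    results = await asyncio.gather(*(batcher.infer(f"req{i}") for i in range(4)))
    server.cancel()
    return results

print(asyncio.run(main()))  # -> ['REQ0', 'REQ1', 'REQ2', 'REQ3']
```

Production engines (e.g. vLLM's continuous batching) are far more involved, but the same queue-drain-dispatch shape underlies high-throughput inference serving.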
Minimum Qualifications:
1. Bachelor's/Master's or above in Computer Science/EE/related fields.
2. Proficient in C++/Python, with strong high-performance programming skills.
3. Expertise in ≥1 domains: GPU programming (CUDA/Triton/TensorRT), model quantization (PTQ/QAT), parallel computing (multi-GPU/multi-node inference), or compiler optimization (TVM/MLIR/XLA/torch.compile).
4. Deep understanding of Transformer architectures and LLM/VLM/Diffusion model optimization.
Preferred Qualifications:
Experience with large-scale inference systems, vLLM/TGI customization, or advanced quantization/sparsity techniques.
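To make the quantization (PTQ) expertise mentioned above concrete, here is a minimal pure-Python sketch of affine (asymmetric) int8 post-training quantization: compute a scale and zero-point from the observed float range, quantize, and check the round-trip error. The weight values are illustrative assumptions, not real model data.

```python
def quantize_params(values, num_bits=8):
    """Compute affine (asymmetric) scale and zero-point for a tensor.

    Maps the observed float range onto the signed integer range,
    e.g. [-128, 127] for int8. The range is widened to include 0 so
    that zero is exactly representable.
    """
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, num_bits=8):
    # Round to the nearest integer grid point, then clamp to range.
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.62, 0.0, 0.35, 1.27]  # illustrative float weights
scale, zp = quantize_params(weights)
q = quantize(weights, scale, zp)
approx = dequantize(q, scale, zp)
err = max(abs(a - b) for a, b in zip(weights, approx))
assert err < scale  # round-trip error stays below one quantization step
```

Per-channel scales, calibration over activation statistics, and QAT build on this same scale/zero-point mapping.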