About Doubao (Seed)
Founded in 2023, the ByteDance Doubao (Seed) Team is dedicated to pioneering advanced AI foundation models. Our goal is to lead cutting-edge research and drive technological and societal advancement.
With a strong commitment to AI, we pursue research spanning deep learning, reinforcement learning, language, vision, audio, AI infrastructure, and AI safety. Our team has labs and research positions across China, Singapore, and the US.
Leveraging substantial data and computing resources, and through continued investment in these domains, we have developed a proprietary general-purpose model with multimodal capabilities. In the Chinese market, Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and are available to external enterprise clients via Volcano Engine. Today, the Doubao app stands as the most widely used AIGC application in China.
Why Join Us
Creation is the core of ByteDance's purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible. Together, we inspire creativity and enrich life, a mission we work towards achieving every day. To us, every challenge, no matter how ambiguous, is an opportunity: to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always. At ByteDance, we create together and grow together. That's how we drive impact, for ourselves, our company, and the users we serve. Join us.
Team Introduction
The Doubao (Seed) Vision AI Platform team builds the end-to-end infrastructure for Seed's vision-based large model development and improves efficiency across the full model lifecycle, including data pipeline construction, training and evaluation data delivery, and lifecycle efficiency enhancements for visual large models such as VLM, VGFM, and T2I. The team's scope also covers large-scale training stability and acceleration, as well as large model inference and multi-node, multi-GPU deployment.
Responsibilities:
1. Design and develop next-generation large model inference engines, optimizing GPU cluster performance for image/video generation and multimodal models to achieve industrial-grade low-latency & high-throughput deployment.
2. Lead inference optimization including CUDA/Triton kernel development, TensorRT/TRT-LLM graph optimization, distributed inference strategies, quantization techniques, and PyTorch-based compilation (torch.compile).
3. Build the GPU inference acceleration stack, covering multi-GPU collaboration, PCIe optimization, and high-concurrency service architecture design.
4. Collaborate with algorithm teams on performance bottleneck analysis, software-hardware co-design for vision model deployment, and AI infrastructure ecosystem development.
Minimum Qualifications:
1. Bachelor's or Master's degree (or above) in Computer Science, Electrical Engineering, or a related field.
2. Proficiency in C++/Python and high-performance programming.
3. Expertise in at least one of the following domains: GPU programming (CUDA/Triton/TensorRT), model quantization (PTQ/QAT), parallel computing (multi-GPU/multi-node inference), or compiler optimization (TVM/MLIR/XLA/torch.compile).
4. Deep understanding of Transformer architectures and LLM/VLM/Diffusion model optimization.
Preferred Qualifications:
Experience with large-scale inference systems, vLLM/TGI customization, or advanced quantization/sparsity techniques.
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
ByteDance Inc. is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://shorturl.at/cdpT2