Team Introduction
Welcome to the Doubao Vision team, where we spearhead multimodal foundation models for visual understanding and visual generation. Our mission is to solve the visual intelligence problem for AI. We conduct cutting-edge research in areas such as vision and language, large vision models, and generative foundation models. The team is a mix of experienced research scientists and engineers, aiming to push the research boundaries of foundation models and apply our technologies to our rich application scenarios, creating a feedback loop that in turn helps improve our foundation technologies. Join us in shaping the future of AI technologies and revolutionizing our product experience for global users.
We are looking for talented individuals to join us for a Student Researcher opportunity in 2025. Student Researcher opportunities at ByteDance aim to offer students industry exposure and hands-on experience. Turn your ambitions into reality as your inspiration brings infinite opportunities at ByteDance.
The Student Researcher position provides unique opportunities that go beyond the constraints of our standard internship program, allowing for flexibility in duration, time commitment, and location of work.
Candidates can apply to a maximum of two positions and will be considered for jobs in the order they apply. The application limit applies to ByteDance and its affiliates' jobs globally. Applications are reviewed on a rolling basis, so we encourage you to apply early.
Responsibilities:
1. Conduct cutting-edge research on autoregressive generative models that unify multiple modalities (images, video, text) into a single robust framework.
2. Implement novel model architectures with a strong emphasis on scalability and efficiency.
3. Utilize advanced deep learning frameworks to train and validate large-scale multimodal models on extensive datasets.
4. Drive the publication of research findings in top-tier conferences (e.g., CVPR, ECCV, ICCV, NeurIPS, ICLR, ICML, EMNLP, ACL) and high-impact journals.
Minimum Qualifications:
1. Currently pursuing a PhD in Computer Science, Computer Engineering, or a related technical discipline.
2. Hands-on research experience in multimodal understanding and/or generation (e.g., vision-language models, visual generative models, or related areas).
3. Publications in top-tier venues, such as CVPR, ECCV, ICCV, NeurIPS, ICLR, ICML, EMNLP, ACL, COLING, etc.
4. Proficient in Python and common deep learning frameworks. Demonstrated ability to implement, debug, and optimize complex architectures (autoregressive models, transformers, etc.).
5. Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.
Preferred Qualifications:
1. Ability to work and collaborate well with team members.
2. Strong engineering and coding skills, with a demonstrated capacity to develop algorithms and conduct experiments efficiently.
3. Experience in large-scale data processing and distributed training.
4. Experience with multimodal models.
5. Experience with diffusion models.