Seed LLM Global Data - Model Red-teaming Analyst

ByteDance

About the team

As a core member of our LLM Global Data Team, you'll be at the heart of our operations, gaining first-hand experience with the intricacies of training Large Language Models (LLMs) on diverse data sets. Through our carefully designed rotation program, you'll see how high-quality data is crafted and used.

Job Responsibilities

  • Design and drive comprehensive red-teaming projects aimed at uncovering vulnerabilities in multimodal systems. Coordinate efforts across internal teams and external collaborators, including academic researchers, third-party red teamers, and industry partners.
  • Systematically analyze red-teaming outputs to identify failure modes, behavioral inconsistencies, and safety risks. Translate these findings into actionable insights to inform harm mitigation strategies, model alignment techniques, and product safety improvements.
  • Work closely with model development, safety, and policy teams to ensure red-teaming insights are integrated into training data curation, model safety evaluation frameworks, and deployment practices.
  • Conduct research on the latest developments in AI safety, adversarial testing, red-teaming methodologies, and responsible AI practices across academia and industry. Proactively identify limitations in existing evaluation paradigms and propose novel approaches to stress-test models under real-world and edge-case scenarios.

Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases. This may include, but is not limited to, text, images, or videos depicting:

  • Hate speech or harassment
  • Self-harm or suicide-related content
  • Violence or cruelty
  • Child safety violations

Resources and resilience training will be provided to support employee well-being.

Qualifications

Minimum Qualifications

  • Bachelor's degree or higher in a relevant field (e.g., Computer Science, Engineering, Public Policy, or related disciplines).
  • Exceptional proficiency in both English and Mandarin, with strong written and verbal communication skills, in order to collaborate with internal teams and stakeholders across English- and Mandarin-speaking regions.
  • Demonstrated analytical thinking, with the ability to synthesize both quantitative and qualitative data to draw meaningful insights.
  • Solid project management capabilities and effective cross-functional communication skills.
  • Foundational understanding of large AI models and familiarity with key industry practices in AI safety and responsible AI development.

Preferred Qualifications

  • Interest in or experience with reviewing technical literature, such as model or system cards, red-teaming reports, or AI alignment research.
  • Self-motivated, intellectually curious, detail-oriented, and collaborative, with a strong sense of ownership.
  • Awareness of emerging safety and alignment challenges related to frontier AI systems and high-capability models.