Seed LLM Global Data - Model Safety Analyst

ByteDance

About the team

As a core member of our LLM Global Data Team, you'll be at the heart of our operations, gaining first-hand experience with the intricacies of training Large Language Models (LLMs) on diverse data sets. Through our carefully designed rotation program, you'll see how high-quality data is meticulously crafted and put to use.

Job Responsibilities

  • Conduct research on the latest developments in AI safety across academia and industry. Proactively identify limitations in existing evaluation paradigms and propose novel approaches to test models under real-world and edge-case scenarios.
  • Design and continuously refine safety evaluations for multimodal models. Define and implement robust evaluation metrics to assess safety-related behaviors, failure modes, and alignment with responsible AI principles.
  • Conduct a thorough analysis of safety evaluation results to surface safety issues stemming from model training, fine-tuning, or product integration. Translate these findings into actionable insights to inform model iteration and product design improvements.
  • Partner with cross-functional stakeholders to build scalable safety evaluation workflows. Help establish feedback loops that continuously inform model development and risk mitigation strategies.
  • Manage end-to-end project lifecycles, including scoping, planning, execution, and delivery. Effectively allocate team resources and coordinate efforts across functions to meet project goals and timelines.

Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases. This may include, but is not limited to, text, images, or videos depicting:

  • Hate speech or harassment
  • Self-harm or suicide-related content
  • Violence or cruelty
  • Child safety violations

Resources and resilience training will be provided to support employee well-being.

Qualifications

Minimum Qualifications

  • Bachelor's degree or higher, preferably in AI policy, computer science, engineering, journalism, international relations, law, regional studies, or a related discipline.
  • Exceptional written and verbal communication skills in English.
  • Strong analytical skills, with the ability to interpret both qualitative and quantitative data and translate them into clear insights.
  • Proven project management abilities, with experience leading cross-functional initiatives in dynamic, fast-paced environments.
  • Creative problem-solving mindset, with comfort working under ambiguity and leveraging tools and technology to improve processes and outputs.

Preferred Qualifications

  • Professional experience in AI safety, trust & safety, risk consulting, or risk management. Experience working at or with AI companies is highly desirable.
  • Intellectually curious, self-motivated, detail-oriented, and team-oriented.
  • Deep interest in emerging technologies, user behavior, and the human impact of AI systems. Enthusiasm for learning from real-world case studies and applying insights in a high-impact setting.
