Responsibilities
About the Team
The Applied Machine Learning Enterprise AI Foundation US team drives the design, development, and operation of MaaS solutions in the US and other regions outside of mainland China. We are a product team that builds full-stack, end-to-end solutions covering text and multi-modality LLM algorithms, prompt engineering, LLM model alignment, intelligent agents, and more, for a wide range of application domains. We build products that facilitate the development of LLM applications and their deployment into production. We are actively seeking talented Software and Algorithm Engineers specializing in prompt engineering, RAG, multi-agent systems, post-training, RL, and robotics to join our team.
In this role, you will be at the forefront of cutting-edge research and development of advanced techniques for LLM solutions, including model fine-tuning, evaluation, inference, alignment, prompt engineering, and intelligent agents, ensuring we can run current and future models at increasingly high scale with unprecedented efficiency.
Responsibilities:
- Lead the creation of next-generation, high-capacity LLM platforms and innovative products.
- Work closely with cross-functional teams to plan and implement projects harnessing LLMs for diverse purposes and vertical domains.
- Maintain a deep passion for contributing to the success of large models in this innovative and fast-paced team environment.
Qualifications
Minimum Qualifications:
- Ph.D. or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Strong understanding of cutting-edge LLM research (e.g., long context, multi-modality, alignment research, the agent ecosystem) and practical expertise in effectively implementing these advanced systems.
- Proficiency in programming languages such as Python, Rust, or C++ and a track record of working with deep learning frameworks (e.g., PyTorch, DeepSpeed, Megatron, vLLM).
- Strong understanding of distributed computing frameworks and of performance tuning and verification for training, fine-tuning, and inference. Familiarity with PEFT, RL, MoE, CoT, or LangChain is a plus.
Preferred Qualifications:
- Excellent problem-solving skills and a creative mindset to address complex AI challenges. Demonstrated ability to drive research projects from idea to implementation, producing tangible outcomes.
- Published research papers or contributions to the LLM community would be a significant plus.
- Experience with inference tuning and inference acceleration; deep understanding of GPUs and/or other AI accelerators; experience with large-scale AI networks, PyTorch 2.0, and similar technologies.
- Experience with scheduling and orchestration for large-scale machine learning systems; familiarity with Kubernetes and Docker.
- Experience with evaluating AI systems and with LLM application and agent development is desirable.