Software Engineer Graduate (Data Arch - Data Ecosystem) - 2026 Start (BS/MS)

TikTok

Responsibilities

About the team: The TikTok Data Ecosystem Team designs and operates the storage solution for offline data in TikTok's recommendation system, which serves more than a billion users. Its primary objectives are system reliability, uninterrupted service, and seamless performance. The team builds storage and computing infrastructure that adapts to the diverse data sources and storage needs of the recommendation system, with the ultimate goal of delivering efficient, affordable data storage and easy-to-use data management tools for recommendation, search, and advertising.

We are looking for talented individuals to join our team in 2026. As a graduate, you will get opportunities to pursue bold ideas, tackle complex challenges, and unlock limitless growth. Launch your career where inspiration is infinite at TikTok.

Successful candidates must be able to commit to an onboarding date by the end of 2026. Please state your availability and graduation date clearly in your resume. Candidates may apply for a maximum of TWO positions and will be considered in the order of application; this application limit applies to TikTok and its affiliates' jobs globally. Applications are reviewed on a rolling basis - we encourage you to apply early.

Responsibilities:
1. Design and implement real-time and offline data architecture for large-scale recommendation systems.
2. Build scalable, high-performance streaming Lakehouse systems that power feature pipelines, model training, and real-time inference.
3. Collaborate with ML platform teams to support PyTorch-based model training workflows, and design efficient data formats and access patterns for large-scale samples and features.
4. Own core components of our distributed storage and processing stack, from file formats to stream compaction to metadata management.

Qualifications

Minimum Qualifications:
- Bachelor's degree or above (or expected by 2026) in Computer Science or a related technical field.
- Experience building large-scale distributed systems, preferably in storage, stream processing, or ML infrastructure.
- Familiarity with modern Lakehouse technologies such as Apache Paimon, Iceberg, Delta Lake, or Hudi, especially around incremental ingestion, schema evolution, and snapshot isolation.

Preferred Qualifications:
- Understanding of Apache Flink internals, with hands-on experience in state management, connectors, or UDFs.
- Experience designing and optimizing Flink + Paimon architectures for unified batch/stream processing.
- Familiarity with feature storage and training data pipelines, and their integration with PyTorch, especially for large-scale model training.
- Knowledge of columnar file formats (Parquet, ORC, Lance) and how they are used in feature engineering or ML data loading.
- Proficiency in Java/Scala/C++, with strong debugging and performance-tuning ability.
- Previous experience with Lakehouse metadata management, compaction scheduling, or data versioning is a plus.
- Knowledge of legacy data stores such as HBase/Kudu is a bonus but not required.
