Data Ops Engineer - Build By McKinsey

McKinsey & Company

Who You'll Work With

You’ll work in one of our offices in Greater China as part of our McKinsey Digital team.

McKinsey Digital combines unparalleled business knowledge with a world-class agile development process to offer distinctive support for enterprise IT enablement. As part of this group, you’ll join a global team working on everything from IT modernization and strategy to agile, cloud, cybersecurity, and digital transformation. You’ll typically work on projects across all industries and functions and will be fully integrated with the rest of our global firm.

Our development teams are small and flexible, and employ agile methodologies to quickly provide our clients with the solutions they need. We combine the latest open-source technologies with traditional enterprise software products.

What You'll Do

You will work extensively with data engineers, data architects, data governance and data operations teams to understand architecture, infrastructure, processes and data models.

You will also build and improve scalable data pipelines and processes, with a focus on data maintenance, data quality, data integration, and data automation.

You will support the design, development, and maintenance of various data platforms, and will design and develop data engineering assets and scalable engineering frameworks to meet clients’ data quality management needs. You will also have the opportunity to code, test, and document data models for reporting and analytics, and to use processes and automation to streamline real-time data processing and increase the reliability of data analytics.

Qualifications

  • Bachelor’s or master’s degree in information security, computer engineering, information systems, computer science, or another IT-related field
  • Strong experience building software applications and data-driven systems, with an in-depth understanding of the SDLC
  • 5+ years of experience in a DataOps-related role (e.g., data engineer, machine learning engineer, data platform administrator, data integration engineer), preferably with relevant experience on medium- and large-sized projects
  • Advanced knowledge of DevSecOps tooling, data structures and algorithms, networking, and operating systems, plus foundational experience with standard Unix/Linux systems, version control, etc.
  • Background in development, with practical experience in at least one general-purpose programming language (e.g., C++, Java, Scala, Python, Go, Node.js)
  • Experience with a variety of ETL tools and a proven ability to apply data pipeline principles and best practices to production workloads
  • Familiarity with big data computing engines and related components (e.g., HDFS, Hive, HBase, Flink, Spark, Kafka)
  • Familiarity with relational, non-relational, and distributed databases (e.g., PostgreSQL, Redis, Snowflake)
  • Experience with and interest in at least one cloud platform, such as AWS, Azure, AliCloud, or Tencent Cloud
  • Ability to communicate complex ideas effectively, both verbally and in writing, in English and Chinese
  • Comfortable traveling when required