Why Work at Lenovo
We are Lenovo. We do what we say. We own what we do. We WOW our customers.
Lenovo is a US$57 billion revenue global technology powerhouse, ranked #248 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high-performance computing and software-defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong Stock Exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY).
To find out more, visit www.lenovo.com and read about the latest news via our StoryHub.
Description and Requirements
The Job
- Translate data pipeline requirements into data pipeline designs; guide and direct the design by working closely with a range of stakeholders, including the rest of the architecture team, external developers, data consumers, data providers, and internal/external business users.
- Contribute to use case development, e.g. workshops to gather and validate business requirements.
- Model and design the ETL pipeline's data structures, storage, integration, integrity checks, and reconciliation. Standardize exception control and ensure good traceability during troubleshooting (an illustrative sketch follows this list).
- Document and write technical specifications for the functional and non-functional requirements of the solution.
- Design the data model and platform to allow managed growth of the data model, minimizing the risk and cost of change for a large-scale data platform.
- Analyze new data sources using a structured data quality evaluation approach, and work with stakeholders to understand the impact of integrating new data into existing pipelines and models.
- Bridge the gap between business requirements and ETL logic by troubleshooting data discrepancies and implementing scalable solutions.
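For illustration only, a minimal PL/SQL sketch of the kind of reconciliation, exception control, and audit traceability described above; the table and procedure names (stg_orders, dw_orders, etl_audit_log, reconcile_orders) are hypothetical and not taken from any existing Lenovo system:

    -- Illustrative only: hypothetical table and procedure names.
    CREATE OR REPLACE PROCEDURE reconcile_orders AS
      v_src_count NUMBER;
      v_tgt_count NUMBER;
    BEGIN
      -- Compare row counts between the staging source and the warehouse target.
      SELECT COUNT(*) INTO v_src_count FROM stg_orders;
      SELECT COUNT(*) INTO v_tgt_count FROM dw_orders;

      IF v_src_count <> v_tgt_count THEN
        -- Record the discrepancy in a central audit table for traceability.
        INSERT INTO etl_audit_log (run_time, check_name, detail)
        VALUES (SYSTIMESTAMP, 'ORDER_ROW_COUNT',
                'source=' || v_src_count || ' target=' || v_tgt_count);
      END IF;

      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        -- Standardized exception control: log the error, then re-raise.
        INSERT INTO etl_audit_log (run_time, check_name, detail)
        VALUES (SYSTIMESTAMP, 'ORDER_ROW_COUNT', 'error=' || SQLERRM);
        COMMIT;
        RAISE;
    END reconcile_orders;
    /

In practice, such checks would typically be parameterized per data source and scheduled as part of the pipeline's control framework.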
The Person
- Bachelor's degree (or higher) in mathematics, statistics, computer science, engineering, or a related field
- At least 5 years of IT experience, including 2 years on data migration and/or data warehouse pipeline projects, and at least 3 years of Oracle or SQL Server SQL development
- Strong technical understanding of data modelling, design, and architecture principles and techniques across master data, transactional data, and data warehousing
- Experience with stored procedures (PL/SQL) and SQL DDL/DML (a brief illustrative example follows this list)
- Power BI, Hadoop, Java Spring Boot, Docker, or OCP skills are an advantage
- Proficient in both spoken and written English and Chinese (Mandarin/Cantonese)
- Proactive, with good problem-solving, multitasking, and task-management skills
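As a rough indication of the stored-procedure and DDL/DML fluency expected, here is a small sketch in Oracle-style SQL; the table and column names (dim_product, stg_product) are hypothetical:

    -- DDL: define a hypothetical dimension table.
    CREATE TABLE dim_product (
      product_id   NUMBER        PRIMARY KEY,
      product_name VARCHAR2(200) NOT NULL,
      updated_at   DATE          DEFAULT SYSDATE
    );

    -- DML: upsert-style maintenance of the dimension from a staging table.
    MERGE INTO dim_product d
    USING stg_product s
       ON (d.product_id = s.product_id)
     WHEN MATCHED THEN
       UPDATE SET d.product_name = s.product_name, d.updated_at = SYSDATE
     WHEN NOT MATCHED THEN
       INSERT (product_id, product_name) VALUES (s.product_id, s.product_name);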
#LPS