Fuelling tomorrow – that is our vision. The mobility of the future needs innovative solutions. The mobility of the future needs people who can make things happen. The future of mobility needs: you.

Let’s shape the future of the energy industry together! 

For our central Data & Analytics Team at our location in Hamburg HafenCity, we are looking for a

Lead Data Engineer – Data Platform (m/f/d)

Your benefits:

  • Work-life balance: 38.75 working hours per week with 30 days annual leave
  • Freedom in how you work: flexible working hours and an individual remote work arrangement
  • Family-friendly: subsidized vacation camps for children, nanny support, etc.
  • Security and provisions: accident insurance that also covers private accidents, a generous employer contribution to your company pension scheme, and a Lifetime Account with opportunities to save for sabbaticals or early retirement
  • Health: healthy, subsidized meals in our employee restaurant, access to the free gym with sports offers, online sports courses, support in the event of personal problems, and free use of the Headspace app
  • Learning & Development: we support you on your individual career path
  • Networking: internal networks that spotlight all aspects of diversity, plus the opportunity to take part in various company events
  • Other benefits: profit participation, Christmas bonus, vacation allowance, travel allowance, etc.

Your purpose:

We are looking for an experienced Lead Data Engineer to oversee the development and management of sophisticated data processing systems in our Data Platform on Azure cloud. This role involves advancing data solutions on Databricks, optimizing data acquisition via Azure Data Factory, Logic Apps, Azure Functions, and Spark, and ensuring quality assurance for data pipelines. You'll be responsible for ensuring scalability and high-quality data processing within the Central Data Platform team.

Your expertise in Azure services, particularly Azure Data Factory and Databricks, will be vital in providing technical guidance and best practices to a team of data engineers. This role demands a deep technical understanding and the ability to effectively implement and maintain critical data solutions.

Key Responsibilities in this role:

  • Technical Guidance and Best Practices: Provide technical leadership and guidance to a team of data engineers, ensuring adherence to best practices in data engineering, cloud computing, and data security.
  • Streamlining Data Acquisition: Optimize the acquisition of data through Azure Data Factory, Logic Apps, and Azure Functions, streamlining the process to improve efficiency and accuracy.
  • Quality Assurance: Ensure the highest quality of data pipelines, including validation, error handling, and consistent performance across different data sources and processes.
  • Scalability and Performance: Oversee the scalability of data processing systems, ensuring they can handle increasing volumes of data and complex processing requirements while maintaining high performance.
  • Cost Optimization: Continuously monitor and optimize cloud resource usage and expenses, implementing cost-effective solutions without compromising on performance and scalability.
  • Automation and CI/CD: Implement and maintain continuous integration and continuous deployment (CI/CD) practices for data pipelines to enhance efficiency, reduce manual errors, and accelerate deployment processes.
  • Collaboration and Communication: Collaborate with cross-functional teams, including data scientists, business analysts, and IT, to align data engineering efforts with overall business objectives. Communicate complex technical concepts and project progress to both technical and non-technical stakeholders.

What helps you to fulfill this role:

  • A bachelor's or master's degree in Computer Science, Information Technology, Engineering, or a related field
  • 5+ years of proven experience in (laterally) leading data engineering teams and developing scalable data pipelines
  • Azure Cloud Services Mastery: In-depth knowledge of Azure Data Factory, Databricks, Azure Functions, Spark and Logic Apps, including setup, configuration, and optimization.
  • Advanced Data Engineering: Expertise in complex ETL process design, data modeling, data warehousing architectures, and data lake solutions.
  • Programming Proficiency: High-level skills in Python and SQL, and strong proficiency in object-oriented programming principles and practices.
  • Big Data Expertise: Comprehensive understanding of big data technologies, including Hadoop, Spark, and related distributed computing frameworks.
  • Data Quality Management: Advanced skills in data quality assurance, including developing robust validation frameworks, error tracking, and resolution methodologies.
  • Performance Optimization: Expertise in fine-tuning data processes for high performance and scalability, including resource management and query optimization.
  • Cost Efficiency in Cloud: Proficiency in managing and reducing cloud computing costs through resource optimization and cost-effective architecture design.
  • CI/CD and DevOps Practices: Strong experience in implementing CI/CD pipelines for data solutions, leveraging tools like Jenkins, Git, and Docker.
  • Data Architecture and Design: Deep understanding of data architecture frameworks, including data mesh and hub-and-spoke architectures, to create scalable, efficient, and flexible data ecosystems.
  • Data Security and Regulation Compliance: Deep understanding of data security, privacy, and compliance requirements, including GDPR.

Your Contact:

Please send us your application documents, stating your salary expectations and your earliest starting date, using the application form on our website.

Simone Gilau (e-mail: recruiting@mabanaft.com) will be happy to answer any questions you may have.
