Databricks Data Engineer (f/m/d)

Axpo Group

Who We Are

Axpo is driven by a single purpose – to enable a sustainable future through innovative energy solutions. As Switzerland's largest producer of renewable energy and a leading international energy trader, Axpo leverages cutting-edge technologies to serve customers in over 30 countries. We thrive on collaboration and innovation, and we share a passion for driving impactful change.

About the Team

You will report directly to our Head of Development and join a team of highly committed IT data platform engineers with a shared goal: unlocking data and enabling self-service data analytics capabilities across Axpo. Our decentralized approach means close collaboration with business hubs across Europe, ensuring that local needs shape our global platform. You’ll find a team committed to innovation, collaboration, and excellence.

What You Will Do

As a Databricks Data Engineer, you will:

  • Be a core contributor to Axpo’s data transformation journey, using Databricks as our primary data and analytics platform.
  • Design, develop, and operate scalable data pipelines on Databricks, integrating data from a wide variety of sources (structured, semi-structured, unstructured).
  • Leverage Apache Spark, Delta Lake, and Unity Catalog to ensure high-quality, secure, and reliable data operations (a minimal sketch follows this list).
  • Apply best practices in CI/CD, DevOps, orchestration (e.g., Dagster, Airflow), and infrastructure-as-code (Terraform).
  • Build reusable frameworks and libraries to accelerate ingestion, transformation, and data serving across the business.
  • Work closely with data scientists, analysts, and product teams to create performant and cost-efficient analytics solutions.
  • Drive the adoption of Databricks Lakehouse architecture and help standardize data governance, access policies, and documentation.
  • Ensure compliance with data privacy and protection standards (e.g., GDPR).
  • Actively contribute to the continuous improvement of our platform in terms of scalability, performance, and usability.
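
To give a flavour of this work, here is a minimal PySpark/Delta Lake sketch of the ingest-transform-serve pattern described above. The paths, column names, and Unity Catalog table name are illustrative assumptions, not Axpo's actual pipelines:

    # Minimal ingest-transform-serve sketch (hypothetical paths and names).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()  # preconfigured on Databricks

    # Ingest: read raw JSON landed in the lake (path is an assumption).
    raw = spark.read.json("/mnt/landing/meter_readings/")

    # Transform: basic typing and cleansing.
    clean = (
        raw.withColumn("reading_ts", F.to_timestamp("reading_ts"))
           .filter(F.col("kwh").isNotNull())
    )

    # Serve: write a Delta table under a Unity Catalog namespace (name is an assumption).
    clean.write.format("delta").mode("overwrite").saveAsTable("energy.silver.meter_readings")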

What You Bring & Who You Are

We’re looking for someone with:

  • A university degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • Strong experience with Databricks, Spark, Delta Lake, and SQL/Scala/Python.
  • Proficiency in dbt, ideally with experience integrating it into Databricks workflows.
  • Familiarity with Azure cloud services (Data Lake, Blob Storage, Synapse, etc.).
  • Hands-on experience with Git-based workflows, CI/CD pipelines, and data orchestration tools such as Dagster and Airflow (a Dagster sketch follows this list).
  • Deep understanding of data modeling, streaming & batch processing, and cost-efficient architecture.
  • Ability to work with high-volume, heterogeneous data and APIs in production-grade environments.
  • Knowledge of data governance frameworks, metadata management, and observability in modern data stacks.
  • Strong interpersonal and communication skills, with a collaborative, solution-oriented mindset.
  • Fluency in English.
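
For a concrete illustration of the asset-based orchestration mentioned above, here is a minimal Dagster sketch; the asset names and stub data are hypothetical, not Axpo's actual assets:

    # Minimal Dagster asset sketch (hypothetical asset names and stub data).
    from dagster import asset, materialize

    @asset
    def raw_prices():
        # Stand-in for an extract from a source system.
        return [
            {"ts": "2024-01-01T00:00:00Z", "price_eur_mwh": 87.5},
            {"ts": "2024-01-01T01:00:00Z", "price_eur_mwh": None},
        ]

    @asset
    def cleaned_prices(raw_prices):
        # Dagster infers the dependency from the parameter name.
        return [r for r in raw_prices if r["price_eur_mwh"] is not None]

    if __name__ == "__main__":
        materialize([raw_prices, cleaned_prices])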

Technologies You’ll Work With

  • Core: Databricks, Spark, Delta Lake, Python, dbt, SQL
  • Cloud: Microsoft Azure (Data Lake, Synapse, Storage)
  • DevOps: Bitbucket/GitHub, Azure DevOps, CI/CD, Terraform
  • Orchestration & Observability: Dagster, Airflow, Grafana, Datadog, New Relic
  • Visualization: Power BI
  • Other: Confluence, Docker, Linux

Nice to Have

  • Experience with Unity Catalog and Databricks governance frameworks
  • Exposure to Machine Learning workflows on Databricks (e.g., MLflow)
  • Knowledge of Microsoft Fabric or Snowflake
  • Experience with low-code analytics tools like Dataiku
  • Familiarity with PostgreSQL or MongoDB
  • Front-end development skills (e.g., for data product interfaces)