Senior Data Engineer - bpost

Radial


Job Title: Data Engineer
Profile code: LOC_DATA_ETL_SSA_1

At bpost group, data is more than numbers — it's a strategic asset that drives critical decisions across our logistics and e-commerce operations. As we continue to evolve into a more data-driven organization, we are investing in cutting-edge infrastructure and talent to unlock deeper insights and enable smarter resource allocation.

We are seeking a Data Engineer to join our Yield and Capacity Management team. In this role, you will play a central part in designing, building, and maintaining robust data pipelines and platforms that support advanced analytics, forecasting, and optimization of our operational capacity and pricing strategies. If you're passionate about scalable data architecture, enjoy working at the intersection of business and technology, and want to make a tangible impact on performance and profitability, this is your opportunity to help shape the future of data-driven logistics at bpost group.

Role Summary:

We are looking for a seasoned Data Engineer with strong experience in designing and building modern data platforms from the ground up. This role involves close collaboration with architects, DevOps, and business teams to establish a scalable, secure, and high-performing data ecosystem. The ideal candidate is hands-on, cloud-savvy, and passionate about data infrastructure, governance, and engineering best practices.
Key Responsibilities:

Platform Design and Architecture
- Design and implement the foundational architecture for a new enterprise-grade data platform
- Work with architects to define infrastructure, storage, and processing solutions aligned with business needs
- Ensure the platform adheres to security, compliance, and scalability standards

Data Ingestion and Pipeline Development
- Build scalable, reusable data ingestion pipelines from structured and unstructured sources
- Develop batch and streaming data workflows using modern ETL/ELT frameworks
- Ensure data quality, lineage, and monitoring across pipelines

Cloud and Infrastructure Integration
- Set up and configure cloud-native data services (e.g., AWS Glue, Redshift, S3, Lambda, EMR)
- Collaborate with DevOps to implement CI/CD for data pipeline deployments
- Support infrastructure-as-code practices (e.g., Terraform, CloudFormation)

Governance and Operationalization
- Implement data cataloging, lineage, and access control mechanisms
- Define and enforce data platform usage standards, schema management, and audit logging
- Establish robust monitoring and alerting practices for data workflows

Cross-Functional Collaboration
- Work closely with business teams, data scientists, and analysts to understand platform requirements
- Enable self-service access to data through tools and standardized interfaces
- Act as a technical advisor for data-related platform decisions

Hands-On Engineering
- Contribute directly to the codebase using best practices for version control, testing, and documentation
- Perform peer reviews and contribute to technical design sessions
- Lead proof-of-concept implementations for new tools or frameworks

Qualifications:

Experience
- 6–10 years of experience in data engineering, with at least 3 years building or re-architecting data platforms
- Proven track record in setting up cloud-based or hybrid data infrastructures

Technical Skills
- Deep knowledge of data warehousing, data lakes, and real-time data processing
- Relational database technologies: Snowflake, Postgres, Oracle
- Proficiency in SQL (dbt, SQLMesh), Python, and Spark or similar processing engines
- Experience with AWS, Azure, or GCP data services
- Familiarity with data modeling, schema evolution, and partitioning strategies
- Competency in modern data orchestration tools (e.g., Apache Airflow, dbt)
- High-level understanding of the capabilities and roles of different technical areas (cloud engineering, platform engineering, analytics engineering, ML engineering)

Soft Skills
- Strong problem-solving and systems-thinking mindset
- Effective communication and stakeholder management abilities
- Ability to balance strategic planning with hands-on implementation

Preferred Skills
- Experience with Kubernetes, Docker, or serverless architectures
- Exposure to data mesh or domain-oriented data platform design
- Familiarity with tools like Apache Kafka, Delta Lake, or Apache Iceberg

