Overview
As a Manager/Architect Data Engineer, you’ll be at the forefront of large-scale digital transformation projects, designing and implementing robust, cloud-native data platforms. You will guide both strategic architecture and hands-on delivery, working closely with stakeholders to shape data ecosystems that drive business innovation and growth.
Responsibilities
Your Impact
- Architecture & Strategy
  - Define end-to-end data architecture strategies leveraging Azure and Databricks, ensuring scalability, security, and alignment with enterprise standards.
  - Lead the selection of data technologies, frameworks, and patterns suited to each use case, considering cost, performance, and maintainability.
  - Develop and maintain architectural roadmaps for data platform modernization.
- Solution Design & Delivery
  - Translate complex business requirements into modular, scalable data architectures.
  - Design and implement data ingestion, processing, storage, and consumption layers using modern data engineering practices.
  - Build reusable components and frameworks to accelerate future development efforts.
- Technical Leadership
  - Provide technical leadership to engineering teams, reviewing solution designs and ensuring adherence to architectural principles.
  - Support estimation of project scope, timelines, and resources.
  - Promote data engineering best practices, including CI/CD, testing, and automation in data workflows.
- Client Engagement & Collaboration
  - Partner with business and technical stakeholders to understand goals and co-create data-driven solutions.
  - Facilitate architecture reviews, design workshops, and technical deep dives with clients.
- Operational Excellence
  - Monitor, optimize, and ensure the reliability and performance of deployed data solutions.
  - Lead efforts in automation, observability, and platform health monitoring.
Qualifications
Your Skills & Experience
- Proven experience designing and implementing scalable data pipelines and architectures using Azure and Databricks.
- Deep expertise with Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure SQL Database.
- Strong programming skills in Python for data transformation and pipeline development.
- Experience with diverse data storage solutions: columnar (e.g., BigQuery, Redshift, Vertica), NoSQL (e.g., Cosmos DB, DynamoDB), and relational databases (e.g., SQL Server, Oracle, MySQL).
- Solid understanding of ETL/ELT processes, data modeling (dimensional, star/snowflake), and distributed computing frameworks (e.g., Spark).
- Familiarity with CI/CD pipelines, Git, and automated deployment tools.
- Effective communication and stakeholder management skills.
Additional Information
Set Yourself Apart With
- Familiarity with DevOps and DataOps practices in cloud data environments.
- Exposure to multi-cloud environments (AWS, GCP) and hybrid cloud architecture.
This position is full-time and open to both permanent hire and contractor arrangements.