Big Data Tech Lead (Databricks) - Periscope

McKinsey & Company

Who You'll Work With

You will be based in our Bangalore or Gurugram office as part of Periscope’s technology team.

Periscope is the asset-based arm of McKinsey’s Marketing & Sales practice and is at the leading edge of the new ways we serve clients. This integrated model of serving clients, combining our generalist consulting approaches with technology solutions, is proof of the firm’s commitment to continued innovation in the spirit of bringing the best of the firm to our clients.

Founded in 2007, Periscope® By McKinsey enables better commercial decisions by uncovering actionable insights. The Periscope platform combines world-leading intellectual property, prescriptive analytics, and cloud-based tools to provide more than 25 solutions focused on insights and marketing, with expert support and training. It is a unique combination that drives revenue growth both now and in the future. Customer experience, performance, pricing, category, and sales optimization are powered by the Periscope platform. Periscope has a presence in 26 locations across 16 countries, with a team of 600+ business and IT professionals and a network of 300+ experts. To learn more about how Periscope’s solutions and experts are helping businesses continually drive better performance, visit http://www.periscope-solutions.com/.

You will be part of McKinsey’s asset-enabled consulting model, which fosters innovation driven by analytics, design thinking, mobile, and social by developing new products and services and making them an integral part of our client work. It is helping to shift our model toward asset-based consulting and is a foundation for, and expands our investment in, our entrepreneurial culture. Through innovative software-as-a-service solutions, strategic acquisitions, and a vibrant ecosystem of alliances, we are redefining what it means to work with McKinsey.

What You'll Do

As a core member of Periscope’s technology team, you will build our core enterprise product, the MSS Data Lake, and will be responsible for designing, implementing, and deploying its components on the Databricks Spark platform.

You will provide technical leadership and own development, client support, stabilization, performance optimization, and DevOps, managing day-to-day operations.

In this role, you will be involved in software development projects in a hands-on manner, spending about 80% of your time writing and reviewing code and creating software designs. Over time, your expertise will expand into database design, core middle-tier modules, performance tuning, cloud technologies, DevOps, and continuous delivery.

You will be an active learner, tinkering with new open-source libraries, using unfamiliar technologies without supervision, and learning about different frameworks and approaches. You will also apply a strong understanding of key agile engineering practices to guide teams toward improvements in how they work.

In addition, you will provide ongoing coaching and mentoring to developers to improve our organizational capabilities.

Qualifications

  • 6+ years of experience in software development, building complex enterprise systems that involve large-scale data processing
  • Experience developing end-to-end ETL pipelines for data ingestion, processing, and transformation at scale using PySpark and SparkSQL in an agile methodology
  • Experience and expertise in Spark/Databricks/big data architecture and database management systems in distributed environments, including design, implementation, and deployment
  • Experience using Delta Lake on Databricks
  • Hands-on experience with Spark performance & cost optimization
  • Ability to participate in all aspects of the software lifecycle, including analysis, design, development, and unit testing
  • Experience with container technologies like Docker and Kubernetes
  • Strong cloud infrastructure experience with AWS and/or Azure
  • Experience in event-driven architecture/real-time processing is an advantage
  • Experience in DevOps, CI/CD and production deployment
  • Experience working with Spark Structured Streaming using Event Hubs/Kafka/Kinesis
  • Experience with orchestration tools like Azure Data Factory, AWS Data Pipeline, Prefect, or Airflow
  • Experience with Azure and/or AWS tech stacks, e.g. Azure Event Grid/Functions/Log Analytics or AWS SNS/SQS/Lambda/CloudTrail
  • Experience using Snowflake and Power BI; full-stack development experience is preferred