Job Description :
- Strong hands-on experience in Python.
- Good experience with Spark and Spark Structured Streaming.
- Experience working with Amazon MSK (Kafka) and Kinesis.
- Ability to design, build, and unit test applications on the Spark framework in Python (see the sketch after this list).
- Exposure to AWS services such as Glue, EMR, RDS, SNS, SQS, Lambda, and Redshift.
- Good experience writing SQL queries.
- Strong development experience in writing effective code, conducting code reviews, and applying best practices.
- Ability to work through complex data-driven scenarios and triage defects and production issues.
- Ability to learn, unlearn, and relearn concepts with an open, analytical mindset.
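Illustrative only: a minimal PySpark Structured Streaming sketch of the kind of work described above, reading JSON events from an MSK (Kafka) topic and parsing them. The broker address, topic name, schema fields, and checkpoint path are hypothetical placeholders, and running it assumes the spark-sql-kafka connector is available on the classpath (for example via spark-submit --packages).

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType

    # Build a Spark session; on EMR or Glue this is usually provided by the runtime.
    spark = SparkSession.builder.appName("orders-stream-demo").getOrCreate()

    # Hypothetical schema for the incoming JSON events.
    schema = StructType([
        StructField("order_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    # Read a stream from a Kafka (MSK) topic; broker and topic names are placeholders.
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "b-1.example-msk.amazonaws.com:9092")
        .option("subscribe", "orders")
        .option("startingOffsets", "latest")
        .load()
    )

    # Kafka delivers the payload as bytes; cast to string and parse the JSON.
    orders = (
        raw.selectExpr("CAST(value AS STRING) AS json")
        .select(from_json(col("json"), schema).alias("data"))
        .select("data.*")
    )

    # Console sink purely for demonstration; a real job would write to S3,
    # Redshift, or another downstream store.
    query = (
        orders.writeStream.format("console")
        .outputMode("append")
        .option("checkpointLocation", "/tmp/checkpoints/orders-demo")
        .start()
    )

    query.awaitTermination()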
Department :
Cloud & Data Engineering
Open Positions :
5
Skills Required :
PySpark, SQL
Role :
- Work closely with business and product management teams to develop and implement analytics solutions.
- Collaborate with engineers & architects to implement and deploy scalable solutions.
- Actively drive a culture of knowledge building and sharing within the team.
- Able to adapt and learn quickly.
- Able to step into ambiguous situations and take the lead on resolving them.
Good To Have :
- Experience working with Amazon MSK (Kafka), Amazon Elastic Kubernetes Service (EKS), and Docker
- Exposure to GitHub Actions, Argo CD, and Argo Workflows
- Experience working with Databricks
Education/Qualification :
BE / B.Tech / MCA / M.Tech / M.Com
Years of Experience :
4 to 6 Years
Designation :
Senior Software Engineer