Job Description :
We are seeking a highly skilled Big Data Engineer to join our team in Phoenix, AZ. The ideal candidate will have extensive experience with Hadoop, Spark (batch and streaming), Hive, and shell scripting, along with solid programming skills in Java or Scala.
Department :
Cloud & Data Engineering
Open Positions :
1
Skills Required :
Hadoop, Python, Spark
Role :
Development: Design and develop scalable big data solutions using Hadoop, Spark, and Hive.
Design: Architect and implement big data pipelines and workflows, ensuring efficiency, security, and reliability.
Deployment: Deploy big data solutions, leveraging best practices for optimal performance in large-scale environments.
Optimization: Optimize big data solutions to improve performance, scalability, and cost-efficiency.
Collaboration: Work closely with cross-functional teams to integrate big data solutions with business goals.
Years of Experience :
5 to 8 Years