Would you like to be part of our innovative team? If so, read on!
GEICO is looking for a highly motivated team player with outstanding technical skills to join the Decision Science and Transformation team. As part of this highly collaborative team, you will work closely with data consumers of our Big Data platform to create the foundation for data analytics and machine learning activities.
Job Duties & Responsibilities:
As a Data Engineer, you will be responsible for working with data scientists and data architects to create efficient transformations on top of Spark/Hadoop, and you will have the opportunity to learn new cutting-edge technologies.
This position requires a strong foundation of technical skills, the ability to learn new technologies, good communication skills, attention to detail, and a commitment to continuous learning.
Ready to make an impact? See if you meet these qualifications:
- 3 years of hands-on experience in the Hadoop ecosystem (HDFS AND YARN AND MapReduce AND Oozie AND Hive)
- 1 year of hands-on experience in Spark core AND Spark SQL
- 5 years of hands-on programming experience in either core Java OR Spark
- 3 years of hands-on experience in Data Warehousing AND Data Marts AND Data/Dimensional Modeling AND ETL
- 1 year of hands-on experience in HBase OR Cassandra OR any other NoSQL DB
- Understanding of Distributed computing design patterns AND algorithms AND data structures AND security protocols
- Understanding of Kafka AND Spark Streaming
- Experience in any one of the following ETL tools: Talend OR Kettle OR Informatica OR Ab Initio
- Exposure to Hadoop OR NoSQL performance optimization and benchmarking using tools such as HiBench OR YCSB
- Experience in performance monitoring tools such as Ganglia OR Nagios OR Splunk OR DynaTrace
- Experience with continuous build and test processes using tools such as Maven AND Jenkins
- Certification in Hortonworks OR Cloudera preferred
Salary will be commensurate with experience and skills.