Experience: 4-7 years
Description:
Additional Job Description
- Hands-on expertise in installing, administering, and monitoring Hadoop clusters running Apache Spark.
- Has installed Cloudera and Hortonworks distributions both in the cloud and on premises.
- Good understanding of Hadoop framework components (HDFS, YARN, MapReduce, Oozie, Hive, ZooKeeper, Tez, etc.).
- Experience in infrastructure planning, sizing, and costing.
- Full understanding of container-based CI/CD, with the ability to set up the entire pipeline; takes full ownership of CI/CD.
- Experience setting up non-relational (NoSQL) databases, with good working knowledge of MongoDB, Cassandra, Hive, HBase, BigQuery, Bigtable, Datastore, etc. (see the MongoDB sketch after this list).
- Experience creating network diagrams and deployment diagrams.
- Willingness to learn new things and to come up with solutions quickly, with little or no assistance.
- Understands the relevant security compliance regimes, takes proactive steps to secure hardware and software assets, and enforces industry-specific compliance on infrastructure.
- Collaborates and participates in Scrum ceremonies, and should be a team player; should have experience working with Agile methodology.
- Good knowledge of Google Cloud Platform, specifically Dataproc, Cloud Storage, Pub/Sub, Cloud SQL, Cloud Functions, etc. (see the Pub/Sub sketch after this list).
- Takes full ownership of the big data infrastructure.
- Responsible for commissioning and decommissioning DataNodes and reviewing Hadoop log files in a High Availability (HA) environment (see the monitoring sketch after this list).
- Has configured LDAP, Sentry for authorization, KMS for data encryption, and extended ACLs for HDFS.
- Strong experience in performance tuning; has upgraded Hadoop clusters on the CDH platform.
- Demonstrated ability to learn and apply new technologies and frameworks quickly.
- Should be self-motivated and smart-working.
- Excellent communication skills, both oral and written.
- Qualification: B.E., B.Tech, MCA, or M.Sc.
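As a minimal sketch of the DataNode health monitoring this role covers, the snippet below polls the NameNode's JMX servlet for live and dead DataNode counts. It assumes a Hadoop 3.x NameNode on its default HTTP port (9870); the host name and alert threshold are hypothetical, and the bean queried is the standard FSNamesystemState metrics bean.

```python
# Sketch: check DataNode health from the NameNode JMX endpoint.
# Assumes a Hadoop 3.x NameNode at a hypothetical host, default HTTP port 9870.
import requests

JMX_URL = "http://namenode.example.com:9870/jmx"  # hypothetical host

def datanode_counts():
    # Query only the FSNamesystemState bean, which reports DataNode counts.
    resp = requests.get(
        JMX_URL,
        params={"qry": "Hadoop:service=NameNode,name=FSNamesystemState"},
        timeout=10,
    )
    resp.raise_for_status()
    bean = resp.json()["beans"][0]
    return bean["NumLiveDataNodes"], bean["NumDeadDataNodes"]

if __name__ == "__main__":
    live, dead = datanode_counts()
    print(f"Live DataNodes: {live}, dead DataNodes: {dead}")
    if dead > 0:
        # In an HA environment, dead nodes warrant a log review
        # before any commissioning/decommissioning decision.
        print("WARNING: dead DataNodes detected; review NameNode/DataNode logs.")
```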
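For the GCP skills listed above, here is a minimal Pub/Sub publishing sketch using the google-cloud-pubsub Python client. The project ID, topic ID, and message content are hypothetical; application-default credentials are assumed.

```python
# Sketch: publish a message to a Cloud Pub/Sub topic.
# Requires: pip install google-cloud-pubsub, plus default GCP credentials.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Hypothetical project and topic IDs.
topic_path = publisher.topic_path("my-project", "cluster-events")

# publish() takes the payload as bytes; extra kwargs become string attributes.
future = publisher.publish(topic_path, b"datanode decommission started", source="ops")

# result() blocks until the publish succeeds and returns the server message ID.
print(f"Published message ID: {future.result()}")
```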
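For the NoSQL experience listed above, a minimal MongoDB sketch using pymongo; the connection string, database, collection, and field names are all hypothetical.

```python
# Sketch: upsert and read back a document in MongoDB.
# Requires: pip install pymongo; assumes a local MongoDB instance.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical instance
collection = client["ops"]["cluster_inventory"]    # hypothetical db/collection

# Upsert one node record keyed by host name, then read it back.
collection.update_one(
    {"host": "datanode-01"},
    {"$set": {"role": "DataNode", "state": "live"}},
    upsert=True,
)
print(collection.find_one({"host": "datanode-01"}))
```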