Sonsoft, Inc. is a U.S.-based corporation duly organized under the laws of the State of Georgia. Sonsoft Inc. is growing at a steady pace, specializing in Software Development, Software Consultancy, and Information Technology Enabled Services.
• At least 2 years of experience in Implementation and Administration of Hadoop infrastructure
• At least 2 years of experience in Project life cycle activities on development and maintenance projects.
• Operational expertise in troubleshooting; understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking
• Hadoop, MapReduce, HBase, Hive, Pig, Mahout
• Hadoop administration skills: experience working with Cloudera Manager or Ambari, plus Ganglia and Nagios
• Experience in using Hadoop Schedulers - FIFO, Fair Scheduler, Capacity Scheduler
• Experience in Job Schedule Management - Oozie or Enterprise Schedulers like Control-M, Tivoli
• Good knowledge of Linux
• Exposure to setting up AD/LDAP/Kerberos authentication models
• Familiarity with open-source configuration management and deployment tools such as Puppet or Chef, plus Linux scripting and Autosys
• Experience in Shell and Perl scripting and exposure to Python
• Knowledge of Troubleshooting Core Java Applications is a plus
• Exposure to real-time and streaming technologies such as Spark, Storm, and Kafka
• Version control management tools: Subversion, ClearCase, CVS, or GitHub
• Experience with service management ticketing tools: ServiceNow, Service Manager, or Remedy
• Maintain the Hadoop ecosystem and clusters, including the creation and removal of nodes
• Perform administrative activities with Cloudera Manager/Ambari and tools like Ganglia and Nagios
• Set up and maintain infrastructure and configuration for Hive, Pig, and MapReduce
• Monitor Hadoop cluster availability, connectivity, and security
• Setting up Linux users, groups, Kerberos principals and keys
• Aligning with the Systems engineering team in maintaining hardware and software environments required for Hadoop
• Software installation, configuration, patches and upgrades
• Working with data delivery teams to setup Hadoop application development environments
• Performance tuning of Hadoop clusters and Hadoop MapReduce routines
• Screen Hadoop cluster job performances
• Data modelling, Database backup and recovery
• Manage and review Hadoop log files
• File system management, disk space management, and monitoring (Nagios, Splunk, etc.)
• HDFS support and maintenance
• Planning of Back-up, High Availability and Disaster Recovery Infrastructure
• Partnering with Infrastructure, Network, Database, Application, and Business Intelligence teams to ensure high data quality and availability
• Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades
• Ability to work in a team in a diverse, multi-stakeholder environment
• Experience and desire to work in a Global delivery environment
• Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
• At least 4 years of overall IT experience
** U.S. Citizens and those who are authorized to work independently in the United States are encouraged to apply. We are unable to sponsor at this time.
1. This is a full-time, permanent job opportunity.
2. Only US Citizens, Green Card holders, GC-EAD, and TN candidates may apply.
3. No H4-EAD, L2-EAD, OPT-EAD, or H1B candidates, please.
4. Please mention your visa status in your email or resume.