About Teradata Corporation
* The company was founded in 1979 and is headquartered in Dayton, Ohio with major US offices in San Diego and Seattle.
* Our products are:
o Integrated Data Warehouse hardware and software
o Unified Data Architecture
o Big Data Analytics
o Cloud for Analytics
o See the complete list at http://in.teradata.com/
* Revenue: $2.5 billion
* Acquisitions: 10 companies acquired to date, including Aster Analytics
* Employees: 11,300 worldwide
* Number of customers: more than 2,500 in 77 countries, with 90% of the Fortune 100 companies as Teradata customers
* Office locations: in 42 countries across the Americas, Europe, the Middle East, Africa, and Asia Pacific
* NYSE symbol: TDC
Role - Think Big Hadoop Applications Support
* Provide application support for Think Big customers on Hadoop platforms. These customers typically have 24/7 contracts, and the successful applicant must be prepared to work in shifts and be on-call to support customer sites per contractual obligations.
* 3-8 years of experience in application support (Java/J2EE, ETL, BI operations, analytics support) on large-scale systems.
* Strong analytical and exceptional problem-solving abilities
* Experience working independently and as part of a team to debug application issues using configuration files, databases, and application log files.
* Perform root-cause analysis for job failures and data quality issues, and provide solutions.
* A working understanding of the software development lifecycle, and the ability to communicate incident and project status, issues, and resolutions.
* Working experience with at least one scheduling tool (Control-M, JCL, Unix/Linux cron, etc.).
* Experience with JIRA and change management processes.
* 2+ years of experience with scripting languages (Linux shell, SQL, Python); must be proficient in shell scripting.
* Experience developing or supporting RESTful applications.
* Working knowledge of the Linux operating system.
* Strong written and verbal communication skills
* Knowledge of ITIL
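As a sketch of the kind of shell scripting and job-scheduling work listed above, the snippet below scans an application log for errors and summarizes them; the log content and paths here are made-up demo values, not anything from a real Think Big environment.

```shell
#!/bin/sh
# Routine support task sketch: count and summarize ERROR lines in a job log.
# The log below is a hypothetical demo; in practice this would point at a
# real application log, e.g. under /var/log.

LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2024-01-01 00:01 INFO  job started
2024-01-01 00:05 ERROR connection timeout
2024-01-01 00:06 ERROR connection timeout
2024-01-01 00:10 INFO  job finished
EOF

# Count ERROR lines, then list the distinct error messages by frequency.
ERR_COUNT=$(grep -c ERROR "$LOG")
echo "Error lines: $ERR_COUNT"
grep ERROR "$LOG" | sed 's/.*ERROR //' | sort | uniq -c | sort -rn

rm -f "$LOG"

# A crontab entry could run such a check hourly, for example:
# 0 * * * * /opt/scripts/check_log.sh
```

A real version of such a script would typically take the log path as an argument and alert (e.g. via the ticketing system) when the error count crosses a threshold.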
Nice-to-have experience:
* Platform operations administration
* Development, implementation or deployment experience in the Hadoop ecosystem
* Experience with any one of the following:
o Proficiency in Hive internals (including HCatalog), Sqoop, Pig, Oozie, and Flume/Kafka
o Proficiency in at least one of: Java, Python, Perl, Ruby, C, or web-related development
o Development or administration experience with NoSQL technologies such as HBase, MongoDB, Cassandra, Accumulo, etc.
o Development or administration experience with web or cloud platforms such as Amazon S3, EC2, Redshift, Rackspace, OpenShift, etc.
o Development/scripting experience with configuration management and provisioning tools (e.g., Puppet, Chef)
o Web/application server and SOA administration (Tomcat, JBoss, etc.)
* Handle deployment methodologies and code/data movement between Dev, QA, and Prod environments (deployment groups, folder copy, data copy, etc.)
* Able to articulate and discuss the principles of performance tuning on Hadoop
* Develop and produce daily/weekly operations reports and metrics as required by IT management
* Experience on any of the following will be an added advantage:
o Hadoop integration with large-scale distributed DBMSs such as Teradata, Teradata Aster, Vertica, Greenplum, Netezza, DB2, Oracle, etc.
o Data Modeling or ability to understand data models
o Knowledge of Business Intelligence and/or Data Integration (ETL) solution delivery techniques, models, processes, methodologies
o Exposure to data acquisition, transformation, and integration tools such as Talend and Informatica, and BI tools such as Tableau and Pentaho
Global Delivery Center (GDC)