
Job Description:

Job Title: Principal - Data Engineering - DevOps Engineer

The Purpose of This Role

We are looking for a Principal DevOps Engineer (Operations) with experience in infrastructure, operations, cloud, and big data DevOps-based data lakes and data services. The role involves continuous collaboration with development teams; driving best practices for the operational management of cloud-ready data services; following Fidelity best practices and methodologies; maintaining a thorough understanding of the technology roadmap; contributing to the design and development process; and delivering innovative solutions at a fast pace.

The associate will be part of the Data Operations COE group within the Fidelity Data Architecture & Engineering Division. This team is primarily responsible for Data Lake Operations on the cloud, supporting NiFi (ELT tool), Collibra data governance, the Alation data catalog, enterprise tools and platform engineering (Informatica, middleware), and data masking (TDM) support. It also provides DBA support for multiple databases, including Oracle, Oracle E-Business Suite, DB2, PostgreSQL, MongoDB, YugabyteDB, SQL Server, and RDS on AWS, as well as enterprise Snowflake administration.

The Value You Deliver

  • Work closely with multiple business units, primarily within ETGS, to support their use cases in moving data from on premises to the cloud via the data ingestion tools; manage the infrastructure build pipelines and releases, following the enterprise guidelines that enable data movement to the cloud.

The Skills that are Key to this role

Technical / Behavioral

  • Good experience building and managing infrastructure on the AWS cloud using various AWS services (EC2, IAM, S3, Route 53, SQS, SNS, EKS, EBS, KMS, SMS, CloudWatch, CloudTrail, CloudFormation, etc.)
  • Expertise in DevOps tools (Jenkins, SonarQube, Maven, JFrog, Python/Groovy, uDeploy) and proficiency with Git and CI/CD pipelines
  • Proficiency with the infrastructure build process through containerization and automation using Docker and Kubernetes.
  • Strong understanding of, and hands-on experience with, at least one monitoring tool such as Datadog, Splunk, Prometheus, or Kibana.
  • Experience with infrastructure-as-code (IaC) tools: CloudFormation, ARM templates, Terraform
  • Experience in Linux administration & shell scripting
  • Monitoring and scaling the infrastructure, and collaborating with other team members to develop automation strategies and deployment processes.
  • Implementing and managing the security compliance of the infrastructure, and updating systems promptly when new versions of the OS and application software are released.
  • Good written and verbal communication and presentation skills.
  • Delivery focus with the ability to take full ownership
  • Strong commitment to quality, documentation, and engineering excellence
  • Strong problem-solving skills and ability to work in a team environment.
  • Ability to work to tight deadlines under pressure.
  • Flexible and adaptable to changing business/technology requirements
  • A sense of ownership that drives quality in everything you do.
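To illustrate the infrastructure-as-code skills listed above (CloudFormation, Python), here is a minimal sketch that renders a CloudFormation template for a single S3 landing bucket using only the Python standard library. The logical IDs, description, and bucket settings are illustrative assumptions, not an actual configuration used by this team:

```python
import json


def s3_bucket_template(bucket_logical_id: str, versioned: bool = True) -> str:
    """Render a minimal CloudFormation template (JSON) for one S3 bucket."""
    bucket = {
        "Type": "AWS::S3::Bucket",
        "Properties": {
            # Default server-side encryption with a KMS-managed key.
            "BucketEncryption": {
                "ServerSideEncryptionConfiguration": [
                    {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                ]
            },
        },
    }
    if versioned:
        bucket["Properties"]["VersioningConfiguration"] = {"Status": "Enabled"}

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Data lake landing bucket (illustrative only)",
        "Resources": {bucket_logical_id: bucket},
        "Outputs": {"BucketName": {"Value": {"Ref": bucket_logical_id}}},
    }
    return json.dumps(template, indent=2)


if __name__ == "__main__":
    print(s3_bucket_template("LandingBucket"))
```

In practice a template like this would be linted and deployed through a CI/CD pipeline (e.g. Jenkins) rather than by hand, which is how the release-management duties above would typically be exercised.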

The Skills that are Good to Have for this Role

  • Multi-cloud experience (AWS/Azure/GCP)
  • AWS DevOps Engineer Certification
  • Knowledge of big data design patterns.
  • Familiarity and experience working with agile methodologies (Scrum)
  • Knowledge of ETL/ELT platforms (NiFi/Informatica) is desirable.
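As a sketch of the ELT pattern named above — load raw data first, then transform it with SQL inside the warehouse — here is a minimal example using Python's built-in sqlite3 as a stand-in for an enterprise warehouse such as Snowflake. The table and column names are hypothetical:

```python
import sqlite3

# Raw source rows: untrimmed names, amounts still stored as strings.
raw_rows = [("2024-01-01", "  alice ", "42.5"), ("2024-01-02", "BOB", "17.0")]

conn = sqlite3.connect(":memory:")

# Load: land the data as-is into a staging table, with no cleansing.
conn.execute("CREATE TABLE stg_events (event_date TEXT, user_name TEXT, amount TEXT)")
conn.executemany("INSERT INTO stg_events VALUES (?, ?, ?)", raw_rows)

# Transform: cleanse and cast *inside* the database, after loading.
conn.execute("""
    CREATE TABLE fct_events AS
    SELECT event_date,
           LOWER(TRIM(user_name)) AS user_name,
           CAST(amount AS REAL)   AS amount
    FROM stg_events
""")

rows = conn.execute(
    "SELECT user_name, amount FROM fct_events ORDER BY event_date"
).fetchall()
print(rows)  # [('alice', 42.5), ('bob', 17.0)]
```

Tools such as NiFi or Informatica orchestrate this same load-then-transform flow at scale; the design point is that the transformation runs on the warehouse's compute rather than on a separate ETL server.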

How Your Work Impacts the Organization

Driving Fidelity Data Strategy goals as part of the FDA Data Lake Operations team, enabling a stable data processing environment by building infrastructure as code.

The Expertise we’re looking for

  • 10+ years of IT experience in a DevOps Engineer role
  • Graduate
  • Any AWS certification

Location: Chennai

Shift timings: 11:00 am - 8:00 pm

Category: Information Technology

Confirmed 6 hours ago. Posted 30+ days ago.
