Senior ETL Developer

Aflac

Job Summary

Leads, develops, maintains, and integrates processes that extract data from standardized or varied data sources; transforms data into proper formats and structures for querying and analysis; loads data into target data structures such as an operational data store (ODS), data mart, or data warehouse; participates on a project team, working closely with internal and external clients, business analysts, and other team members to ensure that end-to-end designs meet the business and data requirements; and is responsible for unit testing, implementation of the solution, and continuous maintenance and support of existing solutions.

Principal Duties & Responsibilities

Builds repeatable, automated, and sustainable Extract, Transform, and Load (ETL) processes leveraging platforms such as AWS cloud-native services (AWS Glue, DMS), Informatica, and Infoworks

Exercises independent decision making to create processes that initiate the ETL or batch cycle; develops streaming processes that load extracted data to the destination database, including on-the-fly processing in which the extract and transform phases do not go to persistent storage
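
For illustration, a minimal Spark Structured Streaming sketch of this kind of on-the-fly processing, in which records are transformed in flight rather than staged to persistent storage first (the bucket paths, schema, and column names are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("streaming-etl-sketch").getOrCreate()

# Hypothetical source schema; file streaming sources require an explicit schema
schema = StructType([
    StructField("policy_id", StringType()),
    StructField("premium_amount", DoubleType()),
])

# Read newly arriving files, transform in flight, and write straight to the target
stream = spark.readStream.schema(schema).json("s3://example-bucket/incoming/")
transformed = stream.withColumn("loaded_at", F.current_timestamp())

query = (transformed.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/target/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/")
         .start())
```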

Performs data profiling of source data in order to identify data quality issues and anomalies, business knowledge embedded in the data, natural keys, and metadata
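
As a rough sketch of this kind of profiling, the PySpark snippet below counts nulls and distinct values per column to surface quality issues and candidate natural keys (the source path and file are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("profiling-sketch").getOrCreate()
df = spark.read.option("header", True).csv("s3://example-bucket/source/claims.csv")

# Null counts per column highlight data quality issues
null_counts = df.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}_nulls") for c in df.columns]
)
null_counts.show()

# Distinct counts per column help identify candidate natural keys
for c in df.columns:
    print(c, "distinct values:", df.select(c).distinct().count())
```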

Creates data validation rules on source data to confirm the data has correct and/or expected values; when validation rules are not passed, writes alternate workflow steps or reports back to the source for further analysis and correction of the incorrect record(s)
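
A minimal sketch of such a validation rule with a reject path, assuming hypothetical column names and output locations:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("validation-sketch").getOrCreate()
df = spark.read.parquet("s3://example-bucket/staging/claims/")

# Rule: premium must be non-negative and policy_id must be present
rule = (F.col("premium_amount") >= 0) & F.col("policy_id").isNotNull()

passed = df.filter(rule)
failed = df.filter(~rule)

# Records that fail validation are written back for analysis and correction
failed.write.mode("overwrite").parquet("s3://example-bucket/rejects/claims/")
```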

Develops processes that are applied to extracted source data to move it to the target state; writes data cleansing functions to get the data into proper form; prunes the data set to include only the fields needed; translates source code values to target values; standardizes free-form values to codes; derives new values through calculations on existing fields; merges data from multiple sources in order to generate one consolidated source for the target
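
A compact sketch of these transform steps in PySpark, with hypothetical field names, code values, and source paths:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()
source_a = spark.read.parquet("s3://example-bucket/staging/source_a/")
source_b = spark.read.parquet("s3://example-bucket/staging/source_b/")

# Hypothetical lookup used to translate source code values to target values
status_map = spark.createDataFrame(
    [("A", "ACTIVE"), ("T", "TERMINATED")], ["status_cd", "status_desc"]
)

cleaned = (
    source_a
    .select("policy_id", "status_cd", "state_cd", "monthly_premium")  # prune to needed fields
    .withColumn("state_cd", F.upper(F.trim(F.col("state_cd"))))       # standardize free-form values
    .withColumn("annual_premium", F.col("monthly_premium") * 12)      # derive a new value
    .join(status_map, on="status_cd", how="left")                     # translate code values
)

# Merge data from multiple sources into one consolidated set for the target
consolidated = cleaned.unionByName(source_b, allowMissingColumns=True)
```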

Sorts and aggregates records into rollups where multiple records are represented; creates surrogate key values to use in place of multiple natural keys; turns multiple columns into multiple rows or vice versa (transposing or pivoting); splits multi-valued column data into multiple columns; disaggregates repeating columns into separate detail table(s); creates lookup tables; looks up and validates reference information as part of data validation
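
The snippet below sketches a rollup, a surrogate key, and a pivot in PySpark (column names and paths are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("reshape-sketch").getOrCreate()
df = spark.read.parquet("s3://example-bucket/staging/premiums/")

# Roll up multiple records per policy into a single aggregate row
rollup = df.groupBy("policy_id").agg(F.sum("premium_amount").alias("total_premium"))

# Surrogate key to use in place of the natural key(s)
w = Window.orderBy("policy_id")
rollup = rollup.withColumn("policy_sk", F.row_number().over(w))

# Pivot: turn one row per (policy, month) into one column per month
pivoted = df.groupBy("policy_id").pivot("premium_month").agg(F.sum("premium_amount"))
```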

Creates and applies data validation steps in order to perform partial, full, or no rejection of records; writes processes that handle exceptions and/or move exception records to alternate transform step(s)
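
One way this routing might look in PySpark, with the exception branch sent through an alternate transform step before being recombined (all names, rules, and defaults are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("exception-routing-sketch").getOrCreate()
df = spark.read.parquet("s3://example-bucket/staging/policies/")

exception_rule = F.col("issue_dt").isNull() | (F.col("face_amount") <= 0)

clean = df.filter(~exception_rule)
exceptions = df.filter(exception_rule)

# Alternate transform step applied only to exception records
repaired = exceptions.withColumn(
    "face_amount",
    F.when(F.col("face_amount") <= 0, F.lit(None)).otherwise(F.col("face_amount"))
)

# Partial rejection: continue with the clean rows plus the repaired exception rows
combined = clean.unionByName(repaired)
```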

Develops processes that load the transformed data into end target systems (database, file, application, etc.); may apply different techniques based on business needs, including inserting new data into the target, overwriting existing data with cumulative information, and updating existing data at some frequency; creates data validation steps in this layer to verify the loaded data
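
A brief sketch of the different load techniques mentioned, assuming a Parquet target path and a watermark column (both hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("load-sketch").getOrCreate()
transformed = spark.read.parquet("s3://example-bucket/transformed/policies/")
target_path = "s3://example-bucket/warehouse/policies/"

# Technique 1: insert new data alongside what is already in the target
transformed.write.mode("append").parquet(target_path)

# Technique 2: overwrite existing data with the cumulative data set
transformed.write.mode("overwrite").parquet(target_path)

# Technique 3: update at some frequency, loading only rows changed since the last run
last_run_ts = "2024-01-01 00:00:00"  # hypothetical watermark from the previous cycle
delta = transformed.filter(F.col("updated_at") > F.lit(last_run_ts))
delta.write.mode("append").parquet(target_path)

# Validation in the load layer: confirm the target row count covers the input
assert spark.read.parquet(target_path).count() >= transformed.count()
```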

Creates cleanup processes that run after complex ETL processes and release the resources used to run the ETL; creates processes to archive data
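
A small sketch of post-run cleanup and archiving using boto3, with hypothetical bucket names and prefixes:

```python
import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cleanup-sketch").getOrCreate()
# ... ETL work happens here ...
spark.stop()  # release cluster resources held by the ETL run

# Archive processed staging files, then remove the originals
s3 = boto3.client("s3")
bucket = "example-etl-bucket"
listing = s3.list_objects_v2(Bucket=bucket, Prefix="staging/")
for obj in listing.get("Contents", []):
    key = obj["Key"]
    s3.copy_object(Bucket=bucket,
                   CopySource={"Bucket": bucket, "Key": key},
                   Key=key.replace("staging/", "archive/", 1))
    s3.delete_object(Bucket=bucket, Key=key)
```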

Participates in project collaboration meetings with clients, business analysts, and team members in order to analyze and clarify business requirements; translates business requirements into detailed technical specifications

Works with project teams to define and design scope for each project; creates unit test cases to ensure the application meets the needs of the business

Ensures proper configuration management and change controls are implemented; provides technical assistance and cross-training to other team members

Designs and implements technology best practices, guidelines, and repeatable processes; prepares and presents status updates on various projects

Professional - Education & Experience

Bachelor's Degree in programming/systems or computer science (preferred)

Five or more years of programming experience; requires experience with and understanding of multiple programming languages and applicable applications, including SQL and ETL; experience in cloud data storage and consumption models such as S3 buckets, Lake Formation, Redshift, and DynamoDB; experience working with compute engines such as Spark, EMR, Databricks, Snowflake, etc.; or an equivalent combination of education and experience.

Job Knowledge & Skills

  • Amazon Web Services Data Platform - Cloud infrastructure, Data Lake/CloudFormation, Automation
  • Amazon Cloud Data Storage - S3, Redshift, DynamoDB, NoSQL
  • ETL - AWS Glue, Informatica, SSIS, Infoworks
  • SQL & Relational Databases - Advanced
  • XML - Advanced
  • XSL - Advanced
  • Business Intelligence
  • ETL Techniques - Advanced
  • Data Modeling - Advanced
  • Data Warehousing/Business Intelligence - Advanced
  • Metadata Repository - Intermediate
  • MS SQL Server - Advanced

Organizational Competencies

Acting with Integrity, Communicating Effectively, Pursuing Self-Development, Serving Customers, Supporting Change, Supporting Organizational Goals, Working with Diverse Populations
