As a Senior Data Engineer in the Aero Services organization, you will be part of the team designing, building, and maintaining our Big Data platform and massive data stores on which our data analytics applications and tools run, as well as the onboarding process for new users.
Key to the growth of the Aero Services Org will be our ability to monetize critical data that is produced by our suite of Aerospace products.

Main Responsibilities:

Design, develop, and maintain the Big Data platform processes to be fault-tolerant and scalable
Design, develop, and maintain cross-platform ETL/transformation processes, including dimensions and reference lookup dictionaries
Develop guidelines, standards, and processes to ensure the highest data quality and integrity in the data stores residing on the Big Data platform
Participate in setting strategy and standards through data architecture and implementation, leveraging big data and analytics tools and technologies
Work closely with data scientists and product managers to understand their data requirements for existing and future data analytics projects, acquire and prepare that data, and later deploy the resulting models
Work with IT and data owners to understand the types of data collected in various databases and data warehouses, and define the migration strategy to move existing data into the Big Data platform, including a Data Lake

Additional Attributes:
Keen business acumen to recognize and recommend cost-effective and scalable platform solutions that best meet our business needs
Ability to execute projects using an agile approach in a multi-disciplinary, matrixed environment
Comfortable working in a dynamic, research and development environment with several ongoing concurrent projects
Enjoys exploring new technologies and has a demonstrated ability to learn them quickly

You Must Have:
Bachelor's degree in computer science, IT, engineering, or other relevant field, with a minimum of 5 years of data management experience
Minimum of 3 years of experience in designing, deploying, and supporting Big Data systems and solutions
Minimum of 2 years of experience migrating data from data sources (MS SQL, Oracle, MySQL, etc.) into a Hadoop platform using Hadoop technologies (Spark, Hive, NiFi, Pig, Sqoop, etc.)
Must be a US person (US citizen, US permanent resident, or individual with protected status, i.e., asylum/refugee) due to US export control laws and regulations
Minimum of 3 years of experience with scripting languages (Python, Bash, Ruby, etc.)
Minimum of 3 years of experience with Java or another JVM-based language

We Value: 
Spark hands-on experience (Scala or PySpark)
Experience with CI/CD systems such as Bamboo, Jenkins, etc.
Minimum of 2 years of experience with NoSQL solutions (HBase, Cassandra, MongoDB, CouchDB, etc.) and managing unstructured data
Master's or PhD degree in computer science, IT, engineering, or a relevant field
Certifications in Hadoop and other big data tools and technologies
Experience with open source data processing frameworks
Experience with data management on public cloud hosting services (e.g., Azure)
Experience with predictive analytics or machine learning
Experience with web analytics and managing external data streams (e.g., social media)
Experience with Agile software development methodology using tools such as JIRA
Ability to work in a fast-paced and ambiguous environment
