Software Engineer II - PySpark, Databricks, AWS

JPMorgan Chase & Co.

Job Description

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.

As a Software Engineer III at JPMorgan Chase within Corporate Data Services, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

  • Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
  • Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
  • Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
  • Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies
  • Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 2+ years applied experience
  • Experience designing and implementing data pipelines in a cloud environment (e.g., Apache NiFi, Informatica)
  • 3+ years of experience migrating or developing data solutions in the AWS cloud, including AWS services and Apache Airflow
  • 3+ years of experience building and implementing data pipelines using Databricks features such as Unity Catalog, Databricks Workflows, and Delta Live Tables
  • Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
  • 3+ years of hands-on object-oriented programming experience using Python (especially PySpark) to write complex, highly optimized queries across large volumes of data
  • Experience with big data technologies such as Hadoop and Spark
  • Experience in data modeling and ETL processing
  • Hands-on experience in data profiling and advanced PL/SQL procedures

Preferred qualifications, capabilities, and skills

  • Familiarity with Oracle, ETL, and data warehousing
  • Exposure to cloud technologies