We are hiring top Data Engineers to join our team in Bengaluru. 

We are accelerating our growth as our company gains increasing traction in the exciting “AI for the Enterprise” market. We are looking for talented technologists who want to be part of a world-class team and who bring a healthy mix of intellectual curiosity, a desire to learn, and a passion for excellence.

As a Data Engineer, you will collaborate with the Noodle Client Service team, Data Scientists, Software Engineers, and UX Designers, as well as industry-specific experts from our clients. You will be responsible for developing, maintaining, and testing data solutions across a wide variety of data platforms, including relational databases, big data platforms, and NoSQL databases. You will develop data ingestion and transformation routines to acquire data from external sources, manage distributed crawlers that parse data from web sources, and build APIs for secure data exchange. You will also secure access to data based on appropriate rights, implement data quality routines and mechanisms that flag bad data for correction, and build QA and automation frameworks that monitor daily data ingestion and raise alerts on errors and other problems.

 

Qualifications:

Must-haves

  • 7-12 years of experience engineering data pipelines
  • BE/B.Tech or an advanced degree in a relevant field (Computer Science, Engineering, Technology, or related fields)
  • Excellent knowledge of relational databases such as SQL Server, PostgreSQL, or MySQL
  • Proficient in writing SQL queries, stored procedures, and views
  • Strong fundamentals in a programming language such as C#, Python, or Java
  • Familiarity with an ETL tool such as SSIS, Informatica PowerCenter, Talend, or Pentaho
  • Excellent at writing code to parse JSON, HTML, JavaScript, etc.
  • Passion for learning and a desire to grow – Noodlers are life-long learners!

Nice-to-haves

  • A strong sense of what works and what doesn’t, including common pitfalls and mistakes in data pipeline design.
  • Comfortable working with both high-performance on-premises SQL installations and cloud instances.
  • Familiarity with Hadoop and Spark
  • Demonstrated energy and passion that extends beyond your field of study – Are you a computer engineer who writes poetry? A mathematician who loves psychology? An engineer passionate about public policy?  We want to build something with you.
  • Experience with (and excitement for) interdisciplinary collaboration

 

Want to help shape the future of Enterprise Artificial Intelligence? 

Let’s noodle.

 
