
Introduction

Watson Orders is an IBM Silicon Valley-based technology group working on a world-class conversational AI system. Our mission is to deliver advanced solutions that address real-world needs in the quick service restaurant industry. We use state-of-the-art machine learning and related technologies to deliver a product that will help serve tens of millions of customers per day.

Your Role and Responsibilities

We are currently seeking a talented software developer to work on the data platform that powers our transformative AI/ML products, which reach tens of millions of customers per day in an industry that feeds billions of people worldwide. The department covers data infrastructure, data pipelines, analysis, and performance optimization.

The ideal candidate has experience architecting, developing, and supporting large-scale data platforms & infrastructure with a focus on resilience, scalability, and performance within a fast-growing, agile environment.

Responsibilities:

  • Develop and maintain the petabyte-scale data lake, warehouse, pipelines, and query layers.
  • Develop and support a multi-region data ingestion system fed by geographically distributed edge AI systems.
  • Develop and support AI research pipelines, training and evaluation pipelines, audio re-encoding and scanning pipelines, and various analysis outputs for business users.
  • Use pipelines to manage resilient, idempotent coordination with external databases, APIs, and systems.
  • Work with AI Speech and Audio engineers to support and co-develop heterogeneous pipelines over large flows of conversational AI data, accelerating experimentation with new AI models and improvements.
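The "resilient, idempotent coordination" responsibility above can be sketched with a small idempotency layer in plain Python: derive a stable key from each record so that pipeline retries never trigger a duplicate side effect in the external system. The class name, key scheme, and sink interface are hypothetical illustrations, not Watson Orders' actual design.

```python
import hashlib
import json


class IdempotentWriter:
    """Hypothetical sketch: wrap writes to an external API/database so
    that pipeline retries do not produce duplicate side effects."""

    def __init__(self, sink):
        self._sink = sink   # callable standing in for an external system
        self._seen = {}     # idempotency key -> result of the first write

    def _key(self, record):
        # Stable key from record contents; sort_keys makes it deterministic.
        payload = json.dumps(record, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def write(self, record):
        key = self._key(record)
        if key in self._seen:
            # Retry of an already-applied write: return the cached result
            # instead of hitting the external system again.
            return self._seen[key]
        result = self._sink(record)
        self._seen[key] = result
        return result


# Usage: a fake external sink that records every real invocation.
calls = []
writer = IdempotentWriter(lambda r: calls.append(r) or len(calls))

first = writer.write({"order_id": 1, "item": "fries"})
retry = writer.write({"order_id": 1, "item": "fries"})  # no second call
```

In a production pipeline the key would typically be persisted in the external system itself (or a shared store) rather than in process memory, so idempotency survives worker restarts.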

Required Technical and Professional Expertise

5+ Years Professional Python Experience

2+ Years PubSub Experience (Kafka, Kinesis, SQS, MQTT, etc.)

3+ Years working in petabyte-scale data platforms

3+ Years working in AWS

Experience building schema-based parsers or ETLs using standard tooling in Python

Experience developing with Apache Avro, Parquet schemas, SQLAlchemy (or similar ORMs), and pySpark in Python
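The schema-based parsing skill above can be illustrated with a minimal Avro-style record validator in pure Python. This is a sketch only: the `Utterance` schema and its field names are invented for the example, and production code would use real tooling such as fastavro or pySpark rather than hand-rolled validation.

```python
# Hypothetical Avro-style record schema (field names invented for illustration).
SCHEMA = {
    "name": "Utterance",
    "type": "record",
    "fields": [
        {"name": "order_id", "type": "long"},
        {"name": "transcript", "type": "string"},
        {"name": "confidence", "type": "double"},
    ],
}

# Map Avro primitive type names onto the Python types that carry them.
_PY_TYPES = {"long": int, "string": str, "double": float}


def parse_record(raw: dict, schema: dict = SCHEMA) -> dict:
    """Validate `raw` against the schema; return only the declared fields."""
    out = {}
    for field in schema["fields"]:
        name, expected = field["name"], _PY_TYPES[field["type"]]
        if name not in raw:
            raise ValueError(f"missing field: {name}")
        value = raw[name]
        if not isinstance(value, expected):
            raise TypeError(f"{name}: expected {expected.__name__}")
        out[name] = value
    return out


record = parse_record(
    {"order_id": 42, "transcript": "two cheeseburgers", "confidence": 0.93}
)
```

The same schema-first shape carries over to real tooling: an Avro schema drives both serialization and validation, so parser code never hard-codes field layouts.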

Preferred Technical and Professional Expertise

Professional experience with data platforms powering conversational AI (chatbots, virtual assistants, etc.)

Professional experience developing and supporting large scale Lakehouses

Professional experience architecting and implementing large scale query engines such as Presto

What we offer:

  • Working for a top-5 IT company according to Forbes' 2022 best employers ranking
  • International and prestigious projects
  • Highly skilled teams of experts
  • Wide range of IBM trainings and certificates
  • Unlimited access to Udemy, Harvard Business Review, O'Reilly Safari, getAbstract, and the IBM AI Skills Academy

And what is more:

  • Contract of employment
  • Competitive compensation, with salary depending on your skills and experience
  • Private medical care and life insurance
  • Employee Assistance Program
  • Sport, charity & other networking groups
  • Summer / winter camps for children
  • Discounts with IBM employee badge
  • Referral Bonus Program
  • Home office option
  • No dress code
