Joining Razer places you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make a global impact while working with a team spread across five continents. Razer is also a great place to work, providing the unique, gamer-centric #LifeAtRazer experience that will accelerate your growth, both personally and professionally.
Job Responsibilities:
Summary
We are seeking a highly skilled AI Software Engineer specializing in API automation to design, develop, and maintain the infrastructure required to collect, process, and manage large-scale data for AI applications. This role focuses on developing API integrations for automating data pipelines and ensuring scalable access to AI-ready datasets.
You will collaborate closely with data engineers, data scientists, and DevOps teams to build the APIs that fuel AI/Machine Learning (ML) models. The ideal candidate has strong software development, API design, and data pipeline automation skills, along with a deep understanding of AI-driven data needs.
Essential Duties and Responsibilities
- Develop and manage API-based data ingestion pipelines to collect structured and unstructured data from third-party sources.
- Integrate with external APIs (REST, GraphQL, gRPC) and internal data services to automate data retrieval.
- Build custom APIs to provide AI teams with seamless access to curated datasets.
- Implement real-time data streaming solutions using Kafka, WebSockets, or RabbitMQ.
- Develop event-driven architectures for data ingestion using cloud-native tools (AWS Lambda, Google Cloud Functions).
- Work with SQL/NoSQL databases (PostgreSQL, MongoDB, Elasticsearch) for efficient data storage and retrieval.
- Implement CI/CD pipelines for automated data pipeline deployment and monitoring.
- Optimize API performance and ensure fault tolerance, scalability, and security in data sourcing pipelines.
- Implement secure authentication and authorization (OAuth, JWT) for API-based data access.
- Monitor, troubleshoot, and continuously optimize data sourcing pipelines for efficiency.
- Collaborate with data sourcing specialists, DevOps engineers, and cloud engineers to align API development with data pipelines and data sourcing requirements.
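To give a flavor of the ingestion work described above, here is a minimal, hypothetical sketch of one pipeline step: pulling JSON records from a third-party REST API and normalizing them for downstream storage. The endpoint URL, token, and field names (`id`, `body`) are illustrative placeholders, not part of the role description.

```python
import json
from urllib.request import Request, urlopen


def fetch_records(url: str, token: str) -> list:
    """Pull one page of JSON records from a third-party REST API."""
    req = Request(url, headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        return json.load(resp)


def normalize(record: dict) -> dict:
    """Map a raw third-party record onto the fields downstream AI jobs consume."""
    return {
        "id": str(record.get("id", "")),
        "text": (record.get("body") or "").strip(),
        "source": "third_party_api",
    }
```

In a production pipeline this step would typically run inside an orchestrator (e.g. an Airflow task) with retries, pagination, and rate-limit handling around `fetch_records`.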
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, AI, or a similar discipline from an accredited institution.
- 3+ years of experience in software development, data infrastructure, or API engineering.
- Proven track record of building scalable data ingestion and automation pipelines for AI applications.
- Familiarity with MLOps and AI model data preparation.
- Strong programming skills in Python, Java, or Go.
- Hands-on experience with API development (FastAPI, Flask, Express.js, GraphQL, gRPC) and with data pipeline tools (Apache Airflow, Prefect, Luigi, Dagster).
- Proficiency in SQL/NoSQL databases and vector databases.
- Expertise in cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
- Familiarity with event-driven architectures and real-time data streaming (Kafka, WebSockets, RabbitMQ).
- Ability to work in a fast-paced, AI-driven environment with evolving requirements.
- Strong problem-solving skills to handle complex data integration challenges.
- Excellent collaboration and communication skills, especially in cross-functional AI teams.
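One duty listed above, securing API-based data access with JWT, can be sketched in a few lines of standard-library Python. This is a bare HS256 signature check for illustration only; in practice a maintained library such as PyJWT would handle algorithms, expiry, and audience claims. The secret and claim names here are invented placeholders.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional


def _b64url_decode(segment: str) -> bytes:
    """Decode base64url, restoring the padding that JWTs strip off."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def verify_hs256(token: str, secret: bytes) -> Optional[dict]:
    """Return the claims of an HS256-signed JWT, or None if verification fails."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # not a three-part JWT
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        return None  # signature mismatch: reject the request
    return json.loads(_b64url_decode(payload_b64))
```

An API gateway or middleware would call `verify_hs256` on each request's bearer token before allowing access to the curated datasets.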
Are you game?