About PatSnap

Patsnap empowers IP and R&D teams by providing better answers, so they can make faster decisions with more confidence. Founded in 2007, Patsnap is the global leader in AI-powered IP and R&D intelligence. Our domain-specific LLM, trained on our extensive proprietary innovation data, coupled with Hiro, our AI assistant, delivers actionable insights that increase productivity for IP tasks by 75% and reduce R&D wastage by 25%. IP and R&D teams collaborate better with a user-friendly platform across the entire innovation lifecycle. Over 15,000 companies trust Patsnap to innovate faster with AI, including NASA, Tesla, PayPal, Sanofi, Dow Chemical, and Wilson Sonsini.

About the Role

We are seeking a passionate MLOps Engineer to join our team and drive the deployment, monitoring, and optimization of machine learning models in production. This role will be key in ensuring the reliability, scalability, and efficiency of our ML infrastructure while supporting the development and release of AI-driven solutions. If you have a strong background in cloud technologies, automation, and ML model deployment, this is an excellent opportunity to work on cutting-edge AI applications.

Responsibilities

  • Design, build, and maintain scalable ML model deployment pipelines for real-time and batch inference.
  • Manage and optimize cloud-based ML infrastructure, ensuring high availability and cost efficiency.
  • Implement monitoring, logging, and alerting systems for ML models in production to track performance, data drift, and anomalies.
  • Automate model training, evaluation, and deployment processes using CI/CD pipelines.
  • Ensure compliance with MLOps best practices, including model versioning, reproducibility, and governance.
  • Collaborate with data scientists, ML engineers, and software developers to streamline the transition of models from development to production.
  • Optimize model serving infrastructure using Kubernetes, Docker, and serverless technologies.
  • Improve data pipelines for feature engineering, data preprocessing, and real-time data streaming.
  • Research and implement tools for scalable AI development, such as Retrieval-Augmented Generation (RAG) and agent-based applications.

Qualifications

  • Hands-on experience with MLOps platforms (e.g., MLflow, Kubeflow, TFX, SageMaker).
  • Strong expertise in cloud services (e.g., AWS, GCP, Azure).
  • Proficiency in containerization (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation).
  • Experience in building CI/CD pipelines for machine learning models.
  • Solid programming skills in Python, Go, or Shell scripting for automation.
  • Familiarity with data versioning and model monitoring tools (DVC, Evidently AI, Prometheus, Grafana).
  • Understanding of feature stores and efficient data management for ML workflows.
  • Strong problem-solving skills with a proactive, self-motivated attitude.
  • Excellent collaboration and communication skills to work in a cross-functional team.
  • Fluent in Mandarin for effective communication within a multilingual team environment.

Why Join Us

  • Work with cutting-edge MLOps and AI deployment technologies in a fast-growing industry.
  • Be part of a dynamic and innovative team focused on AI and cloud solutions.
  • Gain exposure to end-to-end machine learning workflows, from data processing to model deployment.
  • Opportunities for professional growth in cloud computing, automation, and AI infrastructure.

