Job summary

Plume crafts new-to-planet IoT experiences built on top of the world's best-performing home WiFi system. We are looking for an experienced Sr. DevOps Engineer to develop and support our cloud-based infrastructure. In concert with iOS, Cloud, Product, Design, and Hardware Engineering teams, you will help build, automate, and scale our customer and deployment infrastructure.

We're looking for more than an individual contributor. We’re in search of a talented engineer capable of implementing their vision of what automated, global infrastructure should look like. 

Responsibilities

  • Build, scale, and extend multi-datacenter monitoring using in-house and open-source tools such as Prometheus/Thanos, Icinga2, Graphite, and Grafana
  • Maintain and support data pipeline systems such as Airflow and Hadoop clusters
  • Support data science and analytics teams with notebook environments such as Zeppelin
  • Manage, monitor, maintain, and scale the AWS cloud infrastructure platform and the following services across multiple geographic regions: EC2, S3, Route 53, Elasticsearch, Spark, Hadoop, IAM, RDS/PostgreSQL
  • Automate end-to-end infrastructure using Terraform, CloudFormation, cloud-init, autoscaling, and open-source configuration management
  • Write autoscale configurations to automatically provision customer production services (an illustrative sketch follows this list)
  • Write configuration management formulas using Saltstack and Python
  • Write automation tools in Python, Perl, or Bash
  • Manage operations source code and version controls using Git
  • Manage and monitor open-source database services: MongoDB, Kafka, YugabyteDB
  • Configure monitoring, replication, clustering and automated deployment 
  • Administer employee authorization and authentication systems using industry standard LDAP and VPN technologies
  • Perform software updates for production code using Jenkins
  • Assist Plume employees with technical issues (login access, application debugging, and other general items that come up on a day-to-day basis)
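
To give a feel for the autoscale-provisioning work described above, here is a minimal, illustrative Python sketch using boto3 to adjust an Auto Scaling group's desired capacity. The group name, region, and capacity value are hypothetical placeholders, not Plume's actual configuration.

# Illustrative sketch only: adjusts an AWS Auto Scaling group's desired
# capacity with boto3. Group name, region, and target size are placeholders.
import boto3

def scale_service(group_name: str, desired: int, region: str = "us-west-2") -> None:
    """Set the desired capacity of an Auto Scaling group, respecting its MaxSize."""
    client = boto3.client("autoscaling", region_name=region)

    # Look up the group so we never request more instances than MaxSize allows.
    response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[group_name])
    group = response["AutoScalingGroups"][0]
    target = min(desired, group["MaxSize"])

    client.set_desired_capacity(
        AutoScalingGroupName=group_name,
        DesiredCapacity=target,
        HonorCooldown=True,
    )
    print(f"{group_name}: desired capacity set to {target}")

if __name__ == "__main__":
    scale_service("customer-prod-asg", desired=6)  # placeholder group name

In practice a script like this would typically be driven by monitoring signals or a deployment pipeline rather than run by hand.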

Qualifications

  • Bachelor's degree in Computer Science, Computer Engineering, or Electrical Engineering, or 5+ years of equivalent professional experience

Position also requires the following:

  • 1 year (of the total required years of experience) must include DevOps and infrastructure engineering work, including deployment automation and knowledge of infrastructure services
  • 1 year (of the total required years of experience) must include experience with a configuration management framework: Puppet, Chef, Saltstack, or Ansible
  • Experience using Jenkins for CI/CD Pipelines
  • Familiarity with data pipeline tooling such as Hadoop clusters (EMR, Databricks, Dataproc) and Apache Airflow (DAGs, Operators, and Sensors); a brief illustrative DAG sketch follows this list
  • Experience with databases, including relational databases, MongoDB, and Elasticsearch
  • Development experience with open source monitoring tools such as Grafana, Icinga2, and Nagios
  • Solid foundation with Debian, Ubuntu and Enterprise Linux
  • Fluency in a programming language such as Perl, Python, PHP, or Ruby
  • Bonus points for C, C++, or Java
  • Prior experience working in a startup is a must
  • Ability to work on a 24/7/365 on-call rotation
  • Ability to think critically and solve problems
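
As a point of reference for the Airflow familiarity mentioned above, the sketch below shows a minimal DAG wiring a Sensor to an Operator. It assumes Airflow 2.x; the DAG id, file path, and command are hypothetical placeholders.

# Illustrative sketch only (assumes Airflow 2.x): a minimal DAG that waits for a
# file to appear, then runs a placeholder processing step.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.sensors.filesystem import FileSensor

with DAG(
    dag_id="example_daily_ingest",   # placeholder name, not a real pipeline
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Sensor: wait until the day's export file exists before processing it.
    wait_for_export = FileSensor(
        task_id="wait_for_export",
        filepath="/data/incoming/export.csv",  # placeholder path
        poke_interval=300,                      # check every 5 minutes
    )

    # Operator: placeholder step; a real pipeline might launch a Spark job here.
    process_export = BashOperator(
        task_id="process_export",
        bash_command="echo 'processing export'",
    )

    wait_for_export >> process_export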