Information Technology | Redwood City, CA, United States





Essential Duties

  • Design, install and maintain the big data analytics platform;
  • Manage public and private cloud infrastructure;
  • Support the analytics team's requests for specialised solutions;
  • Troubleshoot and performance-tune analytics jobs;
  • Maintain and improve system automation and management tools;
  • Administer and manage the web and online gaming server environment globally;
  • Work alongside the system administrators located at other sites;
  • Monitor online services and take necessary actions to ensure SLAs are met or exceeded;
  • Monitor system health and security;
  • Create system, process and workflow documentation;
  • Provide technical support for all aspects of the infrastructure and help developers debug, troubleshoot and optimize online software components and tools;
  • Work with agile delivery teams to ensure build management, automated testing and software deployment;
  • Work closely with hosting companies and datacentres to improve the service delivery throughout the business;
  • Take a proactive approach to identifying and alleviating capacity-planning issues and bottlenecks, and forecast resource requirements;
  • Provide high-quality support of the infrastructure and liaise with the other 3rd-level support teams regarding networking, server and storage issues;
  • Script, manage, configure and tailor monitoring solutions and respond to incidents.
  • Work well in a team, create well-organized documentation, and follow procedures;
  • Open to providing 24/7 support in the future if required.

Competencies, Skills & Knowledge

  • In-depth knowledge of big data processing solutions, architectures, system components;
  • Competent programming or scripting background in one or more of the following languages: Python, Java, Scala, R, Bash, JavaScript;
  • Solid network troubleshooting experience;
  • Commitment to high-quality work, finding the right solutions and following best practices;
  • Strong focus on business outcomes and a service-provider attitude;
  • Active involvement in cloud auto-scaling solution development;
  • Data protection and encryption experience;
  • Experience in continuous integration/continuous delivery;
  • In-depth Puppet/Chef/Ansible knowledge;
  • Hybrid cloud experience with scale out to public cloud;
  • Working with high traffic web applications and dynamic scaling;
  • Experience with any of Prometheus, Grafana, Kibana, Icinga, Observer, Cacti, Logstash, rsyslog, Graylog;
  • Working in production public cloud environment;

Essential Requirements

  • Experience working in production public cloud environment;
  • Experience in building and managing big data processing pipelines in a production environment;
  • Experience with managing both SQL and NoSQL systems;
  • In-depth knowledge and management experience with AWS and GCP automation pipelines and serverless architectures;
  • Strong Linux system administration background;
  • System automation experience covering deployment and configuration management;
  • Experience in writing, debugging and troubleshooting analytics jobs, MapReduce jobs and Spark jobs in Python/Scala/Java;
  • Hands-on experience with one or more of the following components: Hadoop,
  • Experience with data protection regulations, PCI;
  • Understanding of encryption solutions;
  • Performance tuning and optimization of analytics jobs;
  • Experience with real-time analytics and machine learning.

Plus

  • Hadoop admin certification;
  • AWS/GCP certification: DevOps/SysOps/Architect;
  • Linux certificates Ubuntu/RedHat;
  • Comfortable with open communication;
  • Available to work outside office hours and remotely;
  • Interest in video games, online gaming.

Square Enix and Crystal Dynamics are EOE and M/F/D/V employers.




