Scala Data Engineer – HPE InfoSight
HPE is seeking an outstanding software engineer to play a key role in helping the HPE InfoSight team build AI for the datacenter. HPE InfoSight allows HPE partners and customers to optimize, manage, and protect their datacenter infrastructure, while helping customer support, engineering, and sales deliver more value to our customers.
You will be joining a small, agile, empowered team, focused on analyzing call-home data sent from HPE storage and enterprise products to provide business value through analytics. The team leverages a modern big-data and microservice-based technology stack for our end-to-end data processing, analysis, API, and web application – to provide our users with the insights they need to be successful.
- Technical contributor as a full-stack developer in a small, cross-functional development team, focused on providing data analytics as a service to internal and external HPE customers.
- Contribute to the continuous improvement of our IoT analytics platform, powered by Scala, Spark, Mesos, Akka, Cassandra, Kafka, Elasticsearch, and Vertica.
- Develop unit, integration, system, or other tests needed to help the team deliver value quickly, with high quality, to our customers.
- Leverage big-data technologies for data analytics, including Hadoop/Spark, Vertica (SQL), and Elasticsearch.
- Develop automation for continuous delivery, testing, and monitoring of our application and infrastructure, using Mesosphere DCOS, Jenkins, Ansible, Kibana, and others.
Education and Experience
Bachelor's or Master's degree in Computer Science, Engineering, or equivalent, and a minimum of 5-7 years of experience.
Knowledge and Skills
- Team player with a passion for learning, programming, automation, and data analytics.
- Excellent programming skills, with experience or an interest in learning functional programming.
- Excellent analytical and problem-solving skills.
- Excellent communication skills.
We are looking for a candidate with some or all of the following:
- Experience building a data pipeline using Scala, Java, or Python, preferably with Spark and Kafka.
- Data analytics experience with SQL, NoSQL, Hadoop, or ideally Spark.
- Machine learning experience.
- Linux development or system administration experience, including Python or Bash scripting.
- Automation experience with Ansible, Chef, Puppet, or similar tools.