About the Role:
At SoundHound, we view our data and our engineering team as two of our biggest assets, and this role lives at their intersection. We have a huge amount of data from hundreds of millions of users across:
● SoundHound: music app featuring search, discovery, and play with LiveLyrics
● Hound: newly released app featuring unprecedented speech recognition and natural language understanding
● Houndify: platform enabling developers to add a voice-enabled conversational interface to anything
We aspire to leverage this data to make informed decisions that steer product development, marketing, and user engagement. We have only scratched the surface of the advanced analytics and insight generation we'd like to do! This is an opportunity to work on interesting data engineering and data science problems, build large-scale distributed machine learning systems from the ground up, and use cutting-edge Big Data technologies like Spark, Kafka, HBase, and Hive.
Responsibilities:
● Design and implement data pipelines that power real-time insights.
● Leverage massive datasets for modeling, recommendations, and reporting solutions.
● Understand query intent using NLP, machine learning, and deep learning, and apply that understanding to ad targeting.
● Build scalable, user-facing systems powering ad targeting, push, and/or in-app messaging.
● Drive the framework for A/B testing, exposing results through visualization tools like Tableau.
● Provide technical leadership as the team executes critical cross-functional projects.
● Identify, leverage, and successfully evangelize opportunities to improve engineering productivity.
Requirements:
● Experience developing large-scale applications, complex data pipelines, and related projects.
● Strong coding experience preferably in Java, Scala, or Python.
● Hands-on experience with large-scale Big Data environments (Spark, Kafka, Hive, Hadoop).
● Ability to handle multiple competing priorities in a fast-paced environment.
● BS/MS in Computer Science or equivalent.
Nice to Haves:
● Familiarity with NoSQL stores such as HBase/Cassandra, Redis, Riak, and/or MongoDB.
● Familiarity with data modeling, machine learning, and frameworks like Spark MLlib.
● Experience with analytical tools supporting data analysis and reporting (e.g., Tableau).