Experian DataLabs is an R&D unit at Experian focused on researching and developing innovative solutions by applying machine learning, big data technology, and the vast data assets Experian has and may acquire. Experian® is a global leader in providing information, analytical tools and marketing services to organizations and consumers to help manage the risk and reward of commercial and financial decisions. Using our comprehensive understanding of individuals, markets and economies, we help organizations find, develop and manage customer relationships to make their businesses more profitable.

Position Summary:

The Software Engineer will concentrate on prototyping and developing new analytical solutions and platforms, including building tools and developing processes. These solutions will typically leverage modern big data software stacks such as Cassandra, Spark, Solr, and MongoDB. The Software Engineer will use their software development skills to prototype and design solutions based on business requirements, make recommendations on approaches, evaluate alternative solutions, and deploy prototypes developed in the Lab into production. The position will require learning to work with large datasets, but prior experience with them is not required; we have found that candidates with strong programming skills become strong contributors in this environment. The candidate must be detail oriented, curious, and able to adapt to business problems in different industries. The candidate must be able to balance multiple concurrent projects, prioritize among them, anticipate obstacles, and make high-quality deliveries on an aggressive schedule. The candidate must also be a self-motivated team player with excellent communication skills.

Key job functions include:

  • Utilize programming skills (Java, Python, etc.) to develop data processing and information retrieval tools
  • Refine and implement functional requirements, including scoping, detailed design, effort estimation, coding, maintenance, and support
  • Convert business requirements into working prototypes and, eventually, production deployments
  • Design and implement high-performance, scalable data solutions
  • Prepare and install solutions by determining and designing system specifications and programming
  • Learn to analyze and process large data sets using new open source technologies
  • Use new and existing data processing tools to independently analyze and draw conclusions from large data sets


Qualifications:

  • BS/MS/PhD degree in computer science, computer engineering, or another quantitative field
  • 3-10 years of relevant work experience
  • Proficient in Java, Python, or C/C++
  • Familiarity with Unix and scripting languages
  • Experience with Big Data, e.g. Hadoop and associated tools
  • Must be able to quickly understand technical and business requirements and translate them into technical implementations
  • Past work with datasets is also a plus


Preferred qualifications:

  • Experience in developing large-scale software platforms involving ETL, data quality, data fusion, and real-time ingestion and delivery
  • Experience with Hadoop and NoSQL-related technologies such as MapReduce, Spark, Hive, Pig, HBase, MongoDB, Cassandra, Solr, Elasticsearch, etc.
  • Experience with graph databases
  • Experience with streaming data processing platforms such as Kafka
  • Experience with data collection using public APIs
  • Experience in developing real-time solutions
  • Experience with data visualization tools such as JavaScript libraries, jQuery, etc.
  • Basic understanding of web applications, web services, and related ecosystem


Product Development

Primary Location

United States-California-San Diego



Job Posting

07/10/2017, 3:50:33 PM


