
Meta Reality Labs brings together a world-class team of researchers, developers, and engineers to create the future of virtual and augmented reality, which together will become as universal and essential as smartphones and personal computers are today. And just as personal computers have done over the past 45 years, AR and VR will ultimately change everything about how we work, play, and connect. We are developing all the technologies needed to enable breakthrough AR glasses and VR headsets, including optics and displays, computer vision, audio, graphics, brain-computer interfaces, haptic interaction, eye/hand/face/body tracking, perception science, and true telepresence. Some of those will advance much faster than others, but they all need to happen to enable AR and VR that are so compelling that they become an integral part of our lives.

Meta Reality Labs Research is looking for emerging scientists and researchers with a strong interest in depth sensing and processing. During this internship, you will work closely with our researchers on an aspect of depth sensing to help advance the state of the art in the field. Our team researches diverse methods of real-time depth sensing, including stereo vision, passive and active illumination methods, and others. Our research extends into depth processing with novel techniques focused on high-impact AR/VR applications. Our internships are twelve (12), sixteen (16), or twenty-four (24) weeks long, and we have various start dates throughout the year.

Research Scientist Intern, Depth (PhD) Responsibilities

  • Collaborate with research scientists and engineers from a variety of technical backgrounds and cross-functional partners through all stages of depth system development including communicating research plans, progress, and results.
  • Publish research results and contribute to research that can be applied to Meta product development.
  • Research the deployment of ML algorithms on hardware architectures.
  • Prototype, build, and characterize experimental systems and custom hardware.
  • Develop advanced optical techniques for dynamic characterization of AR/VR products and subsystems including experimental design, data acquisition, signal processing and model correlation.
  • Contribute to research on novel depth sensing modalities for AR/VR systems.
  • Research the design, modeling, and execution of ML algorithms for depth sensing.

Minimum Qualifications

  • Currently has, or is in the process of obtaining, a PhD in Electrical Engineering, Computer Science, Mechanical Engineering, Robotics, Mechanics, or a relevant technical field.
  • Must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment.
  • Research experience and a strong, hands-on understanding of depth estimation techniques, including but not limited to depth completion and upsampling, monocular depth estimation, and stereo depth estimation.
  • 2+ years of programming, simulation, and modeling experience in languages such as Python or C++.

Preferred Qualifications

  • Proven track record of achieving significant results, as demonstrated by grants, fellowships, patents, and first-authored publications at leading workshops or conferences such as CVPR, ICCV, ECCV, NeurIPS, or ICML.
  • Experience working and communicating cross-functionally in a team environment.
  • Experience with deep learning in computational photography, neural rendering, 3D reconstruction, computational imaging, or related topics.
  • Experience in localization and mapping algorithms (SLAM, VIO, multi-view geometry, calibration, scene reconstruction, etc.).
  • Experience building novel perception systems that involve low-level, cross-platform efforts.
  • Experience with non-deep-learning techniques for depth reconstruction.
  • Experience in multimodal deep learning including image and depth inputs.
  • Experience in generating synthetic data and addressing domain gaps.

About Meta

Meta builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology. People who choose to build their careers by building with us at Meta help shape a future that will take us beyond what digital connection makes possible today—beyond the constraints of screens, the limits of distance, and even the rules of physics.

Meta is committed to providing reasonable support (called accommodations) in our recruiting processes for candidates with disabilities, long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support. If you need support, please reach out to accommodations-ext@fb.com.

$7,313/month to $11,333/month + benefits

Individual pay is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base salary, Meta offers benefits. Learn more about benefits at Meta.

Confirmed 17 hours ago. Posted 30+ days ago.
