
AI to replace depth sensors in smartphones, says Lucid’s CEO


Interviews | By eeNews Europe



The LucidCam 180º field of view 3D camera for VR
content creation. Dual camera, no depth sensor.

Back in 2014, even before Lucid was established as a company, co-founders Han Jin (CEO) and Adam Rowell (CTO) had set themselves the goal of improving robots’ vision and sense of their environment, treating a dual camera as the robot’s eyes. What really launched the company was the LucidCam 180º 3D VR consumer camera they pitched on Indiegogo in 2015. The compact camera has no depth sensor; 3D feature extraction is done purely in software by a well-trained machine-learning algorithm, which the company says delivers results on par with depth-sensor-equipped devices, without the added cost.

At Mobile World Congress Shanghai, Lucid announced that it wants to scale its core AI-enhanced 3D software technology into dual- and multi-camera mobile and smart devices, including smartphones, drones, smart speakers and robots.

Software-based 3D feature extraction isn’t new, of course, yet most dual-camera smartphones, drones and robots also sport a depth sensor for good measure. So what makes Lucid’s solution compelling enough for OEMs to license? We asked Lucid’s CEO in a phone interview.
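Lucid’s algorithm itself is proprietary, but the classical, software-only baseline it competes with is easy to sketch. The snippet below is a minimal illustration, with assumed camera parameters, of computing depth from a rectified stereo pair using OpenCV’s semi-global block matching:

```python
# Illustrative baseline only -- Lucid's ML pipeline is proprietary.
# Classical software-only depth from a dual camera: semi-global block
# matching on a rectified stereo pair, then disparity -> depth.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# depth = focal_length_px * baseline_m / disparity_px
FOCAL_PX, BASELINE_M = 700.0, 0.06     # assumed values for a phone-sized module
valid = disparity > 0                  # textureless regions often fail this test
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```

The `valid` mask is where classical matching gives up on textureless or badly lit surfaces; Lucid’s pitch, as the next sections explain, is that a trained network can fill exactly those gaps.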

Jin first gave us a brief market overview, noting that although dual cameras have been around for years, it is only in the last few years that these devices have benefited from more GPU power and connectivity.

“The 3D hype started back in 2012, driven by more powerful CPUs and GPUs for advanced computer vision. But now everything is more connected; these are no longer isolated devices whose content you have to export to a microSD card. 3D cameras are connected through apps, smartphones and the internet, yet stereoscopic data has not been looked at,” explains the CEO.


For years, the industry has struggled with critical cases: surfaces with no texture, bad lighting, or light coming strongly from one side. Hence the need to add a depth sensor, often based on structured IR light or time-of-flight (lidar). But Lucid claims it has been able to train its AI algorithms to circumvent those issues and make the appropriate corrections for 3D extraction.

“The way we as humans accurately perceive three dimensions and distances is not solely based on our two eyes but rather on a combination of experience, learning and inference. As chips and servers begin to approach the processing power of our brains, we can mimic this intelligence in software only, using AI and data on top of dual cameras,” observes Jin.

“We’ve collected all the robotic and drone data available over the last three to four years and applied AI to it,” said Jin. Since all this historical data came from robots and drones equipped with depth sensors, the startup was able to train its machine-learning algorithms to correlate the depth data with the stereoscopic data, so that they could eventually infer the right depth patterns purely from a stereoscopic video stream. To be ported to any device, the AI algorithm leverages a unique “vision profile”, which takes into account the hardware’s specifics, such as the baseline, the field of view and the other optical parameters of the dual-camera system, as well as any available neural-network and compute capacity. This profile is created during the manufacturing process.
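Lucid has not published its architecture, so the following is only a hedged sketch of the training setup Jin describes: stereo frames as input, co-recorded depth-sensor readings as ground truth, and a per-device “vision profile” carrying calibration parameters. The network, loss and profile fields are illustrative assumptions, written here in PyTorch.

```python
# Hedged sketch only: Lucid's actual network, loss and profile format are
# proprietary. This illustrates the described setup -- stereo frames in,
# depth-sensor readings as ground truth -- with invented details.
from dataclasses import dataclass
import torch
import torch.nn as nn

@dataclass
class VisionProfile:
    """Per-device calibration captured at manufacturing time (assumed fields)."""
    baseline_m: float    # distance between the two lenses
    fov_deg: float       # field of view
    focal_px: float      # focal length in pixels
    has_npu: bool        # whether on-device neural compute is available

class StereoDepthNet(nn.Module):
    """Toy model: 6 input channels = stacked left+right RGB frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),   # one-channel depth map
        )

    def forward(self, stereo_pair):
        return self.net(stereo_pair)

profile = VisionProfile(baseline_m=0.06, fov_deg=95.0, focal_px=700.0, has_npu=False)
model = StereoDepthNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(stereo_pair, sensor_depth):
    """stereo_pair: (B,6,H,W) tensor; sensor_depth: (B,1,H,W) from lidar/IR units."""
    opt.zero_grad()
    loss = loss_fn(model(stereo_pair), sensor_depth)  # learn to reproduce the sensor
    loss.backward()
    opt.step()
    return loss.item()
```

Once trained, the depth sensor becomes redundant: the network answers from the stereo stream alone, and the vision profile tells it how a particular device’s optics differ from the rigs the data was collected on.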

“The race is not on resolution, but on advanced software, AI and deep learning,” Jin emphasized.

The most compelling reason for OEMs to license Lucid’s proprietary real-time 3D Fusion technology is cost. As an example, Jin cited Apple’s iPhone X, whose structured-light IR depth sensor adds approximately $60 to the bill of materials, whereas with the new software a $10 dual camera could do a better job, he argued. AI would also be better than a depth sensor at face recognition, he says, since it could still recognize the same face with or without sunglasses.

For now, the company trains its AI algorithms in the cloud, because no chip could do what it wanted on-device. In the longer term, Lucid hopes the algorithm can keep learning directly on the device, improving the 3D camera’s performance without requiring software updates. This is becoming realistic as more and more smartphone SoC vendors offer AI processing capabilities and neural networks on their chips for so-called edge AI, removing the need for cloud processing power.
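As an illustration of the deployment path Jin anticipates (not Lucid’s actual toolchain), a trained network such as the toy StereoDepthNet sketched above can be exported to a portable format that on-device NPU runtimes consume, for example via ONNX:

```python
# Illustration, not Lucid's toolchain: exporting the toy StereoDepthNet
# from the sketch above to ONNX, a common interchange format that mobile
# NPU runtimes can consume for on-device ("edge AI") inference.
import torch

model.eval()
dummy = torch.randn(1, 6, 240, 320)    # one stacked stereo frame, assumed size
torch.onnx.export(model, dummy, "stereo_depth.onnx",
                  input_names=["stereo_pair"], output_names=["depth"])
```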


The LucidCam is still selling well, but Lucid’s CEO knows the company won’t be able to scale with a standalone product. It took note of what happened to GoPro, crushed by smartphones’ improving video performance and the commoditization of full-HD, high-frame-rate video, and it doesn’t want to make the same mistake.

3D depth extraction could be used in many
applications, from navigation to styling and architecture.

Instead, the surest way to scale the company is to license its software to OEMs for integration into dual-camera phones, laptops, tablets, drones, robots and smart appliances. Jin expects mass adoption of 3D cameras within the next couple of years, with smartphones driving the growth. Lucid is working with several mobile phone, camera and robot makers to incorporate its software into their devices, but Jin would not comment on any particular deals.

In the future, the company will work on a platform and open it up to individual developers and brands through an API, so they can tune their own 3D applications.
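No such API has been published yet, so the following is a purely hypothetical sketch of what a developer-facing call might look like; the endpoint, fields and response key are invented for illustration.

```python
# Purely hypothetical: Lucid has not published an API. The endpoint, fields
# and response key below are invented to illustrate the idea.
import requests

with open("left.png", "rb") as lf, open("right.png", "rb") as rf:
    resp = requests.post(
        "https://api.example-lucid-platform.com/v1/depth",  # invented endpoint
        headers={"Authorization": "Bearer <API_KEY>"},
        files={"left": lf, "right": rf},
        data={"baseline_m": 0.06, "fov_deg": 95},           # per-device vision profile
    )
depth_map_url = resp.json().get("depth_map")                # invented response field
```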

Lucid – www.Lucidinside.com

