Embracing new 3D imaging technologies

April 04, 2018 // By Bernhard Estermann
For a growing number of consumers, the experience of buying new clothes has changed significantly in the past decade or so. The introduction of virtual changing rooms is already altering the way people select new clothes and even choose an outfit from their existing wardrobe.

A growing number of technological solutions are changing the shape of retail in general, with the online experience becoming more significant at a rapid pace. It may have seemed that something as personal as buying clothes would always be a physical experience but, increasingly, technology is allowing consumers to virtually try on clothes before they buy and, in some cases, even before they reach the high street.

The online avatar is a concept that allows a person’s shape and size to be faithfully reproduced in the virtual world and dressed with digital representations of real garments. Mixing and matching, finding the perfect fit and even having garments made to measure are now all aspects of the purchasing process that have become part of the augmented world.

 

Made to measure

Technology is at the heart of this retail revolution. In the not so distant past, shops would employ models to parade around in the latest fashions so that ladies (and perhaps gentlemen) could see how the clothes would wear before buying. Mass production pushed the price of clothing down to a point where it was difficult for most retailers to justify such a lavish expense; instead, consumers were invited to try the clothes on there, in the shop. Until relatively recently the private changing room was the pinnacle of purchasing privacy, but now you can see the clothes on a three-dimensional virtual representation of yourself without the need to shed a stitch. Furthermore, you can be measured by technology to ensure you get a perfect fit, every time.

Body scanners are becoming more common. Simple solutions based on phone cameras can be used in the home for online shoppers, while more sophisticated versions are being installed in changing rooms – or perhaps that should be fitting rooms – around the world.


One advantage of body scanning is that a customer can be measured once, typically in the store, and then use their recorded measurements through an online portal for subsequent purchases, thereby ensuring a perfect fit every time. Thullex is just one example of how a company can follow this philosophy to create a business. The start-up’s solution, which it calls the 3D Butler, uses a technology called RealSense to create a body scanner that can measure a customer to within 2 mm in a matter of seconds. It uses multiple sensors to capture the data in three dimensions, which is subsequently used to create a two-dimensional cutting pattern. By mixing modern measuring methods with traditional tailoring techniques, the company hopes to bring the benefits of body scanning to the world of bespoke fashion.

 

Coded light

As mentioned, the system is based on RealSense, a technology platform developed by Intel that brings together image capture sensors and infrared laser projection capabilities. It can be used to enable a wide range of applications, including facial analysis, body tracking and robotics.

RealSense can be used to create virtual, augmented or mixed reality experiences, as well as provide the processing power behind robotics or drones. As already demonstrated, it can form the basis for 3D scanning solutions for use in the home or retail environment, or even for surveillance. Automation in vertical industries such as logistics and transportation is another growing application area.

The primary technique employed by RealSense is coded light, an enabling technology for all the applications outlined above. Coded light involves projecting a known pattern onto an unknown surface and detecting how that pattern is deformed, from which the 3D shape of the object being viewed can be inferred.
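The geometry behind this is simple triangulation: decoding the pattern tells the system which projector column illuminated each camera pixel, and the shift between the two gives depth. The sketch below illustrates the principle only; the function name and all parameter values are invented for this example and do not reflect Intel's implementation.

```python
# Coded-light depth sketch: a projector sits at a known baseline from the
# camera and emits a pattern whose columns are uniquely coded. Decoding the
# pattern at a camera pixel reveals the projector column that lit it; the
# column shift (disparity) then yields depth by triangulation.

def coded_light_depth(cam_col, proj_col, focal_px, baseline_m):
    """Depth in metres from the camera column where a pattern code was seen
    versus the projector column that emitted it."""
    disparity = proj_col - cam_col  # pattern deformation, in pixels
    if disparity <= 0:
        raise ValueError("object at infinity or pattern decode error")
    return focal_px * baseline_m / disparity

# Illustrative numbers: 600 px focal length, 40 mm projector-camera baseline.
depth = coded_light_depth(cam_col=300, proj_col=330, focal_px=600, baseline_m=0.04)
print(round(depth, 3))  # 600 * 0.04 / 30 = 0.8 m
```

Note that a closer object deforms the pattern more (larger disparity), which is why depth resolution is best at short range.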

One of the core components of RealSense is the SR300 subassembly (see figure 1), which comprises an infrared projector and camera, a colour camera and a dedicated imaging ASIC for controlling the component parts and pre-processing the captured data before passing it to a more powerful host processor over a USB3 interface.


Fig. 1: A representation of the SR300 subassembly (Source: Intel)

Fig. 2: A typical 3D imaging system using the RealSense
SR300 from Intel (Source: Intel)

A typical 3D imaging system using the SR300 is shown in figure 2. The infrared elements of the system implement the coded light technique to capture a monochromatic image in two dimensions, which is processed by the ASIC to calculate the relative depth of objects in the image. The colour camera can operate independently or in collaboration with the infrared projector/camera. The process for capturing video with depth data is shown in figure 3.

Fig. 3: Depth video data flow using the Intel RealSense SR300 platform (Source: Intel)
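Once a per-pixel depth value has been calculated, the host typically deprojects each pixel into a 3D point in the camera's coordinate frame using the standard pinhole model. A minimal sketch, with intrinsic values made up for illustration:

```python
def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Map an image pixel (u, v) plus its measured depth to a 3D point
    (x, y, z) in the camera frame, using pinhole camera intrinsics:
    fx/fy are focal lengths in pixels, (cx, cy) the principal point."""
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return (x, y, depth_m)

# Principal point at the centre of a 640x480 frame, illustrative focal lengths.
point = deproject_pixel(u=400, v=300, depth_m=1.0,
                        fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(point)
```

Applying this to every pixel of a depth frame produces the point cloud that downstream applications, such as body measurement, work from.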

 

Stereo – the next generation

Most animals have stereoscopic vision; two eyes set a distance apart that allows the brain to quickly and accurately calculate distances. Whether predator or prey, stereoscopic vision is a key attribute in the animal kingdom, and the same can be said for machine vision (although admittedly for different reasons). When coupled with the right ‘brain’, using two cameras instead of one enables depth data to be gathered even more effectively.

This is illustrated by the next generation of Intel’s RealSense technology: the D400 Series. It comprises a left and a right infrared camera and an optional infrared projector, which can be beneficial in scenes with limited texture. However, it is the stereoscopic nature of the D400 that delivers the key feature of 3D vision. It achieves this thanks to the latest ‘brain’, the Vision Processor D4. Figure 4 shows a block diagram of the D400 Series depth module.

Fig. 4: The RealSense Stereo D415/D435 system
block diagram (Source: Intel).
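The stereo principle reduces to the same triangulation as coded light, with the second camera taking the projector's place: a feature seen at different horizontal positions in the left and right images gives a disparity, and depth follows from the baseline and focal length. A sketch using the 55 mm baseline quoted below for the D415 (the focal length and pixel positions are illustrative):

```python
def stereo_depth(left_x, right_x, focal_px, baseline_m):
    """Depth in metres from the horizontal disparity between a feature's
    matched pixel positions in the left and right images."""
    disparity = left_x - right_x  # pixels; larger disparity = closer object
    if disparity <= 0:
        raise ValueError("no valid match, or object beyond usable range")
    return focal_px * baseline_m / disparity

# A feature matched at x=352 in the left image and x=341 in the right,
# with a 55 mm baseline and a 700 px focal length:
print(round(stereo_depth(352, 341, 700, 0.055), 2))  # 700 * 0.055 / 11 = 3.5 m
```

The hard part in practice is the matching itself: finding which left-image pixel corresponds to which right-image pixel, which is exactly where a dedicated vision processor such as the D4 earns its keep.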

Each of the cameras features a 1080p RGB image sensor, set either 55 mm or 50 mm apart and offering either a standard or a wide-angle field of view (for the D415 and D435, respectively). The Vision Processor D4 is an ASIC purpose-designed for computer vision, able to deliver results faster and at lower power than a system that offloads the image processing to a host processor, allowing the Depth Camera to be powered by USB.


Like the SR300, the D400 Series is supported by a software development kit: the SDK 2.0, known as LibRealSense. This cross-platform, open-source SDK includes the tools needed to get image data out of the Depth Camera, and it also comes with a number of code examples. The tools included are the Intel RealSense Viewer, a GUI-based application for easy evaluation, and the Depth Quality Test tool for Intel RealSense Cameras, a GUI-based tool used to get the best out of the camera’s ability to measure depth.

 

Conclusion

As society becomes more comfortable interacting with services delivered through machines we can expect new solutions for old problems to appear, designed to make lives easier while providing a more engaging user experience. 3D machine vision is a major part of that transition and will be aided by solutions that successfully embrace the latest technologies to create innovative applications.

 

About the author:

Bernhard Estermann is Intel Line Manager at Rutronik.

