Advances in Laser Scanners


Lars Lindner, Oleg Sergiyenko, Moisés Rivas-Lopez, Wendy Flores-Fuentes, Julio C. Rodríguez-Quiñonez, Daniel Hernandez-Balbuena, Fabian N. Murrieta-Rico, Mykhailo Ivanov
DOI: 10.4018/978-1-7998-6522-3.ch002

Abstract

One focus of the present chapter is the further development of the technical vision system (TVS). The TVS contains two principal parts, the positioning laser (PL) and the scanning aperture (SA), which implement the optomechanical function of dynamic triangulation. Previous versions of the TVS used stepping motors to position the laser beam, which leads to a discrete field of view (FOV): with stepping motors, dead zones inevitably arise in which 3D coordinates cannot be detected. One advance of this TVS is the substitution of these discrete actuators by DC motors, which eliminates the dead zones and enables a continuous laser scan over the TVS FOV. Previous versions of the TVS also used a constant step response as the closed-loop input. The chapter therefore describes a new approach to positioning the TVS laser ray in the FOV, using a trapezoidal velocity profile as the trajectory.
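A trapezoidal velocity profile accelerates the laser positioner at a constant rate to a cruise velocity, holds that velocity, and then decelerates symmetrically. The sketch below (function name and parameters are illustrative assumptions, not the chapter's actual implementation) generates such a profile for a given scan angle; when the angle is too short to reach the cruise velocity, the profile degenerates to a triangle:

```python
def trapezoidal_profile(total_angle, v_max, a_max, dt=0.001):
    """Sample a trapezoidal velocity profile covering total_angle (rad).

    v_max: cruise angular velocity (rad/s), a_max: angular
    acceleration (rad/s^2). Returns a list of (time, velocity) samples.
    Illustrative sketch only -- names and units are assumptions.
    """
    t_acc = v_max / a_max                 # time to reach cruise velocity
    d_acc = 0.5 * a_max * t_acc ** 2      # angle covered while accelerating
    if 2 * d_acc > total_angle:
        # Move too short to reach v_max: triangular profile instead
        t_acc = (total_angle / a_max) ** 0.5
        v_peak = a_max * t_acc
        t_flat = 0.0
    else:
        v_peak = v_max
        t_flat = (total_angle - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_flat
    profile, t = [], 0.0
    while t <= t_total:
        if t < t_acc:                     # acceleration ramp
            v = a_max * t
        elif t < t_acc + t_flat:          # constant-velocity plateau
            v = v_peak
        else:                             # deceleration ramp
            v = max(0.0, v_peak - a_max * (t - t_acc - t_flat))
        profile.append((t, v))
        t += dt
    return profile
```

Compared with a constant step-response input, such a trajectory bounds both velocity and acceleration, which is why it is attractive as a closed-loop reference for the DC-motor-driven positioner.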

Introduction

Laser scanners are optical devices that use lasers to obtain information about surface topography, surface coordinates, or other characteristics by physically sensing the displacement of a light spot across a surface. In contrast to stylus instruments, laser scanners measure without contact and therefore achieve higher scanning speeds.

Applications for contactless measurement of 3D coordinates mostly use optical signals with CCD cameras or laser signals in laser scanning systems (Toth & Zivcak, 2014). Cameras have the advantage that they resemble human vision (Sergiyenko O., Optoelectronic System for Mobile Robot Navigation, Optoelectronics, 2010), which makes it easy to implement algorithms for detecting different scenarios in an unknown environment. In addition, the scanning results do not depend on the surface properties of the examined object when cameras are used. On the other hand, cameras are not preferable for single-coordinate measurements (e.g., distance), due to the large amount of data they generate. Another disadvantage is their dependence on the condition and existence of visible light and on atmospheric effects. Laser scanning systems, by contrast, are well suited for accurate coordinate measurements, which they can perform on objects at long distances and independently of ambient light. They also offer fast measuring speed and a simple optical arrangement at low cost (Zhongdong, Peng, Xiaohui, & Changku, 2014). It must be noted, however, that the measurement readings of laser scanning systems depend on the scanned surface and that post-processing is required, due to the large, high-resolution 3D data sets.

One application where measurement of 3D coordinates is absolutely needed is the movement control of Autonomous and Mobile Robots (AMR). The environment of a robot is typically measured with CCD cameras and/or laser scanning systems. In (Ohnishi & Imiya, 2013), for example, a robot is navigated using a "visual potential," which is computed from a sequence of images captured by a camera mounted on the robot. The paper (Correal, Pajares, & Ruz, 2014) uses an automatic expert system for 3D terrain reconstruction, which captures its environment with two cameras in a stereoscopic arrangement, similar to human binocular vision. Laser scanning systems used as remote sensing technology are instead known as Light Detection and Ranging (Lidar) systems, which are widely used in many areas, including mobile robot navigation. The paper (Kumar, McElhinney, Lewis, & McCarthy, 2013), for example, uses an algorithm and terrestrial mobile Lidar data to compute the left and right road edges of a route corridor. In (Hiremath, van der Heijden, van Evert, Stein, & ter Braak, 2014), a mobile robot equipped with a Lidar system navigates in a cornfield using the time-of-flight principle.
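The time-of-flight principle mentioned above derives distance from the round-trip travel time of a laser pulse. A minimal sketch (the function name is an illustrative assumption):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s):
    """Distance to a target from the round-trip time of a laser pulse.

    The pulse travels to the target and back, so the one-way
    distance is half the total path length covered at light speed.
    """
    return C * round_trip_time_s / 2.0
```

A 200 ns round trip corresponds to roughly 30 m, which illustrates the sub-nanosecond timing resolution a Lidar needs for centimeter-level accuracy.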

However, other sensors and methods are also used to navigate mobile robots. The paper (Benet, Blanes, Simo, & Perez, 2002), for example, uses infrared (IR) and ultrasonic (US) sensors for map building and object location with a mobile robot prototype. One rotary ultrasonic sensor is installed on top, and a ring of 16 infrared sensors is distributed in eight pairs around the perimeter of the robot. These IR sensors are based on the direct measurement of the magnitude of the IR light back-scattered from a surface placed in front of the sensor. The typical response time of these IR sensors for a distance measurement is about 2 ms. Distance measurement with this sensor can be realized from a few centimeters up to 1 m, which represents one limitation of this approach; the range for coordinate measurements by triangulation can extend far beyond 1 m. The paper (Volos, Kyprianidis, & Stouboulos, 2013) even experiments with a chaotically controlled mobile robot, which uses only an ultrasonic distance sensor for short-range measurement to avoid obstacle collisions. The experimental results show the applicability of chaotic systems to real autonomous mobile robots.
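The triangulation range mentioned above rests on a law-of-sines relation: an emitter and a receiver separated by a known baseline each measure an angle to the illuminated point. The sketch below shows the generic form of that relation (names are illustrative assumptions; this is not the chapter's dynamic-triangulation implementation):

```python
import math

def triangulation_distance(baseline_m, beta_rad, gamma_rad):
    """Perpendicular distance from the baseline to the laser spot.

    beta_rad and gamma_rad are the angles measured at the two ends of
    the baseline between the baseline and the line of sight to the spot.
    From the law of sines: d = a * sin(B) * sin(C) / sin(B + C).
    """
    return (baseline_m * math.sin(beta_rad) * math.sin(gamma_rad)
            / math.sin(beta_rad + gamma_rad))
```

With a 1 m baseline and both angles at 45 degrees, the spot lies 0.5 m from the baseline. As the two angles approach 90 degrees the denominator sin(B + C) tends to zero, so small angular errors translate into large distance errors, which is why triangulation accuracy degrades with range.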
