Introduction
Geo-localization is the identification of the real-world geographic location of an object, such as a radar source, a mobile phone, an Internet-connected computer terminal, an autonomous robot, or any other automatically moving object, and it is closely related to geographic coordinate positioning systems. Internet- and computer-based geo-localization can be accomplished by associating a geographic location with the Internet Protocol (IP) address, MAC address, RFID, hardware-embedded article/production number, embedded software number, Wi-Fi positioning system, device fingerprint, canvas fingerprinting, the device’s GPS coordinates, or some self-disclosed information (Holdener & Anthony, 2011; Haque et al., 2013).
Autonomous robots, just like humans, have the ability to make their own decisions and then act accordingly. A truly autonomous robot is one that can perceive its environment, make decisions based on what it perceives and/or has been programmed to recognize, and then actuate a movement or manipulation within that environment (“Autonomous Robots”, 2020). Geo-localizing the current position of an autonomous robot is a significant problem because the robot needs to know its current location, within a reasonable time frame, before making any movement.
Many researchers have proposed different methods for the geo-localization of an autonomous robot, using Radio Frequency (RF), GPS, the Internet, laser systems, ultrasonic sensors, landmarks, skylines, etc. A detailed overview of the relevant papers is given below.
Magee and Aggarwal (1995) presented a computationally straightforward method for determining the location of a camera mounted on a robot. The positioning of a robot from sensor data was proposed by Burgard, Fox, and Thrun (1997). This approach provides logical criteria for (i) setting the robot’s motion direction (exploration) and (ii) determining the pointing direction of the sensors to localize the robot efficiently. A low-cost localization strategy was proposed that applies a Kalman filter to sensor data to estimate the position and orientation of the robot (Goel, Roumeliotis, & Sukhatme, 1999).
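To make the Kalman-filter idea concrete, the sketch below runs the predict/update cycle of a one-dimensional constant-velocity filter over noisy position readings. It is a minimal textbook illustration, not the estimator of Goel et al. (1999); the motion model and the noise parameters `q` and `r` are assumed values.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=0.01, r=0.25):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x : state vector [position, velocity]
    P : 2x2 state covariance
    z : noisy position measurement
    q, r : assumed process and measurement noise variances
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance

    # Predict: propagate the state and its uncertainty through the model.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update: correct the prediction with the measurement.
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Feeding the filter a stream of noisy range or odometry readings yields a position estimate whose variance (`P[0, 0]`) shrinks as evidence accumulates, which is what makes the approach attractive for low-cost sensors.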
Han, Lee, and Hashimoto (2000) offered an approach that uses binocular stereo vision to control the position and orientation of a robot; the method works for a SCARA (Selective Compliance Assembly Robot Arm, or Selective Compliance Articulated Robot Arm) manipulator. Another localization method, proposed by Yun, Lyu, and Lee (2006), utilizes information from an external monitoring camera in indoor environments. Two methods for simultaneous localization and mapping, covering both outdoor and indoor environments, were described by Berrabah and Bedkowski (2008). The first method is a feature-based algorithm that combines geo-referenced images to localize the robot in a user-defined global coordinate frame. The second method works in indoor environments, where the robot uses a laser range finder to build an occupancy grid map of its navigation area.
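The occupancy grid idea behind such laser-based mapping can be illustrated with a short sketch: each range-finder beam lowers the log-odds of occupancy for the cells it passes through and raises it for the cell where it ends. This is a generic log-odds update, not the algorithm of Berrabah and Bedkowski (2008); the cell size and the increments `l_occ`/`l_free` are assumed values.

```python
import numpy as np

def update_grid(log_odds, pose, angle, rng_m, cell=0.1,
                l_occ=0.85, l_free=-0.4, max_range=5.0):
    """Update an occupancy grid (stored as log-odds) with one beam.

    log_odds : 2-D array of per-cell log-odds (0 = unknown)
    pose     : (x, y) sensor position in metres
    angle    : beam direction in radians
    rng_m    : measured range in metres
    """
    n_steps = int(rng_m / cell)
    for i in range(n_steps + 1):
        d = i * cell
        cx = int(round((pose[0] + d * np.cos(angle)) / cell))
        cy = int(round((pose[1] + d * np.sin(angle)) / cell))
        if not (0 <= cx < log_odds.shape[0] and 0 <= cy < log_odds.shape[1]):
            break
        if i < n_steps:
            log_odds[cx, cy] += l_free   # cells along the beam are likely free
        elif rng_m < max_range:
            log_odds[cx, cy] += l_occ    # the cell at the hit is likely occupied
    return log_odds
```

Storing log-odds rather than probabilities turns the Bayesian update into a simple addition, so repeated scans from different poses gradually sharpen the map.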
A localization method using the Matrix Pencil (MP) algorithm for hybrid estimation of the Direction of Arrival (DOA) and Time of Arrival (TOA) was presented by Trinh et al. (2012). Huang, Tsai, and Lin (2012) published two techniques for mobile-robot localization in indoor environments. In the first, images of markers attached to the ceiling at known positions are used to calculate the location and orientation of the robot. In the second, an RGB-D camera mounted on the robot acquires color and depth images of the environment. A real-time 3D localization and mapping approach for USAR (Urban Search and Rescue) robotic applications was proposed by Bedkowski, Maslowski, and Cubber (2012).
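The MP algorithm itself estimates the DOA and TOA from antenna-array signals; once both quantities are available, a single anchor suffices for a 2-D position fix, since the TOA gives the range (r = c · TOA) and the DOA gives the bearing. The sketch below shows only that final geometric step, with hypothetical inputs, and is not the signal-processing method of Trinh et al. (2012).

```python
import math

def localize_doa_toa(anchor, doa_rad, toa_s, c=3.0e8):
    """Position fix from one anchor using hybrid DOA/TOA estimates.

    anchor  : (x, y) of the receiver in metres
    doa_rad : estimated direction of arrival, radians from the x-axis
    toa_s   : estimated one-way time of arrival in seconds
    c       : propagation speed (speed of light for RF)
    """
    r = c * toa_s                     # TOA fixes the range to the source
    return (anchor[0] + r * math.cos(doa_rad),   # DOA fixes the bearing
            anchor[1] + r * math.sin(doa_rad))
```

In practice the DOA and TOA estimates are noisy, so multiple anchors or repeated measurements are typically combined, but the single-anchor geometry is what makes the hybrid approach attractive compared with TOA-only trilateration, which needs at least three receivers.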