
On Solving Mirror Reflection in LIDAR Sensing

Dept. of ECE, TEC Oyur

    ACKNOWLEDGEMENT

Before I go into the thick of things, I would like to add a few heartfelt words for the people who were part of this endeavor in numerous ways. I am grateful to Mr. Shafeek A S (Head of the Department, Electronics and Communication) for his valuable advice and help given during the entire duration of the seminar. I also thank him for his guidance, technical advice, and the help he rendered.

I also remember the help given by all the faculty of the Department of Electronics and Communication. I would also like to extend my thanks to Mr. Bijith Basher and Mr. Balraj S (Assistant Professors in Electronics and Communication). I also wish to thank our classmates for their help and comments on selected portions of our seminar.

Last but not least, I would like to thank our parents, who were always there with their loving advice and suggestions throughout the completion of our effort.


    ABSTRACT

This paper presents a characterization of sensing failures of light detection and ranging (LIDAR) in the presence of mirrors, which are quite common in our daily lives.

    Although LIDARs play an important role in the field of robotics, previous research has

    addressed little regarding the challenges in optical sensing such as mirror reflections. As light

    can be reflected off a mirror and penetrate a window, mobile robots equipped with LIDARs

    only may not be capable of dealing with real environments. It is straightforward to deal with

mirrors and windows by fusing sensors of heterogeneous characteristics. However, the indistinguishability between mirror images and true objects makes the map inconsistent with

    the true environment, even for a robot with heterogeneous sensors. We propose a Bayesian

    framework to detect and track mirrors using only LIDAR information. Mirrors are detected

    by utilizing the property of mirror symmetry. Spatiotemporal information is integrated using

    a Bayesian filter. The proposed approach can be seamlessly integrated into the occupancy

    grid map representation and the mobile robot localization framework, and has been

    demonstrated using real data from a LIDAR. Mirrors, as potential obstacles, are successfully

    detected and tracked.


    CONTENTS

Chapter No    TITLE

ACKNOWLEDGEMENT

ABSTRACT

1    INTRODUCTION

2    BACKGROUND

3    SENSOR FUSION

4    MIRROR DETECTION

4.1    PREDICTION

4.2    VERIFICATION

4.3    REPRESENTATION

5    MIRROR TRACKING

5.1    LINE UPDATE

5.2    ENDPOINT UPDATE

5.3    COMPLEXITY ANALYSIS

6    EXPERIMENTAL RESULTS

6.1    MAPPING, LOCALIZATION & NAVIGATION

6.2    QUANTITATIVE EVALUATION


ADVANTAGES

DISADVANTAGES

APPLICATION

CONCLUSION

FUTURE WORK


    CHAPTER 1

    INTRODUCTION

    Simultaneous localization and mapping (SLAM) is the process by which a

    mobile robot can build a map of the environment and, at the same time, use this map to

    compute its location. As the SLAM problem has attracted immense attention in the mobile

robotics literature, a large variety of sensors have been used for SLAM, such as sonar, light detection and ranging (LIDAR), IR, monocular vision, stereo vision, and GPS. The past decade has seen rapid progress in solving the SLAM problem, and LIDARs are at the core of most state-of-the-art robot systems, such as Boss and Stanley, and the autonomous vehicles in

    the Defense Advanced Research Projects Agency (DARPA) Urban Challenge and Grand

    Challenge. Because of their narrow beamwidth and fast time of flight, LIDARs are

    appropriate for high-precision applications in the field of robotics.

    A LIDAR estimates the distance to a surface by measuring the round-trip time

    of flight of an emitted pulse of light. Only a fraction of the photons emitted by the LIDAR

are received back through the sensor's optics, with this amount being a strong function of the reflectivity of the object being imaged. White surfaces reflect a large fraction of light, while black surfaces reflect only a small amount. Transparent objects such as glass often refract

    the light, and a LIDAR measurement of such a surface typically results in the range

    information for the object behind the transparent surface. In addition, the mirror-like

    reflection of light, in which light from a single incoming direction is reflected into a single

    outgoing direction, is called specular reflection or regular reflection. Mirrors are very flat

    surfaces and reflect nearly all incident light such that the angles of incidence and reflection

    are equal.
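To make the round-trip time-of-flight relationship concrete, the short sketch below converts a measured return time into a range. It is an illustrative example only; the function name and the sample timing value are assumptions, not material from the report.

# Minimal sketch of LIDAR time-of-flight ranging (illustrative only).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_time_s: float) -> float:
    """The emitted pulse travels to the surface and back, so halve the path."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after about 66.7 ns corresponds to a surface roughly 10 m away.
print(range_from_time_of_flight(66.7e-9))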


    In this paper, the problem of mirror reflection is addressed. The main

    contribution of this study is to provide a solution to detect and track mirrors using only

    LIDAR information. The mirror detector utilizes the geometric property of mirror symmetry

    to generate hypothetical mirror locations. An identified mirror location is represented using a

    line model with endpoints. The mirror tracker is then used to integrate the potential mirror

    locations temporally using a Bayesian filter. A Bayesian framework is introduced to the

    mobile robot mapping and localization process so that the mirror images can be eliminated.

    The proposed approach has been demonstrated using real data from the experimental

    platform equipped with a SICK LMS 291 LIDAR. The performance of the proposed

    approach has also been evaluated using real data. The ground truth is obtained using another

    LIDAR that can observe the actual boundary of a mirror. The ample experimental results

    demonstrate the feasibility and effectiveness of our approach.


    CHAPTER 2

    BACKGROUND

    Current LIDARs are a standard sensor for both indoor and outdoor mobile

    robots, given their inherent reliability. The data from a LIDAR include the angles and the

    distances to the objects in the field of view. Compared with LIDARs, vision sensors require

    complicated and error-prone processing before obtaining depth information. Range sensors

    such as sonar sensors and IR sensors are not capable of fine angular resolution. As a result,

    LIDARs are capable of fine angular and distance resolution, real-time data retrieval, and low

    false rates.

    As light can be reflected off a mirror and penetrate a window, mobile robots

    equipped with LIDARs only may not be capable of dealing with real environments. The

sonar, in contrast, is capable of detecting those objects that a LIDAR can miss. The main drawbacks in sonar sensing are specularity, wide beam width, and frequent misreadings due to either external ultrasound sources or crosstalk. In optical sensing, specular reflection can cause loss of data and noisy signals in optical scans.

    Several new LIDAR systems have been introduced recently. A time-of-flight

    camera is a 3-D LIDAR that can provide immediate depth images. It enables a diverse set of

    emerging medical, biometric, and robotics applications. Several small LIDARs have been

    introduced for indoor use, and have a reasonable price and low power consumption. They

    operate at high data rates with approximate millimeter resolution. As the development of

    LIDARs is getting more and more mature, prices are greatly reduced. Robots also rely more

    and more on laser sensing. However, new LIDARs also suffer from the problems of mirror

    reflection and glass transparency.


    Making robots fully autonomous in a wide variety of environments is difficult,

    especially in environments with transparent objects, light-reflected objects, or light-absorbed

    objects. To make robots fully autonomous in environments with mirrors and windows,

detection and modeling of these objects are critical. In earlier sensor fusion approaches, the objective is the extraction of sonar range readings, which are complementary to the corresponding laser range information in the

    sense that they provide additional environmental information. The LIDAR information is

    used to verify corresponding sonar range information. A collection of sonar measurements is

    acquired to obtain a dense range map. The laser sensing is used to complement the sonar

    sensing by accurately pinpointing the corners and the borders of objects, where the sonar data

    are ambiguous. Both of these works proposed to extract complementary sonar readings to

detect those objects not seen by LIDARs. However, the indistinguishability between mirrors and windows makes robot exploration problematic.


    CHAPTER 3

    SENSOR FUSION

    In order to demonstrate the ambiguities that arise in a conventional sensor

    fusion approach, we maintain two individual occupancy grid maps accumulated from a

    LIDAR and a sonar array, respectively. Instead of making hard decisions at every time step,

    the occupancy grid maps are utilized to accumulate the temporal information of the sensor

    readings. Let Mland M

    sbe the occupancy grid maps built using data from a LIDAR and a

    sonar array, respectively. Each grid cell (x,y) is determined as a potential obstacle if the

    following inequalities

    Ml x,y s (2)

    Where land

    sare predefined probabilities. The values of

    land

    scan be

    obtained according to the apriori probabilities used in the occupancy grid map

    representation. In our experiments, l is 0.05 and s is 0.95. At every time step, the sensor

    fusion map is calculated accordingly. The probability Mx,y of the grid cell (x,y) in the sensor

    fusion map M is Msif (1) and (2) hold; otherwise, M

    l.
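As a rough illustration of this fusion rule, the sketch below applies inequalities (1) and (2) cell by cell, assuming the two maps are stored as NumPy arrays of occupancy probabilities; the array and function names are illustrative, not taken from the report.

import numpy as np

def fuse_grid_maps(M_l: np.ndarray, M_s: np.ndarray,
                   tau_l: float = 0.05, tau_s: float = 0.95) -> np.ndarray:
    """Per-cell fusion: where the LIDAR map looks free (inequality (1)) but the
    sonar map looks occupied (inequality (2)), keep the sonar probability;
    otherwise keep the LIDAR probability."""
    potential_obstacle = (M_l <= tau_l) & (M_s >= tau_s)
    return np.where(potential_obstacle, M_s, M_l)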

    Fig. 1 visualizes the resulting grid maps using data collected in an

environment with mirrors and windows. Fig. 1(a) and (b) depict the occupancy grid maps built using data from a LIDAR and a sonar array, respectively. It can be observed that mirrors and windows are objects that are likely to be seen by sonar sensors, but less likely to be identified by LIDARs. Fig. 1(c) shows the sensor fusion map in which most of the mirror

    and window locations are successfully identified, in contrast to the LIDAR-only map. Fusion

    of heterogeneous sensors is important for collision-free navigation in real environments.


    Fig 1. Occupancy grid maps

    To deal with the problem of mirror reflection, conventional approaches might

    include the use of sonar to detect obstacles unseen by a LIDAR. However, it still fails to

    resolve the ambiguity of whether an obstacle is specifically a mirror or a window. The

    interpretation of an object that appears to be behind the obstacle can be ambiguous. In order

    to ensure collision-free navigation and reliable localization capability, having a consistent

    understanding of the environment is important. We take advantage of the property of mirror

    symmetry to resolve the ambiguity, and use the Bayesian framework to incorporate spatial

    and temporal information. By investigating the spatial symmetry of the environment and

    using only LIDAR information, our approach can identify mirrors, estimate their locations,

    and properly interpret the mirror images of objects.


    CHAPTER 4

    MIRROR DETECTION

    In this section, we describe a method to identify potential mirror locations

    within a laser scan. We assume that mirrors are planar. A distance-based criterion is used to

    determine gaps in a laser scan. The geometric property of mirror symmetry is exploited to

restore the spatial information of reflected scan points. The likelihood field sensor model is applied to calculate the likelihood that a gap is indeed a mirror. A mirror prediction is then represented by a Gaussian. The iterative closest point (ICP) algorithm is utilized for evaluating the uncertainty of a mirror prediction.

    4.1 PREDICTION

    The mirror prediction method utilizes the fact that mirrors are usually framed,

    i.e., mirrors are physically bounded. For instance, in Fig. 2.1, the mirror is enclosed by a

    wooden frame, whereas in Fig. 2.2, the mirror that is supported by a pillar is framed with

steel. The assumption can fail when a mirror does not have an apparent boundary and is not placed along anything else. First, we assume environments are smooth and define that gaps are

    discontinuities of range measurements within a laser scan. Letting z be an observation

containing range measurements taken from a LIDAR, a gap G_{i,j} consists of two measurements {z_i, z_j | 1 \le i < j \le n}, such that

z_{i+1} - z_i > d    (3)

z_{j-1} - z_j > d    (4)

|z_k - z_{k+1}| \le d  for i < k < j - 1    (5)


where n is the cardinality |z| of the observation z, z_i is the i-th range measurement, and d is a predetermined constant. The cardinality of an observation is a measure of the number of measurements of the observation. In our experiments, d is 1.5 m. The line with endpoints {p_i, p_j} is thus considered as a potential mirror location, where p_i and p_j are the Cartesian coordinates of the range measurements z_i and z_j, respectively, in the robot frame.
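A minimal sketch of the gap criterion in (3)-(5) is given below, assuming the scan is an ordered list of range values and d = 1.5 m as in the text; the function name and the example scan are hypothetical.

def find_gaps(z, d=1.5):
    """Return index pairs (i, j) of candidate gaps G_{i,j} in an ordered scan z.

    A gap opens where the range jumps up by more than d (inequality (3)) and
    closes where it drops back by more than d (inequality (4)); another large
    upward jump inside a candidate simply restarts it, so consecutive
    measurements between the endpoints stay within d (inequality (5))."""
    gaps, i = [], None
    for k in range(len(z) - 1):
        if z[k + 1] - z[k] > d:                      # large jump up: (re)start at i = k
            i = k
        elif i is not None and z[k] - z[k + 1] > d:  # large drop down: close at j = k + 1
            gaps.append((i, k + 1))
            i = None
    return gaps

# Example: a wall at ~2 m with a gap whose far side is at ~8.5 m.
print(find_gaps([2.0, 2.1, 8.5, 8.4, 8.6, 2.2, 2.0]))  # [(1, 5)]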

Fig 2.1 (a mirror enclosed by a wooden frame)    Fig 2.2 (a mirror supported by a pillar and framed with steel)


    4.2 VERIFICATION

For each gap G_{i,j} with endpoints {p_i, p_j}, the measurements {z_{i+1}, z_{i+2}, ..., z_{j-1}} are restored in accordance with the geometric property of mirror symmetry. Let e_{i,j} be the line with endpoints p_i and p_j, e_{0,k} be the line with endpoints p_k and the origin 0, and p_{i,j,k} be the intersection point between the two lines e_{i,j} and e_{0,k}. The reflected scan point \bar{p}_k with respect to the k-th range measurement z_k is calculated such that

\delta(0, \bar{p}_k) = \delta(0, p_{i,j,k}) + \delta(p_{i,j,k}, \bar{p}_k)    (6)

\theta(0, p_{i,j,k}, p_i) = \theta(p_j, p_{i,j,k}, \bar{p}_k)    (7)

where \delta(\cdot, \cdot) is the Euclidean distance function and \theta(p_1, p_2, p_3) is the angle function calculating the angle between the vectors p_1 p_2 and p_3 p_2. The process is illustrated in Fig 3.

Fig 3

The likelihood \lambda_{i,j} of the reflected scan points {\bar{p}_{i+1}, \bar{p}_{i+2}, ..., \bar{p}_{j-1}} with respect to the local map around the robot is then calculated using the likelihood field sensor model. A gap G_{i,j} with likelihood \lambda_{i,j} greater than or equal to \tau is considered likely to be a mirror M_{i,j}, where \tau is a predefined constant probability. In our experiments, \tau is 0.5, meaning that a gap with at least 50% confidence is considered as a possible mirror location.
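Restoring a reflected measurement in this way is equivalent to mirroring the apparent point across the candidate mirror line, which preserves the path length in (6) and the equal-angle condition in (7). The sketch below reflects a 2-D point across the line through p_i and p_j; it is an illustrative reading of the procedure, not the report's implementation.

import numpy as np

def reflect_across_line(p, p_i, p_j):
    """Mirror a 2-D point p across the line through p_i and p_j.

    The apparent point measured behind a candidate mirror is mapped back to
    the real-world side; the path length from the mirror line and the
    incidence/reflection angles are preserved."""
    p, p_i, p_j = (np.asarray(v, float) for v in (p, p_i, p_j))
    d = p_j - p_i
    d = d / np.linalg.norm(d)              # unit direction of the mirror line
    foot = p_i + np.dot(p - p_i, d) * d    # foot of the perpendicular from p
    return 2.0 * foot - p                  # mirror image of p

# A point seen 1 m "behind" a vertical mirror at x = 2 is restored in front of it.
print(reflect_across_line([3.0, 1.0], [2.0, 0.0], [2.0, 5.0]))  # -> [1. 1.]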


    4.3 REPRESENTATION

    To incorporate temporal integration, a mirror location has to be represented

    properly so that the uncertainty can be taken into account. Intuitively, a mirror location is a

    line segment and can be described with its endpoints. A filtering algorithm updates the two

    endpoints with the associated mirror measurement separately. However, whether a laser

    beam is reflected back or reflected off is highly relevant to the smoothness of the mirror

    surface and the angle of incidence. The distance between the endpoints of a mirror prediction

is never longer than the true distance. Owing to this basic light property, the observed endpoints are, almost surely, not the true endpoints of the mirror. The instability of the measurements around mirrors is illustrated in Fig. 6. Instead of storing the endpoints of a mirror measurement directly in the state vector, we propose to represent the mirror with a line model and store the corresponding endpoints separately. In Chapter 5, this property will

    be further used to facilitate the process of estimating the endpoints of a mirror.

    We propose to represent mirrors as line segments. In the state vector, a line

    segment is represented by the angle and the distance of the closest point on the line to the

    origin of the robot frame. The endpoints of a line segment are not placed within the state

vector, but stored separately. The mean vector of the line segment of M_{i,j} with respect to the robot frame is given as

M^R_{i,j} = [\alpha^R_{i,j}, r^R_{i,j}]^T = [\arctan(y_{i,j,k} / x_{i,j,k}), \sqrt{x_{i,j,k}^2 + y_{i,j,k}^2}]^T    (8)

where x_{i,j,k} and y_{i,j,k} are the xy coordinates of the closest point on the

    line to the origin. Image registration is the process of transforming the different sets of data,

    acquired at different times or from different perspectives, into one coordinate system. We

    propose to exploit the ICP algorithm to estimate the uncertainty of a mirror prediction. By

    matching the reflected scan points with the whole laser scan, the displacement, including


translation and rotation, between the reflected scan points and the

    environment is calculated. However, adjusting the four parameters of a mirror prediction,

    two parameters for the line model and two parameters for the endpoints, using the three

    parameters of the displacement is infeasible. Note that a point on a line has 1 DOF. Instead of

    using the registration result to refine a mirror prediction, the displacement is utilized to

    calculate the covariance matrix of a mirror prediction, which can be expressed as

\Sigma^R_{M_{i,j}} = diag(\sigma_\alpha^2 + \Delta\theta^2, \sigma_r^2 + \Delta x^2 + \Delta y^2)    (9)

where \sigma_\alpha and \sigma_r are predetermined values of the measurement noise for the covariance matrix, and \Delta x, \Delta y, and \Delta\theta are the registration results using the ICP algorithm by which {\bar{p}_{i+1}, \bar{p}_{i+2}, ..., \bar{p}_{j-1}} and the whole laser scan are aligned. The values of \sigma_\alpha and \sigma_r can be obtained by taking into account the modeled uncertainty sources. In our experiments, \sigma_\alpha is 3 degrees and \sigma_r is 0.2 m. Fig. 4 illustrates the mirror detection results in which the gaps in the

    laser scans are identified.

Fig 4 (Mirror detection results. The scenes of (a) and (b) are the same as that shown in Fig. 2.2, and the scene of (c) is the same as that shown in Fig. 2.1. The robot is at the origin and heads toward the positive x-axis. Dots are the raw range measurements, where the heavy dots (in red) are the measurements not identified as the mirror.)
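Under the reconstruction above, the line parameters of Eq. (8) and the covariance of Eq. (9) can be sketched as follows. The function names, and the assumption that the line is given by its two endpoints and the ICP displacement by (dx, dy, dtheta), are illustrative rather than taken from the report.

import numpy as np

def line_model_from_endpoints(p_i, p_j):
    """Line parameters (alpha, r) of Eq. (8): bearing and distance of the point
    on the infinite line through p_i and p_j that is closest to the origin."""
    p_i, p_j = np.asarray(p_i, float), np.asarray(p_j, float)
    d = p_j - p_i
    d = d / np.linalg.norm(d)
    closest = p_i - np.dot(p_i, d) * d     # foot of the perpendicular from the origin
    return np.arctan2(closest[1], closest[0]), float(np.hypot(*closest))

def line_covariance(dx, dy, dtheta, sigma_alpha=np.deg2rad(3.0), sigma_r=0.2):
    """Covariance of a mirror prediction following the structure of Eq. (9),
    inflated by the ICP displacement (dx, dy, dtheta); sigma_alpha and sigma_r
    are the predetermined noise terms (3 degrees and 0.2 m in the text)."""
    return np.diag([sigma_alpha**2 + dtheta**2,
                    sigma_r**2 + dx**2 + dy**2])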


    CHAPTER 5

    MIRROR TRACKING

    In this section, we describe a method to update mirror locations for temporal

    integration. Bayesian filtering is a general probabilistic approach for estimating an unknown

    probability density function over time using a mathematical process model and incoming

    observations. Mirror predictions at different time steps are integrated using an extended

    Kalman filter (EKF), which is inherently a nonlinear Bayesian filter. As the endpoints are not

    stored in the state vector, the update stage is separated into two stages: the line update stage

and the endpoints update stage. The line update stage integrates mirror predictions temporally using EKFs. The endpoints update stage updates the endpoints of a mirror by

    exploiting the basic light property.

    5.1 LINE UPDATE

    The mean vector and the covariance matrix of a line model are first

    transformed into global coordinates, which are given as

M_{i,j} = [\alpha_{i,j}, r_{i,j}]^T = [\alpha^R_{i,j} + \theta_t, r^R_{i,j} + x_t \cos(\alpha^R_{i,j} + \theta_t) + y_t \sin(\alpha^R_{i,j} + \theta_t)]^T    (10)

\Sigma_{M_{i,j}} = J_{x_t} P_t J_{x_t}^T + J_{M_{i,j}} \Sigma^R_{M_{i,j}} J_{M_{i,j}}^T    (11)

where J_{x_t} and J_{M_{i,j}} are the Jacobian matrices of the line model with respect to the robot pose x_t = (x_t, y_t, \theta_t)^T and the line measurement, respectively, and P_t is the

    covariance matrix of the robot pose. Data association is implemented using a validation gate

    defined by the Mahalanobis distance. The standard EKF process is then applied to update the

    mean vector and the covariance matrix of a mirror estimate.
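A minimal sketch of the line update stage is given below, assuming the transform of Eq. (10) and a chi-square validation gate for the Mahalanobis test; the gate value 9.21 and all names are assumptions, and angle wrapping is omitted for brevity.

import numpy as np

def line_to_global(alpha_R, r_R, x_t, y_t, theta_t):
    """Transform a line (alpha, r) from the robot frame into global coordinates,
    following the structure of Eq. (10)."""
    alpha = alpha_R + theta_t
    r = r_R + x_t * np.cos(alpha) + y_t * np.sin(alpha)
    return alpha, r

def gate_association(z, z_hat, S, gate=9.21):
    """Validation gate for data association: accept the pairing if the squared
    Mahalanobis distance of the innovation is below the gate
    (9.21 is roughly the 99% chi-square quantile for 2 DOF)."""
    nu = np.asarray(z, float) - np.asarray(z_hat, float)
    return float(nu @ np.linalg.solve(S, nu)) < gate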


    5.2 ENDPOINT UPDATE

    After the line model of a mirror estimate is updated, the endpoints of the

mirror should be updated accordingly. Let M^t_{i,j} be the updated mirror estimate at time t, M^{t+1}_{u,v} be the associated mirror measurement at time t + 1, {p^t_i, p^t_j} and {p^{t+1}_u, p^{t+1}_v} be the endpoints of M^t_{i,j} and M^{t+1}_{u,v}, respectively, \hat{M}^{t+1} be the updated mirror estimate at time t + 1, and \hat{e}^{t+1} be the corresponding line model of the updated mirror estimate. We can compute the point set \bar{P} = {\bar{p}^t_i, \bar{p}^t_j, \bar{p}^{t+1}_u, \bar{p}^{t+1}_v}, which includes the closest points from the points in P = {p^t_i, p^t_j, p^{t+1}_u, p^{t+1}_v} to the line \hat{e}^{t+1}. The process is illustrated in Fig 5.1.

As described in Section 4.3 and illustrated in Fig. 5.2, the observed

    endpoints of a mirror are usually not the true counterparts, and thus, the distance between the

    endpoints of a mirror prediction is never longer than the true distance. We take advantage of

    the a priori knowledge to accommodate this phenomenon. The endpoints of the mirror

estimate \hat{M}^{t+1} are obtained by finding the pair of points in \bar{P} such that the distance between these two points is maximum, which can be expressed as

(\hat{p}^{t+1}_1, \hat{p}^{t+1}_2) = \arg\max_{p_1, p_2 \in \bar{P}} \delta(p_1, p_2)    (12)

where \hat{p}^{t+1}_1 and \hat{p}^{t+1}_2 are the resulting endpoints of the mirror estimate \hat{M}^{t+1}. Fig. 6 illustrates a mirror tracking result in which a mirror is correctly detected and tracked. As can be seen from Fig. 1(c), although the mirror detected with sensor fusion is

    spatially sparse, the proposed approach can accurately estimate the location of the mirror.
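The endpoint update of Eq. (12) can be sketched as follows, assuming the updated line model is given by a point and a direction vector; the four candidate endpoints are projected onto the line and the most distant pair is kept. All names are illustrative.

from itertools import combinations
import numpy as np

def update_endpoints(candidates, line_point, line_dir):
    """Endpoint update of Eq. (12): project the previous and newly observed
    endpoints onto the updated line model and keep the pair that is farthest
    apart, since observed endpoints never overshoot the true mirror extent."""
    a = np.asarray(line_point, float)
    d = np.asarray(line_dir, float)
    d = d / np.linalg.norm(d)
    projected = [a + np.dot(np.asarray(p, float) - a, d) * d for p in candidates]
    return max(combinations(projected, 2),
               key=lambda pair: np.linalg.norm(pair[0] - pair[1]))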


Fig 5.1 (Endpoints update. The dashed line shows the updated line model \hat{e}^{t+1} of a mirror at time t + 1. The solid lines indicate the line models of the updated mirror estimate M^t_{i,j} and the associated mirror measurement M^{t+1}_{u,v}. The thick lines show the corresponding line segments of M^t_{i,j} and M^{t+1}_{u,v}. The set \bar{P}, which contains the closest points from M^t_{i,j} and M^{t+1}_{u,v} to the line \hat{e}^{t+1}, can be computed accordingly.)

    5.3 COMPLEXITY ANALYSIS

    The mirror tracker requires O(1) operations in the general case and O(|z|) in

    the worst case, where |z| denotes the cardinality of the observation z, as shown in Fig 5.1 The

    line update stage takes constant time to perform an EKF update for each of the mirror

    estimates. The endpoints update stage also takes constant time to update the endpoints of a

mirror. There are O(|z|) mirror estimates in this stage. Similarly, as there are usually only a

    couple of mirrors around an environment, the number of mirror estimates in the line update

    stage and the endpoints update stage can be bounded by some constant. The overall time

    complexity in the general case is greatly reduced to O(1), which is sufficient for real-time

    applications.


    CHAPTER 6

    EXPERIMENTAL RESULTS

    6.1 MAPPING, LOCALIZATION, AND NAVIGATION

    First, we describe the mapping, localization, and navigation problems in

environments with mirrors. Without the mirror detection and tracking process, mirror images are considered as parts of real environments. As an occupancy grid map represents the

    configuration space (C-space) of a robot, the inconsistency between the real environment and

    the map containing mirror images makes the robot navigation problematic. Robots should be

capable of figuring out mirror locations and avoiding the fake areas formed due to

    mirror reflection. To deal with the phenomenon of mirror reflection, the mirror images within

    a map have to be detected and corrected accordingly.

    In this paper, mirrors are detected and tracked while the SLAM process is

    performed. The map is further refined by incorporating mirror information such that mirror

images are eliminated. In the post-processing process, each measurement perceiving the distance between the robot and a mirror is updated to the distance to the mirror surface. Fig. 6 illustrates the post-processing process. In Fig. 6(a), the maps built in environments with mirrors are shown. The mirror locations, which were estimated while the robot drove by, are also visualized. As can be seen, the maps that contain mirror images are inconsistent with the real environments. In Figs. 6(a) and 6(b), the maps incorporating the

    mirror information are depicted. Mirror images are eliminated by correcting LIDAR

    measurements affected by mirrors.


    The false estimates are removed probabilistically by discarding uncertain

    mirror estimates. With the use of the proposed mirror detection and tracking process, the map

can be estimated consistently without a priori knowledge of mirror locations. For mobile robot localization, such as EKF localization, Markov localization, and Monte Carlo localization, a preprocessing process, rather than the post-processing process, is required to take the mirror information into account. The preprocessing process eliminates the mirror

    image within a laser scan by applying the property of mirror symmetry, as described in

    chapter 5. The updated LIDAR measurements are then used to perform the localization task.

    Fig 6. (Mirror tracking. The scene is the same as that shown in Fig. 3, where the

    robot is at around place B. The maps are depicted with respect to a global coordinate system.

The occupancy grid map of the environment is shown, where the rectangle (filled with blue) indicates the robot pose, the lines (in red) are the line models of the mirrors, the ellipses (in green) show the 2-sigma covariance of the line models, and the thick lines (in red) indicate the

    mirror locations. (a) Mirror tracking result. (b) Enlargement of (a). (c) Enlargement of (b))


    6.2 QUANTITATIVE EVALUATION

    The feasibility of the proposed algorithm has been demonstrated using real

    data. Furthermore, we present a performance analysis of the proposed algorithm. In this

experiment, the SICK LMS 100 LIDAR is used, whose angle of view is 270 degrees. As ground truth mirror locations are usually unobtainable, markers are placed at the boundary of the mirror.

    Two LIDARs are used to collect data whose observations are parallel to each other, as shown

    in Fig. 12. While one perceives a mirror image, the other can obtain the ground truth mirror

    location by observing the markers placed alongside. The two LIDARs are calibrated by

    calculating the mean of the displacements from matching empirical observations.

    To quantify the performance, we perform SLAM using data from the two

    LIDARs separately. There are seven datasets collected around the environment shown in Fig.

    7.1. Each dataset contains about 500 observations. The ground truth mirror locations are

    annotated and taken into account in the mapping process for obtaining consistent mirror

    locations in global coordinates. The maps can be slightly different from each other due to

    various noise sources. The resulting maps are aligned for a fair comparison and used to

    calculate the estimation error. Fig. 7.2 illustrates the calibrated observations and the resulting

    maps obtained from the LIDARs. As the maps are similar, only one estimated map and one

    ground truth map are depicted in Fig. 7.2(e) and (f).


Fig 7.1 (The case in which the LIDAR sees the robot itself in the mirror. As can be seen, the LIDAR can detect a mirror when the angle of incidence of the emitted photon is zero. A false estimate is generated when the robot sees itself; by adopting the Bayesian framework, it is eliminated naturally from the temporal integration of observations.)


    We define the overall error of a mirror estimate as the sum of the residuals

between the estimated endpoints and the true endpoints, and define the angular error of a mirror estimate as the angular misalignment between the estimated line model and the true line

    model. Root-mean-squared error (RMSE) is used to evaluate the accuracy of our algorithm.

In the experiment, the overall error is 0.12 m and the angular error is 0.47 degrees. The majority of the error tends to be in the plane of the wall, and the angular misalignment of the estimated mirror location is small. This is mainly because of the instability of LIDAR measurements

    around a mirror. The predicted locations of the endpoints depend on whether the emitted

    photon is reflected back, reflected off, or missing. Mirror reflection can make the observed

    endpoints ambiguous. However, a LIDAR that offers high precision can provide accurate

    angular estimates of mirrors. Note that the error includes uncertainties from the SLAM

    process. Just as with solving the SLAM problem, the performance also depends on sensor

    characteristics and the environment. The experiment shows that the proposed approach is

effective, even though various noise sources are involved. The results of the experiment are shown in Fig. 7.2 below.


Fig 7.2 (Observations from the LIDARs. (a)-(d) The robot is shown by the rectangle (in black)

    and heads toward the positive x-axis. Dots are the range measurements containing mirror

    images, where the heavy dots (in red) are the measurements not identified as the mirror

    images, and the light dots (in cyan) are the measurements with false range information due to

    mirror reflection. Lines (in black) indicate the detected mirror locations. Circles (in green)

    are the range measurements used for performance evaluation. (e) Estimated map is shown in

which the thick (red) line indicates the mirror location. (f) The ground truth map is depicted.)
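For reference, the sketch below computes the error metrics described above (sum of endpoint residuals and angular misalignment between line models) and the RMSE over a set of runs; the endpoint pairing and the function names are assumptions for illustration.

import numpy as np

def mirror_errors(est_endpoints, true_endpoints, est_alpha, true_alpha):
    """Overall error: sum of residuals between estimated and true endpoints.
    Angular error: misalignment between the estimated and true line models,
    taken modulo pi because a line direction is defined only up to 180 degrees."""
    overall = sum(np.linalg.norm(np.asarray(e, float) - np.asarray(t, float))
                  for e, t in zip(est_endpoints, true_endpoints))
    angular = abs((est_alpha - true_alpha + np.pi / 2) % np.pi - np.pi / 2)
    return overall, angular

def rmse(errors):
    """Root-mean-squared error over a set of per-dataset errors."""
    errors = np.asarray(errors, float)
    return float(np.sqrt(np.mean(errors ** 2)))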


    ADVANTAGES

High-speed response if the environment is opaque.

Due to the use of LIDAR, the captured images have high resolution.

It can detect contaminants in sewage containing radioactive particles.

It has highly effective edge-detection capability.


    DISADVANTAGES

There is a time delay when scanning glass environments.

The LIDAR robot cannot detect obscured anti-silicon pirate glasses.

LIDAR robots need a high-power servo motor because of the weight of the LIDAR transceiver.


    APPLICATION

It can be used for pipeline and sewage maintenance.

It can be used as a land mapper in various environments.

The LIDAR-equipped robot can work in hazardous environments.

It can be used as a navigator robot.

It can be used as a fire extinguisher.


    CONCLUSION

    Making robots fully autonomous in a wide variety of environments is difficult.

To the best of our knowledge, the solution to the problem of mirror reflection has not been addressed previously. The primary contribution of this paper is to introduce a mirror detection and tracking framework using only LIDAR information. The mirror detection method utilizes the property of mirror symmetry to calculate the confidence of a mirror prediction. The image registration technique is used for evaluating the uncertainty of a mirror

    prediction. The proposed endpoints update strategy employs the fact that the distance

    between the endpoints of a mirror prediction is never longer than the true distance. The

    proposed approach can be seamlessly integrated into the mobile robot localization framework

    and the occupancy grid map representation. The ample experimental results using real data

    from a LIDAR have demonstrated the feasibility and effectiveness of the proposed approach.


    FUTURE WORK

    In this paper, we use a heuristic method to guess possible mirror locations in

    the continuous Cartesian space. It relies on the fact that mirrors are usually framed or placed

    along a wall. If the boundary of a mirror is not apparent or the mirror is not placed along

    anything else, the proposed approach will fail. Sensor fusion is versatile in its capability to

    deal with diversified surfaces, but less precise. On the other hand, the major drawback of

    LIDAR-only approaches can be their incapability to detect transparent objects, due to the

    nature of light. Future work will include an approach to guess the possible mirror locations

    using sensor fusion. Because of the inaccuracy of sonar readings, the extraction and

    reconstruction of disjointed line segments is required to generate a mirror prediction. in

    distinguishability between mirrors and windows in sensor fusion can also be resolved

    through the use of sensor fusion and the proposed mirror detection and tracking process. In

    addition, it would also be of interest to study some of the special cases: multiple reflections

    of mirrors, curved mirrors, and mirror symmetric scenes.

