Searching and Tracking People in Urban Environments with Static and Dynamic Obstacles

Alex Goldhoorn, Anaís Garrell, René Alquézar and Alberto Sanfeliu

Journal: Robotics and Autonomous Systems

Abstract— Searching for and tracking people in crowded urban areas, where they can be occluded by static or dynamic obstacles, is an important behavior for social robots that assist humans in outdoor urban environments. In this work, we propose a method, the Highest Belief Particle Filter Searcher and Tracker, that handles both searching for and tracking of people in real time. It makes use of a modified Particle Filter (PF) which, in contrast to other methods, can both search for and track a person under uncertainty, with false negative detections and periods without any person detection, in continuous space and in real time. Moreover, the method uses dynamic obstacles to improve the prediction of the person's possible location. Comparisons have been made with our previous method, the Adaptive Highest Belief Continuous Real-time POMCP Follower, under different conditions and with dynamic obstacles. Real-life experiments were carried out over two weeks with a mobile service robot in two urban environments of Barcelona, with other people walking around.


On this page, videos are shown that give an overview of the experiments done with the HB-Particle Filter Searcher & Tracker to search for and track people. As explained in the article, the robot has to search for and then track the person. The person to track is recognized using an adapted version of AR Markers; see Amor-Martinez et al. (2014) [35].


Each video shows the experiment in two sections:
  1. Left: video of the experiment.
  2. Right: map and probability map.
The right image consists of a map and a probability map:
  • Map (left): map as shown by ROS rviz; which shows the following:
    • Dabo: blue body, white head;
    • Obstacles: black and dark gray;
    • Laser detections: blue line/dots;
    • Path: blue lines indicate the path already executed by the robot.
    • People detection:
      • Leg detection: shown by blue dots;
      • Last used person location: red dot; this is a combination of the leg detection and the AR Marker detection.
  • Probability Map: shows the robot's probability of the person's location:
    • Robot self: the blue circle;
    • Detected person's location: red circle;
    • Dynamic obstacles: light blue circles; the people walking around, which are used as dynamic obstacles by the algorithm;
    • Obstacles: black squares;
    • Probability matrix: the probability of the person being at a certain location is shown with colors from white to red: light blue indicates a probability of 0, white a low probability, and red a high probability. Note that in some videos these cells are bigger because the resolution of the probability matrix is set lower than that of the discretized map.
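The white-to-red coloring of the probability matrix can be sketched as a small colormap function. This is a hypothetical illustration of the scheme described above; the exact colors and scaling used in the videos are assumptions:

```python
def belief_color(p, p_max):
    """Map a cell probability to an (R, G, B) color, 0..255 per channel.

    Hypothetical sketch of the coloring described above:
    probability 0 -> light blue, low -> white, high -> red.
    """
    if p <= 0.0:
        return (173, 216, 230)  # light blue: the cell has probability 0
    t = min(p / p_max, 1.0) if p_max > 0 else 0.0
    # interpolate from white (255, 255, 255) to red (255, 0, 0)
    gb = int(round(255 * (1.0 - t)))
    return (255, gb, gb)
```

Cells with higher belief thus render progressively redder, matching the probability maps shown in the videos.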


As explained in the article, the robot detects the person in two phases: 1) by laser detection of the legs [27] combined with a Multiple Hypothesis Tracking for Multiple Targets (MHT) [28], and 2) by the AR tag [23]. Since the first can only detect people, not identify them, we added the second to recognize the specific person. The AR tag alone should be sufficient, but it produced some false positives; therefore, we only accepted detected tags if there was a laser-based person detection close enough. This, however, still resulted in some false positive detections if, for example, another person was close to the position of a falsely detected tag.
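The gating of tag detections by nearby leg detections can be sketched as follows; the distance threshold is an assumed value for illustration, not the one used in the experiments:

```python
import math

def accept_tag(tag_xy, leg_detections, max_dist=0.5):
    """Accept an AR-tag detection only if a laser leg detection is nearby.

    Sketch of the gating rule described above; `max_dist` (metres) is an
    assumed threshold, not the value used in the paper.
    """
    return any(math.hypot(tag_xy[0] - lx, tag_xy[1] - ly) <= max_dist
               for lx, ly in leg_detections)
```

A tag with no leg detection within the gate is discarded as a likely false positive.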
When a false positive detection occurs, the probability of the person being at that location increases, and the robot therefore goes to explore that area. But since a false positive is normally detected only for a short time, the probability propagates to other places and the probability map recovers toward the correct area.
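A minimal sketch of this recovery behavior, assuming a simplified particle-filter weight update (the kernel width and false-negative probability are illustrative values, not the paper's):

```python
import math

P_FALSE_NEG = 0.3  # assumed probability of not detecting a visible person

def update_weights(particles, weights, in_fov, detection=None):
    """One simplified weight update of the belief over the person's location.

    `particles` are (x, y) hypotheses and `in_fov(p)` tells whether p is
    currently visible to the robot. Without a detection, visible particles
    are only downweighted (false negatives are possible), never zeroed, so
    the belief can recover after a short-lived false positive.
    """
    new_w = []
    for p, w in zip(particles, weights):
        if detection is not None:
            # weight by closeness to the detection (illustrative Gaussian-like kernel)
            d2 = (p[0] - detection[0]) ** 2 + (p[1] - detection[1]) ** 2
            new_w.append(w * math.exp(-d2 / 0.5))
        elif in_fov(p):
            new_w.append(w * P_FALSE_NEG)  # observed area, but nobody seen there
        else:
            new_w.append(w)  # occluded area: weight unchanged
    total = sum(new_w) or 1.0
    return [w / total for w in new_w]
```

Once the false detection stops, the downweighted-but-nonzero particles elsewhere regain relative weight, which is the recovery described above.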

The videos

  • Exploration in a small environment: Dabo explored the FME environment without the person to be found (i.e. the one wearing the tag) being present. The video shows the exploration behavior, in which the robot searched for the person around the obstacles. While the robot was moving, the belief also moved to the other side of the obstacles. Particles that were in the field of view of the robot got a low weight, since the robot did not observe the person.
    A false positive detection of the person also occurred when a person with a bike entered and the tag detection algorithm incorrectly detected a tag close to that person. However, after a few iterations of not seeing the tag, the belief started growing again.

  • Searching and tracking: Dabo searched for the person in the FME environment with other people present who occluded the person. It can clearly be seen that the algorithm assigned a probability to the person being behind the people standing in front of the robot. The robot went behind the people to see if the person was there, and then tracked the person. In the end, the robot had a false detection at the corner where other people were sitting, and therefore the belief concentrated there until the false detection stopped.

  • Exploration in a larger environment: The robot's exploration of the Telecos Square is shown in different experiments. There is also an experiment with a false positive detection, but here too the belief grew again after the person had not been seen, and the robot therefore expanded its exploration area again.

  • Searching and tracking in a larger environment: The robot searched for the person on the Telecos Square. Two experiments are shown in which the robot searched for the person behind two others blocking the view. In the second experiment, the person is also tracked while another person partly occluded him.

  • Searching and tracking in a larger environment 2: Several experiments of the robot searching for and tracking the person on the Telecos Square are shown in this video.

  • Use of dynamic obstacles: This video shows a simulation in which the person (red) and the robot (blue) always started at the same positions. There were 30 people (dynamic obstacles) walking around to random locations. The left side of the video shows the method using the visible dynamic obstacles (i.e. those within visibility range) to update the belief; this means that it took into account that the person could be hidden behind a dynamic obstacle. On the right side of the video, the algorithm did not use the dynamic obstacles, and therefore assumed the person not to be anywhere in the field of view. The belief map (second and fourth image) shows where the algorithm estimated the person to be, and it can be seen that, when using dynamic obstacles, the belief was maintained behind them (if there was any belief close by before).
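The occlusion reasoning behind this difference can be sketched as a line-of-sight check against circular dynamic obstacles; the obstacle radius below is an assumed value, and the function is an illustration rather than the implementation used in the simulation:

```python
import math

def visible(robot, point, dynamic_obstacles, radius=0.3):
    """Line-of-sight check from the robot to a point (e.g. a particle).

    A point is treated as occluded if the robot-to-point segment passes
    through a dynamic obstacle, modeled as a circle of assumed `radius`
    metres around each detected person.
    """
    rx, ry = robot
    px, py = point
    dx, dy = px - rx, py - ry
    seg_len2 = dx * dx + dy * dy
    for ox, oy in dynamic_obstacles:
        if seg_len2 == 0:
            continue
        # project the obstacle centre onto the robot->point segment
        t = max(0.0, min(1.0, ((ox - rx) * dx + (oy - ry) * dy) / seg_len2))
        cx, cy = rx + t * dx, ry + t * dy
        if math.hypot(ox - cx, oy - cy) <= radius:
            return False  # segment blocked by a dynamic obstacle
    return True
```

Particles for which `visible` returns False are not penalized for the lack of a detection, which is why the belief on the left side of the video persists behind the walking people.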

The full version of the videos can be found on RAS full.