
Analysis of Methods for Playing Human Robot Hide-and-Seek in a Simple Real World Urban Environment

Alex Goldhoorn, Alberto Sanfeliu and René Alquézar
ROBOT 2013: First Iberian Robotics Conference, Madrid, Spain, 2013
PDF | BibTeX | DOI

Abstract

This paper analyzes different methods for enabling robots to play hide-and-seek with humans in urban environments. The problem is formulated as a search task where a robot seeker must find a human hider in an environment with static obstacles and limited visibility.

We compare several approaches ranging from simple heuristic methods (random walk, frontier exploration) to probabilistic planning algorithms that maintain belief distributions over the hider's location. The methods are evaluated in simulation and in real-world scenarios with dynamic obstacles and occlusions.

The results show that belief-based approaches, which explicitly model uncertainty about the hider's location, significantly outperform purely heuristic methods. This work laid the foundation for the more sophisticated POMCP and particle filter approaches developed in my later research.

Interactive Demonstration

Try the interactive simulation to explore the belief-based algorithms presented in this paper. You can compare greedy frontier exploration with POMCP and particle filter methods, design custom maps, and control the seeker robot manually.

Launch Interactive Demo →

Key Contributions

Methods Compared

1. Random Walk

Baseline approach: the robot moves randomly until it stumbles upon the hider, with no planning or belief representation.
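As a minimal sketch of this baseline (not code from the paper; the grid representation, function name, and step cap are assumptions for illustration), the seeker repeatedly steps to a random free neighbouring cell:

```python
import random

def random_walk_search(start, hider, free_cells, rng=None, max_steps=10000):
    """Baseline seeker: step to a random free neighbour until the hider's
    cell is reached or the step budget runs out."""
    rng = rng or random.Random(0)
    pos, steps = start, 0
    while pos != hider and steps < max_steps:
        x, y = pos
        neighbours = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if (x + dx, y + dy) in free_cells]
        pos = rng.choice(neighbours)
        steps += 1
    return pos, steps
```

Because nothing is remembered between steps, performance degrades quickly as the map grows, which is what makes this a useful lower bound for comparison.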

2. Frontier Exploration (Greedy)

The robot maps the environment as it moves and heads toward frontiers, the boundaries between explored and unexplored space. This deterministic strategy is driven by information gain and uses no probabilistic reasoning.
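A minimal sketch of one greedy frontier step, under assumed data structures (sets of grid cells; the function name is illustrative, not from the paper): frontiers are known-free cells bordering unknown space, and a breadth-first search returns the path to the nearest one.

```python
from collections import deque

def nearest_frontier(pos, known_free, unknown):
    """Greedy frontier step: BFS over known free cells to the closest cell
    adjacent to unexplored space; returns the path there, or None if the
    map is fully explored."""
    def neighbours(c):
        x, y = c
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    frontier = {c for c in known_free
                if any(n in unknown for n in neighbours(c))}
    queue, parent = deque([pos]), {pos: None}
    while queue:
        c = queue.popleft()
        if c in frontier:
            path = []
            while c is not None:        # walk parents back to the start
                path.append(c)
                c = parent[c]
            return path[::-1]           # start .. frontier cell
        for n in neighbours(c):
            if n in known_free and n not in parent:
                parent[n] = c
                queue.append(n)
    return None
```

When `nearest_frontier` returns None the whole map has been seen, so a stationary hider must already have been found.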

3. Bayesian Grid Belief

Maintains a probability distribution over grid cells for the hider's location and updates it from observations via Bayesian inference. The robot moves toward the highest-probability regions.
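The core of such an update can be sketched as follows (a simplified illustration, not the paper's implementation; the dict-based belief and function names are assumptions). A negative observation, looking at some cells and not seeing the hider, zeroes those cells' probability mass and renormalizes the rest:

```python
def bayes_update(belief, visible):
    """Negative-observation update: the seeker scanned `visible` and did not
    see the hider, so that probability mass is zeroed and the remainder
    renormalized."""
    post = {c: (0.0 if c in visible else p) for c, p in belief.items()}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()} if z > 0 else dict(belief)

def best_cell(belief):
    """Greedy target selection: the cell with the highest posterior."""
    return max(belief, key=belief.get)
```

Repeated scans therefore concentrate the belief in the regions the seeker has not yet been able to observe, which is exactly where the greedy target selection sends it next.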

4. Particle Filter

Represents the belief with a set of samples (particles), which scales well to continuous spaces and moving targets. Particles are re-weighted and resampled based on observations.
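One predict–weight–resample cycle can be sketched like this (a simplified grid-world illustration under assumed motion and observation models, not the paper's implementation): each particle is a hypothesized hider position that moves at most one cell per step, and particles that contradict a negative observation are dropped at resampling.

```python
import random

def pf_step(particles, visible, free_cells, rng=None):
    """One particle-filter cycle for a possibly moving hider:
    predict with a random-motion model, weight by the negative observation
    (the hider was not seen in `visible`), then resample."""
    rng = rng or random.Random(1)
    moved = []
    for x, y in particles:              # predict: stay or step to a free neighbour
        options = [(x, y)] + [(x + dx, y + dy)
                              for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                              if (x + dx, y + dy) in free_cells]
        moved.append(rng.choice(options))
    weights = [0.0 if p in visible else 1.0 for p in moved]
    if sum(weights) == 0:               # every particle contradicted: keep prediction
        return moved
    return rng.choices(moved, weights=weights, k=len(moved))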

Related Work

Comparison of MOMDP and Heuristic Methods to Play Hide-and-Seek

International Conference of the Catalan Association for Artificial Intelligence (CCIA), 2013
A. Goldhoorn, A. Sanfeliu, R. Alquézar

Companion paper focusing on Mixed Observability Markov Decision Process (MOMDP) approaches for the same hide-and-seek problem.

PDF | BibTeX | DOI

Searching and Tracking of Humans in Urban Environments by Humanoid Robots

PhD Thesis, Universitat Politècnica de Catalunya, 2017
A. Goldhoorn

This paper's methods were extended and refined in my PhD thesis, which presents POMCP-based planning for continuous spaces and multi-robot coordination.

Thesis page | PDF

Citation

@inproceedings{Goldhoorn2013Robot,
  title={Analysis of Methods for Playing Human Robot Hide-and-Seek
         in a Simple Real World Urban Environment},
  author={Goldhoorn, Alex and Sanfeliu, Alberto and Alquézar, René},
  booktitle={ROBOT 2013: First Iberian Robotics Conference},
  pages={505--517},
  year={2013},
  publisher={Springer},
  doi={10.1007/978-3-319-03653-3_37}
}
