Computer Science Department
School of Computer Science, Carnegie Mellon University



CMU-CS-17-129

Augmented Reality Visualization for Autonomous Robots

Danny Zhu

December 2017

Ph.D. Thesis

CMU-CS-17-129.pdf


Keywords: Artificial intelligence, augmented reality, autonomous robots, mobile robots, RoboCup Small Size League, visualization

We believe that it is essential to be able to analyze the reasoning of autonomous robots as it relates to their behavior, and to display that reasoning in a quantitatively correct manner. Videos of robots are a natural aid for replaying and demonstrating robot performance; a plain video contains no information about the robots' processing or behavior, but it can be enhanced by combining it with that extra information. Overall, the goal of this thesis is to combine real systems of mobile robots with tools for visualizing algorithms, so that the behavior of complex autonomous agents can be displayed in tandem with the real world. Concretely, the thesis investigates the addition of visualizations onto initially plain, uninformative videos.

We focus on the creation of augmented reality visualizations to explain the reasoning of three autonomous robot systems – a quadrotor, a robot soccer team (with separate discussions of offense and defense aspects), and a robot soccer automatic referee (autoref) – and how to extend those visualizations to general robot domains. After motivating the work by providing a detailed explanation of how to build a visualization of reasoning for an example quadrotor domain, we explain how to generalize the concepts introduced in the process. We contribute a specification for storing a set of graphical information, along with corresponding times and additional organizational information, which we use to store sets of time-varying spatial drawings and combine them with videos to create visualizations. We also demonstrate a working implementation of the whole procedure, which we have already used with those robots.
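
As a concrete illustration, the sketch below shows what such time-stamped, layered drawing records might look like; it is a minimal sketch for exposition only, and the field names and structure are our own assumptions rather than the format specified in the thesis.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of time-stamped drawing records of the kind the
# specification above might describe; all names and fields here are
# illustrative assumptions, not the thesis's actual format.

@dataclass
class Shape:
    kind: str                           # e.g. "line", "circle", "polygon"
    points: List[Tuple[float, float]]   # world-frame coordinates, in meters
    color: Tuple[int, int, int] = (255, 0, 0)

@dataclass
class DrawingFrame:
    timestamp: float                    # seconds, aligned with the video clock
    layer: str                          # organizational grouping, e.g. "defense/threats"
    shapes: List[Shape] = field(default_factory=list)

def frames_near_time(log, t, tolerance=1 / 30):
    """Select the logged drawing frames closest in time to a video frame at time t."""
    return [f for f in log if abs(f.timestamp - t) <= tolerance]
```

With records of roughly this shape, overlaying drawings on a video reduces to looking up the frames whose timestamps match the current video time and projecting their shapes into the image.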

An important part of the contribution is the ability to dynamically change which objects are visualized for a given plan. We can show multiple levels of detail of the plan, or filter the visualizations corresponding to different portions of the plan, according to a viewer's choices. Allowing these actions extends the idea of layered disclosure, which we have already used heavily for text, to these visualizations.
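
To illustrate, a minimal sketch of such viewer-controlled filtering, assuming each drawing record carries a layer name and a numeric detail level (hypothetical fields, not drawn from the thesis), might look like the following.

```python
# Illustrative sketch (an assumption, not the thesis implementation) of
# layered disclosure applied to drawings: each drawing carries a layer name
# and a detail level, and the viewer chooses at playback time which layers
# to enable and how much detail to show.

def visible_drawings(frames, enabled_layers, max_detail):
    """Keep only drawings whose layer the viewer enabled and whose detail
    level does not exceed the viewer's chosen threshold."""
    return [
        f for f in frames
        if f.layer in enabled_layers and getattr(f, "detail", 0) <= max_detail
    ]

# Example: show only the defense-related drawings at a coarse level of detail.
# overlay = visible_drawings(log_frames, {"defense/threats"}, max_detail=1)
```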

After introducing the general contributions, we return to specific robots: the quadrotor again, as well as the soccer offense, soccer defense, and autoref algorithms. For each algorithm, we give a description of the algorithm itself (brief for the offense, detailed for the other two) followed by an exploration of how to instantiate the general visualization principles for that particular algorithm. We demonstrate that we can apply our general methods of visualization across multiple diverse robot systems.

85 pages

Thesis Committee:
Manuela Veloso (Chair)
Emma Brunskill
Maxim Likhachev
Stefan Zickler (iRobot Corporation)

Frank Pfenning, Head, Computer Science Department
Andrew W. Moore, Dean, School of Computer Science



