Think of things like volcanoes. If a scientist wants data collected from a volcano, we are not going to send a graduate student there. My name is Pratap Tokekar, and I'm an assistant professor in the ECE department. I'm especially interested in using robots as sensors and sensing agents.

There are a number of scenarios and applications we can think of, things like environmental monitoring, first response, emergency response, and security surveillance, where we really need eyes and ears on the ground or in the air. It's the three D's: it is dull, dirty, or dangerous to send a human into those regions. The goal is to take a bunch of robots, equip them with sensors, send them out into the area of interest, and have them autonomously explore the environment and do the sensing for you.

A simple example: if you are exploring an indoor environment, not all regions are accessible to a ground robot. A ground robot will not be able to go over steps or look at the ceiling. But you may have an aerial robot flying above the ground that can look up at the ceiling, look over tables and structures, fly through corridors, and so on. On the other hand, a ground robot may be able to go under a table, get into narrow, confined spaces, and give you good views from there.

How do you now automatically decide which type of robot should go where, and how do they exchange information with each other? It's a notoriously hard problem in computer science, and it has been around for the past 20 to 30 years. We were able to look at a very small subset, a very restricted case of the problem, which was still computationally challenging, and we came up with a result that is very promising.

It's robots! I don't think I need to say more.
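[Editor's note: the "which robot goes where" question above can be framed as a task-allocation problem. The following is a minimal illustrative sketch, not the speaker's actual method; the robot names, regions, capability sets, and costs are all invented for the example, and real versions of this problem use far more scalable algorithms than brute force.]

```python
from itertools import permutations

# Hypothetical toy model: each region is reachable only by certain robot
# types, and each (robot, region) pair has an assumed travel cost.
robots = {"uav1": "aerial", "ugv1": "ground", "ugv2": "ground"}
regions = {
    "ceiling": {"aerial"},             # only an aerial robot can view it
    "under_table": {"ground"},         # only a ground robot fits
    "corridor": {"aerial", "ground"},  # either type works
}
cost = {  # illustrative costs only
    ("uav1", "ceiling"): 1, ("uav1", "under_table"): 99, ("uav1", "corridor"): 2,
    ("ugv1", "ceiling"): 99, ("ugv1", "under_table"): 1, ("ugv1", "corridor"): 3,
    ("ugv2", "ceiling"): 99, ("ugv2", "under_table"): 2, ("ugv2", "corridor"): 1,
}

def best_assignment(robots, regions, cost):
    """Brute-force the minimum-cost feasible one-to-one assignment."""
    best, best_cost = None, float("inf")
    for perm in permutations(robots, len(regions)):
        pairs = list(zip(perm, regions))
        # Feasibility: the robot's type must be allowed in its region.
        if any(robots[r] not in regions[g] for r, g in pairs):
            continue
        c = sum(cost[(r, g)] for r, g in pairs)
        if c < best_cost:
            best, best_cost = pairs, c
    return best, best_cost

assignment, total = best_assignment(robots, regions, cost)
print(assignment, total)
# The aerial robot covers the ceiling; the ground robots split the rest.
```

The brute-force search is exponential in the number of robots, which hints at why even restricted cases of multi-robot allocation remain computationally challenging, as the interview notes.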