
Associate Professor Sridhar Lakshmanan has taught the robotics vision course at UM-Dearborn plenty of times before. But this semester, he thought he'd spice up the class by adding an obvious missing ingredient: an actual robot. In this case, it's an open-source research robot known as a "TurtleBot," so named because of its short, squat profile and its top speed of roughly 1.5 miles per hour. Lakshmanan acquired a herd of 15 TurtleBots for the class with matching funds from the Department of Electrical and Computer Engineering and the Henry W. Patton Center for Engineering Education and Practice.
To be fair, teaching the course without a robot wasn't quite as counterintuitive as it sounds. Robots, after all, are fundamentally computers, and Lakshmanan said you can cover plenty about how they see by just focusing on the complicated math and coding at the core of robot vision.
That's still a big part of the course. But now, all the students' knowledge of algorithms and linear algebra will get put to the test in a final team project. The big exam will be making their TurtleBots autonomously traverse a mini obstacle course in the Institute for Advanced Vehicle Systems' high bay lab, equipped only with the programming the student teams provide. As in, no remote controls allowed.
"When you or I are navigating on the road, for example, we use fiducial markers," Lakshmanan said. "We can recognize stop signs or lane markers. We use color to distinguish between a potential obstacle and a background. We don't get confused by shadows or reflections. In a class like this, we can't give the robot all that capability, but we can give it enough that it can navigate in a somewhat controlled environment."
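To give a flavor of the color-based distinction Lakshmanan describes, here is a minimal sketch, not from the course materials, of how a robot might separate a brightly colored obstacle from the background using OpenCV. The HSV bounds are hypothetical values for something like an orange traffic cone and would be tuned on real footage.

```python
import cv2
import numpy as np

def find_colored_obstacle(frame_bgr):
    """Return a binary mask of pixels matching an assumed obstacle color.

    Works in HSV space, where hue is less sensitive to lighting than raw
    RGB values. The bounds below are hypothetical and would be tuned
    against footage of the actual obstacles.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([5, 120, 80])    # hypothetical lower HSV bound
    upper = np.array([20, 255, 255])  # hypothetical upper HSV bound
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes speckle noise, so faint shadows and
    # reflections are less likely to register as obstacle pixels.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```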
Getting a robot to do those kinds of things, things our human brains do unconsciously, involves using complex mathematics and computer algorithms to mimic those abilities. For a robot to recognize a lane marker, for example, you have to write code that allows it to pick out the edges and boundaries within a two-dimensional image.
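To make that concrete, here's a minimal sketch of the textbook approach, assuming OpenCV rather than the course's actual code: a Canny edge detector finds intensity boundaries in the 2-D image, and a Hough transform groups the edge pixels into straight, lane-like segments. All thresholds here are illustrative.

```python
import cv2
import numpy as np

def detect_lane_segments(frame_bgr):
    """Find straight line segments (candidate lane markers) in an image.

    1. Grayscale + blur to suppress pixel noise.
    2. Canny edge detection marks strong intensity boundaries.
    3. Probabilistic Hough transform groups edge pixels into segments.
    Thresholds are illustrative and would be tuned for the lab's lighting.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=30, maxLineGap=10)
    # Each segment is (x1, y1, x2, y2) in pixel coordinates.
    return [] if lines is None else [tuple(l[0]) for l in lines]
```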
Students will also be tackling another fundamental challenge in robot vision: the difference between two-dimensional and three-dimensional objects. Lakshmanan said if a robot had only an optical camera, it could easily mistake a shadow for an object and then veer wildly off course to avoid it.
The TurtleBots are therefore equipped with a second set of "eyes": a LIDAR system, which works like radar but uses pulses of infrared laser light, timing their reflections to determine how far away something is and whether it's made of actual physical stuff. Even then, you still have to write code that lets the optical camera and the LIDAR system talk with each other.
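The simplest form that conversation can take is a cross-check, sketched below under some simplifying assumptions: a planar LIDAR scan in the robot's horizontal plane, and a camera detection already converted to a bearing angle. If the LIDAR returns no range where the camera thinks it sees an obstacle, the "obstacle" is probably a shadow. This is a hypothetical illustration, not the course's implementation.

```python
import math

def camera_detection_is_physical(bearing_deg, lidar_ranges,
                                 angle_min_deg=-180.0, angle_step_deg=1.0,
                                 max_range_m=3.5):
    """Cross-check a camera detection against a planar LIDAR scan.

    bearing_deg:  direction of the camera detection, in degrees, in the
                  robot's frame (an assumed convention).
    lidar_ranges: one range reading (meters) per beam, starting at
                  angle_min_deg and spaced angle_step_deg apart; a
                  simplified stand-in for a ROS LaserScan message.

    Returns True if the LIDAR sees a surface near that bearing. A shadow
    or a mark painted on the floor reflects no laser pulse at that range,
    so it fails this check and can be ignored by the planner.
    """
    n = len(lidar_ranges)
    idx = round((bearing_deg - angle_min_deg) / angle_step_deg) % n
    # Check a small window of beams around the detection bearing to
    # tolerate calibration error between camera and LIDAR.
    for offset in range(-2, 3):
        r = lidar_ranges[(idx + offset) % n]
        if math.isfinite(r) and 0.05 < r < max_range_m:
            return True   # a real surface reflected the pulse
    return False          # no return: likely a shadow, not an object
```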
For students, the challenges they'll face in the class will take them straight into the heart of the same core problems confronting engineers of driverless cars. Lakshmanan said almost all autonomous vehicles still use real-time perception, in conjunction with mapping and GPS technology, as their way of knowing what's around them. As in humans, vision is a powerful unsung enabler of mobility.
"We have made many breakthroughs in the past decade, but there are still many problems that are in need of solutions," Lakshmanan said. "The holy grail in vision is making a perception system that can see under all conditions: bright light, low light, complex shadows, reflections. And, of course, the algorithms must be robust enough to do it while traveling at relatively high speeds. That complexity is what makes vision so challenging."
Luckily, for now, students will just have to achieve success at less than 2 miles per hour. Lakshmanan said even then, the obstacle course in the high bay lab will make for a challenging run, especially if passing clouds create some interesting lighting conditions or someone accidentally flips off the lights.
"If that happens," Lakshmanan said, "then all bets are off."