Drone racing is a high-adrenaline, high-speed ‘extreme’ sport, and a successful drone racing pilot needs a great deal of skill and experience to pull off the tight maneuvers demanded by intricate, maze-like courses.
Fascinated by the speed and accuracy of drone racing, one MIT researcher brought together a team to see if they could create a virtual-reality system for ‘training’ drones without the danger of constantly crashing into walls and other objects.
And so they did, naming the system “Flight Goggles”. The testing ground, however, involves neither actual VR goggles like those a drone racer wears, nor a room surrounded by safety nets, which pose their own problems for drones, such as propellers getting snagged.
In a nutshell, the testbed is a gymnasium-sized, hangar-like room built into MIT’s new drone-testing facility in Building 31, lined with motion-capture cameras that keep track of the drone’s position in the room.
Sertac Karaman, associate professor of aeronautics and astronautics at MIT, explains how the team uses image-rendering software to generate ‘rooms’ and other environments, creating a virtual space for the test drone to navigate.
“The drone will be flying in an empty room, but will be ‘hallucinating’ a completely different environment, and will learn in that environment,” Karaman explains.
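In other words, the motion-capture cameras report where the drone really is, rendering software draws what the drone would see from that pose inside the virtual scene, and the result is fed to the drone’s onboard perception. The Python sketch below is purely illustrative; the class and method names are stand-ins for those three components, not the actual Flight Goggles software.

```python
# Illustrative sketch only: these classes are stand-ins for the three
# components the article describes, not the real Flight Goggles code.

class MotionCaptureClient:
    """Stand-in for the room's motion-capture system."""
    def get_pose(self):
        # Would return the drone's real position and orientation in the room.
        return {"position": (0.0, 0.0, 1.5), "orientation": (1.0, 0.0, 0.0, 0.0)}

class VirtualSceneRenderer:
    """Stand-in for the image-rendering software."""
    def render(self, pose):
        # Would return a photorealistic image of the *virtual* scene,
        # drawn from the drone's real pose in the empty room.
        return b"<rendered RGB frame>"

class DronePerception:
    """Stand-in for the drone's onboard vision and control stack."""
    def process_image(self, frame):
        # Would run perception and planning on the hallucinated view.
        pass

def hallucination_step(mocap, renderer, drone):
    """One tick of the virtual-reality loop described above."""
    pose = mocap.get_pose()        # where the drone really is
    frame = renderer.render(pose)  # what it should 'see' in the virtual room
    drone.process_image(frame)     # onboard perception acts on the rendering
```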
Researchers have previously used deep learning to train drones to navigate tricky environments; a team from the University of Zurich, for example, trained a drone to recognise and follow forest trails. This can be a time-consuming task, though, requiring many images, a lot of computing power and, presumably, many crashes and repairs to get a successful result.
Karaman hopes the VR testing environment will change that. “The moment you want to do high-throughput computing and go fast, even the slightest changes you make to its environment will cause the drone to crash,” Karaman says. “You can’t learn in that environment.”
“If you want to push boundaries on how fast you can go and compute, you need some sort of virtual-reality environment.”
In the Flight Goggles room, high-speed imaging and computer processing are put to work, with images processed at around 90 frames per second, roughly three times faster than the human eye can perceive.
This is made possible by custom-built circuit boards into which the researchers embedded a supercomputer, along with a camera and an inertial measurement unit, all neatly packaged into a small, 3-D-printed nylon and carbon-fiber-reinforced drone frame.
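At 90 frames per second, the entire capture, render and process cycle has a budget of roughly 11 milliseconds per frame (one ninetieth of a second). As a rough illustration, and assuming nothing about the real Flight Goggles software, a fixed-rate loop enforcing such a cadence might look like this:

```python
import time

FPS = 90
FRAME_BUDGET = 1.0 / FPS  # about 11.1 ms per frame

def run_fixed_rate(get_frame, process_frame, num_frames=900):
    """Run the capture/process cycle at a steady 90 Hz, flagging overruns."""
    deadline = time.perf_counter()
    for _ in range(num_frames):
        deadline += FRAME_BUDGET
        process_frame(get_frame())
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)  # finished early: hold the 90 Hz cadence
        else:
            # Missing the budget at speed is exactly the failure mode
            # Karaman describes: the drone acts on stale imagery.
            print(f"frame overran budget by {-remaining * 1000:.1f} ms")
```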
Karaman, who will present details of the VR drone system at ICRA 2018 next week, says: “We think this is a game-changer in the development of drone technology, for drones that go fast.”
But does a drone trained in a virtual environment successfully translate its learning to the real world?
The team carried out several experiments; in a press release, they describe one involving a window about twice the size of the drone.
Over 10 flights, the drone flew through the virtual window 361 times, ‘crashing’ just 3 times, of course without sustaining any real damage.
After completing the virtual testing, the team set up a real window of the same size and turned on the drone’s onboard camera so it could see its real-world surroundings.
Over 8 flights and 119 fly-throughs, the drone crashed or required intervention only 6 times.
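For a rough sense of how those two sets of figures compare, the reported numbers work out to a failure rate of under one percent in simulation and around five percent in the real world:

```python
# Failure rates computed from the figures reported above.
virtual_attempts, virtual_crashes = 361, 3
real_attempts, real_failures = 119, 6

print(f"virtual: {virtual_crashes / virtual_attempts:.1%} failure rate")  # 0.8%
print(f"real:    {real_failures / real_attempts:.1%} failure rate")       # 5.0%
```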
“It does the same thing in reality,” Karaman says. “It’s something we programmed it to do in the virtual environment, by making mistakes, falling apart, and learning. But we didn’t break any actual windows in this process.”
The system has huge potential, as researchers can create any number of environments or layouts with which to test drones, even testing whether drones can ‘safely’ fly around virtual humans.
Karaman has his eye on the high-speed prize, though: entering an autonomous drone in a real drone race against human drone pilots.
“In the next two or three years, we want to enter a drone racing competition with an autonomous drone, and beat the best human player,” Karaman says.
Co-authors include Thomas Sayre-McCord, Winter Guerra, Amado Antonini, Jasper Arneberg, Austin Brown, Guilherme Cavalheiro, Dave McCoy, Sebastian Quilter, Fabian Riether, Ezra Tal, Yunus Terzioglu, and Luca Carlone of MIT’s Laboratory for Information and Decision Systems, along with Yajun Fang of MIT’s Computer Science and Artificial Intelligence Laboratory, and Alex Gorodetsky of Sandia National Laboratories.