Machine learning, in which computers learn new skills by looking for patterns in training data, is the basis of most recent advances in artificial intelligence, from voice-recognition systems to self-parking cars. It's also the technique that autonomous robots typically use to build models of their environments.
That type of model-building gets complicated, however, in cases in which clusters of robots work as teams. The robots may have gathered information that, collectively, would produce a good model but which, individually, is almost useless. If constraints on power, communication, or computation mean that the robots can't pool their data at one location, how can they collectively build a model?
At the Uncertainty in Artificial Intelligence conference in July, researchers from MIT's Laboratory for Information and Decision Systems will present an answer to that question: an algorithm in which distributed agents, such as robots exploring a building, collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses.
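The pattern of "analyze locally, then exchange analyses pairwise" can be illustrated with a toy sketch. This is not the authors' actual algorithm, and all names in the snippet are hypothetical: each agent here summarizes its own data with simple Gaussian sufficient statistics, and agents that meet average their summaries, so every agent's local estimate drifts toward what a centralized batch computation would produce.

```python
# Toy sketch (hypothetical, not the paper's algorithm): each agent fits a
# local Gaussian summary of its own data; pairs of agents that "meet"
# average their summaries, a simple gossip-style exchange.
import numpy as np


class Agent:
    def __init__(self, data):
        # Local analysis: sufficient statistics (count, sum, sum of squares).
        self.n = float(len(data))
        self.s = float(np.sum(data))
        self.ss = float(np.sum(np.square(data)))

    def merge(self, other):
        # Pairwise exchange: both agents keep the element-wise average of
        # their statistics. Repeated random meetings drive every agent's
        # ratios s/n and ss/n toward the global values, so each local
        # estimate approaches the centralized one.
        for name in ("n", "s", "ss"):
            avg = 0.5 * (getattr(self, name) + getattr(other, name))
            setattr(self, name, avg)
            setattr(other, name, avg)

    def estimate(self):
        # Current local model: mean and variance of the data summarized so far.
        mean = self.s / self.n
        return mean, self.ss / self.n - mean ** 2


rng = np.random.default_rng(0)
agents = [Agent(rng.normal(5.0, 2.0, size=500)) for _ in range(4)]
for _ in range(20):  # random pairwise meetings, e.g. robots passing in a hall
    i, j = rng.choice(len(agents), size=2, replace=False)
    agents[i].merge(agents[j])
print([tuple(round(v, 2) for v in a.estimate()) for a in agents])
```

In this sketch, averaging rather than adding the statistics keeps data from being double-counted when the same pair of agents happens to meet more than once.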
In experiments involving several different data sets, the researchers' distributed algorithm actually outperformed a standard algorithm that works on data aggregated at a single location.
"A single computer has a very difficult optimization problem to solve in order to learn a model from a single giant batch of data, and it can get stuck at bad solutions," says Trevor Campbell, a graduate student in aeronautics and astronautics at MIT, who wrote the new paper with his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics. "If smaller chunks of data are first processed by individual robots and then combined, the final model is less likely to get stuck at a bad solution."
Campbell says that the work was motivated by questions about robot collaboration. But it could also have implications for big data, since it would allow distributed servers to combine the results of their data analyses without aggregating the data at a central location.
"This procedure is completely robust to pretty much any network you can think of," Campbell says. "It's very much a flexible learning algorithm for decentralized networks."