0731 GMT February 29, 2020
Until recently, the bulk of artificial intelligence (AI) research has focused on individual agents. A single car that navigates streets on its own. A thermostat that learns from its immediate environment.
For the most part, artificially intelligent systems haven't been tasked with working and learning as a group, but Jonathan How, professor of aeronautics and astronautics at MIT, considers this a lost opportunity. How leads a research team that is changing the way mobile, artificially intelligent devices collaborate and learn from each other, WIRED magazine reported.
Ultimately, How and his team want to use machine learning (ML) to help smart objects make each other smarter. He sees a future where robots learning from one another and working together could transform industries like logistics (where robots fulfill orders and deliver them to your front door) and space exploration (where robots collaborate to investigate new frontiers). The real challenge, How says, is readying these AI-enabled bots for the real world, outside of the lab. And that’s where even more AI comes in.
Work of the whole
After 20 years in research labs, How knows that the real world is more complicated than the lab environments in which AI-powered robots are developed. Humans must contend with the complexities and uncertainties of everyday life: What is everybody else doing? How do we execute the task that we're doing together? How might that task change depending on our understanding of the world?
Leveraging their own groundbreaking algorithms, How and his team have optimized robots to adapt based on their experiences — and the experiences of their robot peers. To do this, the team uses reinforcement learning, an ML technique that allows AI-enabled agents to learn from their environment through trial and error, not unlike how humans learn. The research takes this a step further by studying what happens when multiple agents come into play — the crux of the emerging discipline of multi-agent reinforcement learning. The challenges include getting otherwise independent AI-enabled agents to build consensus and agree on something. How do you ensure that their constant chatter with one another doesn't overwhelm the network? And what happens when an AI-enabled robot believes it knows the correct way to do something — but it's wrong?
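The flavor of multi-agent reinforcement learning can be seen in a toy sketch: two independent Q-learners repeatedly play a one-step coordination game, earning a reward only when they pick the same action, and each updates its own value table from the shared outcome. This is purely illustrative — the game, the epsilon-greedy rule, and all parameters are assumptions for demonstration, not the team's actual algorithms.

```python
import random

random.seed(7)

ACTIONS = [0, 1]
ALPHA, EPISODES = 0.2, 5000
q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]  # one Q-table per agent

def pick(agent_q, eps):
    # Epsilon-greedy: explore with probability eps, otherwise exploit.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(agent_q, key=agent_q.get)

for ep in range(EPISODES):
    eps = max(0.05, 1.0 - ep / 2000)       # anneal exploration over time
    a0, a1 = pick(q[0], eps), pick(q[1], eps)
    reward = 1.0 if a0 == a1 else 0.0      # paid only for coordinating
    for agent, act in ((0, a0), (1, a1)):
        # Each agent updates its own table from the shared reward.
        q[agent][act] += ALPHA * (reward - q[agent][act])

greedy = [max(q[i], key=q[i].get) for i in range(2)]
print("greedy joint action:", greedy)
```

Even though neither agent ever sees the other's table, the two typically settle on a matching action — a mismatched pair earns nothing and is unstable, so trial and error pushes the pair toward agreement.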
Not until the advent of viable deep-learning platforms has it become possible to truly answer these questions. Here, How and his team use AWS Deep Learning AMI environments powered by Amazon EC2 GPU instances that can perform incredibly complex calculations in the cloud (without the cost and headache of managing racks and racks of servers). The end goal? To train and run reinforcement learning models that are fast and accurate enough to tackle real-world problems, such as when robots disagree, or when the constant chatter between them could otherwise overwhelm the network.
“It is known as the consensus problem,” How says. “If we all have a different opinion of what time we go to supper, how much communication do you need before you can all agree on a time? It seems relatively straightforward, but within a robotic system we’re dealing with many, many more questions than that, and typically these questions have a lot of uncertainty associated with them.”
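How's supper-time analogy maps onto the classic average-consensus iteration from distributed control: each agent repeatedly nudges its value toward its neighbors' values, and on a connected network everyone converges to the group average. A minimal sketch — the ring topology, step size, and preferred times below are illustrative assumptions, not the team's setup:

```python
# Four agents on a ring, each with a preferred supper time (in hours).
times = [17.0, 18.5, 19.0, 18.0]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # ring graph
W = 0.3  # step size; must be < 1/max_degree for stability

for _ in range(200):
    # Synchronous update: each agent moves toward its neighbors' values.
    times = [x + W * sum(times[j] - x for j in neighbors[i])
             for i, x in enumerate(times)]

print(times)  # all four values converge to the initial mean, 18.125
```

The catch How alludes to is that this clean picture assumes reliable, timely communication; with uncertainty, delays, or limited bandwidth, deciding how much message-passing is enough becomes a genuinely hard question.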
Drilling deeper, How’s research (jointly funded by AWS, Boeing and IBM) has gone a long way toward solving the problem of performing an action or making a decision with incomplete knowledge. By way of example, he asks, “If two basketball players partially knew how to shoot a three-point shot, could they combine those two skills and actually do it?” It’s complicated stuff, but How’s new reinforcement learning system, called Hierarchical Multi-Agent Teaching (HMAT), is proving successful at improving team-wide learning through optimized reward functions and more efficient communications.
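The teaching idea behind a system like HMAT can be sketched, very loosely, as action advising: an experienced "teacher" agent recommends an action whenever its own value estimates are confident, and a "student" Q-learner otherwise explores on its own. Everything here — the chain environment, the confidence rule, and the thresholds — is an assumed illustration of agent-to-agent teaching in general, not the published HMAT method.

```python
import random

random.seed(1)

N, GOAL, GAMMA, ALPHA = 5, 4, 0.9, 0.5
LEFT, RIGHT = 0, 1

def step(s, a):
    # Deterministic chain: reward 1 only on reaching the goal state.
    s2 = min(GOAL, s + 1) if a == RIGHT else max(0, s - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

# Teacher: an already-trained Q-table, built here by value iteration.
teacher = [[0.0, 0.0] for _ in range(N)]
for _ in range(50):
    for s in range(GOAL):
        for a in (LEFT, RIGHT):
            s2, r, done = step(s, a)
            teacher[s][a] = r + (0.0 if done else GAMMA * max(teacher[s2]))

# Student: learns by Q-learning, accepting advice when the teacher is sure.
student = [[0.0, 0.0] for _ in range(N)]
for _ in range(500):
    s = 0
    for _ in range(20):
        if abs(teacher[s][RIGHT] - teacher[s][LEFT]) > 0.1:
            a = max((LEFT, RIGHT), key=lambda a: teacher[s][a])  # advised
        elif random.random() < 0.2:
            a = random.choice((LEFT, RIGHT))                     # explore
        else:
            a = max((LEFT, RIGHT), key=lambda a: student[s][a])  # exploit
        s2, r, done = step(s, a)
        target = r + (0.0 if done else GAMMA * max(student[s2]))
        student[s][a] += ALPHA * (target - student[s][a])
        s = s2
        if done:
            break

policy = [max((LEFT, RIGHT), key=lambda a: student[s][a]) for s in range(GOAL)]
```

After training, the student's greedy policy heads right toward the goal in every state — the teacher's confident advice steers the student to rewarding experience far sooner than solo exploration would.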
In How’s idealized ecosystem of co-learning AI-enabled bots, the whole is greater than the sum of the parts, and it’s taking some heavy technological efforts to make things work. For that, the team relies heavily on simulations, which they build using the PyTorch and TensorFlow machine-learning frameworks on Amazon EC2 P3 instances.
How notes that he has struggled to give his team (largely his graduate students) enough on-premises computational horsepower to run the complex RL algorithms required to keep a herd of robots in constant communication and to adapt their behaviors on the fly.
“If on day one you gave a student a new machine and said, ‘This is your computer for the next five years,’ you’d get laughed out of the room,” How says.
“After two years you’re dealing with old hardware that’s not cutting-edge. Then you’re limiting the ability of a student and of what they can accomplish.”
Leveraging cloud-based services gives every member of the team access to as much computing power as they need, no matter what aspect of the machine-learning problem they’re working on.
“In this sort of simulation-based training, where we’re testing hundreds of settings, speed is crucial,” says Dong-Ki Kim, a master of science candidate working with How.
“It translates directly into our ability to run more iterations in a shorter period of time. AWS offers powerful GPU instances that cut down training times significantly, accelerating the pace of our research.”
How sees a timeline of five to ten years for this research to be commercialized, potentially becoming a fundamental enabler of future AI applications. For now, he is thinking about how these systems could power search-and-rescue robots, perhaps for government agencies, but says that the uses for collaborative, resilient robots are nearly limitless.
“Even a fleet of delivery robots could benefit from this research,” he says of the very real possibility of bots working together to deliver packages. “They could learn from experience and share that learned knowledge.”