MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) used a tactile sensor called GelSight to collect data that allows an AI system to identify objects by touch, Xinhua News Agency reported.
The research team recorded 12,000 videos of nearly 200 objects being touched, creating a dataset of more than three million static images.
Using this dataset, the machine learning system learned the relationship between vision and touch in a controlled environment.
"By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge ... By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings," said Yunzhu Li, CSAIL PhD student and lead author of the research paper.
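The two directions Li describes, predicting touch from sight and sight from touch, can be illustrated with a deliberately simplified sketch. The feature vectors, the pairing, and the nearest-neighbour lookup below are all invented for illustration; the actual CSAIL model is a learned neural network trained on the GelSight dataset, not a table lookup.

```python
# Toy sketch of cross-modal prediction via nearest-neighbour retrieval.
# All data here is hypothetical and stands in for learned visual/tactile
# feature vectors.
import math

# Hypothetical paired dataset: (visual_features, tactile_features)
paired_data = [
    ((0.9, 0.1), (0.8, 0.2)),  # e.g. a flat surface
    ((0.1, 0.9), (0.2, 0.9)),  # e.g. a sharp edge
    ((0.5, 0.5), (0.5, 0.4)),  # e.g. a curved object
]

def nearest(query, index):
    """Return the paired value whose key is closest to `query` (Euclidean)."""
    return min(index, key=lambda kv: math.dist(query, kv[0]))[1]

def touch_from_vision(visual):
    # "By looking at the scene ... imagine the feeling of touching"
    return nearest(visual, paired_data)

def vision_from_touch(tactile):
    # "By blindly touching around ... predict the interaction"
    flipped = [(t, v) for v, t in paired_data]
    return nearest(tactile, flipped)
```

In the real system, both mappings are learned jointly from the paired video frames and tactile readings rather than retrieved from stored examples.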
Future work will expand the dataset with data collected in less structured environments, so that the AI can recognize more objects and better understand scenes.