MIT's Low-Cost Sensor Glove Is Expected To Let Robots Recognize Objects By Touch

May 30, 2019


Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (MIT-CSAIL) have recently developed a low-cost sensor glove designed to help artificial intelligence learn how humans recognize objects by touch. Called the Scalable TActile Glove (STAG), it uses 550 tiny pressure sensors to generate grasp data that can be used to build better robots.


[Image: sensor glove and robot]


Humans are very good at using touch to figure out what an object is (for example, groping in the dark for glasses or a cell phone). Engineers hope that robots can replicate this ability. One way to do this is to collect as much information as possible about how humans actually recognize objects by touch. With a large enough database, machine learning can be used not only to infer how the human hand identifies something, but also to estimate its weight - tasks that robots and prosthetics still struggle with.


MIT is collecting this data through a low-cost knitted glove fitted with 550 pressure sensors. The glove is connected to a computer, which collects the data, converts the pressure measurements into a video-like "tactile map", and feeds it into a convolutional neural network (CNN). The network classifies these images, finding specific pressure patterns and matching them to specific objects.
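
To make the pipeline concrete, here is a minimal PyTorch sketch of a CNN that classifies single tactile frames into the article's 26 object categories. Arranging the pressure readings as a 1-channel 32x32 map, and the particular layer sizes, are assumptions for illustration only, not the architecture used by the CSAIL team.

```python
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    """Toy convolutional classifier for single-frame tactile maps.

    Assumes each frame of pressure readings is arranged as a 1-channel
    32x32 "image"; the layer sizes are illustrative, not those used by
    the MIT researchers.
    """
    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of 4 tactile frames -> class scores for 26 objects
frames = torch.rand(4, 1, 32, 32)
logits = TactileCNN()(frames)
print(logits.shape)  # torch.Size([4, 26])
```

In practice such a network would be trained on labeled tactile frames with a standard cross-entropy loss; the sketch only shows the shape of the input and output.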


The team collected 135,000 video frames from 26 common objects such as beverage cans, scissors, tennis balls, spoons, pens and mugs. The neural network then matches semi-random frames to particular grasps until a complete picture of the object is built up - much like the way people recognize objects by rolling them around in their hands. Using semi-random frames lets the network be fed clusters of related images, so no time is wasted on irrelevant data.


“We want to maximize the difference between the frames and provide the best input for our network,” said Petr Kellnhofer, a postdoctoral researcher at CSAIL. "All frames in a single cluster should have a similar signature, representing a similar way of grasping the object. Sampling from multiple clusters simulates how a human interactively explores an object and tries out different grasps."
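
One simple way to realize this "sample from multiple clusters" idea is to cluster the frames by their pressure signatures and take one frame per cluster. The sketch below uses k-means from scikit-learn and treats the flattened 32x32 pressure map as the signature; both choices are assumptions made for illustration rather than the exact procedure described by the researchers.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_diverse_frames(frames, n_clusters=8, seed=0):
    """Pick one tactile frame per cluster so the sampled frames differ
    as much as possible from each other.

    `frames` is an (N, 32, 32) array of pressure maps; the flattened map
    serves as the clustering "signature" here, which is a simplification.
    """
    signatures = frames.reshape(len(frames), -1)
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(signatures)
    rng = np.random.default_rng(seed)
    picks = [rng.choice(np.where(labels == c)[0]) for c in range(n_clusters)]
    return frames[picks]

# Example: reduce 500 recorded frames to 8 mutually dissimilar ones
frames = np.random.rand(500, 32, 32)
print(sample_diverse_frames(frames).shape)  # (8, 32, 32)
```

Feeding the network frames drawn from different clusters, rather than many near-identical frames from the same grasp, is what keeps the training data varied and avoids wasting time on redundant input.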


The system currently recognizes objects with an accuracy of 76%, and it also helps researchers understand how the hand grasps and manipulates them. To estimate weight, the researchers compiled a separate database of 11,600 frames showing objects being picked up by finger and thumb and then dropped. The weight can be estimated by measuring the pressure around the hand while an object is being held and comparing it to the pressure after the object is released.
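
The weight-estimation idea can be sketched as follows: sum the per-sensor drop in pressure between a "holding" frame and a "released" frame and scale it into a force. The calibration constant (`newtons_per_unit`) and the simple linear scaling are assumptions for illustration; the researchers' actual estimator is learned from their 11,600-frame database rather than hand-calibrated like this.

```python
import numpy as np

def estimate_weight(held_frame, released_frame,
                    newtons_per_unit=0.01, g=9.81):
    """Rough weight estimate from the drop in total pressure when an
    object is released. `newtons_per_unit` is an assumed calibration
    factor, not a value from the MIT work."""
    total_drop = np.clip(held_frame - released_frame, 0, None).sum()
    force_n = total_drop * newtons_per_unit   # raw sensor units -> newtons
    return force_n / g                        # mass in kilograms

# Example with synthetic 32x32 pressure maps
held = np.random.rand(32, 32)
released = held * 0.3                         # most pressure disappears after release
print(f"estimated mass: {estimate_weight(held, released):.3f} kg")
```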


Another advantage of this system is its combination of cost and sensitivity. Comparable sensor gloves cost thousands of dollars yet carry only around 50 sensors, whereas the MIT glove is made from off-the-shelf materials and costs just $10.

