But automating mealtimes is not easy. Researchers first had to deconstruct the user's eating experience so that the robot could master the use of a fork.
A feeding robot also requires research and support across many fields. The most important first step is getting the robot to skewer food accurately. Whether it is a crisp carrot, a grape with a smooth skin that is hard to hold, or a soft, mushy banana, different foods pose different problems. Gilwoo Lee, a PhD student in computer science who worked on the project, said: "When people poke food with a fork, they tend to think about the food's shape, its hardness, and the way it is eaten." So the lab invited a group of people to use sensor-equipped forks to feed a mannequin, while researchers recorded the data to better understand these small movements.
Bananas, for example, are fed fully peeled, which makes them prone to sliding off when the robotic arm tries to lift the fruit. Lee said: "To improve the success rate of skewering soft items, we deliberately set a special angle as the manipulator tilts, to keep the food from falling off."
The second problem is raising the fork to the right height and keeping the food firmly skewered on its way to the user's mouth. In this part of the experiment, the researchers had the robotic arm feed able-bodied volunteers. They found that crisp foods like carrots need not only a fixed arm-bend angle but also a variable skewering position. If the fork is inserted into the middle of a carrot stick, the person cannot eat the whole piece; long, thin foods like this should be skewered at one end.
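The two rules described above (skewer long, crisp pieces near one end; tilt the fork for soft pieces) can be sketched as a small planning function. This is an illustrative sketch, not the lab's actual code: `FoodItem`, `choose_skewer_pose`, and the thresholds are all assumptions.

```python
# Hedged sketch: picking a skewer point and fork tilt from food shape.
# All names and thresholds here are illustrative, not the project's API.
from dataclasses import dataclass

@dataclass
class FoodItem:
    kind: str         # e.g. "carrot", "banana", "grape"
    length_cm: float  # longest axis of the detected piece
    softness: float   # 0.0 = crisp, 1.0 = very soft

def choose_skewer_pose(item: FoodItem) -> dict:
    """Pick where along the piece to skewer and how much to tilt the fork."""
    # Long, crisp pieces (e.g. carrot sticks): skewer near one end so the
    # exposed length can be bitten off rather than blocked by the fork.
    if item.length_cm > 6.0 and item.softness < 0.3:
        offset = 0.4 * item.length_cm  # shift from center toward the end
    else:
        offset = 0.0                   # round or soft pieces: skewer the middle
    # Soft items (banana slices) get an angled approach so they
    # do not slide off the tines as the arm lifts.
    tilt_deg = 30.0 if item.softness > 0.6 else 0.0
    return {"offset_cm": offset, "tilt_deg": tilt_deg}
```

A round grape would get a centered, vertical skewer; a carrot stick gets an end skewer; a banana slice gets the tilted approach.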
Once enough food data had been collected, the robot used object-recognition software to distinguish twelve foods (apples, bananas, bell peppers, broccoli, cantaloupe, carrots, cauliflower, cherry tomatoes, grapes, honeydew melon, kiwifruit, and strawberries) and to adapt its acquisition method to each food's shape. The robot can also sense when the user's mouth is open, and then deliver the food into the mouth.
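The recognition step can be thought of as a lookup from a detected food class to an acquisition strategy. The sketch below assumes this structure; the strategy labels and the mapping itself are invented for illustration, only the food list comes from the article.

```python
# Illustrative mapping from a detected food class to an acquisition strategy.
# The class list follows the article; the strategy assignments are assumed.
STRATEGY = {
    "apple": "vertical skewer",      "banana": "angled skewer",
    "bell pepper": "vertical skewer","broccoli": "vertical skewer",
    "cantaloupe": "vertical skewer", "carrot": "end skewer",
    "cauliflower": "vertical skewer","cherry tomato": "angled skewer",
    "grape": "angled skewer",        "honeydew": "vertical skewer",
    "kiwi": "angled skewer",         "strawberry": "vertical skewer",
}

def plan_acquisition(detected_class: str) -> str:
    # Fall back to a generic vertical skewer for unrecognized classes.
    return STRATEGY.get(detected_class, "vertical skewer")
```

The design choice here is a plain table rather than logic on geometry: once the recognizer names the food, per-class behavior is easy to tune and extend.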
The team admits this approach is still imperfect: if the user chats while eating, the robot may misread the behavior, and a mouth opened in greeting may be taken as a signal of wanting to eat. The robot also cannot cut food by itself, though Lee points out that under these conditions stroke survivors with limited hand mobility can use a one-handed cutting board, and that robots will learn such cutting skills in the future. The team is now developing new algorithms to let the robot pick up all kinds of food (twirling spaghetti is also on the syllabus) to meet future challenges.
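One common way to reduce the talking-while-eating failure mode described above is to require the mouth to stay open for a short dwell time before feeding. This is a sketch of that idea under my own assumptions, not the team's method; `detect_mouth_open` stands in for a real face-tracking call.

```python
# Sketch: gate the feeding motion on a *sustained* open mouth, so brief
# openings from speech do not trigger it. detect_mouth_open is a stand-in
# for a real perception call; the thresholds are assumptions.
import time

def wait_for_intent(detect_mouth_open, hold_s=1.0, timeout_s=30.0) -> bool:
    """Return True once the mouth has stayed open for `hold_s` seconds."""
    opened_at = None
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if detect_mouth_open():
            if opened_at is None:
                opened_at = time.monotonic()     # mouth just opened
            elif time.monotonic() - opened_at >= hold_s:
                return True                      # sustained: treat as intent
        else:
            opened_at = None                     # speech-length opening: reset
        time.sleep(0.05)
    return False                                 # no sustained opening seen
```

The dwell timer trades a little responsiveness for far fewer false feeds; a production system would likely fuse this with gaze or head-pose cues.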
At present, the team has no plans to launch a product. What is exciting is that every part of the robot is available off the shelf: the core arm is Kinova's JACO assistive manipulator, and the camera and depth sensor are made by Intel. This means the physical technology is mature enough to solve people's eating problems; the remaining challenge is converting a complex user experience into code.