Toyota unveils ‘ChatGPT moment’ for robotics
Toyota Research Institute (TRI) has found that a new AI learning approach enables robots to acquire new skills quickly and confidently.
TRI’s robot behaviour model is taught using demonstrations combined with a language description of the goal. It then uses an AI-based Diffusion Policy to learn the demonstrated skill.
Diffusion Policy is a concept Toyota developed in partnership with Columbia Engineering and MIT, which the latter defines as “a new way of generating robot behaviours by representing a robot’s visuomotor policy as a conditional denoising diffusion process.”
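In rough terms, a denoising diffusion policy starts from random noise and repeatedly refines it into an action, conditioned on what the robot observes. The toy Python sketch below illustrates only that refinement loop: the hand-written `noise_predictor` stands in for TRI’s trained neural network, and the made-up relation (ideal action = half the observation) stands in for a demonstrated skill. The function names and numbers are illustrative assumptions, not details from TRI’s system.

```python
import random

def noise_predictor(x, obs):
    # Stand-in for a trained network. In a real Diffusion Policy this
    # would be a neural net conditioned on camera images and sensing;
    # here we use a toy oracle whose "ideal" action for observation
    # obs is 0.5 * obs (an invented relation, for illustration only).
    target = 0.5 * obs
    return x - target  # how far the current sample is from the target

def diffusion_policy(obs, steps=50, step_size=0.2, seed=0):
    # Reverse-diffusion loop: start from pure Gaussian noise and
    # iteratively subtract the predicted noise, conditioned on obs.
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)
    for _ in range(steps):
        x = x - step_size * noise_predictor(x, obs)
    return x
```

With this toy predictor, the loop converges toward the “demonstrated” action for whatever observation it is conditioned on; the real appeal of the approach is that the same denoising machinery handles high-dimensional, multimodal action distributions learned from demonstrations.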
According to TRI — which describes its goal as “using Toyota products to improve the quality of life for individuals and society” — this latest AI progression is a step towards building “large behaviour models” (LBMs) for robots, similar to the large language models, such as ChatGPT, that are transforming conversational AI.
The institute has already taught bots over 60 skills using this AI approach, including pouring liquids, using tools, and manipulating deformable objects.
According to Toyota, previous techniques to teach robots new behaviours were inefficient, slow, inconsistent, and often limited to simple tasks in constrained environments.
With the new approach, no new code is needed, Toyota claimed. The only change is supplying robots with new data. TRI plans to teach hundreds of new skills by the end of the year and 1,000 by the end of 2024.
“Our research in robotics is aimed at amplifying people rather than replacing them,” said Gill Pratt, CEO of TRI and chief scientist for Toyota Motor Corporation.
“This new teaching technique is both very efficient and produces very high performing behaviors, enabling robots to much more effectively amplify people in many ways.”
TRI added that these skills are not limited to just “pick and place”; the robots have been taught how to interact with the world.
This will, according to the institute, one day allow robots to support people in everyday situations and ever-changing environments.
“What is so exciting about this new approach is the rate and reliability with which we can add new skills,” added Russ Tedrake, VP of Robotics Research at TRI.
“These skills work directly from camera images and tactile sensing, using only learned representations. They are able to perform well even on tasks that involve deformable objects, cloth, and liquids — all of which have traditionally been extremely difficult for robots.”
As experts working in the field acknowledge, grasping objects of different sizes, shapes and textures may be easy for a human hand, but it is a major challenge for a robot.
While humans instinctively know how to handle an egg gently without breaking it, robots must be trained to apply just the right amount of force.
In April this year, we reported how researchers at the University of Cambridge developed a robotic hand that can grasp a range of objects using just the movements of its wrist and the sense of touch in its ‘skin’.