

Figure robot shows OpenAI-integrated conversation skills




Figure Robotics has shared a demonstration video of the Figure 1 robot's conversation capabilities after integrating OpenAI's speech reasoning.

There are two key operations demonstrated in the video:

  • Speech-to-Speech Reasoning
  • End-to-End Neural Networks

Two weeks ago, OpenAI announced an investment in Figure AI, and the two firms entered into a strategic partnership.

OpenAI also announced that it would provide AI training and large language model (LLM) support for Figure under the newly formed partnership.


The video shows the Figure 1 robot holding a conversation with a demonstrator, driven by end-to-end neural networks. Brett Adcock, CEO of Figure, said: “There is no teleop and it was filmed at 1.0x speed and shot continuously”.


Figure Robot (Image Credit: Figure)

He explained that the robot is performing fast actions and that the company is using human-like movement as its benchmark.

The robot's onboard cameras capture a video feed that is processed by a large vision-language model (VLM) trained by OpenAI. According to Figure, the neural networks ingest camera images at 10 Hz and output actions across 24 degrees of freedom at 200 Hz.
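As a rough illustration of this two-rate pipeline, the sketch below simulates a slow perception stage (10 Hz) feeding a fast control stage (200 Hz). All names and structure here are hypothetical; Figure has not published implementation details, so this only shows the timing relationship described above.

```python
VISION_HZ = 10    # camera frames processed by the VLM per second
CONTROL_HZ = 200  # action vectors emitted per second
DOF = 24          # degrees of freedom in each action vector

def run_loop(duration_s, perceive, policy):
    """Simulate a two-rate loop: each camera frame is reused for
    several control ticks until the next frame arrives."""
    control_steps = int(duration_s * CONTROL_HZ)
    ticks_per_frame = CONTROL_HZ // VISION_HZ  # 20 control ticks per image
    latest_observation = None
    actions = []
    for step in range(control_steps):
        if step % ticks_per_frame == 0:
            latest_observation = perceive(step)   # new frame at 10 Hz
        actions.append(policy(latest_observation))  # action at 200 Hz
    return actions

# Dummy stand-ins: perception returns the step index, the policy
# emits a zero vector with one entry per degree of freedom.
actions = run_loop(1.0, perceive=lambda step: step,
                   policy=lambda obs: [0.0] * DOF)
```

Over one simulated second the loop produces 200 action vectors of length 24, with each camera frame shared across 20 consecutive control ticks.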

Another thing to note is that the hand operations have improved in this video compared with past demonstrations. This is substantial progress for Figure.

Still, the addition of an end-to-end neural network lets Figure experiment with a robot-to-human conversational interface.


