
Google Introduces Local-Run Gemini Model for Robotics Applications


Google DeepMind has introduced Gemini Robotics On-Device, a vision-language-action model that enables robots to perform tasks locally, without an internet connection. It builds on the Gemini Robotics model launched in March, allowing developers to fine-tune it for their own applications and to control robot movements with natural-language prompts.

According to Google, the on-device model performs close to the level of the cloud-based version and outperforms other, unnamed on-device models in benchmarks. Demonstrations showed robots unzipping bags and folding clothes. Although trained on ALOHA robots, the model has been adapted to other platforms, including the bi-arm Franka FR3 and the Apollo humanoid, where it handled unfamiliar tasks such as assembly on an industrial belt.

Additionally, Google DeepMind plans to release a Gemini Robotics SDK, which will let developers adapt the model to new tasks through demonstrations and test it in the MuJoCo physics simulator (a minimal MuJoCo sketch follows below). Other AI developers, such as Nvidia and Hugging Face, are also working on foundation models for robotics.
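Google has not published details of the SDK's interface, but MuJoCo itself is an open-source physics engine with public Python bindings. As a rough, hypothetical illustration of the kind of simulation loop a developer might run when working with such an environment, the sketch below loads a tiny scene and steps the physics forward; the XML scene and code use only the `mujoco` package and have no connection to the Gemini Robotics SDK.

```python
# Minimal MuJoCo example: load a tiny scene and step the physics forward.
# Uses only the open-source `mujoco` Python bindings (pip install mujoco);
# the scene and loop are illustrative and not part of any Google SDK.
import mujoco

SCENE_XML = """
<mujoco>
  <worldbody>
    <light pos="0 0 2"/>
    <geom type="plane" size="1 1 0.1"/>
    <body name="cube" pos="0 0 0.5">
      <joint type="free"/>
      <geom type="box" size="0.05 0.05 0.05" mass="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(SCENE_XML)
data = mujoco.MjData(model)

# Step the simulation for one second of simulated time.
steps = int(1.0 / model.opt.timestep)
for _ in range(steps):
    mujoco.mj_step(model, data)

# Print where the falling cube ended up.
print("cube position after 1s:", data.body("cube").xpos)
```

In practice, an SDK for demonstration-based training would layer data collection and model adaptation on top of a simulation loop like this, but those interfaces are not described in the article.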

