Today, four short stories from the world of robotics.
Humanoid robots that move and work at the same time.
Robots that learn through touch, vision, and language.
And systems that can finish full tasks without human help.
Let’s get into it.
TL;DR
Westwood Robotics showed a humanoid robot that can walk and manipulate objects at the same time.
Scientists found a new way to control shape-changing robots using physical interaction.
Microsoft Research connected vision, language, and touch into one robotics AI model.
Figure AI showed a humanoid robot finishing a full kitchen task on its own.
A humanoid that walks and works at once
Westwood Robotics has shown a big update to its humanoid robot, THEMIS Gen2.5.
This robot can walk and use its hands at the same time. That is the key point. It can manipulate objects while moving, not only when standing still.
The update comes with a new system called AOS, a special operating system for humanoid robots. It controls the full body, helps the robot understand where it is, and plans motion and tasks using vision.
Hardware also got a serious upgrade. The robot body is stronger and can handle more impact. The arms now have more joints and can lift over 5 kg each. New hip actuators give more torque and create much less heat.
In short: this robot is closer to working in real spaces, not just labs. And yes, it keeps moving while working.
Teaching robots by touch, not code
Scientists from the Swiss Federal Institute of Technology Lausanne (EPFL) have published a new way to control modular robots that can change their shape. The research appears in Nature Communications.
Modular robots are made of many pieces that can connect in different ways, so one robot can act like an arm, a legged walker, or even help people. But this flexibility makes control very hard. The new system lets a human operator guide the robot using a physical joystick-like interface that matches the robot’s current shape, so the instructions feel natural and safe.
The control platform adds smart checks so the robot won’t break itself or do unsafe moves. It was tested on different modular robots doing tasks like picking and placing objects, helping a person, walking, or reconfiguring itself.
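The paper does not spell out the checks here, but the idea of "won't break itself or do unsafe moves" can be sketched in a few lines. This is a hypothetical illustration (all names and numbers are made up): clamp an operator's commanded joint position to the joint's limits, and cap how far it may move in one control tick.

```python
def safe_command(target, current, lower, upper, max_step):
    """Clamp a commanded joint position to its limits and rate-limit the move."""
    clamped = max(lower, min(upper, target))                 # respect joint limits
    step = max(-max_step, min(max_step, clamped - current))  # cap per-tick motion
    return current + step

# Example: the operator asks for 2.0 rad, but the joint only allows
# [-1.5, 1.5] and may move at most 0.1 rad per control tick.
print(safe_command(2.0, 0.0, -1.5, 1.5, 0.1))  # 0.1
```

Real systems layer many more checks on top (self-collision, torque limits), but the principle is the same: the operator suggests, the safety layer decides.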
In short: this research could make adaptable robots easier to use in unpredictable real-world situations.
Robots that understand words and touch
Microsoft Research revealed a new robotics AI model called Rho-alpha. It’s designed so robots can understand simple language instructions and then use vision and touch to carry out real tasks.
Until now, robots worked best in clean, predictable places like factories. Rho-alpha aims to break out of those limits by linking what a robot sees, hears, and feels directly to its actions. The system turns everyday language into control signals for robots, especially those using two arms at once.
Touch sensing is a big part of this. With tactile input, a robot can adjust its grip if something slips or feels different than expected. To train Rho-alpha, Microsoft uses simulated data and reinforcement learning, so the model learns how actions connect to language and real-world physics.
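Microsoft has not published Rho-alpha's internals in this announcement, so here is only a toy sketch of the slip-then-tighten idea described above, with made-up function names and values: when the tactile sensor reports slip, increase grip force up to a safe cap.

```python
def adjust_grip(force, slip_detected, increment=0.5, max_force=10.0):
    """Increase grip force when the tactile sensor reports slip, up to a cap."""
    if slip_detected:
        force = min(force + increment, max_force)  # tighten, but never crush
    return force

# Simulate three control ticks: slip on the first two, then a stable hold.
force = 2.0
for slip in (True, True, False):
    force = adjust_grip(force, slip)
print(force)  # 3.0
```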
The first rollout is through a research access program, and updates are expected soon.
Figure shows what full autonomy really means
Figure AI announced Helix 02, a new AI system for humanoid robots that can complete full tasks from start to finish, without human help.
In a recent demo, the robot walked into a kitchen and handled everything on its own. It unloaded a dishwasher, moved around the room, stacked dishes into cabinets, and loaded the dishwasher again. All in one continuous run. No pauses. No resets.
Helix 02 connects vision, touch, balance, and motion into one control system. The robot does not think step by step. It thinks in goals and actions. Training included many hours of human motion data.
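Figure has not described how Helix 02 represents goals, but the difference between a fixed script and goal-driven control can be illustrated with a hypothetical sketch (all names are invented): instead of replaying steps in a rigid order, the loop keeps acting until every goal is actually satisfied.

```python
def run_until_done(goals, do_action):
    """Act on unmet goals until all are satisfied, rather than replaying a script."""
    done = set()
    while len(done) < len(goals):
        for goal in goals:
            if goal not in done:
                do_action(goal)   # one action toward this goal
                done.add(goal)    # simplification: assume the action succeeds
    return done

log = []
run_until_done(["unload dishwasher", "stack dishes", "reload dishwasher"],
               log.append)
print(log)
```

In a real robot, "assume the action succeeds" would be replaced by checking the world state, which is what lets the system recover when something goes wrong mid-task.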
In short: this is not a single trick. It is a full task, done end to end, by a humanoid robot.
These robots are not ideas anymore. They walk, see, feel, and act in real spaces: homes, labs, and places built for humans. Step by step, robots are leaving demos and entering daily life. And this shift is happening faster than many expected.
Cheers, Jacek


