Physical AI is real
For decades, robotics was limited to rigid, rule-based machines confined to factory floors.
These systems ran hardcoded scripts for narrow tasks in structured environments, required manual reconfiguration for each new job, and had no ability to perceive, reason, or adapt.
Subsystems such as vision, planning, and control were siloed, with no unified architecture to support learning or reasoning.
Despite occasional breakthroughs in dynamic mobility - like Boston Dynamics’ Atlas performing jumps or backflips - robots remained incapable of understanding the world.
High-fidelity sensing technologies - tactile, proprioceptive, and force feedback - remained confined to academia.
Simulators lacked realism, and physical intelligence was not a design priority.
Robotics remained deep tech: expensive, brittle, and disconnected from broader AI progress.
Between 2013 and 2021, the field entered a so-called “robotics winter.”
Google’s acquisition of Boston Dynamics (2013) and SoftBank’s Pepper sparked excitement, but both failed to scale.
Meanwhile, AI advanced rapidly - with ImageNet, BERT, and transformers fueling breakthroughs in vision, NLP, and generative models.
Yet robotics failed to ride this wave. Physical intelligence remained fragmented.
There were:
- no scalable datasets
- no large pretrained models for robots
- no unifying abstractions
- only primitive simulation tools
Most critically, top AI talent remained focused on software, not embodiment.
This changed in 2022. The release of ChatGPT showcased the power of large language models to reason, generalize, and follow abstract human intent.
This triggered a shift in robotics: LLMs came to be seen as potential "brains" for physical machines, sparking the rise of what's now called ‘Physical AI’.
Projects like SayCan, RT-1, and PaLM-E demonstrated how LLMs and multimodal models could map vision and language directly to motor actions.
Simulation and generative video models enabled large-scale embodied training, allowing robots to learn in virtual environments.
Models began generalizing across physical tasks, and the idea of foundation models powering real-world robots became a reality.
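To make the SayCan-style recipe concrete, here is a minimal, purely illustrative sketch: a language model scores how useful each pre-trained skill is for a natural-language instruction, an affordance model scores how likely that skill is to succeed in the current scene, and the robot executes the highest-scoring skill. The SKILLS list and both scoring functions below are stand-ins, not any real model or project API.

```python
# SayCan-style skill selection, sketched with stand-in scoring functions.

SKILLS = ["pick up the sponge", "go to the sink", "wipe the table"]

def llm_usefulness(instruction: str, skill: str) -> float:
    """Stand-in for an LLM's score of how much `skill` helps with `instruction`."""
    return 0.9 if skill.split()[0] in instruction else 0.1

def affordance(skill: str) -> float:
    """Stand-in for a learned value function: can the robot do this here and now?"""
    return 0.8

def choose_skill(instruction: str) -> str:
    # Combine language usefulness with physical feasibility, pick the best skill.
    return max(SKILLS, key=lambda s: llm_usefulness(instruction, s) * affordance(s))

print(choose_skill("please pick up the sponge"))  # -> "pick up the sponge"
```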
By 2024, Physical AI had moved from research to deployment. Notable examples included:
→ Tesla Optimus began basic manipulation using onboard vision and proprietary neural nets
→ Figure 01 entered BMW factory pilots, demonstrating real-time pick-and-place
→ Meta’s ReSkin, PaXini’s DexH13, and joint encoders reached commercial viability
→ Electric actuators replaced hydraulics, boosting safety, compliance, and efficiency
Looking forward, robotics is moving towards 'general-purpose autonomy'.
What does that really mean?
Robots will operate in open, dynamic environments - planning, adapting, and executing long-horizon tasks with memory and reasoning systems.
Foundation models trained on massive sensorimotor datasets are becoming standard control layers.
But perhaps the most disruptive shift is the rise of Decentralized Physical AI (DePAI), where robots become sovereign, ownable agents integrated with crypto infrastructure and incentives.
DePAI could enable:
→ Tokenized labor: robots earn for tasks; value flows to users, developers, and trainers
→ DAO-based governance: robot fleets managed by decentralized communities
→ Proof-of-behavior: verifiable, auditable control logic for trusted autonomy
Communities can even train, verify, and earn from these robot training initiatives.
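To ground the proof-of-behavior idea above, here is a minimal, hypothetical sketch: the robot appends every control command to a hash-chained log, so anyone holding the latest digest can audit the full action history. The ActionLog class and its record method are illustrative assumptions, not part of any real DePAI protocol; anchoring the head digest on-chain is only noted in a comment.

```python
import hashlib
import json
import time

class ActionLog:
    """Hash-chained log of robot control commands (illustrative only)."""

    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis digest

    def record(self, command: dict) -> str:
        # Each entry commits to the previous digest, forming a tamper-evident chain.
        entry = {"ts": time.time(), "command": command, "prev": self.head}
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append((entry, digest))
        self.head = digest  # in a DePAI setting, this digest could be anchored on-chain
        return digest

# Usage: log two commands and print the auditable head digest.
log = ActionLog()
log.record({"joint": "elbow", "target_rad": 0.42})
log.record({"gripper": "close", "force_n": 5.0})
print(log.head)
```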
Some great builders in the space to follow are:
@peaq, @frodobots, @xmaquinaDAO, @PrismaXai
In short, robotics is undergoing its ChatGPT moment - not with a single product, but with an ecosystem-wide convergence.
The trajectory is clear:
rule-based machines → intelligent agents → decentralized, ownable robotics infrastructure
Physical AI is real.