
Menlo Platform

Menlo Platform lets developers build a robot labor force, turning software into physical labor.

It is a full-stack software platform that handles orchestration, telemetry, and safety: everything between an AI agent and physical execution.

  • Robot API — A simple API to orchestrate robots, receive data, and use skills
  • Asimov API — Direct control over Asimov hardware and actuators
  • Uranus — An environment and robot simulation engine for testing and validation
  • Cyclotron — A locomotion training pipeline to close the sim2real gap

The platform is natively integrated with Asimov hardware and provides a closed deployment loop: define an agent or workflow, deploy it to a robot, collect telemetry, and iterate, compressing development cycles from weeks to minutes.


Software documentation is available here: https://docs.menlo.ai/

Why Platform

At its core, Platform provides an agent abstraction layer—a standardized interface between AI agents and humanoid hardware:

  • Agent-to-hardware translation — Agents express high-level intentions (navigate to location, manipulate object, respond to human) and Platform translates these into coordinated physical action
  • Sensor fusion — Depth cameras, force-torque sensors, and IMUs feed into a unified perception layer
  • Safety enforcement — Hard boundaries prevent actions that could damage hardware or harm humans
  • Telemetry collection — Real-time performance data streams back to Platform

Platform is not a motor abstraction layer. It does not expose motor controllers to agents. Instead, it presents an abstraction where agents express intent and the Platform handles the physical execution.
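A minimal sketch of what such an intent layer could look like. None of these names come from the actual Robot API; `Intent` and `PlatformStub` are invented purely to illustrate agents expressing intent while the platform owns physical execution:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """A high-level request from an agent, e.g. navigate or manipulate."""
    action: str
    target: str

class PlatformStub:
    """Maps intents to named low-level routines without ever exposing
    motor controllers to the agent (hypothetical, for illustration)."""

    ROUTINES = {
        "navigate": "plan_path_and_walk",
        "manipulate": "grasp_with_force_limits",
        "respond": "orient_and_speak",
    }

    def execute(self, intent: Intent) -> str:
        routine = self.ROUTINES.get(intent.action)
        if routine is None:
            raise ValueError(f"unsupported intent: {intent.action}")
        # A real platform would dispatch to perception and control here.
        return f"{routine}({intent.target})"

platform = PlatformStub()
print(platform.execute(Intent("navigate", "charging_dock")))
# → plan_path_and_walk(charging_dock)
```

The point of the shape: the agent never sees joint angles or motor currents, only the vocabulary of intents the platform agrees to translate.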

Design Principles

Programmable at Every Layer

The software gives you control at every layer:

  1. A simple CLI to embody an AI agent
  2. Manual controls and human teleoperation
  3. Swappable models and policies for locomotion, perception, reasoning
  4. Direct control over motors and sensor fusion via Asimov API

Agent Native

Traditional robotics treats autonomy as a tightly engineered program. Menlo treats autonomy as an agent payload:

  • packaged,
  • permissioned,
  • constrained by safety envelopes,
  • deployed with rollbacks and versioning,
  • observable through operational telemetry.

This is a software-native approach to embodied systems. The core idea is that robustness is achieved through iteration, and iteration can only be fast if deployment is standardized.

Packaging

Agents are packaged as deployable payloads, not custom integrations. An agent developed in any standard framework can be deployed to compliant humanoid hardware without modification.
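As a rough illustration of what a deployable payload might carry, here is a hypothetical manifest. Every field name and value is an assumption; the Platform's actual packaging schema is not documented here:

```python
import json

# Hypothetical agent payload manifest: identity, runtime, permissions,
# and a safety envelope travel together with the agent.
manifest = {
    "agent": {"name": "warehouse-picker", "version": "1.2.0"},
    "runtime": {"framework": "any-standard-agent-framework"},
    "permissions": ["navigate", "manipulate"],
    "safety_envelope": {"max_velocity_mps": 1.5, "max_force_n": 50.0},
}

print(json.dumps(manifest, indent=2))
```

Bundling permissions and safety limits into the payload itself is what makes the same package deployable to any compliant robot without a custom integration.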

Permissioning

Agents operate within defined safety envelopes. Permissioning ensures agents cannot exceed safe operational boundaries, protecting hardware and humans alike.

Safety Envelopes

Every agent deployment includes:

  • Velocity and acceleration limits
  • Workspace boundaries
  • Force and torque constraints
  • Emergency stop integration
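The limits above can be thought of as a clamp applied to every command. A minimal sketch, assuming illustrative field names rather than the Platform's real schema:

```python
from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    """Hypothetical envelope: clamps commands to configured limits."""
    max_velocity_mps: float = 1.5      # velocity limit
    max_force_n: float = 50.0          # force constraint
    workspace_radius_m: float = 3.0    # workspace boundary

    def clamp_velocity(self, v: float) -> float:
        # Out-of-range commands are reduced, never passed through.
        return max(-self.max_velocity_mps, min(v, self.max_velocity_mps))

    def in_workspace(self, x: float, y: float) -> bool:
        return (x * x + y * y) ** 0.5 <= self.workspace_radius_m

env = SafetyEnvelope()
print(env.clamp_velocity(2.4))   # 1.5, command reduced to the limit
print(env.in_workspace(4.0, 0))  # False, outside the boundary
```

Because the envelope sits between agent and actuators, even a misbehaving agent cannot command motion outside these bounds.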

Versioning and Rollbacks

Agent versions are tracked throughout deployment. If an issue arises, rollbacks restore previous versions instantly—no reengineering required.
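A hedged sketch of how tracked versions make rollback instant. The `AgentRegistry` class is invented for illustration and is not part of the Platform API:

```python
class AgentRegistry:
    """Keeps an append-only deployment history; rolling back just
    re-activates the previous version (illustrative only)."""

    def __init__(self):
        self.history = []          # every version deployed, in order

    def deploy(self, version: str):
        self.history.append(version)

    @property
    def active(self) -> str:
        return self.history[-1]

    def rollback(self):
        if len(self.history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        # No re-engineering: the old payload is simply re-deployed.
        self.history.append(self.history[-2])

reg = AgentRegistry()
reg.deploy("v1.0")
reg.deploy("v1.1")   # regression discovered here
reg.rollback()
print(reg.active)    # v1.0
```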

Observability

Operational telemetry captures agent behavior, outcomes, and edge cases. This data feeds back into Uranus and Cyclotron for continuous improvement.

Deployment Loop

Menlo Platform enables a tight deployment loop:

  1. Design agentic robots in a simple framework
  2. Validate in Uranus simulations
  3. Refine skills in Cyclotron training if needed
  4. Deploy to a physical Asimov robot
  5. Capture telemetry for improvement
  6. Iterate and redeploy
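The six steps above can be sketched as a single loop. Every function here is a placeholder stub; none correspond to real Platform, Uranus, or Cyclotron calls:

```python
def simulate(agent):            # step 2: validate in simulation (stub)
    return agent["skill_score"] >= 0.9

def train(agent):               # step 3: refine skills (stub)
    agent["skill_score"] = min(1.0, agent["skill_score"] + 0.1)

def deploy_and_collect(agent):  # steps 4-5: deploy, capture telemetry (stub)
    return {"success_rate": agent["skill_score"]}

agent = {"name": "picker", "skill_score": 0.75}   # step 1: design
while not simulate(agent):                        # step 2: validate
    train(agent)                                  # step 3: refine
telemetry = deploy_and_collect(agent)             # steps 4-5
print(round(telemetry["success_rate"], 2))        # step 6: iterate on this
```

The structure is the point: training happens only when simulation fails, and telemetry from real deployment is what feeds the next iteration.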