Menlo Stack
Menlo Stack lets developers embody AI agents in humanoid robots, turning software into physical labor. It consists of five interconnected components that together enable a humanoid labor force:
- Agent SDK — Deploy agent payloads to compliant humanoids with safety guarantees
- Edge API — Interface for low-level robot control and telemetry
- Uranus — World simulator and digital twin engine for scenario testing
- Cyclotron — Sim-to-real motor control pipeline for locomotion and manipulation
- Data Engine — Telemetry and continuous improvement system
You get a closed deployment loop from agent definition to physical execution and back, compressing iteration from weeks to minutes.
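The closed loop above can be sketched in a few lines. This is a hypothetical illustration, not the real Agent SDK: the names `AgentPayload`, `Telemetry`, and `run_iteration` are invented to show the shape of a define → execute → feed-back cycle.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the deploy -> execute -> telemetry -> update loop.
# AgentPayload, Telemetry, and run_iteration are illustrative names only.

@dataclass
class Telemetry:
    success: bool
    latency_ms: float

@dataclass
class AgentPayload:
    name: str
    version: int = 1
    history: list = field(default_factory=list)

    def improve(self, telemetry: Telemetry) -> None:
        # Fold execution feedback back into the agent definition.
        self.history.append(telemetry)
        if not telemetry.success:
            self.version += 1  # a failed run triggers a revised deployment

def run_iteration(agent: AgentPayload, success: bool) -> AgentPayload:
    # One pass through the loop: execute, collect telemetry, improve.
    telemetry = Telemetry(success=success, latency_ms=12.5)
    agent.improve(telemetry)
    return agent

agent = AgentPayload(name="warehouse-picker")
run_iteration(agent, success=False)
print(agent.version)  # a failed run bumps the version for redeployment
```

Because every execution feeds telemetry back into the agent definition, each iteration of this loop is a candidate redeployment rather than a manual release cycle.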
Software documentation is available here: https://docs.menlo.ai/
Agent Native
The software is designed for agent-native robotics:
- Real-time kernel — Handles low-latency reflex loops for balance and manipulation
- Cloud connectivity — High-level reasoning happens in the platform; the robot handles execution
- Standardized interfaces — Compatible with any humanoid that meets hardware requirements
- Open API — Agent developers interact through well-documented interfaces, not motor controllers
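To make the "interfaces, not motor controllers" point concrete, here is a minimal sketch of what an agent-facing intent call could look like. The `Intent` enum and `submit_intent` function are assumptions for illustration, not the documented Menlo API.

```python
from enum import Enum, auto

# Hypothetical agent-facing intent interface; names are illustrative only.

class Intent(Enum):
    NAVIGATE = auto()
    MANIPULATE = auto()
    RESPOND = auto()

# Each intent declares the parameters it needs (illustrative).
REQUIRED_PARAMS = {
    Intent.NAVIGATE: {"target"},
    Intent.MANIPULATE: {"object_id"},
    Intent.RESPOND: {"utterance"},
}

def submit_intent(intent: Intent, **params) -> dict:
    """Validate a high-level intent and return an acknowledgment.

    The agent never touches motor controllers; the robot's runtime is
    responsible for translating the intent into coordinated motion.
    """
    missing = REQUIRED_PARAMS[intent] - params.keys()
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    return {"intent": intent.name, "params": params, "accepted": True}

ack = submit_intent(Intent.NAVIGATE, target=(3.0, 1.5))
print(ack["accepted"])  # True
```

The design point is the separation of concerns: the agent expresses *what* to do with validated parameters, and the on-robot runtime owns *how* it is executed.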
Programmable at every layer
The software is open at every layer:
- Agent-to-hardware translation — Agents express high-level intentions (navigate to location, manipulate object, respond to human) to coordinate physical action
- Sensor fusion — Depth cameras, force-torque sensors, and IMUs feed into a unified perception layer
- Motor control — Access to low-level motor commands for advanced use cases
- Safety enforcement — Hard boundaries prevent actions that could damage hardware or harm humans
- Telemetry collection — Real-time performance data streams
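As one concrete example of a hard safety boundary from the list above, commanded joint velocities can be clamped before they ever reach the actuators. The joint names and limit values below are illustrative assumptions, not real hardware parameters.

```python
# Hypothetical sketch of a hard safety boundary: clamp commanded joint
# velocities to per-joint limits before they reach the actuators.
# Joint names and limit values (rad/s) are illustrative only.

JOINT_VELOCITY_LIMITS = {
    "knee": 2.0,
    "elbow": 3.0,
}

def enforce_limits(command: dict) -> dict:
    """Return a command with every joint velocity clamped into its safe range."""
    safe = {}
    for joint, velocity in command.items():
        limit = JOINT_VELOCITY_LIMITS[joint]
        safe[joint] = max(-limit, min(limit, velocity))
    return safe

print(enforce_limits({"knee": 5.0, "elbow": -1.0}))
# the knee command is clamped to the 2.0 rad/s limit; the elbow passes through
```

Enforcing limits at this layer means an unsafe command from any higher layer, including a buggy agent, is bounded by construction rather than by convention.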