LooperRobotics has released the Insight 9, a spatial AI camera designed to give robots depth perception and autonomous navigation capabilities without requiring cloud connectivity. The device, launched on March 22, 2026, processes spatial data on-device using a D-Robotics RDK X5 processor with the Sunrise 5 intelligent computing chip, delivering up to 10 TOPS (tera operations per second) of edge inference performance.
The Insight 9 combines three camera sensors — providing both depth mapping and 2D HD imaging — with a six-axis BMI088 inertial measurement unit for orientation tracking. This sensor fusion allows robots to construct real-time 3D maps of their environment, detect obstacles, and navigate autonomously using visual SLAM (simultaneous localization and mapping) algorithms running entirely on the camera’s onboard processor. The edge computing approach eliminates the latency and connectivity requirements of cloud-based spatial processing — constraints that matter most for robots operating in warehouses, construction sites, and outdoor environments with unreliable network access.
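To illustrate the kind of on-device spatial processing described above, the sketch below back-projects a depth image into 3D points and flags nearby obstacles — the first step of any depth-based perception pipeline. This is a generic pinhole-camera example, not Insight 9 code: the intrinsics (`FX`, `FY`, `CX`, `CY`) and range threshold are hypothetical placeholder values, and the actual camera exposes its own SDK and calibration.

```python
import numpy as np

# Hypothetical camera intrinsics for illustration only (not Insight 9 specs).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point in pixels

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Back-project a depth image (meters) into 3D points in the camera frame,
    using the standard pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def obstacle_mask(points: np.ndarray, max_range: float = 2.0) -> np.ndarray:
    """Flag points closer than max_range meters along the optical axis as obstacles."""
    return points[:, 2] < max_range
```

A SLAM system would feed points like these, together with IMU orientation estimates, into a mapping and pose-estimation loop; the value of doing this on the camera itself is that the robot's main computer never touches the raw depth stream.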
The device targets a growing market for embedded spatial intelligence in robotics. Most existing depth cameras — including Intel’s RealSense line and Stereolabs’ ZED cameras — provide raw depth data that must be processed by a separate computer. The Insight 9’s integrated processor handles both depth sensing and spatial reasoning on a single board, reducing the hardware complexity and power consumption of robot perception stacks. For robotics startups and research teams, this simplification can cut months from development timelines.
At 10 TOPS, the Insight 9’s compute capability is modest compared to NVIDIA’s Jetson Orin platform (up to 275 TOPS), but the combination of integrated sensors, edge processing, and a purpose-built form factor for robotic mounting makes it competitive for applications that prioritize power efficiency over raw performance. The camera is positioned for mobile robots, drones, and agricultural equipment where weight and power budgets are constrained.
The product enters a robotics perception market that has grown significantly as autonomous systems move from research labs to commercial deployment. Amazon’s warehouse robots, autonomous delivery vehicles from companies like Nuro and Starship, and agricultural automation platforms from John Deere all rely on spatial AI cameras for navigation. LooperRobotics is betting that an integrated, edge-processed solution at a lower price point than existing industrial options can capture the long tail of robotics applications that cannot justify the cost of an NVIDIA-based perception stack.
