The on-site AI datacenter for robot fleets.

Robot brains are growing faster than robot bodies. We're the missing infrastructure.

Backed by

NVIDIA Inception · Supermicro · DCX · Stanford HPC · Nebius

The robot autonomy ceiling

Three forces are converging to break today’s robot infrastructure.

More robots, interventions rise

A 50-robot site is a mini datacenter.

[Chart: fleet size grows from ~10 robots at pilot to ~200 by Year 3; intervention load enters a crisis zone around ~50 robots.]

New skills, throughput drops

Every new capability stacks more compute.

[Diagram: each new capability — real-time mapping, VLA inference, semantic planning, perception, safety — stacks onto the robot's onboard compute; most of it belongs on shared compute instead.]

More automation, more overhead

Duplicating GPUs per robot doesn't scale.

[Iceberg diagram: GPU hardware is the part you budget for; below the waterline sit procurement (3–6 mo), ops and infra team hires, cooling, power and space, maintenance and spares, and refresh cycles plus depreciation.] The GPU is the smallest part of the problem.

There is no purpose-built compute layer between the cloud and the robot.

That's the gap Nectar fills.

Box + Brain: The On-Site Intelligence Layer

One Box on-site runs the heavy AI for the whole fleet. Robots keep only safety-critical motion onboard.

The Nectar Box

THE BOX

27 ft³ Immersion-Cooled GPU Node

Deployed on-site. Runs multi-camera perception, semantic understanding, and planning for every robot on the floor.

THE BRAIN

Fleet Orchestration Software

Manages workload scheduling, thermal monitoring, and predictive maintenance across every node in your fleet.

THE FLYWHEEL

Inference by Day, Training by Night

Your robots run inference during shifts. After hours, the same GPUs fine-tune models on the day’s data — every cycle makes the fleet smarter.
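The flywheel can be sketched as a simple mode switch keyed to shift hours. This is an illustrative sketch, not Nectar's actual scheduler: the shift times and the "inference"/"training" mode names are assumptions for the example.

```python
from datetime import datetime, time

# Hypothetical shift window — real deployments would configure this per site.
SHIFT_START, SHIFT_END = time(6, 0), time(22, 0)

def gpu_mode(now: datetime) -> str:
    """Return which workload the on-site GPUs should run right now."""
    on_shift = SHIFT_START <= now.time() < SHIFT_END
    # During shifts the node serves robot perception/planning requests;
    # after hours the same GPUs fine-tune on the day's logged data.
    return "inference" if on_shift else "training"

print(gpu_mode(datetime(2024, 1, 1, 14, 0)))  # -> inference
print(gpu_mode(datetime(2024, 1, 1, 2, 0)))   # -> training
```

The same hardware does double duty: no idle GPUs overnight, and the fine-tuned model is back in service by the next shift.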

Before Nectar → After Nectar

Cloud-only, $3.50/hr per GPU → On-prem, $0.40/hr amortised
50–200 ms round-trip latency → <5 ms on-site inference
Data leaves the building → Data never leaves your site
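An amortised hourly rate comes from spreading total cost of ownership over useful GPU-hours. A sketch of that arithmetic, with made-up numbers — the capex, opex, lifetime, and utilisation figures below are illustrative assumptions, not Nectar's actual pricing:

```python
# Illustrative amortisation arithmetic — every number here is hypothetical.
capex = 250_000          # assumed node cost, USD
annual_opex = 30_000     # assumed power, cooling, maintenance, USD/yr
years = 5                # assumed useful life
gpus = 8                 # assumed GPUs per node
utilisation = 0.9        # fraction of hours doing useful work

total_cost = capex + annual_opex * years        # total cost of ownership
gpu_hours = gpus * years * 8760 * utilisation   # billable GPU-hours over the life
print(round(total_cost / gpu_hours, 2))         # -> 1.27 (USD per GPU-hour)
```

The lever is utilisation: because the flywheel keeps the GPUs busy day and night, the same capex divides over far more useful hours than an idle on-prem cluster would see.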

We are a software company disguised as a box. The hardware is the toll booth; the Brain is the recurring revenue.

Frequently Asked Questions

How does the Nectar Box differ from cloud GPU?


The Nectar Box sits on your factory floor, delivering sub-5 ms inference with zero data leaving your site. Cloud GPUs add 50-200 ms of round-trip latency and send proprietary production data off-premise.

What hardware powers the Nectar Box?


Each 27 ft³ immersion-cooled node packs NVIDIA H100 GPUs, high-bandwidth NVLink interconnects, and redundant power in a form factor that fits through a standard warehouse door.

Can I scale up after starting with one Box?


Absolutely. The Brain orchestration layer manages workload distribution across any number of Boxes. Add nodes as your fleet grows and the software handles scheduling, failover, and model versioning automatically.
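Scheduling with failover across a pool of Boxes can be sketched as round-robin over healthy nodes. This is a minimal sketch of the idea, not the Brain's actual implementation — the `BoxPool` class and node names are invented for illustration:

```python
from itertools import cycle

class BoxPool:
    """Toy round-robin scheduler that skips Boxes marked unhealthy."""

    def __init__(self, boxes):
        self.boxes = boxes
        self.healthy = {b: True for b in boxes}
        self._ring = cycle(boxes)

    def mark_down(self, box):
        self.healthy[box] = False

    def next_box(self):
        # Fail over past dead nodes; give up after one full lap.
        for _ in range(len(self.boxes)):
            box = next(self._ring)
            if self.healthy[box]:
                return box
        raise RuntimeError("no healthy Boxes")

pool = BoxPool(["box-a", "box-b", "box-c"])
pool.mark_down("box-b")
print([pool.next_box() for _ in range(4)])  # -> ['box-a', 'box-c', 'box-a', 'box-c']
```

Adding a node is just adding it to the pool; a real orchestrator would layer health probes, load awareness, and model-version pinning on top of this loop.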

Request a Demo

Train in the cloud. Decide at the edge.

Deploy in one click. No setup. No stress.


Get in touch

Have questions? We'd love to hear from you.

Request a demo