On-Premise AI Infrastructure
Deploy Anywhere

Comino Mobile Data Center — turnkey liquid-cooled AI infrastructure

✓ One partner
✓ One point of entry
✓ One warranty
Three Solutions

Choose the right size for your use case

From a single rack module to a full 40ft container — same liquid cooling architecture, scaled to your needs

Micro DC
S — Micro Module

Self-contained micro module. The fastest path to on-prem AI capacity for a first deployment or proof of concept.

1-2 racks
50-100 kW
Direct liquid cooling

Ideal for: First deployments, PoC, sites without existing infrastructure

Flexible
Small Container
M — Compact Container

Compact 20ft container with drycooler on top. Self-contained unit for campus deployments.

3-4 racks
100-200 kW
Drycooler on top

Ideal for: Corporate AI clusters, research labs, mid-size deployments

Custom projects
40ft Data Center
L — Full Container

Full-size container with remote drycoolers. Production-grade AI/HPC facility.

6-10+ racks
300-500+ kW
Remote cooling

Ideal for: Large enterprise, HPC centers, major installations

Why Containerized

Key advantages of mobile AI infrastructure

📍 Deploy anywhere
💧 Direct liquid cooling
⏱ Lead time from 3 months
Applications

Built for AI inference workloads

Deploy at corporate sites or remote locations

🤖 Agentic AI
📚 RAG
👁 Computer Vision
📊 Predictive Analytics
🎤 Speech AI
📄 Document AI

Also: Fine-tuning and other AI workloads

♻️ Heat Recuperation & Sustainability

Option to reuse waste heat for building heating via plate heat-exchanger. Direct liquid cooling enables PUE below 1.1 — significantly more efficient than traditional air-cooled facilities.

<1.1
Typical PUE
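
For reference, PUE (power usage effectiveness) is total facility power divided by IT equipment power. A minimal worked example in Python, with hypothetical load and overhead numbers, shows how direct liquid cooling keeps the ratio under 1.1:

```python
# PUE = total facility power / IT equipment power.
# Numbers are hypothetical, chosen only to illustrate the arithmetic.
it_load_kw = 500.0   # IT load, e.g. a fully populated 40ft container
overhead_kw = 45.0   # pumps, drycooler fans, power-conversion losses
pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")  # PUE = 1.09, i.e. under the 1.1 target
```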
Technical

Reference specifications

Container format: Micro module / 20ft / 40ft HC ISO
Rack capacity: 1-2 / 3-4 / 6-10+ racks
IT load range: 50-500+ kW
Cooling: Direct liquid + CDU + drycooler
Typical PUE: <1.1
Heat recovery: Optional plate heat-exchanger
Power input: Medium/low voltage, tailored to the local grid
GPU stack: RTX Pro 6000, H200, B200, B300, GB300
Fire suppression: Clean-agent system (optional)
Monitoring: Integrated telemetry with remote access
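
As an illustration of what integrated telemetry typically builds on, here is a minimal Python polling sketch. It assumes NVIDIA GPUs with the standard nvidia-smi CLI on the host; a production agent would ship samples to a time-series database instead of printing them:

```python
import subprocess
import time

def read_gpu_telemetry():
    """Return one (temp_c, power_w, util_pct) tuple per GPU."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,power.draw,utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [tuple(float(v) for v in line.split(", "))
            for line in out.strip().splitlines()]

if __name__ == "__main__":
    while True:
        for i, (temp, power, util) in enumerate(read_gpu_telemetry()):
            print(f"GPU{i}: {temp:.0f} C, {power:.0f} W, {util:.0f}% util")
        time.sleep(10)  # poll interval; tune to your monitoring stack
```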

Looking for an out-of-the-box server solution?

Comino GRANDO — liquid-cooled multi-GPU servers, ready to deploy in your container infrastructure.
Explore GRANDO →
FAQ

Frequently Asked Questions

Which size should I choose?
Start with your workload and site constraints. Micro DC (S) is ideal for first deployments, PoC, or sites with no existing infrastructure. Small container (M) works well for mid-size corporate clusters. Full 40ft (L) is for large-scale enterprise AI/HPC. The sketch below shows the idea in code.
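
A hypothetical sizing helper, based only on the IT-load ranges quoted on this page; a real recommendation also weighs site constraints, GPU choice and growth plans:

```python
# Illustrative only: thresholds come from the published per-tier kW ranges.
def recommend_tier(it_load_kw: float) -> str:
    if it_load_kw <= 100:
        return "S - Micro DC (1-2 racks, 50-100 kW)"
    if it_load_kw <= 200:
        return "M - Compact Container (3-4 racks, 100-200 kW)"
    return "L - Full 40ft Container (6-10+ racks, 300-500+ kW)"

print(recommend_tier(150))  # M - Compact Container (3-4 racks, 100-200 kW)
```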

What does "one partner" mean in practice?
Comino acts as your main contractor. We coordinate compute, cooling and engineering partners under a single SLA, so you always have one point of contact for uptime, service and warranty questions.

Which GPUs do you support?
We support the full range of NVIDIA accelerators: RTX Pro 6000 for inference workloads, H200 for current-gen AI, and the latest Blackwell generation — B200, B300, and GB300 NVL. We also work with other GPU platforms on request.

Can I order now, and what is the lead time?
Yes — we are already accepting orders. Lead time starts from 3 months depending on configuration and GPU availability. Contact us to discuss your timeline and lock in a production slot.

Why containerized instead of a traditional build?
Traditional data center builds take 12–18 months from planning to go-live. In AI, that delay means competitors ship products while you're still waiting for infrastructure. Our containerized approach compresses that timeline dramatically: you can deploy production AI capacity in months, not years.

Who do I call when something breaks?
You contact Comino. We triage, dispatch and resolve, whether it's a compute issue, a cooling question or container infrastructure. The SLA defines response times and escalation paths. One point of contact for everything.

Can waste heat be reused?
Yes. We offer optional plate heat-exchanger integration that directs waste heat from GPU cooling into building heating circuits. This improves overall energy efficiency and reduces operating costs. A back-of-envelope sketch follows below.
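
A back-of-envelope estimate of the recoverable heat, in Python; the capture fraction is hypothetical and depends on the heat-exchanger design and building-loop temperatures:

```python
it_load_kw = 200.0       # e.g. an M-size container at full load
capture_fraction = 0.7   # hypothetical share of IT heat sent to the building loop
recoverable_kw = it_load_kw * capture_fraction
print(f"~{recoverable_kw:.0f} kW thermal available for building heating")
```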

Ready to deploy AI infrastructure?

Tell us about your workload and site — we'll recommend the right size and configuration.

Request Project Call →