New
Desktop form factor DGX Spark

NVIDIA DGX Spark — data center class AI at your desk

A compact personal AI supercomputer powered by the NVIDIA Grace Blackwell architecture — your gateway to the Comino AI ecosystem.

Up to 1 petaFLOP of AI performance (FP4)
128 GB unified system memory
ConnectX‑7 Smart NIC, 10 GbE, Wi‑Fi 7
Scales to Comino Grando when you need more power
AI performance
Up to 1000 AI TOPS
Memory
128 GB LPDDR5x
Form factor
150 × 150 × 50.5 mm

Suitable for working with models of up to approximately 200 billion parameters locally, with the option to scale to around 405 billion parameters by linking two systems.
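As a rough sanity check, the memory math behind these figures can be sketched in a few lines of Python. The assumptions here are illustrative, not official: 0.5 bytes per parameter at FP4 and a flat ~20% overhead for KV cache and activations; real footprints vary by runtime, quantization scheme and context length.

```python
# Back-of-the-envelope check: do a model's FP4 weights fit in unified memory?
# Assumptions (illustrative only): 0.5 bytes per parameter at FP4 plus ~20%
# overhead for KV cache and activations.
def fits_in_memory(params_billions, memory_gb, bytes_per_param=0.5, overhead=1.2):
    """Return True if a model's weights plus rough overhead fit in memory_gb."""
    weights_gb = params_billions * bytes_per_param  # 1e9 params x bytes / 1e9 bytes per GB
    return weights_gb * overhead <= memory_gb

# ~200B parameters on a single 128 GB DGX Spark:
print(fits_in_memory(200, 128))   # True  (~120 GB with overhead)
# ~405B parameters across two linked systems (256 GB combined):
print(fits_in_memory(405, 256))   # True  (~243 GB with overhead)
```

Under these assumptions a ~200B-parameter model needs roughly 120 GB and fits in one system, while ~405B parameters (~243 GB) require the combined 256 GB of two linked systems.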

DGX Spark — front view
NVIDIA DGX Spark desktop AI supercomputer
GB10 Superchip with NVIDIA Grace Blackwell architecture
4 TB NVMe M.2 with self‑encrypting storage
4× USB Type‑C, HDMI 2.1a, 10 GbE
Product overview

NVIDIA DGX Spark — compact AI supercomputer for developers

DGX Spark combines NVIDIA Grace Blackwell architecture, modern networking and the NVIDIA AI software stack to bring large language models, generative AI and classic ML workloads to the desktop.

  • Personal AI supercomputer. Data center class performance in a 1.2 kg desktop device.
  • 128 GB unified memory. Work with models of around 200 billion parameters without building a complex distributed infrastructure.
  • Deep integration with NVIDIA AI. Full stack of frameworks, libraries and pretrained models for generative AI and classic ML workloads.
  • Networking for scale. ConnectX‑7 Smart NIC and 10 GbE enable pairing two DGX Spark systems and scaling to models of around 405 billion parameters.
  • Compact footprint. Fits naturally into a developer or researcher desk setup without sacrificing performance.

Based on NVIDIA, Novatech, PNY and other partner product pages, with a focus on performance, compactness and a ready‑to‑run experience.

Target audience
AI development teams, research labs and data scientists who need fast experimentation cycles close to where they work.

Developers and researchers
Prototyping LLMs, generative models and hyperparameter search workflows.

Small and mid‑size teams
Dedicated AI compute without deploying a full GPU cluster.

Grace Blackwell
DGX OS
NVIDIA AI Enterprise
Key features

Built for modern AI workloads

Concise feature cards covering architecture, memory, networking and the software stack.

NVIDIA GB10 Superchip

NVIDIA Grace Blackwell architecture delivers up to 1 petaFLOP of AI performance at FP4 precision, ideal for generative models and high‑throughput inference workloads.

128 GB unified memory

A single unified system memory space simplifies working with large models and datasets, reducing overhead from moving data between CPU and GPU.

High‑performance networking

Integrated ConnectX‑7 Smart NIC and 10 GbE provide the bandwidth needed to link two DGX Spark systems and scale to models of around 405 billion parameters.

Compact form factor

A 150 × 150 × 50.5 mm chassis weighing around 1.2 kg allows DGX Spark to sit on the desk while still delivering data center class compute.

End‑to‑end AI software stack

NVIDIA DGX OS and the NVIDIA AI software stack provide frameworks, libraries and tools to shorten the time from unboxing to running the first experiment.

Ready to scale

DGX Spark integrates with existing Comino server and rack infrastructure, providing a seamless path from prototype on the desk to full‑scale production clusters.

Technical specifications

NVIDIA DGX Spark specification

The table covers architecture, memory, storage, networking, connectivity and physical dimensions.

Parameter
Value
Architecture
NVIDIA Grace Blackwell, GB10 Superchip
CPU
20‑core Arm (10× Cortex‑X925 + 10× Cortex‑A725)
GPU
NVIDIA Blackwell architecture GPU
CUDA Cores
NVIDIA Blackwell generation
Tensor Cores
5th generation Tensor Cores
RT Cores
4th generation RT Cores
AI performance
Up to 1000 AI TOPS (depending on mode and precision)
System memory
128 GB LPDDR5x unified system memory
Memory interface
256‑bit
Memory bandwidth
Up to 273 GB/s
Storage
Up to 4 TB NVMe M.2 with self‑encrypting capabilities
USB
4× USB Type‑C (one power input, three user ports)
Ethernet
1× RJ‑45, 10 GbE
Network adapter
NVIDIA ConnectX‑7 Smart NIC
Wi‑Fi
Wi‑Fi 7
Bluetooth
Bluetooth 5.3 with Low Energy support
Display output
1× HDMI 2.1a
NVENC / NVDEC
1× NVENC / 1× NVDEC
Operating system
NVIDIA DGX OS
Dimensions
150 mm (L) × 150 mm (W) × 50.5 mm (H)
Weight
Approximately 1.2 kg

Values are based on public partner descriptions (Novatech, PNY and others) and should be cross‑checked against the official NVIDIA technical documentation.
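For intuition on what the 273 GB/s memory bandwidth means for local LLM inference, a common back‑of‑the‑envelope model treats single‑stream token generation as bandwidth‑bound: each generated token requires reading every weight once. The sketch below applies that model with an assumed 0.5 bytes per parameter at FP4; it yields an upper bound, not a benchmark, and the 70B model size is a hypothetical example.

```python
# Bandwidth-bound upper bound on decode speed for single-stream inference:
# tokens/s <= memory bandwidth / model size (every weight read once per token).
# Assumption (illustrative): 0.5 bytes per parameter at FP4.
def max_tokens_per_second(params_billions, bandwidth_gb_s, bytes_per_param=0.5):
    """Upper-bound tokens/s for a bandwidth-bound decode loop."""
    weights_gb = params_billions * bytes_per_param  # model weights in GB
    return bandwidth_gb_s / weights_gb

# A hypothetical ~70B-parameter FP4 model at 273 GB/s:
print(round(max_tokens_per_second(70, 273), 1))  # 7.8 tokens/s upper bound
```

Real throughput will be lower due to compute, scheduling and cache effects, and batched or speculative decoding changes the picture entirely; the estimate is only a first‑order guide.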

Ideal for prototyping LLMs and generative models
Optimized for developers, researchers and data scientists
System connections

Ports and connectivity

A clear explanation of each port, paired with a visual connection diagram.

  • Network RJ‑45, 10 GbE. High‑speed network connectivity for data transfer, remote access and linking systems together.
  • ConnectX‑7 Smart NIC. Data center grade network adapter designed for high‑performance computing and scalable AI workloads.
  • HDMI 2.1a. Display output with support for high resolutions and refresh rates, convenient for visualizing experiment results.
  • 4× USB Type‑C. One dedicated power input and three user ports for peripherals, docking stations and high‑speed external storage.

Each connector is paired with a clear usage scenario to help engineers plan how DGX Spark will be integrated into their environment.

Diagram of NVIDIA DGX Spark system connections and ports based on partner materials

Connection layout based on PNY materials: placement of ports, network interfaces and input/output connectors.

Use cases and benefits

Who NVIDIA DGX Spark is for

Based on NVIDIA official pages, Marketplace and partner sites, these are the key usage scenarios and benefits for teams working with AI.

AI model development and testing

DGX Spark is well suited for training and fine‑tuning machine learning models, including large language and multimodal models, without having to provision a separate GPU cluster.

Research and experimentation

Research groups gain a dedicated resource for fast hypothesis testing, exploring new model architectures and running generative AI experiments directly at the desk.

Engineering and product teams

Engineering and product teams can run AI service prototypes and validate integrations and usage scenarios before moving workloads into Comino‑based production infrastructure.

Why teams choose DGX Spark

  • Data center class performance in a compact desktop form factor.
  • Ready‑to‑run solution with a preconfigured NVIDIA AI software stack.
  • Smooth scaling path from a single device to larger clusters.
  • Shorter experiment cycles: from idea to prototype on a single workstation.
Gallery

NVIDIA DGX Spark visuals

A curated set of angles and context images based on PNY, AMAX, ServerICT and other partner materials.

FAQ

Frequently Asked Questions

What workloads is DGX Spark designed for?

DGX Spark is ideal for development and inference of large model workloads: language models, generative models, computer vision and classic machine learning tasks. It is especially valuable at the prototyping stage, for hypothesis testing and local experimentation.

Can multiple DGX Spark systems be combined?

Using the ConnectX‑7 Smart NIC and 10 GbE, two DGX Spark systems can be linked together, extending the effective model size and the types of workloads you can run. For further scaling, Comino server solutions and NVIDIA‑based clusters are recommended.

What software ships with DGX Spark?

DGX Spark ships with NVIDIA DGX OS and access to the NVIDIA AI software stack, including frameworks, libraries and tools for generative AI and classic machine learning. The exact bundle may vary by region and configuration, so please confirm with your Comino representative.

How does DGX Spark fit into the Comino ecosystem?

DGX Spark complements Comino rack and server solutions: local experiments and prototyping run on the desktop system, while final training and production workloads move to a scalable cluster built on Comino hardware.

What if my team's workloads outgrow a single DGX Spark?

In this case DGX Spark is still a great environment for research and early prototyping, but production‑scale training and multi‑team workloads are better handled by larger systems. Comino Grando workstations and servers with multiple liquid‑cooled GPUs are designed for such scenarios and can become the main on‑premise AI backbone, while DGX Spark remains a companion device for local development.

Can models and workflows move between DGX Spark and larger systems?

Yes. DGX Spark uses the same NVIDIA AI software stack that powers larger DGX and Comino systems, so models, tools and workflows can be moved with minimal friction. Typical setups use DGX Spark for experimentation and small deployments and Comino Grando or rack servers for training and large‑scale inference.

How is DGX Spark typically deployed within a team?

DGX Spark can be used as a standalone desktop system for a single researcher, as a shared resource inside a small AI team, or as part of a lab with several systems connected via high‑speed networking. Comino specialists can help design a topology where DGX Spark devices are integrated with existing storage, authentication and CI/CD pipelines.

Request an NVIDIA DGX Spark configuration for your team

Share your use case and requirements, and Comino specialists will help you choose the right configuration, integration path and scaling options from a single desktop device to a full‑scale AI cluster.
A typical scenario: one or two DGX Spark systems for the R&D team, plus Comino Grando workstations or servers handling heavy training and production AI workloads.
Request demo