A compact personal AI supercomputer powered by the NVIDIA Grace Blackwell architecture — your gateway to the Comino AI ecosystem.
Runs models of up to roughly 200 billion parameters locally, with the option to scale to around 405 billion parameters by linking two systems.

DGX Spark combines NVIDIA Grace Blackwell architecture, modern networking and the NVIDIA AI software stack to bring large language models, generative AI and classic ML workloads to the desktop.
Target audience
AI development teams, research labs and data scientists who need fast experimentation cycles close to where they work.
Developers and researchers
Prototyping LLMs, generative models and hyperparameter search workflows.
Small and mid‑size teams
Dedicated AI compute without deploying a full GPU cluster.
NVIDIA Grace Blackwell architecture delivers up to 1 petaFLOP of AI performance at FP4 precision, ideal for generative models and high‑throughput inference workloads.
A single unified system memory space simplifies working with large models and datasets, reducing overhead from moving data between CPU and GPU.
Integrated ConnectX‑7 Smart NIC and 10 GbE provide the bandwidth needed to link two DGX Spark systems and scale to models of around 405 billion parameters.
A 150 × 150 × 50.5 mm chassis weighing around 1.2 kg allows DGX Spark to sit on the desk while still delivering data center class compute.
NVIDIA DGX OS and the NVIDIA AI software stack provide frameworks, libraries and tools to shorten the time from unboxing to running the first experiment.
DGX Spark integrates with existing Comino server and rack infrastructure, providing a seamless path from prototype on the desk to full‑scale production clusters.
Values are based on public partner descriptions (Novatech, PNY and others) and should be cross‑checked against the official NVIDIA technical documentation.
Each connector is paired with a clear usage scenario to help engineers plan how DGX Spark will be integrated into their environment.

Connection layout: placement of ports, network interfaces and input/output connectors.
Based on NVIDIA official pages, Marketplace and partner sites, these are the key usage scenarios and benefits for teams working with AI.
DGX Spark is well suited for training and fine‑tuning machine learning models, including large language and multimodal models, without having to provision a separate GPU cluster.
Research groups gain a dedicated resource for fast hypothesis testing, exploring new model architectures and running generative AI experiments directly at the desk.
Engineering and product teams can run AI service prototypes, validate integrations and scenarios before moving workloads into Comino‑based production infrastructure.
Product gallery: DGX Spark shown from multiple angles and in real working environments.
DGX Spark is ideal for development and inference of large model workloads: language models, generative models, computer vision and classic machine learning tasks. It is especially valuable at the prototyping stage, for hypothesis testing and local experimentation.
Using ConnectX‑7 Smart NIC and 10 GbE, two DGX Spark systems can be linked together, extending the effective model size and types of workloads you can run. For further scaling, Comino server solutions and NVIDIA‑based clusters are recommended.
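The 200B and 405B figures above can be sanity-checked with a back-of-envelope memory estimate. The sketch below assumes FP4-quantized weights (4 bits, i.e. 0.5 bytes per parameter); it is illustrative only, since a real deployment also needs memory for the KV cache, activations and runtime overhead.

```python
def fp4_weight_gb(num_params: float) -> float:
    """Approximate weight footprint in GB for FP4-quantized parameters."""
    bytes_per_param = 0.5  # FP4 = 4 bits per parameter
    return num_params * bytes_per_param / 1e9

# A ~200B-parameter model needs on the order of 100 GB for weights alone,
# which fits within a single system's unified memory.
print(f"200B model at FP4: ~{fp4_weight_gb(200e9):.0f} GB of weights")

# A ~405B-parameter model roughly doubles that, which is why linking two
# DGX Spark systems extends the practical model size.
print(f"405B model at FP4: ~{fp4_weight_gb(405e9):.1f} GB of weights")
```

This is why two linked systems approximately double the addressable model size: the quantized weights are split across the combined unified memory of both machines.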
DGX Spark ships with NVIDIA DGX OS and access to the NVIDIA AI software stack, including frameworks, libraries and tools for generative AI and classic machine learning. The exact bundle may vary by region and configuration, so please confirm with your Comino representative.
DGX Spark complements Comino rack and server solutions: local experiments and prototyping run on the desktop system, while final training and production workloads move to a scalable cluster built on Comino hardware.
In this case DGX Spark is still a great environment for research and early prototyping, but production‑scale training and multi‑team workloads are better handled by larger systems. Comino Grando workstations and servers with multiple liquid‑cooled GPUs are designed for such scenarios and can become the main on‑premise AI backbone, while DGX Spark remains a companion device for local development.
Yes. DGX Spark uses the same NVIDIA AI software stack that powers larger DGX and Comino systems, so models, tools and workflows can be moved with minimal friction. Typical setups use DGX Spark for experimentation and small deployments and Comino Grando or rack servers for training and large‑scale inference.
DGX Spark can be used as a standalone desktop system for a single researcher, as a shared resource inside a small AI team, or as part of a lab with several systems connected via high‑speed networking. Comino specialists can help design a topology where DGX Spark devices are integrated with existing storage, authentication and CI/CD pipelines.