SERVERS & WORKSTATIONS
Zainode’s hardware is built around NVIDIA’s RTX Pro 6000 Blackwell GPUs, which deliver incredible performance at a fraction of the cost of other enterprise GPUs. Ideal for training, research, running open LLMs and multimodal diffusion models, or building agents with RAG — all locally. Available as rack-mounted server clusters or high-performance workstations, depending on scale and use case.
Your own world-class AI infrastructure
Expertly designed & built by Zainode
-
Custom frame
Built from reliable GoBilda robotics parts to support GPUs securely.
-
Dual boards
Dual PCIe boards — one inside the chassis, one mounted externally.
-
No superfluous cables
No extenders — direct PCIe connections avoid signal loss or lane downgrades.
-
Stable power
Clean power distribution with custom Y-splitters ensures stable delivery to every GPU.
-
Tool-less upgrades
PCIe 5.0 boards let you upgrade to newer GPUs without rebuilding.



Technical specs
Most enterprise alternatives require redrivers, PCIe switches, or expensive proprietary layouts. These Zainode builds skip all that complexity. This configuration gives technical leaders the ability to prototype, test and scale without relying on external cloud GPUs, while maintaining predictable performance and cost efficiency.
Server
GPU — 8x NVIDIA RTX 6000 PRO Blackwell Server Edition 96GB Graphics Card (768GB VRAM total)
CPU — 2x Intel XEON 6767P 64C/128T 2.40/3.90GHz CPU; or 2x AMD EPYC 9254 (24-core, 128MB cache)
RAM — 1024GB 6400MHz DDR5 ECC RAM; or 384GB DDR5 ECC RDIMM
OS — Ubuntu Server 22.04 LTS
Storage — 1.92TB Micron 7450 PCIe 4.0 NVMe SSD, or 2x 1TB PCIe 4.0 NVMe M.2 SSD boot drive; 16TB PCIe 5.0 NVMe SSD data drive
Network — Dual 10GbE ports; or 2x 25G network adapter
Chassis — ASUS ESC8000A-E12P; or NVIDIA MGX 4U
Dimensions — Width: 9.5" (240mm), Height: 22.8" (580mm), Depth: 20" (560mm), Weight: 43-55 lbs
Other — N + N Redundancy; Static IP
Datasheet — ask us
Workstation
GPU — 4x NVIDIA RTX 6000 PRO Blackwell Max-Q 96GB Graphics Card (384GB VRAM total); 300W per GPU
CPU — AMD Ryzen Threadripper PRO 7975WX (liquid cooled with Silverstone XE360-TR5); 32 cores / 64 threads; Base clock: 4.0 GHz, Boost up to 5.3 GHz; 8-channel DDR5 memory controller
RAM — 256GB DDR5 RAM; 8 channels
OS — Ubuntu Server
Storage — 8TB total: 4x 2TB PCIe 5.0 NVMe SSDs, x4 lanes each (up to 14,900 MB/s theoretical per module); 16TB PCIe 5.0 NVMe SSD data drive
Network — Dual 10GbE ports; or 2x 25G network adapter
Chassis — ATX
Dimensions — Width: 9.5" (240mm), Height: 12" (304mm), Depth: 20" (560mm), Weight: 43-55 lbs
Other — N + N Redundancy; Static IP
Datasheet — ask us
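
Once a box is delivered, it is worth confirming the driver sees every card before scheduling real work. The sketch below is a generic check, assuming PyTorch with CUDA support is installed, rather than a Zainode-specific tool:

import torch

# List every GPU the driver exposes along with its VRAM. The 8x server
# should report eight devices at roughly 96 GB each; the 4x workstation,
# four.
assert torch.cuda.is_available(), "No CUDA-capable GPU detected"
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")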
Key attributes of our physical infra
-
Power use
Our AI servers run at extreme rack densities (30–60 kW per rack); an 8-GPU node alone draws roughly 5–6 kW under load. Scaling this demands advanced electrical design, intelligent distribution and redundant power systems. Our infra, chips, models and software are carefully designed to minimise energy use and maximise value per watt.
-
Cooling systems
Our closed cooling loops eliminate evaporation, drastically reducing municipal water demand. Our system design prioritises drought resilience and minimal draw on community resources. We have also built a prototype immersion-cooled GPU cluster, which we believe is an Australian first, with significant energy-efficiency benefits.
-
Networking
Our segregated high-performance network spans hardware management, hypervisor and cluster management, inference-serving networks, and training networks with extreme east–west bandwidth (400–800 Gbps per node), enabling frontier LLM training; a rough sketch of how that bandwidth gets exercised follows this list. Multi-path fibre and automatic failover provide redundancy, resilience and continuous availability.
-
Compute & storage
Dense GPU servers with NVLink and PCIe Gen5 architectures, optimised for AI training and inference. CPU compute includes high-core-count systems for orchestration, preprocessing and hybrid applications. Storage pairs hot NVMe-based distributed storage for active training datasets with cool, erasure-coded object storage for archival and compliance requirements.
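
As a rough illustration of how that east–west bandwidth is exercised during training, the sketch below runs an NCCL all-reduce across the GPUs in one node. It assumes PyTorch with CUDA; the tensor size and filename are arbitrary, and this is a generic check rather than Zainode tooling.

import os
import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK, WORLD_SIZE and LOCAL_RANK; NCCL moves tensors
    # over NVLink/PCIe inside a node and over the fabric between nodes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    x = torch.ones(512 * 2**20 // 4, device="cuda")  # ~512 MB of fp32
    dist.all_reduce(x)  # sums across all ranks; bandwidth-bound
    torch.cuda.synchronize()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched with torchrun --nproc_per_node=8 allreduce_check.py on the server, this exercises the intra-node links; the same script scales to multi-node runs over the training network.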
Delivered. Plug in. Start training. Start using.
-
Fast access, zero config
Stand up AI infra with peace of mind using NVIDIA AI application frameworks, dev tools and microservices, optimised to run on Zainode systems.
-
Easy upgrades
As components evolve, our flexible stack adopts new versions quickly: implemented in 24 hours, not months.
-
High powered GPUs
Reliable drivers, compatible with popular frameworks, delivering unprecedented performance, scalability and security for every workload.
-
Cost certainty & options
Cost-effective, fixed-price private infra. Buy or rent whole or part servers and nodes — options available here.
-
Expert support & warranty
Hands-on support from dedicated engineers for any hardware, OS or ML issues.
Why have your own AI hardware?
01
No data leaves your machine — perfect for sensitive research or enterprise work.
02
No throttling — run as many nodes and jobs as you want, on your own schedule.
03
No vendor lock-in — deploy models locally with any framework you choose.
04
Public AI + cloud is expensive and variable — private infra gives you cost certainty, with no sudden price rises.
05
You can’t control latency or availability with cloud compute.
06
With cloud vendors your requests are logged and rate-limited, and your data sits outside your control.
Run serious AI workloads, fine-tune your models or host your own AI assistant with Zainode’s massive compute power.
With tools like vLLM, llama.cpp, ExLlama and DeepSpeed you can optimise memory with paged attention, quantisation and MoE routing, all without paying per token, queuing or handing over your data.
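
As a minimal sketch, assuming a vLLM install and four GPUs as on the workstation above (the model name and settings are illustrative, not a Zainode default):

from vllm import LLM, SamplingParams

# Shard an open-weights model across four GPUs with tensor parallelism.
# vLLM's PagedAttention allocates the KV cache in fixed-size blocks, so
# long contexts do not fragment VRAM.
llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # illustrative checkpoint
    tensor_parallel_size=4,        # one shard per workstation GPU
    gpu_memory_utilization=0.90,   # leave headroom for activations
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Draft a summary of our incident runbook."], params)
print(outputs[0].outputs[0].text)

Everything runs on the box itself: no per-token billing, no queue, and prompts never leave the machine.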
BYO options with our bare metal servers
Bare metal servers offer direct access to hardware, providing unparalleled customisation and control. Ideal for intensive ML/AI workloads, they eliminate virtualisation overhead for organisations requiring maximum compute power.
Overview
Custom config: choose CPU, GPU, RAM and storage for your needs.
Dedicated hardware: no sharing, ensuring consistent performance.
Why?
Max performance: full use of server resources.
Security, compliance: ideal for strict regulatory requirements.
Custom setup: configured to workload demands and ESG requirements.
Reliable and consistent performance.
Who
Enterprise and government IT teams, game devs, and researchers.
Ideal for low latency, dedicated resources and full control.
Use case examples
High-performance databases.
Rendering and transcoding.
Hosting latency-sensitive applications like credit platforms.