NOW AVAILABLE: our latest bare metal & virtual NVIDIA Blackwell GPU nodes at Equinix, Melbourne. Spin up in <24hrs. Ask us.
WORLD CLASS SOVEREIGN AI INFRASTRUCTURE
Your own dedicated AI infra, built by Zainode
An ideal supplement to your existing cloud strategy, built for your private AI needs.
Made with NVIDIA’s RTX Pro 6000 Blackwell GPUs, offering serious HPC performance at a fraction of the cost of comparable data-centre GPUs.
Rack-mounted server clusters and high-performance workstations, depending on scale and use case.
Deployed in our DC, your DC or on-prem.
Rent or buy whole machines or fractional compute.
Off internet capability.
MASSIVE HPC POWER
Run serious AI workloads, train models, process data, build inference endpoints
With tools like vLLM, llama.cpp, ExLlama and DeepSpeed, you can optimise memory with paged attention, quantisation and MoE routing, all without paying per token, queuing or handing over your data.
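To make the memory point concrete, here is a rough back-of-envelope sketch of why quantisation matters when sizing a model for on-prem GPUs. The parameter count, bit widths and 20% overhead factor below are illustrative assumptions, not Zainode figures:

```python
def model_vram_gb(params_billion: float, bits_per_weight: int,
                  overhead: float = 1.2) -> float:
    """Rough serving footprint: weight bytes plus ~20% assumed
    headroom for KV cache and activations."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params ≈ 1 GB at 8 bits
    return weight_gb * overhead

# A hypothetical 70B-parameter model, FP16 vs 4-bit quantised:
fp16_gb = model_vram_gb(70, 16)  # ≈ 168 GB: spans two 96GB GPUs
int4_gb = model_vram_gb(70, 4)   # ≈ 42 GB: fits on a single 96GB GPU
```

The exact footprint depends on context length, batch size and the runtime's own overheads, which is why tools like vLLM and ExLlama expose quantised loading and paged KV-cache management.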
WHY HAVE YOUR OWN AI COMPUTE?
Reduce costs and gain engineered data control, not policy and hope
Big savings & cost certainty
Public AI + cloud is expensive & variable — we give cost certainty and no sudden price rises.
No data leakage
No data leaves your machine. Why would any organisation let public AI or cloud providers access its ERP data?
No vendor lock in
No vendor lock-in — deploy models locally with any framework you choose.
Full control
With cloud compute and public AI you can’t control latency or availability. With your own infra, you can.
No throttling
Run as many nodes and workloads as you need, on your own schedule.
No data logging
With cloud vendors and public AI your data is logged, rate-limited and outside your control. On your own infra, it isn’t.
Key attributes of our infra
Power use
Our AI servers require extreme rack densities (30–60 kW per rack). Scaling this demands advanced electrical design, intelligent distribution and redundant power systems. Our infra, chips, models and software are carefully designed to minimise energy use and maximise value per watt.
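As a sanity check on the 30–60 kW figure, a back-of-envelope rack power budget. The component wattages and servers-per-rack count are illustrative assumptions, not measured Zainode numbers:

```python
# Assumed per-server draw for an 8-GPU node (illustrative figures):
GPU_W, GPUS_PER_SERVER = 600, 8   # ~600 W per Blackwell server GPU
CPU_W, CPUS_PER_SERVER = 350, 2
OTHER_W = 800                     # fans, NICs, drives, PSU losses

server_w = GPU_W * GPUS_PER_SERVER + CPU_W * CPUS_PER_SERVER + OTHER_W  # 6,300 W

SERVERS_PER_RACK = 8
rack_kw = server_w * SERVERS_PER_RACK / 1000  # 50.4 kW, inside the 30–60 kW band
```

At these densities a single rack draws as much as a dozen suburban homes, which is why the electrical and cooling design dominates the facility engineering.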
Cooling systems
Our closed cooling loops eliminate evaporation, drastically reducing municipal water demand. Our system design prioritises drought resilience and minimal draw from community resources. We have also built a prototype immersion-cooled GPU cluster, which we believe is an Australian first, with significant energy-efficiency benefits.
Networking
Our segregated high-performance network includes hardware management, hypervisor and cluster management, inference serving networks, and training networks with extreme east–west bandwidth (400–800 Gbps per node), enabling frontier LLM training. Multi-path fibre and automatic failover provide redundancy, resilience and continuous availability.
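To see what that east–west bandwidth buys, an idealised ring all-reduce estimate for gradient synchronisation. The payload size and node count are assumed for illustration; real training jobs also overlap communication with compute:

```python
def ring_allreduce_seconds(payload_gb: float, nodes: int, link_gbps: float) -> float:
    """Idealised ring all-reduce: each node moves 2*(n-1)/n of the
    payload over its link; ignores latency and protocol overhead."""
    bits_moved = payload_gb * 8e9 * 2 * (nodes - 1) / nodes
    return bits_moved / (link_gbps * 1e9)

# Syncing 10 GB of gradients across 16 nodes:
t_400 = ring_allreduce_seconds(10, 16, 400)  # 0.375 s per step at 400 Gbps
t_800 = ring_allreduce_seconds(10, 16, 800)  # half that at 800 Gbps
```

Doubling the per-node link rate halves the communication floor per training step, which is why the 400–800 Gbps range matters for large-model training throughput.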
Compute & storage
Dense GPU servers with NVLink and PCIe Gen5 architectures, optimised for AI training and inference. CPU compute includes high-core-count systems for orchestration, preprocessing and hybrid applications. Storage pairs hot NVMe-based distributed storage for active training datasets with cool, erasure-coded object storage for archival and compliance requirements.
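The hot/cool split matters for cost: erasure coding keeps archival durability without replication's raw-capacity penalty. A minimal sketch of the overhead arithmetic; the k+m layout shown is an example, not a statement of the actual coding scheme used:

```python
def raw_per_usable(data_shards: int, parity_shards: int) -> float:
    """Raw bytes stored per usable byte under k+m erasure coding."""
    return (data_shards + parity_shards) / data_shards

ec = raw_per_usable(8, 3)   # 1.375x raw storage, tolerates 3 shard losses
rep = 3.0                   # 3x replication tolerates 2 copy losses

# 100 TB of archives: 137.5 TB raw with 8+3 coding vs 300 TB replicated.
```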
Simple & fast
We can spin up and spin down GPU compute with 24 hours’ notice. We also engineer controls to prevent unexpected run time when users forget to pause compute.
Quick cross connects
As we’re colocated in major DCs, cross connects are fast and secure. Ideal for connecting your AI workloads to other devices, networks or DCs, all while avoiding the public internet. Connect to Anthropic, AWS Bedrock, OpenAI, Azure, Bard AI, Kubernetes Engine and more, all within minutes.
Expertly designed & built by Zainode
Custom frame
Built from reliable GoBilda robotics parts to support GPUs securely.
Dual boards
Dual PCIe boards — one inside the chassis, one mounted externally.
No superfluous cables
No extenders — direct PCIe connections avoid signal loss or lane downgrades.
Stable power
Clean power distribution with custom Y-splitters ensures stable power delivery.
Tool-less upgrades
PCIe 5.0 boards let you upgrade to newer GPUs without rebuilding.
Technical specs
Most enterprise alternatives require redrivers, PCIe switches, or expensive proprietary layouts. These Zainode builds skip all that complexity.
Servers
GPU — 8x NVIDIA RTX 6000 PRO Blackwell Server Edition 96GB graphics cards (768GB VRAM total); 1008 TFLOPS
CPU — 2x Intel XEON 6767P 64C/128T 2.40/3.90GHz CPU; or 2x AMD EPYC 9254 (24-core, 128MB cache)
RAM — 1024GB 6400MHz DDR5 ECC RAM; or 384GB DDR5 ECC RDIMM
OS — Ubuntu Server 22.04 LTS
Storage — 1.92TB Micron 7450 PCIe 4.0 NVMe SSD; or 2x 1TB PCIe 4.0 NVMe M.2 SSD Boot Drive
Network — Dual 10GbE ports; or 2x 25G Network Adapter
Data drive — 16TB PCIe 5.0 NVMe SSD
Chassis — ASUS ESC8000A-E12P; or NVIDIA MGX 4U
Dimensions — Width: 9.5" (240mm), Height: 22.8" (580mm), Depth: 20" (560mm), Weight: 43-55 lbs
Other — N + N Redundancy; Static IP
Datasheet — ask us
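To gauge what the server's 768GB aggregate VRAM supports, a quick weights-only sizing check. The parameter count and precisions are illustrative; KV cache and activations consume additional memory on top of the weights:

```python
AGG_VRAM_GB = 8 * 96  # eight 96GB cards per server = 768 GB

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Memory for model weights alone, excluding KV cache and activations."""
    return params_billion * bits_per_weight / 8

# e.g. a hypothetical 405B-parameter model:
fp8_gb = weights_gb(405, 8)    # 405 GB: fits with headroom for cache
fp16_gb = weights_gb(405, 16)  # 810 GB: exceeds a single server's VRAM
```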
Workstations
GPU — 4x NVIDIA RTX 6000 PRO Blackwell Max-Q 96GB graphics cards (384GB VRAM total); 300W per GPU
CPU — AMD Ryzen Threadripper PRO 7975WX (liquid cooled with Silverstone XE360-TR5); 32 cores / 64 threads; Base clock: 4.0 GHz, Boost up to 5.3 GHz; 8-channel DDR5 memory controller
RAM — 256GB DDR5 RAM; 8 channels
OS — Ubuntu Server
Storage — 8TB total: 4x 2TB PCIe 5.0 NVMe SSDs, x4 lanes each (up to 14,900 MB/s theoretical per module)
Network — Dual 10GbE ports; or 2x 25G Network Adapter
Data drive — 16TB PCIe 5.0 NVMe SSD
Chassis — ATX
Dimensions — Width: 9.5" (240mm), Height: 12" (304mm), Depth: 20" (560mm), Weight: 43-55 lbs
Other — N + N Redundancy; Static IP
Datasheet — ask us
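For the workstation's storage, the theoretical figures above imply roughly the following aggregate streaming rate. These are peak numbers from the spec; sustained real-world throughput will be lower:

```python
PER_DRIVE_MB_S = 14_900   # theoretical Gen5 x4 peak per drive (spec figure)
DRIVES = 4

agg_gb_s = PER_DRIVE_MB_S * DRIVES / 1000  # 59.6 GB/s theoretical aggregate

# Streaming a 1 TB training shard at that rate:
seconds = 1000 / agg_gb_s                  # ~16.8 s best case
```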
Hardware Manifest & Attestation
For each server or workstation, Zainode provides fully certified Hardware Manifest and Attestation documentation, ensuring security protocols are met.
Third Party Inspections
Unlike hyperscalers, Zainode makes our infra open to physical and digital third-party inspections and verification.
Certificate of Origin
Our servers and workstations are accompanied by a Certificate of Australian Origin.
BUILD YOUR OWN CUSTOM OPTIONS
BYO options with our bare metal servers
Bare metal servers offer direct access to hardware, providing unparalleled customisation and control. Ideal for intensive ML/AI workloads, they eliminate virtualisation overhead for organisations requiring maximum compute power. Choose CPU, GPU, RAM and storage for your needs. Dedicated hardware means no sharing and consistent performance.
DEPLOYMENT OPTIONS
Our team have deployed HPC in >30 DCs globally
1. On-prem
Buy or rent our servers, housed at your premises for complete control and data protection. See data flow.
2. Our DC
Use our reliable ML infra in our secure IRAP certified GPU DCs, with VPN, fast internet, and direct connection. See data flow.
3. Your DC
Manage every facet of your ML infra with deployment in your DC, installed by engineers specialising in large-scale HPC infra. See data flow.
OUR DC AT EQUINIX ME2 IN MELBOURNE
OUR NVIDIA BLACKWELL RTX 6000 GPUS
OUR HPC SERVERS, ON PREM
OUR SECURE AI COMPUTE RACKSPACE
Deployments … key feature comparison

| Feature | On-prem | Our DC | Your DC |
| --- | --- | --- | --- |
| Data control | Full data control | Managed security; we never store | Full control in your DC |
| Data residency requirements | Contained in country | Multi-region global options | Region-locked deployments |
| Compute capacity | Customised to order | Customised to order | Use existing, Zainode overflow |
| Cost efficiency | High | Cost-effective, on-demand | In-house compute when available |
| Cost certainty | Fixed | Fixed and flexible | Fixed and flexible |
| Integration with internal systems | Custom or OOTB integration | Easy integration via ecosystem | Custom or OOTB integrations |
| Performance optimisation | On-chip model performance | On-chip performance, low latency | On-chip performance, low latency |
| Scalability | High, tailored scalability | High, flexible scaling options | High, flexible scaling options |
| Security and compliance | Organisational policies | SOC 2, HIPAA, GDPR compliant | Organisational policies |
| Support and maintenance | Comprehensive support & MS | Comprehensive support & MS | Comprehensive support & MS |
| Existing cloud commits | Spend down existing | Spend down existing | Use credits or commits |
Secure the latest GPUs with custom AI apps for less cost and data protection.
Zainode offers comprehensive lifecycle management to ensure your AI investment remains at the forefront of technology. Deploy with the latest advancements today and be among the first to adopt the next generation. Our certified engineers handle the entire upgrade and scaling process, guaranteeing seamless transitions.
Delivered. Plug in. Start training. Start using.
Fast access, zero config
Achieve peace of mind when standing up AI infra with NVIDIA AI — application frameworks, AI dev tools and microservices, optimised to run on Zainode systems.
Easy upgrades
As components evolve, our flexible stack adopts new versions quickly: implemented in 24 hours, not months.
High powered GPUs
Reliable drivers compatible with popular frameworks, delivering high performance, scalability and security for every workload.
Cost certainty & options
Cost effective, fixed private infra. Buy or rent whole or part servers and nodes — options available here.
Expert support & warranty
Hands-on support from dedicated engineers for any infra, OS or ML issues.