SupremeRAID™ Linux Driver 2.0 Beta Release Notes, Software, and Documentation
Beta Program Disclaimer
The features described in this release are part of the SupremeRAID™ 2.0 Beta Program and are provided for evaluation purposes only. Performance figures, functionality, and compatibility may change before the official release. Participants are encouraged to provide feedback to help us refine and stabilize the final version. Graid Technology Inc. makes no guarantee of production readiness for beta features at this stage.
New Features
RAID5/6 Random Write & Degraded Performance Enhancement
SupremeRAID™ 2.0 delivers major improvements in RAID5 and RAID6 performance, particularly in random write efficiency and degraded-state performance.
Through a redesigned GPU-accelerated parity pipeline and optimized degraded-I/O scheduling, version 2.0 achieves significantly higher throughput and more consistent latency across parity RAID configurations.
Key Improvements
- RAID5/6 optimal 4K random write (up to 5×)
- RAID5/6 degraded 4K random read (up to 2.6×)
- RAID5/6 degraded 4K random write (up to 6×)
- RAID5/6 degraded 1M random read throughput (up to 17×)
- RAID6 degraded 1M random write throughput (up to 9×)
Info
These enhancements deliver stable performance even during degraded or rebuild states, reducing the impact on live workloads such as AI training, virtualization, and large-scale analytics.
Performance Comparisons
Testing Environment
- Hardware
- CPU: AMD EPYC 9755 128-Core Processor × 2
- Memory: 32 GB DDR5-6400 RDIMM × 24
- RAID Controller: SR-PAM2-FD32 (A1000)
- NVMe Drives: KIOXIA CM7 3.2 TB × 24
- Software
- OS: Ubuntu 24.04.2 LTS
- Kernel: 6.8.0-62-generic
- SupremeRAID™ Driver Versions:
  • 1.7.2 Update 61
  • 2.0 Beta
- Benchmark Tool: fio-3.40 (a representative job is sketched after the RAID configuration below)
- RAID Configuration
- One RAID5 group with 24 physical drives
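The exact fio job files used for these runs are not published in these notes. The sketch below is a representative 4K random-write workload against a SupremeRAID™ virtual drive; the device name /dev/gdg0n1 and the queue-depth/job-count values are illustrative assumptions, not the lab configuration.
```
# Representative 4K random-write job (illustrative only; not the exact lab job).
# Assumes the RAID5 virtual drive is visible as /dev/gdg0n1.
fio --name=randwrite-4k \
    --filename=/dev/gdg0n1 \
    --direct=1 \
    --rw=randwrite \
    --bs=4k \
    --ioengine=libaio \
    --iodepth=64 \
    --numjobs=16 \
    --group_reporting \
    --time_based --runtime=300
```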
Performance charts (results summarized in the Numerical Comparison table below):
- RAID5/6 4K Random Write (Optimal)
- RAID5/6 4K Random Read (Degraded)
- RAID5/6 4K Random Write (Degraded, Journal Off)
- RAID5/6 1M Random Read (Degraded)
- RAID5/6 1M Random Write (Degraded, Journal Off)
Numerical Comparison
| Scenario | RAID5 1.7.2 | RAID5 2.0 | RAID6 1.7.2 | RAID6 2.0 | Unit | Improvement |
|---|---|---|---|---|---|---|
| 4K Random Write (Optimal) | 1.00 | 4.94 | 0.79 | 4.38 | M IOPS | ~5× |
| 4K Random Read (Degraded) | 3.45 | 9.19 | 3.45 | 9.14 | M IOPS | ~2.6× |
| 4K Random Write (Degraded, Journal Off) | 0.88 | 4.80 | 0.69 | 4.22 | M IOPS | ~5–6× |
| 1M Random Read (Degraded) | 13.1 | 225 | 13.1 | 235 | GB/s | ~17× |
| 1M Random Write (Degraded, Journal Off) | 113 | 115 | 12.2 | 112 | GB/s | ~9× (RAID6) |
Disclaimer
The performance data presented here were collected on SupremeRAID™ PRO (NVIDIA A1000) under controlled laboratory conditions. Actual results may vary depending on hardware configuration, SSD model, and system topology. Performance on other GPUs—such as the T400, 2000 Ada, or future architectures—may differ due to variations in GPU compute capability, memory bandwidth, and system integration factors.
Integrated SPDK Target
The Linux 2.0 driver introduces an integrated SPDK NVMe-oF target within the Graid management daemon, enabling direct, zero-copy data export from SupremeRAID™ virtual drives to remote clients over NVMe-oF (TCP or RDMA). This feature eliminates the need for an external SPDK instance and significantly reduces I/O latency by bypassing the kernel block layer.
Each Drive Group can now operate in SPDK mode, where its virtual drives are exposed as SPDK bdevs instead of /dev/gdg* devices. These bdevs can be directly exported as NVMe-oF subsystems and optionally configured in 512-byte LBA emulation (512e) mode, ensuring compatibility with VMware ESXi and other environments that require 512-byte logical sectors.
The integrated SPDK target shares the same GPU-accelerated RAID engine as the kernel path but offloads the I/O stack entirely into user space. This enables low-latency data services, simplified deployment, and unified control via graidctl, making it ideal for hypervisor, disaggregated storage, or remote-NVMe use cases.
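As a quick illustration of this change in access path, the commands below contrast the two modes. lsblk is a generic Linux utility, and graidctl list vd is the same listing command used later in the setup guide; this is a sketch of what to expect rather than a required verification step.
```
# Kernel path: each VD appears as a /dev/gdg* block device.
lsblk -o NAME,SIZE,TYPE | grep gdg

# SPDK mode: the kernel nodes are gone; the VDs are visible only through
# graidctl (as SPDK bdevs) and to NVMe-oF clients once exported.
graidctl list vd
```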
Quick Setup Guide (for Beta Users)
This guide provides a simple end-to-end workflow to enable SPDK mode and export a VD over NVMe-oF for quick evaluation.
- Prepare the environment
- SupremeRAID™ 2.0 (Linux) with the integrated SPDK feature
- Root access
- Enough contiguous free memory for hugepages (an additional 8 GiB is recommended)
- NICs and routes ready for NVMe-oF (TCP or RDMA)
- Install or refresh dependencies: libibverbs1, librdmacm1, libnuma1
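If those libraries are not already present, they can usually be installed from the distribution repositories. A minimal sketch follows; the RHEL-family package names are an assumption and may differ by release.
```
# Ubuntu/Debian: package names as listed above
sudo apt-get install -y libibverbs1 librdmacm1 libnuma1

# RHEL/Rocky/AlmaLinux equivalents (names assumed; verify with `dnf search`)
sudo dnf install -y libibverbs librdmacm numactl-libs
```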
- Enable 1 GB hugepages
  - Set the default hugepage size to 1G:
    - Ubuntu/Debian:
      ```
      sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G /' /etc/default/grub
      sudo grub-mkconfig -o /boot/efi/EFI/$OS_ID/grub.cfg
      sudo reboot
      ```
    - RHEL/CentOS/Rocky:
      ```
      sudo grubby --update-kernel=ALL --args="default_hugepagesz=1G"
      sudo reboot
      ```
  - After reboot, make sure the default hugepage size is 1048576 kB:
    ```
    sudo grep Hugepagesize /proc/meminfo
    ```
  - Make sure no 2 MB hugepages are allocated on the system:
    ```
    echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    ```
- Allocate hugepages:
  ```
  # At least 8 GiB of hugepages in total is recommended for the integrated SPDK
  echo 8 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
  # Alternatively, you can allocate hugepages per NUMA node
  # echo 5 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
  # echo 3 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
  # Make sure the allocation is successful
  sudo grep Hugetlb /proc/meminfo
  ```
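Note that the echo-based allocation above does not persist across reboots. One common way to reserve the pages at every boot, assuming 8 × 1 GiB pages, is to add them to the kernel command line alongside the default_hugepagesz setting; the grubby example below is illustrative, and Ubuntu/Debian users can append the same parameters to GRUB_CMDLINE_LINUX_DEFAULT instead.
```
# RHEL/CentOS/Rocky example: reserve 8 x 1 GiB hugepages at boot (illustrative).
sudo grubby --update-kernel=ALL --args="hugepagesz=1G hugepages=8"
sudo reboot
```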
- Enable the integrated SPDK service
  - Edit /etc/graid/graid_spdk.conf:
    ```
    [system]
    # Binds SPDK reactors to specific CPU cores to enable the integrated SPDK service
    reactor_mask = [0-3]
    # Optional: reserve memory if external SPDK coexists
    # mem_size_gb = 6
    ```
  - Restart to apply:
    ```
    systemctl restart graid
    ```
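After the restart, a generic systemd check (not a Graid-specific verification step) can confirm that the daemon came back up and that the SPDK reactors started without errors:
```
systemctl status graid --no-pager
journalctl -u graid --since "5 minutes ago" --no-pager
```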
- Create PDs, DGs, and VDs
  - Create or reuse devices normally.
  - If a DG was created with zero-init, wait until initialization completes before switching to SPDK mode.
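For reference, a minimal creation flow might look like the sketch below. The exact graidctl sub-command names and argument forms are not restated in these notes and should be verified against the SupremeRAID™ User Guide for your installed version.
```
# Illustrative only; verify syntax against the SupremeRAID(TM) User Guide.
graidctl create physical_drive /dev/nvme0-23n1   # create PDs from 24 NVMe devices
graidctl create drive_group raid5 0-23           # create a RAID5 DG from PDs 0-23
graidctl create virtual_drive 0                  # create a VD spanning DG 0
graidctl list vd                                 # confirm the VD is ready
```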
- Enable SPDK mode
  - Convert the Drive Group (DG) to SPDK mode; all VDs become SPDK bdevs (no /dev/gdg* devices):
    ```
    graidctl edit dg 0 spdk_bdev enable
    graidctl list vd
    ```
  - Note: kernel and SPDK access cannot coexist within the same DG.
- Create an NVMe-oF target
  - Example (TCP on port 4420):
    ```
    graidctl create export_target tcp enp0s1 ipv4 4420 --spdk
    ```
  - Kernel and SPDK targets cannot share the same IP + port.
- Export a VD (optional 512e for VMware)
  - Use --spdk-512e for 512-byte LBA emulation (required by VMware ESXi):
    ```
    graidctl export vd 0 0 -i 0 --spdk-512e
    ```
- Verify from a client
  - Discover the target:
    ```
    nvme discover -t tcp -a <target_ip> -s 4420
    ```
  - Connect (see the sketch after this list) and confirm the LBA size:
    ```
    nvme id-ns -H /dev/nvme0n1 | grep 'Data Size.*(in use)'
    # Expect "Data Size: 512 bytes" for --spdk-512e
    ```
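The connect step itself is not spelled out above. A typical nvme-cli sequence, assuming the subsystem NQN reported by the discover command, looks like this:
```
# Connect using the NQN shown in the discovery log page (placeholder values).
sudo nvme connect -t tcp -a <target_ip> -s 4420 -n <subsystem_nqn>

# The exported VD now appears as a local NVMe namespace.
sudo nvme list
```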
Operational notes
- SPDK mode applies to the entire DG (no mixed modes).
- Allocate ≥8 GiB of 1 GiB hugepages for stability.
- Changing reactor cores (reactor_mask) requires a service restart.
- Address conflicts between kernel and SPDK targets are not allowed.
- Coexistence with external SPDK is supported (spdk_bdev=external).
- The SPDK path computes parity on the GPU; the kernel path computes it on the CPU.
- UNMAP from VMware ESXi is safely ignored if unsupported by underlying drives.
Reference CLI
```
graidctl edit dg <DG_ID> spdk_bdev enable|disable
graidctl create export_target <tcp|rdma> <interface> <ipv4|ipv6> <svcid> --spdk
graidctl export vd <DG_ID> <VD_ID> -i <target_id> [--spdk-512e]
graidctl ls nt
graidctl list vd
nvme discover -t <tcp|rdma> -a <target_ip> -s <svcid>
nvme id-ns -H /dev/nvmeXnY | grep 'Data Size.*(in use)'
```
OS and Platform Support
Supported Operating Systems
| Linux Distribution | Supported Versions (x86_64) |
|---|---|
| AlmaLinux | 8.5-8.10 (Kernel 4.18), 9.0-9.6 (Kernel 5.14), 10.0 (Kernel 6.12) |
| CentOS | 7.9 (Kernel 3.10 and 4.18), 8.3, 8.4, 8.5 (Kernel 4.18) |
| Debian | 11.6 (Kernel 5.10), 12 (Kernel 6.1), 13 (Kernel 6.12) |
| openSUSE Leap | 15.2-15.3 (Kernel 5.3), 15.4-15.5 (Kernel 5.14), 15.6 (Kernel 6.4) |
| Oracle Linux | 8.7-8.10 (RHCK 4.18 and UEK 5.15), 9.1-9.6 (RHCK 5.14 and UEK 5.15), 10.0 (RHCK 6.12 and UEK 6.12) |
| RHEL | 7.9 (Kernel 3.10 or 4.18), 8.3-8.10 (Kernel 4.18), 9.0-9.6 (Kernel 5.14), 10.0 (Kernel 6.12) |
| Proxmox VE | 8.1 (Kernel 6.5), 8.2-8.4 (Kernel 6.8), 9.0 (Kernel 6.14) |
| Rocky Linux | 8.5-8.10 (Kernel 4.18), 9.0-9.6 (Kernel 5.14), 10.0 (Kernel 6.12) |
| SLES | 15 SP2-SP3 (Kernel 5.3), 15 SP4-SP5 (Kernel 5.14), 15 SP6 (Kernel 6.4) |
| Ubuntu | 20.04.0-20.04.4 (Kernel 5.4 and 5.15), 22.04.0-22.04.2 (Kernel 5.15), 22.04.3-22.04.4 (Kernel 6.2), 24.04 (Kernel 6.8) |
Dependencies and Utilities
| Dependency / Utility | Link |
|---|---|
| NVIDIA Driver | NVIDIA-Linux-x86_64-570.195.03.run |
| SupremeRAID™ Pre-installer | graid-sr-pre-installer-2.0.0-195-x86_64.run (MD5: 83866c7a1ea69824af4cd42c3fefc267) |
| SupremeRAID™ Pre-installer (CentOS 7.9) | graid-sr-pre-installer-2.0.0-centos79-195-x86_64.run (MD5: 34ebe58e18989de35561776cb00ac847) |
Driver Package
- Supported GPU: NVIDIA T400
  - Download Installer: graid-sr-installer-2.0.0-beta-001-47-33.run (MD5: 0745d4f9a808dd63c4a7b4986495c71f)
- Supported GPU: NVIDIA RTX A400
  - Download Installer: graid-sr-installer-2.0.0-beta-cam-47-33.run (MD5: 35d27ed03c8a150056ef45fd4356b700)
- Supported GPU: NVIDIA T1000
  - Download Installer: graid-sr-installer-2.0.0-beta-000-47-33.run (MD5: 820922a2c103ee709713a50e43b171d5)
- Supported GPU: NVIDIA RTX A1000
  - Download Installer: graid-sr-installer-2.0.0-beta-pam-47-33.run (MD5: 231f2f964ce46889de36936a391b0e54)
- Supported GPU: NVIDIA RTX A2000
  - Download Installer: graid-sr-installer-2.0.0-beta-010-47-33.run (MD5: cc25cb95503151e9b1a5d2e32ba2c0eb)
- Supported GPU: NVIDIA RTX 2000 Ada and NVIDIA RTX 2000E Ada
  - Download Installer: graid-sr-installer-2.0.0-beta-uad-47-33.run (MD5: 95fa7aa538c2b9384e78339716a71ef9)