Prerequisites

MeshStor runs on bare-metal Kubernetes clusters with local NVMe drives. This page lists the hardware, software, and network requirements.

Kubernetes

  • Kubernetes 1.30+ (tested with 1.34)
  • CSI driver registration enabled (default since 1.17)

Nodes

At least 2 nodes are required for replicated volumes (numberOfCopies=2); 4 or more nodes are recommended for RAID10 (drivesPerCopy=2).
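The node-count guidance above maps onto volume parameters. A minimal StorageClass sketch — the provisioner string and StorageClass name are assumptions (substitute the real CSI driver name from the Installation page); only numberOfCopies and drivesPerCopy are taken from this page:

```yaml
# Hypothetical StorageClass illustrating the replication parameters above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: meshstor-raid10            # example name
provisioner: meshstor.example.com  # assumption: use the actual MeshStor CSI driver name
parameters:
  numberOfCopies: "2"   # replicate each volume across 2 nodes
  drivesPerCopy: "2"    # RAID10: stripe each copy across 2 drives (needs 4+ nodes)
```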

NVMe Drives

Each node needs at least one NVMe drive that is:

  • Not used as the boot drive — MeshStor partitions the entire drive
  • Accessible as /dev/nvmeXnY — standard NVMe device naming
  • Large enough for your workloads — minimum 512 MiB per partition, but typically 10+ GiB per volume

Verify available drives:

lsblk -d -o NAME,SIZE,MODEL,TRAN | grep nvme
nvme0n1  931.5G  Samsung SSD 990 PRO 1TB  nvme
nvme1n1  931.5G  Samsung SSD 990 PRO 1TB  nvme
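Before dedicating a drive, it is worth confirming that neither the drive nor any of its partitions is mounted (MeshStor partitions the entire drive). A small sketch, where the helper name and device are illustrative; the MOUNTPOINTS column needs util-linux 2.37+, older systems use MOUNTPOINT:

```shell
# Hypothetical helper: succeeds only when the lsblk MOUNTPOINTS output for a
# drive and its partitions contains no mountpoint at all.
check_unused() {
  ! printf '%s' "$1" | grep -q '[^[:space:]]'
}

# Typical usage on a node (device name is an example):
#   check_unused "$(lsblk -nr -o MOUNTPOINTS /dev/nvme0n1)" \
#     && echo "no mounted filesystems -- safe to hand to MeshStor"
```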

Required Packages

Install on every node:

# RHEL/CentOS/Fedora
dnf install -y nvme-cli mdadm xfsprogs parted

# Ubuntu/Debian
apt install -y nvme-cli mdadm xfsprogs parted

Kernel Modules

MeshStor requires the NVMe-oF target kernel modules. Load them now:

# Required for all setups
modprobe nvmet
modprobe nvmet_tcp

# Optional: only if using RDMA transport
modprobe nvmet_rdma

Persist across reboots:

cat > /etc/modules-load.d/meshstor.conf <<EOF
nvmet
nvmet_tcp
EOF
# If using the RDMA transport, also add nvmet_rdma to the file.

Verify the configfs target directory exists:

ls /sys/kernel/config/nvmet/
hosts  ports  subsystems

Missing configfs

If /sys/kernel/config/nvmet/ does not exist after loading modules, ensure configfs is mounted: mount -t configfs configfs /sys/kernel/config
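To make that mount survive reboots, one option is an fstab entry. On most systemd-based distributions configfs is mounted automatically, so this is only needed when it is not:

```shell
# /etc/fstab -- mount configfs at boot (skip if your distribution already does this)
none  /sys/kernel/config  configfs  defaults  0  0
```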

Network

Node Addresses

Each node must be annotated with its NVMe-oF network address:

# Required: TCP address (must be reachable from all other nodes)
kubectl annotate node <node-name> meshstor.io/nvme-over-tcp-address=<ip-address>

# Optional: RDMA address (enables RDMA transport preference)
kubectl annotate node <node-name> meshstor.io/nvme-over-rdma-address=<ip-address>

If no annotation is set, MeshStor falls back to the node's InternalIP.

Ports

Port   Transport      Direction
4420   NVMe-oF TCP    Between all MeshStor nodes
4421   NVMe-oF RDMA   Between RDMA-capable nodes

Both ports must be open on the host network (the node DaemonSet uses hostNetwork: true).

Subnet Connectivity

MeshStor pairs two nodes as replicas only if they share a common subnet, derived from their meshstor.io/nvme-over-tcp-address annotations (and meshstor.io/nvme-over-rdma-address, if set). Plan flat L2 or routed connectivity between every pair of nodes that may host replica partitions for the same volume; otherwise the controller excludes one of the nodes from the placement set and partition scheduling silently skips that pair.
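As a quick sanity check before annotating nodes, a portable shell sketch that tests whether two addresses fall in the same subnet — the /24 prefix, helper names, and example addresses are all assumptions; substitute your network's actual mask:

```shell
# Convert a dotted-quad IPv4 address (no leading-zero octets) to an integer.
ip_to_int() {
  saved=$IFS; IFS=.
  set -- $1
  IFS=$saved
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Succeed when both addresses share the same network under the given prefix length.
same_subnet() {
  mask=$(( 0xFFFFFFFF << (32 - $3) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

if same_subnet 10.0.1.5 10.0.1.17 24; then
  echo "same /24 subnet -- eligible replica pair"
fi
```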

Block Device Selection

By default, MeshStor uses all available NVMe drives on each node. To restrict which drives are used, label the node:

# Use only nvme0n1 and nvme1n1 (separated by ".." due to Kubernetes label value restrictions)
kubectl label node <node-name> meshstor.io/selected-block-devices=nvme0n1..nvme1n1
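The ".." separator decodes back into device names with a one-liner; the helper name below is illustrative:

```shell
# Split a ".."-separated label value into whitespace-separated device names,
# mirroring the encoding used by meshstor.io/selected-block-devices.
split_devices() {
  printf '%s\n' "$1" | sed 's/\.\./ /g'
}

# e.g. iterate the selected devices:
#   for d in $(split_devices "nvme0n1..nvme1n1"); do echo "/dev/$d"; done
```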

Verification Checklist

Run on each node to confirm readiness:

# 1. NVMe drives visible
lsblk -d -o NAME,SIZE,TRAN | grep nvme

# 2. Required packages installed
nvme version && mdadm --version && mkfs.xfs -V

# 3. NVMe-oF target modules loaded
ls /sys/kernel/config/nvmet/

# 4. Node annotated
kubectl get node <node-name> -o jsonpath='{.metadata.annotations.meshstor\.io/nvme-over-tcp-address}'

What's Next

  • Installation — deploy MeshStor to your cluster
  • Compatibility — supported Kubernetes versions, kernels, and Linux distributions
  • Internals — understand the components before deploying