MeshStor CSI¶
Distributed block storage for bare-metal Kubernetes — replicated NVMe volumes with no external dependencies.
MeshStor is a Kubernetes CSI driver that creates replicated block volumes by partitioning local NVMe drives, exporting the partitions across nodes via NVMe-oF, and assembling them into MD RAID arrays formatted with XFS.
Why MeshStor¶
Native NVMe Performance¶
Volumes are backed by GPT partitions on local NVMe drives, exported to remote nodes using NVMe-oF (TCP or RDMA). The entire data path stays in the kernel — no userspace proxies, no protocol translation.
Kernel-Level Replication¶
Replication uses Linux MD RAID1 and RAID10 — the same subsystem that has protected data in production Linux systems for decades. No custom replication protocol to debug or distrust.
Zero External Dependencies¶
No etcd cluster, no separate storage control plane, no additional agents. MeshStor runs as a single binary deployed as a StatefulSet (controller) and DaemonSet (node plugin) in your existing Kubernetes cluster.
How It Works¶
```mermaid
flowchart LR
A["1. Partition\nlocal NVMe drives\nvia GPT"] --> B["2. Export partitions\nto remote nodes\nvia NVMe-oF"]
B --> C["3. Assemble\nMD RAID array\nacross nodes"]
C --> D["4. Mount XFS\nfilesystem\nto pod"]
```
The controller selects which nodes should host partitions based on available capacity, network latency, and RDMA support. Each node manages its own partitions, exports, and imports through a reconciliation loop that runs every 10 seconds.
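As a rough illustration of that placement step, the sketch below ranks candidate nodes by free capacity, measured latency, and RDMA support. This is a hypothetical Python model — the field names, weights, and scoring formula are illustrative assumptions, not MeshStor's actual code.

```python
# Hypothetical sketch of how a controller might rank candidate nodes
# for hosting a volume's partitions. Names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_bytes: int    # available NVMe capacity on the node
    latency_us: float  # round-trip latency to the publishing node
    rdma: bool         # NVMe-oF RDMA transport available

def score(node: Node, volume_bytes: int) -> float:
    """Higher is better; nodes without enough capacity are disqualified."""
    if node.free_bytes < volume_bytes:
        return float("-inf")
    headroom = node.free_bytes - volume_bytes
    transport_bonus = 1.5 if node.rdma else 1.0  # prefer RDMA-capable nodes
    return transport_bonus * headroom / (1.0 + node.latency_us)

def pick_nodes(nodes: list[Node], volume_bytes: int, copies: int) -> list[str]:
    eligible = [n for n in nodes if score(n, volume_bytes) > float("-inf")]
    eligible.sort(key=lambda n: score(n, volume_bytes), reverse=True)
    if len(eligible) < copies:
        raise RuntimeError("not enough eligible nodes for requested copies")
    return [n.name for n in eligible[:copies]]
```

In the real driver each selected node then creates, exports, or imports its partitions through the 10-second reconciliation loop described above.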
Feature Matrix¶
| Feature | Support |
|---|---|
| Replication | 1–3+ copies (RAID1) |
| Striping | RAID10 via drivesPerCopy |
| Transport | NVMe-oF TCP (port 4420) + RDMA (port 4421), auto-selected |
| Volume Expansion | Online resize (drivesPerCopy=1 only) |
| Filesystem | XFS |
| Minimum Volume Size | 512 MiB |
| Access Modes | ReadWriteOnce (RWO) |
| Snapshots | Not supported |
| Raw Block Volumes | Not supported |
| Encryption | Not supported |
| Multi-Node Write | Not supported |
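For example, striping (RAID10) is enabled by raising `drivesPerCopy` above 1 in the StorageClass parameters. The manifest below is an illustrative sketch — the class name is made up, and the parameter names follow the Quick Start example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mesh-2copy-striped   # hypothetical name
provisioner: io.meshstor.csi.mesh
parameters:
  numberOfCopies: "2"   # two replicas (RAID1 layer)
  drivesPerCopy: "2"    # stripe each replica across two drives -> RAID10
reclaimPolicy: Delete
```

Note that, per the matrix above, online volume expansion is only available when `drivesPerCopy` is 1, so striped classes should not set `allowVolumeExpansion: true`.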
Comparison¶
| | MeshStor | Longhorn / OpenEBS Replicated | TopoLVM / Local PV | Rook Ceph |
|---|---|---|---|---|
| Data path | Kernel (NVMe-oF + MD) | Userspace (iSCSI / NBD) | Kernel (local only) | Kernel (RADOS) |
| Replication | MD RAID1/10 across nodes | Custom replication engine | None (single node) | CRUSH + PG replication |
| External dependencies | None | Longhorn manager | None | Ceph cluster (MON, OSD, MDS) |
| Survives node failure | Yes | Yes | No | Yes |
| Complexity | Low | Medium | Low | High |
Quick Start¶
Create a StorageClass and provision your first replicated volume in minutes:
```shell
kubectl apply -f deploy/
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mesh-2copy-tcp
provisioner: io.meshstor.csi.mesh
parameters:
  numberOfCopies: "2"
  drivesPerCopy: "1"
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```
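A PersistentVolumeClaim that binds to this class might look like the sketch below. The claim name is illustrative; `ReadWriteOnce` is the only supported access mode, and requests must be at least the 512 MiB minimum volume size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-mesh   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce           # the only access mode MeshStor supports
  storageClassName: mesh-2copy-tcp
  resources:
    requests:
      storage: 1Gi            # must be >= 512 MiB
```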
See the full Quickstart Guide for prerequisites, node setup, and verification steps.
What's Next¶
- Architecture — how data flows from pod to disk through the kernel
- Internals — understand the controller and node components
- Prerequisites — check if your cluster is ready
- Quickstart — deploy MeshStor and create your first volume