Use Cases¶
This page is a gut-check. It does not list every workload that might fit MeshStor — it lists the patterns where MeshStor is clearly the right answer, the patterns where it is clearly not, and the patterns where MeshStor will eventually fit but does not yet.
For the substantive head-to-head against named alternatives, see Comparison. For the maturity story, see Project Status.
Use MeshStor when…¶
- You run Kubernetes on bare-metal nodes with NVMe drives. This is the primary target environment. NVMe drives are required; SATA SSDs and spinning disks are not supported.
- You want the simplest possible storage control plane. MeshStor runs as a single binary in your existing cluster. There is no etcd cluster to operate, no separate storage cluster, no additional agents.
- You need replicated block storage for stateful workloads that don't replicate themselves. Databases without built-in clustering, file servers, message queues with single-node persistence — these are the canonical fit.
- You want local-storage simplicity but with operational features that pure local CSIs cannot provide. Use `numberOfCopies=1`. The volume goes through the same data path as a replicated volume but only writes to one underlying partition. You get pod rescheduling across nodes, partition relocation on drain, and (when shipped) cross-node snapshots — none of which are possible with TopoLVM, OpenEBS LocalPV-LVM, or `local-path-provisioner`.
- You want one storage class to cover both replicated and unreplicated workloads. A single MeshStor StorageClass can serve replicated databases (`numberOfCopies=2` or `3`) and unreplicated caches (`numberOfCopies=1`) without deploying two different CSI drivers.
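As a rough sketch of that single-driver setup: the provisioner name below is an illustrative assumption, not a confirmed identifier, and because stock Kubernetes fixes StorageClass parameters per class, the sketch uses one class per `numberOfCopies` value, both backed by the same driver.

```yaml
# Sketch only: csi.meshstor.example is an assumed provisioner name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: meshstor-replicated
provisioner: csi.meshstor.example   # assumed driver name
parameters:
  numberOfCopies: "3"               # replicated databases
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: meshstor-local
provisioner: csi.meshstor.example   # same driver, no second CSI install
parameters:
  numberOfCopies: "1"               # unreplicated caches
```

Workloads then pick the class via `storageClassName` on their PVCs; only one CSI driver is deployed either way.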
Don't use MeshStor when…¶
Don't use ever¶
- You need real ReadWriteMany. MeshStor is a block storage driver. RWX is not on the roadmap and not under consideration. Use a file storage system (NFS, CephFS, an SMB server) for shared filesystems.
- You need file or object storage. MeshStor only provides block volumes. For object storage use MinIO, Rook-Ceph RGW, or a cloud object store. For file storage use the same NFS/CephFS/SMB options as above.
- You need multi-cluster active-active replication. MeshStor replicates within a single Kubernetes cluster. Cross-cluster replication is not on the roadmap.
Don't use yet (feature is on the roadmap)¶
- You need snapshots or clones. They are planned for the open-source roadmap. The implementation will pay its cost at snapshot creation time only — see Project Status.
- You need raw block volumes. Planned for the open-source roadmap. MeshStor currently supports XFS-formatted volumes only.
- You need an Ext4 filesystem. Planned for the open-source roadmap. MeshStor currently supports XFS only.
- You need encryption at rest. Planned for paid customers. Not on the open-source roadmap.
- You need volume expansion for `drivesPerCopy ≥ 2` (RAID10). Planned for paid customers. Online expansion currently works only when `drivesPerCopy=1`.
- You need KubeVirt live migration support. Planned for paid customers. It will be implemented as a controlled multi-attach shim that lets two pods on different nodes hold the same volume during the migration window.
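For the expansion caveat above, a minimal sketch of how the supported case might be configured, assuming the provisioner name and `drivesPerCopy` parameter key shown here (the key comes from this page; the provisioner name is an assumption):

```yaml
# Sketch only: expansion is opted into on the class via the standard
# allowVolumeExpansion field; per the limitation above, online
# expansion is expected to work only when drivesPerCopy=1.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: meshstor-expandable
provisioner: csi.meshstor.example   # assumed driver name
allowVolumeExpansion: true
parameters:
  numberOfCopies: "2"
  drivesPerCopy: "1"                # the currently expandable layout
```

With a class like this, growing a volume is the usual Kubernetes flow: edit the PVC's `spec.resources.requests.storage` to the new size and wait for the resize to complete.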
Use a different category entirely¶
- You're on managed-cloud Kubernetes (EKS, GKE, AKS) with EBS / Persistent Disk available. Use the cloud's native CSI driver. MeshStor's value proposition (kernel-grade replicated block on bare metal) does not apply.
- Your application replicates its own data and you are extremely latency-sensitive. Evaluate `local-path-provisioner` against MeshStor with `numberOfCopies=1` on your specific workload — see the local-storage section of Comparison.
What's Next¶
- Comparison — head-to-head against alternatives
- Project Status — maturity statement and roadmap
- Limitations — canonical list of unsupported features and roadmap status
- Prerequisites — confirm your cluster meets the hardware requirements