Object Storage: What Can Replace MinIO?

A comprehensive comparison of MinIO alternatives for S3-compatible object storage, covering SeaweedFS, Garage, Ceph RGW, and OpenMaxIO with full deployment guides, Docker Compose configurations, and performance benchmarks.

Introduction: Why Look for MinIO Alternatives?

MinIO has long been the go-to self-hosted S3-compatible object storage solution. However, several recent developments have pushed many teams to consider alternatives:

  • 2021: MinIO switched to the AGPL v3 license
  • 2025: The web interface was removed from the free version
  • Pricing: Commercial licensing starts at $96,000 per year for storage up to 1TB

The AGPL v3 license with its "network copyleft" provision requires disclosure of source code for all applications that interact with MinIO over the network. For many commercial projects, this is a dealbreaker.

[Image: MinIO alternatives comparison]

Recommended Alternatives Overview

Solution   | Language | License    | Complexity | Best For
SeaweedFS  | Go       | Apache 2.0 | Medium     | Billions of files, CDN
Garage     | Rust     | AGPL v3    | Low        | Edge, geo-distribution
Ceph RGW   | C++      | LGPL       | High       | Enterprise, petabytes
OpenMaxIO  | Go       | Apache 2.0 | Low        | Temporary migration

SeaweedFS

SeaweedFS is the recommended choice for most use cases, offering an optimal balance of functionality and simplicity with an open Apache 2.0 license.

Simple Development Setup

version: '3.9'

services:
  seaweedfs:
    image: chrislusf/seaweedfs:latest
    ports:
      - "9333:9333"   # Master
      - "8080:8080"   # Volume
      - "8888:8888"   # Filer
      - "8333:8333"   # S3
    command: 'server -s3 -dir=/data'
    volumes:
      - ./data:/data
    restart: unless-stopped

Production 3-Node Cluster

For production deployment, SeaweedFS uses a master-volume-filer-s3 architecture. Below is the Docker Compose configuration for the first master node of a 3-node cluster; the remaining masters, volume servers, filer, and S3 gateway follow the same pattern:

version: '3.9'

networks:
  seaweedfs:
    driver: bridge

services:
  master1:
    image: chrislusf/seaweedfs:latest
    networks:
      - seaweedfs
    ports:
      - "9333:9333"
      - "19333:19333"
    command: >
      master 
      -ip=master1 
      -ip.bind=0.0.0.0 
      -port=9333
      -mdir=/data
      -peers=master1:9333,master2:9333,master3:9333
      -volumeSizeLimitMB=1024
      -defaultReplication=001
      -garbageThreshold=0.3
      -metricsPort=9324
    volumes:
      - ./data/master1:/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9333/cluster/status"]
      interval: 30s
      timeout: 10s
      retries: 3

The cluster includes three master nodes, three volume servers (each on a separate rack for fault tolerance), a filer service, and an S3 gateway. Each volume server is configured with a maximum of 300 volumes and a 256MB file size limit.
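
The -defaultReplication=001 flag above encodes placement as three digits. A hypothetical helper (not part of SeaweedFS) illustrating how the code is read, with digit semantics as documented by SeaweedFS:

```python
def decode_replication(code: str) -> dict:
    """Decode a SeaweedFS replication code 'xyz':
    x = extra copies in other data centers,
    y = extra copies on other racks in the same data center,
    z = extra copies on other servers in the same rack."""
    if len(code) != 3 or not code.isdigit():
        raise ValueError(f"invalid replication code: {code!r}")
    x, y, z = (int(c) for c in code)
    return {
        "other_datacenters": x,
        "other_racks": y,
        "other_servers": z,
        "total_copies": 1 + x + y + z,
    }

print(decode_replication("001")["total_copies"])  # 2: original plus one copy on another server in the same rack
print(decode_replication("100")["total_copies"])  # 2: original plus one copy in another data center
```

With 001 the extra copy lands on a different server in the same rack; 010 would push it to a different rack instead.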

Configuration Files

filer.toml - supports LevelDB (dev), PostgreSQL (recommended for prod), and MySQL backends:

[filer.options]
recursive_delete = false
max_file_name_length = 512

[leveldb2]
enabled = true
dir = "/data/filerldb2"

[postgres2]
enabled = false
hostname = "postgres"
port = 5432
username = "seaweedfs"
password = "seaweedfs_password"
database = "seaweedfs"
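
Enabling the [postgres2] backend assumes a reachable PostgreSQL service named postgres (matching the hostname above). A minimal sketch of such a service to merge into the services: section of the same Compose file (image tag and credentials are placeholders); remember to set enabled = true under [postgres2] and enabled = false under [leveldb2]:

```yaml
  postgres:
    image: postgres:16
    networks:
      - seaweedfs
    environment:
      POSTGRES_USER: seaweedfs
      POSTGRES_PASSWORD: seaweedfs_password
      POSTGRES_DB: seaweedfs
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    restart: unless-stopped
```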

s3.json - S3 credentials with role-based access (the example below defines the admin and anonymous identities; read-only and writer entries follow the same pattern):

{
  "identities": [
    {
      "name": "admin",
      "credentials": [{"accessKey": "BESTACCESSKEY", "secretKey": "AdminSecretKeyChangeInProduction12345678"}],
      "actions": ["Admin", "Read", "Write", "List", "Tagging"]
    },
    {
      "name": "anonymous",
      "actions": ["Read:public-*"]
    }
  ]
}

AWS CLI Usage

# Configure profile
aws configure --profile seaweedfs

# Create bucket
aws --profile seaweedfs --endpoint-url http://localhost:8333 \
    s3 mb s3://test-bucket

# Upload file
aws --profile seaweedfs --endpoint-url http://localhost:8333 \
    s3 cp test.txt s3://test-bucket/

# Sync directory (sync is recursive by default)
aws --profile seaweedfs --endpoint-url http://localhost:8333 \
    s3 sync ./local-folder s3://test-bucket/backup/

# Generate presigned URL
aws --profile seaweedfs --endpoint-url http://localhost:8333 \
    s3 presign s3://test-bucket/test.txt --expires-in 3600

Monitoring with Prometheus and Grafana

prometheus.yml - scrape configuration for the metrics ports exposed by the master, volume, and filer components:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'seaweedfs-master'
    static_configs:
      - targets: ['master1:9324', 'master2:9324', 'master3:9324']
  - job_name: 'seaweedfs-volume'
    static_configs:
      - targets: ['volume1:9325', 'volume2:9325', 'volume3:9325']
  - job_name: 'seaweedfs-filer'
    static_configs:
      - targets: ['filer:9326']

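These scrape jobs can feed a basic availability alert. A minimal example rule file using only Prometheus's built-in up metric (job names match the scrape config above):

```yaml
groups:
  - name: seaweedfs
    rules:
      - alert: SeaweedFSTargetDown
        expr: up{job=~"seaweedfs-.*"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "SeaweedFS component {{ $labels.instance }} is down"
```
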
FUSE Mounting

mkdir -p /mnt/seaweedfs
./weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs -filer.path=/

Garbage Collection

# Via weed shell
docker exec -it seaweedfs weed shell
> volume.vacuum -garbageThreshold 0.3

# Or via HTTP API
curl -X POST "http://localhost:9333/vol/vacuum?garbageThreshold=0.3"
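
A garbageThreshold of 0.3 means a volume becomes a vacuum candidate once deleted entries make up more than roughly 30% of its size. An illustrative Python sketch of that decision (not actual SeaweedFS code):

```python
def should_vacuum(deleted_bytes: int, total_bytes: int,
                  garbage_threshold: float = 0.3) -> bool:
    """Return True when the share of deleted (garbage) bytes in a volume
    exceeds the threshold, i.e. the volume is worth compacting."""
    if total_bytes == 0:
        return False
    return deleted_bytes / total_bytes > garbage_threshold

print(should_vacuum(400, 1000))  # True: 40% garbage, above the 0.3 threshold
print(should_vacuum(200, 1000))  # False: 20% garbage, below the threshold
```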

Garage

Garage is a lightweight S3-compatible object storage written in Rust, designed for edge deployments and geo-distributed setups.

Single-Node Setup

# Generate secrets
RPC_SECRET=$(openssl rand -hex 32)
ADMIN_TOKEN=$(openssl rand -base64 32)

# Create configuration
cat > ~/garage/garage.toml << EOF
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"
replication_factor = 1
rpc_bind_addr = "[::]:3901"
rpc_secret = "${RPC_SECRET}"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"

[admin]
api_bind_addr = "[::]:3903"
admin_token = "${ADMIN_TOKEN}"
EOF

# Run via Docker
docker run -d --name garage \
  -p 3900:3900 -p 3901:3901 -p 3903:3903 \
  -v ~/garage/garage.toml:/etc/garage.toml:ro \
  -v ~/garage/data:/var/lib/garage/data \
  -v ~/garage/meta:/var/lib/garage/meta \
  dxflrs/garage:v2.1.0
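
The multi-node commands that follow assume each host runs Garage with a cluster-ready configuration. A sketch of node1's garage.toml (addresses and secrets are placeholders; rpc_secret must be identical on all nodes, while rpc_public_addr is unique per host):

```toml
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"
replication_factor = 3

rpc_bind_addr = "[::]:3901"
rpc_public_addr = "192.168.1.10:3901"   # this node's reachable address
rpc_secret = "SAME_HEX_SECRET_ON_ALL_NODES"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"

[admin]
api_bind_addr = "[::]:3903"
admin_token = "SAME_ADMIN_TOKEN_ON_ALL_NODES"
```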

Multi-Node Cluster Initialization

# Get Node ID on each node
docker exec garage-node1 /garage node id

# Connect nodes
docker exec garage-node1 /garage node connect 7b8d5e2a1f@192.168.1.11:3901

# Create layout with zones
docker exec garage-node1 /garage layout assign --zone paris --capacity 500G 5a7c4f3b9e
docker exec garage-node1 /garage layout assign --zone berlin --capacity 1T 7b8d5e2a1f
docker exec garage-node1 /garage layout assign --zone london --capacity 500G 9c6a3d8e4b

# Apply layout
docker exec garage-node1 /garage layout apply --version 1

User and Bucket Management

# Create bucket
docker exec garage-node1 /garage bucket create my-data

# Create key
docker exec garage-node1 /garage key create my-app-key

# Grant permissions
docker exec garage-node1 /garage bucket allow --read --write --owner my-data --key my-app-key

Ceph RGW (RADOS Gateway)

Ceph is the heavyweight enterprise option, suitable for petabyte-scale deployments. It offers the highest S3 API compatibility (82%) but comes with significantly higher operational complexity.

MicroCeph Quick Start

# Install MicroCeph
sudo snap install microceph --channel=latest/stable

# Bootstrap cluster
sudo microceph cluster bootstrap

# Add storage (real disk or loop device for testing)
sudo microceph disk add /dev/sdb --wipe

# Enable Object Gateway
sudo microceph enable rgw

# Create S3 user
sudo radosgw-admin user create \
  --uid=myuser --display-name="My User" \
  --access-key=myaccesskey --secret=mysupersecretkey

Production Cephadm Deployment

# Bootstrap cluster
sudo cephadm bootstrap \
  --mon-ip 192.168.1.10 \
  --initial-dashboard-user admin \
  --initial-dashboard-password SecureP@ssw0rd

# Add nodes
sudo cephadm shell -- ceph orch host add node2 192.168.1.11
sudo cephadm shell -- ceph orch host add node3 192.168.1.12

# Add OSD on all nodes
sudo cephadm shell -- ceph orch apply osd --all-available-devices

Kubernetes with Rook

# Install Rook operator
kubectl create namespace rook-ceph
kubectl apply -f https://raw.githubusercontent.com/rook/rook/v1.14.0/deploy/examples/crds.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/v1.14.0/deploy/examples/common.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/v1.14.0/deploy/examples/operator.yaml
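
Once the operator is running and a CephCluster resource has been created, an RGW endpoint can be declared as a CephObjectStore. A sketch modeled on Rook's example manifests (name and pool sizing are placeholders):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 4
      codingChunks: 2
  gateway:
    port: 80
    instances: 1
```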

Erasure Coding for Storage Efficiency

# Create EC profile (4+2)
sudo ceph osd erasure-code-profile set ec-42-profile k=4 m=2 crush-failure-domain=host

# Create EC pool
sudo ceph osd pool create ec-data-pool 32 32 erasure ec-42-profile
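
A 4+2 profile writes six chunks for every four data chunks, so each usable byte costs 1.5 bytes of raw storage while tolerating the loss of any two chunks; three-way replication costs 3x for comparable durability. A quick check of that arithmetic:

```python
def ec_overhead(k: int, m: int) -> float:
    """Raw-to-usable storage ratio for a k+m erasure-coded pool."""
    return (k + m) / k

def replication_overhead(copies: int) -> float:
    """Raw-to-usable storage ratio for n-way replication."""
    return float(copies)

print(ec_overhead(4, 2))        # 1.5x raw storage per usable byte
print(replication_overhead(3))  # 3.0x for 3-way replication
```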

Troubleshooting

sudo ceph health detail
sudo ceph osd perf
sudo ceph daemon osd.0 dump_blocked_ops
sudo journalctl -u 'ceph-radosgw@rgw.*' -f

OpenMaxIO

OpenMaxIO is a drop-in MinIO replacement with an Apache 2.0 license, ideal for temporary migration when you need to quickly escape MinIO's licensing changes:

docker run -d --name openmaxio \
  -p 9000:9000 -p 9001:9001 \
  -v /data:/data \
  -e "MINIO_ROOT_USER=admin" \
  -e "MINIO_ROOT_PASSWORD=password123456" \
  openmaxio/openmaxio:latest \
  server /data --console-address ":9001"

Performance Benchmarks

Testing was conducted on identical hardware (3x VM, 4 vCPU, 16GB RAM, NVMe SSD):

  • Small files (1KB): SeaweedFS delivered 45,000 ops/sec versus MinIO's 12,500 ops/sec
  • Large files (100MB): All solutions reached ~900-1000 MB/s throughput, with differences mainly in resource consumption
  • S3 API compatibility: Ceph RGW leads at 82%, MinIO historically at 46%, SeaweedFS at ~29%

Conclusion

For most use cases, SeaweedFS is recommended as the optimal balance of functionality, simplicity, and open licensing. Use Ceph RGW for enterprise-grade deployments requiring maximum S3 compatibility, Garage for edge and geo-distributed scenarios, and OpenMaxIO as a quick migration path away from MinIO.