Welcome to Danube Messaging
Danube Messaging is an open-source messaging platform built in Rust for teams that need reliable pub/sub and streaming without the operational overhead. Built on Tokio and openraft, it replicates metadata through embedded Raft consensus, so there are no external dependencies to deploy or manage.
Producers publish to topics, and consumers receive messages via named subscriptions. Choose Non-Reliable (best-effort pub/sub) or Reliable (at-least-once streaming) per topic to match your workload. For design details, see the Architecture section.
Get Started
The fastest way to try Danube is to download the binary from the releases page and run it.
No config file, no dependencies. The broker listens on 127.0.0.1:6650 and the admin API on 127.0.0.1:50051.
For other deployment options: Docker Compose · Kubernetes · Local multi-broker
Deployment Modes
Danube runs as a single binary (danube-broker) in three modes. Choose based on your use case:
Standalone
A single self-contained broker with zero config. Ideal for development, CI, and single-server deployments.
Cluster
Multiple brokers forming a Raft consensus group with leader election, automated topic distribution, and load-based rebalancing. The recommended mode for production.
```shell
danube-broker --config-file danube_broker.yml \
  --broker-addr 0.0.0.0:6650 --raft-addr 0.0.0.0:7650 \
  --data-dir ./data/raft --seed-nodes "node1:7650,node2:7650,node3:7650"
```
Edge
A lightweight MQTT gateway that ingests IoT device data at the edge and replicates it to the central cluster. Devices publish via standard MQTT (v3.1.1 / v5.0); the edge broker validates payloads, buffers into a local WAL, and continuously replicates to the cloud.
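The validate-buffer-replicate loop can be sketched roughly as below; the in-memory queue stands in for the local WAL, and the trivial non-empty check stands in for real payload validation — both are simplifying assumptions, not Danube's actual implementation.

```rust
use std::collections::VecDeque;

/// A stand-in for the edge broker's local WAL: validated payloads are
/// buffered locally and later drained to the central cluster in batches.
struct EdgeBuffer {
    wal: VecDeque<Vec<u8>>,
}

impl EdgeBuffer {
    fn new() -> Self {
        Self { wal: VecDeque::new() }
    }

    /// Validate an incoming MQTT payload and append it to the buffer.
    /// (Validation here is a trivial non-empty check, purely illustrative.)
    fn ingest(&mut self, payload: Vec<u8>) -> bool {
        if payload.is_empty() {
            return false;
        }
        self.wal.push_back(payload);
        true
    }

    /// Drain up to `batch` buffered entries, e.g. to ship to the cloud.
    fn drain_batch(&mut self, batch: usize) -> Vec<Vec<u8>> {
        let n = batch.min(self.wal.len());
        self.wal.drain(..n).collect()
    }
}

fn main() {
    let mut edge = EdgeBuffer::new();
    assert!(edge.ingest(b"temp=21.5".to_vec()));
    assert!(!edge.ingest(Vec::new())); // invalid payload is rejected
    edge.ingest(b"temp=21.6".to_vec());
    let shipped = edge.drain_batch(10);
    assert_eq!(shipped.len(), 2);
}
```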
See the full Broker Modes guide for a configuration reference and examples.
Core Capabilities
- Message Delivery: Topics (partitioned and non-partitioned), reliable dispatch (at-least-once with NACK, retry backoff, dead-letter queues), and non-reliable high-throughput dispatch.
- Subscriptions: Exclusive, Shared, Failover, and Key-Shared (per-key ordering via consistent hashing with optional key filtering).
- Persistence: Local WAL for fast writes, with optional durable segments on shared filesystems or object stores (S3, GCS, Azure Blob). Tiered replay and metadata-driven recovery across restarts and broker transfers.
- Schema Registry: Centralized schema management with versioning and compatibility enforcement. Supports JSON Schema, Avro, and Protobuf.
- Security: TLS/mTLS encryption, JWT and API-key authentication, fine-grained RBAC authorization with default-deny semantics.
- AI Administration: MCP integration for natural language cluster management via Claude, Cursor, and Windsurf. 40+ tools covering topics, schemas, brokers, diagnostics, and metrics.
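To make the reliable-dispatch mechanics above concrete, here is a minimal Rust sketch of NACK-driven redelivery with exponential backoff and a dead-letter cutoff. The backoff base, cap, and attempt limit are illustrative assumptions, not Danube's actual defaults.

```rust
use std::time::Duration;

/// Outcome of scheduling a redelivery after a NACK.
#[derive(Debug, PartialEq)]
enum Redelivery {
    /// Retry after the given backoff delay.
    Retry(Duration),
    /// Attempts exhausted: route the message to the dead-letter queue.
    DeadLetter,
}

/// Exponential backoff with a cap, moving to the DLQ after `max_attempts`.
/// All numbers here are illustrative, not Danube's configuration defaults.
fn on_nack(attempt: u32, max_attempts: u32) -> Redelivery {
    if attempt >= max_attempts {
        return Redelivery::DeadLetter;
    }
    let base_ms: u64 = 100; // initial delay
    let cap_ms: u64 = 10_000; // upper bound on the delay
    let delay_ms = (base_ms << attempt.min(16)).min(cap_ms);
    Redelivery::Retry(Duration::from_millis(delay_ms))
}

fn main() {
    // Delays double per attempt (100ms, 200ms, ...) until the DLQ cutoff.
    for attempt in 0..5 {
        println!("attempt {attempt}: {:?}", on_nack(attempt, 4));
    }
}
```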
Architecture
Explore how Danube works under the hood:
- System Overview: component interaction, message flow, and cluster topology
- Load Manager & Rebalancing: topic assignment strategies and automated rebalancing
- Persistence Architecture: WAL, durable segments, tiered reads, and recovery
- Schema Registry: versioning, compatibility checking, and governance
- Key-Shared Dispatch: consistent hashing, per-key ordering, and consumer elasticity
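As a rough illustration of the Key-Shared approach listed above, the sketch below hashes message keys onto a fixed ring of slots and maps contiguous slot ranges to consumers. The hash function and slot count are assumptions chosen for illustration and do not reflect Danube's internal implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const SLOTS: u64 = 1024; // size of the hash ring (illustrative)

/// Hash a message key onto a fixed slot ring.
fn slot_for_key(key: &str) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % SLOTS
}

/// Assign each slot to one of `consumers` by splitting the ring into
/// contiguous ranges. Every message with the same key lands on the same
/// consumer, which is what yields per-key ordering.
fn consumer_for_key(key: &str, consumers: usize) -> usize {
    let range = SLOTS / consumers as u64;
    ((slot_for_key(key) / range) as usize).min(consumers - 1)
}

fn main() {
    // The same key always maps to the same consumer.
    assert_eq!(consumer_for_key("device-42", 3), consumer_for_key("device-42", 3));
    for key in ["orders", "payments", "telemetry"] {
        println!("{key} -> consumer {}", consumer_for_key(key, 3));
    }
}
```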
Integrations
Danube Connect: a plug-and-play connector ecosystem
- Source connectors: import data from MQTT, HTTP webhooks, databases, Kafka, etc.
- Sink connectors: export to Delta Lake, ClickHouse, vector databases, APIs, etc.
Learn more: Architecture · Build Source Connector · Build Sink Connector
Client Libraries
- Rust: danube-client · examples
- Go: danube-go · examples
- Java: danube-java · examples
- Python: danube-client · examples
Contributions for other languages (Node.js, C#, etc.) are welcome.
Tools
- danube-cli: command-line producer and consumer for quick testing
- danube-admin: cluster administration (CLI, AI/MCP, Web UI)