
Welcome to Danube Messaging

🌊 Danube Messaging is an open-source messaging platform built in Rust for teams that need reliable pub/sub and streaming without the operational overhead. Built on Tokio and openraft, it replicates metadata through embedded Raft consensus, so there are no external dependencies to deploy or manage.

Producers publish to topics, and consumers receive messages via named subscriptions. Choose Non-Reliable (best-effort pub/sub) or Reliable (at-least-once streaming) delivery per topic to match your workload. For design details, see the Architecture section.


Get Started

The fastest way to try Danube is to download the binary from the releases page and run:

danube-broker --mode standalone --data-dir ./danube-data

No config file, no dependencies. The broker listens on 127.0.0.1:6650 and the admin API on 127.0.0.1:50051.

For other deployment options: Docker Compose · Kubernetes · Local multi-broker


Deployment Modes

Danube runs as a single binary (danube-broker) in three modes. Choose based on your use case:

๐Ÿ–ฅ๏ธ Standalone

A single self-contained broker with zero config. Ideal for development, CI, and single-server deployments.

danube-broker --mode standalone --data-dir ./danube-data

๐ŸŒ Cluster

Multiple brokers forming a Raft consensus group with leader election, automated topic distribution, and load-based rebalancing. The recommended mode for production.

danube-broker --config-file danube_broker.yml \
  --broker-addr 0.0.0.0:6650 --raft-addr 0.0.0.0:7650 \
  --data-dir ./data/raft --seed-nodes "node1:7650,node2:7650,node3:7650"

๐Ÿญ Edge

A lightweight MQTT gateway that ingests IoT device data at the edge and replicates it to the central cluster. Devices publish via standard MQTT (v3.1.1 / v5.0); the edge broker validates payloads, buffers into a local WAL, and continuously replicates to the cloud.

danube-broker --mode edge --data-dir ./edge-data --edge-config edge.yaml

📖 Full Broker Modes guide with configuration reference and examples.


Core Capabilities

📨 Message Delivery : Topics (partitioned and non-partitioned), reliable dispatch (at-least-once with NACK, retry backoff, dead-letter queues), and non-reliable high-throughput dispatch.
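To make the reliable dispatch path concrete, here is a minimal Rust sketch of the at-least-once pattern: a NACKed message is redelivered with exponential backoff until it exhausts its retries, then moved to a dead-letter queue. This is illustrative only, with invented names, not the broker's actual implementation.

```rust
use std::collections::VecDeque;

// Hypothetical names for illustration; not Danube's internal types.
#[derive(Debug, Clone)]
struct Msg {
    id: u64,
    retries: u32,
}

struct ReliableDispatch {
    max_retries: u32,
    base_backoff_ms: u64,
    pending: VecDeque<Msg>,
    dead_letter: Vec<Msg>,
}

impl ReliableDispatch {
    fn new(max_retries: u32, base_backoff_ms: u64) -> Self {
        Self {
            max_retries,
            base_backoff_ms,
            pending: VecDeque::new(),
            dead_letter: Vec::new(),
        }
    }

    // Consumer NACKed the message: either schedule a redelivery with
    // exponential backoff or move it to the dead-letter queue.
    fn nack(&mut self, mut msg: Msg) -> Option<u64> {
        if msg.retries >= self.max_retries {
            self.dead_letter.push(msg);
            return None; // no redelivery; message is dead-lettered
        }
        let backoff = self.base_backoff_ms << msg.retries; // 100, 200, 400, ...
        msg.retries += 1;
        self.pending.push_back(msg);
        Some(backoff)
    }
}

fn main() {
    let mut dispatch = ReliableDispatch::new(3, 100);
    let mut msg = Msg { id: 42, retries: 0 };
    // Simulate a consumer that NACKs every delivery.
    loop {
        match dispatch.nack(msg.clone()) {
            Some(backoff) => {
                println!("redeliver msg {} after {} ms", msg.id, backoff);
                msg = dispatch.pending.pop_front().unwrap();
            }
            None => {
                println!("msg {} moved to DLQ after {} retries", msg.id, msg.retries);
                break;
            }
        }
    }
}
```

The real broker also tracks acknowledgements per subscription; this sketch only shows the retry/DLQ decision itself.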

🔄 Subscriptions : Exclusive, Shared, Failover, and Key-Shared (per-key ordering via consistent hashing with optional key filtering).
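As an illustration of the Key-Shared idea, the following Rust sketch assigns message keys to consumers via a consistent-hash ring, so every message with a given key always reaches the same consumer. The names are invented for the example; this is not Danube's internal code.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash>(value: &T) -> u64 {
    let mut h = DefaultHasher::new();
    value.hash(&mut h);
    h.finish()
}

// A ring of (point, consumer) pairs; each consumer owns several virtual
// nodes so keys spread evenly across consumers.
struct HashRing {
    points: Vec<(u64, String)>, // sorted by point
}

impl HashRing {
    fn new(consumers: &[&str], vnodes: u32) -> Self {
        let mut points = Vec::new();
        for c in consumers {
            for v in 0..vnodes {
                points.push((hash_of(&format!("{c}-{v}")), c.to_string()));
            }
        }
        points.sort();
        Self { points }
    }

    // Walk clockwise from the key's hash to the first virtual node.
    fn consumer_for(&self, key: &str) -> &str {
        let h = hash_of(&key);
        let entry = self
            .points
            .iter()
            .find(|e| e.0 >= h)
            .unwrap_or(&self.points[0]); // wrap around the ring
        &entry.1
    }
}

fn main() {
    let ring = HashRing::new(&["consumer-a", "consumer-b", "consumer-c"], 16);
    for key in ["order-1", "order-2", "order-1"] {
        println!("{key} -> {}", ring.consumer_for(key));
    }
    // Both "order-1" messages land on the same consumer, preserving
    // per-key ordering.
    assert_eq!(ring.consumer_for("order-1"), ring.consumer_for("order-1"));
}
```

The virtual-node trick also keeps reassignment small when a consumer joins or leaves: only the keys on its ring segments move.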

💾 Persistence : Local WAL for fast writes, with optional durable segments on shared filesystems or object stores (S3, GCS, Azure Blob). Tiered replay and metadata-driven recovery across restarts and broker transfers.
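The WAL write path can be sketched as an append-only log with monotonically increasing offsets: a consumer that reconnects replays from its last acknowledged offset. The toy Rust example below is in-memory and purely conceptual, not the broker's actual on-disk format.

```rust
// Conceptual sketch only; Danube's real WAL writes framed records to disk.
struct Wal {
    entries: Vec<Vec<u8>>, // index == offset
}

impl Wal {
    fn new() -> Self {
        Self { entries: Vec::new() }
    }

    // Append a record and return the offset it was assigned.
    fn append(&mut self, payload: &[u8]) -> u64 {
        self.entries.push(payload.to_vec());
        (self.entries.len() - 1) as u64
    }

    // Replay every record at or after `from`, e.g. after a restart or a
    // topic transfer to another broker.
    fn replay(&self, from: u64) -> impl Iterator<Item = (u64, &[u8])> + '_ {
        self.entries
            .iter()
            .enumerate()
            .skip(from as usize)
            .map(|(i, e)| (i as u64, e.as_slice()))
    }
}

fn main() {
    let mut wal = Wal::new();
    for msg in ["a", "b", "c"] {
        wal.append(msg.as_bytes());
    }
    // The consumer acknowledged offset 0, so replay starts at offset 1.
    for (offset, payload) in wal.replay(1) {
        println!("offset {offset}: {}", String::from_utf8_lossy(payload));
    }
}
```

Tiering then means older segments can live on an object store while recent offsets stay in the local WAL; replay stitches the two together transparently.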

📋 Schema Registry : Centralized schema management with versioning and compatibility enforcement. Supports JSON Schema, Avro, and Protobuf.

🔒 Security : TLS/mTLS encryption, JWT and API-key authentication, fine-grained RBAC authorization with default-deny semantics.
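Default-deny means a request is rejected unless an explicit grant matches it. A minimal Rust sketch of that check, with hypothetical names rather than Danube's actual authorization code:

```rust
use std::collections::HashMap;

// Invented types for illustration only.
#[derive(Hash, PartialEq, Eq, Clone, Copy)]
enum Action {
    Produce,
    Consume,
}

struct Rbac {
    // (role, topic) -> explicitly granted actions
    grants: HashMap<(String, String), Vec<Action>>,
}

impl Rbac {
    fn new() -> Self {
        Self { grants: HashMap::new() }
    }

    fn grant(&mut self, role: &str, topic: &str, action: Action) {
        self.grants
            .entry((role.into(), topic.into()))
            .or_default()
            .push(action);
    }

    // Default-deny: no matching grant means the request is rejected.
    fn is_allowed(&self, role: &str, topic: &str, action: Action) -> bool {
        self.grants
            .get(&(role.to_string(), topic.to_string()))
            .map_or(false, |actions| actions.contains(&action))
    }
}

fn main() {
    let mut rbac = Rbac::new();
    rbac.grant("analytics", "events", Action::Consume);
    assert!(rbac.is_allowed("analytics", "events", Action::Consume));
    assert!(!rbac.is_allowed("analytics", "events", Action::Produce)); // not granted
    assert!(!rbac.is_allowed("intern", "events", Action::Consume)); // unknown role
    println!("default-deny checks passed");
}
```

The key property is that the absence of a rule denies access; misconfiguration fails closed rather than open.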

🤖 AI Administration : MCP integration for natural language cluster management via Claude, Cursor, and Windsurf. 40+ tools covering topics, schemas, brokers, diagnostics, and metrics.


Architecture

Explore how Danube works under the hood:


Integrations

Danube Connect : plug-and-play connector ecosystem

  • Source connectors: import data from MQTT, HTTP webhooks, databases, Kafka, etc.
  • Sink connectors: export to Delta Lake, ClickHouse, vector databases, APIs, etc.

Learn more: Architecture · Build Source Connector · Build Sink Connector


Client Libraries

Contributions for other languages (Node.js, C#, etc.) are welcome.

Tools

  • danube-cli : command-line producer and consumer for quick testing
  • danube-admin : cluster administration (CLI, AI/MCP, Web UI)