Introduction
Dugite is a Cardano node implementation written in Rust, aiming for 100% compatibility with the Haskell cardano-node. It is built by Sandstone Pool.
Why Dugite?
The Cardano ecosystem benefits from client diversity. Running multiple independent node implementations strengthens the network:
- Resilience — A bug in one implementation does not bring down the entire network.
- Performance — Rust's zero-cost abstractions and memory safety without garbage collection enable high-throughput block processing.
- Verification — An independent implementation validates the Cardano specification against the reference Haskell node, catching ambiguities and edge cases.
- Accessibility — A Rust codebase broadens the pool of developers who can contribute to Cardano infrastructure.
Key Features
- Full Ouroboros Praos consensus — Slot leader checks, VRF validation, KES period tracking, epoch nonce computation.
- Multi-era support — Byron, Shelley, Allegra, Mary, Alonzo, Babbage, and Conway eras.
- Conway governance (CIP-1694) — DRep registration, voting, proposals, constitutional committee, treasury withdrawals.
- Pipelined multi-peer sync — Header collection from a primary peer with parallel block fetching from multiple peers.
- Plutus script execution — Plutus V1/V2/V3 evaluation via the uplc CEK machine.
- Node-to-Node (N2N) protocol — Full Ouroboros mini-protocol suite: ChainSync, BlockFetch, TxSubmission2, KeepAlive, PeerSharing.
- Node-to-Client (N2C) protocol — Unix domain socket server with LocalChainSync, LocalStateQuery, LocalTxSubmission, and LocalTxMonitor.
- cardano-cli compatible CLI — Key generation, transaction building, signing, submission, queries, and governance commands.
- Prometheus metrics — Real-time node metrics on port 12798.
- P2P networking — Peer manager with cold/warm/hot lifecycle, DNS multi-resolution, ledger-based peer discovery, and inbound rate limiting.
- Mithril snapshot import — Fast initial sync by importing a Mithril-certified snapshot.
- SIGHUP topology reload — Update peer configuration without restarting the node.
Project Status
Dugite is under active development. It can sync against both the Cardano mainnet and preview/preprod testnets. The node implements the full N2N and N2C protocol stacks, ledger validation, epoch transitions with stake snapshots and reward distribution, and Conway-era governance.
For a detailed checklist of implemented and pending features, see the Developer Wiki.
License
Dugite is released under the Apache-2.0 License.
Installation
Dugite can be installed from pre-built binaries, container images, or built from source.
Pre-built Binaries
Download the latest release from GitHub Releases:
| Platform | Architecture | Download |
|---|---|---|
| Linux | x86_64 | dugite-x86_64-linux.tar.gz |
| Linux | aarch64 | dugite-aarch64-linux.tar.gz |
| macOS | x86_64 (Intel) | dugite-x86_64-macos.tar.gz |
| macOS | Apple Silicon | dugite-aarch64-macos.tar.gz |
# Example: download and extract for Linux x86_64
curl -LO https://github.com/michaeljfazio/dugite/releases/latest/download/dugite-x86_64-linux.tar.gz
tar xzf dugite-x86_64-linux.tar.gz
sudo mv dugite-node dugite-cli dugite-monitor dugite-config /usr/local/bin/
Verify checksums:
curl -LO https://github.com/michaeljfazio/dugite/releases/latest/download/SHA256SUMS.txt
sha256sum -c SHA256SUMS.txt
Container Image
Multi-architecture container images (amd64 and arm64) are published to GitHub Container Registry:
docker pull ghcr.io/michaeljfazio/dugite:latest
The image uses a distroless base (gcr.io/distroless/cc-debian12:nonroot) for minimal attack surface — no shell, no package manager, runs as nonroot (UID 65532).
Run the node:
docker run -d \
--name dugite \
-p 3001:3001 \
-p 12798:12798 \
-v dugite-data:/opt/dugite/db \
ghcr.io/michaeljfazio/dugite:latest
See Kubernetes Deployment for production container deployments.
Building from Source
Prerequisites
Rust Toolchain
Install the latest stable Rust toolchain via rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Verify the installation:
rustc --version
cargo --version
Dugite requires the latest stable Rust toolchain (edition 2021). Use rustup update stable to stay current.
System Dependencies
Dugite's storage layer is pure Rust with no system dependencies beyond the Rust toolchain. Block storage uses append-only chunk files, and the UTxO set uses dugite-lsm, a pure Rust LSM tree. On all platforms, cargo build works out of the box.
Build
Clone the repository:
git clone https://github.com/michaeljfazio/dugite.git
cd dugite
Build in release mode:
cargo build --release
On Linux with kernel 5.1+, you can enable io_uring for improved disk I/O in the UTxO LSM tree:
cargo build --release --features io-uring
This produces four binaries in target/release/:
| Binary | Description |
|---|---|
| dugite-node | The Cardano node |
| dugite-cli | The cardano-cli compatible command-line interface |
| dugite-monitor | Terminal monitoring dashboard (ratatui-based, real-time metrics via Prometheus polling) |
| dugite-config | Interactive TUI configuration editor with tree navigation, inline editing, and diff view |
Install Binaries
To install the binaries into your $CARGO_HOME/bin (typically ~/.cargo/bin/):
cargo install --path crates/dugite-node
cargo install --path crates/dugite-cli
cargo install --path crates/dugite-monitor
cargo install --path crates/dugite-config
Running Tests
Verify everything is working (requires cargo-nextest):
cargo nextest run --workspace
Or with the built-in test runner:
cargo test --workspace
The project enforces a zero-warning policy. You can run the full CI check locally:
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo nextest run --workspace
Development Build
For faster compilation during development, use the debug profile (the default):
cargo build
Debug builds are significantly faster to compile but produce slower binaries. Always use --release for running a node against a live network.
Quick Start
This guide walks you through getting Dugite running on the Cardano preview testnet.
1. Install
Option A: Pre-built binary (fastest)
curl -LO https://github.com/michaeljfazio/dugite/releases/latest/download/dugite-x86_64-linux.tar.gz
tar xzf dugite-x86_64-linux.tar.gz
sudo mv dugite-node dugite-cli dugite-monitor dugite-config /usr/local/bin/
Option B: Container image
docker pull ghcr.io/michaeljfazio/dugite:latest
Option C: Build from source
git clone https://github.com/michaeljfazio/dugite.git
cd dugite
cargo build --release
2. Fast Sync with Mithril (Recommended)
Import a Mithril-certified snapshot to skip syncing millions of blocks from genesis:
dugite-node mithril-import \
--network-magic 2 \
--database-path ./db-preview
This downloads the latest snapshot from the Mithril aggregator, extracts it, and imports all blocks into the database. On preview testnet this takes approximately 9 minutes (downloading a ~2.7 GB snapshot containing ~4M blocks).
3. Run the Node
Dugite ships with configuration files for all networks. If you built from source, they are in the config/ directory:
dugite-node run \
--config config/preview-config.json \
--topology config/preview-topology.json \
--database-path ./db-preview \
--socket-path ./node.sock \
--host-addr 0.0.0.0 \
--port 3001
Or with Docker:
docker run -d \
--name dugite \
-p 3001:3001 \
-p 12798:12798 \
-v dugite-data:/opt/dugite/db \
ghcr.io/michaeljfazio/dugite:latest
The node will:
- Load the configuration and genesis files
- Replay imported blocks through the ledger (builds UTxO set, protocol params, delegations)
- Connect to preview testnet peers
- Sync remaining blocks to chain tip
Progress is logged every 5 seconds, showing sync percentage, blocks-per-second throughput, UTxO count, and epoch number. Logs go to stdout by default; add --log-output file --log-dir /var/log/dugite for file logging. See Logging for all options.
4. Query the Node
Once the node is running, query it using the CLI via the Unix domain socket:
# Query the current tip
dugite-cli query tip \
--socket-path ./node.sock \
--testnet-magic 2
Example output:
{
"slot": 106453897,
"hash": "8498ccda...",
"block": 4094745,
"epoch": 1232,
"era": "Conway",
"syncProgress": "100.00"
}
# Query protocol parameters
dugite-cli query protocol-parameters \
--socket-path ./node.sock \
--testnet-magic 2
# Query mempool
dugite-cli query tx-mempool info \
--socket-path ./node.sock \
--testnet-magic 2
5. Check Metrics
Prometheus metrics are served on port 12798:
curl -s http://localhost:12798/metrics | grep sync_progress
# sync_progress_percent 10000
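The gauge can be converted back to a percentage for dashboards or scripts. A minimal Python sketch, assuming (based only on the sample output above) that the gauge is the percentage scaled by 100, so 10000 means 100.00%:

```python
def sync_progress(metrics_text: str) -> float:
    """Parse sync_progress_percent from Prometheus text output.

    Assumption: the gauge is scaled by 100 (10000 == 100.00%),
    as the sample output above suggests.
    """
    for line in metrics_text.splitlines():
        if line.startswith("sync_progress_percent"):
            return float(line.split()[1]) / 100.0
    raise ValueError("sync_progress_percent not found")

print(sync_progress("sync_progress_percent 10000"))  # → 100.0
```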
Next Steps
- Configuration — Detailed configuration options
- Networks — Connecting to mainnet, preview, or preprod
- Mithril Import — Fast initial sync details
- Monitoring — Prometheus metrics endpoint
- Kubernetes Deployment — Helm chart for production deployments
- Relay Node — Running relay nodes for a stake pool
- Block Producer — Running a stake pool
- CLI Reference — Full CLI command reference
Configuration
Dugite reads a JSON configuration file that controls network settings, genesis file paths, P2P parameters, and tracing options. The format is compatible with the cardano-node configuration format.
Configuration File Format
The configuration file uses PascalCase keys (matching the cardano-node convention):
{
"Network": "Testnet",
"NetworkMagic": 2,
"EnableP2P": true,
"DiffusionMode": "InitiatorAndResponder",
"PeerSharing": null,
"Protocol": {
"RequiresNetworkMagic": "RequiresMagic"
},
"ShelleyGenesisFile": "shelley-genesis.json",
"ByronGenesisFile": "byron-genesis.json",
"AlonzoGenesisFile": "alonzo-genesis.json",
"ConwayGenesisFile": "conway-genesis.json",
"TargetNumberOfRootPeers": 60,
"TargetNumberOfActivePeers": 15,
"TargetNumberOfEstablishedPeers": 40,
"TargetNumberOfKnownPeers": 85,
"TargetNumberOfActiveBigLedgerPeers": 5,
"TargetNumberOfEstablishedBigLedgerPeers": 10,
"TargetNumberOfKnownBigLedgerPeers": 15,
"MinSeverity": "Info",
"TraceOptions": {
"TraceBlockFetchClient": false,
"TraceBlockFetchServer": false,
"TraceChainDb": false,
"TraceChainSyncClient": false,
"TraceChainSyncServer": false,
"TraceForge": false,
"TraceMempool": false
}
}
Fields Reference
Network Settings
| Field | Type | Default | Description |
|---|---|---|---|
| Network | string | "Mainnet" | Network identifier: "Mainnet" or "Testnet" |
| NetworkMagic | integer | auto | Network magic number. If omitted, derived from Network (764824073 for mainnet) |
| EnableP2P | boolean | true | Enable P2P networking mode. When true (the default), the peer governor manages peer connections with automatic churn, ledger-based discovery, and peer sharing. When false, the node uses only static topology connections |
| DiffusionMode | string | "InitiatorAndResponder" | Controls inbound connection acceptance. "InitiatorAndResponder" (default): relay mode, accepts inbound N2N connections. "InitiatorOnly": block producer mode, outbound only (no listening port opened) |
| PeerSharing | boolean/null | null | Enable the peer sharing mini-protocol. When null (default), peer sharing is automatically disabled for block producers (when --shelley-kes-key is provided) and enabled for relays. Set explicitly to override |
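The PeerSharing auto-default described above can be expressed as a small decision function. This is an illustrative Python sketch of the documented rule, not Dugite's internal code:

```python
def peer_sharing_enabled(configured, is_block_producer: bool) -> bool:
    """Resolve the effective PeerSharing setting per the rule above:
    None (JSON null) means auto — off for block producers, on for relays.
    An explicit true/false always wins."""
    if configured is None:
        return not is_block_producer
    return configured
```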
Protocol
| Field | Type | Default | Description |
|---|---|---|---|
| Protocol.RequiresNetworkMagic | string | "RequiresMagic" | Whether network magic is required in the handshake |
Genesis Files
Genesis file paths are resolved relative to the directory containing the configuration file. For example, if your config is at /opt/cardano/config.json and specifies "ShelleyGenesisFile": "shelley-genesis.json", Dugite will look for /opt/cardano/shelley-genesis.json.
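This resolution rule can be sketched in a few lines of Python (illustrative only, not Dugite's actual code):

```python
from pathlib import Path

def resolve_genesis(config_path: str, genesis_file: str) -> Path:
    """Resolve a genesis file path relative to the config file's directory,
    per the rule described above. Absolute paths are used as-is."""
    p = Path(genesis_file)
    return p if p.is_absolute() else Path(config_path).parent / p

print(resolve_genesis("/opt/cardano/config.json", "shelley-genesis.json"))
# → /opt/cardano/shelley-genesis.json (on POSIX systems)
```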
| Field | Type | Default | Description |
|---|---|---|---|
| ShelleyGenesisFile | string | none | Path to Shelley genesis JSON |
| ByronGenesisFile | string | none | Path to Byron genesis JSON |
| AlonzoGenesisFile | string | none | Path to Alonzo genesis JSON |
| ConwayGenesisFile | string | none | Path to Conway genesis JSON |
Tip: Genesis files for each network can be downloaded from the Cardano Operations Book.
P2P Parameters
These parameters control the P2P peer governor's target counts, matching the cardano-node defaults. The governor continuously works to maintain these targets by promoting/demoting peers and discovering new ones.
| Field | Type | Default | Description |
|---|---|---|---|
| TargetNumberOfRootPeers | integer | 60 | Target number of root peers (bootstrap + local + public roots) |
| TargetNumberOfActivePeers | integer | 15 | Target number of active (hot) peers — fully syncing with ChainSync + BlockFetch |
| TargetNumberOfEstablishedPeers | integer | 40 | Target number of established (warm) peers — TCP connected, keepalive running |
| TargetNumberOfKnownPeers | integer | 85 | Target number of known (cold) peers in the peer table |
| TargetNumberOfActiveBigLedgerPeers | integer | 5 | Target number of active big ledger peers (high-stake SPOs, prioritised during sync) |
| TargetNumberOfEstablishedBigLedgerPeers | integer | 10 | Target number of established big ledger peers |
| TargetNumberOfKnownBigLedgerPeers | integer | 15 | Target number of known big ledger peers |
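The governor's target-seeking behaviour described above amounts to computing, for each tier, how far the current peer counts are from the configured targets. A simplified Python sketch (tier names here are illustrative, not Dugite identifiers):

```python
def churn_deltas(counts: dict, targets: dict) -> dict:
    """For each tier, a positive delta means the governor should promote or
    discover more peers; a negative delta means demote some. This is a
    conceptual sketch of the loop described above, not the real algorithm."""
    return {tier: targets[tier] - counts.get(tier, 0) for tier in targets}

print(churn_deltas({"hot": 10, "warm": 45, "known": 85},
                   {"hot": 15, "warm": 40, "known": 85}))
# → {'hot': 5, 'warm': -5, 'known': 0}
```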
Tracing
| Field | Type | Default | Description |
|---|---|---|---|
| MinSeverity | string | "Info" | Minimum log severity level |
| LogDirective | string | none | RUST_LOG-style filter directive applied on SIGHUP. Set at runtime to change per-subsystem verbosity without restarting |
| TraceOptions.TraceBlockFetchClient | boolean | false | Trace block fetch client activity |
| TraceOptions.TraceBlockFetchServer | boolean | false | Trace block fetch server activity |
| TraceOptions.TraceChainDb | boolean | false | Trace ChainDB operations |
| TraceOptions.TraceChainSyncClient | boolean | false | Trace chain sync client activity |
| TraceOptions.TraceChainSyncServer | boolean | false | Trace chain sync server activity |
| TraceOptions.TraceForge | boolean | false | Trace block forging |
| TraceOptions.TraceMempool | boolean | false | Trace mempool activity |
Log Level Control
The log level can be set via CLI flag or environment variable:
# Via CLI flag
dugite-node run --log-level debug ...
# Via environment variable (takes priority over --log-level)
RUST_LOG=info dugite-node run ...
# Debug only for specific crates
RUST_LOG=dugite_network=debug,dugite_consensus=debug dugite-node run ...
Dugite supports multiple log output targets (stdout, file, journald) and file rotation. See Logging for full details on output configuration.
Minimal Configuration
The smallest viable configuration file specifies only the network:
{
"Network": "Testnet",
"NetworkMagic": 2
}
All other fields use sensible defaults. When no genesis files are specified, the node operates with built-in default parameters.
Format Support
Dugite supports both JSON (.json) and TOML (.toml) configuration files. The format is determined by the file extension. JSON files use the cardano-node compatible PascalCase format shown above.
Configuration Editor (dugite-config)
dugite-config is a standalone TUI tool for creating and editing Dugite configuration files interactively. It provides a full-screen terminal interface with tree navigation, inline editing, type validation, and a diff view — no need to remember field names or look up valid ranges.
Installation
dugite-config is built as part of the standard workspace:
cargo build --release -p dugite-config
cp target/release/dugite-config /usr/local/bin/
Commands
| Command | Description |
|---|---|
| init | Interactively create a new configuration file |
| edit | Launch the full-screen TUI editor for an existing file |
| validate | Validate a configuration file and report all errors |
| get | Print the value of a single field |
| set | Set the value of a single field non-interactively |
init
Create a new configuration file, guided step by step:
dugite-config init --out-file config.json
The init wizard prompts for the network (mainnet/preview/preprod), genesis file paths, P2P targets, and tracing options, then writes a validated JSON file.
edit
Launch the full-screen interactive editor:
dugite-config edit config.json
validate
Check a configuration file for errors without modifying it:
dugite-config validate config.json
Output on success:
config.json: OK (all fields valid)
Output on failure:
config.json: 2 error(s)
Line 7 — TargetNumberOfActivePeers: value 200 exceeds maximum (100)
Line 12 — MinSeverity: unknown value "Verbose" (expected: Trace, Debug, Info, Warning, Error)
get / set
Non-interactive field access for scripting:
# Get a field
dugite-config get config.json TargetNumberOfActivePeers
# Output: 20
# Set a field
dugite-config set config.json TargetNumberOfActivePeers 30
# Set a nested field
dugite-config set config.json TraceOptions.TraceForge true
Interactive Editor
The interactive editor (dugite-config edit) renders a full-screen TUI with three panes:
┌─ Fields ──────────────────────┬─ Value ───────────┬─ Hints ───────────────────────────┐
│ > Network Settings │ │ │
│ Network │ Testnet │ Network identifier. Use "Mainnet" │
│ NetworkMagic │ 2 │ for mainnet or "Testnet" for │
│ EnableP2P │ true │ testnets. If omitted, defaults │
│ > Genesis Files │ │ based on Network field. │
│ ShelleyGenesisFile │ shelley-gen... │ │
│ ByronGenesisFile │ byron-genesi... │ │
│ AlonzoGenesisFile │ alonzo-genes... │ │
│ ConwayGenesisFile │ conway-genes... │ │
│ > P2P Parameters │ │ │
│ DiffusionMode │ InitiatorAn... │ │
│ PeerSharing │ PeerSharing... │ │
│ TargetNumberOfActivePeers │ 15 │ │
└───────────────────────────────┴───────────────────┴───────────────────────────────────┘
Navigation
| Key | Action |
|---|---|
| Arrow Up / Down | Move between fields |
| Arrow Right / Enter | Expand a group or edit a field |
| Arrow Left / Escape | Collapse a group or cancel edit |
| / | Open search/filter |
| d | Toggle diff view |
| Ctrl+S | Save and exit |
| Ctrl+Q | Discard changes and exit |
| ? | Toggle help overlay |
Inline Editing
Pressing Enter on a field opens it for editing in place. The current value is pre-filled. Type a new value and press Enter to confirm or Escape to cancel.
Type validation runs immediately on confirmation. If the value is invalid (for example, a string where an integer is expected, or a number outside the valid range), an inline error message appears below the field. The cursor stays on the field until a valid value is entered or the edit is cancelled.
Tuning Hints
The right-hand pane shows contextual hints for the selected field, including:
- A description of what the field controls
- The valid type and range
- Practical advice on the impact of different values
For example, TargetNumberOfActivePeers shows advice on the trade-off between connectivity and bandwidth, and notes that values above 50 are rarely beneficial for relay nodes.
Search and Filter
Press / to open the search bar. Typing narrows the visible fields to those whose names match the query. Press Escape to clear the filter and return to the full tree.
Diff View
Press d to toggle the diff view, which shows a side-by-side comparison of the original file and your pending changes. Fields with modified values are highlighted. Use this before saving to confirm your edits.
Scripted Workflows
dugite-config can be used in deployment scripts for automated configuration management:
#!/usr/bin/env bash
# Example: configure a relay node for preview testnet
CONFIG="config/preview-config.json"
dugite-config init --out-file "$CONFIG" \
--network Testnet \
--network-magic 2 \
--shelley-genesis shelley-genesis.json \
--byron-genesis byron-genesis.json \
--alonzo-genesis alonzo-genesis.json \
--conway-genesis conway-genesis.json
dugite-config set "$CONFIG" EnableP2P true
dugite-config set "$CONFIG" DiffusionMode InitiatorAndResponder
dugite-config set "$CONFIG" TargetNumberOfActivePeers 15
dugite-config set "$CONFIG" TargetNumberOfEstablishedPeers 40
dugite-config set "$CONFIG" TargetNumberOfKnownPeers 85
dugite-config validate "$CONFIG"
Topology
The topology file defines the peers that the node connects to. Dugite supports the full cardano-node 10.x+ P2P topology format.
Topology File Format
{
"bootstrapPeers": [
{ "address": "backbone.cardano.iog.io", "port": 3001 },
{ "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 },
{ "address": "backbone.mainnet.emurgornd.com", "port": 3001 }
],
"localRoots": [
{
"accessPoints": [
{ "address": "192.168.1.100", "port": 3001 }
],
"advertise": false,
"hotValency": 1,
"warmValency": 2,
"trustable": true
}
],
"publicRoots": [
{
"accessPoints": [
{ "address": "relays-new.cardano-mainnet.iohk.io", "port": 3001 }
],
"advertise": false
}
],
"useLedgerAfterSlot": 177724800
}
Peer Categories
Bootstrap Peers
Trusted peers from founding organizations, used during initial sync. These are the first peers the node contacts when starting.
"bootstrapPeers": [
{ "address": "backbone.cardano.iog.io", "port": 3001 }
]
Set to null or an empty array to disable bootstrap peers:
"bootstrapPeers": null
Local Roots
Peers the node should always maintain connections with. Typically used for:
- Your block producer (if running a relay)
- Peer arrangements with other stake pool operators
- Trusted relay nodes you operate
"localRoots": [
{
"accessPoints": [
{ "address": "192.168.1.100", "port": 3001 }
],
"advertise": true,
"hotValency": 2,
"warmValency": 3,
"trustable": true,
"behindFirewall": false,
"diffusionMode": "InitiatorAndResponder"
}
]
| Field | Type | Default | Description |
|---|---|---|---|
| accessPoints | array | required | List of {address, port} entries |
| advertise | boolean | false | Whether to share these peers via the peer sharing protocol |
| valency | integer | 1 | Deprecated. Target number of active connections. Use hotValency instead |
| hotValency | integer | valency | Target number of hot (actively syncing) peers |
| warmValency | integer | hotValency+1 | Target number of warm (connected, not syncing) peers |
| trustable | boolean | false | Whether these peers are trusted for sync. Trusted peers are preferred during initial sync |
| behindFirewall | boolean | false | If true, the node waits for inbound connections from these peers instead of connecting outbound |
| diffusionMode | string | "InitiatorAndResponder" | Per-group diffusion mode. "InitiatorOnly" for unidirectional connections |
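The valency fallback rules in the table above chain together: hotValency falls back to the deprecated valency (itself defaulting to 1), and warmValency defaults to hotValency + 1. An illustrative Python sketch of that resolution:

```python
def effective_valencies(group: dict) -> tuple:
    """Resolve the effective (hot, warm) valencies for a localRoots group,
    applying the default chain described above. Sketch only, not Dugite code."""
    hot = group.get("hotValency", group.get("valency", 1))
    warm = group.get("warmValency", hot + 1)
    return hot, warm

print(effective_valencies({"hotValency": 2, "warmValency": 3}))  # → (2, 3)
print(effective_valencies({}))                                   # → (1, 2)
```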
Public Roots
Publicly known nodes (e.g., IOG relays) serving as fallback peers before the node has synced to the useLedgerAfterSlot threshold.
"publicRoots": [
{
"accessPoints": [
{ "address": "relays-new.cardano-mainnet.iohk.io", "port": 3001 }
],
"advertise": false
}
]
Ledger-Based Peer Discovery
After the node syncs past the useLedgerAfterSlot threshold, it discovers peers from stake pool registrations in the ledger state. This provides decentralized peer discovery without relying on centralized relay lists.
"useLedgerAfterSlot": 177724800
Set to a negative value or omit to disable ledger peer discovery.
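The activation rule can be summed up in a small predicate (an illustrative Python sketch of the behaviour described above, including the negative/omitted-disables case):

```python
def ledger_peers_active(use_ledger_after_slot, current_slot: int) -> bool:
    """Ledger-based discovery starts once the node's slot passes the
    threshold; a missing (None) or negative value disables it entirely."""
    if use_ledger_after_slot is None or use_ledger_after_slot < 0:
        return False
    return current_slot > use_ledger_after_slot
```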
Peer Snapshot File
Optional path to a big ledger peer snapshot file for Genesis bootstrap:
"peerSnapshotFile": "peer-snapshot.json"
Example Topologies
Preview Testnet Relay
{
"bootstrapPeers": [
{ "address": "preview-node.play.dev.cardano.org", "port": 3001 }
],
"localRoots": [
{ "accessPoints": [], "advertise": false, "valency": 1 }
],
"publicRoots": [
{ "accessPoints": [], "advertise": false }
],
"useLedgerAfterSlot": 102729600
}
Mainnet Relay
{
"bootstrapPeers": [
{ "address": "backbone.cardano.iog.io", "port": 3001 },
{ "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 },
{ "address": "backbone.mainnet.emurgornd.com", "port": 3001 }
],
"localRoots": [
{ "accessPoints": [], "advertise": false, "valency": 1 }
],
"publicRoots": [
{ "accessPoints": [], "advertise": false }
],
"useLedgerAfterSlot": 177724800
}
Relay with Block Producer
A relay node that maintains a connection to your block producer:
{
"bootstrapPeers": [
{ "address": "backbone.cardano.iog.io", "port": 3001 },
{ "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 }
],
"localRoots": [
{
"accessPoints": [
{ "address": "10.0.0.10", "port": 3001 }
],
"advertise": false,
"hotValency": 1,
"warmValency": 2,
"trustable": true,
"behindFirewall": true
}
],
"publicRoots": [
{ "accessPoints": [], "advertise": false }
],
"useLedgerAfterSlot": 177724800
}
SIGHUP Topology Reload
Dugite supports live topology reloading. Send a SIGHUP signal to the running node process, and it will re-read the topology file and update the peer manager with the new configuration:
kill -HUP $(pidof dugite-node)
This allows you to add or remove peers without restarting the node.
Networks
Dugite can connect to any Cardano network. Each network is identified by a unique magic number used during the N2N handshake.
Network Magic Values
| Network | Magic | Description |
|---|---|---|
| Mainnet | 764824073 | The production Cardano network |
| Preview | 2 | Fast-moving testnet for early feature testing |
| Preprod | 1 | Stable testnet that mirrors mainnet behavior |
Connecting to Mainnet
Create a config-mainnet.json:
{
"Network": "Mainnet",
"NetworkMagic": 764824073
}
Create a topology-mainnet.json:
{
"bootstrapPeers": [
{ "address": "backbone.cardano.iog.io", "port": 3001 },
{ "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 },
{ "address": "backbone.mainnet.emurgornd.com", "port": 3001 }
],
"localRoots": [{ "accessPoints": [], "advertise": false, "valency": 1 }],
"publicRoots": [{ "accessPoints": [], "advertise": false }],
"useLedgerAfterSlot": 177724800
}
Run the node:
dugite-node run \
--config config-mainnet.json \
--topology topology-mainnet.json \
--database-path ./db-mainnet \
--socket-path ./node-mainnet.sock \
--host-addr 0.0.0.0 \
--port 3001
Tip: For a faster initial mainnet sync, consider using Mithril snapshot import first.
Connecting to Preview Testnet
Create a config-preview.json:
{
"Network": "Testnet",
"NetworkMagic": 2
}
Create a topology-preview.json:
{
"bootstrapPeers": [
{ "address": "preview-node.play.dev.cardano.org", "port": 3001 }
],
"localRoots": [{ "accessPoints": [], "advertise": false, "valency": 1 }],
"publicRoots": [{ "accessPoints": [], "advertise": false }],
"useLedgerAfterSlot": 102729600
}
Run the node:
dugite-node run \
--config config-preview.json \
--topology topology-preview.json \
--database-path ./db-preview \
--socket-path ./node-preview.sock \
--host-addr 0.0.0.0 \
--port 3001
Connecting to Preprod Testnet
Create a config-preprod.json:
{
"Network": "Testnet",
"NetworkMagic": 1
}
Create a topology-preprod.json:
{
"bootstrapPeers": [
{ "address": "preprod-node.play.dev.cardano.org", "port": 3001 }
],
"localRoots": [{ "accessPoints": [], "advertise": false, "valency": 1 }],
"publicRoots": [{ "accessPoints": [], "advertise": false }],
"useLedgerAfterSlot": 76924800
}
Run the node:
dugite-node run \
--config config-preprod.json \
--topology topology-preprod.json \
--database-path ./db-preprod \
--socket-path ./node-preprod.sock \
--host-addr 0.0.0.0 \
--port 3001
Official Configuration Files
Official configuration and topology files for each network are maintained in the Cardano Operations Book:
- Preview: book.world.dev.cardano.org/environments/preview/
- Preprod: book.world.dev.cardano.org/environments/preprod/
- Mainnet: book.world.dev.cardano.org/environments/mainnet/
These include the full genesis files (Byron, Shelley, Alonzo, Conway) required for complete protocol parameter initialization.
Using the CLI with Different Networks
When querying a node connected to a testnet, pass the --testnet-magic flag to the CLI:
# Preview
dugite-cli query tip --socket-path ./node-preview.sock --testnet-magic 2
# Preprod
dugite-cli query tip --socket-path ./node-preprod.sock --testnet-magic 1
# Mainnet (default, --testnet-magic not needed)
dugite-cli query tip --socket-path ./node-mainnet.sock
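In scripts that target multiple networks, this flag convention is easy to centralise. A hypothetical Python helper (the mapping comes from the magic table above; the helper itself is not part of Dugite):

```python
MAGIC = {"mainnet": 764824073, "preview": 2, "preprod": 1}

def network_flags(network: str) -> list:
    """Build the dugite-cli network arguments: mainnet is the default
    (no flag); testnets pass --testnet-magic with their magic number."""
    if network == "mainnet":
        return []
    return ["--testnet-magic", str(MAGIC[network])]

print(network_flags("preview"))  # → ['--testnet-magic', '2']
```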
Multiple Nodes
You can run multiple Dugite instances on the same machine by using different ports, database paths, and socket paths:
# Preview on port 3001
dugite-node run --port 3001 --database-path ./db-preview --socket-path ./preview.sock ...
# Preprod on port 3002
dugite-node run --port 3002 --database-path ./db-preprod --socket-path ./preprod.sock ...
Mithril Snapshot Import
Syncing a Cardano node from genesis can take a very long time. Dugite supports importing Mithril-certified snapshots of the immutable database to drastically reduce initial sync time.
How It Works
Mithril is a stake-based threshold multi-signature scheme that produces certified snapshots of the Cardano immutable database. These snapshots are verified by Mithril signers (stake pool operators) and made available through Mithril aggregator endpoints.
The import process:
- Queries the Mithril aggregator for the latest available snapshot
- Downloads the snapshot archive (compressed with zstandard)
- Extracts the cardano-node chunk files
- Parses each block using the pallas CBOR decoder
- Bulk-imports blocks into Dugite's ImmutableDB (append-only chunk files)
Usage
dugite-node mithril-import \
--network-magic <magic> \
--database-path <path>
Arguments
| Argument | Default | Description |
|---|---|---|
| --network-magic | 764824073 | Network magic (764824073=mainnet, 2=preview, 1=preprod) |
| --database-path | db | Path to the database directory |
| --temp-dir | system temp | Temporary directory for download and extraction |
Examples
Mainnet:
dugite-node mithril-import \
--network-magic 764824073 \
--database-path ./db-mainnet
Preview testnet:
dugite-node mithril-import \
--network-magic 2 \
--database-path ./db-preview
Preprod testnet:
dugite-node mithril-import \
--network-magic 1 \
--database-path ./db-preprod
Mithril Aggregator Endpoints
Dugite automatically selects the correct aggregator for each network:
| Network | Aggregator URL |
|---|---|
| Mainnet | https://aggregator.release-mainnet.api.mithril.network/aggregator |
| Preview | https://aggregator.pre-release-preview.api.mithril.network/aggregator |
| Preprod | https://aggregator.release-preprod.api.mithril.network/aggregator |
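The selection logic amounts to a lookup keyed on the network magic. A sketch using the URLs from the table above (the function itself is illustrative, not Dugite's API):

```python
# Aggregator URLs from the table above, keyed by network magic.
AGGREGATORS = {
    764824073: "https://aggregator.release-mainnet.api.mithril.network/aggregator",
    2: "https://aggregator.pre-release-preview.api.mithril.network/aggregator",
    1: "https://aggregator.release-preprod.api.mithril.network/aggregator",
}

def aggregator_for(magic: int) -> str:
    """Select the Mithril aggregator endpoint for a network magic."""
    try:
        return AGGREGATORS[magic]
    except KeyError:
        raise ValueError(f"no known Mithril aggregator for magic {magic}") from None
```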
Resume Support
The import process supports resuming interrupted downloads and imports:
- If the snapshot archive has already been downloaded (same size), the download is skipped
- If the archive has already been extracted, extraction is skipped
- Blocks already present in the database are skipped during import
This means you can safely interrupt the import and restart it later.
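The first resume rule (skip the download when the archive already exists with the expected size) can be sketched like this in Python; it is a conceptual illustration, not Dugite's implementation:

```python
from pathlib import Path

def should_download(archive: Path, expected_size: int) -> bool:
    """Return True unless the archive is already fully present,
    i.e. it exists on disk with the size the aggregator reported."""
    return not (archive.exists() and archive.stat().st_size == expected_size)
```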
After Import
Once the import completes, start the node normally. It will detect the imported blocks and resume syncing from where the snapshot left off:
dugite-node run \
--config config.json \
--topology topology.json \
--database-path ./db-mainnet \
--socket-path ./node.sock \
--host-addr 0.0.0.0 \
--port 3001
Disk Space Requirements
Mithril snapshots are large. Approximate sizes (which grow over time):
| Network | Compressed Archive | Extracted | Final DB |
|---|---|---|---|
| Mainnet | ~60-90 GB | ~120-180 GB | ~90-140 GB |
| Preview | ~5-10 GB | ~10-20 GB | ~8-15 GB |
| Preprod | ~15-25 GB | ~30-50 GB | ~20-35 GB |
The temporary directory needs enough space for both the compressed archive and the extracted files. After import, temporary files are automatically cleaned up.
Note: Ensure you have sufficient disk space before starting the import. The --temp-dir flag can be used to direct temporary files to a different volume if needed.
Logging
Dugite uses the tracing ecosystem for structured logging. It supports multiple output targets, structured and human-readable formats, log rotation for file output, and fine-grained level control.
Output Formats
Dugite supports two log formats, selectable via the --log-format flag:
Text (default)
Human-readable compact output with timestamps, level, target module, and structured fields:
dugite-node run --log-format text ...
2026-03-12T12:34:56.789Z INFO dugite_node::node: Syncing progress="95.42%" epoch=512 block=11283746 tip=11300000 remaining=16254 speed="312 blk/s" utxos=15234892
2026-03-12T12:34:56.790Z INFO dugite_node::node: Peer connected peer=1.2.3.4:3001 rtt_ms=42
JSON
Structured JSON output, one object per line. Ideal for log aggregation systems (ELK, Loki, Datadog):
dugite-node run --log-format json ...
{"timestamp":"2026-03-12T12:34:56.789Z","level":"INFO","target":"dugite_node::node","fields":{"message":"Syncing","progress":"95.42%","epoch":512,"block":11283746}}
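Because each line is a standalone JSON object, any JSON library can consume the stream directly. A minimal sketch, using the field names from the example line above:

```python
import json

# One line of Dugite's JSON log output (from the example above).
line = ('{"timestamp":"2026-03-12T12:34:56.789Z","level":"INFO",'
        '"target":"dugite_node::node","fields":{"message":"Syncing",'
        '"progress":"95.42%","epoch":512,"block":11283746}}')

entry = json.loads(line)
print(entry["level"], entry["fields"]["message"])  # INFO Syncing
print(entry["fields"]["epoch"])                    # 512
```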
Output Targets
Dugite can log to one or more output targets simultaneously using the --log-output flag. You can specify this flag multiple times to enable multiple targets:
# Stdout only (default)
dugite-node run --log-output stdout ...
# File only
dugite-node run --log-output file ...
# Both stdout and file
dugite-node run --log-output stdout --log-output file ...
# Systemd journal (requires journald feature)
dugite-node run --log-output journald ...
Stdout
The default output target. Logs are written to standard output with ANSI color codes when the output is a terminal. Colors can be disabled with --log-no-color.
File
Logs are written to rotating log files in the directory specified by --log-dir (default: logs/). The rotation strategy is configured with --log-file-rotation:
| Strategy | Description |
|---|---|
| daily | Rotate log files daily (default) |
| hourly | Rotate log files every hour |
| never | Write to a single dugite.log file with no rotation |
dugite-node run \
--log-output file \
--log-dir /var/log/dugite \
--log-file-rotation daily \
...
File output uses non-blocking I/O with buffered writes. The buffer is flushed automatically on shutdown.
Journald
Native systemd journal integration. This requires building Dugite with the journald feature:
cargo build --release --features journald
Then run with:
dugite-node run --log-output journald ...
View logs with journalctl:
journalctl -u dugite-node -f
journalctl -u dugite-node --since "1 hour ago"
Log Levels
The log level can be set via the --log-level CLI flag or the RUST_LOG environment variable. If both are set, RUST_LOG takes priority.
# Via CLI flag
dugite-node run --log-level debug ...
# Via environment variable (takes priority)
RUST_LOG=debug dugite-node run ...
Available levels (from most to least verbose):
| Level | Description |
|---|---|
| trace | Very detailed internal diagnostics |
| debug | Internal operations: genesis loading, storage ops, network handshakes, epoch transitions |
| info | Operator-relevant events: sync progress, peer connections, block production (default) |
| warn | Potential issues: stale snapshots, replay failures |
| error | Errors that may affect node operation |
Per-Crate Filtering
Use RUST_LOG for fine-grained control over which components produce output:
# Debug only for specific crates
RUST_LOG=dugite_network=debug,dugite_consensus=debug dugite-node run ...
# Trace storage operations, debug everything else
RUST_LOG=dugite_storage=trace,debug dugite-node run ...
# Silence noisy crates
RUST_LOG=info,dugite_network=warn dugite-node run ...
CLI Reference
All logging flags are shared between the run and mithril-import subcommands:
| Flag | Default | Description |
|---|---|---|
| --log-output | stdout | Log output target: stdout, file, or journald. Can be specified multiple times. |
| --log-format | text | Log format: text (human-readable) or json (structured). |
| --log-level | info | Log level: trace, debug, info, warn, error. Overridden by RUST_LOG. |
| --log-dir | logs | Directory for log files (used with --log-output file) |
| --log-file-rotation | daily | Log file rotation: daily, hourly, or never |
| --log-no-color | false | Disable ANSI colors in stdout output |
Runtime Log Verbosity Reload (SIGHUP)
Dugite supports changing per-subsystem log verbosity at runtime without restarting the node. This is useful for debugging a specific issue (for example, enabling trace logging for the network layer) without disrupting ongoing block production or sync.
Workflow:
- Edit the node configuration file and add (or update) the LogDirective field:
{ "LogDirective": "info,dugite_network=trace,dugite_consensus=debug" }
The value accepts any RUST_LOG-compatible directive, including *=debug, trace, or per-module overrides like dugite_ledger=warn.
- Send SIGHUP to the running node:
kill -HUP $(pidof dugite-node)
The node re-reads the config file. If LogDirective is present and valid, the filter is reloaded immediately and logged:
INFO dugite_node: Reloaded log directive: "info,dugite_network=trace"
If the directive string is invalid, the previous filter is left intact.
- To restore the original level, remove LogDirective from the config and send SIGHUP again, or set it back to "info".
Note: LogDirective is only applied via SIGHUP. The initial startup level is controlled by --log-level / RUST_LOG as before.
Production Recommendations
For production deployments with log aggregation:
dugite-node run \
--log-output file \
--log-output journald \
--log-format json \
--log-dir /var/log/dugite \
--log-file-rotation daily \
...
This configuration:
- Writes structured JSON logs to the systemd journal for journalctl integration
- Writes rotated JSON log files for archival and ingestion by log aggregators
- JSON format ensures all structured fields are machine-parseable
For human operators monitoring the console:
dugite-node run --log-output stdout --log-format text ...
For containerized deployments (Docker, Kubernetes), stdout with JSON is ideal since the container runtime captures output and log drivers can parse the structured format:
dugite-node run --log-output stdout --log-format json ...
Monitoring
Dugite provides two complementary monitoring tools: a terminal dashboard (dugite-monitor) for quick at-a-glance status, and a Prometheus-compatible metrics endpoint for production alerting and dashboards.
Terminal Dashboard (dugite-monitor)
dugite-monitor is a standalone binary that renders a real-time status dashboard in the terminal by polling the node's Prometheus endpoint. It requires no external infrastructure and works over SSH.
# Monitor a local node (default: http://localhost:12798/metrics)
dugite-monitor
# Monitor a remote node
dugite-monitor --metrics-url http://192.168.1.100:12798/metrics
# Custom refresh interval (default: 2 seconds)
dugite-monitor --refresh-interval 5
The dashboard displays four panels:
- Chain Status — sync progress, current slot/block/epoch, tip age, GSM state
- Peers — out/in/total connection counts, hot/warm/cold breakdown, EWMA latency
- Performance — block rate sparkline, replay throughput, transaction counts
- Governance — treasury balance, DRep count, active proposals, pool count
Color-coded health indicators (green/yellow/red) reflect tip age and sync progress. The block rate sparkline shows the last 30 data points so you can spot throughput trends at a glance.
Keyboard navigation: q to quit, Tab to cycle panels, j/k (vim-style) to scroll within a panel.
Prometheus Metrics Endpoint
Dugite exposes a Prometheus-compatible metrics endpoint for monitoring node health and sync progress.
Metrics Endpoint
The metrics server runs on port 12798 by default and responds to any HTTP request with Prometheus exposition format metrics:
http://localhost:12798/metrics
Example response:
# HELP dugite_blocks_received_total Total blocks received from peers
# TYPE dugite_blocks_received_total gauge
dugite_blocks_received_total 1523847
# HELP dugite_blocks_applied_total Total blocks applied to ledger
# TYPE dugite_blocks_applied_total gauge
dugite_blocks_applied_total 1523845
# HELP dugite_slot_number Current slot number
# TYPE dugite_slot_number gauge
dugite_slot_number 142857392
# HELP dugite_block_number Current block number
# TYPE dugite_block_number gauge
dugite_block_number 11283746
# HELP dugite_epoch_number Current epoch number
# TYPE dugite_epoch_number gauge
dugite_epoch_number 512
# HELP dugite_sync_progress_percent Chain sync progress (0-10000, divide by 100 for %)
# TYPE dugite_sync_progress_percent gauge
dugite_sync_progress_percent 9542
# HELP dugite_utxo_count Number of entries in the UTxO set
# TYPE dugite_utxo_count gauge
dugite_utxo_count 15234892
# HELP dugite_mempool_tx_count Number of transactions in the mempool
# TYPE dugite_mempool_tx_count gauge
dugite_mempool_tx_count 42
# HELP dugite_peers_connected Number of connected peers
# TYPE dugite_peers_connected gauge
dugite_peers_connected 8
Health Endpoint
The metrics server exposes a /health endpoint for monitoring node status:
GET http://localhost:12798/health
Returns JSON with three possible statuses:
- healthy: Sync progress >= 99.9%
- syncing: Actively catching up to chain tip
- stalled: No blocks received for > 5 minutes AND sync < 99%
{
"status": "healthy",
"uptime_seconds": 3421,
"slot": 142857392,
"block": 11283746,
"epoch": 512,
"sync_progress": 99.95,
"peers": 8,
"last_block_received": "2026-03-14T12:34:56.789Z"
}
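The three statuses follow directly from the documented thresholds. A sketch of that decision logic (illustrative only, not the actual implementation):

```python
def health_status(sync_progress: float, seconds_since_last_block: float) -> str:
    """Map the documented /health thresholds to a status string."""
    if sync_progress >= 99.9:
        return "healthy"
    # Stalled requires both: no blocks for over 5 minutes AND sync below 99%.
    if seconds_since_last_block > 300 and sync_progress < 99.0:
        return "stalled"
    return "syncing"
```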
Readiness Endpoint
For Kubernetes readiness probes:
GET http://localhost:12798/ready
Returns 200 OK when sync_progress >= 99.9%, 503 Service Unavailable otherwise:
{"ready": true}
or:
{"ready": false, "sync_progress": 75.42}
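A Kubernetes readiness probe wired to this endpoint might look like the following (field values are illustrative; adjust the port if you changed the metrics port):

```yaml
readinessProbe:
  httpGet:
    path: /ready
    port: 12798
  initialDelaySeconds: 30
  periodSeconds: 10
```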
Available Metrics
Counters
| Metric | Description |
|---|---|
| dugite_blocks_received_total | Total blocks received from peers |
| dugite_blocks_applied_total | Total blocks successfully applied to the ledger |
| dugite_transactions_received_total | Total transactions received |
| dugite_transactions_validated_total | Total transactions validated |
| dugite_transactions_rejected_total | Total transactions rejected |
| dugite_rollback_count_total | Total number of chain rollbacks |
| dugite_blocks_forged_total | Total blocks forged by this node |
| dugite_leader_checks_total | Total VRF leader checks performed |
| dugite_leader_checks_not_elected_total | Leader checks where node was not elected |
| dugite_forge_failures_total | Block forge attempts that failed |
| dugite_blocks_announced_total | Blocks successfully announced to peers |
| dugite_n2n_connections_total | Total N2N (peer-to-peer) connections accepted |
| dugite_n2c_connections_total | Total N2C (client) connections accepted |
| dugite_validation_errors_total{error="..."} | Transaction validation errors, broken down by error type |
| dugite_protocol_errors_total{error="..."} | Protocol-level errors by type (e.g. handshake failures, connection errors) |
Gauges
| Metric | Description |
|---|---|
| dugite_peers_connected | Number of connected peers |
| dugite_peers_cold | Number of cold (known but unconnected) peers |
| dugite_peers_warm | Number of warm (connected, not syncing) peers |
| dugite_peers_hot | Number of hot (actively syncing) peers |
| dugite_sync_progress_percent | Chain sync progress (0-10000; divide by 100 for percentage) |
| dugite_slot_number | Current slot number |
| dugite_block_number | Current block number |
| dugite_epoch_number | Current epoch number |
| dugite_utxo_count | Number of entries in the UTxO set |
| dugite_mempool_tx_count | Number of transactions in the mempool |
| dugite_mempool_bytes | Size of the mempool in bytes |
| dugite_delegation_count | Number of active stake delegations |
| dugite_treasury_lovelace | Total lovelace in the treasury |
| dugite_drep_count | Number of registered DReps |
| dugite_proposal_count | Number of active governance proposals |
| dugite_pool_count | Number of registered stake pools |
| dugite_uptime_seconds | Seconds since node startup |
| dugite_disk_available_bytes | Available disk space on the database volume |
| dugite_n2n_connections_active | Currently active N2N connections |
| dugite_n2c_connections_active | Currently active N2C connections |
| dugite_p2p_enabled | Whether P2P networking is enabled (0 or 1) |
| dugite_diffusion_mode | Current diffusion mode (0=InitiatorOnly, 1=InitiatorAndResponder) |
| dugite_peer_sharing_enabled | Whether peer sharing is active (0 or 1) |
| dugite_tip_age_seconds | Seconds since the tip slot time |
| dugite_chainsync_idle_seconds | Seconds since last ChainSync RollForward event |
| dugite_ledger_replay_duration_seconds | Duration of last ledger replay in seconds |
| dugite_mem_resident_bytes | Resident set size (RSS) in bytes |
Histograms
| Metric | Buckets (ms) | Description |
|---|---|---|
| dugite_peer_handshake_rtt_ms | 1, 5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000 | Peer N2N handshake round-trip time |
| dugite_peer_block_fetch_ms | (same buckets) | Per-block fetch latency |
Histograms expose _bucket, _count, and _sum suffixes for standard Prometheus histogram queries.
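The quantile estimation that histogram_quantile performs over these cumulative buckets can be sketched as follows (standard linear interpolation within the target bucket; an illustration, not the Prometheus source):

```python
def quantile_from_buckets(q: float, buckets: list) -> float:
    """buckets: (upper_bound_ms, cumulative_count) pairs, sorted ascending.
    Returns the interpolated latency at quantile q."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            # Linear interpolation inside the bucket that contains the rank.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# p95 of: 50 requests <= 10ms, 40 more <= 50ms, 10 more <= 100ms
print(quantile_from_buckets(0.95, [(10, 50), (50, 90), (100, 100)]))  # 75.0
```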
Prometheus Configuration
Add the Dugite node as a scrape target in your prometheus.yml:
scrape_configs:
- job_name: 'dugite'
scrape_interval: 15s
static_configs:
- targets: ['localhost:12798']
labels:
network: 'mainnet'
node: 'relay-1'
Grafana Dashboard
Dugite ships with a pre-built Grafana dashboard at config/grafana-dashboard.json. The dashboard covers all node metrics organized into ten sections:
- Overview — Sync progress gauge, block height, epoch, slot, connected peers, blocks forged
- Node Health — Uptime, disk available (stat + time series)
- Sync & Throughput — Sync progress over time, block apply/receive rate (blk/s), block height, rollbacks
- Peers — Connected peer count over time, peer state breakdown (hot/warm/cold stacked)
- Mempool & Transactions — Mempool tx count, mempool size (bytes), transaction rate (received/validated/rejected)
- Ledger State — UTxO set size, stake delegations, treasury balance (ADA), registered stake pools
- Governance — Registered DReps, active governance proposals
- Block Production — Total blocks forged, block forge rate (blk/h)
- Network Latency — Handshake RTT and block fetch latency percentiles (p50/p95/p99), request counts
- Validation Errors — Error breakdown by type (stacked bars), error totals (bar chart)
Quick Start (Docker)
The fastest way to start a local monitoring stack is with the included script:
# Start Prometheus + Grafana
./scripts/start-monitoring.sh
# Open the dashboard (admin/admin)
open http://localhost:3000/d/dugite-node/dugite-node
# Check status
./scripts/start-monitoring.sh status
# Stop
./scripts/start-monitoring.sh stop
The script starts Prometheus (port 9090) and Grafana (port 3000) as Docker containers, auto-configures the Prometheus datasource, and imports the Dugite dashboard. Prometheus data is persisted in .monitoring-data/ so metrics survive restarts.
Environment variables for port customization:
| Variable | Default | Description |
|---|---|---|
| PROMETHEUS_PORT | 9090 | Prometheus web UI port |
| GRAFANA_PORT | 3000 | Grafana web UI port |
| DUGITE_METRICS_PORT | 12798 | Port where Dugite exposes metrics |
Importing the Dashboard
- Open Grafana and go to Dashboards > Import
- Click Upload JSON file and select config/grafana-dashboard.json
- Select your Prometheus data source when prompted
- Click Import
The dashboard includes an instance template variable so you can monitor multiple Dugite nodes (relays + block producer) from a single dashboard. It auto-refreshes every 30 seconds.
Provisioning
To auto-provision the dashboard, copy it into your Grafana provisioning directory:
cp config/grafana-dashboard.json /etc/grafana/provisioning/dashboards/dugite.json
Add a dashboard provider in /etc/grafana/provisioning/dashboards/dugite.yaml:
apiVersion: 1
providers:
- name: Dugite
folder: Cardano
type: file
options:
path: /etc/grafana/provisioning/dashboards
foldersFromFilesStructure: false
Quick Start (macOS)
To quickly preview the dashboard locally with Homebrew:
# Install Prometheus and Grafana
brew install prometheus grafana
# Configure Prometheus to scrape Dugite
cat > /opt/homebrew/etc/prometheus.yml << 'EOF'
global:
scrape_interval: 5s
scrape_configs:
- job_name: dugite
static_configs:
- targets: ['localhost:12798']
EOF
# Provision the datasource
cat > "$(brew --prefix)/opt/grafana/share/grafana/conf/provisioning/datasources/dugite.yaml" << 'EOF'
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
url: http://localhost:9090
isDefault: true
uid: DS_PROMETHEUS
EOF
# Provision the dashboard
cat > "$(brew --prefix)/opt/grafana/share/grafana/conf/provisioning/dashboards/dugite.yaml" << 'EOF'
apiVersion: 1
providers:
- name: Dugite
folder: Cardano
type: file
options:
path: /opt/homebrew/var/lib/grafana/dashboards
EOF
mkdir -p /opt/homebrew/var/lib/grafana/dashboards
sed 's/${DS_PROMETHEUS}/DS_PROMETHEUS/g' config/grafana-dashboard.json \
> /opt/homebrew/var/lib/grafana/dashboards/dugite.json
# Start services
brew services start prometheus
brew services start grafana
# Open the dashboard (default login: admin/admin)
open "http://localhost:3000/d/dugite-node/dugite-node"
To stop:
brew services stop prometheus grafana
Key Queries
| Panel | PromQL |
|---|---|
| Sync progress | dugite_sync_progress_percent / 100 |
| Block throughput | rate(dugite_blocks_applied_total[5m]) |
| Transaction rejection rate | rate(dugite_transactions_rejected_total[5m]) |
| Treasury balance (ADA) | dugite_treasury_lovelace / 1e6 |
| Block forge rate (per hour) | rate(dugite_blocks_forged_total[1h]) * 3600 |
| Handshake RTT p95 | histogram_quantile(0.95, rate(dugite_peer_handshake_rtt_ms_bucket[5m])) |
| Block fetch latency p95 | histogram_quantile(0.95, rate(dugite_peer_block_fetch_ms_bucket[5m])) |
| Validation errors by type | rate(dugite_validation_errors_total[5m]) |
| Protocol errors by type | rate(dugite_protocol_errors_total[5m]) |
| Leader election rate | rate(dugite_leader_checks_total[5m]) |
| Active N2N connections | dugite_n2n_connections_active |
| Disk available | dugite_disk_available_bytes |
Console Logging
In addition to the Prometheus endpoint, Dugite logs sync progress to the console every 5 seconds. The log output includes:
- Current slot and block number
- Epoch number
- UTxO count
- Sync percentage
- Blocks-per-second throughput
Example log line:
2026-03-12T12:34:56.789Z INFO dugite_node::node: Syncing progress="95.42%" epoch=512 block=11283746 tip=11300000 remaining=16254 speed="312 blk/s" utxos=15234892
Log output can be directed to stdout, file, or systemd journal. See Logging for full details on output targets, file rotation, and log level configuration.
Relay Node
A relay node is the public-facing component of a stake pool deployment. It bridges your block producer to the wider Cardano network while shielding the BP from direct internet exposure.
Role in Stake Pool Architecture
In a properly secured stake pool, the block producer never communicates directly with the public network. Instead, one or more relay nodes handle all external connectivity:
graph LR
Internet["Cardano Network"] <-->|N2N| Relay1["Relay 1<br/>Public IP"]
Internet <-->|N2N| Relay2["Relay 2<br/>Public IP"]
Relay1 <-->|Private| BP["Block Producer<br/>Private IP"]
Relay2 <-->|Private| BP
- Relays accept inbound connections from any Cardano peer, discover peers via bootstrap/ledger, and forward blocks to/from the BP.
- Block producer connects only to your relays, never to the public internet.
Running a Relay
A relay is simply a Dugite node started without block production keys:
dugite-node run \
--config config.json \
--topology topology-relay.json \
--database-path ./db \
--socket-path ./node.sock \
--host-addr 0.0.0.0 \
--port 3001
Tip: For initial sync, use Mithril snapshot import first to skip millions of blocks.
Relay Topology
A relay topology combines public peer discovery with a local root pointing to your block producer.
Preview Testnet Relay
{
"bootstrapPeers": [
{ "address": "preview-node.play.dev.cardano.org", "port": 3001 }
],
"localRoots": [
{
"accessPoints": [
{ "address": "10.0.0.10", "port": 3001 }
],
"advertise": false,
"hotValency": 1,
"warmValency": 2,
"trustable": true,
"behindFirewall": true
}
],
"publicRoots": [
{ "accessPoints": [], "advertise": false }
],
"useLedgerAfterSlot": 102729600
}
Mainnet Relay
{
"bootstrapPeers": [
{ "address": "backbone.cardano.iog.io", "port": 3001 },
{ "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 },
{ "address": "backbone.mainnet.emurgornd.com", "port": 3001 }
],
"localRoots": [
{
"accessPoints": [
{ "address": "10.0.0.10", "port": 3001 }
],
"advertise": false,
"hotValency": 1,
"warmValency": 2,
"trustable": true,
"behindFirewall": true
}
],
"publicRoots": [
{ "accessPoints": [], "advertise": false }
],
"useLedgerAfterSlot": 177724800
}
Key topology settings for relays:
- bootstrapPeers — Trusted initial peers for syncing from genesis or after restart.
- localRoots with behindFirewall: true — Your block producer. The relay waits for inbound connections from the BP rather than connecting outbound, which works correctly when the BP is behind a firewall.
- useLedgerAfterSlot — Enables ledger-based peer discovery once synced past this slot, providing decentralized peer resolution from on-chain stake pool registrations.
- advertise: false — Set to true if you want your relay to be discoverable via peer sharing.
Multiple Relays
Running two or more relays provides redundancy. If one relay goes down, the block producer stays connected through the other.
To run multiple relays on the same machine, use different ports, database paths, and socket paths:
# Relay 1 on port 3001
dugite-node run \
--config config.json \
--topology topology-relay1.json \
--database-path ./db-relay1 \
--socket-path ./relay1.sock \
--host-addr 0.0.0.0 \
--port 3001
# Relay 2 on port 3002
dugite-node run \
--config config.json \
--topology topology-relay2.json \
--database-path ./db-relay2 \
--socket-path ./relay2.sock \
--host-addr 0.0.0.0 \
--port 3002
Each relay's topology should include the block producer as a local root. The block producer's topology should list all relays (see Block Producer Topology).
For production deployments, run relays on separate machines or in different availability zones for better fault tolerance.
Firewall Configuration
Relay nodes need port 3001 (or your chosen port) open to the public for Cardano N2N traffic. The block producer should only be reachable from your relays.
Relay firewall rules
# Allow inbound Cardano N2N from anywhere
sudo ufw allow 3001/tcp
# Allow SSH (adjust as needed)
sudo ufw allow 22/tcp
sudo ufw enable
Block producer firewall rules
# Allow inbound only from relay IPs
sudo ufw allow from <relay1-ip> to any port 3001
sudo ufw allow from <relay2-ip> to any port 3001
# Allow SSH (adjust as needed)
sudo ufw allow 22/tcp
# Deny everything else
sudo ufw default deny incoming
sudo ufw enable
Important: The block producer should have no public-facing ports. All Cardano traffic flows exclusively through your relays.
Monitoring
Dugite exposes Prometheus metrics on port 12798 by default. Key metrics to watch on a relay:
| Metric | What it tells you |
|---|---|
| dugite_peers_connected | Number of active peer connections. Should be > 0 at all times |
| dugite_sync_progress_percent | Sync progress (10000 = 100%). Must be at 100% for the BP to produce blocks |
| dugite_blocks_received_total | Total blocks received from peers. Should increase steadily |
| dugite_slot_number | Current slot. Compare against network tip to verify sync |
curl -s http://localhost:12798/metrics | grep -E "peers_connected|sync_progress"
See Monitoring for the full list of available metrics and Grafana dashboard setup.
Next Steps
- Block Producer — Set up key generation, operational certificates, and block production
- Topology — Full topology format reference
- Monitoring — Prometheus metrics and alerting
Block Producer
Dugite can operate as a block-producing node (stake pool). This requires KES keys, VRF keys, and an operational certificate.
Architecture
A block producer is never directly exposed to the public internet. Instead, it sits behind one or more relay nodes that handle all external network connectivity. The relays forward blocks and transactions to the BP over a private network, and the BP announces forged blocks back through the relays.
Status (2026-05-09): Dugite block forging is operational and on-chain verified. Block 4265661 at slot 111661041 was forged by Dugite, accepted by the network, and confirmed on the canonical chain (Conway era, 1 tx, built upon by a subsequent block). Ongoing soak testing continues on the preview testnet via Sandstone Pool ([SAND], pool ID
6954ec11cf7097a693721104139b96c54e7f3e2a8f9e7577630f7856).
See the Complete Deployment section at the bottom of this page for the full architecture diagram and setup checklist.
Overview
A block producer is a node that has been registered as a stake pool and is capable of minting new blocks when it is elected as a slot leader. The block production pipeline involves:
- Slot leader check — Each slot, the node uses its VRF key and the epoch nonce to determine if it is elected to produce a block.
- Block forging — If elected, the node assembles a block from pending mempool transactions, signs it with the KES key, and includes the VRF proof.
- Block announcement — The forged block is propagated to connected peers via the N2N protocol.
Required Keys
Cold Keys (Offline)
Cold keys identify the stake pool and should be kept offline (air-gapped) after initial setup.
Generate cold keys using the CLI:
dugite-cli node key-gen \
--cold-verification-key-file cold.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter
KES Keys (Hot)
KES (Key Evolving Signature) keys are rotated periodically. Each KES key is valid for a limited number of KES periods (typically 62 periods of 129600 slots each on mainnet, approximately 90 days total).
Generate KES keys:
dugite-cli node key-gen-kes \
--verification-key-file kes.vkey \
--signing-key-file kes.skey
VRF Keys
VRF (Verifiable Random Function) keys are used for slot leader election. They are generated once and do not need rotation.
Generate VRF keys:
dugite-cli node key-gen-vrf \
--verification-key-file vrf.vkey \
--signing-key-file vrf.skey
Operational Certificate
The operational certificate binds the cold key to the current KES key. It must be regenerated each time the KES key is rotated.
Issue an operational certificate:
dugite-cli node issue-op-cert \
--kes-verification-key-file kes.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter \
--kes-period <current-kes-period> \
--out-file opcert.cert
The --kes-period should be set to the current KES period at the time of issuance. You can calculate the current KES period as:
current_kes_period = current_slot / slots_per_kes_period
On mainnet, slots_per_kes_period is 129600.
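For example, the calculation above in Python (the slot number is illustrative):

```python
SLOTS_PER_KES_PERIOD = 129_600  # mainnet value from the Shelley genesis

def current_kes_period(current_slot: int) -> int:
    # Integer division: KES periods are whole 129600-slot windows.
    return current_slot // SLOTS_PER_KES_PERIOD

print(current_kes_period(142_857_392))  # 1102
```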
Running as Block Producer
Pass the key and certificate paths when starting the node:
dugite-node run \
--config config.json \
--topology topology.json \
--database-path ./db \
--socket-path ./node.sock \
--host-addr 0.0.0.0 \
--port 3001 \
--shelley-kes-key kes.skey \
--shelley-vrf-key vrf.skey \
--shelley-operational-certificate opcert.cert
When all three arguments are provided, the node enters block production mode. Without them, it operates as a relay-only node.
Block Producer Topology
A block producer should not be directly exposed to the public internet. Instead, it should connect only to your relay nodes:
{
"bootstrapPeers": null,
"localRoots": [
{
"accessPoints": [
{ "address": "relay1.example.com", "port": 3001 },
{ "address": "relay2.example.com", "port": 3001 }
],
"advertise": false,
"hotValency": 2,
"warmValency": 3,
"trustable": true
}
],
"publicRoots": [{ "accessPoints": [], "advertise": false }],
"useLedgerAfterSlot": -1
}
Key points:
- No bootstrap peers — The block producer syncs exclusively through your relays.
- No public roots — No connections to unknown peers.
- Ledger peers disabled — useLedgerAfterSlot: -1 disables ledger-based peer discovery.
- Only local roots — All connections are to your own relay nodes.
Leader Schedule
You can compute your pool's leader schedule for an epoch:
dugite-cli query leadership-schedule \
--vrf-signing-key-file vrf.skey \
--epoch-nonce <64-char-hex> \
--epoch-start-slot <slot> \
--epoch-length 432000 \
--relative-stake 0.001 \
--active-slot-coeff 0.05
This outputs all slots where your pool is elected to produce a block in the given epoch.
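Under Ouroboros Praos, the per-slot election probability for a pool is φ(σ) = 1 − (1 − f)^σ, where f is the active slot coefficient and σ the pool's relative stake. A sketch of the expected leader-slot count per epoch (a back-of-the-envelope estimate, not the VRF-based schedule computation the command performs):

```python
def election_probability(relative_stake: float, active_slot_coeff: float) -> float:
    """Praos per-slot leadership probability: 1 - (1 - f)^sigma."""
    return 1.0 - (1.0 - active_slot_coeff) ** relative_stake

# Expected leader slots for a pool with 0.1% stake on mainnet (f = 0.05)
p = election_probability(0.001, 0.05)
expected = p * 432_000  # slots per mainnet epoch
print(round(expected, 1))  # roughly 22 slots per epoch
```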
KES Key Rotation
KES keys must be rotated before they expire. The rotation process:
- Generate new KES keys:
dugite-cli node key-gen-kes \
--verification-key-file kes-new.vkey \
--signing-key-file kes-new.skey
- Issue a new operational certificate with the new KES key (on the air-gapped machine):
dugite-cli node issue-op-cert \
--kes-verification-key-file kes-new.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter \
--kes-period <current-kes-period> \
--out-file opcert-new.cert
- Replace the KES key and certificate on the block producer and restart:
cp kes-new.skey kes.skey
cp opcert-new.cert opcert.cert
# Restart the node
Important: Always rotate KES keys before they expire. If a KES key expires, your pool will stop producing blocks until a new key is issued.
Security Recommendations
- Keep cold keys on an air-gapped machine. They are only needed to issue new operational certificates.
- Restrict access to the block producer machine. Only your relay nodes should be able to connect.
- Monitor your pool's block production. Use the Prometheus metrics endpoint to track dugite_blocks_forged_total.
- Set up KES key rotation reminders well before expiry (two weeks in advance is a good practice).
- Use firewalls to ensure the block producer is not reachable from the public internet.
Snapshot Recovery & Block Forging Readiness
When a block producer starts up, several subsystems must be initialized before it can begin forging blocks. The path to readiness depends on how the node was bootstrapped.
Epoch Nonce
The epoch nonce is critical for VRF leader election. It is serialized in the ledger snapshot alongside the consensus state (epoch_nonce, evolving_nonce, candidate_nonce, last_epoch_block_nonce), so it is immediately authoritative after a snapshot load or Mithril import — matching the Haskell cardano-node's treatment of praosStateEpochNonce. Forging is enabled as soon as the node catches up to the chain tip and its pool has non-zero stake in the "set" snapshot; no additional epoch boundary is required.
Pool Stake Reconstruction
On startup, after loading a ledger snapshot, the node rebuilds the stake distribution from the UTxO store to ensure consistency:
rebuild_stake_distribution()recomputes per-pool stake totals from the current UTxO set and delegation map.recompute_snapshot_pool_stakes()updates the mark/set/go snapshots so that the "set" snapshot (used for leader election) reflects the rebuilt distribution.
This runs automatically when the UTxO store is non-empty. After completion, the node logs the pool's stake in the "set" snapshot:
Block producer: pool stake in 'set' snapshot (used for leader election)
pool_id=<hash>, pool_stake_lovelace=<n>, total_active_stake_lovelace=<n>, relative_stake=<f>
If your pool shows zero stake after startup, verify:
- The pool registration certificate transaction is confirmed on-chain.
- At least one stake address is delegated to the pool and that delegation is confirmed.
- The UTxO store was properly attached (the node logs `Rebuilding stake distribution from UTxO store` on startup).
- The "set" snapshot epoch is recent enough to include your pool's registration and delegation.
Epoch Numbering
Each network has its own epoch length defined in the Shelley genesis configuration:
| Network | epoch_length | Approximate Duration |
|---|---|---|
| Mainnet | 432,000 | 5 days |
| Preview | 86,400 | 1 day |
| Preprod | 432,000 | 5 days |
When a ledger snapshot is loaded, the node recalculates the current epoch from the tip slot using the genesis parameters. If the snapshot was saved with incorrect epoch parameters (for example, using mainnet's default 432,000 instead of preview's 86,400), the epoch number baked into the snapshot will be wrong. The node detects this automatically and corrects it:
Snapshot epoch differs from computed epoch — correcting
snapshot_epoch=<wrong>, correct_epoch=<right>, tip_slot=<slot>
Without this correction, apply_block would attempt to process hundreds of spurious epoch transitions, and the stake snapshots would land at wrong epochs, causing pool_stake=0 for block producers.
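The recomputation behind this correction is plain integer arithmetic over the genesis parameters; a sketch with preview's epoch_length and an invented tip slot:

```shell
# Recompute the epoch from the tip slot, as the node does on snapshot load.
# epoch_length is preview's genesis value; tip_slot is an invented example.
epoch_length=86400
tip_slot=52000000

computed_epoch=$(( tip_slot / epoch_length ))   # integer division
slot_in_epoch=$(( tip_slot % epoch_length ))
echo "computed_epoch=$computed_epoch slot_in_epoch=$slot_in_epoch"
# computed_epoch=601 slot_in_epoch=73600
```

A snapshot saved with mainnet's 432,000 would have baked in epoch 120 for the same slot, which is exactly the mismatch the log line above reports.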
Fork Recovery
When a block producer forges a block but another pool wins the slot battle (their block is adopted by the network instead), the forged block becomes orphaned. Dugite detects this situation during chain synchronization and recovers automatically.
How Fork Detection Works
During ChainSync, the node presents historical chain points (up to 10 ancestors, walked backwards through the volatile DB) to the upstream peer. If the local tip is an orphaned forged block that the peer does not recognize, the ancestor blocks provide fallback intersection points.
Recovery Cases
Case A: Full Reset. The intersection falls back to Origin despite having a non-trivial ledger tip. This means no peer recognizes any of the node's chain points. The node:
- Clears the volatile DB.
- Rolls back the ledger state to Origin.
- Disables strict VRF verification (so replay can proceed without rejecting blocks due to stale nonce).
- Reconnects and replays from the ImmutableDB.
Case B: Targeted ImmutableDB Replay. The intersection is behind the ledger tip but not at Origin. The node:
- Clears the volatile DB.
- Detaches the LSM UTxO store (switches to fast in-memory replay).
- Replays the ImmutableDB from genesis up to the intersection slot.
- Reattaches the UTxO store and resumes syncing from the canonical chain.
In both cases, orphaned forged blocks are not propagated to downstream peers. The node resumes normal operation on the canonical chain after recovery completes.
Troubleshooting Block Producer Issues
"Block producer has ZERO stake"
Block producer has ZERO stake in 'set' snapshot — will not be elected slot leader.
This warning appears at startup when the "set" snapshot contains no stake for your pool. Possible causes:
- Pool not registered: Submit a pool registration certificate transaction and wait for it to be confirmed.
- No delegations: At least one stake address must delegate to the pool. Submit a delegation certificate and wait for confirmation.
- Snapshot too old: The "set" snapshot reflects stake from two epoch boundaries ago. A newly registered pool must wait 2 epoch transitions before appearing in the "set" snapshot.
- UTxO store not attached: If the node started without a UTxO store, stake reconstruction is skipped. Check for the `Rebuilding stake distribution from UTxO store` log message.
"VRF leader eligibility check failed"
VRF leader check failures during the first few epochs after a full replay are non-fatal and expected. The mark/set/go snapshot rotation means the "set" snapshot needs up to 3 epoch transitions to stabilize with correct stake distributions derived from the replayed state. During this window:
- The node may compute incorrect leader eligibility for some slots.
- Your pool may miss some leader slots — this is temporary and self-correcting.
Pool Registered but No Forge Attempts
If your pool is registered on-chain but the node never logs any forge attempts:
- Check the "set" snapshot log: Look for the startup message
Block producer: pool stake in 'set' snapshot. Verify thatpool_stake_lovelaceis greater than zero. - Check the "set" snapshot availability: If you see
Block producer: no 'set' snapshot available — leader election disabled until epoch transition, the node has not yet completed enough epoch transitions. Wait for at least 2 epoch boundaries. - Verify key files: Ensure
--shelley-kes-key,--shelley-vrf-key, and--shelley-operational-certificateare all provided and point to valid files. Without all three, the node runs in relay-only mode. - Check KES period: If the KES key has expired (current KES period exceeds the operational certificate's start period plus
maxKESEvolutions), rotate the KES key and issue a new operational certificate.
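The KES-expiry arithmetic can be sketched directly. The `slots_per_kes_period` and `max_kes_evolutions` values below are mainnet's Shelley-genesis values; the tip slot and opcert start period are invented:

```shell
# Has the KES key expired? current_period = tip_slot / slotsPerKESPeriod;
# the opcert is valid for maxKESEvolutions periods from its start period.
slots_per_kes_period=129600
max_kes_evolutions=62
tip_slot=120000000        # invented example
opcert_start_period=850   # invented example

current_period=$(( tip_slot / slots_per_kes_period ))
expiry_period=$(( opcert_start_period + max_kes_evolutions ))

if [ "$current_period" -ge "$expiry_period" ]; then
  echo "KES key expired (period $current_period >= $expiry_period): rotate and reissue the opcert"
else
  echo "KES periods remaining: $(( expiry_period - current_period ))"
fi
# KES key expired (period 925 >= 912): rotate and reissue the opcert
```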
macOS App Nap (macOS only)
macOS can suspend background processes via "App Nap" to save power. A suspended node misses every leader slot during the freeze window. Wrap the node in caffeinate to prevent this:
caffeinate -dimsu dugite-node run \
--config config.json \
--topology topology.json \
--database-path ./db \
--socket-path ./node.sock \
--shelley-kes-key kes.skey \
--shelley-vrf-key vrf.skey \
--shelley-operational-certificate opcert.cert
The -dimsu flags prevent disk-idle, display, idle, system, and user-idle sleep from suspending the process. Required for reliable block production on macOS development machines.
Complete Deployment
A full stake pool deployment consists of one block producer and one or more relay nodes working together:
graph TB
subgraph Public Network
Peers["Cardano Peers"]
end
subgraph Your Infrastructure
subgraph Relay Tier
R1["Relay 1<br/>Port 3001<br/>Public IP"]
R2["Relay 2<br/>Port 3001<br/>Public IP"]
end
subgraph Private Network
BP["Block Producer<br/>Port 3001<br/>Private IP<br/>KES + VRF + OpCert"]
end
end
Peers <-->|N2N| R1
Peers <-->|N2N| R2
R1 <-->|Private| BP
R2 <-->|Private| BP
Deployment Checklist
- Set up relay nodes (Relay Node guide)
  - Install Dugite on relay machines
  - Import Mithril snapshot for fast initial sync
  - Configure relay topology with bootstrap peers and BP as local root
  - Open port 3001 to the public
  - Start relay nodes and verify they sync to tip
- Set up the block producer (this page)
  - Install Dugite on the BP machine
  - Import Mithril snapshot
  - Generate cold keys, VRF keys, and KES keys
  - Issue an operational certificate
  - Configure BP topology with relays as local roots (no public peers)
  - Restrict firewall to relay IPs only
  - Start the BP node with `--shelley-kes-key`, `--shelley-vrf-key`, and `--shelley-operational-certificate`
- Register the stake pool on-chain (requires a transaction with a pool registration certificate)
- Verify block production
  - Confirm sync progress is 100% on all nodes
  - Check the `peers_connected` metric on relays and the BP
  - Monitor the `blocks_forged` metric on the BP after an epoch transition
  - Set up monitoring and KES rotation reminders
Local Testnet
A 3-node loopback testnet for verifying dugite block production and diffusion against the Haskell reference implementation. One dugite block producer, one dugite relay, and one cardano-node block producer, all on the same machine, all on the loopback interface — both BPs connect to the network only through the dugite relay.
What this is
graph LR
  dbp[dugite-bp<br/>N2N 3001<br/>metrics 12798<br/>pool1] <--> dr[dugite-relay<br/>N2N 3002<br/>metrics 12799<br/>hub]
  dr <--> cbp[cardano-node bp<br/>N2N 3003<br/>pool2]
The dugite relay is the only path between the two BPs. A block forged by dugite-bp must transit dugite-relay's BlockFetch server before cardano-bp can fetch it (and vice versa). This is what the soak test exercises.
The chain boots into Conway PV10 from a fresh genesis with two equally-staked
pools, 1-second slots, activeSlotsCoeff = 0.2, epochLength = 500, and
securityParam = 10. Blocks are minted every ~5 seconds on average, and the
30-minute soak crosses 3–4 epoch boundaries.
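These timing claims follow directly from the genesis parameters; a quick check:

```shell
# Devnet timing arithmetic: 1s slots, activeSlotsCoeff f=0.2, epochLength
# 500 slots, 1800s soak. awk handles the non-integer division.
awk 'BEGIN {
  slot_len = 1.0; f = 0.2; epoch_len = 500; soak_s = 1800
  printf "expected_block_time_s=%.1f\n", slot_len / f
  printf "epoch_duration_min=%.1f\n", epoch_len * slot_len / 60
  printf "epochs_in_soak=%.1f\n", soak_s / (epoch_len * slot_len)
}'
# expected_block_time_s=5.0
# epoch_duration_min=8.3
# epochs_in_soak=3.6
```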
Prerequisites
- `cardano-node` >= 11.0.1 and `cardano-cli` >= 11.0.0 on `$PATH`
- `target/release/dugite-node` built (`cargo build --release` from the repo root)
- `jq` for JSON manipulation
- ~2 GB of free disk for the soak
- macOS: the system built-in `caffeinate` (used to suppress App Nap during the soak). This dependency is macOS-only; on Linux the soak runs unwrapped.
The setup script's prereq check will refuse to run if any of these are missing.
One-time setup
./testnet/local-devnet/setup.sh
This generates a fresh genesis (4 files: byron, shelley, alonzo, conway),
key sets for two stake pools, four stake delegators, three genesis keys, and
one UTxO funding key. All output lands under testnet/local-devnet/genesis/,
testnet/local-devnet/keys/, and testnet/local-devnet/config/ (rendered
configs). Generated keys and genesis files are gitignored.
Expected output ends with:
[INFO] All configs + topologies rendered to testnet/local-devnet/config/
[INFO] Setup complete. Next: ./run.sh
The genesis start time is set to "now + 30 seconds." Re-run setup.sh if more
than ~5 minutes pass before you call run.sh (the start-time freshness check
will refuse to start a stale chain).
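The freshness guard reduces to simple epoch arithmetic; a sketch with fixed example timestamps (the real script reads systemStart from the rendered genesis):

```shell
# run.sh refuses to start once the genesis start is >300s in the past
# (setup.sh stamps systemStart = now + 30s). Both values are invented.
start_epoch=1750000000   # hypothetical systemStart as Unix seconds
now_epoch=1750000400     # hypothetical current time, 400s later
age=$(( now_epoch - start_epoch ))

if [ "$age" -gt 300 ]; then
  echo "Genesis is ${age} seconds old (>300s). Re-run ./setup.sh"
else
  echo "Genesis is fresh (${age}s old)"
fi
# Genesis is 400 seconds old (>300s). Re-run ./setup.sh
```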
Running the network
./testnet/local-devnet/run.sh
This starts the three nodes in the background (caffeinate-wrapped on macOS),
records PIDs to state/<node>.pid, sends logs to logs/<node>.log, and
exposes N2C sockets at:
- `testnet/local-devnet/state/dugite-relay.sock`
- `testnet/local-devnet/state/dugite-bp.sock`
- `testnet/local-devnet/state/cardano-bp.sock`
You can query each socket with cardano-cli (or dugite-cli, which speaks
the same N2C protocol):
cardano-cli query tip \
--testnet-magic 42 \
--socket-path testnet/local-devnet/state/dugite-bp.sock
To stop the network:
./testnet/local-devnet/stop.sh
This sends SIGTERM, waits 5 seconds, then SIGKILL if needed. DBs and logs
are preserved in state/ and logs/.
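The escalation can be sketched for a single PID file (a throwaway `sleep` stands in for a node process; the real stop.sh loops over all three PID files under `state/`):

```shell
# TERM, wait up to 5s, then KILL. The subshell trick detaches the demo
# process so kill -0 reflects real liveness rather than an unreaped zombie.
pidfile=$(mktemp)
( sleep 100 & echo $! > "$pidfile" )
pid=$(cat "$pidfile")

kill -TERM "$pid" 2>/dev/null
for _ in 1 2 3 4 5; do
  kill -0 "$pid" 2>/dev/null || break   # already gone: stop waiting
  sleep 1
done
if kill -0 "$pid" 2>/dev/null; then
  kill -KILL "$pid"                     # grace period elapsed: force-kill
fi
echo "stopped $pid"
```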
Running the soak test
./testnet/local-devnet/soak.sh # default: 1800s (30 minutes)
./testnet/local-devnet/soak.sh 300 # 5-minute smoke test
The soak runs three concurrent samplers while it's alive:
- tip-sampler — every 5 seconds, queries `tip` on each socket and appends `(ts, node, slot, block_no, hash, era)` rows to `tip-samples.csv`.
- block-recorder — tails the three node logs and writes one row per first sighting of each block, with observer, forge/recv flag, slot, hash, and (for forge events) the issuer's vkey.
- tx-injector — at T+2 min, T+10 min, and T+20 min, submits 5 self-transfer payment transactions to each of the 3 sockets (15 per wave, 45 total) and records each submission's txid and return code.
Evidence lands in testnet/local-devnet/evidence/<timestamp>/. A heartbeat
line is printed every 30 seconds with current tips from all three nodes.
Verifying results
After the soak finishes, soak.sh does not automatically run verify — run it
manually:
./testnet/local-devnet/verify.sh testnet/local-devnet/evidence/<timestamp>/
The verifier evaluates four pass/fail predicates and writes
evidence/<timestamp>/report.md. Predicates:
| # | Predicate | Pass condition |
|---|---|---|
| 1 | Block forge cross-check | Every confirmed (slot, hash) pair is seen by all three observers in blocks.csv. (Most-recent 10 blocks are excluded from the check to allow rollback grace.) |
| 2 | Per-BP forge attribution | Both pools forged >= 3 blocks each. Expected ~180 each at f=0.2, sigma=0.5 — failure at 3 is a real wiring bug, not a slot-lottery flake. |
| 3 | Transaction inclusion round-trip | Every submitted tx has submit_rc=0 and (when run with the devnet up) appears in all three nodes' UTxO sets at the genesis payment address. |
| 4 | Tip parity over time | At >=95% of 5-second ticks (excluding the first 60s warmup), all three nodes report tips within 2 blocks of each other. |
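The ~180 figure in predicate 2 matches the first-order Praos approximation f·sigma; a quick check against the exact per-slot leadership probability 1 − (1 − f)^sigma:

```shell
# Expected forge counts behind predicate 2. 1800 one-second slots in the
# 30-minute soak; f and sigma are the devnet's parameters.
awk 'BEGIN {
  f = 0.2; sigma = 0.5; slots = 1800
  p_exact  = 1 - (1 - f) ^ sigma   # exact Praos per-slot probability
  p_approx = f * sigma             # first-order approximation
  printf "p_exact=%.4f p_approx=%.2f expected_blocks=%.0f\n", p_exact, p_approx, p_approx * slots
}'
# p_exact=0.1056 p_approx=0.10 expected_blocks=180
```

With ~180 expected blocks per pool, a count of 3 is far outside slot-lottery variance, which is why the predicate treats it as a wiring bug.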
The report.md includes a metadata snapshot (versions, genesis hashes, magic),
counts (block events, tx submissions, tip samples), a forge-attribution
breakdown, and a per-predicate result table.
You can also self-test the verifier (without a real soak) using committed test fixtures:
./testnet/local-devnet/verify.sh --self-test
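For intuition, predicate 4's parity computation can be sketched with awk over a toy tip-samples.csv (column layout as in the soak section; rows are invented, and the real verifier additionally applies the 60-second warmup exclusion):

```shell
# Group tip samples by timestamp; a tick passes when the spread of
# block_no across the three nodes is <= 2. Rows below are invented.
cat > /tmp/tip-samples.csv <<'EOF'
100,dugite-bp,500,95,h1,conway
100,dugite-relay,500,95,h2,conway
100,cardano-bp,498,94,h3,conway
105,dugite-bp,505,96,h1,conway
105,dugite-relay,490,92,h2,conway
105,cardano-bp,505,96,h3,conway
EOF

awk -F, '
  !($1 in min) || $4 < min[$1] { min[$1] = $4 }
  !($1 in max) || $4 > max[$1] { max[$1] = $4 }
  END {
    pass = 0; total = 0
    for (ts in min) { total++; if (max[ts] - min[ts] <= 2) pass++ }
    printf "ticks_within_2_blocks=%d/%d\n", pass, total
  }' /tmp/tip-samples.csv
# ticks_within_2_blocks=1/2
```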
Topology & port reference
| Process | N2N | Metrics | Socket | Config | Topology |
|---|---|---|---|---|---|
| `dugite-bp` (pool1) | 3001 | 12798 | `dugite-bp.sock` | `dugite-bp.config.json` | `dugite-bp.topology.json` |
| `dugite-relay` (hub) | 3002 | 12799 | `dugite-relay.sock` | `dugite-relay.config.json` | `dugite-relay.topology.json` |
| `cardano-node` bp (pool2) | 3003 | — | `cardano-bp.sock` | `cardano-bp.config.json` | `cardano-bp.topology.json` |
The devnet uses the standard Cardano N2N port (3001) for dugite-bp and
single-digit increments for the relay (3002) and the Haskell BP (3003). The
metrics ports follow the same convention: dugite-bp keeps the well-known
Prometheus default (12798), the relay exposes 12799. If a public-network
soak is running on the same host it must be stopped before the devnet boots,
since both processes would otherwise bind 3001/12798.
Monitoring with dugite-monitor
Because dugite-bp runs on the default metrics port (12798), the bundled
TUI monitor connects with no overrides. In a separate terminal once the
devnet is up:
./target/release/dugite-monitor
To inspect the relay instead of the BP, point the monitor at its metrics port:
./target/release/dugite-monitor --metrics-url http://localhost:12799/metrics
The cardano-node BP does not expose a Prometheus endpoint in this devnet
(its EKG/Prometheus exporters are disabled to keep the configuration minimal
and avoid port collisions).
Configuration reference
The genesis is generated by cardano-cli conway genesis create-testnet-data
with two override fragments committed under config/spec/. Only fields that
differ from cardano-cli's defaults are listed below.
config/spec/shelley-spec.json:
| field | value | purpose |
|---|---|---|
| `slotLength` | 1.0 | 1-second slot duration |
| `activeSlotsCoeff` | 0.2 | f = 0.2; ~5s expected block time |
| `epochLength` | 500 | ~8.3 minutes per epoch -> 3–4 epoch transitions in the 30-min soak |
| `securityParam` | 10 | small k -> fast immutability (3k/f ~= 150 slots) |
| `updateQuorum` | 2 | matches the 3 genesis keys (2-of-3) |
| `maxLovelaceSupply` | 60_000_000_000_000_000 | 60 B ADA, mainnet-shaped |
| `networkMagic` | 42 | local devnet magic |
config/spec/conway-spec.json carries Conway governance parameters; for a
30-min run with no proposals only the protocol version matters. PV is set
to 10.0 so the chain boots straight into Conway.
Troubleshooting
- `ERROR: cardano-cli x.y.z < 11.0.0 required` — install a newer cardano-cli; see prerequisites.
- `ERROR: Port 3001 is in use` — another devnet (or the public soak rig) is using a port. Run `./stop.sh` or check `lsof -iTCP:3001`.
- `Genesis is N seconds old (>300s). Re-run ./setup.sh` — the start time has drifted; re-run setup.
- Tips not advancing after `run.sh` — check `logs/<node>.log` for the failing node. The most common cause is a KES key path mismatch (re-run `setup.sh`).
- Soak hangs on macOS — confirm `caffeinate` is wrapping the dugite processes via `ps auxw | grep caffeinate`. App Nap can freeze dugite for tens of minutes without it.
- `dugite-monitor` shows no data — confirm the BP is exposing metrics on 12798 with `curl -s localhost:12798/metrics | head`. If empty, the BP either failed to start or its config doesn't have the Prometheus exporter enabled (the devnet template enables it by default).
What this validates
- Dugite block production end-to-end (forge -> adopt -> diffuse)
- Dugite relay's bidirectional ChainSync/BlockFetch (Haskell <-> Rust <-> Haskell)
- Dugite N2N peer connection lifecycle on loopback
- Dugite N2C local-socket tx submission, query tip, query utxo
- Cross-implementation chain agreement under healthy conditions
What this does NOT validate
- Byron-era code paths (chain boots in Conway)
- Hard-fork combinator era transitions (none occur during the soak)
- Multi-relay diffusion topologies (single hub)
- Plutus phase-2 / governance enactment (no proposals or scripts)
- Mainnet-scale peer counts, NAT/firewall behaviour, or BGP-level routing
- Mithril snapshot import (covered by other tests)
Known dugite-node bugs surfaced by this testnet
Bringing up the devnet for the first time exposed four real defects in
dugite-node. Three were fixed on this branch as part of bringing the test
infrastructure online; the fourth is tracked separately and remains the
blocker for full predicate parity.
- Bug A (FIXED in this branch as `7e6a4af54`): a ChainSync intersection at Origin with a non-Origin local tip used to leave the node permanently stuck on its own fork — the node sat at the local tip and never re-intersected once the peer caught up. The fix disconnects after a backoff so the next reconnection can intersect at a real shared point. This was the root cause behind the "node appears to hang" reports.
- Bug B (FIXED in this branch as `59a5fc64d`): the live BlockFetch apply path did not push `LedgerDelta`s onto the `LedgerSeq`, so a fork-switch rollback would fail to find the rollback ledger state on shallow chains and silently keep the wrong tip. The fix pushes deltas in all apply paths (live, replay, and triggered-fork) so rollback is always possible up to k.
- Bug C (FIXED in this branch as `9d30beaf2`): the forge loop fired as soon as the node finished booting, even before any peer connection or ChainSync intersection had completed. On a fresh devnet, dugite-bp would self-forge an orphan block at chain start, then refuse to abandon it. The fix gates the forge loop on `peer_hot_count > 0` AND at least one successful ChainSync intersection.
- Bug D (NOT YET FIXED — tracked in #497): after initial sync, dugite-bp's chain selection does not switch to a peer's longer competing chain when two BPs forge concurrently. The peer chain's blocks are received and stored in VolatileDB, but the local chain selector never adopts them — `dugite_blocks_applied_total` stays stuck at the initial-sync count while `dugite_blocks_forged_total` continues to grow on the local fork. Predicates 1 (forge cross-check) and 3 (tx round-trip) will FAIL for `dugite-bp` until this is resolved. The relay is unaffected (it doesn't forge) and adopts the canonical chain cleanly.
Tracking: see the GitHub issue linked from
docs/superpowers/specs/2026-05-16-local-testnet-design.md.
Kubernetes Deployment
Dugite includes a Helm chart for deploying to Kubernetes as either a relay node or a block producer.
Prerequisites
- Kubernetes 1.25+
- Helm 3.x
- A `StorageClass` that supports `ReadWriteOnce` persistent volumes
Quick Start
Deploy a relay node on the preview testnet:
helm install dugite-relay ./charts/dugite-node \
--set network.name=preview
This will:
- Run a Mithril snapshot import (init container) for fast bootstrap
- Start the node syncing with the preview testnet
- Create a 100Gi persistent volume for the chain database
- Expose Prometheus metrics on port 12798
Chart Reference
Node Role
The chart supports two deployment modes:
# Relay node (default)
role: relay
# Block producer
role: producer
Network Selection
network:
name: preview # mainnet, preview, or preprod
port: 3001 # N2N port
hostAddr: "0.0.0.0"
Network magic is derived automatically from the network name. Override with network.magic if needed.
Persistence
persistence:
enabled: true
storageClass: "" # Use default StorageClass
size: 100Gi # 100Gi for testnet, 500Gi+ for mainnet
accessMode: ReadWriteOnce
existingClaim: "" # Use an existing PVC
Resources
resources:
requests:
cpu: "1"
memory: 4Gi
limits:
cpu: "4"
memory: 16Gi
For mainnet, increase memory limits to 24-32Gi during initial sync and ledger replay.
Mithril Import
mithril:
enabled: true # Run Mithril import on first startup
The init container is idempotent — it skips the import on subsequent restarts if blocks already exist.
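The skip condition amounts to an emptiness check on the block storage; a sketch, where the immutable-directory layout and file name are assumptions, and the demo pre-populates a fake chunk so the skip branch fires:

```shell
# Idempotency sketch of the init container's skip logic (paths assumed).
DB=$(mktemp -d)
mkdir -p "$DB/immutable"
touch "$DB/immutable/00000.chunk"   # simulate a prior import

if [ -n "$(ls -A "$DB/immutable" 2>/dev/null)" ]; then
  echo "blocks already present, skipping Mithril import"
else
  echo "empty database, would run: dugite-node mithril-import --database-path $DB"
fi
# blocks already present, skipping Mithril import
```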
Ledger Replay
ledger:
replayLimit: null # null = unlimited (replay all blocks)
pipelineDepth: 150 # Chain sync pipeline depth
After Mithril import, the node replays all imported blocks through the ledger to build correct UTxO state, delegations, and protocol parameters. Set replayLimit: 0 to skip replay for faster startup (at the cost of incomplete ledger state).
Metrics and Monitoring
metrics:
enabled: true
port: 12798
serviceMonitor:
enabled: false # Set true if using Prometheus Operator
interval: 30s
labels: {}
When serviceMonitor.enabled is true, the chart creates a ServiceMonitor resource for automatic Prometheus scraping.
Available metrics include sync_progress_percent, blocks_applied_total, utxo_count, epoch_number, peers_connected, and more. See Monitoring for the full list.
Relay Node Deployment
A relay node connects to the Cardano network, syncs blocks, and serves them to connected peers and local clients.
Minimal Relay
helm install dugite-relay ./charts/dugite-node \
--set network.name=mainnet \
--set persistence.size=500Gi
Relay with Custom Topology
helm install dugite-relay ./charts/dugite-node \
--set network.name=mainnet \
--set persistence.size=500Gi \
-f relay-values.yaml
relay-values.yaml:
topology:
bootstrapPeers:
- address: relays-new.cardano-mainnet.iohk.io
port: 3001
localRoots:
- accessPoints:
- address: dugite-producer.default.svc.cluster.local
port: 3001
advertise: false
trustable: true
valency: 1
publicRoots:
- accessPoints:
- address: relays-new.cardano-mainnet.iohk.io
port: 3001
advertise: false
useLedgerAfterSlot: 110332800
Relay with Prometheus Operator
helm install dugite-relay ./charts/dugite-node \
--set network.name=mainnet \
--set metrics.serviceMonitor.enabled=true \
--set metrics.serviceMonitor.labels.release=prometheus
Block Producer Deployment
A block producer creates blocks when elected as slot leader. It requires KES, VRF, and operational certificate keys.
Create Keys Secret
First, create a Kubernetes secret with your block producer keys:
kubectl create secret generic dugite-producer-keys \
--from-file=kes.skey=kes.skey \
--from-file=vrf.skey=vrf.skey \
--from-file=node.cert=node.cert
Deploy the Producer
helm install dugite-producer ./charts/dugite-node \
--set role=producer \
--set network.name=mainnet \
--set producer.existingSecret=dugite-producer-keys \
--set persistence.size=500Gi
Producer Security
When role=producer, the chart automatically creates a NetworkPolicy that:
- Restricts N2N ingress to pods labeled `app.kubernetes.io/component: relay`
- Allows metrics scraping from any pod in the cluster

Block producers should never be exposed directly to the internet.
Producer + Relay Architecture
A typical production deployment uses one or more relay nodes that shield the block producer:
graph LR
Internet[Cardano Network] --> R1[Relay 1]
Internet --> R2[Relay 2]
R1 --> BP[Block Producer]
R2 --> BP
BP -. blocks .-> R1
BP -. blocks .-> R2
Deploy both:
# Deploy the block producer
helm install dugite-producer ./charts/dugite-node \
--set role=producer \
--set network.name=mainnet \
--set producer.existingSecret=dugite-producer-keys \
-f producer-values.yaml
# Deploy relay(s) pointing to the producer
helm install dugite-relay ./charts/dugite-node \
--set role=relay \
--set network.name=mainnet \
-f relay-values.yaml
producer-values.yaml:
topology:
bootstrapPeers: []
localRoots:
- accessPoints:
- address: dugite-relay-dugite-node.default.svc.cluster.local
port: 3001
advertise: false
trustable: true
valency: 1
publicRoots: []
useLedgerAfterSlot: -1
relay-values.yaml:
topology:
bootstrapPeers:
- address: relays-new.cardano-mainnet.iohk.io
port: 3001
localRoots:
- accessPoints:
- address: dugite-producer-dugite-node.default.svc.cluster.local
port: 3001
advertise: false
trustable: true
valency: 1
publicRoots:
- accessPoints:
- address: relays-new.cardano-mainnet.iohk.io
port: 3001
advertise: false
useLedgerAfterSlot: 110332800
Verifying the Deployment
Check pod status:
kubectl get pods -l app.kubernetes.io/name=dugite-node
View logs:
kubectl logs -f deploy/dugite-relay-dugite-node
Query the node tip:
kubectl exec deploy/dugite-relay-dugite-node -- \
dugite-cli query tip --testnet-magic 2
Check metrics:
kubectl port-forward svc/dugite-relay-dugite-node 12798:12798
curl -s http://localhost:12798/metrics | grep sync_progress
Configuration Reference
All configurable values with defaults:
| Parameter | Default | Description |
|---|---|---|
| `role` | relay | Node role: relay or producer |
| `image.repository` | ghcr.io/michaeljfazio/dugite | Container image |
| `image.tag` | Chart appVersion | Image tag |
| `network.name` | preview | Network: mainnet, preview, preprod |
| `network.port` | 3001 | N2N port |
| `mithril.enabled` | true | Run Mithril import on first start |
| `ledger.replayLimit` | null | Max blocks to replay (null = unlimited) |
| `ledger.pipelineDepth` | 150 | Chain sync pipeline depth |
| `persistence.enabled` | true | Enable persistent storage |
| `persistence.size` | 100Gi | Volume size |
| `metrics.enabled` | true | Enable Prometheus metrics |
| `metrics.port` | 12798 | Metrics port |
| `metrics.serviceMonitor.enabled` | false | Create ServiceMonitor |
| `producer.existingSecret` | "" | Secret with KES/VRF/cert keys |
| `resources.requests.cpu` | 1 | CPU request |
| `resources.requests.memory` | 4Gi | Memory request |
| `resources.limits.memory` | 16Gi | Memory limit |
CLI Overview
Dugite provides dugite-cli, a cardano-cli compatible command-line interface for interacting with a running Dugite node and managing keys, transactions, and governance.
Binary
dugite-cli [COMMAND] [OPTIONS]
Command Groups
| Command | Description |
|---|---|
| `address` | Address generation and manipulation |
| `key` | Payment and stake key generation |
| `transaction` | Transaction building, signing, and submission |
| `query` | Node queries (tip, UTxO, protocol parameters, etc.) |
| `stake-address` | Stake address registration, delegation, and vote delegation |
| `stake-pool` | Stake pool operations (retirement certificates) |
| `governance` | Conway governance (DRep, voting, proposals) |
| `node` | Node key operations (cold keys, KES, VRF, operational certificates) |
Common Patterns
Socket Path
Most commands that interact with a running node require --socket-path to specify the Unix domain socket:
dugite-cli query tip --socket-path ./node.sock
The default socket path is node.sock in the current directory.
Testnet Magic
When querying a node on a testnet, pass the --testnet-magic flag:
dugite-cli query tip --socket-path ./node.sock --testnet-magic 2
For mainnet, --testnet-magic is not needed (defaults to mainnet magic 764824073).
Text Envelope Format
Keys, certificates, and transactions are stored in the cardano-node "text envelope" JSON format:
{
"type": "PaymentSigningKeyShelley_ed25519",
"description": "Payment Signing Key",
"cborHex": "5820..."
}
This format is interchangeable with files produced by cardano-cli.
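A quick structural check on an envelope file with jq. The CBOR prefix `5820` marks a 32-byte string, the length of an ed25519 key, so a signing-key `cborHex` is 68 hex characters; the key bytes here are zeros, not a real key:

```shell
# Build a dummy envelope and verify the cborHex shape (assumes jq).
dummy=$(printf '%064d' 0)            # 64 hex chars = 32 dummy key bytes
cat > /tmp/payment.skey <<EOF
{
  "type": "PaymentSigningKeyShelley_ed25519",
  "description": "Payment Signing Key",
  "cborHex": "5820${dummy}"
}
EOF

# 0x58 0x20 = CBOR byte string of length 32, then the 32 key bytes.
jq -e '.cborHex | startswith("5820") and (length == 68)' /tmp/payment.skey
# true
```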
Output Files
Commands that produce artifacts use --out-file:
dugite-cli transaction build ... --out-file tx.body
dugite-cli transaction sign ... --out-file tx.signed
Help
Every command supports --help:
dugite-cli --help
dugite-cli transaction --help
dugite-cli transaction build --help
dugite-node Reference
dugite-node is the main Dugite node binary. It supports two subcommands: run (start the node) and mithril-import (import a Mithril snapshot for fast initial sync).
run
Start the Dugite node:
dugite-node run [OPTIONS]
Options
| Flag | Default | Description |
|---|---|---|
| `--config` | config/mainnet-config.json | Path to the node configuration file |
| `--topology` | config/mainnet-topology.json | Path to the topology file |
| `--database-path` | db | Path to the database directory |
| `--socket-path` | node.sock | Unix domain socket path for N2C (local client) connections |
| `--port` | 3001 | TCP port for N2N (node-to-node) connections |
| `--host-addr` | 0.0.0.0 | Host address to bind to |
| `--metrics-port` | 12798 | Prometheus metrics port (set to 0 to disable) |
| `--shelley-kes-key` | | Path to the KES signing key (enables block production) |
| `--shelley-vrf-key` | | Path to the VRF signing key (enables block production) |
| `--shelley-operational-certificate` | | Path to the operational certificate (enables block production) |
| `--log-output` | stdout | Log output target: stdout, file, or journald. Can be specified multiple times. |
| `--log-format` | text | Log format: text (human-readable) or json (structured) |
| `--log-level` | info | Log level (trace, debug, info, warn, error). Overridden by RUST_LOG. |
| `--log-dir` | logs | Directory for log files (used with --log-output file) |
| `--log-file-rotation` | daily | Log file rotation strategy: daily, hourly, or never |
| `--log-no-color` | false | Disable ANSI colors in stdout output |
| `--mempool-max-tx` | 16384 | Maximum number of transactions in the mempool |
| `--mempool-max-bytes` | 536870912 | Maximum mempool size in bytes (default 512 MB) |
| `--snapshot-max-retained` | 2 | Maximum number of ledger snapshots to retain on disk |
| `--snapshot-bulk-min-blocks` | 50000 | Minimum blocks between bulk-sync snapshots |
| `--snapshot-bulk-min-secs` | 360 | Minimum seconds between bulk-sync snapshots |
| `--storage-profile` | high-memory | Storage profile: ultra-memory (32GB), high-memory (16GB), low-memory (8GB), or minimal (4GB) |
| `--immutable-index-type` | | Override block index type: in-memory or mmap |
| `--utxo-backend` | | Override UTxO backend: in-memory or lsm |
| `--utxo-memtable-size-mb` | | Override LSM memtable size in MB |
| `--utxo-block-cache-size-mb` | | Override LSM block cache size in MB |
| `--utxo-bloom-filter-bits` | | Override LSM bloom filter bits per key |
Relay Node (default)
Run as a relay node with no block production keys:
dugite-node run \
--config config/preview-config.json \
--topology config/preview-topology.json \
--database-path ./db-preview \
--socket-path ./node.sock \
--host-addr 0.0.0.0 \
--port 3001
Block Producer
Run as a block producer by providing all three key/certificate paths:
dugite-node run \
--config config/preview-config.json \
--topology config/preview-topology.json \
--database-path ./db-preview \
--socket-path ./node.sock \
--host-addr 0.0.0.0 \
--port 3001 \
--shelley-kes-key ./keys/kes.skey \
--shelley-vrf-key ./keys/vrf.skey \
--shelley-operational-certificate ./keys/opcert.cert
When all three block producer flags are provided, the node enters block production mode. The cold signing key is not needed at runtime — the cold verification key is extracted from the operational certificate, matching cardano-node behavior.
If any of the three flags is missing, the node runs in relay-only mode.
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `DUGITE_PIPELINE_DEPTH` | 300 | ChainSync pipeline depth (number of blocks requested ahead) |
| `RUST_LOG` | info | Log level filter (e.g., debug, info, warn, dugite_node=debug). Overrides --log-level. |
See Logging for details on output targets, file rotation, and per-crate filtering.
Configuration File
The --config file follows the same JSON format as cardano-node. Key fields:
{
"Protocol": "Cardano",
"RequiresNetworkMagic": "RequiresMagic",
"ByronGenesisFile": "byron-genesis.json",
"ShelleyGenesisFile": "shelley-genesis.json",
"AlonzoGenesisFile": "alonzo-genesis.json",
"ConwayGenesisFile": "conway-genesis.json"
}
Genesis file paths are resolved relative to the directory containing the config file.
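The resolution rule in miniature (the sed extraction is a stand-in for the node's JSON parsing; paths are invented):

```shell
# Join dirname(config file) with the relative genesis path from the JSON.
dir=$(mktemp -d)
printf '{ "ShelleyGenesisFile": "shelley-genesis.json" }\n' > "$dir/node-config.json"

config="$dir/node-config.json"
rel=$(sed -n 's/.*"ShelleyGenesisFile": *"\([^"]*\)".*/\1/p' "$config")
resolved="$(dirname "$config")/$rel"
echo "$resolved"     # <tmpdir>/shelley-genesis.json
```

So a config at `config/preview-config.json` referencing `shelley-genesis.json` loads `config/shelley-genesis.json`, regardless of the node's working directory.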
Metrics
When --metrics-port is non-zero, Prometheus metrics are served at http://localhost:<port>/metrics. See Monitoring for the full list of available metrics.
mithril-import
Import a Mithril snapshot for fast initial sync. This downloads and verifies a certified snapshot from a Mithril aggregator, then imports all blocks into the local database.
dugite-node mithril-import [OPTIONS]
Options
| Flag | Default | Description |
|---|---|---|
| `--network-magic` | 764824073 | Network magic value |
| `--database-path` | db | Path to the database directory |
| `--temp-dir` | | Temporary directory for download and extraction (uses system temp if omitted) |
| `--log-output` | stdout | Log output target: stdout, file, or journald. Can be specified multiple times. |
| `--log-format` | text | Log format: text (human-readable) or json (structured) |
| `--log-level` | info | Log level (trace, debug, info, warn, error). Overridden by RUST_LOG. |
| `--log-dir` | logs | Directory for log files (used with --log-output file) |
| `--log-file-rotation` | daily | Log file rotation strategy: daily, hourly, or never |
| `--log-no-color` | false | Disable ANSI colors in stdout output |
Network Magic Values
| Network | Magic |
|---|---|
| Mainnet | 764824073 |
| Preview | 2 |
| Preprod | 1 |
Example: Preview Testnet
dugite-node mithril-import \
--network-magic 2 \
--database-path ./db-preview
# Then start the node to sync from the snapshot to tip
dugite-node run \
--config config/preview-config.json \
--topology config/preview-topology.json \
--database-path ./db-preview \
--socket-path ./node.sock
The import process:
- Downloads the latest snapshot from the Mithril aggregator
- Verifies the snapshot digest (SHA256)
- Extracts and parses immutable chunk files
- Imports blocks into ChainDB with CRC32 verification
- Supports resume — skips blocks already in the database
On preview testnet, importing ~4M blocks takes approximately 2 minutes.
Key Generation
Dugite CLI supports generating all key types needed for Cardano operations.
Payment Keys
Generate an Ed25519 key pair for payments:
dugite-cli key generate-payment-key \
--signing-key-file payment.skey \
--verification-key-file payment.vkey
Output files:
- payment.skey — Payment signing key (keep secret)
- payment.vkey — Payment verification key (safe to share)
Stake Keys
Generate an Ed25519 key pair for staking:
dugite-cli key generate-stake-key \
--signing-key-file stake.skey \
--verification-key-file stake.vkey
Output files:
- stake.skey — Stake signing key
- stake.vkey — Stake verification key
Verification Key Hash
Compute the Blake2b-224 hash of any verification key:
dugite-cli key verification-key-hash \
--verification-key-file payment.vkey
This outputs the 28-byte key hash in hexadecimal, used in addresses and certificates.
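The hash can be reproduced with any Blake2b implementation. A sketch using Python's hashlib (the key bytes here are illustrative, not a real key):

```python
import hashlib

# Illustrative 32-byte Ed25519 verification key
vkey_bytes = bytes.fromhex("a1" * 32)

# Blake2b with a 28-byte (224-bit) digest, as used for Cardano key hashes
key_hash = hashlib.blake2b(vkey_bytes, digest_size=28).hexdigest()
print(len(key_hash))  # 56 hex characters = 28 bytes
```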
DRep Keys
Generate keys for a Delegated Representative (Conway governance):
dugite-cli governance drep key-gen \
--signing-key-file drep.skey \
--verification-key-file drep.vkey
Get the DRep ID:
# Bech32 format (default)
dugite-cli governance drep id \
--drep-verification-key-file drep.vkey
# Hex format
dugite-cli governance drep id \
--drep-verification-key-file drep.vkey \
--output-format hex
Node Keys
Cold Keys
Generate cold keys and an operational certificate issue counter:
dugite-cli node key-gen \
--cold-verification-key-file cold.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter
KES Keys
Generate Key Evolving Signature keys (rotated periodically):
dugite-cli node key-gen-kes \
--verification-key-file kes.vkey \
--signing-key-file kes.skey
VRF Keys
Generate Verifiable Random Function keys (for slot leader election):
dugite-cli node key-gen-vrf \
--verification-key-file vrf.vkey \
--signing-key-file vrf.skey
Operational Certificate
Issue an operational certificate binding the cold key to the current KES key:
dugite-cli node issue-op-cert \
--kes-verification-key-file kes.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter \
--kes-period 400 \
--out-file opcert.cert
Address Generation
Payment Address
Build a payment address from keys:
# Enterprise address (no staking)
dugite-cli address build \
--payment-verification-key-file payment.vkey \
--testnet-magic 2
# Base address (with staking)
dugite-cli address build \
--payment-verification-key-file payment.vkey \
--stake-verification-key-file stake.vkey \
--testnet-magic 2
# Mainnet address
dugite-cli address build \
--payment-verification-key-file payment.vkey \
--stake-verification-key-file stake.vkey \
--mainnet
Key File Format
All keys are stored in the cardano-node text envelope format:
{
"type": "PaymentSigningKeyShelley_ed25519",
"description": "Payment Signing Key",
"cborHex": "5820a1b2c3d4..."
}
The cborHex field contains the CBOR-encoded key bytes. The type field identifies the key type and is used for validation when loading keys.
Key files generated by Dugite are compatible with cardano-cli and vice versa.
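For a 32-byte key, the cborHex begins with the CBOR byte-string header 5820 (major type 2, length 32). A sketch of unwrapping an envelope (the key bytes are illustrative):

```python
import json

envelope = json.loads("""{
  "type": "PaymentSigningKeyShelley_ed25519",
  "description": "Payment Signing Key",
  "cborHex": "5820%s"
}""" % ("ab" * 32))

raw = bytes.fromhex(envelope["cborHex"])
# 0x58 0x20 = CBOR byte string of length 32
assert raw[0] == 0x58 and raw[1] == 32
key_bytes = raw[2:]
print(len(key_bytes))  # 32
```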
Complete Workflow Example
Generate all keys needed for a basic wallet:
# 1. Generate payment keys
dugite-cli key generate-payment-key \
--signing-key-file payment.skey \
--verification-key-file payment.vkey
# 2. Generate stake keys
dugite-cli key generate-stake-key \
--signing-key-file stake.skey \
--verification-key-file stake.vkey
# 3. Build a testnet address
dugite-cli address build \
--payment-verification-key-file payment.vkey \
--stake-verification-key-file stake.vkey \
--testnet-magic 2
# 4. Get the payment key hash
dugite-cli key verification-key-hash \
--verification-key-file payment.vkey
Transactions
Dugite CLI supports the full transaction lifecycle: building, signing, submitting, and inspecting transactions.
Building a Transaction
dugite-cli transaction build \
--tx-in <tx_hash>#<index> \
--tx-out <address>+<lovelace> \
--change-address <address> \
--fee <lovelace> \
--out-file tx.body
Arguments
| Argument | Description |
|---|---|
--tx-in | Transaction input in tx_hash#index format. Can be specified multiple times |
--tx-out | Transaction output in address+lovelace format. Can be specified multiple times |
--change-address | Address to receive change |
--fee | Fee in lovelace (default: 200000) |
--ttl | Time-to-live slot number (optional) |
--certificate-file | Path to a certificate file to include (can be repeated) |
--withdrawal | Withdrawal in stake_address+lovelace format (can be repeated) |
--metadata-json-file | Path to a JSON metadata file (optional) |
--out-file | Output file for the transaction body |
Example: Simple ADA Transfer
dugite-cli transaction build \
--tx-in "abc123...#0" \
--tx-out "addr_test1qz...+5000000" \
--change-address "addr_test1qp..." \
--fee 200000 \
--ttl 50000000 \
--out-file tx.body
Multi-Asset Outputs
To include native tokens in an output, use the extended format:
address+lovelace+"policy_id.asset_name quantity"
Example:
dugite-cli transaction build \
--tx-in "abc123...#0" \
--tx-out 'addr_test1qz...+2000000+"a1b2c3...d4e5f6.4d79546f6b656e 100"' \
--change-address "addr_test1qp..." \
--fee 200000 \
--out-file tx.body
Multiple tokens can be separated with + inside the quoted string:
"policy1.asset1 100+policy2.asset2 50"
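A hypothetical parser illustrates this grammar (a sketch, not Dugite's actual implementation):

```python
def parse_tx_out(spec: str):
    """Parse address+lovelace[+"policy.asset qty+..."] (illustrative)."""
    addr, lovelace, *rest = spec.split("+", 2)
    assets = []
    if rest:
        # Strip the surrounding quotes, then split tokens on '+'
        for token in rest[0].strip('"').split("+"):
            policy_asset, qty = token.rsplit(" ", 1)
            policy, name = policy_asset.split(".", 1)
            assets.append((policy, name, int(qty)))
    return addr, int(lovelace), assets

print(parse_tx_out('addr_test1qz+2000000+"p1.a1 100+p2.a2 50"'))
# ('addr_test1qz', 2000000, [('p1', 'a1', 100), ('p2', 'a2', 50)])
```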
Including Certificates
dugite-cli transaction build \
--tx-in "abc123...#0" \
--tx-out "addr_test1qz...+5000000" \
--change-address "addr_test1qp..." \
--fee 200000 \
--certificate-file stake-reg.cert \
--certificate-file stake-deleg.cert \
--out-file tx.body
Including Metadata
Create a metadata JSON file with integer keys:
{
"674": {
"msg": ["Hello, Cardano!"]
}
}
dugite-cli transaction build \
--tx-in "abc123...#0" \
--tx-out "addr_test1qz...+5000000" \
--change-address "addr_test1qp..." \
--fee 200000 \
--metadata-json-file metadata.json \
--out-file tx.body
Signing a Transaction
dugite-cli transaction sign \
--tx-body-file tx.body \
--signing-key-file payment.skey \
--out-file tx.signed
Multiple signing keys can be provided:
dugite-cli transaction sign \
--tx-body-file tx.body \
--signing-key-file payment.skey \
--signing-key-file stake.skey \
--out-file tx.signed
Submitting a Transaction
dugite-cli transaction submit \
--tx-file tx.signed \
--socket-path ./node.sock
The node validates the transaction (Phase-1 and Phase-2 for Plutus transactions) and, if valid, adds it to the mempool for propagation.
Viewing a Transaction
dugite-cli transaction view --tx-file tx.signed
Output includes:
- Transaction type
- CBOR size
- Transaction hash
- Number of inputs and outputs
- Fee
- TTL (if set)
Transaction ID
Compute the transaction hash:
dugite-cli transaction txid --tx-file tx.body
Works with both transaction body files and signed transaction files.
Calculate Minimum Fee
dugite-cli transaction calculate-min-fee \
--tx-body-file tx.body \
--witness-count 2 \
--protocol-params-file protocol-params.json
The fee calculation accounts for:
- Base fee: txFeeFixed + txFeePerByte * tx_size
- Script execution: executionUnitPrices * total_ExUnits for any Plutus witnesses
- Reference script surcharge: CIP-0112 tiered fee for reference scripts (25 KiB tiers, 1.2x multiplier per tier)
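The base fee and the tiered reference-script surcharge can be sketched as follows. The parameter values are illustrative (not queried from a node), and the surcharge function is a simplified model of the tiered scheme:

```python
def base_fee(tx_size, tx_fee_fixed=155381, tx_fee_per_byte=44):
    # txFeeFixed + txFeePerByte * tx_size
    return tx_fee_fixed + tx_fee_per_byte * tx_size

def ref_script_fee(total_bytes, price_per_byte=15, tier=25 * 1024, multiplier=1.2):
    # Simplified tiered surcharge: the per-byte price grows 1.2x
    # for each successive 25 KiB tier of reference script bytes.
    fee, remaining, price = 0.0, total_bytes, float(price_per_byte)
    while remaining > 0:
        chunk = min(remaining, tier)
        fee += chunk * price
        remaining -= chunk
        price *= multiplier
    return round(fee)

print(base_fee(300))           # 168581
print(ref_script_fee(30_000))  # first 25600 bytes priced at 15, the rest at 18
```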
To get the current protocol parameters:
dugite-cli query protocol-parameters \
--socket-path ./node.sock \
--out-file protocol-params.json
Calculate Minimum Required UTxO
Compute the minimum lovelace required for a transaction output to satisfy the ledger's minimum-UTxO rule (governed by utxoCostPerByte in Babbage and later eras):
dugite-cli transaction calculate-min-required-utxo \
--protocol-params-file protocol-params.json \
--tx-out "addr_test1qz...+0+\"policy1.asset1 100\""
Output:
Minimum required lovelace: 1724100
This is particularly useful when constructing outputs that carry native tokens, since the minimum lovelace depends on the byte-size of the value bundle (number of policy IDs, asset names, and quantities).
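The Babbage-era rule can be sketched as follows; the output sizes are illustrative, and 4310 is the current mainnet utxoCostPerByte:

```python
def min_required_utxo(output_size_bytes, utxo_cost_per_byte=4310):
    # Babbage-era rule: (160-byte overhead + serialized TxOut size) * utxoCostPerByte
    return (160 + output_size_bytes) * utxo_cost_per_byte

# A bare-ADA output serializes small; adding policy IDs and asset names
# grows the serialized size and therefore the minimum (sizes illustrative).
print(min_required_utxo(67))   # 978370
print(min_required_utxo(240))  # 1724000 (a multi-asset output)
```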
Creating Witnesses
For multi-signature workflows, you can create witnesses separately and assemble them:
Create a Witness
dugite-cli transaction witness \
--tx-body-file tx.body \
--signing-key-file payment.skey \
--out-file payment.witness
Assemble a Transaction
dugite-cli transaction assemble \
--tx-body-file tx.body \
--witness-file payment.witness \
--witness-file stake.witness \
--out-file tx.signed
Policy ID
Compute the policy ID (Blake2b-224 hash) of a native script:
dugite-cli transaction policyid --script-file policy.script
Complete Workflow
# 1. Query UTxOs to find inputs
dugite-cli query utxo \
--address addr_test1qz... \
--socket-path ./node.sock \
--testnet-magic 2
# 2. Get protocol parameters for fee calculation
dugite-cli query protocol-parameters \
--socket-path ./node.sock \
--testnet-magic 2 \
--out-file pp.json
# 3. Build the transaction
dugite-cli transaction build \
--tx-in "abc123...#0" \
--tx-out "addr_test1qr...+5000000" \
--change-address "addr_test1qz..." \
--fee 200000 \
--out-file tx.body
# 4. Calculate the exact fee
dugite-cli transaction calculate-min-fee \
--tx-body-file tx.body \
--witness-count 1 \
--protocol-params-file pp.json
# 5. Rebuild with the correct fee (repeat step 3 with updated --fee)
# 6. Sign
dugite-cli transaction sign \
--tx-body-file tx.body \
--signing-key-file payment.skey \
--out-file tx.signed
# 7. Submit
dugite-cli transaction submit \
--tx-file tx.signed \
--socket-path ./node.sock
Queries
Dugite CLI provides a comprehensive set of queries against a running node via the N2C (Node-to-Client) protocol over a Unix domain socket.
Chain Tip
Query the current chain tip:
dugite-cli query tip --socket-path ./node.sock
For testnets:
dugite-cli query tip --socket-path ./node.sock --testnet-magic 2
Output:
{
"slot": 73429851,
"hash": "a1b2c3d4e5f6...",
"block": 2847392,
"epoch": 170,
"era": "Conway",
"syncProgress": "99.87"
}
UTxO Query
Query UTxOs at a specific address:
dugite-cli query utxo \
--address addr_test1qz... \
--socket-path ./node.sock \
--testnet-magic 2
Output:
TxHash#Ix Datum Lovelace
------------------------------------------------------------------------------------------------
a1b2c3d4...#0 no 5000000
e5f6a7b8...#1 yes 10000000
Total UTxOs: 2
Protocol Parameters
Query current protocol parameters:
# Print to stdout
dugite-cli query protocol-parameters \
--socket-path ./node.sock
# Save to file
dugite-cli query protocol-parameters \
--socket-path ./node.sock \
--out-file protocol-params.json
The output is a JSON object containing all active protocol parameters, including fee settings, execution unit limits, and governance thresholds.
Stake Distribution
Query the stake distribution across all registered pools:
dugite-cli query stake-distribution \
--socket-path ./node.sock
Output:
PoolId Stake (lovelace) Pledge (lovelace)
----------------------------------------------------------------------------------------------------------
pool1abc... 15234892000000 500000000000
pool1def... 8923451000000 250000000000
Total pools: 3200
Stake Address Info
Query delegation and rewards for a stake address:
dugite-cli query stake-address-info \
--address stake_test1uz... \
--socket-path ./node.sock \
--testnet-magic 2
Output:
[
{
"address": "stake_test1uz...",
"delegation": "pool1abc...",
"rewardAccountBalance": 5234000
}
]
Stake Pools
List all registered stake pools with their parameters:
dugite-cli query stake-pools \
--socket-path ./node.sock
Output:
PoolId Pledge (ADA) Cost (ADA) Margin
----------------------------------------------------------------------------------------------------
pool1abc... 500.000000 340.000000 1.00%
pool1def... 250.000000 340.000000 2.50%
Total pools: 3200
Pool Parameters
Query detailed parameters for a specific pool:
dugite-cli query pool-params \
--socket-path ./node.sock \
--stake-pool-id pool1abc...
Stake Snapshots
Query the mark/set/go stake snapshots:
dugite-cli query stake-snapshot \
--socket-path ./node.sock
# Filter by pool
dugite-cli query stake-snapshot \
--socket-path ./node.sock \
--stake-pool-id pool1abc...
Governance State (Conway)
Query the overall governance state:
dugite-cli query gov-state --socket-path ./node.sock
Output:
Governance State (Conway)
========================
Treasury: 1234567890 ADA
Registered DReps: 456
Committee Members: 7
Active Proposals: 12
Proposals:
Type TxId Yes No Abstain
----------------------------------------------------
InfoAction a1b2c3#0 42 3 5
TreasuryWithdrawals d4e5f6#1 28 12 8
DRep State (Conway)
Query registered DReps:
# All DReps
dugite-cli query drep-state --socket-path ./node.sock
# Specific DRep by key hash
dugite-cli query drep-state \
--socket-path ./node.sock \
--drep-key-hash a1b2c3d4...
Output:
DRep State (Conway)
===================
Total DReps: 456
Credential Hash Deposit (ADA) Epoch
--------------------------------------------------------------------------------------------
a1b2c3d4... 500 412
Anchor: https://example.com/drep-metadata.json
Committee State (Conway)
Query the constitutional committee:
dugite-cli query committee-state --socket-path ./node.sock
Output:
Constitutional Committee State (Conway)
=======================================
Active Members: 7
Resigned Members: 1
Cold Credential Hot Credential
--------------------------------------------------------------------------------------------------------------------------------------
a1b2c3d4... e5f6a7b8...
Resigned:
d4e5f6a7...
Transaction Mempool
Query the node's transaction mempool:
# Mempool info (size, capacity, tx count)
dugite-cli query tx-mempool info --socket-path ./node.sock
# Check if a specific transaction is in the mempool
dugite-cli query tx-mempool has-tx \
--socket-path ./node.sock \
--tx-id a1b2c3d4...
Info output:
Mempool snapshot at slot 73429851:
Capacity: 2000000 bytes
Size: 45320 bytes
Transactions: 12
Treasury
Query the treasury and reserves:
dugite-cli query treasury --socket-path ./node.sock
Output:
Account State
=============
Treasury: 1234567 ADA
Reserves: 9876543 ADA
Constitution (Conway)
Query the current constitution:
dugite-cli query constitution --socket-path ./node.sock
Output:
Constitution
============
URL: https://constitution.gov/hash.json
Data Hash: a1b2c3d4e5f6...
Script Hash: none
Ratification State (Conway)
Query the ratification state (enacted/expired proposals from the most recent epoch transition):
dugite-cli query ratify-state --socket-path ./node.sock
Output:
Ratification State
==================
Enacted proposals: 1
a1b2c3d4e5f6...#0
Expired proposals: 2
d4e5f6a7b8c9...#1
e5f6a7b8c9d0...#0
Delayed: false
Slot Number
Convert a wall-clock time to a Cardano slot number:
dugite-cli query slot-number \
--socket-path ./node.sock \
--testnet-magic 2 \
--utc-time "2026-03-20T12:00:00Z"
Output:
Slot: 73851200
This is useful for computing TTL values or verifying that a specific point in time falls within a given epoch.
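Slot conversion is simple arithmetic from the network's system start time and slot length. A sketch with a hypothetical system start:

```python
from datetime import datetime, timezone

# Hypothetical network parameters: system start and 1-second slots
system_start = datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
slot_length_seconds = 1

t = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
slot = int((t - system_start).total_seconds()) // slot_length_seconds
print(slot)  # 129600 (36 hours of 1-second slots)
```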
KES Period Info
Query KES period information for an operational certificate:
dugite-cli query kes-period-info \
--socket-path ./node.sock \
--op-cert-file opcert.cert
Output:
KES Period Info
===============
On-chain: yes
Operational certificate counter on-chain: 3
Certificate issue counter: 3
Current KES period: 418
Operational certificate start KES period: 418
KES max evolutions: 62
KES periods remaining: 62
Node start time: 2026-03-19T08:00:00Z
KES key expiry: 2026-09-14T08:00:00Z
Use this command to verify that a KES key is current and to determine when rotation is needed.
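The arithmetic behind these fields can be sketched as follows (129600 is mainnet's slotsPerKESPeriod; the current slot is illustrative):

```python
slots_per_kes_period = 129_600   # mainnet value
max_evolutions = 62

current_slot = 54_172_800        # illustrative
current_period = current_slot // slots_per_kes_period
opcert_start_period = 418        # taken from the operational certificate
periods_remaining = opcert_start_period + max_evolutions - current_period

print(current_period)     # 418
print(periods_remaining)  # 62
```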
Leadership Schedule
Compute the leader schedule for a stake pool:
dugite-cli query leadership-schedule \
--vrf-signing-key-file vrf.skey \
--epoch-nonce a1b2c3d4... \
--epoch-start-slot 73000000 \
--epoch-length 432000 \
--relative-stake 0.001 \
--active-slot-coeff 0.05
Output:
Computing leader schedule for epoch starting at slot 73000000...
Epoch length: 432000 slots
Relative stake: 0.001000
Active slot coefficient: 0.05
SlotNo VRF Output (first 16 bytes)
--------------------------------------------------
73012345 a1b2c3d4e5f6a7b8...
73045678 d4e5f6a7b8c9d0e1...
Total leader slots: 2
Expected: ~22 (f=0.05, stake=0.001000)
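The expected count follows from the Praos per-slot leadership probability, phi(sigma) = 1 - (1 - f)^sigma, summed over the epoch:

```python
epoch_length = 432_000   # slots per epoch
f = 0.05                 # active slot coefficient
sigma = 0.001            # relative stake

phi = 1 - (1 - f) ** sigma           # per-slot leadership probability
expected_slots = epoch_length * phi
print(round(expected_slots))  # 22
```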
Stake Address Commands
The dugite-cli stake-address subcommands manage stake key generation, reward address construction, and certificate creation for staking operations.
key-gen
Generate a stake key pair:
dugite-cli stake-address key-gen \
--verification-key-file stake.vkey \
--signing-key-file stake.skey
| Flag | Required | Description |
|---|---|---|
--verification-key-file | Yes | Output path for the stake verification key |
--signing-key-file | Yes | Output path for the stake signing key |
build
Build a stake (reward) address from a stake verification key:
dugite-cli stake-address build \
--stake-verification-key-file stake.vkey \
--network testnet
| Flag | Required | Default | Description |
|---|---|---|---|
--stake-verification-key-file | Yes | — | Path to the stake verification key |
--network | No | mainnet | Network: mainnet or testnet |
--out-file | No | — | Output file (prints to stdout if omitted) |
registration-certificate
Create a stake address registration certificate:
# Conway era (with deposit)
dugite-cli stake-address registration-certificate \
--stake-verification-key-file stake.vkey \
--key-reg-deposit-amt 2000000 \
--out-file stake-reg.cert
# Legacy Shelley era (no deposit parameter)
dugite-cli stake-address registration-certificate \
--stake-verification-key-file stake.vkey \
--out-file stake-reg.cert
| Flag | Required | Description |
|---|---|---|
--stake-verification-key-file | Yes | Path to the stake verification key |
--key-reg-deposit-amt | No | Deposit amount in lovelace (Conway era; omit for legacy Shelley cert) |
--out-file | Yes | Output path for the certificate |
The deposit amount should match the current stakeAddressDeposit protocol parameter (typically 2 ADA = 2000000 lovelace).
deregistration-certificate
Create a stake address deregistration certificate to reclaim the deposit:
dugite-cli stake-address deregistration-certificate \
--stake-verification-key-file stake.vkey \
--key-reg-deposit-amt 2000000 \
--out-file stake-dereg.cert
| Flag | Required | Description |
|---|---|---|
--stake-verification-key-file | Yes | Path to the stake verification key |
--key-reg-deposit-amt | No | Deposit refund amount (Conway era; omit for legacy Shelley cert) |
--out-file | Yes | Output path for the certificate |
delegation-certificate
Create a stake delegation certificate to delegate to a stake pool:
dugite-cli stake-address delegation-certificate \
--stake-verification-key-file stake.vkey \
--stake-pool-id pool1abc... \
--out-file delegation.cert
| Flag | Required | Description |
|---|---|---|
--stake-verification-key-file | Yes | Path to the stake verification key |
--stake-pool-id | Yes | Pool ID to delegate to (bech32 or hex) |
--out-file | Yes | Output path for the certificate |
vote-delegation-certificate
Create a vote delegation certificate (Conway era) to delegate voting power to a DRep:
# Delegate to a specific DRep
dugite-cli stake-address vote-delegation-certificate \
--stake-verification-key-file stake.vkey \
--drep-verification-key-file drep.vkey \
--out-file vote-deleg.cert
# Delegate to always-abstain
dugite-cli stake-address vote-delegation-certificate \
--stake-verification-key-file stake.vkey \
--always-abstain \
--out-file vote-deleg.cert
# Delegate to always-no-confidence
dugite-cli stake-address vote-delegation-certificate \
--stake-verification-key-file stake.vkey \
--always-no-confidence \
--out-file vote-deleg.cert
| Flag | Required | Description |
|---|---|---|
--stake-verification-key-file | Yes | Path to the stake verification key |
--drep-verification-key-file | No | DRep verification key file (mutually exclusive with --always-abstain/--always-no-confidence) |
--always-abstain | No | Use the special always-abstain DRep |
--always-no-confidence | No | Use the special always-no-confidence DRep |
--out-file | Yes | Output path for the certificate |
Complete Staking Workflow
# 1. Generate stake keys
dugite-cli stake-address key-gen \
--verification-key-file stake.vkey \
--signing-key-file stake.skey
# 2. Create registration certificate
dugite-cli stake-address registration-certificate \
--stake-verification-key-file stake.vkey \
--key-reg-deposit-amt 2000000 \
--out-file stake-reg.cert
# 3. Create delegation certificate
dugite-cli stake-address delegation-certificate \
--stake-verification-key-file stake.vkey \
--stake-pool-id pool1abc... \
--out-file delegation.cert
# 4. Submit both in a single transaction
dugite-cli transaction build \
--tx-in "abc123...#0" \
--tx-out "addr_test1qz...+5000000" \
--change-address "addr_test1qp..." \
--fee 200000 \
--certificate-file stake-reg.cert \
--certificate-file delegation.cert \
--out-file tx.body
dugite-cli transaction sign \
--tx-body-file tx.body \
--signing-key-file payment.skey \
--signing-key-file stake.skey \
--out-file tx.signed
dugite-cli transaction submit \
--tx-file tx.signed \
--socket-path ./node.sock
Stake Pool Commands
The dugite-cli stake-pool subcommands manage stake pool key generation, pool registration, and operational certificate issuance.
key-gen
Generate pool cold keys and an operational certificate counter:
dugite-cli stake-pool key-gen \
--cold-verification-key-file cold.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter
| Flag | Required | Description |
|---|---|---|
--cold-verification-key-file | Yes | Output path for the cold verification key |
--cold-signing-key-file | Yes | Output path for the cold signing key |
--operational-certificate-counter-file | Yes | Output path for the opcert issue counter |
id
Get the pool ID (Blake2b-224 hash of the cold verification key):
dugite-cli stake-pool id \
--cold-verification-key-file cold.vkey
| Flag | Required | Description |
|---|---|---|
--cold-verification-key-file | Yes | Path to the cold verification key |
vrf-key-gen
Generate a VRF key pair:
dugite-cli stake-pool vrf-key-gen \
--verification-key-file vrf.vkey \
--signing-key-file vrf.skey
| Flag | Required | Description |
|---|---|---|
--verification-key-file | Yes | Output path for the VRF verification key |
--signing-key-file | Yes | Output path for the VRF signing key |
kes-key-gen
Generate a KES key pair:
dugite-cli stake-pool kes-key-gen \
--verification-key-file kes.vkey \
--signing-key-file kes.skey
| Flag | Required | Description |
|---|---|---|
--verification-key-file | Yes | Output path for the KES verification key |
--signing-key-file | Yes | Output path for the KES signing key |
issue-op-cert
Issue an operational certificate:
dugite-cli stake-pool issue-op-cert \
--kes-verification-key-file kes.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter \
--kes-period 400 \
--out-file opcert.cert
| Flag | Required | Description |
|---|---|---|
--kes-verification-key-file | Yes | Path to the KES verification key |
--cold-signing-key-file | Yes | Path to the cold signing key |
--operational-certificate-counter-file | Yes | Path to the opcert issue counter |
--kes-period | Yes | Current KES period |
--out-file | Yes | Output path for the operational certificate |
registration-certificate
Create a stake pool registration certificate:
dugite-cli stake-pool registration-certificate \
--cold-verification-key-file cold.vkey \
--vrf-verification-key-file vrf.vkey \
--pledge 500000000 \
--cost 340000000 \
--margin 0.02 \
--reward-account-verification-key-file stake.vkey \
--pool-owner-verification-key-file stake.vkey \
--single-host-pool-relay "relay.example.com:3001" \
--metadata-url "https://example.com/pool-metadata.json" \
--metadata-hash "a1b2c3d4..." \
--out-file pool-reg.cert
| Flag | Required | Description |
|---|---|---|
--cold-verification-key-file | Yes | Path to the cold verification key |
--vrf-verification-key-file | Yes | Path to the VRF verification key |
--pledge | Yes | Pledge amount in lovelace |
--cost | Yes | Fixed cost per epoch in lovelace |
--margin | Yes | Pool margin (0.0 to 1.0) |
--reward-account-verification-key-file | Yes | Stake key for the reward account |
--pool-owner-verification-key-file | No | Pool owner stake key (can be repeated) |
--pool-relay-ipv4 | No | Relay IP address with port (e.g., 1.2.3.4:3001) |
--single-host-pool-relay | No | Relay DNS hostname with port (e.g., relay.example.com:3001) |
--multi-host-pool-relay | No | Relay DNS SRV record (e.g., _cardano._tcp.example.com) |
--metadata-url | No | URL to pool metadata JSON |
--metadata-hash | No | Blake2b-256 hash of the metadata file (hex) |
--testnet | No | Use testnet network ID for the reward account |
--out-file | Yes | Output path for the certificate |
metadata-hash
Compute the Blake2b-256 hash of a pool metadata file:
dugite-cli stake-pool metadata-hash \
--pool-metadata-file pool-metadata.json
Output:
a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2
This hash is required when registering a pool. The metadata file must be served at the URL specified in the registration certificate, and its contents must hash to the registered value; wallets and metadata aggregators fetch the file and verify it against the on-chain hash.
Example pool metadata file:
{
"name": "Sandstone Pool",
"description": "A Cardano stake pool running Dugite",
"ticker": "SAND",
"homepage": "https://sandstone.io"
}
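The hash is computed over the raw file bytes, so the served file must be byte-for-byte identical to the file that was hashed. A sketch:

```python
import hashlib
import json
import os
import tempfile

metadata = {
    "name": "Sandstone Pool",
    "description": "A Cardano stake pool running Dugite",
    "ticker": "SAND",
    "homepage": "https://sandstone.io",
}

# Write the file, then hash its exact bytes with Blake2b-256;
# any whitespace or key-order change alters the hash.
path = os.path.join(tempfile.mkdtemp(), "pool-metadata.json")
with open(path, "wb") as fh:
    fh.write(json.dumps(metadata).encode())

with open(path, "rb") as fh:
    digest = hashlib.blake2b(fh.read(), digest_size=32).hexdigest()
print(len(digest))  # 64 hex characters = 32 bytes
```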
retirement-certificate
Create a stake pool retirement certificate:
dugite-cli stake-pool retirement-certificate \
--cold-verification-key-file cold.vkey \
--epoch 500 \
--out-file pool-retire.cert
| Flag | Required | Description |
|---|---|---|
--cold-verification-key-file | Yes | Path to the cold verification key |
--epoch | Yes | Epoch at which the pool retires |
--out-file | Yes | Output path for the certificate |
Complete Pool Registration Workflow
# 1. Generate all keys
dugite-cli stake-pool key-gen \
--cold-verification-key-file cold.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter
dugite-cli stake-pool vrf-key-gen \
--verification-key-file vrf.vkey \
--signing-key-file vrf.skey
dugite-cli stake-pool kes-key-gen \
--verification-key-file kes.vkey \
--signing-key-file kes.skey
# 2. Issue operational certificate
dugite-cli stake-pool issue-op-cert \
--kes-verification-key-file kes.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter \
--kes-period 400 \
--out-file opcert.cert
# 3. Create registration certificate
dugite-cli stake-pool registration-certificate \
--cold-verification-key-file cold.vkey \
--vrf-verification-key-file vrf.vkey \
--pledge 500000000 \
--cost 340000000 \
--margin 0.02 \
--reward-account-verification-key-file stake.vkey \
--pool-owner-verification-key-file stake.vkey \
--single-host-pool-relay "relay.example.com:3001" \
--metadata-url "https://example.com/pool.json" \
--metadata-hash "a1b2c3..." \
--out-file pool-reg.cert
# 4. Submit registration in a transaction
dugite-cli transaction build \
--tx-in "abc123...#0" \
--tx-out "addr_test1qz...+5000000" \
--change-address "addr_test1qp..." \
--fee 200000 \
--certificate-file pool-reg.cert \
--out-file tx.body
dugite-cli transaction sign \
--tx-body-file tx.body \
--signing-key-file payment.skey \
--signing-key-file cold.skey \
--signing-key-file stake.skey \
--out-file tx.signed
dugite-cli transaction submit \
--tx-file tx.signed \
--socket-path ./node.sock
Node Commands
The dugite-cli node subcommands manage cold keys, KES keys, VRF keys, and operational certificates for block producer setup.
key-gen
Generate a cold key pair and an operational certificate issue counter:
dugite-cli node key-gen \
--cold-verification-key-file cold.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter
| Flag | Required | Description |
|---|---|---|
--cold-verification-key-file | Yes | Output path for the cold verification key |
--cold-signing-key-file | Yes | Output path for the cold signing key |
--operational-certificate-counter-file | Yes | Output path for the opcert issue counter |
The cold key identifies your stake pool. Keep the signing key offline (air-gapped) after initial setup.
key-gen-kes
Generate a KES (Key Evolving Signature) key pair:
dugite-cli node key-gen-kes \
--verification-key-file kes.vkey \
--signing-key-file kes.skey
| Flag | Required | Description |
|---|---|---|
--verification-key-file | Yes | Output path for the KES verification key |
--signing-key-file | Yes | Output path for the KES signing key |
KES keys are rotated periodically. Each key is valid for a limited number of KES periods (62 periods of 129600 slots on mainnet, roughly 93 days total).
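With mainnet parameters (129600 slots per KES period, one-second slots), the validity window works out as:

```python
slots_per_kes_period = 129_600   # mainnet value
max_evolutions = 62
slot_length_seconds = 1

validity_seconds = slots_per_kes_period * max_evolutions * slot_length_seconds
print(validity_seconds / 86_400)  # 93.0 days
```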
key-gen-vrf
Generate a VRF (Verifiable Random Function) key pair:
dugite-cli node key-gen-vrf \
--verification-key-file vrf.vkey \
--signing-key-file vrf.skey
| Flag | Required | Description |
|---|---|---|
--verification-key-file | Yes | Output path for the VRF verification key |
--signing-key-file | Yes | Output path for the VRF signing key |
VRF keys are used for slot leader election and do not need rotation.
issue-op-cert
Issue an operational certificate binding the cold key to the current KES key:
dugite-cli node issue-op-cert \
--kes-verification-key-file kes.vkey \
--cold-signing-key-file cold.skey \
--operational-certificate-counter-file opcert.counter \
--kes-period 400 \
--out-file opcert.cert
| Flag | Required | Description |
|---|---|---|
--kes-verification-key-file | Yes | Path to the KES verification key |
--cold-signing-key-file | Yes | Path to the cold signing key |
--operational-certificate-counter-file | Yes | Path to the opcert issue counter (incremented automatically) |
--kes-period | Yes | Current KES period (current_slot / slots_per_kes_period) |
--out-file | Yes | Output path for the operational certificate |
The opcert must be regenerated each time you rotate KES keys. The counter file is incremented each time to prevent replay attacks.
new-counter
Create a new operational certificate issue counter (useful if the original counter is lost):
dugite-cli node new-counter \
--cold-verification-key-file cold.vkey \
--counter-value 5 \
--operational-certificate-counter-file opcert.counter
| Flag | Required | Description |
|---|---|---|
--cold-verification-key-file | Yes | Path to the cold verification key |
--counter-value | Yes | Counter value to set |
--operational-certificate-counter-file | Yes | Output path for the counter file |
Governance
Dugite CLI supports Conway-era governance operations as defined in CIP-1694. This includes DRep management, voting, and governance action creation.
DRep Operations
Generate DRep Keys
dugite-cli governance drep key-gen \
--signing-key-file drep.skey \
--verification-key-file drep.vkey
Get DRep ID
# Bech32 format (default)
dugite-cli governance drep id \
--drep-verification-key-file drep.vkey
# Hex format
dugite-cli governance drep id \
--drep-verification-key-file drep.vkey \
--output-format hex
DRep Registration
Create a DRep registration certificate:
dugite-cli governance drep registration-certificate \
--drep-verification-key-file drep.vkey \
--key-reg-deposit-amt 500000000 \
--anchor-url "https://example.com/drep-metadata.json" \
--anchor-data-hash "a1b2c3d4..." \
--out-file drep-reg.cert
The --key-reg-deposit-amt should match the current DRep deposit parameter (currently 500 ADA = 500000000 lovelace on mainnet).
DRep Retirement
dugite-cli governance drep retirement-certificate \
--drep-verification-key-file drep.vkey \
--deposit-amt 500000000 \
--out-file drep-retire.cert
DRep Update
Update DRep metadata:
dugite-cli governance drep update-certificate \
--drep-verification-key-file drep.vkey \
--anchor-url "https://example.com/drep-metadata-v2.json" \
--anchor-data-hash "d4e5f6a7..." \
--out-file drep-update.cert
Voting
Create a Vote
Votes can be cast by DReps, SPOs, or Constitutional Committee members:
DRep vote:
dugite-cli governance vote create \
--governance-action-tx-id "a1b2c3d4..." \
--governance-action-index 0 \
--vote yes \
--drep-verification-key-file drep.vkey \
--out-file vote.json
SPO vote:
dugite-cli governance vote create \
--governance-action-tx-id "a1b2c3d4..." \
--governance-action-index 0 \
--vote no \
--cold-verification-key-file cold.vkey \
--out-file vote.json
Constitutional Committee vote:
dugite-cli governance vote create \
--governance-action-tx-id "a1b2c3d4..." \
--governance-action-index 0 \
--vote yes \
--cc-hot-verification-key-file cc-hot.vkey \
--out-file vote.json
Vote Values
| Value | Description |
|---|---|
| yes | Vote in favor |
| no | Vote against |
| abstain | Abstain from voting |
Vote with Anchor
Attach rationale metadata to a vote:
dugite-cli governance vote create \
--governance-action-tx-id "a1b2c3d4..." \
--governance-action-index 0 \
--vote yes \
--drep-verification-key-file drep.vkey \
--anchor-url "https://example.com/vote-rationale.json" \
--anchor-data-hash "e5f6a7b8..." \
--out-file vote.json
Governance Actions
Info Action
A governance action that carries no on-chain effect (used for signaling):
dugite-cli governance action create-info \
--anchor-url "https://example.com/proposal.json" \
--anchor-data-hash "a1b2c3d4..." \
--deposit 100000000000 \
--return-addr "addr_test1qz..." \
--out-file info-action.json
No Confidence Motion
Express no confidence in the current constitutional committee:
dugite-cli governance action create-no-confidence \
--anchor-url "https://example.com/no-confidence.json" \
--anchor-data-hash "a1b2c3d4..." \
--deposit 100000000000 \
--return-addr "addr_test1qz..." \
--prev-governance-action-tx-id "d4e5f6a7..." \
--prev-governance-action-index 0 \
--out-file no-confidence.json
New Constitution
Propose a new constitution:
dugite-cli governance action create-constitution \
--anchor-url "https://example.com/constitution-proposal.json" \
--anchor-data-hash "a1b2c3d4..." \
--deposit 100000000000 \
--return-addr "addr_test1qz..." \
--constitution-url "https://example.com/constitution.txt" \
--constitution-hash "e5f6a7b8..." \
--constitution-script-hash "b8c9d0e1..." \
--out-file new-constitution.json
Hard Fork Initiation
Propose a protocol version change:
dugite-cli governance action create-hard-fork-initiation \
--anchor-url "https://example.com/hardfork.json" \
--anchor-data-hash "a1b2c3d4..." \
--deposit 100000000000 \
--return-addr "addr_test1qz..." \
--protocol-major-version 10 \
--protocol-minor-version 0 \
--out-file hardfork.json
Protocol Parameters Update
Propose changes to protocol parameters:
dugite-cli governance action create-protocol-parameters-update \
--anchor-url "https://example.com/pp-update.json" \
--anchor-data-hash "a1b2c3d4..." \
--deposit 100000000000 \
--return-addr "addr_test1qz..." \
--protocol-parameters-update pp-changes.json \
--out-file pp-update.json
The pp-changes.json file contains the parameter fields to change:
{
"txFeePerByte": 44,
"txFeeFixed": 155381,
"maxBlockBodySize": 90112,
"maxTxSize": 16384
}
Update Committee
Propose changes to the constitutional committee:
dugite-cli governance action create-update-committee \
--anchor-url "https://example.com/committee-update.json" \
--anchor-data-hash "a1b2c3d4..." \
--deposit 100000000000 \
--return-addr "addr_test1qz..." \
--remove-cc-cold-verification-key-hash "old_member_hash" \
--add-cc-cold-verification-key-hash "new_member_hash,500" \
--threshold "2/3" \
--out-file committee-update.json
The `--add-cc-cold-verification-key-hash` flag uses the format `key_hash,expiry_epoch`.
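A sketch of splitting that argument into its two parts (the helper is illustrative, not a Dugite API):

```rust
// Split "key_hash,expiry_epoch" into its parts; returns None on malformed input.
fn parse_cc_member(arg: &str) -> Option<(&str, u64)> {
    let (hash, epoch) = arg.split_once(',')?;
    Some((hash, epoch.parse().ok()?))
}
```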
Treasury Withdrawal
Propose a withdrawal from the treasury:
dugite-cli governance action create-treasury-withdrawal \
--anchor-url "https://example.com/withdrawal.json" \
--anchor-data-hash "a1b2c3d4..." \
--deposit 100000000000 \
--return-addr "addr_test1qz..." \
--funds-receiving-stake-verification-key-file recipient.vkey \
--transfer 50000000000 \
--out-file treasury-withdrawal.json
Hash Anchor Data
Compute the Blake2b-256 hash of an anchor data file:
# Binary file
dugite-cli governance action hash-anchor-data \
--file-binary proposal.json
# Text file
dugite-cli governance action hash-anchor-data \
--file-text proposal.txt
Submitting Governance Actions
Governance actions and votes are submitted as part of transactions. Include the certificate or vote file when building the transaction:
# Submit a DRep registration
dugite-cli transaction build \
--tx-in "abc123...#0" \
--tx-out "addr_test1qz...+5000000" \
--change-address "addr_test1qp..." \
--fee 200000 \
--certificate-file drep-reg.cert \
--out-file tx.body
dugite-cli transaction sign \
--tx-body-file tx.body \
--signing-key-file payment.skey \
--signing-key-file drep.skey \
--out-file tx.signed
dugite-cli transaction submit \
--tx-file tx.signed \
--socket-path ./node.sock
Architecture Overview
Dugite is organized as a 14-crate Cargo workspace. Each crate has a focused responsibility and well-defined dependencies.
Crate Workspace
| Crate | Description |
|---|---|
| dugite-primitives | Core types: hashes, blocks, transactions, addresses, values, protocol parameters (Byron through Conway) |
| dugite-crypto | Ed25519 keys, VRF, KES, text envelope format |
| dugite-serialization | CBOR encoding/decoding for Cardano wire format via pallas |
| dugite-lsm | Pure Rust LSM-tree engine with WAL, compaction, bloom filters, and snapshots |
| dugite-network | Ouroboros mini-protocols (ChainSync, BlockFetch, TxSubmission, KeepAlive), N2N client/server, N2C server, multi-peer block fetch pool |
| dugite-consensus | Ouroboros Praos, chain selection, epoch transitions, slot leader checks |
| dugite-ledger | UTxO set (LSM-backed via UTxO-HD), transaction validation, ledger state, certificate processing, native script evaluation, reward calculation |
| dugite-mempool | Thread-safe transaction mempool with input-conflict checking and TTL sweep |
| dugite-storage | ChainDB (ImmutableDB append-only chunk files + VolatileDB in-memory) |
| dugite-node | Main binary, config, topology, pipelined chain sync loop, Mithril import, block forging |
| dugite-cli | cardano-cli compatible CLI (38+ subcommands) |
| dugite-monitor | Terminal monitoring dashboard (ratatui-based, real-time metrics via Prometheus polling) |
| dugite-config | Interactive TUI configuration editor with tree navigation, inline editing, type validation, and diff view |
| dugite-integration-tests | End-to-end integration tests across the workspace |
Crate Dependency Graph
graph TD
NODE[dugite-node] --> NET[dugite-network]
NODE --> CONS[dugite-consensus]
NODE --> LEDGER[dugite-ledger]
NODE --> STORE[dugite-storage]
NODE --> POOL[dugite-mempool]
CLI[dugite-cli] --> NET
CLI --> PRIM[dugite-primitives]
CLI --> CRYPTO[dugite-crypto]
CLI --> SER[dugite-serialization]
MON[dugite-monitor] --> PRIM
CFG[dugite-config] --> PRIM
NET --> PRIM
NET --> CRYPTO
NET --> SER
NET --> POOL
CONS --> PRIM
CONS --> CRYPTO
LEDGER --> PRIM
LEDGER --> CRYPTO
LEDGER --> SER
LEDGER --> LSM[dugite-lsm]
STORE --> PRIM
STORE --> SER
POOL --> PRIM
SER --> PRIM
CRYPTO --> PRIM
Key Dependencies
Dugite leverages the pallas family of crates (v1.0.0-alpha.5) for Cardano wire-format compatibility:
- pallas-network — Ouroboros multiplexer and handshake
- pallas-codec — CBOR encoding/decoding
- pallas-primitives — Cardano primitive types
- pallas-traverse — Multi-era block traversal
- pallas-crypto — Cryptographic primitives
- pallas-addresses — Address parsing and construction
Other key dependencies:
- tokio — Async runtime
- dugite-lsm — Pure Rust LSM tree for the on-disk UTxO set (UTxO-HD)
- minicbor — CBOR encoding for custom types
- ed25519-dalek — Ed25519 signatures
- blake2b_simd — SIMD-accelerated Blake2b hashing
- uplc — Plutus CEK machine for script evaluation
- clap — CLI argument parsing
- tracing — Structured logging
Design Principles
Zero-Warning Policy
All code must compile with RUSTFLAGS="-D warnings" and pass cargo clippy --all-targets -- -D warnings. This is enforced by CI.
Pallas Interoperability
Dugite uses pallas for network protocol handling and block deserialization, ensuring wire-format compatibility with cardano-node. Internal types (in dugite-primitives) are converted from pallas types during deserialization.
Key conversion patterns:
- `Transaction.hash` is set during deserialization from pallas `tx.hash()`
- `ChainSyncEvent::RollForward` uses `Box<Block>` to avoid large enum variant sizes
- Invalid transactions (`is_valid: false`) are skipped during `apply_block`
- Pool IDs are `Hash28` (Blake2b-224), not `Hash32`
Multi-Era Support
Dugite handles all Cardano eras from Byron through Conway. The serialization layer handles era-specific block formats transparently, while the ledger layer applies era-appropriate validation rules.
Sync Pipeline
Dugite uses a pipelined multi-peer architecture for block synchronization, separating header collection from block fetching for maximum throughput.
Architecture
flowchart LR
subgraph Primary Peer
CS[ChainSync<br/>Header Collection]
end
CS -->|headers| HQ[Header Queue]
subgraph Block Fetch Pool
BF1[Peer 1<br/>BlockFetch]
BF2[Peer 2<br/>BlockFetch]
BF3[Peer N<br/>BlockFetch]
end
HQ -->|range 1| BF1
HQ -->|range 2| BF2
HQ -->|range N| BF3
BF1 -->|blocks| BP[Block Processor]
BF2 -->|blocks| BP
BF3 -->|blocks| BP
BP --> CDB[(ChainDB)]
BP --> LS[Ledger State]
Pipeline Stages
1. Header Collection (ChainSync)
A primary peer is selected for the ChainSync protocol. The node requests block headers sequentially using the N2N ChainSync mini-protocol (V14+). Headers are collected into batches.
The ChainSync protocol involves:
- MsgFindIntersect — Find a common point between the node and the peer
- MsgRequestNext — Request the next header
- MsgRollForward — Receive a new header
- MsgRollBackward — Handle a chain reorganization
2. Block Fetch Pool
Collected headers are distributed across multiple peers for parallel block retrieval. The block fetch pool supports up to 4 concurrent peers, each fetching a range of blocks.
The BlockFetch protocol involves:
- MsgRequestRange — Request a range of blocks by header hash
- MsgBlock — Receive a block
- MsgBatchDone — Signal the end of a batch
Blocks are fetched in batches of 500 headers, with sub-batches of 100 headers each. Each sub-batch is decoded on a spawn_blocking task to avoid blocking the async runtime.
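The batch/sub-batch split described above is plain slice chunking; a sketch (in the node, each sub-batch would then be handed to a spawn_blocking task, which is omitted here):

```rust
// Split a batch of headers into fixed-size sub-batches; each sub-batch is
// what would be handed to a blocking task for CBOR decoding.
fn sub_batches<T: Clone>(batch: &[T], size: usize) -> Vec<Vec<T>> {
    batch.chunks(size).map(|chunk| chunk.to_vec()).collect()
}
```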
3. Block Processing
Fetched blocks are applied to the ledger state in order:
- Deserialization — Raw CBOR bytes are decoded into Dugite's internal `Block` type using pallas
- Ledger validation — Each block is validated against the current ledger state (UTxO checks, fee validation, certificate processing)
- Storage — Valid blocks are added to the ChainDB (volatile database first, flushed to immutable when k-deep)
- Epoch transitions — At epoch boundaries, stake snapshots are rotated and rewards are calculated
Batched Lock Acquisition
To minimize lock contention, the sync loop acquires a single lock on both the ChainDB and ledger state for each batch of 500 blocks, rather than locking per-block.
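A minimal sketch of the per-batch locking pattern, with a stand-in Ledger type in place of the real ChainDB and ledger state:

```rust
use std::sync::Mutex;

// Stand-in for the shared state; the real node locks both the ChainDB
// and the ledger state once per batch.
struct Ledger {
    applied_blocks: u64,
}

// One lock acquisition covers the whole batch instead of one per block.
fn apply_batch(ledger: &Mutex<Ledger>, batch: &[u64]) {
    let mut guard = ledger.lock().unwrap(); // single acquisition
    for _block in batch {
        guard.applied_blocks += 1; // per-block work under the one lock
    }
}
```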
Progress Reporting
Progress is logged every 5 seconds, showing:
- Current slot and block number
- Epoch number
- UTxO count
- Sync percentage (based on slot vs. wall-clock time)
- Blocks-per-second throughput metric
Rollback Handling
When the ChainSync peer sends a MsgRollBackward message, the node:
- Identifies the rollback point (a slot/hash pair)
- Removes rolled-back blocks from the VolatileDB
- Reverts the ledger state to the rollback point
- Resumes header collection from the new tip
Only blocks in the VolatileDB (the last k=2160 blocks) can be rolled back. Blocks that have been flushed to the ImmutableDB are permanent.
Pipelined ChainSync
Dugite uses pipelined ChainSync to avoid the round-trip latency bottleneck of serial header requests. Instead of waiting for each MsgRollForward before requesting the next header, the node sends up to 300 MsgRequestNext messages concurrently (configurable via DUGITE_PIPELINE_DEPTH).
This bypasses pallas' serial ChainSync state machine in favor of a custom implementation that manages the pipeline depth directly.
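The pipeline management can be sketched as a simple in-flight counter (types and method names here are illustrative, not Dugite's API):

```rust
// Bookkeeping for a pipelined ChainSync client: keep issuing MsgRequestNext
// until `depth` requests are outstanding, retire one slot per reply.
struct Pipeline {
    in_flight: usize,
    depth: usize, // e.g. 300, per DUGITE_PIPELINE_DEPTH
}

impl Pipeline {
    fn new(depth: usize) -> Self {
        Self { in_flight: 0, depth }
    }
    fn can_request(&self) -> bool {
        self.in_flight < self.depth
    }
    fn sent_request(&mut self) {
        self.in_flight += 1;
    }
    fn got_reply(&mut self) {
        self.in_flight -= 1;
    }
}
```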
Performance Characteristics
- Header collection is pipelined per peer (up to 300 in-flight requests, configurable via `DUGITE_PIPELINE_DEPTH`)
- Block fetching is parallelized across up to 4 concurrent peers
- Block processing is batched (500 blocks per batch) with single-lock acquisition
- Throughput depends on network latency, peer count, and block sizes
On preview testnet, full sync from genesis completes in approximately 10 hours, with block replay (from Mithril snapshot) achieving ~13,700 blocks/second.
Storage
Dugite's storage layer is implemented in the dugite-storage and dugite-ledger crates. It closely mirrors the cardano-node architecture with three distinct storage subsystems coordinated by ChainDB.
Storage Architecture
flowchart TD
CDB[ChainDB] --> VOL[VolatileDB<br/>In-Memory HashMap<br/>Last k=2160 blocks]
CDB --> IMM[ImmutableDB<br/>Append-Only Chunk Files<br/>Finalized blocks]
NEW[New Block] -->|add_block| VOL
VOL -->|flush when > k blocks| IMM
READ[Block Query] -->|1. check volatile| VOL
READ -->|2. fallback to immutable| IMM
ROLL[Rollback] -->|remove from volatile| VOL
LS[LedgerState] --> UTXO[UtxoStore<br/>dugite-lsm LSM tree<br/>On-disk UTxO set]
LS --> DIFF[DiffSeq<br/>Last k UTxO diffs<br/>For rollback]
Block Storage
ImmutableDB (Append-Only Chunk Files)
The ImmutableDB stores finalized blocks in append-only chunk files on disk. This matches cardano-node's ImmutableDB design — blocks are simply appended to files and are inherently durable without any snapshot mechanism.
Properties:
- Always durable — append-only writes survive process crashes without special persistence logic
- No LSM tree — plain chunk files, no compaction or memtable overhead
- Sequential access — optimized for the append-heavy, read-sequential block storage workload
- Secondary indexes — slot-to-offset and hash-to-slot mappings for efficient lookups
- Memory-mapped block index — on-disk open-addressing hash table (`hash_index.dat`) provides 3-5x faster lookups than an in-memory HashMap while using near-zero RSS
VolatileDB (In-Memory HashMap)
The VolatileDB stores recent blocks (the last k=2160 blocks) in an in-memory HashMap. This enables:
- Fast reads — no disk I/O for recent blocks
- Efficient rollback — blocks can be removed without touching disk
- Simple eviction — when a block becomes k-deep, it is flushed to the ImmutableDB
The VolatileDB has no on-disk representation — it exists only in memory and is rebuilt from the ImmutableDB tip on restart.
ChainDB
ChainDB is the unified interface for block storage. It coordinates the ImmutableDB and VolatileDB:
- New blocks arrive from peers and are added to the VolatileDB
- Once a block is more than k slots deep (k=2160 for mainnet), it is flushed from the VolatileDB to the ImmutableDB
- Flushed blocks are removed from the VolatileDB
When querying for a block:
- The VolatileDB is checked first (fast, in-memory)
- If not found, the ImmutableDB is consulted (disk-based)
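The two-tier read path can be sketched with maps standing in for both databases (the real ImmutableDB reads chunk files on disk):

```rust
use std::collections::HashMap;

// Volatile-first lookup with immutable fallback. Slots key raw block bytes
// here purely for illustration.
struct ChainDb {
    volatile: HashMap<u64, Vec<u8>>,
    immutable: HashMap<u64, Vec<u8>>,
}

impl ChainDb {
    fn get_block(&self, slot: u64) -> Option<&Vec<u8>> {
        // 1. check volatile (in-memory), 2. fall back to immutable (disk)
        self.volatile.get(&slot).or_else(|| self.immutable.get(&slot))
    }
}
```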
Block Range Queries
ChainDB supports querying blocks by slot range:
- VolatileDB scans its HashMap for matching slots
- ImmutableDB uses secondary indexes for slot range scanning
- Results from both databases are merged
UTxO Storage (UTxO-HD)
The UTxO set is stored on disk using dugite-lsm, a pure Rust LSM tree. This matches Haskell cardano-node's UTxO-HD architecture, where the UTxO set lives in an LSM-backed on-disk store rather than entirely in memory.
UtxoStore
The UtxoStore (in dugite-ledger) wraps a dugite-lsm LsmTree and provides:
- Disk-backed UTxO set — the full UTxO set lives on disk, not in memory
- Efficient point lookups — bloom filters for fast negative lookups
- Batch writes — UTxO inserts and deletes are batched per block
- Snapshots — periodic snapshots for crash recovery
dugite-lsm is configured via storage profiles sized to make full use of the available system memory:
| Profile | Target System | Memtable | Block Cache | Expected RSS |
|---|---|---|---|---|
| ultra-memory | 32GB | 2GB | 24GB | ~27GB |
| high-memory (default) | 16GB | 1GB | 12GB | ~14GB |
| low-memory | 8GB | 512MB | 5GB | ~6.5GB |
| minimal | 4GB | 256MB | 2GB | ~3GB |
All profiles use bloom filters at 10 bits per key and hybrid compaction (tiered L0, leveled L1+).
DiffSeq (Rollback Support)
The DiffSeq (in dugite-ledger) maintains the last k blocks of UTxO diffs, enabling rollback without replaying blocks:
- Each block produces a `UtxoDiff` recording which UTxOs were added and removed
- The `DiffSeq` holds the last k=2160 diffs
- On rollback, diffs are applied in reverse to restore the UTxO set
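A sketch of diff-based rollback with illustrative types: each diff must record the full entries a block consumed, so a rollback can re-insert them.

```rust
use std::collections::HashMap;

// A per-block diff: keys the block created, plus the complete entries
// it consumed (values are kept so rollback can restore them).
struct UtxoDiff {
    added: Vec<u64>,
    removed: Vec<(u64, String)>,
}

// Applying the diff in reverse restores the pre-block UTxO set.
fn revert(utxos: &mut HashMap<u64, String>, diff: &UtxoDiff) {
    for key in &diff.added {
        utxos.remove(key); // undo the block's outputs
    }
    for (key, value) in &diff.removed {
        utxos.insert(*key, value.clone()); // restore the block's inputs
    }
}
```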
io_uring Support (Linux)
On Linux with kernel 5.1+, enable io_uring for async I/O in the UTxO LSM tree:
cargo build --release --features io-uring
On other platforms (macOS, Windows), the feature flag is accepted but falls back to synchronous I/O automatically.
Snapshot Policy
Dugite uses a time-based snapshot policy matching Haskell's cardano-node:
- Normal sync: snapshots every 72 minutes (k * 2 seconds, where k=2160)
- Bulk sync: snapshots every 50,000 blocks plus 6 minutes of wall-clock time
- Maximum retained: 2 snapshots on disk at any time
Ledger snapshots include the full ledger state (stake distribution, protocol parameters, governance state, etc.). The UTxO set is persisted separately via the UtxoStore's LSM snapshots.
Tip Recovery
When the node restarts:
- The ImmutableDB tip is read from the chunk files (always durable)
- The VolatileDB starts empty (in-memory state is rebuilt)
- The ledger state is restored from the most recent snapshot
- The UTxO set is restored from the UtxoStore's LSM snapshot
- The node resumes syncing from the recovered tip
Disk Layout
database-path/
immutable/ # Append-only block chunk files
chunks/ # Block data files
index/ # Secondary indexes (slot, hash)
hash_index.dat # Mmap block index (open-addressing hash table)
utxo-store/ # dugite-lsm database (UTxO set)
active/ # Current SSTables
snapshots/ # Durable snapshots
ledger/ # Ledger state snapshots
Performance Considerations
- Block writes — append-only chunk files provide consistent write performance without compaction pauses
- UTxO lookups — LSM tree with bloom filters provides efficient point lookups for transaction validation
- Memory usage — the VolatileDB holds approximately k blocks in memory (typically a few hundred MB). The UTxO set lives on disk, significantly reducing memory pressure compared to an all-in-memory approach
- Batch size — the flush batch size balances memory usage against write efficiency
Storage Profiles
Dugite provides four storage profiles sized to make full use of the available system memory:
# Select a profile via CLI
./dugite-node run --storage-profile high-memory ...
# Override individual parameters
./dugite-node run --storage-profile low-memory --utxo-block-cache-size-mb 4096 ...
Profiles can also be set in the node configuration file:
{
"storage": {
"profile": "high-memory",
"utxoBlockCacheSizeMb": 8192
}
}
Resolution order: profile defaults < config file overrides < CLI overrides.
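A sketch of that resolution order with a single illustrative field (the real profiles carry several parameters):

```rust
// Layered resolution: profile default, overridden by the config file,
// overridden by the CLI flag. Field name is illustrative.
#[derive(Debug, PartialEq)]
struct StorageSettings {
    block_cache_mb: u64,
}

fn resolve(profile_default: u64, config: Option<u64>, cli: Option<u64>) -> StorageSettings {
    // The last-set layer wins: CLI beats config, config beats the profile.
    StorageSettings {
        block_cache_mb: cli.or(config).unwrap_or(profile_default),
    }
}
```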
Fork Recovery & ImmutableDB Contamination
Problem
When a forged block loses a slot battle, flush_all_to_immutable on graceful shutdown can persist orphaned blocks permanently in the ImmutableDB. Since the ImmutableDB is append-only and designed for finalized blocks, these orphaned blocks contaminate the canonical chain history and can cause intersection failures on reconnect.
sequenceDiagram
participant Node as Dugite Node
participant Vol as VolatileDB
participant Imm as ImmutableDB
participant Peer as Upstream Peer
Node->>Vol: Forge block at slot S
Peer->>Node: Competing block at slot S wins
Note over Vol: Orphaned forged block still in VolatileDB
Node->>Imm: flush_all_to_immutable (graceful shutdown)
Note over Imm: Orphaned block now persisted permanently
Node->>Peer: Restart — intersection negotiation fails
Detection
- `ChainDB.get_chain_points()` walks backwards through volatile blocks via `prev_hash` links, providing the peer with enough ancestry for intersection even when the tip is orphaned.
- `ImmutableDB.get_historical_points()` samples older chunk secondary indexes in reverse order, providing canonical intersection points even when the immutable tip is contaminated.
- When fork divergence is detected, contaminated ChainDB chain points are excluded from intersection negotiation, preventing the node from advertising orphaned blocks to peers.
Recovery
- Case A (Origin intersection): The volatile DB is cleared, the ledger state is reset, and the node reconnects from genesis. This is the fallback when no valid intersection can be found.
- Case B (Intersection behind ledger): A targeted ImmutableDB replay is performed up to the intersection slot using a detached LSM store, achieving approximately 50K blocks/second replay speed. This avoids a full resync while restoring the ledger to a consistent state.
Benchmarks
Run storage benchmarks with:
# Storage benchmarks (block index, ImmutableDB, ChainDB, scaling to 1M entries)
cargo bench -p dugite-storage --bench storage_bench
# UTxO store benchmarks (insert, lookup, apply_tx, LSM configs, scaling to 1M entries)
cargo bench -p dugite-ledger --bench utxo_bench
# Crypto benchmarks (Ed25519, blake2b keyhash)
cargo bench -p dugite-crypto --bench crypto_bench
# Hash benchmarks (blake2b_256, blake2b_224, batch hashing)
cargo bench -p dugite-primitives --bench hash_bench
Results are saved to target/criterion/ with HTML reports. Baseline results are tracked in benches/.
Latest Results (Apple M2 Max, 32GB, 2026-03-14)
Block Index Lookup (500 random lookups, mmap vs in-memory HashMap)
| Size | In-Memory | Mmap | Speedup |
|---|---|---|---|
| 10K | 10.0µs | 2.83µs | 3.5x |
| 100K | 10.1µs | 2.17µs | 4.7x |
| 1M | 10.6µs | 2.01µs | 5.3x |
Mmap lookup advantage grows with scale — at mainnet block counts (~10M), the gap widens further.
UTxO Store Scaling (dugite-lsm LSM tree)
| Size | Insert (per-entry) | Lookup (per-entry) | Total Lovelace Scan |
|---|---|---|---|
| 10K | 455ns | 191ns | 2.38ms |
| 100K | 479ns | 236ns | 29.1ms |
| 1M | 569ns | 308ns | 330ms |
Insert and lookup scale near-linearly. At mainnet scale (~20M UTxOs), estimated full scan ~6.6s.
Crypto & Hashing
| Operation | Time |
|---|---|
| Ed25519 verify (single) | 28.6µs |
| Blake2b-224 keyhash (32B) | 128ns |
| Blake2b-256 tx hash (1KB) | 949ns |
A typical block with 50 witnesses: ~1.4ms for signature verification, ~6.4µs for keyhash computation.
LSM Config Comparison (100K entries)
All storage profiles perform identically at benchmark scale — config differences emerge at mainnet scale (20M+ UTxOs) where working set exceeds cache capacity.
See benches/2026-03-14-all-profiles.md for full results.
Ledger
Dugite's ledger layer (dugite-ledger) implements full Cardano transaction validation, UTxO management, stake distribution, reward calculation, and Conway-era governance. It closely follows the Haskell cardano-ledger STS (State Transition System) rules.
Ledger State
The LedgerState is the complete mutable state of the Cardano ledger at a given point in the chain:
flowchart TD
LS[LedgerState] --> UTXO[UtxoSet<br/>On-disk via LSM tree]
LS --> DELEG[Delegations<br/>Stake → Pool mapping]
LS --> POOLS[Pool Parameters<br/>Registered pools + future updates]
LS --> REWARDS[Reward Accounts<br/>Per-credential balances]
LS --> GOV[GovernanceState<br/>DReps, proposals, committee, constitution]
LS --> SNAP[EpochSnapshots<br/>Mark / Set / Go]
LS --> PP[Protocol Parameters<br/>Current + previous epoch]
LS --> FIN[Treasury + Reserves<br/>Financial state]
Key design decisions:
- Arc-wrapped collections — Large mutable fields (`delegations`, `pool_params`, `reward_accounts`, `governance`) are wrapped in `Arc` for copy-on-write semantics. Cloning `LedgerState` bumps reference counts; mutations via `Arc::make_mut()` only copy when shared.
- On-disk UTxO — The UTxO set lives in an LSM tree (`dugite-lsm`) rather than in memory, matching Haskell's UTxO-HD architecture. At mainnet scale (~20M UTxOs), this avoids multi-gigabyte memory pressure.
- Exact rational arithmetic — Reward calculations use `Rat` (backed by `num_bigint::BigInt`) for lossless intermediate computation, with a single floor operation at the end matching Haskell's `rationalToCoinViaFloor`.
Block Application Pipeline
When a new block arrives, apply_block() processes it through this pipeline:
flowchart TD
BLK[New Block] --> CONN[Check prev_hash chain]
CONN --> EPOCH{Epoch boundary?}
EPOCH -->|Yes| ET[Process epoch transition]
EPOCH -->|No| TXS[Process transactions]
ET --> TXS
TXS --> P1[Phase-1 Validation<br/>Structural + witness checks]
P1 --> P2{Plutus scripts?}
P2 -->|Yes| EVAL[Phase-2 Evaluation<br/>uplc CEK machine]
P2 -->|No| APPLY[Apply UTxO changes]
EVAL --> APPLY
APPLY --> CERT[Process certificates]
CERT --> GOV[Process governance actions]
GOV --> DIFF[Record UtxoDiff]
Block Validation Modes
| Mode | Plutus Evaluation | Use Case |
|---|---|---|
| ValidateAll | Re-evaluate, verify is_valid flag | New blocks from peers |
| ApplyOnly | Trust is_valid flag | ImmutableDB replay, Mithril import, self-forged blocks |
Invalid transactions (is_valid: false) skip normal input/output processing. Instead, collateral inputs are consumed and collateral return is added.
Transaction Validation
Phase-1 (Structural + Witness)
Phase-1 validation checks structural rules without executing scripts:
- Inputs exist — All transaction inputs are present in the UTxO set
- Fee sufficient — Fee covers minimum fee based on tx size, execution units, and reference script size (CIP-0112 tiered pricing in Conway)
- Value conserved — Inputs = outputs + fee (+ minting/burning for multi-asset)
- TTL valid — Transaction has not expired (time-to-live check against current slot)
- Witness verification — Ed25519 signatures match required signers from inputs, withdrawals, and certificates
- Multi-asset rules — No negative quantities, minting requires policy witness
- Reference inputs — All reference inputs exist (not consumed, only read)
- Output minimum — Each output meets the minimum lovelace requirement
- Transaction size — Does not exceed max transaction size
- Network ID — Matches the expected network
Phase-2 (Plutus Script Execution)
For transactions containing Plutus scripts (V1/V2/V3):
- Script data hash — Matches the hash of redeemers + datums + cost models
- Collateral — Sufficient collateral provided (150% of estimated fees in Conway)
- Execution units — Each redeemer's CPU and memory within budget
- Script evaluation — Each script is executed via the uplc CEK machine with the appropriate cost model
- Block budget — Total execution units across all transactions do not exceed block limits
Scripts are evaluated in parallel using rayon when the parallel-verification feature is enabled (default).
Validation Error Types
The ValidationError enum covers 50+ error variants across all categories: structural, UTxO, fees, witnesses, time, scripts, collateral, Plutus, era-gating, certificates, governance, datums, withdrawals, network, and auxiliary data.
Certificate Processing
Dugite processes all Shelley through Conway certificate types:
| Certificate | Description |
|---|---|
| StakeRegistration | Register a stake credential (deposit required) |
| StakeDeregistration | Deregister a stake credential (deposit refunded) |
| StakeDelegation | Delegate stake to a pool |
| PoolRegistration | Register a new stake pool |
| PoolRetirement | Schedule pool retirement at a future epoch |
| RegDRep | Register a delegated representative (Conway) |
| UnregDRep | Deregister a DRep (Conway) |
| UpdateDRep | Update DRep metadata anchor (Conway) |
| VoteDelegation | Delegate voting power to a DRep (Conway) |
| StakeVoteDelegation | Combined stake + vote delegation (Conway) |
| RegStakeDeleg | Combined registration + stake delegation (Conway) |
| RegStakeVoteDeleg | Combined registration + stake + vote delegation (Conway) |
| CommitteeHotAuth | Authorize a hot key for a constitutional committee member (Conway) |
| CommitteeColdResign | Resign a constitutional committee cold key (Conway) |
| MoveInstantaneousRewards | Transfer between treasury and reserves (pre-Conway) |
Governance (CIP-1694)
The GovernanceState tracks all Conway-era governance:
DRep Lifecycle
- Registration — DReps register with a deposit, becoming eligible to vote
- Activity tracking — DReps must vote within `drepActivity` epochs or become inactive
- Expiration — Inactive DReps' delegated stake counts as abstaining
- Delegation — Stake credentials delegate voting power to DReps, AlwaysAbstain, or AlwaysNoConfidence
Constitutional Committee
- Hot key authorization — Cold keys authorize hot keys for voting
- Member expiration — Each member has an epoch-based term limit
- Quorum — Threshold fraction of non-expired, non-resigned members must approve
Governance Actions
Seven action types with per-type ratification thresholds:
| Action | DRep Threshold | SPO Threshold | CC Required |
|---|---|---|---|
| ParameterChange | Varies by param group (4 groups) | Varies by param group (5 groups) | Yes |
| HardForkInitiation | DRep threshold | SPO threshold | Yes |
| TreasuryWithdrawals | DRep threshold | No | Yes |
| NoConfidence | DRep threshold | SPO threshold | No |
| UpdateCommittee | DRep threshold | SPO threshold | No (if NoConfidence) |
| NewConstitution | DRep threshold | No | Yes |
| InfoAction | No threshold | No threshold | No |
Ratification
Ratification uses a two-epoch delay: proposals and votes from epoch E are considered at the E+1 → E+2 boundary using a frozen RatificationSnapshot. This prevents mid-epoch voting from affecting the current epoch's ratification. Thresholds use exact rational arithmetic via u128 cross-multiplication.
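The cross-multiplication trick can be sketched directly; comparing yes/total against num/den in u128 avoids both floating point and overflow at lovelace-scale stake totals:

```rust
// Exact check of yes/total >= num/den via cross-multiplication.
// Widening to u128 keeps the products from overflowing on large stakes.
fn meets_threshold(yes: u64, total: u64, num: u64, den: u64) -> bool {
    u128::from(yes) * u128::from(den) >= u128::from(num) * u128::from(total)
}
```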
Epoch Transitions
At each epoch boundary, process_epoch_transition() follows the Haskell NEWEPOCH STS rule:
flowchart TD
NE[NEWEPOCH] --> RUPD[Apply pending RUPD<br/>treasury += deltaT<br/>reserves -= deltaR<br/>credit rewards]
RUPD --> SNAP[SNAP<br/>Rotate mark → set → go<br/>Capture current fees]
SNAP --> POOLREAP[POOLREAP<br/>Process pool retirements<br/>Refund deposits]
POOLREAP --> RAT[RATIFY<br/>Governance ratification<br/>Enact approved actions]
RAT --> RESET[Reset block counters<br/>Clear RUPD state]
Reward Distribution (RUPD)
Rewards follow a deferred schedule matching Haskell's pulsing reward computation:
- Epoch E → E+1: Compute RUPD (monetary expansion + fees - treasury cut)
- Epoch E+1 → E+2: Apply RUPD (credit rewards to accounts, update treasury/reserves)
The reward calculation uses the "go" snapshot (two epochs old) for stake distribution, ensuring a stable base for computation.
Stake Snapshots
The mark/set/go model ensures different subsystems use consistent, non-overlapping snapshots:
| Snapshot | Age | Used For |
|---|---|---|
| Mark | Current epoch boundary | Future leader election (2 epochs later) |
| Set | Previous epoch boundary | Current epoch leader election |
| Go | Two epochs ago | Current epoch reward distribution |
UTxO Storage
UtxoStore
The persistent UTxO set wraps a dugite-lsm LSM tree:
- 36-byte keys — 32-byte transaction hash + 4-byte output index (big-endian)
- Bincode values — `TransactionOutput` serialized via bincode
- Address index — In-memory `HashMap<Address, HashSet<TransactionInput>>` for N2C LocalStateQuery `GetUTxOByAddress` efficiency
- Bloom filters — 10 bits per key (~1% false positive rate) for fast negative lookups during validation
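The key layout can be sketched directly; a big-endian index makes keys for one transaction sort in output order:

```rust
// The 36-byte UTxO key: 32-byte tx hash followed by the output index
// in big-endian, so byte-wise key order matches (tx, index) order.
fn utxo_key(tx_hash: &[u8; 32], index: u32) -> [u8; 36] {
    let mut key = [0u8; 36];
    key[..32].copy_from_slice(tx_hash);
    key[32..].copy_from_slice(&index.to_be_bytes());
    key
}
```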
DiffSeq (Rollback Support)
Each block produces a UtxoDiff recording inserted and deleted UTxOs. The DiffSeq holds the last k=2160 diffs, enabling O(1) rollback by applying diffs in reverse without reloading snapshots.
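The apply/revert mechanics of a per-block diff can be sketched as below; the types are illustrative stand-ins, not Dugite's actual `UtxoDiff` definition:

```rust
use std::collections::HashMap;

/// Illustrative stand-in for a per-block diff: which UTxOs the block
/// created and which it consumed (with enough data to restore them).
struct UtxoDiff {
    inserted: Vec<(String, u64)>, // (outref, value) created by the block
    deleted: Vec<(String, u64)>,  // (outref, value) consumed by the block
}

/// Apply a block's diff to the UTxO map.
fn apply(utxo: &mut HashMap<String, u64>, d: &UtxoDiff) {
    for (k, _) in &d.deleted { utxo.remove(k); }
    for (k, v) in &d.inserted { utxo.insert(k.clone(), *v); }
}

/// Rollback replays the diff in reverse: remove what the block inserted,
/// restore what it deleted. No snapshot reload is needed.
fn revert(utxo: &mut HashMap<String, u64>, d: &UtxoDiff) {
    for (k, _) in &d.inserted { utxo.remove(k); }
    for (k, v) in &d.deleted { utxo.insert(k.clone(), *v); }
}
```

Holding the last k such diffs is exactly what lets the DiffSeq undo up to k blocks without touching disk snapshots.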
LedgerSeq (Anchored State Sequence)
LedgerSeq implements Haskell's V2 LedgerDB architecture:
- Anchor — One full `LedgerState` at the immutable tip (persisted to disk)
- Volatile deltas — Per-block `LedgerDelta` for the last k blocks
- Checkpoints — Full state snapshots every ~100 blocks for fast reconstruction
- Rollback — Drop trailing deltas and reconstruct from the nearest checkpoint
This avoids the 17-34 GB memory overhead of storing k full state copies.
CompositeUtxoView (Mempool Support)
All validate_transaction_* functions accept any UtxoLookup implementation. The CompositeUtxoView layers a mempool overlay on top of the on-chain UTxO set, enabling validation of chained mempool transactions (where one tx spends outputs of another unconfirmed tx) without mutating the live ledger state.
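The layering idea can be sketched with a simplified lookup trait; the trait shape and field names here are illustrative assumptions, not Dugite's actual API:

```rust
use std::collections::HashMap;

/// Simplified stand-in for the UtxoLookup abstraction described above.
trait UtxoLookup {
    fn lookup(&self, outref: &str) -> Option<u64>;
}

impl UtxoLookup for HashMap<String, u64> {
    fn lookup(&self, outref: &str) -> Option<u64> { self.get(outref).copied() }
}

/// Mempool overlay on top of the on-chain set: outputs created by
/// unconfirmed txs are visible, inputs they spent are masked.
struct CompositeView<'a> {
    chain: &'a dyn UtxoLookup,
    mempool_created: HashMap<String, u64>,
    mempool_spent: Vec<String>,
}

impl<'a> UtxoLookup for CompositeView<'a> {
    fn lookup(&self, outref: &str) -> Option<u64> {
        if self.mempool_spent.iter().any(|s| s == outref) {
            return None; // consumed by an earlier unconfirmed tx
        }
        self.mempool_created
            .lookup(outref)
            .or_else(|| self.chain.lookup(outref))
    }
}
```

A chained mempool transaction then validates against the composite view exactly as it would against the chain, with no mutation of live ledger state.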
Consensus
Dugite implements the Ouroboros Praos consensus protocol, the proof-of-stake protocol used by Cardano since the Shelley era.
Ouroboros Praos Overview
Ouroboros Praos divides time into fixed-length slots. In each slot, pools are independently elected as slot leaders with probability proportional to their stake; a slot may have zero, one, or occasionally several leaders. An elected leader is entitled to produce a block for that slot. Key properties:
- Slot-based — Time is divided into slots (1 second each on mainnet)
- Epoch-based — Slots are grouped into epochs (432000 slots / 5 days on mainnet)
- Stake-proportional — The probability of being elected is proportional to the pool's active stake
- Private leader selection — Only the pool operator knows if they are elected (until they publish the block)
Slot Leader Election
VRF-Based Selection
Each slot, the pool operator evaluates a VRF (Verifiable Random Function) using:
- Their VRF signing key
- The slot number
- The epoch nonce
The VRF produces:
- A VRF output — A deterministic pseudo-random value
- A VRF proof — A proof that the output was correctly computed
Leader Threshold
The VRF output is compared against a threshold derived from:
- The pool's relative stake (sigma)
- The active slot coefficient (f = 0.05 on mainnet)
The threshold is computed using the phi function:
phi(sigma) = 1 - (1 - f)^sigma
A slot leader is elected if VRF_output < phi(sigma).
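For intuition only, the threshold relation can be sketched in floating point; as the next section explains, the actual implementation uses exact fixed-point arithmetic and never touches `f64`:

```rust
/// Floating-point *illustration* of the phi leadership threshold.
/// The real consensus path uses exact 34-digit fixed-point arithmetic;
/// this sketch only shows the shape of the function.
fn phi(sigma: f64, active_slot_coeff: f64) -> f64 {
    1.0 - (1.0 - active_slot_coeff).powf(sigma)
}
```

Note that phi is increasing in sigma: a pool with more relative stake has a larger target region for its VRF output, hence a higher election probability.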
VRF Exact Rational Arithmetic
The leader check is a critical consensus operation — any deviation from the Haskell reference implementation would cause a node to disagree on which blocks are valid. Dugite uses exact 34-digit fixed-point arithmetic via dashu-int IBig, matching Haskell's FixedPoint E34 type exactly. No floating-point operations are used anywhere in the VRF computation path.
Era-dependent VRF modes:
| Era | Protocol Version | VRF Output Derivation | certNatMax |
|---|---|---|---|
| Shelley — Alonzo (TPraos) | proto < 7 | Raw 64-byte VRF output | 2^512 |
| Babbage — Conway (Praos) | proto >= 7 | Blake2b-256("L" || output) | 2^256 |
In TPraos mode (Shelley through Alonzo), the raw 64-byte VRF output is used directly for the leader check, with a certNatMax of 2^512 defining the output space. In Praos mode (Babbage onward), the VRF output is hashed with Blake2b-256("L" || output) to produce a 32-byte value, reducing certNatMax to 2^256. The "L" prefix distinguishes the leader VRF output from the nonce VRF output (which uses "N").
Mathematical primitives:
- `ln(1 + x)` — Uses the Euler continued fraction expansion, matching Haskell's `lncf` function. This converges for all `x >= 0`, unlike a Taylor series, which has a limited radius of convergence.
- `taylorExpCmp` — Computes `exp()` via a Taylor series with rigorous error bounds, enabling early termination when the comparison result can be determined without computing the full expansion. This avoids unnecessary precision in the common case where the VRF output is far from the threshold.
Epoch Nonce
The epoch nonce is computed at each epoch boundary:
epoch_nonce = hash(candidate_nonce || lab_nonce)
Where:
- `candidate_nonce` is the evolving nonce frozen at the stability window boundary of the previous epoch
- `lab_nonce` is the "last applied block" nonce, derived from the final block applied in the previous epoch
The initial nonce is derived from the Shelley genesis hash.
Nonce Establishment
The nonce lifecycle follows a precise sequence across epoch boundaries:
- Evolving nonce — Accumulates VRF nonce contributions from every block: `evolving_nonce = hash(prev_evolving_nonce || hash(vrf_nonce_output))`
- Candidate nonce — The evolving nonce is frozen (snapshotted) at the stability window boundary within each epoch. After this point, new VRF contributions only affect the evolving nonce, not the candidate.
- Epoch nonce — At the epoch boundary, the new epoch nonce is computed as `hash(candidate_nonce_from_prev_epoch || lab_nonce)`.
flowchart LR
A["Block VRF<br/>contributions"] -->|"accumulated<br/>every block"| B["Evolving<br/>Nonce"]
B -->|"frozen at<br/>stability window"| C["Candidate<br/>Nonce"]
C -->|"hash(candidate ∥ lab)"| D["Epoch Nonce<br/>(next epoch)"]
Nonce availability after startup:
The epoch nonce and its accumulators (evolving_nonce, candidate_nonce, last_epoch_block_nonce) are serialized in the ledger snapshot and considered authoritative immediately after a snapshot load or Mithril import. This matches Haskell cardano-node's treatment of praosStateEpochNonce, which is read directly from the deserialized PraosState with no warm-up or "established" gate.
Era-Dependent Nonce Stabilisation Window
The stability window determines how early in an epoch the candidate nonce is frozen. This varies by era:
| Era | Protocol Version | Stability Window |
|---|---|---|
| Shelley — Babbage | proto < 10 | 3k/f slots |
| Conway | proto >= 10 | 4k/f slots |
Where k is the security parameter (2160 on mainnet) and f is the active slot coefficient (0.05 on mainnet). The longer Conway window provides additional time for nonce contributions to accumulate, improving randomness quality.
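A worked example with mainnet parameters; expressing f as 1/20 so the arithmetic stays in integers is an illustrative choice, not Dugite's actual code:

```rust
/// Era-dependent stability window in slots. `inv_f` is 1/f expressed
/// as an integer (20 on mainnet, where f = 0.05) so that
/// 3k/f = 3 * k * inv_f and 4k/f = 4 * k * inv_f exactly.
fn stability_window_slots(k: u64, inv_f: u64, protocol_version: u64) -> u64 {
    let multiplier = if protocol_version >= 10 { 4 } else { 3 };
    multiplier * k * inv_f
}
```

On mainnet this gives 129,600 slots (1.5 days) before Conway and 172,800 slots (2 days) from Conway onward.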
Chain Selection
When multiple valid chains exist, Ouroboros Praos selects the chain with the most blocks (longest chain rule). Dugite implements:
- Chain comparison — Compare the block height of competing chains
- Rollback support — Roll back up to k=2160 blocks to switch to a longer chain
- Immutability — Blocks deeper than k are considered final
Epoch Transitions
At each epoch boundary, Dugite performs:
Stake Snapshot Rotation
Dugite uses the mark/set/go snapshot model:
- Mark — The current epoch boundary snapshot (will be used for leader election two epochs from now)
- Set — The previous epoch's mark (used for leader election in the current epoch)
- Go — Two epochs ago (used for reward distribution in the current epoch)
At each epoch boundary:
- Go becomes the active snapshot for reward distribution
- Set moves to go
- Mark moves to set
- A new mark is taken from the current ledger state
flowchart LR
subgraph "Epoch N Boundary"
direction TB
L["Current Ledger<br/>State"] -->|"snapshot"| M["Mark"]
M -->|"rotate"| S["Set"]
S -->|"rotate"| G["Go"]
end
S -.- LE["Leader Election<br/>(epoch N)"]
G -.- RD["Reward Distribution<br/>(epoch N)"]
Snapshot Establishment
After a node starts, the snapshots are not immediately trustworthy for block production:
- `snapshots_established` requires at least 3 live (post-replay) epoch transitions before returning true. This ensures that all three snapshot positions (mark, set, go) have been populated by the running node with precise stake calculations.
- Replay-built snapshots may contain approximate stake values due to differences in reward calculation during fast replay versus live operation. These are sufficient for validation but not authoritative for forging.
- VRF leader eligibility failures are non-fatal until snapshots are fully established. During the establishment period, a pool may fail leader checks because the stake distribution in the snapshot does not yet reflect the true on-chain state. The node logs these failures but continues normal operation.
Reward Calculation and Distribution
At each epoch boundary, rewards are calculated and distributed:
- Monetary expansion — New ADA is created from the reserves based on the monetary expansion rate
- Fee collection — Transaction fees from the epoch are collected
- Treasury cut — A fraction (tau) of rewards goes to the treasury
- Pool rewards — Remaining rewards are distributed to pools based on their performance
- Member distribution — Pool rewards are split between the operator and delegators based on pool parameters (cost, margin, pledge)
Validation Checks
Dugite validates the following consensus-level properties:
KES Period Validation
The KES (Key Evolving Signature) period in the block header must be within the valid range for the operational certificate:
opcert_start_kes_period <= current_kes_period < opcert_start_kes_period + max_kes_evolutions
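The range check above translates directly into code (names here are illustrative):

```rust
/// KES period validity: the current period must lie in the half-open
/// window [opcert_start, opcert_start + max_evolutions) established by
/// the operational certificate.
fn kes_period_valid(current: u64, opcert_start: u64, max_evolutions: u64) -> bool {
    current >= opcert_start && current < opcert_start + max_evolutions
}
```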
VRF Verification
Full VRF verification includes:
- VRF key binding — `blake2b_256(header.vrf_vkey)` must match the pool's registered `vrf_key` hash
- VRF proof verification — The VRF proof is cryptographically verified against the VRF public key
- Leader eligibility — The VRF leader value is checked against the Praos threshold for the pool's relative stake using the phi function
Operational Certificate Verification
The operational certificate's Ed25519 signature is verified against the raw bytes signable format (matching Haskell's OCertSignable):
signable = hot_vkey(32 bytes) || counter(8 bytes BE) || kes_period(8 bytes BE)
signature = sign(cold_skey, signable)
The counter must be monotonically increasing per pool to prevent certificate replay.
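Assembling the 48-byte signable payload from the layout shown above can be sketched as follows (the function name is hypothetical):

```rust
/// Build the OCertSignable byte string described above:
/// 32-byte hot vkey, then counter and KES period as 8-byte big-endian.
fn ocert_signable(hot_vkey: &[u8; 32], counter: u64, kes_period: u64) -> [u8; 48] {
    let mut out = [0u8; 48];
    out[..32].copy_from_slice(hot_vkey);
    out[32..40].copy_from_slice(&counter.to_be_bytes());
    out[40..48].copy_from_slice(&kes_period.to_be_bytes());
    out
}
```

The cold key then signs these exact bytes; any re-encoding (e.g. CBOR wrapping) would produce a signature the Haskell node rejects.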
KES Signature Verification
Block headers are signed using the Sum6Kes scheme (depth-6 binary sum composition over Ed25519). The KES key is evolved to the correct period offset from the operational certificate's start period. Verification checks:
- The KES signature over the header body bytes is valid
- The KES period matches the expected value for the block's slot
Slot Leader Eligibility
The VRF proof is checked to confirm the block producer was indeed elected for the slot, given the epoch nonce and their pool's stake.
Networking
Dugite implements the full Ouroboros network protocol stack, supporting both Node-to-Node (N2N) and Node-to-Client (N2C) communication.
Protocol Stack
flowchart TB
subgraph N2N ["Node-to-Node (TCP)"]
HS[Handshake V14/V15]
CSP[ChainSync<br/>Headers]
BFP[BlockFetch<br/>Block Bodies]
TX[TxSubmission2<br/>Transactions]
KA[KeepAlive<br/>Liveness]
end
subgraph N2C ["Node-to-Client (Unix Socket)"]
HSC[Handshake]
LCS[LocalChainSync<br/>Block Delivery]
LSQ[LocalStateQuery<br/>Ledger Queries]
LTS[LocalTxSubmission<br/>Submit Transactions]
LTM[LocalTxMonitor<br/>Mempool Queries]
end
MUX[Multiplexer] --> N2N
MUX --> N2C
Relay Node Architecture
flowchart TB
subgraph Inbound ["Inbound Connections"]
IN1[Peer A] -->|N2N| MUX_IN[Multiplexer]
IN2[Peer B] -->|N2N| MUX_IN
IN3[Wallet] -->|N2C| MUX_N2C[N2C Server]
end
subgraph Outbound ["Outbound Connections"]
MUX_OUT[Multiplexer] -->|ChainSync| PEER1[Bootstrap Peer]
MUX_OUT -->|BlockFetch| PEER1
MUX_OUT -->|TxSubmission| PEER1
end
subgraph Core ["Node Core"]
PM[Peer Manager<br/>Cold→Warm→Hot]
MP[Mempool<br/>Tx Validation]
CDB[(ChainDB)]
LS[Ledger State]
CONS[Consensus<br/>Ouroboros Praos]
end
MUX_IN -->|ChainSync| CDB
MUX_IN -->|BlockFetch| CDB
MUX_IN -->|TxSubmission| MP
MUX_N2C -->|LocalStateQuery| LS
MUX_N2C -->|LocalTxSubmission| MP
MUX_N2C -->|LocalTxMonitor| MP
PEER1 -->|blocks| CDB
CDB --> LS
LS --> CONS
PM -->|manage| MUX_OUT
PM -->|manage| MUX_IN
Node-to-Node (N2N) Protocol
N2N connections use TCP and carry multiple mini-protocols over a multiplexed connection.
Handshake (V14/V15)
The N2N handshake negotiates the protocol version and network parameters:
- Protocol version V14 (Plomin HF) and V15 (SRV DNS support)
- Network magic number
- Diffusion mode: `InitiatorOnly` or `InitiatorAndResponder`
- Peer sharing flags
ChainSync
The ChainSync mini-protocol synchronizes block headers between peers:
- Client mode: Requests headers sequentially from a peer to track the chain
- Server mode: Serves headers to connected peers, with per-peer cursor tracking
Key messages:
- `MsgFindIntersect` — Find a common chain point
- `MsgRequestNext` — Request the next header
- `MsgRollForward` — Header delivered
- `MsgRollBackward` — Chain reorganization
- `MsgAwaitReply` — Peer has no new headers (at tip)
BlockFetch
The BlockFetch mini-protocol retrieves block bodies by hash:
- Client mode: Requests ranges of blocks from peers
- Server mode: Serves blocks to peers, validates block existence before serving
Key messages:
- `MsgRequestRange` — Request blocks in a slot range
- `MsgBlock` — Block delivered
- `MsgNoBlocks` — Requested blocks not available
- `MsgBatchDone` — End of batch
TxSubmission2
The TxSubmission2 mini-protocol propagates transactions between peers:
- Bidirectional handshake (`MsgInit`)
- Flow-controlled transaction exchange with ack/req counts
- Inflight tracking per peer
- Mempool integration for serving transaction IDs and bodies
KeepAlive
The KeepAlive mini-protocol maintains connection liveness with periodic heartbeat messages.
PeerSharing
The PeerSharing mini-protocol enables gossip-based peer discovery. Peers exchange addresses of other known peers to help the network self-organize.
Node-to-Client (N2C) Protocol
N2C connections use Unix domain sockets and serve local clients (wallets, CLI tools). The N2C handshake supports versions V16-V22 (Conway era) with automatic detection of the Haskell bit-15 version encoding used by cardano-cli 10.x.
LocalStateQuery
Supports all 39 Shelley BlockQuery tags (0-38) plus cross-era queries, providing full compatibility with cardano-node. The query protocol uses an acquire/query/release pattern:
- `MsgAcquire` — Lock the ledger state at the current tip
- `MsgQuery` — Execute queries against the locked state
- `MsgRelease` — Release the lock
All BlockQuery messages are wrapped in the Hard Fork Combinator (HFC) envelope. Results from era-specific BlockQuery tags are returned inside an array(1) success wrapper, while QueryAnytime and QueryHardFork results are returned unwrapped.
Shelley BlockQuery Tags 0-38
| Tag | Query | Description |
|---|---|---|
| 0 | GetLedgerTip | Current slot, hash, and block number |
| 1 | GetEpochNo | Active epoch number |
| 2 | GetCurrentPParams | Live protocol parameters (positional array(31) CBOR encoding matching Haskell ConwayPParams EncCBOR) |
| 3 | GetProposedPParamsUpdates | Proposed parameter updates (empty map in Conway) |
| 4 | GetStakeDistribution | Pool stake distribution with pledge |
| 5 | GetNonMyopicMemberRewards | Estimated rewards per pool for given stake amounts |
| 6 | GetUTxOByAddress | UTxO set filtered by address (Cardano wire format Map<[tx_hash, index], {0: addr, 1: value, 2: datum}>) |
| 7 | GetUTxOWhole | Entire UTxO set (expensive; used by testing tools) |
| 8 | DebugEpochState | Simplified epoch state summary (treasury, reserves, active stake totals) |
| 9 | GetCBOR | Meta-query that wraps the result of an inner query in CBOR tag(24), returning raw bytes |
| 10 | GetFilteredDelegationsAndRewardAccounts | Delegation targets and reward balances for a set of stake credentials |
| 11 | GetGenesisConfig | System start, epoch length, slot length, and security parameter |
| 12 | DebugNewEpochState | Simplified new epoch state summary (epoch number, block count, snapshot state) |
| 13 | DebugChainDepState | Chain-dependent state summary (last applied block, operational certificate counters) |
| 14 | GetRewardProvenance | Reward calculation provenance: reward pot, treasury tax rate, total active stake, per-pool reward breakdown |
| 15 | GetUTxOByTxIn | UTxO set filtered by transaction inputs |
| 16 | GetStakePools | Set of all registered pool key hashes |
| 17 | GetStakePoolParams | Registered pool parameters (owner, cost, margin, pledge, relays, metadata) |
| 18 | GetRewardInfoPools | Per-pool reward breakdown: relative stake, leader and member reward splits, pool margin, fixed cost, and performance metrics |
| 19 | GetPoolState | QueryPoolStateResult encoded as array(4): [poolParams, futurePoolParams, retiring, deposits] |
| 20 | GetStakeSnapshots | Mark/set/go stake snapshots used for leader schedule calculation |
| 21 | GetPoolDistr | Pool stake distribution with VRF verification key hashes |
| 22 | GetStakeDelegDeposits | Deposit amounts per registered stake credential |
| 23 | GetConstitution | Constitution anchor (URL + hash) and optional guardrail script hash |
| 24 | GetGovState | ConwayGovState encoded as array(7) CBOR: active proposals, committee state, constitution, current/previous protocol parameters, future parameters, and DRep pulse state |
| 25 | GetDRepState | Registered DReps with their delegation counts and deposit balances (supports credential filter) |
| 26 | GetDRepStakeDistr | Total delegated stake per DRep (lovelace) |
| 27 | GetCommitteeMembersState | Constitutional committee members, iterating committee_expiration entries with hot_credential_type for each member |
| 28 | GetFilteredVoteDelegatees | Vote delegation map per stake credential |
| 29 | GetAccountState | Treasury and reserves balances |
| 30 | GetSPOStakeDistr | Per-pool stake distribution filtered by a set of pool IDs |
| 31 | GetProposals | Active governance proposals with optional governance action ID filter |
| 32 | GetRatifyState | Enacted and expired proposals along with the ratify_delayed flag |
| 33 | GetFuturePParams | Pending protocol parameter changes scheduled for the next epoch (if any) |
| 34 | GetLedgerPeerSnapshot | SPO relay addresses weighted by relative stake, used for P2P ledger-based peer discovery |
| 35 | QueryStakePoolDefaultVote | Default vote per pool derived from its DRep delegation (AlwaysAbstain, AlwaysNoConfidence, or specific DRep vote) |
| 36 | GetPoolDistr2 | Extended pool distribution including total_active_stake alongside per-pool entries |
| 37 | GetStakeDistribution2 | Extended stake distribution including total_active_stake |
| 38 | GetMaxMajorProtocolVersion | Maximum supported major protocol version (returns 10) |
Cross-Era Queries
In addition to the Shelley BlockQuery tags, the following queries operate outside the HFC era-specific envelope:
| Query | Description |
|---|---|
| GetCurrentEra | Active era (Byron through Conway) |
| GetChainBlockNo | Current chain height, WithOrigin encoded as [1, blockNo] for At or [0] for Origin |
| GetChainPoint | Current tip point, encoded as [] for Origin or [slot, hash] for a specific point |
| GetSystemStart | Network genesis time as UTCTime encoded [year, dayOfYear, picosOfDay] |
| GetEraHistory | Indefinite array of EraSummary entries (Byron safe_zone = k*2, Shelley+ safe_zone = 3k/f) |
CBOR Encoding Notes
- PParams are encoded as a positional `array(31)` with integer keys 0-33, matching Haskell's `EncCBOR` instance (not JSON string keys).
- CBOR Sets (e.g., pool IDs, stake key owners) use `tag(258)`, and elements must be sorted for canonical encoding.
- Value encoding: plain integer for ADA-only UTxOs, `[coin, multiasset_map]` for multi-asset UTxOs.
LocalTxSubmission
Submits transactions from local clients to the node's mempool:
| Message | Description |
|---|---|
| MsgSubmitTx | Submit a transaction (era ID + CBOR bytes) |
| MsgAcceptTx | Transaction accepted into mempool |
| MsgRejectTx | Transaction rejected with reason |
Submitted transactions undergo both Phase-1 (structural) and Phase-2 (Plutus script) validation before mempool admission.
LocalTxMonitor
Monitors the transaction mempool:
| Message | Description |
|---|---|
| MsgAcquire | Acquire a mempool snapshot |
| MsgHasTx | Check if a transaction is in the mempool |
| MsgNextTx | Get the next transaction from the mempool |
| MsgGetSizes | Get mempool capacity, size, and transaction count |
P2P Networking
Dugite implements the full Ouroboros P2P peer selection governor, enabled by default (EnableP2P: true). The governor manages peer connections through a target-driven state machine that continuously maintains optimal connectivity.
Diffusion Mode
The `DiffusionMode` config field controls how the node participates in the network:
- `InitiatorAndResponder` (default) — Full relay mode. The node opens a listening port and accepts inbound N2N connections from other peers, in addition to making outbound connections. This is the correct mode for relay nodes.
- `InitiatorOnly` — Block producer mode. The node only makes outbound connections to its configured relays and never opens a listening port. This prevents direct internet exposure of block producers.
Peer Sharing
The PeerSharing mini-protocol enables gossip-based peer discovery. When enabled, the node exchanges addresses of known routable peers with connected peers.
Peer sharing behaviour is auto-configured by default:
- Relays — Peer sharing is enabled, allowing the node to both request and serve peer addresses.
- Block producers — Peer sharing is disabled (when `--shelley-kes-key` is provided) to avoid leaking the BP's network position.
Override with the PeerSharing config field (true/false) if needed.
The PeerSharing protocol filters out non-routable addresses (RFC1918, CGNAT, loopback, link-local, IPv6 ULA) before sharing.
Peer Manager
The peer manager classifies peers into three temperature categories following the cardano-node model:
- Cold — Known but not connected
- Warm — TCP connected, keepalive running, but not actively syncing
- Hot — Fully active with ChainSync, BlockFetch, and TxSubmission2
Peer Lifecycle
stateDiagram-v2
[*] --> Cold: Discovered
Cold --> Warm: TCP connect + handshake
Warm --> Hot: Mini-protocols activated (5s dwell)
Hot --> Warm: Demotion (poor performance / churn)
Warm --> Cold: Disconnection / backoff
Cold --> [*]: Evicted (max failures)
Warm peers must dwell for at least 5 seconds before promotion to Hot, preventing rapid cycling.
Peer Sources
Peers enter the Cold pool from four sources:
| Source | Description |
|---|---|
| Topology | Bootstrap peers, local roots, and public roots from the topology file |
| DNS | A/AAAA resolution of hostname-based topology entries |
| Ledger | SPO relay addresses from pool registration certificates (after useLedgerAfterSlot) |
| PeerSharing | Addresses received via the gossip protocol from connected peers |
Peer Selection & Scoring
Peers are ranked using a composite score:
score = 0.4 × reputation + 0.4 × latency_score + 0.2 × failure_score
Where:
- Reputation — 0.0 (worst) to 1.0 (best), adjusted +0.01 per success, -0.1 per failure
- Latency score — `1 / (1 + ms/200)`, based on EWMA latency (smoothing α = 0.3)
- Failure score — `max(1.0 - failures × 0.1, 0.0)`; failure counts decay (halving every 5 minutes)
Subnet diversity is enforced: peers from the same /24 (IPv4) or /48 (IPv6) subnet receive a selection penalty.
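The composite score above can be sketched as follows (parameter names are illustrative):

```rust
/// Composite peer score as described above:
/// 0.4 * reputation + 0.4 * latency_score + 0.2 * failure_score.
fn peer_score(reputation: f64, latency_ms: f64, failures: u32) -> f64 {
    let latency_score = 1.0 / (1.0 + latency_ms / 200.0);
    let failure_score = (1.0 - failures as f64 * 0.1).max(0.0);
    0.4 * reputation + 0.4 * latency_score + 0.2 * failure_score
}
```

A perfect peer (reputation 1.0, negligible latency, no failures) scores 1.0; the subnet penalty is applied separately at selection time.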
Failure Handling
- Exponential backoff on connection failures: 5s → 10s → 20s → 40s → 80s → 160s (capped), with ±2s random fuzz
- Max cold failures: 5 consecutive failures before a peer is evicted from the peer table
- Failure decay: Failure counts halve every 5 minutes, allowing peers to recover reputation over time
- Circuit breaker: Closed → Open → HalfOpen with exponential cooldown
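The doubling-with-cap schedule can be sketched as follows; the ±2 s fuzz is omitted for determinism, and indexing the first failure as 0 is an illustrative assumption:

```rust
/// Exponential backoff: 5 s base, doubling per consecutive failure,
/// capped at 160 s (5 << 5 = 160).
fn backoff_secs(consecutive_failures: u32) -> u64 {
    5u64 << consecutive_failures.min(5)
}
```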
Inbound Connections
- Per-IP token bucket rate limiting for DoS protection
- N2N server handles handshake, ChainSync, BlockFetch, KeepAlive, TxSubmission2, and PeerSharing
- `DiffusionMode` controls whether inbound connections are accepted
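A minimal token-bucket sketch of the per-IP rate limit; the capacity and refill rate are illustrative parameters, not Dugite's actual values:

```rust
/// Per-IP token bucket: each connection attempt spends one token;
/// tokens refill continuously up to a fixed capacity.
struct TokenBucket {
    tokens: f64,
    capacity: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { tokens: capacity, capacity, refill_per_sec }
    }

    /// `elapsed_secs` is the time since the last attempt from this IP.
    /// Returns whether the connection is admitted.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

Bursts up to the bucket capacity are admitted immediately; a sustained flood from one IP is throttled to the refill rate without affecting other peers.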
P2P Governor
The governor runs as a tokio task on a 30-second interval, continuously evaluating peer counts against configured targets and emitting promotion/demotion/connect/disconnect actions.
Target Counts
The governor maintains six independent target counts (matching cardano-node defaults):
| Target | Default | Description |
|---|---|---|
| TargetNumberOfKnownPeers | 85 | Total peers in the peer table (cold + warm + hot) |
| TargetNumberOfEstablishedPeers | 40 | Warm + hot peers (TCP connected) |
| TargetNumberOfActivePeers | 15 | Hot peers (fully syncing) |
| TargetNumberOfKnownBigLedgerPeers | 15 | Known big ledger peers |
| TargetNumberOfEstablishedBigLedgerPeers | 10 | Established big ledger peers |
| TargetNumberOfActiveBigLedgerPeers | 5 | Active big ledger peers |
When any target is not met, the governor promotes peers to fill the deficit. When any target is exceeded, the governor demotes the lowest-scoring surplus peers. Local root peers are never demoted.
Sync-State-Aware Targeting
The governor adjusts behaviour based on sync state:
- PreSyncing / Syncing — Big ledger peers are prioritised for fast block download
- CaughtUp — Normal target enforcement with balanced peer selection
Churn
The governor periodically rotates a subset of peers to discover better alternatives:
- Configurable churn interval (default: 20% target reduction cycle)
- Local root peers are exempt from churn
- Churn ensures the node explores the peer landscape rather than settling on suboptimal connections
Prometheus Metrics
The P2P subsystem exports the following metrics:
| Metric | Description |
|---|---|
| dugite_p2p_enabled | Whether P2P governance is active (gauge: 0 or 1) |
| dugite_diffusion_mode | Current diffusion mode (0 = InitiatorOnly, 1 = InitiatorAndResponder) |
| dugite_peer_sharing_enabled | Whether peer sharing is active (gauge: 0 or 1) |
| dugite_peers_cold | Number of cold (known, unconnected) peers |
| dugite_peers_warm | Number of warm (established) peers |
| dugite_peers_hot | Number of hot (active) peers |
Peer Discovery
Peers are discovered through multiple channels:
- Topology file — Bootstrap peers, local roots, and public roots
- PeerSharing protocol — Gossip-based discovery from connected peers
- Ledger-based discovery — SPO relay addresses extracted from pool registration certificates
Ledger-Based Peer Discovery
Once the node has synced past the slot threshold configured by useLedgerAfterSlot in the topology file, it activates ledger-based peer discovery. This mechanism extracts SPO relay addresses directly from pool registration parameters (pool_params) stored in the ledger state.
The discovery process runs on a periodic 5-minute interval and works as follows:
- Slot check — The current ledger tip slot is compared against `useLedgerAfterSlot`. If the topology sets this value to a negative number or omits it entirely, ledger peer discovery remains disabled.
- Relay extraction — All registered pool parameters are iterated, extracting relay entries of three types:
  - `SingleHostAddr` — IPv4 address and port
  - `SingleHostName` — DNS hostname and port
  - `MultiHostName` — DNS hostname with default port 3001
- Sampling — A deterministic subset (up to 20 relays) is sampled from the full relay set to avoid resolving thousands of addresses at once. The sample offset rotates based on the current slot for coverage diversity.
- DNS resolution — Hostnames are resolved to socket addresses via async DNS lookup.
- Peer manager integration — Resolved addresses are added as cold peers with `PeerSource::Ledger` classification, alongside existing bootstrap and public root peers.
As pool registrations change over time (new pools register, existing pools update relay addresses, pools retire), the ledger peer set evolves dynamically. This provides a protocol-native discovery mechanism that does not depend on any centralized directory.
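The slot-rotating deterministic sample described above might look like this sketch; the function shape is an illustrative assumption, not Dugite's actual sampling code:

```rust
/// Take up to `max` relays starting at a slot-derived offset, wrapping
/// around the full set so successive discovery rounds cover different
/// parts of the relay list.
fn sample_relays(relays: &[String], slot: u64, max: usize) -> Vec<String> {
    if relays.is_empty() {
        return Vec::new();
    }
    let offset = (slot as usize) % relays.len();
    (0..relays.len().min(max))
        .map(|i| relays[(offset + i) % relays.len()].clone())
        .collect()
}
```

Determinism keeps the sample reproducible for a given slot, while the rotation spreads DNS resolution load across the whole relay set over time.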
Block Relay
Dugite implements full relay node behavior, propagating blocks received from upstream peers to all downstream N2N connections. This ensures that blocks flow through the network without requiring every node to sync directly from the block producer.
Broadcast Architecture
Block propagation uses a tokio::sync::broadcast channel with a capacity of 64 announcements. The architecture has three components:
- Sender — The node core holds a `broadcast::Sender<BlockAnnouncement>` obtained from the N2N server at startup. When the sync pipeline processes new blocks or the forge module produces a new block, it sends an announcement containing the slot, block hash, and block number.
- Receivers — Each N2N server connection spawns with its own `broadcast::Receiver` subscription. The connection handler uses `tokio::select!` to concurrently service mini-protocol messages and listen for block announcements.
- Delivery — When a downstream peer is waiting at the tip (having received `MsgAwaitReply` from ChainSync), an incoming block announcement triggers a `MsgRollForward` message to that peer, along with the block header. The peer can then fetch the full block body via BlockFetch.
Relay vs. Forger Announcements
Both synced and forged blocks flow through the same broadcast channel:
- Synced blocks — When the pipelined ChainSync client receives blocks from an upstream peer and the node is following the tip (strict mode), each batch's final block is announced to all downstream connections. This enables relay behavior where blocks received from one upstream peer propagate to all other connected peers.
- Forged blocks — When the block producer creates a new block, it is announced through the same channel after being written to ChainDB and applied to the ledger.
A parallel broadcast::Sender<RollbackAnnouncement> handles chain rollbacks, sending MsgRollBackward to downstream peers when the node's chain selection switches to a different fork.
Lagged Receivers
If a downstream peer falls behind (e.g., slow network or processing), the broadcast channel's bounded capacity means the receiver may lag. Lagged receivers skip missed announcements and log the gap, ensuring a slow peer does not block propagation to others.
Multiplexer
All mini-protocols run over a single TCP connection (N2N) or Unix socket (N2C), multiplexed by protocol ID:
| Protocol ID | Mini-Protocol |
|---|---|
| 0 | Handshake |
| 2 | ChainSync (N2N) |
| 3 | BlockFetch (N2N) |
| 4 | TxSubmission2 (N2N) |
| 8 | KeepAlive (N2N) |
| 10 | PeerSharing (N2N) |
| 5 | LocalChainSync (N2C) |
| 6 | LocalTxSubmission (N2C) |
| 7 | LocalStateQuery (N2C) |
| 9 | LocalTxMonitor (N2C) |
The multiplexer uses length-prefixed frames with protocol ID headers, matching the Ouroboros specification.
P2P Governor
This document describes Dugite's peer management architecture, implementing the Ouroboros P2P peer selection governor.
Architecture
Two modules implement peer management in dugite-network:
PeerManager (peer_manager.rs)
The data layer. Tracks every known peer in a flat `HashMap<SocketAddr, PeerInfo>` together with three `HashSet`s for the cold/warm/hot buckets.
| Feature | Description |
|---|---|
| Cold / Warm / Hot temperature tracking | Three-tier peer classification matching Ouroboros |
| PeerCategory | LocalRoot, PublicRoot, BigLedgerPeer, LedgerPeer, Shared, Bootstrap |
| ConnectionDirection | Inbound / Outbound tracking |
| PeerSource | Config, PeerSharing, Ledger |
| PeerPerformance | EWMA handshake RTT + block fetch latency |
| Reputation scoring | Composite of latency + volume + reliability + recency |
| Circuit breaker | Closed / Open / HalfOpen with exponential cooldown |
| Subnet diversity penalty | /24 IPv4, /48 IPv6 penalisation for peer selection |
| Trustable-first ordering | Two-tier ordering for peers_to_connect() |
| Inbound connection limit | Configurable max inbound connections |
| DiffusionMode | InitiatorOnly / InitiatorAndResponder |
| Failure-count time decay | Halves every 5 minutes |
Governor (governor.rs)
The policy layer. Runs on a 30-second `tokio::interval` in dugite-node.
| Feature | Description |
|---|---|
| PeerTargets | root/known/established/active + BLP variants |
| Sync-state-aware target switching | Adjusts targets for PreSyncing / Syncing / CaughtUp |
| Hard/soft connection limits | ConnectionDecision for accept/reject |
| Big-ledger-peer promotion priority | BLPs promoted first during sync |
| Active (hot) peer target enforcement | Promotes/demotes to meet active target |
| Established (warm+hot) target enforcement | Maintains established peer count |
| Surplus reduction | Demote/disconnect lowest reputation, local-root protected |
| Churn mechanism | 20% target reduction cycle at configurable intervals |
| Default targets | active=15, established=40, known=85 (matching cardano-node) |
Wiring
The governor runs as a standalone tokio::spawn task in node/mod.rs.
Every 30 seconds it:
- Acquires a read lock on `Arc<RwLock<PeerManager>>` and calls `governor.evaluate()` and `governor.maybe_churn()`.
- Acquires a write lock and applies the resulting `GovernorEvent`s by calling `promote_to_hot`, `demote_to_warm`, `peer_disconnected`, and `recompute_reputations`. `GovernorEvent::Connect` is acknowledged but not executed here; outbound connections originate from the main connection loop via `peers_to_connect()`.
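The read-then-write locking pattern can be sketched as follows. This is a minimal illustration using `std::sync::RwLock` in place of tokio's async lock, with stand-in `PeerManager`/`Governor`/`GovernorEvent` types; the real definitions live in dugite-network.

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-ins to show the locking pattern only.
struct PeerManager { hot: Vec<String> }
enum GovernorEvent { PromoteToHot(String) }
struct Governor;

impl Governor {
    // Read-only evaluation: decide what should change, mutate nothing.
    fn evaluate(&self, pm: &PeerManager) -> Vec<GovernorEvent> {
        if pm.hot.is_empty() {
            vec![GovernorEvent::PromoteToHot("198.51.100.1:3001".into())]
        } else {
            vec![]
        }
    }
}

fn governor_tick(pm: &Arc<RwLock<PeerManager>>, gov: &Governor) {
    // Phase 1: hold only a read lock while computing decisions.
    let events = {
        let guard = pm.read().unwrap();
        gov.evaluate(&guard)
    };
    // Phase 2: a short write-lock window to apply the events.
    let mut guard = pm.write().unwrap();
    for ev in events {
        match ev {
            GovernorEvent::PromoteToHot(addr) => guard.hot.push(addr),
        }
    }
}
```

Keeping evaluation under the read lock means concurrent readers (e.g. the connection loop) are never blocked while policy runs; only the brief event-application phase takes the write lock.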
Peer Selection State Machine
Peers progress through a formal state machine:
```mermaid
stateDiagram-v2
    [*] --> Cold
    Cold --> Warm: TCP connect + handshake
    Warm --> Hot: Activate mini-protocols
    Hot --> Warm: Deactivate mini-protocols
    Warm --> Cold: Disconnect
    Hot --> Cold: Forceful disconnect
```
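The transitions above can be captured as a total function over a small enum; anything not in the diagram is rejected. The type and event names here are illustrative, not the crate's actual API.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Temp { Cold, Warm, Hot }

enum PeerEvent { HandshakeOk, Activate, Deactivate, Disconnect }

// Only the edges from the state diagram are valid; everything else is None.
fn transition(t: Temp, e: PeerEvent) -> Option<Temp> {
    match (t, e) {
        (Temp::Cold, PeerEvent::HandshakeOk) => Some(Temp::Warm),
        (Temp::Warm, PeerEvent::Activate)    => Some(Temp::Hot),
        (Temp::Hot,  PeerEvent::Deactivate)  => Some(Temp::Warm),
        (Temp::Warm, PeerEvent::Disconnect)  => Some(Temp::Cold),
        (Temp::Hot,  PeerEvent::Disconnect)  => Some(Temp::Cold), // forceful
        _ => None,
    }
}
```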
Target Counts
The governor maintains six independent target counts:
| Target | Default |
|---|---|
| Known peers | 100 |
| Established peers | 40 |
| Active peers | 15 |
| Known big-ledger peers | 15 |
| Established big-ledger peers | 10 |
| Active big-ledger peers | 5 |
When any target is not met, the governor attempts to satisfy the deficit. When any target is exceeded, surplus peers are demoted by lowest reputation.
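Surplus reduction can be sketched as: sort candidates by reputation ascending, skip local roots, and take as many as the bucket exceeds its target. Field and function names are illustrative.

```rust
// Hypothetical minimal peer record for the sketch.
struct Peer { addr: &'static str, reputation: f64, local_root: bool }

// Addresses to demote when `peers.len()` exceeds `target`:
// lowest reputation first, local roots always protected.
fn surplus_demotions(mut peers: Vec<Peer>, target: usize) -> Vec<&'static str> {
    let surplus = peers.len().saturating_sub(target);
    peers.sort_by(|a, b| a.reputation.partial_cmp(&b.reputation).unwrap());
    peers.iter()
        .filter(|p| !p.local_root) // never demote local roots
        .take(surplus)
        .map(|p| p.addr)
        .collect()
}
```

Note that protecting local roots can leave a bucket above target; the sketch simply demotes fewer peers in that case rather than touching pinned ones.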
Local Root Peer Pinning
Local root peers (from localRoots in the topology file) have pinned targets
that override the normal target counts. Local roots are never demoted for
surplus reduction and are never churned.
Churn
The governor performs periodic churn to rotate peers:
- Deadline churn (normal mode) — Approximately every 55 minutes, a fraction of established and active peers are replaced.
- Bulk sync churn — During active block download, churn cycles are more aggressive (~15 minutes) to shed peers with poor block-fetch performance.
Big Ledger Peer Preference During Sync
Big ledger peers (SPOs in the top 90% of stake, obtained via
GetLedgerPeerSnapshot) serve as trusted anchors during bulk block download.
The governor maintains a separate target bucket for BLPs. When SyncState is
Syncing or PreSyncing, BLP targets take priority.
Thread Safety
The PeerManager is wrapped in Arc<RwLock<PeerManager>>. The governor task
acquires a read lock for evaluate() and a write lock only for event
application, keeping the write-lock window minimal.
Files
| File | Purpose |
|---|---|
| `crates/dugite-network/src/governor.rs` | Policy decisions and target enforcement |
| `crates/dugite-network/src/peer_manager.rs` | Peer state tracking and reputation |
| `crates/dugite-node/src/node/mod.rs` | Governor task wiring |
| `crates/dugite-node/src/config.rs` | Topology parsing |
Ouroboros Genesis Support
Dugite includes a Genesis State Machine (GSM) that tracks the node's sync progression through the Ouroboros Genesis protocol states.
Overview
The GSM implements three states matching the Ouroboros Genesis specification:
- PreSyncing — Waiting for enough trusted big ledger peers (BLPs). The Historical Availability Assumption (HAA) requires a minimum number of active BLPs before sync begins.
- Syncing — Active block download with density-based peer evaluation. The GSM monitors chain density across peers and can disconnect peers with insufficient chain density (GDD).
- CaughtUp — Normal Praos operation. The node is at or near the chain tip and participates in standard consensus.
Enabling Genesis Mode
Genesis mode is opt-in via the `--consensus-mode genesis` CLI flag:

```shell
dugite-node run \
  --consensus-mode genesis \
  --config config/preview-config.json \
  ...
```
When not enabled (the default praos mode), the GSM immediately enters CaughtUp and all Genesis constraints are disabled. This is the recommended mode for nodes that sync from Mithril snapshots.
State Transitions
```mermaid
stateDiagram-v2
    [*] --> PreSyncing: genesis enabled, no marker
    [*] --> CaughtUp: marker file exists
    PreSyncing --> Syncing: HAA satisfied (enough BLPs)
    Syncing --> CaughtUp: all peers idle + tip fresh
    CaughtUp --> PreSyncing: tip becomes stale
```
A caught_up.marker file is written to the database directory when the node reaches CaughtUp, enabling fast restart without re-evaluating the Genesis bootstrap.
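The GSM transitions above can be sketched with an enum and two functions. The predicate parameters (`haa_ok`, `peers_idle_and_tip_fresh`, `tip_stale`) and `marker_exists` are placeholders for the real checks against BLP counts, peer idleness, and the `caught_up.marker` file.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum SyncState { PreSyncing, Syncing, CaughtUp }

// Startup: praos mode or an existing marker file short-circuits to CaughtUp.
fn initial_state(genesis_enabled: bool, marker_exists: bool) -> SyncState {
    if !genesis_enabled || marker_exists {
        SyncState::CaughtUp
    } else {
        SyncState::PreSyncing
    }
}

// One evaluation step: apply the transition edges from the diagram above.
fn gsm_step(s: SyncState, haa_ok: bool,
            peers_idle_and_tip_fresh: bool, tip_stale: bool) -> SyncState {
    match s {
        SyncState::PreSyncing if haa_ok => SyncState::Syncing,
        SyncState::Syncing if peers_idle_and_tip_fresh => SyncState::CaughtUp,
        SyncState::CaughtUp if tip_stale => SyncState::PreSyncing,
        other => other,
    }
}
```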
Features
- State tracking: PreSyncing/Syncing/CaughtUp with automatic transitions
- Big Ledger Peer identification: Pools in the top 90% of active stake are classified as BLPs
- Genesis Density Disconnector (GDD): Compares chain density across peers within the genesis window and disconnects peers with insufficient density
- Limit on Eagerness (LoE): Computes the maximum immutable tip slot based on candidate chain tips
- Peer snapshot loading: JSON-based peer snapshot for initial peer discovery
Recommended Deployment
The recommended deployment path uses Mithril snapshot import for fast sync with the default praos consensus mode:
```shell
# Import a Mithril snapshot first
dugite-node mithril-import --network-magic 2 --database-path ./db

# Then run in default praos mode
dugite-node run --config config/preview-config.json --database-path ./db ...
```
Protocol Parameters Reference
Cardano protocol parameters control the behavior of the network, including fees, block sizes, staking mechanics, and governance. These parameters can be queried from a running node and updated through governance actions.
Querying Parameters
```shell
dugite-cli query protocol-parameters \
  --socket-path ./node.sock \
  --out-file protocol-params.json
```
Fee Parameters
| Parameter | JSON Key | Description | Mainnet Default |
|---|---|---|---|
| Min fee coefficient | txFeePerByte / minFeeA | Fee per byte of transaction size | 44 |
| Min fee constant | txFeeFixed / minFeeB | Fixed fee component | 155381 |
| Min UTxO value per byte | utxoCostPerByte / adaPerUtxoByte | Minimum lovelace per byte of UTxO | 4310 |
The transaction fee formula is:
fee = txFeePerByte * tx_size_in_bytes + txFeeFixed
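As a worked example, the formula with the mainnet defaults from the table above:

```rust
// Mainnet defaults from the fee parameter table.
const TX_FEE_PER_BYTE: u64 = 44;   // txFeePerByte / minFeeA
const TX_FEE_FIXED: u64 = 155_381; // txFeeFixed / minFeeB

// Minimum fee in lovelace for a transaction of the given serialized size.
fn min_fee(tx_size_in_bytes: u64) -> u64 {
    TX_FEE_PER_BYTE * tx_size_in_bytes + TX_FEE_FIXED
}
```

A typical 300-byte transaction therefore pays 44 × 300 + 155381 = 168581 lovelace at minimum.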
Block Size Parameters
| Parameter | JSON Key | Description | Mainnet Default |
|---|---|---|---|
| Max block body size | maxBlockBodySize | Maximum block body size in bytes | 90112 |
| Max transaction size | maxTxSize | Maximum transaction size in bytes | 16384 |
| Max block header size | maxBlockHeaderSize | Maximum block header size in bytes | 1100 |
Staking Parameters
| Parameter | JSON Key | Description | Mainnet Default |
|---|---|---|---|
| Stake address deposit | stakeAddressDeposit / keyDeposit | Deposit for stake key registration (lovelace) | 2000000 |
| Pool deposit | stakePoolDeposit / poolDeposit | Deposit for pool registration (lovelace) | 500000000 |
| Pool retire max epoch | poolRetireMaxEpoch / eMax | Maximum future epochs for pool retirement | 18 |
| Pool target count | stakePoolTargetNum / nOpt | Target number of pools (k parameter) | 500 |
| Min pool cost | minPoolCost | Minimum fixed pool cost (lovelace) | 170000000 |
Monetary Policy
| Parameter | Description |
|---|---|
| Monetary expansion (rho) | Rate of new ADA creation from reserves per epoch |
| Treasury cut (tau) | Fraction of rewards directed to the treasury |
| Pledge influence (a0) | How pledge affects reward calculations |
Plutus Execution Parameters
| Parameter | JSON Key | Description | Mainnet Default |
|---|---|---|---|
| Max tx execution units | maxTxExecutionUnits | {memory, steps} per transaction | {14000000, 10000000000} |
| Max block execution units | maxBlockExecutionUnits | {memory, steps} per block | {62000000, 40000000000} |
| Max value size | maxValueSize | Maximum serialized value size in bytes | 5000 |
| Collateral percentage | collateralPercentage | Collateral % of total tx fee for Plutus txs | 150 |
| Max collateral inputs | maxCollateralInputs | Maximum collateral inputs per tx | 3 |
Governance Parameters (Conway)
| Parameter | JSON Key | Description | Mainnet Default |
|---|---|---|---|
| DRep deposit | drepDeposit | Deposit for DRep registration (lovelace) | 500000000 |
| Gov action deposit | govActionDeposit | Deposit for governance action submission (lovelace) | 100000000000 |
| Gov action lifetime | govActionLifetime | Governance action expiry (epochs) | 6 |
Voting Thresholds
Different governance action types require different voting thresholds from DReps, SPOs, and the Constitutional Committee:
| Action Type | DRep Threshold | SPO Threshold | CC Threshold |
|---|---|---|---|
| No Confidence | dvtMotionNoConfidence | pvtMotionNoConfidence | Required |
| Update Committee (normal) | dvtCommitteeNormal | pvtCommitteeNormal | N/A |
| Update Committee (no confidence) | dvtCommitteeNoConfidence | pvtCommitteeNoConfidence | N/A |
| New Constitution | dvtUpdateToConstitution | N/A | Required |
| Hard Fork Initiation | dvtHardForkInitiation | pvtHardForkInitiation | Required |
| Protocol Parameter Update (network) | dvtPPNetworkGroup | N/A | Required |
| Protocol Parameter Update (economic) | dvtPPEconomicGroup | pvtPPEconomicGroup | Required |
| Protocol Parameter Update (technical) | dvtPPTechnicalGroup | N/A | Required |
| Protocol Parameter Update (governance) | dvtPPGovGroup | N/A | Required |
| Treasury Withdrawal | dvtTreasuryWithdrawal | N/A | Required |
CBOR Field Numbers
When encoding protocol parameter updates in governance actions, each parameter maps to a CBOR field number:
| CBOR Key | Parameter |
|---|---|
| 0 | txFeePerByte / minFeeA |
| 1 | txFeeFixed / minFeeB |
| 2 | maxBlockBodySize |
| 3 | maxTxSize |
| 4 | maxBlockHeaderSize |
| 5 | stakeAddressDeposit / keyDeposit |
| 6 | stakePoolDeposit / poolDeposit |
| 7 | poolRetireMaxEpoch / eMax |
| 8 | stakePoolTargetNum / nOpt |
| 16 | minPoolCost |
| 17 | utxoCostPerByte / adaPerUtxoByte |
| 20 | maxTxExecutionUnits |
| 21 | maxBlockExecutionUnits |
| 22 | maxValueSize |
| 23 | collateralPercentage |
| 24 | maxCollateralInputs |
| 30 | drepDeposit |
| 31 | govActionDeposit |
| 32 | govActionLifetime |
Cardano Mini-Protocol Reference
This document is the definitive implementation reference for every Cardano
mini-protocol used in node-to-node (N2N) and node-to-client (N2C)
communication. It covers the complete state machine, exact CBOR wire format,
timing constraints, flow-control rules, and every protocol-error condition for
each protocol. The information is derived directly from the Haskell source in
the IntersectMBO/ouroboros-network repository.
Connection Model and Multiplexer
All mini-protocols share a single TCP connection per peer, multiplexed by the
network-mux layer using 8-byte SDU headers:
Bytes Field
----- -----
0-3   timestamp (u32, microseconds, used for RTT measurement)
4-5   mode bit (MSB: 0=initiator, 1=responder) + mini_protocol_num (low 15 bits)
6-7   payload_length (u16, max 65535)
Large messages are fragmented across multiple SDUs transparently. Handshake (protocol 0) runs on the raw socket before the mux is started.
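Parsing the 8-byte SDU header per the layout above can be sketched like this (struct and function names are illustrative):

```rust
// Decoded SDU header fields, per the byte layout above.
struct SduHeader {
    timestamp_us: u32, // RTT measurement timestamp
    responder: bool,   // direction bit (MSB of bytes 4-5)
    protocol: u16,     // mini-protocol number (low 15 bits)
    len: u16,          // payload length, max 65535
}

fn parse_sdu_header(b: [u8; 8]) -> SduHeader {
    let word = u16::from_be_bytes([b[4], b[5]]);
    SduHeader {
        timestamp_us: u32::from_be_bytes([b[0], b[1], b[2], b[3]]),
        responder: word & 0x8000 != 0,
        protocol: word & 0x7fff,
        len: u16::from_be_bytes([b[6], b[7]]),
    }
}
```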
Key invariant: if any single mini-protocol thread throws an exception, the entire mux — and therefore the entire TCP connection — is torn down. Protocol errors are fatal to the connection, not just to the affected mini-protocol.
Sources:
- ouroboros-network/network-mux/src/Network/Mux/Types.hs
- ouroboros-network/network-mux/src/Network/Mux/Egress.hs
Shared Encoding Primitives
These types are used identically across all protocols.
Point
A Point identifies a position on the chain by slot and header hash.
; CBOR encoding (Haskell: encodePoint / decodePoint)
point = [] ; Origin — empty definite-length list
/ [slot_no, header_hash] ; At(slot, hash) — definite-length list of 2
slot_no = uint ; word64
header_hash = bstr ; 32 bytes (Blake2b-256 of header)
Source: ouroboros-network/ouroboros-network/api/lib/Ouroboros/Network/Block.hs
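For illustration, the point encoding can be produced by hand without a CBOR library; this sketch emits only the byte patterns the CDDL above requires (`encode_uint` covers the major-type-0 cases needed for slot numbers).

```rust
// Minimal CBOR unsigned-integer encoder (major type 0).
fn encode_uint(n: u64, out: &mut Vec<u8>) {
    match n {
        0..=23 => out.push(n as u8),
        24..=0xff => { out.push(0x18); out.push(n as u8); }
        0x100..=0xffff => { out.push(0x19); out.extend((n as u16).to_be_bytes()); }
        0x1_0000..=0xffff_ffff => { out.push(0x1a); out.extend((n as u32).to_be_bytes()); }
        _ => { out.push(0x1b); out.extend(n.to_be_bytes()); }
    }
}

// Origin is None; At(slot, hash) is Some((slot, hash)).
fn encode_point(point: Option<(u64, [u8; 32])>) -> Vec<u8> {
    let mut out = Vec::new();
    match point {
        None => out.push(0x80), // empty definite-length array
        Some((slot, hash)) => {
            out.push(0x82);            // definite-length array(2)
            encode_uint(slot, &mut out);
            out.push(0x58);            // bstr, 1-byte length follows
            out.push(32);              // 32-byte header hash
            out.extend_from_slice(&hash);
        }
    }
    out
}
```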
Tip
A Tip is the chain tip as seen by the server. It is a (Point, BlockNo) pair.
; N2N ChainSync / N2C LocalChainSync
tip = [point, block_no] ; point as defined above: [] (Origin) or [slot_no, header_hash]
                        ; TipGenesis encodes as [[], 0]
block_no = uint         ; word64
Source: ouroboros-network/ouroboros-network/api/lib/Ouroboros/Network/Block.hs
(encodeTip / decodeTip)
Byte and Time Limit Constants
These constants appear in state-machine timeout and size-limit tables throughout this document.
| Constant | Value | Source in Codec/Limits.hs |
|---|---|---|
| `smallByteLimit` | 65535 bytes | Protocol/Limits.hs:smallByteLimit |
| `largeByteLimit` | 2 500 000 bytes | Protocol/Limits.hs:largeByteLimit |
| `shortWait` | 10 seconds | Protocol/Limits.hs:shortWait |
| `longWait` | 60 seconds | Protocol/Limits.hs:longWait |
| `waitForever` | no timeout | Protocol/Limits.hs:waitForever (= Nothing) |
Source: ouroboros-network/ouroboros-network/api/lib/Ouroboros/Network/Protocol/Limits.hs
N2N Mini-Protocol IDs
| Protocol | ID |
|---|---|
| Handshake | 0 |
| DeltaQ | 1 (reserved, never used) |
| ChainSync | 2 |
| BlockFetch | 3 |
| TxSubmission2 | 4 |
| KeepAlive | 8 |
| PeerSharing | 10 |
| Peras Cert | 16 (future) |
| Peras Vote | 17 (future) |
N2C Mini-Protocol IDs
| Protocol | ID |
|---|---|
| Handshake | 0 |
| LocalChainSync | 5 |
| LocalTxSubmission | 6 |
| LocalStateQuery | 7 |
| LocalTxMonitor | 9 |
Protocol Temperatures (N2N)
Protocol temperature determines when each N2N mini-protocol is started during
the peer lifecycle (cold → warm → hot).
| Temperature | Protocols | Started when |
|---|---|---|
| Established | KeepAlive (8), PeerSharing (10) | On cold→warm promotion |
| Warm | (none currently) | — |
| Hot | ChainSync (2), BlockFetch (3), TxSubmission2 (4) | On warm→hot promotion |
Hot protocols use StartOnDemand for the responder side (they wait for the
first inbound byte). Initiator sides are started eagerly by startProtocols.
Source: ouroboros-network/cardano-diffusion/lib/Cardano/Network/Diffusion/Peer/
N2N Protocol 0: Handshake
Identity
- Protocol ID: 0 (runs on raw socket bearer before mux starts)
- Direction: Initiator sends `MsgProposeVersions`, responder replies
- Versions: V14 (Plomin HF, mandatory since 2025-01-29), V15 (SRV DNS)
State Machine
StPropose (ClientAgency) -- initiator has agency
│
│ MsgProposeVersions
▼
StConfirm (ServerAgency) -- server chooses version
│
├─── MsgAcceptVersion ──→ StDone
├─── MsgRefuse ──→ StDone
└─── MsgQueryReply ──→ StDone
| State | Agency | Meaning |
|---|---|---|
| StPropose | Client | Initiator must send its version list |
| StConfirm | Server | Server must accept, refuse, or query |
| StDone | Nobody | Terminal |
Terminal state: StDone. For N2N, a successful handshake (MsgAcceptVersion) is followed by starting the mux on the same connection; after MsgRefuse or MsgQueryReply the connection is closed.
Wire Format
Source: ouroboros-network/ouroboros-network/framework/lib/Ouroboros/Network/Protocol/Handshake/Codec.hs
and cardano-diffusion/protocols/cddl/specs/handshake-node-to-node-v14.cddl
; Every handshake message is a definite-length CBOR array.
MsgProposeVersions = [0, versionTable]
MsgAcceptVersion = [1, versionNumber, versionData]
MsgRefuse = [2, refuseReason]
MsgQueryReply = [3, versionTable]
; versionTable is a CBOR definite-length MAP (not an array).
; Keys are encoded in ascending order.
versionTable = { * versionNumber => versionData }
; N2N version numbers (V14=14, V15=15, V16=16, ...)
; Note: N2N does NOT set bit-15. Only N2C uses bit-15.
versionNumber = 14 / 15 / 16
; Version data for V14/V15: 4-element array
versionData_v14 = [networkMagic, initiatorOnly, peerSharing, query]
; Version data for V16+: 5-element array (adds perasSupport)
versionData_v16 = [networkMagic, initiatorOnly, peerSharing, query, perasSupport]
networkMagic = uint .size 4 ; word32 (mainnet=764824073, preview=2, preprod=1)
initiatorOnly = bool ; true=InitiatorOnly, false=InitiatorAndResponder
peerSharing = 0 / 1 ; 0=Disabled, 1=Enabled
query = bool
perasSupport = bool
refuseReason
= [0, [* versionNumber]] ; VersionMismatch
/ [1, versionNumber, tstr] ; HandshakeDecodeError
/ [2, versionNumber, tstr] ; Refused
Version Negotiation Rules
Source: cardano-diffusion/api/lib/Cardano/Network/NodeToNode/Version.hs
- The responder picks the highest version number that appears in both the initiator's and responder's version tables.
- If no common version: `MsgRefuse` with `VersionMismatch`.
- `networkMagic` must match exactly; on mismatch the responder sends `MsgRefuse` with `Refused`.
- `initiatorOnly`: `DiffusionMode = min(local, remote)`; the more restrictive side wins (`InitiatorOnly` if either side is).
- `peerSharing = local <> remote` (Semigroup): both sides must be Enabled for the result to be Enabled; any Disabled yields Disabled. `InitiatorOnly` nodes automatically have peer sharing Disabled.
- `query = local || remote` (logical OR).
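The "highest common version" rule can be sketched over sorted version tables; this shows only number selection, omitting the version-data checks (network magic etc.) listed above.

```rust
use std::collections::BTreeMap;

// Highest version number present in both tables, or None for VersionMismatch.
// BTreeMap keys iterate in sorted order, so .rev() starts at the highest.
fn negotiate(local: &BTreeMap<u16, ()>, remote: &BTreeMap<u16, ()>) -> Option<u16> {
    local.keys().rev().find(|v| remote.contains_key(*v)).copied()
}
```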
MsgQueryReply Semantics
When the initiator sends MsgProposeVersions with query=true, the responder
must reply with MsgQueryReply (a copy of its own version table) and then
close the connection. This is used by cardano-cli for version probing. The mux
never starts in this case.
Timeout
Handshake SDU read/write: 10 seconds per SDU. There is no per-state timeout beyond this; the handshake exchange must complete within one SDU read cycle on each side.
N2N Protocol 2: ChainSync
Identity
- Protocol ID: 2
- Temperature: Hot (started on warm→hot promotion)
- Direction: N2N ChainSync streams block headers only (not full blocks). Full blocks are fetched via BlockFetch.
- Versions: All N2N versions (V7+)
State Machine
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Type.hs
StIdle (ClientAgency) -- client requests next update or intersect
│
├─── MsgRequestNext ──→ StNext(StCanAwait)
├─── MsgFindIntersect ──→ StIntersect
└─── MsgDone ──→ StDone
StNext(StCanAwait) (ServerAgency) -- server can immediately reply or defer
│
├─── MsgAwaitReply ──→ StNext(StMustReply)
├─── MsgRollForward ──→ StIdle
└─── MsgRollBackward ──→ StIdle
StNext(StMustReply) (ServerAgency) -- server MUST reply (already sent await)
│
├─── MsgRollForward ──→ StIdle
└─── MsgRollBackward ──→ StIdle
StIntersect (ServerAgency) -- server searching for intersection
│
├─── MsgIntersectFound ──→ StIdle
└─── MsgIntersectNotFound ─→ StIdle
StDone (NobodyAgency)
Critical invariant: MsgAwaitReply is only valid in state StNext(StCanAwait).
The server transitions to StNext(StMustReply) after sending it. Sending
MsgAwaitReply when the client sent a non-blocking variant (Pipeline rather
than Request) or when the server has already sent MsgAwaitReply this round
is a protocol error (ProtocolErrorRequestNonBlocking). The typed-protocol
framework enforces this at compile time; a Rust implementation must enforce it
at runtime by tracking which sub-state of StNext is current.
Wire Format
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Codec.hs
and cardano-diffusion/protocols/cddl/specs/chain-sync.cddl
MsgRequestNext = [0]
MsgAwaitReply = [1]
MsgRollForward = [2, header, tip]
MsgRollBackward = [3, point, tip]
MsgFindIntersect = [4, points]
MsgIntersectFound = [5, point, tip]
MsgIntersectNotFound = [6, tip]
MsgDone = [7]
; points is a DEFINITE-length array (not indefinite)
points = [* point]
N2N header encoding in MsgRollForward: For the CardanoBlock HFC block
type, the header is wrapped as:
header = [era_index, serialised_header_bytes]
where era_index is 0=Byron, 1=Shelley, ..., 6=Conway, 7=Dijkstra (see
TxSubmission2 section for full table), and serialised_header_bytes is
tag(24)(bstr(cbor_encoded_header)) — CBOR-in-CBOR wrapping via
wrapCBORinCBOR.
Source: ouroboros-consensus/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Serialisation.hs
Pipelining
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/PipelineDecision.hs
ChainSync uses the pipelineDecisionLowHighMark strategy with default marks
lowMark=200, highMark=300 (Dugite uses configurable depth via
DUGITE_PIPELINE_DEPTH, default 300).
pipelineDecisionLowHighMark :: Word16 -> Word16 -> MkPipelineDecision
Decision logic (given n outstanding requests, clientTip, serverTip):
- `n=0, clientTip == serverTip` → `Request` (non-pipelined, triggers await semantics)
- `n=0, clientTip < serverTip` → `Pipeline`
- `n>0, clientTip + n >= serverTip` → `Collect` (we're caught up, stop pipelining)
- `n >= highMark` → `Collect` (high-water: drain before adding more)
- `n < lowMark` → `CollectOrPipeline` (can collect or pipeline)
- `n >= lowMark` → `Collect` (above low mark in high state)
When n=0 and clientTip == serverTip: the client sends a non-pipelined
Request, the server is at its tip and sends MsgAwaitReply (valid because
the client sent a blocking request). This is the "at tip" steady state.
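A sketch of the decision function, under the simplifying assumption that the "high state" (draining after hitting the high-water mark) is tracked as a boolean flag passed in by the caller; names are illustrative, not the Haskell API.

```rust
#[derive(Debug, PartialEq)]
enum Decision { Request, Pipeline, Collect, CollectOrPipeline }

// n: outstanding pipelined requests; tips are block numbers.
fn decide(n: u16, client_tip: u64, server_tip: u64,
          low: u16, high: u16, high_state: bool) -> Decision {
    if n == 0 {
        // At the tip: blocking Request (server may MsgAwaitReply).
        if client_tip >= server_tip { Decision::Request } else { Decision::Pipeline }
    } else if client_tip + n as u64 >= server_tip || n >= high {
        // Caught up with outstanding requests, or hit the high-water mark.
        Decision::Collect
    } else if high_state && n >= low {
        // Draining: keep collecting until we fall below the low mark.
        Decision::Collect
    } else {
        Decision::CollectOrPipeline
    }
}
```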
Timing
Source: ouroboros-network/cardano-diffusion/protocols/lib/Cardano/Network/Protocol/ChainSync/Codec/TimeLimits.hs
| State | Trusted peer | Untrusted peer |
|---|---|---|
| StIdle | 3373 s | 3373 s (configurable via ChainSyncIdleTimeout) |
| StNext(StCanAwait) | 10 s (shortWait) | 10 s |
| StNext(StMustReply) | waitForever | uniform random 601–911 s |
| StIntersect | 10 s | 10 s |
The random range for untrusted StMustReply corresponds to streak-of-empty-slots
probabilities between 99.9% and 99.9999% at f=0.05.
Default ChainSyncIdleTimeout = 3373 seconds.
Source: cardano-diffusion/lib/Cardano/Network/Diffusion/Configuration.hs:defaultChainSyncIdleTimeout
Ingress Queue Limit
highMark × 1400 bytes × 1.1 safety factor
With highMark=300: approximately 462 000 bytes.
N2N Protocol 3: BlockFetch
Identity
- Protocol ID: 3
- Temperature: Hot
- Purpose: Bulk download of full block bodies, driven by the BlockFetch decision logic after ChainSync supplies candidate chain headers.
State Machine
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Type.hs
BFIdle (ClientAgency) -- client decides what to fetch
│
├─── MsgRequestRange ──→ BFBusy
└─── MsgClientDone ──→ BFDone
BFBusy (ServerAgency) -- server preparing batch
│
├─── MsgStartBatch ──→ BFStreaming
└─── MsgNoBlocks ──→ BFIdle
BFStreaming (ServerAgency) -- server streaming blocks
│
├─── MsgBlock ──→ BFStreaming (self-loop, one block per message)
└─── MsgBatchDone ──→ BFIdle
BFDone (NobodyAgency)
| State | Agency |
|---|---|
| BFIdle | Client |
| BFBusy | Server |
| BFStreaming | Server |
| BFDone | Nobody |
Wire Format
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Codec.hs
and cardano-diffusion/protocols/cddl/specs/block-fetch.cddl
MsgRequestRange = [0, lower_point, upper_point]
MsgClientDone = [1]
MsgStartBatch = [2]
MsgNoBlocks = [3]
MsgBlock = [4, block]
MsgBatchDone = [5]
MsgRequestRange: Both lower_point and upper_point are inclusive
(the range spans from lower to upper, both included). Each point uses the
standard point encoding ([] for Origin, [slot, hash] for specific).
Block encoding in MsgBlock: For CardanoBlock, the block is encoded as:
block = [era_index, tag(24)(bstr(cbor_encoded_block))]
The full block (including header and body) is CBOR-serialized, then wrapped in
tag(24)(bytes(cbor_bytes)) (CBOR-in-CBOR), then placed in a 2-element array
with the HFC era index.
Source: ouroboros-consensus/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Serialisation.hs
Timing
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Codec.hs:timeLimitsBlockFetch
| State | Timeout |
|---|---|
| BFIdle | waitForever |
| BFBusy | 60 s (longWait) |
| BFStreaming | 60 s (longWait) |
Byte Limits
| State | Limit |
|---|---|
| BFIdle | 65535 bytes (smallByteLimit) |
| BFBusy | 65535 bytes (smallByteLimit) |
| BFStreaming | 2 500 000 bytes (largeByteLimit) |
BlockFetch Decision Loop
The blockFetchLogic thread runs continuously, waking every 10 ms (Praos) or
40 ms (Genesis). It reads candidate chains from ChainSync via STM, computes
which block ranges need to be fetched, and issues MsgRequestRange messages.
| Parameter | Default | Source |
|---|---|---|
| `maxInFlightReqsPerPeer` | 100 | blockFetchPipeliningMax |
| `maxConcurrencyBulkSync` | 1 peer | bfcMaxConcurrencyBulkSync |
| `maxConcurrencyDeadline` | 1 peer | bfcMaxConcurrencyDeadline |
| Decision loop interval (Praos) | 10 ms | bfcDecisionLoopIntervalPraos |
| Decision loop interval (Genesis) | 40 ms | bfcDecisionLoopIntervalGenesis |
Source: cardano-diffusion/lib/Cardano/Network/Diffusion/Configuration.hs:defaultBlockFetchConfiguration
Ingress Queue Limit
max(10 × 2 097 154, 100 × 90 112) × 1.1 ≈ 22 MB.
N2N Protocol 4: TxSubmission2
Identity
- Protocol ID: 4
- Temperature: Hot
- Direction: Inverted agency — the server (inbound/receiver) has agency first. The server requests transactions; the client replies with them. This is the opposite of most protocols.
- Versions: All N2N versions. V2 logic (multi-peer decision loop) is enabled server-side when `TxSubmissionLogicV2` is configured; V1 is the current default in cardano-node.
State Machine
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Type.hs
StInit (ClientAgency) -- client must send MsgInit before anything else
│
│ MsgInit
▼
StIdle (ServerAgency) -- server has agency; requests txids or terminates
│
├─── MsgRequestTxIds(blocking=true) ──→ StTxIds(StBlocking)
├─── MsgRequestTxIds(blocking=false) ──→ StTxIds(StNonBlocking)
├─── MsgRequestTxs ──→ StTxs
└─── MsgDone ──→ StDone
StTxIds(StBlocking) (ClientAgency) -- client MUST reply, no timeout
│
└─── MsgReplyTxIds(NonEmpty list) ──→ StIdle
(BlockingReply: list must be non-empty)
StTxIds(StNonBlocking) (ClientAgency) -- client must reply within shortWait
│
└─── MsgReplyTxIds(possibly empty) ──→ StIdle
StTxs (ClientAgency) -- client must reply with requested tx bodies
│
└─── MsgReplyTxs(tx list) ──→ StIdle
StDone (NobodyAgency)
MsgDone constraint: MsgDone can only be sent from StIdle (server side).
It is the server's prerogative to terminate, not the client's.
Wire Format
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Codec.hs:encodeTxSubmission2
and cardano-diffusion/protocols/cddl/specs/tx-submission2.cddl
MsgInit = [6]
MsgRequestTxIds = [0, blocking:bool, ack:word16, req:word16]
; blocking=true → StTxIds(StBlocking)
; blocking=false → StTxIds(StNonBlocking)
MsgReplyTxIds = [1, [_ *[txid, size:word32] ]]
; INDEFINITE-length outer list (encodeListLenIndef)
; Each inner entry is a DEFINITE-length array(2)
MsgRequestTxs = [2, [_ *txid ]]
; INDEFINITE-length list
MsgReplyTxs = [3, [_ *tx ]]
; INDEFINITE-length list
MsgDone = [4]
IMPORTANT: MsgReplyTxIds, MsgRequestTxs, and MsgReplyTxs all use
indefinite-length CBOR arrays (encoded with encodeListLenIndef and
terminated with encodeBreak). The codec explicitly requires this; using
definite-length arrays is a decoding error.
HFC era-tag wrapping for txids and txs:
For the Cardano HFC instantiation, each txid and each tx is wrapped with
the era index before being placed into the list. The wrapping is done by
encodeNS in ouroboros-consensus:
; txid (GenTxId) encoding
txid = [era_index:uint8, bstr(32)]
; era_index: 0=Byron, 1=Shelley, 2=Allegra, 3=Mary, 4=Alonzo,
; 5=Babbage, 6=Conway, 7=Dijkstra
; payload: 32 raw bytes = Blake2b-256 hash of tx body (no CBOR tag)
; tx (GenTx) encoding
tx = [era_index:uint8, tag(24)(bstr(cbor_of_tx))]
; The transaction CBOR bytes are wrapped in CBOR tag 24 (embedded CBOR)
Example for Conway (era_index=6):
txid = [6, bstr(32_bytes_of_txhash)]
tx = [6, #6.24(bstr(cbor_bytes_of_transaction))]
Source: ouroboros-consensus/ouroboros-consensus-diffusion/src/.../Consensus/Network/NodeToNode.hs
and ouroboros-consensus/src/.../HardFork/Combinator/Serialisation/Common.hs:encodeNS
MsgReplyTxIds — Size Reporting
Each entry in MsgReplyTxIds carries a SizeInBytes (word32) alongside the
txid. This size must include the full HFC envelope overhead that the tx will
have in MsgReplyTxs. For Conway: 3 bytes overhead (1 byte array-of-2 header,
1 byte era_index word8, CBOR tag 24 header). Mismatches beyond the tolerance
threshold (const_MAX_TX_SIZE_DISCREPANCY = 10 bytes in V2 inbound) terminate
the connection.
Blocking vs Non-Blocking Rules
In blocking mode (MsgRequestTxIds(blocking=true)):
- `req_count` must be >= 1
- The `MsgReplyTxIds` reply must contain a non-empty list (`BlockingReply`)
- No timeout: the client MAY block indefinitely in STM waiting for new mempool entries
In non-blocking mode (MsgRequestTxIds(blocking=false)):
In non-blocking mode (MsgRequestTxIds(blocking=false)):
- At least one of `ack_count` or `req_count` must be non-zero
- The `MsgReplyTxIds` reply may be empty (`NonBlockingReply []`)
- Timeout: `shortWait` (10 seconds)
Acknowledgment semantics: ack_count tells the client how many previously
announced txids can now be removed from the outbound window. The client
maintains a FIFO of unacknowledgedTxIds. When the server sends
MsgRequestTxIds(ack=N, req=M), the client drops the first N entries from
the FIFO and adds up to M new txids from the mempool.
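The acknowledgment window described above can be sketched with a `VecDeque`; `TxId` is a placeholder type and the function name is illustrative.

```rust
use std::collections::VecDeque;

type TxId = u64; // placeholder; real txids are 32-byte hashes

// Client-side handling of MsgRequestTxIds(ack, req): drop `ack` acknowledged
// txids from the front of the outbound window, then announce up to `req`
// fresh txids drawn from the mempool. Returns the newly announced txids.
fn handle_request_tx_ids(
    unacked: &mut VecDeque<TxId>,
    mempool: &mut Vec<TxId>,
    ack: u16,
    req: u16,
) -> Vec<TxId> {
    for _ in 0..ack {
        unacked.pop_front(); // oldest announcements are acknowledged first
    }
    let take = mempool.len().min(req as usize);
    let new: Vec<TxId> = mempool.drain(..take).collect();
    unacked.extend(&new); // newly announced txids join the window
    new
}
```

The window invariant is that `unacked.len()` never exceeds `maxUnacknowledgedTxIds` (100); a server that requests beyond that is misbehaving.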
Timing
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Codec.hs:timeLimitsTxSubmission2
| State | Timeout |
|---|---|
| StInit | waitForever |
| StIdle | waitForever |
| StTxIds(StBlocking) | waitForever |
| StTxIds(StNonBlocking) | 10 s (shortWait) |
| StTxs | 10 s (shortWait) |
V1 Server Constants (current default)
| Parameter | Value |
|---|---|
| `maxTxIdsToRequest` | 3 |
| `maxTxToRequest` | 2 |
| `maxUnacknowledgedTxIds` | 100 |
| `txSubmissionInitDelay` | 60 s |
The 60-second init delay is applied via threadDelay before the V1 server makes
its first MsgRequestTxIds. This intentionally avoids requesting transactions
during initial chain sync.
V2 Server Constants (experimental)
| Parameter | Value |
|---|---|
| `maxNumTxIdsToRequest` | 12 |
| `maxUnacknowledgedTxIds` | 100 |
| `txsSizeInflightPerPeer` | 6 × 65540 bytes |
| `txInflightMultiplicity` | 2 |
| Decision loop delay | 5 ms |
Source: ouroboros-network/ouroboros-network/lib/Ouroboros/Network/TxSubmission/Inbound/V2/
MsgInit Requirement
MsgInit (tag=6, one-element array [6]) must be the very first message
sent by the client (outbound side) after the mux connection is established for
the TxSubmission2 protocol. The server waits for MsgInit in StInit before
transitioning to StIdle. Sending any other message first is a protocol error.
Ingress Queue Limit
maxUnacknowledgedTxIds × (44 + 65536) × 1.1
With maxUnacknowledgedTxIds=100: approximately 6 666 400 bytes.
N2N Protocol 8: KeepAlive
Identity
- Protocol ID: 8
- Temperature: Established (started on cold→warm, runs for entire connection lifetime)
- Purpose: Detects connection failure and measures round-trip time for GSV (Good-Spread-Variable) calculations used in BlockFetch prioritization.
State Machine
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Type.hs
StClient (ClientAgency) -- client sends keep-alive request
│
├─── MsgKeepAlive(cookie) ──→ StServer
└─── MsgDone ──→ StDone
StServer (ServerAgency) -- server must respond with same cookie
│
└─── MsgKeepAliveResponse(cookie) ──→ StClient
StDone (NobodyAgency)
Wire Format
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Codec.hs:codecKeepAlive_v2
MsgKeepAlive = [0, cookie:word16]
MsgKeepAliveResponse = [1, cookie:word16]
MsgDone = [2]
Cookie matching: The server must echo back the exact cookie value sent by
the client. A mismatch raises KeepAliveCookieMissmatch (note: the Haskell
source has the typo "Missmatch" with double-s), which terminates the connection.
Timing
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Codec.hs:timeLimitsKeepAlive
| State | Timeout |
|---|---|
StClient | 97 seconds |
StServer | 60 seconds |
The asymmetry is intentional: the client side (97 s) is how long the client
waits before sending the next keep-alive; the server side (60 s) is how long
the server has to respond. The comment in source notes that StServer timeout
"should be 10s" (issue #2505) but is currently 60 s.
Byte Limits
Both states: smallByteLimit (65535 bytes).
Protocol Error Condition
KeepAliveCookieMissmatch oldCookie receivedCookie — thrown when
MsgKeepAliveResponse cookie does not match the outstanding request cookie.
This terminates the connection.
N2N Protocol 10: PeerSharing
Identity
- Protocol ID: 10
- Temperature: Established (started on cold→warm)
- Purpose: Exchange of peer addresses to assist in peer discovery. Only
active when both sides negotiated
peerSharing=1in Handshake.
State Machine
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/PeerSharing/Type.hs
StIdle (ClientAgency) -- client requests peer addresses or terminates
│
├─── MsgShareRequest(amount) ──→ StBusy
└─── MsgDone ──→ StDone
StBusy (ServerAgency) -- server must reply with peer list
│
└─── MsgSharePeers(addrs) ──→ StIdle
StDone (NobodyAgency)
Wire Format
Source: ouroboros-network/cardano-diffusion/protocols/lib/Cardano/Network/Protocol/PeerSharing/Codec.hs
and cardano-diffusion/protocols/cddl/specs/peer-sharing-v14.cddl
MsgShareRequest = [0, amount:word8]
MsgSharePeers = [1, [* peerAddress]]
MsgDone = [2]
; Peer address encoding (SockAddr)
peerAddress = [0, ipv4:word32, port:word16]
; IPv4: single u32 in network byte order, then port as word16
/ [1, word32, word32, word32, word32, port:word16]
; IPv6: four u32s (network byte order), then port as word16
Protocol error condition: If the server replies with more addresses than the
requested amount, it is a protocol error. The client cannot request more than
255 peers (word8 max).
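The IPv4 variant of the address encoding can be sketched in a few lines. This is an illustrative hand-rolled encoder (the `encode_uint` and `encode_ipv4_peer` helpers are hypothetical), assuming the word32 is the address's numeric value encoded as a standard CBOR uint:

```rust
use std::net::Ipv4Addr;

// Generic definite-length CBOR uint encoder.
fn encode_uint(v: u64, out: &mut Vec<u8>) {
    if v < 24 { out.push(v as u8); }
    else if v < 0x100 { out.extend([0x18, v as u8]); }
    else if v < 0x1_0000 { out.push(0x19); out.extend((v as u16).to_be_bytes()); }
    else if v < 0x1_0000_0000 { out.push(0x1a); out.extend((v as u32).to_be_bytes()); }
    else { out.push(0x1b); out.extend(v.to_be_bytes()); }
}

/// peerAddress (IPv4) = [0, ipv4:word32, port:word16]
fn encode_ipv4_peer(addr: Ipv4Addr, port: u16) -> Vec<u8> {
    let mut out = vec![0x83, 0x00];                // array(3), tag 0 = IPv4
    encode_uint(u32::from(addr) as u64, &mut out); // address as a single u32
    encode_uint(port as u64, &mut out);            // port:word16
    out
}

fn main() {
    let wire = encode_ipv4_peer(Ipv4Addr::new(192, 168, 0, 1), 3001);
    // 192.168.0.1 = 0xC0A80001; 3001 = 0x0BB9
    assert_eq!(wire, [0x83, 0x00, 0x1a, 0xc0, 0xa8, 0x00, 0x01, 0x19, 0x0b, 0xb9]);
}
```

The IPv6 variant would emit a 6-element array (`0x86`, tag 1, four u32 words, then the port) in the same style.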
Timing
| State | Timeout |
|---|---|
| StIdle | waitForever |
| StBusy | 60 s (longWait) |
Server Address Selection Policy
The server only shares addresses for peers that satisfy all of:
- knownPeerAdvertise = DoAdvertisePeer
- knownSuccessfulConnection = True
- knownPeerFailCount = 0
Addresses are randomized using a hash with a salt that rotates every 823 seconds to prevent fingerprinting.
Source: ouroboros-network/ouroboros-network/api/lib/Ouroboros/Network/PeerSelection/PeerSharing/Codec.hs
and ouroboros-network/ouroboros-network/lib/Ouroboros/Network/PeerSharing.hs
Key Policy Constants
| Constant | Value |
|---|---|
| policyMaxInProgressPeerShareReqs | 2 |
| policyPeerShareRetryTime | 900 s |
| policyPeerShareBatchWaitTime | 3 s |
| policyPeerShareOverallTimeout | 10 s |
| policyPeerShareActivationDelay | 300 s |
| ps_POLICY_PEER_SHARE_STICKY_TIME | 823 s (salt rotation) |
| ps_POLICY_PEER_SHARE_MAX_PEERS | 10 |
Source: ouroboros-network/ouroboros-network/lib/Ouroboros/Network/Diffusion/Policies.hs
N2C Protocol 0: Handshake (Node-to-Client)
Identity
- Protocol ID: 0 (same as N2N, runs on raw socket before mux)
- Direction: Same as N2N: client proposes, server accepts or refuses
- Versions: V16 (=32784) through V23 (=32791)
Wire Format
Source: CDDL: cardano-diffusion/protocols/cddl/specs/handshake-node-to-client.cddl
Codec: same codecHandshake function as N2N, parameterized on version number type.
; Messages are identical in structure to N2N handshake
MsgProposeVersions = [0, versionTable]
MsgAcceptVersion = [1, versionNumber, nodeToClientVersionData]
MsgRefuse = [2, refuseReason]
MsgQueryReply = [3, versionTable]
; N2C version numbers have bit 15 set to distinguish from N2N
; V16=32784, V17=32785, V18=32786, V19=32787,
; V20=32788, V21=32789, V22=32790, V23=32791
versionNumber = 32784 / 32785 / 32786 / 32787 / 32788 / 32789 / 32790 / 32791
; Encoding: versionNumber_wire = logical_version | 0x8000
; Decoding: logical_version = wire_value & 0x7FFF (after verifying bit 15 is set)
; Version data (V16+): 2-element array
nodeToClientVersionData = [networkMagic:uint, query:bool]
The versionTable in MsgProposeVersions is a definite-length CBOR map
with entries sorted in ascending key order.
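The bit-15 convention is easy to get wrong in both directions; here is a minimal sketch (helper names are illustrative, not from any real codebase):

```rust
/// Wire encoding: the logical N2C version with bit 15 set.
fn n2c_wire_version(logical: u16) -> u16 {
    logical | 0x8000
}

/// Decoding: verify bit 15 is set, then mask it off.
fn n2c_logical_version(wire: u16) -> Option<u16> {
    if wire & 0x8000 != 0 { Some(wire & 0x7FFF) } else { None }
}

fn main() {
    assert_eq!(n2c_wire_version(16), 32784);          // V16
    assert_eq!(n2c_wire_version(23), 32791);          // V23
    assert_eq!(n2c_logical_version(32788), Some(20)); // V20
    assert_eq!(n2c_logical_version(20), None);        // bit 15 unset: not N2C
}
```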
Version Features
| N2C Version | Wire Value | What Changed |
|---|---|---|
| V16 | 32784 | Conway era; ImmutableTip acquire; GetStakeDelegDeposits |
| V17 | 32785 | GetProposals, GetRatifyState |
| V18 | 32786 | GetFuturePParams |
| V19 | 32787 | GetBigLedgerPeerSnapshot |
| V20 | 32788 | QueryStakePoolDefaultVote; MsgGetMeasures in LocalTxMonitor |
| V21 | 32789 | New ProtVer codec for Shelley-Babbage; GetPoolDistr2, GetStakeDistribution2, GetMaxMajorProtVersion |
| V22 | 32790 | SRV records in GetBigLedgerPeerSnapshot |
| V23 | 32791 | GetDRepDelegations; LedgerPeerSnapshot includes block hash + NetworkMagic |
Source: cardano-diffusion/api/lib/Cardano/Network/NodeToClient/Version.hs
Version Negotiation
Same rules as N2N:
- Highest common version wins.
- networkMagic must match.
- query = local || remote (logical OR).
- No initiatorOnlyDiffusionMode or peerSharing fields in N2C version data.
N2C Protocol 5: LocalChainSync
Identity
- Protocol ID: 5
- Direction: N2C clients receive full serialized blocks (not just headers). This is the key difference from N2N ChainSync.
- Versions: All N2C versions
State Machine
Identical state machine to N2N ChainSync (same Type.hs). See that section for the complete state machine diagram.
Wire Format
Message tags are identical to N2N ChainSync (0–7). The key difference is the
content of MsgRollForward.
N2C MsgRollForward block encoding:
; N2C LocalChainSync block payload in MsgRollForward
block = [era_id:uint, tag(24)(bstr(cbor_of_full_block))]
The entire block (header + body) is CBOR-encoded, wrapped in CBOR tag(24)
(embedded CBOR), and then paired with the era index in a 2-element array.
Era indices: same as TxSubmission2 (0=Byron through 7=Dijkstra).
This is the same HFC wrapping used by BlockFetch's MsgBlock in N2N.
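A minimal, hedged decoder for this wrapping can look as follows. It is illustrative only: it handles just the definite-length header forms this payload uses, not general CBOR.

```rust
/// Unwrap [era_id, tag(24)(bytes-of-full-block)] into (era index, raw block).
/// Supports immediate-uint era indices and 1-/2-byte bstr length headers.
fn unwrap_hfc_block(buf: &[u8]) -> Option<(u8, &[u8])> {
    let mut i = 0;
    if *buf.get(i)? != 0x82 { return None; }         // array(2)
    i += 1;
    let era = *buf.get(i)?;                          // era index: immediate uint
    if era > 0x17 { return None; }
    i += 1;
    if buf.get(i)? != &0xd8 || buf.get(i + 1)? != &0x18 { return None; } // tag(24)
    i += 2;
    // byte-string header: immediate, 1-byte, or 2-byte length forms
    let (len, hdr) = match *buf.get(i)? {
        b @ 0x40..=0x57 => ((b - 0x40) as usize, 1),
        0x58 => (*buf.get(i + 1)? as usize, 2),
        0x59 => (u16::from_be_bytes([*buf.get(i + 1)?, *buf.get(i + 2)?]) as usize, 3),
        _ => return None,
    };
    i += hdr;
    Some((era, buf.get(i..i + len)?))
}

fn main() {
    // era 6 (Conway), a 3-byte stand-in for a real block body
    let wire = [0x82, 0x06, 0xd8, 0x18, 0x43, 0xaa, 0xbb, 0xcc];
    assert_eq!(unwrap_hfc_block(&wire), Some((6, &[0xaa, 0xbb, 0xcc][..])));
    // missing tag(24) is rejected
    assert_eq!(unwrap_hfc_block(&[0x82, 0x06, 0x43, 0xaa, 0xbb, 0xcc]), None);
}
```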
Differences from N2N ChainSync
| Aspect | N2N ChainSync | N2C LocalChainSync |
|---|---|---|
| Payload type | Block headers only | Full blocks |
| Purpose | Chain selection | Wallet / tool consumption |
| Pipelining | Yes (pipelineDecisionLowHighMark) | Typically none |
| Source of blocks | Server → client | Server → client |
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Codec.hs (same codec)
N2C Protocol 6: LocalTxSubmission
Identity
- Protocol ID: 6
- Direction: Client submits a single transaction; server accepts or rejects.
- No HFC era-tag wrapping: Unlike N2N TxSubmission2, N2C LocalTxSubmission sends raw transaction CBOR without any HFC era-index prefix.
- Versions: All N2C versions
State Machine
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxSubmission/Type.hs
StIdle (ClientAgency) -- client submits a transaction or terminates
│
├─── MsgSubmitTx(tx) ──→ StBusy
└─── MsgDone ──→ StDone
StBusy (ServerAgency) -- server validates and responds
│
├─── MsgAcceptTx ──→ StIdle
└─── MsgRejectTx ──→ StIdle
StDone (NobodyAgency)
Blocking semantics: After sending MsgSubmitTx, the client must wait
for MsgAcceptTx or MsgRejectTx before sending another transaction. This
protocol processes one transaction at a time. This is intentional: N2C is
only used by local trusted clients (wallets, CLI), so throughput is not a
concern.
Wire Format
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxSubmission/Codec.hs:encodeLocalTxSubmission
and cardano-diffusion/protocols/cddl/specs/local-tx-submission.cddl
MsgSubmitTx = [0, tx]
MsgAcceptTx = [1]
MsgRejectTx = [2, rejectReason]
MsgDone = [3]
Transaction encoding (tx): Raw transaction CBOR, exactly as produced
by toCBOR on the ledger's Tx type. No HFC wrapper, no era tag, no
tag(24). The server determines the era from the ledger state.
Rejection reason (rejectReason): The full ApplyTxError encoded via
the ledger's EncCBOR instance. For Conway, this is a nested structure of
ConwayLedgerPredFailure variants. The exact encoding is era-specific and
defined in cardano-ledger.
Source: cardano-ledger/eras/conway/impl/src/Cardano/Ledger/Conway/Rules/
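The absence of any wrapper makes the client-side message encoding trivial: the transaction's own CBOR term is spliced directly into the message array. A hedged sketch (`encode_submit_tx` is an illustrative name):

```rust
/// MsgSubmitTx = [0, tx], where `tx` is the ledger transaction's own CBOR
/// term spliced in directly: no bstr wrapper, no tag(24), no era index.
fn encode_submit_tx(raw_tx_cbor: &[u8]) -> Vec<u8> {
    let mut out = vec![0x82, 0x00]; // array(2), message tag 0
    out.extend_from_slice(raw_tx_cbor);
    out
}

fn main() {
    let tx = [0xa0]; // stand-in: an empty CBOR map in place of a real tx
    assert_eq!(encode_submit_tx(&tx), [0x82, 0x00, 0xa0]);
}
```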
N2C Protocol 7: LocalStateQuery
Identity
- Protocol ID: 7
- Direction: Client acquires a ledger state snapshot and submits queries; server responds with query results.
- Versions: All N2C versions. Some queries require specific minimum versions (see Shelley query tag table).
State Machine
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalStateQuery/Type.hs
StIdle (ClientAgency) -- client acquires a state or terminates
│
├─── MsgAcquire(target) ──→ StAcquiring
└─── MsgDone ──→ StDone
StAcquiring (ServerAgency) -- server acquiring the requested state
│
├─── MsgAcquired ──→ StAcquired
└─── MsgFailure(reason) ──→ StIdle
StAcquired (ClientAgency) -- client can query or release
│
├─── MsgQuery(query) ──→ StQuerying
├─── MsgRelease ──→ StIdle
└─── MsgReAcquire(target)──→ StAcquiring
StQuerying (ServerAgency) -- server computing query result
│
└─── MsgResult(result) ──→ StAcquired
StDone (NobodyAgency)
Re-acquire: MsgReAcquire transitions from StAcquired directly back to
StAcquiring, allowing the client to acquire a new state without going through
StIdle. This avoids a round trip.
Acquire Targets
Three targets exist for MsgAcquire and MsgReAcquire:
| Target | CBOR | Semantics | Min Version |
|---|---|---|---|
| SpecificPoint | [0, point] | Acquire the state at a specific slot/hash point | V8+ (any) |
| VolatileTip | [8] | Acquire the current tip of the volatile chain | V8+ |
| ImmutableTip | [10] | Acquire the tip of the immutable chain | N2C V16+ |
For MsgReAcquire, the tags differ: SpecificPoint=[6, point],
VolatileTip=[9], ImmutableTip=[11] (V16+).
VolatileTip and ImmutableTip cannot fail (they always succeed with
MsgAcquired). SpecificPoint can fail if the point is not in the volatile
chain window (yields MsgFailure).
Acquire Failure Codes
AcquireFailurePointTooOld = 0
AcquireFailurePointNotOnChain = 1
Wire Format
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalStateQuery/Codec.hs:codecLocalStateQuery
and cardano-diffusion/protocols/cddl/specs/local-state-query.cddl
; Acquire / Re-acquire
MsgAcquire(SpecificPoint pt) = [0, point]
MsgAcquire(VolatileTip) = [8]
MsgAcquire(ImmutableTip) = [10] ; V16+ only
MsgAcquired = [1]
MsgFailure(reason) = [2, failure_code:uint]
; 0=PointTooOld, 1=PointNotOnChain
MsgQuery(query) = [3, query_encoding]
MsgResult(result) = [4, result_encoding]
MsgRelease = [5]
MsgReAcquire(SpecificPoint pt)= [6, point]
MsgReAcquire(VolatileTip) = [9]
MsgReAcquire(ImmutableTip) = [11] ; V16+ only
MsgDone = [7]
Query Encoding (Three-Level HFC Wrapping)
Queries are wrapped in three layers. The outermost layer is the consensus-level
Query type (in Ouroboros.Consensus.Ledger.Query):
; Outermost consensus layer
query = [2, tag=0, wrapped_block_query] ; BlockQuery — delegates to HFC
/ [1, tag=1] ; GetSystemStart
/ [1, tag=2] ; GetChainBlockNo (V16+ / QueryVersion2)
/ [1, tag=3] ; GetChainPoint (V16+ / QueryVersion2)
/ [1, tag=4] ; DebugLedgerConfig (V20+ / QueryVersion3)
For BlockQuery (tag=0), the next layer is the HFC query:
; HFC (Hard Fork Combinator) layer
hfc_query = [2, tag=0, era_query] ; QueryIfCurrent — query current era
/ [3, tag=1, era_query, era_index] ; QueryAnytime
/ [2, tag=2, hf_specific] ; QueryHardFork
For QueryIfCurrent, the era index is determined by dispatch; there is no
explicit era tag in the message. The era_query is the era-level query:
; Era-level query (Shelley BlockQuery tags)
; These are 1-element or 2-element arrays with a numeric tag
era_query = [1, tag=0] ; GetLedgerTip
/ [1, tag=1] ; GetEpochNo
/ [2, tag=2, ..] ; GetNonMyopicMemberRewards
/ [1, tag=3] ; GetCurrentPParams
; ... (see full table below)
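As a worked example of the three layers, here is a hedged sketch that assembles the bytes for GetEpochNo, reading the `[n, tag=k, ...]` notation above as "definite-length CBOR array of n elements whose first element is k" (an assumption; verify against the CDDL):

```rust
/// Compose the three wrapping layers for GetEpochNo:
/// consensus BlockQuery -> HFC QueryIfCurrent -> Shelley era tag 1.
fn get_epoch_no_query() -> Vec<u8> {
    let era_query = vec![0x81, 0x01]; // [1]            GetEpochNo (era layer)
    let mut hfc = vec![0x82, 0x00];   // [0, era_query] QueryIfCurrent
    hfc.extend(&era_query);
    let mut query = vec![0x82, 0x00]; // [0, hfc_query] BlockQuery
    query.extend(&hfc);
    query
}

fn main() {
    assert_eq!(get_epoch_no_query(), [0x82, 0x00, 0x82, 0x00, 0x81, 0x01]);
}
```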
Shelley BlockQuery Tag Table
| Tag | Query Name | Min N2C Version |
|---|---|---|
| 0 | GetLedgerTip | V8 |
| 1 | GetEpochNo | V8 |
| 2 | GetNonMyopicMemberRewards | V8 |
| 3 | GetCurrentPParams | V8 |
| 4 | GetProposedPParamsUpdates | V8 |
| 5 | GetStakeDistribution | V8 (removed in V21) |
| 6 | GetUTxOByAddress | V8 |
| 7 | GetUTxOWhole | V8 |
| 8 | DebugEpochState | V8 |
| 9 | GetCBOR (wraps inner query in tag(24)) | V8 |
| 10 | GetFilteredDelegationsAndRewardAccounts | V8 |
| 11 | GetGenesisConfig | V8 |
| 12 | DebugNewEpochState | V8 |
| 13 | DebugChainDepState | V8 |
| 14 | GetRewardProvenance | V9 |
| 15 | GetUTxOByTxIn | V10 |
| 16 | GetStakePools | V11 |
| 17 | GetStakePoolParams | V11 |
| 18 | GetRewardInfoPools | V11 |
| 19 | GetPoolState | V11 |
| 20 | GetStakeSnapshots | V11 |
| 21 | GetPoolDistr | V11 (removed in V21) |
| 22 | GetStakeDelegDeposits | V16 |
| 23 | GetConstitution | V16 |
| 24 | GetGovState | V16 |
| 25 | GetDRepState | V16 |
| 26 | GetDRepStakeDistr | V16 |
| 27 | GetCommitteeMembersState | V16 |
| 28 | GetFilteredVoteDelegatees | V16 |
| 29 | GetAccountState | V16 |
| 30 | GetSPOStakeDistr | V16 |
| 31 | GetProposals | V17 |
| 32 | GetRatifyState | V17 |
| 33 | GetFuturePParams | V18 |
| 34 | GetLedgerPeerSnapshot | V19 |
| 35 | QueryStakePoolDefaultVote | V20 |
| 36 | GetPoolDistr2 | V21 |
| 37 | GetStakeDistribution2 | V21 |
| 38 | GetMaxMajorProtVersion | V21 |
| 39 | GetDRepDelegations | V23 |
Source: cardano-diffusion/api/lib/Cardano/Network/NodeToClient/Version.hs and
ouroboros-consensus/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Cardano.hs
MsgResult Wrapping
For QueryIfCurrent queries, the result is wrapped in an EitherMismatch
type to indicate whether the query was applied to the correct era:
; QueryIfCurrent result encoding
result = [result_value] ; Success: definite-length array(1) wrapping the value
/ [era_mismatch_info] ; Era mismatch: see EraEraMismatch encoding
A successful QueryIfCurrent result is wrapped in a 1-element definite-length
array. This is easy to miss and causes decoding failures if omitted.
QueryAnytime and QueryHardFork results are not wrapped in this extra
array.
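A hedged sketch of stripping the success wrapper (illustrative only; a real decoder must also distinguish and decode the era-mismatch alternative):

```rust
/// A successful QueryIfCurrent result arrives as [result_value]: strip the
/// 1-element definite-length array before handing the payload to the
/// era-specific result decoder.
fn unwrap_query_if_current(buf: &[u8]) -> Option<&[u8]> {
    match *buf.first()? {
        0x81 => Some(&buf[1..]), // definite array(1): success, payload follows
        _ => None,               // era mismatch or malformed
    }
}

fn main() {
    assert_eq!(unwrap_query_if_current(&[0x81, 0x05]), Some(&[0x05][..]));
    assert_eq!(unwrap_query_if_current(&[0x82, 0x00, 0x00]), None);
}
```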
N2C Protocol 9: LocalTxMonitor
Identity
- Protocol ID: 9
- Direction: Client monitors the node's mempool contents.
- Versions: All N2C versions. MsgGetMeasures / MsgReplyGetMeasures require N2C V20+.
State Machine
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxMonitor/Type.hs
StIdle (ClientAgency) -- client can acquire a snapshot or terminate
│
├─── MsgAcquire ──→ StAcquiring
└─── MsgDone ──→ StDone
StAcquiring (ServerAgency) -- server captures mempool snapshot
│
└─── MsgAcquired(slotNo) ──→ StAcquired
StAcquired (ClientAgency) -- client queries snapshot or releases
│
├─── MsgNextTx ──→ StBusy(NextTx)
├─── MsgHasTx(txid) ──→ StBusy(HasTx)
├─── MsgGetSizes ──→ StBusy(GetSizes)
├─── MsgGetMeasures ──→ StBusy(GetMeasures) ; V20+ only
├─── MsgAwaitAcquire ──→ StAcquiring ; refresh snapshot
└─── MsgRelease ──→ StIdle
StBusy(NextTx) (ServerAgency)
└─── MsgReplyNextTx(maybe tx) ──→ StAcquired
StBusy(HasTx) (ServerAgency)
└─── MsgReplyHasTx(bool) ──→ StAcquired
StBusy(GetSizes) (ServerAgency)
└─── MsgReplyGetSizes(sizes) ──→ StAcquired
StBusy(GetMeasures) (ServerAgency) ; V20+
└─── MsgReplyGetMeasures(m) ──→ StAcquired
StDone (NobodyAgency)
Snapshot semantics: After MsgAcquired, the client holds a fixed snapshot
of the mempool as of the slotNo returned. The snapshot does not change even
if new transactions arrive or are removed. MsgAwaitAcquire refreshes the
snapshot without going through StIdle.
Wire Format
Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxMonitor/Codec.hs
and cardano-diffusion/protocols/cddl/specs/local-tx-monitor.cddl
MsgDone = [0]
MsgAcquire = [1] ; same tag for initial acquire from StIdle
MsgAwaitAcquire = [1] ; same tag for re-acquire from StAcquired
MsgAcquired = [2, slotNo:word64]
MsgRelease = [3]
MsgNextTx = [5] ; note: tag 4 is unused
MsgReplyNextTx = [6] ; no tx: empty mempool
/ [6, tx] ; with tx: next transaction in snapshot
MsgHasTx = [7, txId]
MsgReplyHasTx = [8, bool]
MsgGetSizes = [9]
MsgReplyGetSizes = [10, [capacityInBytes:word32,
sizeInBytes:word32,
numberOfTxs:word32]]
MsgGetMeasures = [11] ; V20+ only
MsgReplyGetMeasures = [12, txCount:word32, {* tstr => [integer, integer]}]
; V20+ only
Tag 4 is intentionally unused. Tags jump from 3 (MsgRelease) to 5
(MsgNextTx).
MsgReplyNextTx: Uses the same tag (6) for both the no-tx and has-tx
cases, distinguished by array length: [6] (len=1) means no more txs;
[6, tx] (len=2) means a tx follows.
MsgAcquire and MsgAwaitAcquire use the same wire tag [1]. The
protocol state (StIdle vs StAcquired) determines which message is
being decoded. This is handled by the state token in the codec.
Transaction encoding: Same as LocalTxSubmission — raw CBOR with no HFC wrapping.
txId encoding: Raw 32-byte Blake2b-256 hash as CBOR bytes primitive.
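The length-discriminated MsgReplyNextTx decode can be sketched as follows (illustrative helper name; `None` inside the outer `Some` means the snapshot is exhausted):

```rust
/// MsgReplyNextTx uses one tag (6) for both cases; the array length
/// disambiguates: [6] = no more txs, [6, tx] = next transaction.
fn decode_reply_next_tx(buf: &[u8]) -> Option<Option<&[u8]>> {
    match (*buf.first()?, *buf.get(1)?) {
        (0x81, 0x06) => Some(None),            // [6]: snapshot exhausted
        (0x82, 0x06) => Some(Some(&buf[2..])), // [6, tx]: raw tx CBOR follows
        _ => None,                             // not a MsgReplyNextTx
    }
}

fn main() {
    assert_eq!(decode_reply_next_tx(&[0x81, 0x06]), Some(None));
    assert_eq!(decode_reply_next_tx(&[0x82, 0x06, 0xa0]), Some(Some(&[0xa0][..])));
    assert_eq!(decode_reply_next_tx(&[0x82, 0x07, 0xa0]), None);
}
```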
Initialization Sequence
N2N Connection Startup
After the TCP connection is established:
1. Handshake (protocol 0): Both sides send MsgProposeVersions simultaneously (simultaneous open). The one with the lower socket address keeps the outbound role; the other keeps the inbound. Each side processes the other's proposal, and the higher-address side sends MsgAcceptVersion or MsgRefuse. The connection proceeds only if both sides determine the same version.
2. Mux starts: After a successful handshake, the mux multiplexer and demultiplexer threads are started. Protocol threads are started based on peer temperature.
3. Cold→Warm: KeepAlive (8) and PeerSharing (10) initiator threads start eagerly.
4. Warm→Hot: ChainSync (2), BlockFetch (3), and TxSubmission2 (4) initiator threads start eagerly. Responder threads start on demand (when the first inbound bytes arrive).
5. TxSubmission2 MsgInit: The TxSubmission2 client (outbound side) must send MsgInit ([6]) as its very first message. Without this, the server stays in StInit indefinitely (waitForever timeout).
N2C Connection Startup
1. Handshake (protocol 0): Same mechanism, but using N2C version numbers (with bit 15 set). The local client proposes; the node accepts.
2. Mux starts: All N2C mini-protocols start eagerly on both sides.
3. No mandatory initial messages: Unlike N2N TxSubmission2, no N2C protocol requires a mandatory initial message before the first client request. The client may begin with MsgAcquire (LocalStateQuery), MsgSubmitTx (LocalTxSubmission), or MsgAcquire (LocalTxMonitor) immediately.
HFC Era Index Table
This table applies to all N2N protocols (ChainSync headers, BlockFetch blocks, TxSubmission2 txids/txs) and N2C LocalChainSync blocks.
| Era Index | Era |
|---|---|
| 0 | Byron |
| 1 | Shelley (TPraos) |
| 2 | Allegra (TPraos) |
| 3 | Mary (TPraos) |
| 4 | Alonzo (TPraos) |
| 5 | Babbage (Praos) |
| 6 | Conway (Praos) |
| 7 | Dijkstra (Praos, future) |
Source: ouroboros-consensus/ouroboros-consensus-cardano/src/unstable-cardano-consensus/Ouroboros/Consensus/Cardano/Block.hs
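The table maps naturally onto a small enum; a hedged sketch (the `Era` type and `era_from_index` helper are Dugite-agnostic illustrations):

```rust
/// HFC era indices as used across N2N and N2C payloads.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Era {
    Byron = 0, Shelley = 1, Allegra = 2, Mary = 3,
    Alonzo = 4, Babbage = 5, Conway = 6, Dijkstra = 7,
}

fn era_from_index(i: u8) -> Option<Era> {
    use Era::*;
    Some(match i {
        0 => Byron, 1 => Shelley, 2 => Allegra, 3 => Mary,
        4 => Alonzo, 5 => Babbage, 6 => Conway, 7 => Dijkstra,
        _ => return None, // unknown era index: reject the payload
    })
}

fn main() {
    assert_eq!(era_from_index(6), Some(Era::Conway));
    assert_eq!(era_from_index(8), None);
    assert_eq!(Era::Dijkstra as u8, 7);
}
```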
Summary: Protocol Error Triggers
This table lists the most common protocol violations that terminate the connection.
| Protocol | Error Condition | Trigger |
|---|---|---|
| Handshake | VersionMismatch | No common version in propose |
| Handshake | Refused | Magic mismatch, policy rejection |
| Handshake | HandshakeDecodeError | Failed to decode version params |
| ChainSync | Agency violation | Client sends MsgRollForward (server-only message) |
| ChainSync | ProtocolErrorRequestNonBlocking | Server sends MsgAwaitReply but StNext(StMustReply) was active (not StCanAwait) |
| BlockFetch | Agency violation | Client sends MsgBlock (server-only message) |
| TxSubmission2 | Protocol error | Any message before MsgInit is processed |
| TxSubmission2 | BlockingReply empty | Server sends MsgRequestTxIds(blocking=true) and client replies with empty list |
| TxSubmission2 | Size mismatch | Reported SizeInBytes deviates >10 bytes from actual tx wire size (V2 inbound) |
| KeepAlive | KeepAliveCookieMissmatch | Response cookie != request cookie |
| PeerSharing | Protocol error | Server replies with more peers than requested |
| LocalStateQuery | AcquireFailurePointTooOld | SpecificPoint is outside the volatile window |
| LocalStateQuery | AcquireFailurePointNotOnChain | SpecificPoint not on the node's chain |
| LocalStateQuery | ImmutableTip on old version | Attempting MsgAcquire(ImmutableTip) before N2C V16 |
| Any | Byte limit exceeded | Ingress queue overflow (per-state byte limits) |
| Any | Timeout exceeded | Per-state timing limits (see per-protocol tables) |
Source File Index
All files are in the IntersectMBO/ouroboros-network repository (main branch)
unless otherwise noted.
| Protocol / Topic | File |
|---|---|
| N2N Handshake Type | ouroboros-network/framework/lib/Ouroboros/Network/Protocol/Handshake/Type.hs |
| N2N Handshake Codec | ouroboros-network/framework/lib/Ouroboros/Network/Protocol/Handshake/Codec.hs |
| N2N Handshake CDDL | cardano-diffusion/protocols/cddl/specs/handshake-node-to-node-v14.cddl |
| N2C Handshake CDDL | cardano-diffusion/protocols/cddl/specs/handshake-node-to-client.cddl |
| N2N Version data v14 CDDL | cardano-diffusion/protocols/cddl/specs/node-to-node-version-data-v14.cddl |
| N2N Version data v16 CDDL | cardano-diffusion/protocols/cddl/specs/node-to-node-version-data-v16.cddl |
| N2C Version enum | cardano-diffusion/api/lib/Cardano/Network/NodeToClient/Version.hs |
| N2N Version enum | cardano-diffusion/api/lib/Cardano/Network/NodeToNode/Version.hs |
| ChainSync Type | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Type.hs |
| ChainSync Codec | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Codec.hs |
| ChainSync TimeLimits | cardano-diffusion/protocols/lib/Cardano/Network/Protocol/ChainSync/Codec/TimeLimits.hs |
| ChainSync CDDL | cardano-diffusion/protocols/cddl/specs/chain-sync.cddl |
| ChainSync Pipelining | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/PipelineDecision.hs |
| BlockFetch Type | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Type.hs |
| BlockFetch Codec | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Codec.hs |
| BlockFetch CDDL | cardano-diffusion/protocols/cddl/specs/block-fetch.cddl |
| TxSubmission2 Type | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Type.hs |
| TxSubmission2 Codec | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Codec.hs |
| TxSubmission2 CDDL | cardano-diffusion/protocols/cddl/specs/tx-submission2.cddl |
| KeepAlive Type | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Type.hs |
| KeepAlive Codec | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Codec.hs |
| KeepAlive CDDL | cardano-diffusion/protocols/cddl/specs/keep-alive.cddl |
| PeerSharing Type | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/PeerSharing/Type.hs |
| PeerSharing Codec (Cardano) | cardano-diffusion/protocols/lib/Cardano/Network/Protocol/PeerSharing/Codec.hs |
| PeerSharing CDDL | cardano-diffusion/protocols/cddl/specs/peer-sharing-v14.cddl |
| LocalStateQuery Type | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalStateQuery/Type.hs |
| LocalStateQuery Codec | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalStateQuery/Codec.hs |
| LocalStateQuery CDDL | cardano-diffusion/protocols/cddl/specs/local-state-query.cddl |
| LocalTxSubmission Type | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxSubmission/Type.hs |
| LocalTxSubmission Codec | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxSubmission/Codec.hs |
| LocalTxSubmission CDDL | cardano-diffusion/protocols/cddl/specs/local-tx-submission.cddl |
| LocalTxMonitor Type | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxMonitor/Type.hs |
| LocalTxMonitor Codec | ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxMonitor/Codec.hs |
| LocalTxMonitor CDDL | cardano-diffusion/protocols/cddl/specs/local-tx-monitor.cddl |
| Protocol Limits (byte/time constants) | ouroboros-network/api/lib/Ouroboros/Network/Protocol/Limits.hs |
| Diffusion Configuration | cardano-diffusion/lib/Cardano/Network/Diffusion/Configuration.hs |
| Mux SDU framing | ouroboros-network/network-mux/src/Network/Mux/Types.hs |
| HFC era encoding (encodeNS) | ouroboros-consensus repo: src/.../HardFork/Combinator/Serialisation/Common.hs |
| network.base.cddl | cardano-diffusion/protocols/cddl/specs/network.base.cddl |
Upgrading Dugite
Version Compatibility
Dugite is in early development. Between releases, the following may change without backward compatibility:
- Ledger snapshot format — Ledger state snapshots (saved in the ledger/ subdirectory of the database path) use bincode serialization. The field order is fixed, but new fields added in a release will make old snapshot files unreadable.
- Block storage format — ImmutableDB chunk files use the same on-disk format as cardano-node and are forward-compatible. These do not need to be deleted when upgrading.
- Configuration file format — New config fields are always optional with sensible defaults. Existing config files are forward-compatible.
- Protocol versions — Dugite tracks the N2N and N2C protocol versions supported by the current cardano-node release. If the network upgrades to a new era, update Dugite to avoid handshake failures.
Upgrade Procedure
1. Stop the Running Node
# Graceful shutdown (SIGTERM)
kill $(pidof dugite-node)
# or
pkill dugite-node
Wait for the process to exit. Dugite flushes its write buffer and closes the socket cleanly on SIGTERM.
2. Install the New Binary
From a release tarball:
curl -LO https://github.com/michaeljfazio/dugite/releases/latest/download/dugite-x86_64-linux.tar.gz
tar xzf dugite-x86_64-linux.tar.gz
sudo mv dugite-node dugite-cli dugite-monitor dugite-config /usr/local/bin/
From source:
git pull
cargo build --release
sudo cp target/release/dugite-node target/release/dugite-cli \
target/release/dugite-monitor target/release/dugite-config \
/usr/local/bin/
Container:
docker pull ghcr.io/michaeljfazio/dugite:latest
3. Clear Ledger Snapshots (if the snapshot format changed)
If the release notes mention a ledger snapshot format change, delete the saved snapshots before restarting:
rm -f /path/to/db/ledger/*.snapshot
The node will rebuild the ledger state from the ImmutableDB (block storage) on next startup. For large databases this can take several minutes; consider using Mithril snapshot import to speed up recovery.
Block storage is safe to keep. Only db/ledger/ needs to be cleared when the snapshot format changes. Do not delete db/immutable/ or db/volatile/ unless instructed.
4. Verify Configuration
Check the release notes for any new required configuration fields. New fields are always optional; your existing config will continue to work. To validate:
dugite-config validate config.json
5. Restart
dugite-node run \
--config config.json \
--topology topology.json \
--database-path ./db \
--socket-path ./node.sock \
--host-addr 0.0.0.0 \
--port 3001
Confirm the node resumes from the correct tip:
dugite-cli query tip --socket-path ./node.sock
Checking the Installed Version
dugite-node --version
dugite-cli --version
Checking the Release Notes
Release notes are published on the GitHub Releases page. Each release notes which components are affected and whether a ledger snapshot wipe is required.
Nightly Benchmark Results — 2026-05-13
Machine: GitHub Actions ubuntu-latest Branch: main (f3311cf)
Storage Benchmarks
[1m[92m Downloaded[0m sha2 v0.9.9
[1m[92m Downloaded[0m serde_json v1.0.149
[1m[92m Downloaded[0m pkcs8 v0.10.2
[1m[92m Downloaded[0m plotters-svg v0.3.7
   Compiling proc-macro2 v1.0.106
   Compiling unicode-ident v1.0.24
   Compiling quote v1.0.45
   Compiling cfg-if v1.0.4
   Compiling libc v0.2.185
   Compiling typenum v1.19.0
   Compiling version_check v0.9.5
   Compiling generic-array v0.14.9
   Compiling serde_core v1.0.228
   Compiling syn v2.0.117
   Compiling zerocopy v0.8.48
   Compiling serde v1.0.228
   Compiling subtle v2.6.1
   Compiling autocfg v1.5.0
   Compiling block-buffer v0.10.4
   Compiling crypto-common v0.1.6
   Compiling num-traits v0.2.19
   Compiling digest v0.10.7
   Compiling zeroize v1.8.2
   Compiling cpufeatures v0.2.17
   Compiling ident_case v1.0.1
   Compiling semver v1.0.28
   Compiling strsim v0.11.1
   Compiling getrandom v0.3.4
   Compiling rustc_version v0.4.1
   Compiling thiserror v1.0.69
   Compiling minicbor v0.26.5
   Compiling curve25519-dalek v4.1.3
   Compiling zmij v1.0.21
   Compiling rand_core v0.9.5
   Compiling hex v0.4.3
   Compiling serde_json v1.0.149
   Compiling signature v2.2.0
   Compiling sha2 v0.10.9
   Compiling getrandom v0.1.16
   Compiling memchr v2.8.0
   Compiling itoa v1.0.18
   Compiling rand_core v0.6.4
   Compiling cryptoxide v0.4.4
   Compiling darling_core v0.23.0
   Compiling zerocopy-derive v0.8.48
   Compiling serde_derive v1.0.228
   Compiling darling_macro v0.23.0
   Compiling minicbor-derive v0.16.2
   Compiling thiserror-impl v1.0.69
   Compiling darling v0.23.0
   Compiling curve25519-dalek-derive v0.1.1
   Compiling serde_with_macros v3.18.0
   Compiling serde_with v3.18.0
   Compiling ed25519 v2.2.3
   Compiling ed25519-dalek v2.2.0
   Compiling unicode-segmentation v1.13.2
   Compiling either v1.15.0
   Compiling crossbeam-utils v0.8.21
   Compiling convert_case v0.10.0
   Compiling num-integer v0.1.46
   Compiling digest v0.9.0
   Compiling hybrid-array v0.4.10
   Compiling half v2.7.1
   Compiling ppv-lite86 v0.2.21
   Compiling unicode-xid v0.2.6
   Compiling thiserror v2.0.18
   Compiling rand_chacha v0.9.0
   Compiling rand v0.9.4
   Compiling derive_more-impl v2.1.1
   Compiling num-bigint v0.4.6
   Compiling rand_core v0.5.1
   Compiling pallas-codec v1.0.0-alpha.6
   Compiling itertools v0.13.0
   Compiling pallas-crypto v1.0.0-alpha.6
   Compiling thiserror-impl v2.0.18
   Compiling arrayref v0.3.9
   Compiling rustversion v1.0.22
   Compiling crc-catalog v2.4.0
   Compiling bytes v1.11.1
   Compiling once_cell v1.21.4
   Compiling byteorder v1.5.0
   Compiling arrayvec v0.7.6
   Compiling constant_time_eq v0.4.2
   Compiling parking_lot_core v0.9.12
   Compiling paste v1.0.15
   Compiling find-msvc-tools v0.1.9
   Compiling shlex v1.3.0
   Compiling iana-time-zone v0.1.65
   Compiling cc v1.2.60
   Compiling blake2b_simd v1.0.4
   Compiling chrono v0.4.44
   Compiling crc v3.4.0
   Compiling derive_more v2.1.1
   Compiling num-rational v0.4.2
   Compiling crossbeam-epoch v0.9.18
   Compiling block-buffer v0.12.0
   Compiling crypto-common v0.2.1
   Compiling blake2 v0.10.6
   Compiling block-buffer v0.9.0
   Compiling base58 v0.2.0
   Compiling smallvec v1.15.1
   Compiling num-modular v0.6.1
   Compiling const-oid v0.10.2
   Compiling bech32 v0.11.1
   Compiling bech32 v0.9.1
   Compiling pin-project-lite v0.2.17
   Compiling rayon-core v1.13.0
   Compiling opaque-debug v0.3.1
   Compiling scopeguard v1.2.0
   Compiling lock_api v0.4.14
   Compiling pallas-addresses v1.0.0-alpha.6
   Compiling sha2 v0.9.9
   Compiling dugite-primitives v1.4.0 (/home/runner/work/dugite/dugite/crates/dugite-primitives)
   Compiling num-order v1.2.0
   Compiling digest v0.11.2
   Compiling crossbeam-deque v0.8.6
   Compiling alloca v0.4.0
   Compiling curve25519-dalek v3.2.0 (https://github.com/iquerejeta/curve25519-dalek?branch=ietf03_vrf_compat_ell2#70a36f41)
   Compiling curve25519-dalek v3.2.0
   Compiling tracing-core v0.1.36
   Compiling pallas-primitives v1.0.0-alpha.6
   Compiling tracing-attributes v0.1.31
   Compiling errno v0.3.14
   Compiling regex-syntax v0.8.10
   Compiling getrandom v0.4.2
   Compiling crc32fast v1.5.0
   Compiling cpufeatures v0.3.0
   Compiling ciborium-io v0.2.2
   Compiling anstyle v1.0.14
   Compiling clap_lex v1.1.0
   Compiling rustix v1.1.4
   Compiling static_assertions v1.1.0
   Compiling plotters-backend v0.3.7
   Compiling dashu-base v0.4.1
   Compiling regex-automata v0.4.14
   Compiling dashu-int v0.4.1
   Compiling plotters-svg v0.3.7
   Compiling clap_builder v4.6.0
   Compiling pallas-traverse v1.0.0-alpha.6
   Compiling tracing v0.1.44
   Compiling ciborium-ll v0.2.2
   Compiling sha2 v0.11.0
   Compiling signal-hook-registry v1.4.8
   Compiling vrf_dalek v0.1.0 (https://github.com/input-output-hk/vrf?rev=03ac038e9b92c754ebbcb71824866d93f25e27f3#03ac038e)
   Compiling parking_lot v0.12.5
   Compiling tokio-macros v2.7.0
   Compiling mio v1.2.0
   Compiling socket2 v0.6.3
   Compiling bitflags v2.11.0
   Compiling same-file v1.0.6
   Compiling cast v0.3.0
   Compiling linux-raw-sys v0.12.1
   Compiling criterion-plot v0.8.2
   Compiling tokio v1.51.1
   Compiling walkdir v2.5.0
   Compiling rayon v1.12.0
   Compiling dugite-serialization v1.4.0 (/home/runner/work/dugite/dugite/crates/dugite-serialization)
   Compiling dugite-crypto v1.4.0 (/home/runner/work/dugite/dugite/crates/dugite-crypto)
   Compiling clap v4.6.1
   Compiling ciborium v0.2.2
   Compiling plotters v0.3.7
   Compiling regex v1.12.3
   Compiling tinytemplate v1.2.1
   Compiling memmap2 v0.9.10
   Compiling page_size v0.6.0
   Compiling anes v0.1.6
   Compiling oorandom v11.1.5
   Compiling fastrand v2.4.1
   Compiling tempfile v3.27.0
   Compiling criterion v0.8.2
   Compiling dugite-storage v1.4.0 (/home/runner/work/dugite/dugite/crates/dugite-storage)
    Finished `bench` profile [optimized] target(s) in 1m 15s
     Running benches/storage_bench.rs (target/release/deps/storage_bench-9f185658b419e032)
Gnuplot not found, using plotters backend
Benchmarking chaindb/sequential_insert/10k_20kb
Benchmarking chaindb/sequential_insert/10k_20kb: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 45.0s.
Benchmarking chaindb/sequential_insert/10k_20kb: Collecting 10 samples in estimated 45.043 s (10 iterations)
Benchmarking chaindb/sequential_insert/10k_20kb: Analyzing
chaindb/sequential_insert/10k_20kb
time: [4.1071 s 4.2393 s 4.4357 s]
Found 1 outliers among 10 measurements (10.00%)
1 (10.00%) high severe
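For orientation, the sequential-insert interval above can be converted into rough throughput. A minimal sketch, assuming the benchmark name `10k_20kb` means 10,000 blocks of 20 KiB per iteration (an inference from the name, not stated in the log), and using the Criterion midpoint estimate:

```rust
// Back-of-the-envelope throughput from the Criterion midpoint estimate.
// Assumption: "10k_20kb" = 10,000 blocks x 20 KiB inserted per iteration.
fn blocks_per_sec(blocks: f64, secs: f64) -> f64 {
    blocks / secs
}

fn mib_per_sec(blocks: f64, block_bytes: f64, secs: f64) -> f64 {
    blocks * block_bytes / (1024.0 * 1024.0) / secs
}

fn main() {
    let mid = 4.2393; // midpoint of [4.1071 s, 4.4357 s]
    println!("{:.0} blocks/s", blocks_per_sec(10_000.0, mid)); // ~2359 blocks/s
    println!("{:.1} MiB/s", mib_per_sec(10_000.0, 20.0 * 1024.0, mid)); // ~46.1 MiB/s
}
```

Under those assumptions, sequential insert sustains roughly 2,400 blocks/s (~46 MiB/s) on the CI runner.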
Benchmarking chaindb/random_read/by_hash/10000blks
Benchmarking chaindb/random_read/by_hash/10000blks: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 419.4s, or reduce sample count to 10.
Benchmarking chaindb/random_read/by_hash/10000blks: Collecting 100 samples in estimated 419.35 s (100 iterations)
Benchmarking chaindb/random_read/by_hash/10000blks: Analyzing
chaindb/random_read/by_hash/10000blks
time: [33.835 ms 34.335 ms 34.918 ms]
Found 4 outliers among 100 measurements (4.00%)
1 (1.00%) high mild
3 (3.00%) high severe
Benchmarking chaindb/random_read/by_hash/100000blks
Benchmarking chaindb/random_read/by_hash/100000blks: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 4416.0s, or reduce sample count to 10.
Benchmarking chaindb/random_read/by_hash/100000blks: Collecting 100 samples in estimated 4416.0 s (100 iterations)
Benchmarking chaindb/random_read/by_hash/100000blks: Analyzing
chaindb/random_read/by_hash/100000blks
time: [345.77 ms 350.22 ms 354.92 ms]
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
Benchmarking chaindb/tip_query
Benchmarking chaindb/tip_query: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 363.4s, or reduce sample count to 10.
Benchmarking chaindb/tip_query: Collecting 100 samples in estimated 363.35 s (100 iterations)
Benchmarking chaindb/tip_query: Analyzing
chaindb/tip_query time: [31.133 ms 31.385 ms 31.660 ms]
Found 8 outliers among 100 measurements (8.00%)
1 (1.00%) low mild
3 (3.00%) high mild
4 (4.00%) high severe
Benchmarking chaindb/has_block
Benchmarking chaindb/has_block: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 392.1s, or reduce sample count to 10.
Benchmarking chaindb/has_block: Collecting 100 samples in estimated 392.09 s (100 iterations)
Benchmarking chaindb/has_block: Analyzing
chaindb/has_block time: [31.687 ms 31.974 ms 32.292 ms]
Found 3 outliers among 100 measurements (3.00%)
2 (2.00%) high mild
1 (1.00%) high severe
Benchmarking chaindb/slot_range_100
Benchmarking chaindb/slot_range_100: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 379.7s, or reduce sample count to 10.
Benchmarking chaindb/slot_range_100: Collecting 100 samples in estimated 379.70 s (100 iterations)
Benchmarking chaindb/slot_range_100: Analyzing
chaindb/slot_range_100 time: [32.301 ms 32.537 ms 32.780 ms]
Found 3 outliers among 100 measurements (3.00%)
2 (2.00%) high mild
1 (1.00%) high severe
Benchmarking chaindb/flush_to_immutable/k_2160_blocks_20kb/2160
Benchmarking chaindb/flush_to_immutable/k_2160_blocks_20kb/2160: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 8.0s.
Benchmarking chaindb/flush_to_immutable/k_2160_blocks_20kb/2160: Collecting 10 samples in estimated 8.0090 s (10 iterations)
Benchmarking chaindb/flush_to_immutable/k_2160_blocks_20kb/2160: Analyzing
chaindb/flush_to_immutable/k_2160_blocks_20kb/2160
time: [6.6273 ms 6.7963 ms 6.9661 ms]
Benchmarking chaindb/profile_comparison/insert_10k_20kb/in_memory
Benchmarking chaindb/profile_comparison/insert_10k_20kb/in_memory: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 37.0s.
Benchmarking chaindb/profile_comparison/insert_10k_20kb/in_memory: Collecting 10 samples in estimated 37.033 s (10 iterations)
Benchmarking chaindb/profile_comparison/insert_10k_20kb/in_memory: Analyzing
chaindb/profile_comparison/insert_10k_20kb/in_memory
time: [3.6213 s 3.6630 s 3.7268 s]
Found 1 outliers among 10 measurements (10.00%)
1 (10.00%) high severe
Benchmarking chaindb/profile_comparison/insert_10k_20kb/mmap
Benchmarking chaindb/profile_comparison/insert_10k_20kb/mmap: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 36.3s.
Benchmarking chaindb/profile_comparison/insert_10k_20kb/mmap: Collecting 10 samples in estimated 36.297 s (10 iterations)
Benchmarking chaindb/profile_comparison/insert_10k_20kb/mmap: Analyzing
chaindb/profile_comparison/insert_10k_20kb/mmap
time: [3.6648 s 3.7115 s 3.7820 s]
Found 1 outliers among 10 measurements (10.00%)
1 (10.00%) high severe
Benchmarking chaindb/profile_comparison/read_500/in_memory
Benchmarking chaindb/profile_comparison/read_500/in_memory: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 36.4s.
Benchmarking chaindb/profile_comparison/read_500/in_memory: Collecting 10 samples in estimated 36.405 s (10 iterations)
Benchmarking chaindb/profile_comparison/read_500/in_memory: Analyzing
chaindb/profile_comparison/read_500/in_memory
time: [30.866 ms 31.149 ms 31.432 ms]
Benchmarking chaindb/profile_comparison/read_500/mmap
Benchmarking chaindb/profile_comparison/read_500/mmap: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 37.0s.
Benchmarking chaindb/profile_comparison/read_500/mmap: Collecting 10 samples in estimated 37.037 s (10 iterations)
Benchmarking chaindb/profile_comparison/read_500/mmap: Analyzing
chaindb/profile_comparison/read_500/mmap
time: [30.667 ms 31.392 ms 32.310 ms]
Found 2 outliers among 10 measurements (20.00%)
1 (10.00%) low severe
1 (10.00%) high severe
Benchmarking immutabledb/open/in_memory/10000
Benchmarking immutabledb/open/in_memory/10000: Warming up for 3.0000 s
Benchmarking immutabledb/open/in_memory/10000: Collecting 100 samples in estimated 5.5274 s (300 iterations)
Benchmarking immutabledb/open/in_memory/10000: Analyzing
immutabledb/open/in_memory/10000
time: [17.943 ms 18.074 ms 18.232 ms]
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
Benchmarking immutabledb/open/mmap_cached/10000
Benchmarking immutabledb/open/mmap_cached/10000: Warming up for 3.0000 s
Benchmarking immutabledb/open/mmap_cached/10000: Collecting 100 samples in estimated 5.1196 s (25k iterations)
Benchmarking immutabledb/open/mmap_cached/10000: Analyzing
immutabledb/open/mmap_cached/10000
time: [202.73 µs 202.94 µs 203.21 µs]
Found 4 outliers among 100 measurements (4.00%)
1 (1.00%) low mild
1 (1.00%) high mild
2 (2.00%) high severe
Benchmarking immutabledb/open/mmap_cold_rebuild/10000
Benchmarking immutabledb/open/mmap_cold_rebuild/10000: Warming up for 3.0000 s
Benchmarking immutabledb/open/mmap_cold_rebuild/10000: Collecting 100 samples in estimated 5.2522 s (1700 iterations)
Benchmarking immutabledb/open/mmap_cold_rebuild/10000: Analyzing
immutabledb/open/mmap_cold_rebuild/10000
time: [3.1010 ms 3.1261 ms 3.1546 ms]
Found 4 outliers among 100 measurements (4.00%)
1 (1.00%) high mild
3 (3.00%) high severe
Benchmarking immutabledb/open/in_memory/100000
Benchmarking immutabledb/open/in_memory/100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 119.0s, or reduce sample count to 10.
Benchmarking immutabledb/open/in_memory/100000: Collecting 100 samples in estimated 118.99 s (100 iterations)
Benchmarking immutabledb/open/in_memory/100000: Analyzing
immutabledb/open/in_memory/100000
time: [875.56 ms 877.73 ms 880.04 ms]
Found 14 outliers among 100 measurements (14.00%)
13 (13.00%) high mild
1 (1.00%) high severe
Benchmarking immutabledb/open/mmap_cached/100000
Benchmarking immutabledb/open/mmap_cached/100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 8.8s, enable flat sampling, or reduce sample count to 50.
Benchmarking immutabledb/open/mmap_cached/100000: Collecting 100 samples in estimated 8.8087 s (5050 iterations)
Benchmarking immutabledb/open/mmap_cached/100000: Analyzing
immutabledb/open/mmap_cached/100000
time: [1.7509 ms 1.7622 ms 1.7755 ms]
Found 9 outliers among 100 measurements (9.00%)
3 (3.00%) high mild
6 (6.00%) high severe
Benchmarking immutabledb/open/mmap_cold_rebuild/100000
Benchmarking immutabledb/open/mmap_cold_rebuild/100000: Warming up for 3.0000 s
Benchmarking immutabledb/open/mmap_cold_rebuild/100000: Collecting 100 samples in estimated 5.2982 s (200 iterations)
Benchmarking immutabledb/open/mmap_cold_rebuild/100000: Analyzing
immutabledb/open/mmap_cold_rebuild/100000
time: [26.140 ms 26.226 ms 26.320 ms]
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
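To put the three open strategies in proportion at the 100,000-block size, a small sketch dividing the midpoint estimates above (877.73 ms in-memory, 26.226 ms cold mmap rebuild, 1.7622 ms cached mmap):

```rust
// Relative speedups between the immutabledb open strategies,
// using the Criterion midpoint estimates for 100,000 blocks.
fn speedup(slower_ms: f64, faster_ms: f64) -> f64 {
    slower_ms / faster_ms
}

fn main() {
    let in_memory = 877.73; // ms
    let cold_rebuild = 26.226; // ms
    let cached = 1.7622; // ms
    println!("cached vs in-memory:  {:.0}x", speedup(in_memory, cached)); // ~498x
    println!("rebuild vs in-memory: {:.1}x", speedup(in_memory, cold_rebuild)); // ~33.5x
    println!("cached vs rebuild:    {:.1}x", speedup(cold_rebuild, cached)); // ~14.9x
}
```

Opening from a cached mmap index is roughly two orders of magnitude faster than rebuilding in memory at this size, which is the case that matters for node restart time.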
Benchmarking immutabledb/lookup/in_memory/10000
Benchmarking immutabledb/lookup/in_memory/10000: Warming up for 3.0000 s
Benchmarking immutabledb/lookup/in_memory/10000: Collecting 100 samples in estimated 5.4251 s (500 iterations)
Benchmarking immutabledb/lookup/in_memory/10000: Analyzing
immutabledb/lookup/in_memory/10000
time: [10.827 ms 10.853 ms 10.879 ms]
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
Benchmarking immutabledb/lookup/mmap/10000
Benchmarking immutabledb/lookup/mmap/10000: Warming up for 3.0000 s
Benchmarking immutabledb/lookup/mmap/10000: Collecting 100 samples in estimated 5.4824 s (500 iterations)
Benchmarking immutabledb/lookup/mmap/10000: Analyzing
immutabledb/lookup/mmap/10000
time: [10.942 ms 10.973 ms 11.005 ms]
Found 6 outliers among 100 measurements (6.00%)
6 (6.00%) high mild
Benchmarking immutabledb/has_block/in_memory
Benchmarking immutabledb/has_block/in_memory: Warming up for 3.0000 s
Benchmarking immutabledb/has_block/in_memory: Collecting 100 samples in estimated 5.0068 s (157k iterations)
Benchmarking immutabledb/has_block/in_memory: Analyzing
immutabledb/has_block/in_memory
time: [33.454 µs 33.909 µs 34.341 µs]
Benchmarking immutabledb/has_block/mmap
Benchmarking immutabledb/has_block/mmap: Warming up for 3.0000 s
Benchmarking immutabledb/has_block/mmap: Collecting 100 samples in estimated 5.0013 s (157k iterations)
Benchmarking immutabledb/has_block/mmap: Analyzing
immutabledb/has_block/mmap
time: [33.415 µs 33.861 µs 34.290 µs]
Benchmarking immutabledb/append/1k_blocks_20kb/in_memory
Benchmarking immutabledb/append/1k_blocks_20kb/in_memory: Warming up for 3.0000 s
Benchmarking immutabledb/append/1k_blocks_20kb/in_memory: Collecting 100 samples in estimated 6.0251 s (400 iterations)
Benchmarking immutabledb/append/1k_blocks_20kb/in_memory: Analyzing
immutabledb/append/1k_blocks_20kb/in_memory
time: [14.877 ms 14.943 ms 15.025 ms]
Found 9 outliers among 100 measurements (9.00%)
7 (7.00%) high mild
2 (2.00%) high severe
Benchmarking immutabledb/append/1k_blocks_20kb/mmap
Benchmarking immutabledb/append/1k_blocks_20kb/mmap: Warming up for 3.0000 s
Benchmarking immutabledb/append/1k_blocks_20kb/mmap: Collecting 100 samples in estimated 6.2472 s (400 iterations)
Benchmarking immutabledb/append/1k_blocks_20kb/mmap: Analyzing
immutabledb/append/1k_blocks_20kb/mmap
time: [15.444 ms 15.466 ms 15.490 ms]
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high mild
Benchmarking immutabledb/slot_range/range_100/in_memory
Benchmarking immutabledb/slot_range/range_100/in_memory: Warming up for 3.0000 s
Benchmarking immutabledb/slot_range/range_100/in_memory: Collecting 100 samples in estimated 6.5988 s (20k iterations)
Benchmarking immutabledb/slot_range/range_100/in_memory: Analyzing
immutabledb/slot_range/range_100/in_memory
time: [316.95 µs 317.70 µs 318.37 µs]
Found 5 outliers among 100 measurements (5.00%)
5 (5.00%) low mild
Benchmarking immutabledb/slot_range/range_100/mmap
Benchmarking immutabledb/slot_range/range_100/mmap: Warming up for 3.0000 s
Benchmarking immutabledb/slot_range/range_100/mmap: Collecting 100 samples in estimated 5.0590 s (15k iterations)
Benchmarking immutabledb/slot_range/range_100/mmap: Analyzing
immutabledb/slot_range/range_100/mmap
time: [327.04 µs 329.17 µs 331.46 µs]
Found 15 outliers among 100 measurements (15.00%)
7 (7.00%) low mild
7 (7.00%) high mild
1 (1.00%) high severe
Benchmarking block_index/insert/in_memory/10000
Benchmarking block_index/insert/in_memory/10000: Warming up for 3.0000 s
Benchmarking block_index/insert/in_memory/10000: Collecting 100 samples in estimated 8.1722 s (10k iterations)
Benchmarking block_index/insert/in_memory/10000: Analyzing
block_index/insert/in_memory/10000
time: [808.27 µs 809.59 µs 810.91 µs]
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
Benchmarking block_index/insert/mmap/10000
Benchmarking block_index/insert/mmap/10000: Warming up for 3.0000 s
Benchmarking block_index/insert/mmap/10000: Collecting 100 samples in estimated 5.0758 s (800 iterations)
Benchmarking block_index/insert/mmap/10000: Analyzing
block_index/insert/mmap/10000
time: [6.3160 ms 6.3823 ms 6.4678 ms]
Found 7 outliers among 100 measurements (7.00%)
4 (4.00%) high mild
3 (3.00%) high severe
Benchmarking block_index/insert/in_memory/50000
Benchmarking block_index/insert/in_memory/50000: Warming up for 3.0000 s
Benchmarking block_index/insert/in_memory/50000: Collecting 100 samples in estimated 5.3522 s (1500 iterations)
Benchmarking block_index/insert/in_memory/50000: Analyzing
block_index/insert/in_memory/50000
time: [3.5350 ms 3.5515 ms 3.5732 ms]
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
Benchmarking block_index/insert/mmap/50000
Benchmarking block_index/insert/mmap/50000: Warming up for 3.0000 s
Benchmarking block_index/insert/mmap/50000: Collecting 100 samples in estimated 9.1198 s (200 iterations)
Benchmarking block_index/insert/mmap/50000: Analyzing
block_index/insert/mmap/50000
time: [45.979 ms 46.111 ms 46.248 ms]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
Benchmarking block_index/insert/in_memory/100000
Benchmarking block_index/insert/in_memory/100000: Warming up for 3.0000 s
Benchmarking block_index/insert/in_memory/100000: Collecting 100 samples in estimated 5.8126 s (700 iterations)
Benchmarking block_index/insert/in_memory/100000: Analyzing
block_index/insert/in_memory/100000
time: [8.1474 ms 8.2619 ms 8.3784 ms]
Benchmarking block_index/insert/mmap/100000
Benchmarking block_index/insert/mmap/100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 9.0s, or reduce sample count to 50.
Benchmarking block_index/insert/mmap/100000: Collecting 100 samples in estimated 8.9540 s (100 iterations)
Benchmarking block_index/insert/mmap/100000: Analyzing
block_index/insert/mmap/100000
time: [88.962 ms 89.268 ms 89.654 ms]
Found 3 outliers among 100 measurements (3.00%)
1 (1.00%) high mild
2 (2.00%) high severe
Benchmarking block_index/lookup/in_memory/10000
Benchmarking block_index/lookup/in_memory/10000: Warming up for 3.0000 s
Benchmarking block_index/lookup/in_memory/10000: Collecting 100 samples in estimated 5.0596 s (278k iterations)
Benchmarking block_index/lookup/in_memory/10000: Analyzing
block_index/lookup/in_memory/10000
time: [18.241 µs 18.272 µs 18.305 µs]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
Benchmarking block_index/lookup/mmap/10000
Benchmarking block_index/lookup/mmap/10000: Warming up for 3.0000 s
Benchmarking block_index/lookup/mmap/10000: Collecting 100 samples in estimated 5.0157 s (177k iterations)
Benchmarking block_index/lookup/mmap/10000: Analyzing
block_index/lookup/mmap/10000
time: [29.276 µs 29.471 µs 29.671 µs]
Benchmarking block_index/lookup/in_memory/50000
Benchmarking block_index/lookup/in_memory/50000: Warming up for 3.0000 s
Benchmarking block_index/lookup/in_memory/50000: Collecting 100 samples in estimated 5.0536 s (273k iterations)
Benchmarking block_index/lookup/in_memory/50000: Analyzing
block_index/lookup/in_memory/50000
time: [18.559 µs 18.587 µs 18.618 µs]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
Benchmarking block_index/lookup/mmap/50000
Benchmarking block_index/lookup/mmap/50000: Warming up for 3.0000 s
Benchmarking block_index/lookup/mmap/50000: Collecting 100 samples in estimated 5.0276 s (258k iterations)
Benchmarking block_index/lookup/mmap/50000: Analyzing
block_index/lookup/mmap/50000
time: [20.261 µs 20.358 µs 20.448 µs]
Found 16 outliers among 100 measurements (16.00%)
16 (16.00%) low mild
Benchmarking block_index/lookup/in_memory/100000
Benchmarking block_index/lookup/in_memory/100000: Warming up for 3.0000 s
Benchmarking block_index/lookup/in_memory/100000: Collecting 100 samples in estimated 5.0916 s (273k iterations)
Benchmarking block_index/lookup/in_memory/100000: Analyzing
block_index/lookup/in_memory/100000
time: [18.737 µs 18.799 µs 18.881 µs]
Found 6 outliers among 100 measurements (6.00%)
3 (3.00%) high mild
3 (3.00%) high severe
Benchmarking block_index/lookup/mmap/100000
Benchmarking block_index/lookup/mmap/100000: Warming up for 3.0000 s
Benchmarking block_index/lookup/mmap/100000: Collecting 100 samples in estimated 5.0960 s (268k iterations)
Benchmarking block_index/lookup/mmap/100000: Analyzing
block_index/lookup/mmap/100000
time: [19.764 µs 19.855 µs 19.941 µs]
Found 18 outliers among 100 measurements (18.00%)
16 (16.00%) low mild
2 (2.00%) high severe
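The lookup results above are nearly flat as the index grows from 10,000 to 100,000 entries, which is consistent with a constant-time index (the underlying structure is not shown in this log). A quick check of the growth ratio from the in-memory midpoints:

```rust
// Growth ratio of in-memory lookup time as the index scales 10x.
// Midpoints taken from the Criterion output above (microseconds).
fn growth_ratio(small_us: f64, large_us: f64) -> f64 {
    large_us / small_us
}

fn main() {
    let at_10k = 18.272;
    let at_100k = 18.799;
    // A 10x larger index costs only ~3% more per lookup batch.
    println!("{:.3}", growth_ratio(at_10k, at_100k)); // ~1.029
}
```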
Benchmarking block_index/contains_miss/in_memory
Benchmarking block_index/contains_miss/in_memory: Warming up for 3.0000 s
Benchmarking block_index/contains_miss/in_memory: Collecting 100 samples in estimated 5.0301 s (409k iterations)
Benchmarking block_index/contains_miss/in_memory: Analyzing
block_index/contains_miss/in_memory
time: [12.270 µs 12.274 µs 12.279 µs]
Found 7 outliers among 100 measurements (7.00%)
3 (3.00%) high mild
4 (4.00%) high severe
Benchmarking block_index/contains_miss/mmap
Benchmarking block_index/contains_miss/mmap: Warming up for 3.0000 s
Benchmarking block_index/contains_miss/mmap: Collecting 100 samples in estimated 5.1467 s (116k iterations)
Benchmarking block_index/contains_miss/mmap: Analyzing
block_index/contains_miss/mmap
time: [40.804 µs 41.415 µs 42.058 µs]
Benchmarking scaling/block_index_insert/in_memory/10000
Benchmarking scaling/block_index_insert/in_memory/10000: Warming up for 3.0000 s
Benchmarking scaling/block_index_insert/in_memory/10000: Collecting 10 samples in estimated 5.0181 s (6270 iterations)
Benchmarking scaling/block_index_insert/in_memory/10000: Analyzing
scaling/block_index_insert/in_memory/10000
time: [800.48 µs 802.53 µs 805.02 µs]
Found 2 outliers among 10 measurements (20.00%)
2 (20.00%) high mild
Benchmarking scaling/block_index_insert/mmap/10000
Benchmarking scaling/block_index_insert/mmap/10000: Warming up for 3.0000 s
Benchmarking scaling/block_index_insert/mmap/10000: Collecting 10 samples in estimated 5.1419 s (825 iterations)
Benchmarking scaling/block_index_insert/mmap/10000: Analyzing
scaling/block_index_insert/mmap/10000
time: [6.2919 ms 6.3088 ms 6.3327 ms]
Benchmarking scaling/block_index_insert/in_memory/50000
Benchmarking scaling/block_index_insert/in_memory/50000: Warming up for 3.0000 s
Benchmarking scaling/block_index_insert/in_memory/50000: Collecting 10 samples in estimated 5.0248 s (1430 iterations)
Benchmarking scaling/block_index_insert/in_memory/50000: Analyzing
scaling/block_index_insert/in_memory/50000
time: [3.5047 ms 3.5095 ms 3.5150 ms]
Found 1 outliers among 10 measurements (10.00%)
1 (10.00%) high severe
Benchmarking scaling/block_index_insert/mmap/50000
Benchmarking scaling/block_index_insert/mmap/50000: Warming up for 3.0000 s
Benchmarking scaling/block_index_insert/mmap/50000: Collecting 10 samples in estimated 5.0573 s (110 iterations)
Benchmarking scaling/block_index_insert/mmap/50000: Analyzing
scaling/block_index_insert/mmap/50000
time: [45.750 ms 45.921 ms 46.042 ms]
Benchmarking scaling/block_index_insert/in_memory/100000
Benchmarking scaling/block_index_insert/in_memory/100000: Warming up for 3.0000 s
Benchmarking scaling/block_index_insert/in_memory/100000: Collecting 10 samples in estimated 5.4189 s (660 iterations)
Benchmarking scaling/block_index_insert/in_memory/100000: Analyzing
scaling/block_index_insert/in_memory/100000
`scaling/block_index_insert` — time to insert N entries (Criterion point estimates):

| Entries | in_memory | mmap |
|--------:|----------:|-----:|
| 100,000 | 7.9202 ms | 89.380 ms |
| 250,000 | 30.006 ms | 182.37 ms |
| 500,000 | 65.380 ms | 322.60 ms |
| 1,000,000 | 144.46 ms | 566.74 ms |
`scaling/block_index_lookup` — lookup time stays essentially flat across index sizes (Criterion point estimates):

| Entries | in_memory | mmap |
|--------:|----------:|-----:|
| 10,000 | 18.125 µs | 28.312 µs |
| 50,000 | 18.557 µs | 19.486 µs |
| 100,000 | 18.703 µs | 19.029 µs |
| 250,000 | 18.809 µs | 18.903 µs |
| 500,000 | 18.682 µs | 18.693 µs |
| 1,000,000 | 18.882 µs | 18.579 µs |
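The flat lookup times above are consistent with hash-style indexing, where per-lookup cost does not grow with the number of entries. A minimal std-only sketch (illustrative only, not Dugite's actual index code) of how one might check that lookup cost stays roughly constant as a map grows:

```rust
use std::collections::HashMap;
use std::time::Instant;

/// Average cost (ns) of one lookup in a hash index of `n` synthetic entries.
fn lookup_cost_ns(n: u64, probes: u64) -> f64 {
    // Multiplying by an odd constant is a bijection on u64, so keys are distinct.
    let index: HashMap<u64, u64> = (0..n)
        .map(|i| (i.wrapping_mul(0x9E37_79B9_7F4A_7C15), i))
        .collect();
    // Probe keys that are all present in the index.
    let keys: Vec<u64> = (0..probes)
        .map(|i| (i % n).wrapping_mul(0x9E37_79B9_7F4A_7C15))
        .collect();
    let start = Instant::now();
    let mut found = 0u64;
    for k in &keys {
        if index.contains_key(k) {
            found += 1;
        }
    }
    let elapsed = start.elapsed().as_nanos() as f64;
    assert_eq!(found, probes); // every probe should hit
    elapsed / probes as f64
}

fn main() {
    // Per-lookup cost should stay roughly flat as the index grows.
    for n in [10_000u64, 100_000, 1_000_000] {
        println!("n = {:>9}: {:.1} ns/lookup", n, lookup_cost_ns(n, 10_000));
    }
}
```

A single timed pass like this is far noisier than Criterion's warm-up-and-sample approach, but it is enough to see the shape of the scaling curve.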
`scaling/immutabledb_open` — a cached mmap index avoids the full in-memory rebuild on open (Criterion point estimates):

| Blocks | in_memory | mmap_cached |
|-------:|----------:|------------:|
| 10,000 | 21.981 ms | 202.85 µs |
| 50,000 | 442.83 ms | 801.57 µs |
| 100,000 | 878.13 ms | 1.8425 ms |
| 250,000 | 2.1861 s | 5.3753 ms |
| 500,000 | 23.219 s | 10.038 ms |
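Opening from a cached index is roughly three orders of magnitude faster than rebuilding it (≈10 ms vs ≈23 s at 500,000 blocks). A std-only sketch of the general idea — persist the index as fixed-width records so a later open reloads it instead of re-scanning chunk files (hypothetical 16-byte record format, not Dugite's on-disk layout):

```rust
use std::convert::TryInto;
use std::fs;
use std::io::{self, Write};
use std::path::Path;

/// Persist (slot, offset) pairs as fixed-width little-endian records.
fn save_index(path: &Path, entries: &[(u64, u64)]) -> io::Result<()> {
    let mut buf = Vec::with_capacity(entries.len() * 16);
    for &(slot, off) in entries {
        buf.extend_from_slice(&slot.to_le_bytes());
        buf.extend_from_slice(&off.to_le_bytes());
    }
    fs::File::create(path)?.write_all(&buf)
}

/// Reload the index from disk: O(file size), no re-parsing of block data.
fn load_index(path: &Path) -> io::Result<Vec<(u64, u64)>> {
    let bytes = fs::read(path)?;
    Ok(bytes
        .chunks_exact(16)
        .map(|c| {
            let slot = u64::from_le_bytes(c[..8].try_into().unwrap());
            let off = u64::from_le_bytes(c[8..].try_into().unwrap());
            (slot, off)
        })
        .collect())
}

fn main() -> io::Result<()> {
    let entries: Vec<(u64, u64)> = (0..1_000).map(|i| (i, i * 20_480)).collect();
    let path = std::env::temp_dir().join("dugite_index_sketch.bin");
    save_index(&path, &entries)?;
    let reloaded = load_index(&path)?;
    assert_eq!(reloaded, entries); // round-trip preserves the index
    Ok(())
}
```

The real win in the numbers above comes from mapping the cached file rather than reading it, but the cache-versus-rebuild trade-off is the same either way.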
`scaling/chaindb_insert/default_20kb` — roughly linear at ~0.37 ms per 20 kB block (Criterion point estimates):

| Blocks | Time |
|-------:|-----:|
| 10,000 | 3.6837 s |
| 50,000 | 18.451 s |
| 100,000 | 37.826 s |
| 250,000 | 94.324 s |
UTxO Benchmarks
    Finished `bench` profile [optimized] target(s) in 1m 10s
     Running benches/utxo_bench.rs (target/release/deps/utxo_bench-e4761f70421c8e01)
`utxo_store` — operations against a 1,000,000-entry UTxO set (Criterion point estimates):

| Benchmark | Time |
|-----------|-----:|
| insert/default | 2.7388 s |
| lookup/hit | 524.42 µs |
| lookup/miss | 311.55 µs |
| contains/hit | 380.33 µs |
| contains/miss | 295.40 µs |
| remove/sequential | 2.6996 s |
| apply_tx/block_50tx_3in_2out | 277.97 ms |
| apply_tx/block_300tx_2in_2out | 274.90 ms |
| multi_asset/insert_mixed_30pct | 3.3917 s |
| multi_asset/lookup_mixed_30pct | 129.21 ms |
| total_lovelace/scan | 280.52 ms |
| rebuild_address_index/rebuild | 495.64 ms |
`utxo_store` — memory-configuration sweep at 1,000,000 UTxOs; insert and lookup-hit costs are nearly identical across configurations (Criterion point estimates):

| Config | insert | lookup |
|--------|-------:|-------:|
| low_8gb | 2.6151 s | 450.43 µs |
| mid_16gb | 2.6150 s | 444.32 µs |
| high_32gb | 2.6310 s | 440.79 µs |
| high_bloom_16gb | 2.6946 s | 442.50 µs |
| legacy_small | 2.5895 s | 447.90 µs |
`utxo_scaling` — key operations across UTxO set sizes (Criterion point estimates):

| UTxOs | insert | lookup/hit | apply_tx (50 tx, 3-in/2-out) | total_lovelace scan |
|------:|-------:|-----------:|-----------------------------:|--------------------:|
| 100,000 | 206.03 ms | 359.80 µs | 13.710 ms | 26.534 ms |
| 500,000 | 1.1974 s | 410.93 µs | 132.53 ms | 141.17 ms |
| 1,000,000 | 2.5916 s | 441.67 µs | 277.15 ms | 285.35 ms |
Benchmarking utxo_large_scale/insert/default/5000000
Benchmarking utxo_large_scale/insert/default/5000000: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 156.3s.
Benchmarking utxo_large_scale/insert/default/5000000: Collecting 10 samples in estimated 156.25 s (10 iterations)
Benchmarking utxo_large_scale/insert/default/5000000: Analyzing
utxo_large_scale/insert/default/5000000
time: [16.084 s 16.233 s 16.425 s]
Found 1 outliers among 10 measurements (10.00%)
1 (10.00%) high severe
Benchmarking utxo_large_scale/insert/default/10000000
Benchmarking utxo_large_scale/insert/default/10000000: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 354.2s.
Benchmarking utxo_large_scale/insert/default/10000000: Collecting 10 samples in estimated 354.15 s (10 iterations)
Benchmarking utxo_large_scale/insert/default/10000000: Analyzing
utxo_large_scale/insert/default/10000000
time: [33.926 s 34.208 s 34.472 s]
Found 2 outliers among 10 measurements (20.00%)
1 (10.00%) low mild
1 (10.00%) high mild
Benchmarking utxo_large_scale/lookup/hit/5000000
Benchmarking utxo_large_scale/lookup/hit/5000000: Warming up for 3.0000 s
Benchmarking utxo_large_scale/lookup/hit/5000000: Collecting 10 samples in estimated 5.0607 s (3795 iterations)
Benchmarking utxo_large_scale/lookup/hit/5000000: Analyzing
utxo_large_scale/lookup/hit/5000000
time: [1.3151 ms 1.3159 ms 1.3169 ms]
Benchmarking utxo_large_scale/lookup/hit/10000000
Benchmarking utxo_large_scale/lookup/hit/10000000: Warming up for 3.0000 s
Benchmarking utxo_large_scale/lookup/hit/10000000: Collecting 10 samples in estimated 5.0623 s (3080 iterations)
Benchmarking utxo_large_scale/lookup/hit/10000000: Analyzing
utxo_large_scale/lookup/hit/10000000
time: [1.6051 ms 1.6076 ms 1.6105 ms]
Benchmarking utxo_large_scale/total_lovelace/scan/5000000
Benchmarking utxo_large_scale/total_lovelace/scan/5000000: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 16.0s.
Benchmarking utxo_large_scale/total_lovelace/scan/5000000: Collecting 10 samples in estimated 16.020 s (10 iterations)
Benchmarking utxo_large_scale/total_lovelace/scan/5000000: Analyzing
utxo_large_scale/total_lovelace/scan/5000000
time: [1.5908 s 1.6013 s 1.6116 s]
Found 3 outliers among 10 measurements (30.00%)
1 (10.00%) low severe
2 (20.00%) high severe
Benchmarking utxo_large_scale/total_lovelace/scan/10000000
Benchmarking utxo_large_scale/total_lovelace/scan/10000000: Warming up for 3.0000 s
Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 31.9s.
Benchmarking utxo_large_scale/total_lovelace/scan/10000000: Collecting 10 samples in estimated 31.868 s (10 iterations)
Benchmarking utxo_large_scale/total_lovelace/scan/10000000: Analyzing
utxo_large_scale/total_lovelace/scan/10000000
time: [3.1515 s 3.1715 s 3.1841 s]
Found 2 outliers among 10 measurements (20.00%)
1 (10.00%) low severe
1 (10.00%) high mild
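The lookup timings above are Criterion measurements against the real store. For intuition only, the shape of such a scaling benchmark can be sketched with a plain `HashMap` and `std::time::Instant`; the `TxIn` key type and store layout here are illustrative assumptions, not Dugite's actual UTxO implementation.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Illustrative key type: a transaction hash plus output index.
type TxIn = ([u8; 32], u16);

// Build a synthetic store of `n` UTxO entries (value = lovelace).
fn build_store(n: u64) -> HashMap<TxIn, u64> {
    let mut store = HashMap::with_capacity(n as usize);
    for i in 0..n {
        let mut hash = [0u8; 32];
        hash[..8].copy_from_slice(&i.to_le_bytes());
        store.insert((hash, 0), 1_000_000 + i);
    }
    store
}

// Time `lookups` guaranteed hits against a store of size `n`.
fn bench_lookup_hit(n: u64, lookups: u64) -> Duration {
    let store = build_store(n);
    let start = Instant::now();
    let mut sum = 0u64;
    for i in 0..lookups {
        let mut hash = [0u8; 32];
        hash[..8].copy_from_slice(&(i % n).to_le_bytes());
        sum = sum.wrapping_add(*store.get(&(hash, 0)).unwrap());
    }
    std::hint::black_box(sum); // keep the loop from being optimized away
    start.elapsed()
}

fn main() {
    let d = bench_lookup_hit(100_000, 10_000);
    println!("10k hits against 100k entries: {:?}", d);
}
```

Criterion adds warm-up, repeated sampling, and outlier classification on top of this basic measure-a-loop pattern, which is what the "Warming up / Collecting / Analyzing" lines in the log correspond to.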
Network Benchmarks

Finished `bench` profile [optimized] target(s) in 58.13s
Running benches/network_bench.rs (target/release/deps/network_bench-432b166dd88189ac)
Gnuplot not found, using plotters backend
Benchmarking network/handshake_encode/n2n_version_data
Benchmarking network/handshake_encode/n2n_version_data: Warming up for 3.0000 s
Benchmarking network/handshake_encode/n2n_version_data: Collecting 100 samples in estimated 5.0002 s (113M iterations)
Benchmarking network/handshake_encode/n2n_version_data: Analyzing
network/handshake_encode/n2n_version_data
time: [43.830 ns 44.036 ns 44.358 ns]
Found 5 outliers among 100 measurements (5.00%)
1 (1.00%) low mild
1 (1.00%) high mild
3 (3.00%) high severe
Benchmarking network/handshake_encode/n2c_version_data
Benchmarking network/handshake_encode/n2c_version_data: Warming up for 3.0000 s
Benchmarking network/handshake_encode/n2c_version_data: Collecting 100 samples in estimated 5.0001 s (216M iterations)
Benchmarking network/handshake_encode/n2c_version_data: Analyzing
network/handshake_encode/n2c_version_data
time: [23.376 ns 23.441 ns 23.504 ns]
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) high severe
Benchmarking network/chainsync/roll_forward/encode/256
Benchmarking network/chainsync/roll_forward/encode/256: Warming up for 3.0000 s
Benchmarking network/chainsync/roll_forward/encode/256: Collecting 100 samples in estimated 5.0004 s (46M iterations)
Benchmarking network/chainsync/roll_forward/encode/256: Analyzing
network/chainsync/roll_forward/encode/256
time: [109.55 ns 109.70 ns 109.83 ns]
thrpt: [2.6372 GiB/s 2.6403 GiB/s 2.6438 GiB/s]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
Benchmarking network/chainsync/roll_forward/decode/256
Benchmarking network/chainsync/roll_forward/decode/256: Warming up for 3.0000 s
Benchmarking network/chainsync/roll_forward/decode/256: Collecting 100 samples in estimated 5.0002 s (51M iterations)
Benchmarking network/chainsync/roll_forward/decode/256: Analyzing
network/chainsync/roll_forward/decode/256
time: [95.415 ns 95.629 ns 95.856 ns]
thrpt: [3.0216 GiB/s 3.0288 GiB/s 3.0356 GiB/s]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
Benchmarking network/chainsync/roll_forward/encode/1024
Benchmarking network/chainsync/roll_forward/encode/1024: Warming up for 3.0000 s
Benchmarking network/chainsync/roll_forward/encode/1024: Collecting 100 samples in estimated 5.0006 s (30M iterations)
Benchmarking network/chainsync/roll_forward/encode/1024: Analyzing
network/chainsync/roll_forward/encode/1024
time: [159.72 ns 161.63 ns 164.67 ns]
thrpt: [6.1027 GiB/s 6.2172 GiB/s 6.2918 GiB/s]
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
Benchmarking network/chainsync/roll_forward/decode/1024
Benchmarking network/chainsync/roll_forward/decode/1024: Warming up for 3.0000 s
Benchmarking network/chainsync/roll_forward/decode/1024: Collecting 100 samples in estimated 5.0003 s (42M iterations)
Benchmarking network/chainsync/roll_forward/decode/1024: Analyzing
network/chainsync/roll_forward/decode/1024
time: [104.36 ns 104.73 ns 105.16 ns]
thrpt: [9.5555 GiB/s 9.5950 GiB/s 9.6288 GiB/s]
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
Benchmarking network/chainsync/roll_forward/encode/4096
Benchmarking network/chainsync/roll_forward/encode/4096: Warming up for 3.0000 s
Benchmarking network/chainsync/roll_forward/encode/4096: Collecting 100 samples in estimated 5.0009 s (26M iterations)
Benchmarking network/chainsync/roll_forward/encode/4096: Analyzing
network/chainsync/roll_forward/encode/4096
time: [195.45 ns 195.68 ns 195.90 ns]
thrpt: [19.734 GiB/s 19.757 GiB/s 19.779 GiB/s]
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
Benchmarking network/chainsync/roll_forward/decode/4096
Benchmarking network/chainsync/roll_forward/decode/4096: Warming up for 3.0000 s
Benchmarking network/chainsync/roll_forward/decode/4096: Collecting 100 samples in estimated 5.0008 s (19M iterations)
Benchmarking network/chainsync/roll_forward/decode/4096: Analyzing
network/chainsync/roll_forward/decode/4096
time: [256.85 ns 263.65 ns 269.47 ns]
thrpt: [14.346 GiB/s 14.663 GiB/s 15.051 GiB/s]
Benchmarking network/chainsync/roll_backward/encode
Benchmarking network/chainsync/roll_backward/encode: Warming up for 3.0000 s
Benchmarking network/chainsync/roll_backward/encode: Collecting 100 samples in estimated 5.0006 s (31M iterations)
Benchmarking network/chainsync/roll_backward/encode: Analyzing
network/chainsync/roll_backward/encode
time: [161.69 ns 161.98 ns 162.50 ns]
thrpt: [516.46 MiB/s 518.10 MiB/s 519.02 MiB/s]
Found 11 outliers among 100 measurements (11.00%)
5 (5.00%) high mild
6 (6.00%) high severe
Benchmarking network/chainsync/roll_backward/decode
Benchmarking network/chainsync/roll_backward/decode: Warming up for 3.0000 s
Benchmarking network/chainsync/roll_backward/decode: Collecting 100 samples in estimated 5.0002 s (69M iterations)
Benchmarking network/chainsync/roll_backward/decode: Analyzing
network/chainsync/roll_backward/decode
time: [71.592 ns 71.843 ns 72.121 ns]
thrpt: [1.1364 GiB/s 1.1408 GiB/s 1.1448 GiB/s]
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
Benchmarking network/blockfetch/msg_block/encode/2048
Benchmarking network/blockfetch/msg_block/encode/2048: Warming up for 3.0000 s
Benchmarking network/blockfetch/msg_block/encode/2048: Collecting 100 samples in estimated 5.0003 s (46M iterations)
Benchmarking network/blockfetch/msg_block/encode/2048: Analyzing
network/blockfetch/msg_block/encode/2048
time: [108.42 ns 108.54 ns 108.69 ns]
thrpt: [17.592 GiB/s 17.616 GiB/s 17.635 GiB/s]
Found 15 outliers among 100 measurements (15.00%)
6 (6.00%) low mild
9 (9.00%) high severe
Benchmarking network/blockfetch/msg_block/decode/2048
Benchmarking network/blockfetch/msg_block/decode/2048: Warming up for 3.0000 s
Benchmarking network/blockfetch/msg_block/decode/2048: Collecting 100 samples in estimated 5.0001 s (67M iterations)
Benchmarking network/blockfetch/msg_block/decode/2048: Analyzing
network/blockfetch/msg_block/decode/2048
time: [74.704 ns 74.949 ns 75.176 ns]
thrpt: [25.434 GiB/s 25.511 GiB/s 25.594 GiB/s]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
Benchmarking network/blockfetch/msg_block/encode/20480
Benchmarking network/blockfetch/msg_block/encode/20480: Warming up for 3.0000 s
Benchmarking network/blockfetch/msg_block/encode/20480: Collecting 100 samples in estimated 5.0015 s (9.7M iterations)
Benchmarking network/blockfetch/msg_block/encode/20480: Analyzing
network/blockfetch/msg_block/encode/20480
time: [518.10 ns 522.43 ns 529.25 ns]
thrpt: [36.047 GiB/s 36.518 GiB/s 36.823 GiB/s]
Found 5 outliers among 100 measurements (5.00%)
5 (5.00%) high severe
Benchmarking network/blockfetch/msg_block/decode/20480
Benchmarking network/blockfetch/msg_block/decode/20480: Warming up for 3.0000 s
Benchmarking network/blockfetch/msg_block/decode/20480: Collecting 100 samples in estimated 5.0276 s (732k iterations)
Benchmarking network/blockfetch/msg_block/decode/20480: Analyzing
network/blockfetch/msg_block/decode/20480
time: [6.8672 µs 6.9007 µs 6.9461 µs]
thrpt: [2.7466 GiB/s 2.7647 GiB/s 2.7782 GiB/s]
Found 9 outliers among 100 measurements (9.00%)
1 (1.00%) low mild
1 (1.00%) high mild
7 (7.00%) high severe
Benchmarking network/blockfetch/msg_block/encode/90000
Benchmarking network/blockfetch/msg_block/encode/90000: Warming up for 3.0000 s
Benchmarking network/blockfetch/msg_block/encode/90000: Collecting 100 samples in estimated 5.0031 s (2.4M iterations)
Benchmarking network/blockfetch/msg_block/encode/90000: Analyzing
network/blockfetch/msg_block/encode/90000
time: [2.0609 µs 2.0627 µs 2.0649 µs]
thrpt: [40.595 GiB/s 40.638 GiB/s 40.675 GiB/s]
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
Benchmarking network/blockfetch/msg_block/decode/90000
Benchmarking network/blockfetch/msg_block/decode/90000: Warming up for 3.0000 s
Benchmarking network/blockfetch/msg_block/decode/90000: Collecting 100 samples in estimated 5.0110 s (2.1M iterations)
Benchmarking network/blockfetch/msg_block/decode/90000: Analyzing
network/blockfetch/msg_block/decode/90000
time: [2.4784 µs 2.4830 µs 2.4877 µs]
thrpt: [33.696 GiB/s 33.760 GiB/s 33.823 GiB/s]
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
Benchmarking network/blockfetch/request_range/encode
Benchmarking network/blockfetch/request_range/encode: Warming up for 3.0000 s
Benchmarking network/blockfetch/request_range/encode: Collecting 100 samples in estimated 5.0003 s (48M iterations)
Benchmarking network/blockfetch/request_range/encode: Analyzing
network/blockfetch/request_range/encode
time: [104.22 ns 104.27 ns 104.33 ns]
Found 7 outliers among 100 measurements (7.00%)
2 (2.00%) low mild
2 (2.00%) high mild
3 (3.00%) high severe
Benchmarking network/blockfetch/request_range/decode
Benchmarking network/blockfetch/request_range/decode: Warming up for 3.0000 s
Benchmarking network/blockfetch/request_range/decode: Collecting 100 samples in estimated 5.0001 s (73M iterations)
Benchmarking network/blockfetch/request_range/decode: Analyzing
network/blockfetch/request_range/decode
time: [69.710 ns 70.103 ns 70.469 ns]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
Benchmarking network/n2c_query_encode/pparams/hfc_success
Benchmarking network/n2c_query_encode/pparams/hfc_success: Warming up for 3.0000 s
Benchmarking network/n2c_query_encode/pparams/hfc_success: Collecting 100 samples in estimated 5.0002 s (99M iterations)
Benchmarking network/n2c_query_encode/pparams/hfc_success: Analyzing
network/n2c_query_encode/pparams/hfc_success
time: [50.601 ns 50.623 ns 50.646 ns]
thrpt: [1.6734 GiB/s 1.6742 GiB/s 1.6749 GiB/s]
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
Benchmarking network/n2c_query_encode/pparams/tag24
Benchmarking network/n2c_query_encode/pparams/tag24: Warming up for 3.0000 s
Benchmarking network/n2c_query_encode/pparams/tag24: Collecting 100 samples in estimated 5.0000 s (92M iterations)
Benchmarking network/n2c_query_encode/pparams/tag24: Analyzing
network/n2c_query_encode/pparams/tag24
time: [54.696 ns 54.758 ns 54.857 ns]
thrpt: [1.5449 GiB/s 1.5477 GiB/s 1.5495 GiB/s]
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
Benchmarking network/n2c_query_encode/govstate/hfc_success
Benchmarking network/n2c_query_encode/govstate/hfc_success: Warming up for 3.0000 s
Benchmarking network/n2c_query_encode/govstate/hfc_success: Collecting 100 samples in estimated 5.0002 s (63M iterations)
Benchmarking network/n2c_query_encode/govstate/hfc_success: Analyzing
network/n2c_query_encode/govstate/hfc_success
time: [79.136 ns 79.200 ns 79.304 ns]
thrpt: [9.1835 GiB/s 9.1956 GiB/s 9.2031 GiB/s]
Found 8 outliers among 100 measurements (8.00%)
1 (1.00%) low mild
5 (5.00%) high mild
2 (2.00%) high severe
Benchmarking network/n2c_query_encode/govstate/tag24
Benchmarking network/n2c_query_encode/govstate/tag24: Warming up for 3.0000 s
Benchmarking network/n2c_query_encode/govstate/tag24: Collecting 100 samples in estimated 5.0003 s (55M iterations)
Benchmarking network/n2c_query_encode/govstate/tag24: Analyzing
network/n2c_query_encode/govstate/tag24
time: [90.433 ns 90.786 ns 91.267 ns]
thrpt: [7.9798 GiB/s 8.0221 GiB/s 8.0534 GiB/s]
Found 8 outliers among 100 measurements (8.00%)
2 (2.00%) high mild
6 (6.00%) high severe
Benchmarking network/n2c_query_encode/era_mismatch
Benchmarking network/n2c_query_encode/era_mismatch: Warming up for 3.0000 s
Benchmarking network/n2c_query_encode/era_mismatch: Collecting 100 samples in estimated 5.0001 s (203M iterations)
Benchmarking network/n2c_query_encode/era_mismatch: Analyzing
network/n2c_query_encode/era_mismatch
time: [24.009 ns 24.117 ns 24.232 ns]
thrpt: [30.055 GiB/s 30.199 GiB/s 30.335 GiB/s]
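Criterion's `thrpt:` lines are derived by dividing the benchmark's configured byte count by the measured time, so the figures above can be cross-checked by hand. For the smaller payloads the implied byte count is slightly larger than the nominal payload size (CBOR framing is presumably included), but the 90000-byte blockfetch encode reproduces almost exactly:

```rust
/// Bytes-per-second expressed in GiB/s, the unit Criterion reports.
fn gib_per_s(bytes: f64, seconds: f64) -> f64 {
    (bytes / seconds) / (1024.0 * 1024.0 * 1024.0)
}

fn main() {
    // network/blockfetch/msg_block/encode/90000:
    // upper-bound time 2.0649 µs for a 90000-byte payload.
    let thrpt = gib_per_s(90_000.0, 2.0649e-6);
    println!("{thrpt:.3} GiB/s"); // close to the reported 40.595 GiB/s
}
```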
Consensus Benchmarks
Finished `bench` profile [optimized] target(s) in 34.14s
Running benches/consensus_bench.rs (target/release/deps/consensus_bench-8e1bcc233a110257)
Gnuplot not found, using plotters backend
Benchmarking consensus/vrf_leader_check/single/sigma=0.0000247
Benchmarking consensus/vrf_leader_check/single/sigma=0.0000247: Warming up for 3.0000 s
Benchmarking consensus/vrf_leader_check/single/sigma=0.0000247: Collecting 100 samples in estimated 5.0415 s (131k iterations)
Benchmarking consensus/vrf_leader_check/single/sigma=0.0000247: Analyzing
consensus/vrf_leader_check/single/sigma=0.0000247
time: [38.268 µs 38.292 µs 38.318 µs]
Found 6 outliers among 100 measurements (6.00%)
2 (2.00%) high mild
4 (4.00%) high severe
Benchmarking consensus/vrf_leader_check/single/sigma=0.001
Benchmarking consensus/vrf_leader_check/single/sigma=0.001: Warming up for 3.0000 s
Benchmarking consensus/vrf_leader_check/single/sigma=0.001: Collecting 100 samples in estimated 5.0616 s (131k iterations)
Benchmarking consensus/vrf_leader_check/single/sigma=0.001: Analyzing
consensus/vrf_leader_check/single/sigma=0.001
time: [38.446 µs 38.489 µs 38.552 µs]
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
Benchmarking consensus/vrf_leader_check/single/sigma=0.01
Benchmarking consensus/vrf_leader_check/single/sigma=0.01: Warming up for 3.0000 s
Benchmarking consensus/vrf_leader_check/single/sigma=0.01: Collecting 100 samples in estimated 5.0538 s (131k iterations)
Benchmarking consensus/vrf_leader_check/single/sigma=0.01: Analyzing
consensus/vrf_leader_check/single/sigma=0.01
time: [38.342 µs 38.375 µs 38.417 µs]
Found 8 outliers among 100 measurements (8.00%)
2 (2.00%) high mild
6 (6.00%) high severe
Benchmarking consensus/vrf_leader_check/batch/21600
Benchmarking consensus/vrf_leader_check/batch/21600: Warming up for 3.0000 s
Warning: Unable to complete 20 samples in 5.0s. You may wish to increase target time to 16.8s, or reduce sample count to 10.
Benchmarking consensus/vrf_leader_check/batch/21600: Collecting 20 samples in estimated 16.757 s (20 iterations)
Benchmarking consensus/vrf_leader_check/batch/21600: Analyzing
consensus/vrf_leader_check/batch/21600
time: [835.08 ms 836.69 ms 839.12 ms]
thrpt: [25.741 Kelem/s 25.816 Kelem/s 25.866 Kelem/s]
Found 4 outliers among 20 measurements (20.00%)
2 (10.00%) high mild
2 (10.00%) high severe
Benchmarking consensus/validate_header/replay_mode
Benchmarking consensus/validate_header/replay_mode: Warming up for 3.0000 s
Benchmarking consensus/validate_header/replay_mode: Collecting 100 samples in estimated 5.0000 s (763M iterations)
Benchmarking consensus/validate_header/replay_mode: Analyzing
consensus/validate_header/replay_mode
time: [6.5426 ns 6.5459 ns 6.5498 ns]
Found 8 outliers among 100 measurements (8.00%)
6 (6.00%) high mild
2 (2.00%) high severe
Benchmarking consensus/chain_selection/longer_fork_100
Benchmarking consensus/chain_selection/longer_fork_100: Warming up for 3.0000 s
Benchmarking consensus/chain_selection/longer_fork_100: Collecting 100 samples in estimated 5.0000 s (3.2B iterations)
Benchmarking consensus/chain_selection/longer_fork_100: Analyzing
consensus/chain_selection/longer_fork_100
time: [1.5557 ns 1.5560 ns 1.5564 ns]
Found 8 outliers among 100 measurements (8.00%)
6 (6.00%) high mild
2 (2.00%) high severe
Benchmarking consensus/chain_selection/equal_length_tiebreak
Benchmarking consensus/chain_selection/equal_length_tiebreak: Warming up for 3.0000 s
Benchmarking consensus/chain_selection/equal_length_tiebreak: Collecting 100 samples in estimated 5.0005 s (15M iterations)
Benchmarking consensus/chain_selection/equal_length_tiebreak: Analyzing
consensus/chain_selection/equal_length_tiebreak
time: [328.66 ns 329.04 ns 329.45 ns]
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) high mild
Benchmarking consensus/chain_selection/prefer_simple
Benchmarking consensus/chain_selection/prefer_simple: Warming up for 3.0000 s
Benchmarking consensus/chain_selection/prefer_simple: Collecting 100 samples in estimated 5.0000 s (2.0B iterations)
Benchmarking consensus/chain_selection/prefer_simple: Analyzing
consensus/chain_selection/prefer_simple
time: [2.4114 ns 2.4166 ns 2.4213 ns]
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high mild
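The `vrf_leader_check` timings above are dominated by VRF verification; the arithmetic part of the check is the standard Praos threshold φ_f(σ) = 1 − (1 − f)^σ, where f is the active slot coefficient (0.05 on Cardano mainnet) and σ is the pool's stake fraction — hence the benchmarked sigma values. A minimal floating-point sketch of that comparison follows; it is illustrative only, since a real node (Dugite included) must use high-precision arithmetic that matches the Haskell node bit-for-bit:

```rust
/// Praos leadership threshold: phi_f(sigma) = 1 - (1 - f)^sigma.
/// f64 here is a sketch; production code needs exact arithmetic.
fn leader_threshold(active_slot_coeff: f64, sigma: f64) -> f64 {
    1.0 - (1.0 - active_slot_coeff).powf(sigma)
}

/// A pool leads the slot when its VRF output, normalized to [0, 1),
/// falls below the threshold for its stake fraction.
fn is_slot_leader(vrf_unit: f64, sigma: f64) -> bool {
    vrf_unit < leader_threshold(0.05, sigma)
}

fn main() {
    // More stake -> higher threshold -> more slots led.
    for sigma in [0.0000247, 0.001, 0.01] {
        println!("sigma={sigma}: threshold={:.8}", leader_threshold(0.05, sigma));
    }
    assert!(is_slot_leader(1e-9, 0.001));
}
```

Note that the threshold comparison itself is cheap; the ~38 µs per check in the results is almost entirely VRF proof verification, which is why the batch benchmark scales linearly at ~25.8 Kelem/s.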
LSM Benchmarks
Finished `bench` profile [optimized] target(s) in 30.83s
Running benches/lsm_bench.rs (target/release/deps/lsm_bench-fc3938be8dfc863b)
Gnuplot not found, using plotters backend
Benchmarking lsm/insert/random_keys/10000
Benchmarking lsm/insert/random_keys/10000: Warming up for 3.0000 s
Benchmarking lsm/insert/random_keys/10000: Collecting 10 samples in estimated 5.1012 s (1485 iterations)
Benchmarking lsm/insert/random_keys/10000: Analyzing
lsm/insert/random_keys/10000
time: [3.2842 ms 3.3018 ms 3.3233 ms]
thrpt: [3.0090 Melem/s 3.0286 Melem/s 3.0449 Melem/s]
Found 1 outliers among 10 measurements (10.00%)
1 (10.00%) high severe
Benchmarking lsm/point_lookup/hit_random/10000
Benchmarking lsm/point_lookup/hit_random/10000: Warming up for 3.0000 s
Benchmarking lsm/point_lookup/hit_random/10000: Collecting 100 samples in estimated 6.4175 s (15k iterations)
Benchmarking lsm/point_lookup/hit_random/10000: Analyzing
lsm/point_lookup/hit_random/10000
time: [432.02 µs 432.26 µs 432.51 µs]
thrpt: [2.3121 Melem/s 2.3134 Melem/s 2.3147 Melem/s]
Found 3 outliers among 100 measurements (3.00%)
1 (1.00%) high mild
2 (2.00%) high severe
Benchmarking lsm/point_lookup/miss_random/10000
Benchmarking lsm/point_lookup/miss_random/10000: Warming up for 3.0000 s
Benchmarking lsm/point_lookup/miss_random/10000: Collecting 100 samples in estimated 5.0158 s (50k iterations)
Benchmarking lsm/point_lookup/miss_random/10000: Analyzing
lsm/point_lookup/miss_random/10000
time: [99.395 µs 99.486 µs 99.575 µs]
thrpt: [10.043 Melem/s 10.052 Melem/s 10.061 Melem/s]
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
Benchmarking lsm/range_scan/window_100_of_10k/10000
Benchmarking lsm/range_scan/window_100_of_10k/10000: Warming up for 3.0000 s
Benchmarking lsm/range_scan/window_100_of_10k/10000: Collecting 20 samples in estimated 5.0086 s (88k iterations)
Benchmarking lsm/range_scan/window_100_of_10k/10000: Analyzing
lsm/range_scan/window_100_of_10k/10000
time: [57.131 µs 57.206 µs 57.298 µs]
thrpt: [1.7453 Melem/s 1.7481 Melem/s 1.7504 Melem/s]
Benchmarking lsm/range_scan/full_scan/10000
Benchmarking lsm/range_scan/full_scan/10000: Warming up for 3.0000 s
Benchmarking lsm/range_scan/full_scan/10000: Collecting 20 samples in estimated 5.3004 s (3360 iterations)
Benchmarking lsm/range_scan/full_scan/10000: Analyzing
lsm/range_scan/full_scan/10000
time: [1.5733 ms 1.5756 ms 1.5775 ms]
thrpt: [6.3392 Melem/s 6.3469 Melem/s 6.3562 Melem/s]
Found 1 outliers among 20 measurements (5.00%)
1 (5.00%) high severe
Benchmarking lsm/apply_batch/inserts_10k_deletes_2.5k/10000
Benchmarking lsm/apply_batch/inserts_10k_deletes_2.5k/10000: Warming up for 3.0000 s
Benchmarking lsm/apply_batch/inserts_10k_deletes_2.5k/10000: Collecting 10 samples in estimated 5.6770 s (220 iterations)
Benchmarking lsm/apply_batch/inserts_10k_deletes_2.5k/10000: Analyzing
lsm/apply_batch/inserts_10k_deletes_2.5k/10000
time: [19.015 ms 19.130 ms 19.226 ms]
thrpt: [520.13 Kelem/s 522.75 Kelem/s 525.91 Kelem/s]
Benchmarking lsm/snapshot/save_10k
Benchmarking lsm/snapshot/save_10k: Warming up for 3.0000 s
Benchmarking lsm/snapshot/save_10k: Collecting 10 samples in estimated 5.1794 s (770 iterations)
Benchmarking lsm/snapshot/save_10k: Analyzing
lsm/snapshot/save_10k time: [509.56 µs 511.30 µs 512.93 µs]
Benchmarking lsm/snapshot/load_10k
Benchmarking lsm/snapshot/load_10k: Warming up for 3.0000 s
Benchmarking lsm/snapshot/load_10k: Collecting 10 samples in estimated 5.2962 s (660 iterations)
Benchmarking lsm/snapshot/load_10k: Analyzing
lsm/snapshot/load_10k time: [1.6091 ms 1.6161 ms 1.6238 ms]
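A point lookup in an LSM tree consults the mutable memtable first and then immutable runs from newest to oldest, with deletes recorded as tombstones — the structure trades read work for fast, sequential writes, which is the shape visible in the insert vs. lookup numbers above. A toy sketch of that read path (illustrative only; dugite-lsm's actual on-disk layout, bloom filters, and WAL are not shown):

```rust
use std::collections::BTreeMap;

/// Toy LSM: one mutable memtable plus older immutable runs, newest first.
/// A `None` value is a tombstone (deleted key).
struct ToyLsm {
    memtable: BTreeMap<Vec<u8>, Option<Vec<u8>>>,
    runs: Vec<BTreeMap<Vec<u8>, Option<Vec<u8>>>>,
}

impl ToyLsm {
    fn new() -> Self {
        Self { memtable: BTreeMap::new(), runs: Vec::new() }
    }

    fn insert(&mut self, key: &[u8], value: &[u8]) {
        self.memtable.insert(key.to_vec(), Some(value.to_vec()));
    }

    fn delete(&mut self, key: &[u8]) {
        self.memtable.insert(key.to_vec(), None); // tombstone
    }

    /// Freeze the memtable into a new immutable run (a "flush").
    fn flush(&mut self) {
        let frozen = std::mem::take(&mut self.memtable);
        self.runs.insert(0, frozen); // newest run first
    }

    /// Newest entry wins: memtable first, then runs newest-to-oldest.
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        std::iter::once(&self.memtable)
            .chain(self.runs.iter())
            .find_map(|table| table.get(key).cloned())
            .flatten() // Some(None) = tombstone -> overall miss
    }
}

fn main() {
    let mut lsm = ToyLsm::new();
    lsm.insert(b"k1", b"old");
    lsm.flush();
    lsm.insert(b"k1", b"new"); // shadows the flushed value
    lsm.delete(b"k2");
    assert_eq!(lsm.get(b"k1"), Some(b"new".to_vec()));
    assert_eq!(lsm.get(b"k2"), None);
}
```

The `apply_batch` and `snapshot` benchmarks above exercise the same machinery in bulk: batched inserts plus tombstones, and serializing/restoring the whole tree state.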
LSM Stress Tests
[1m[92m Finished[0m `release` profile [optimized] target(s) in 10.61s
[1m[92m Running[0m unittests src/lib.rs (target/release/deps/dugite_lsm-92e0a9a6efe83f91)
running 3 tests
test tree::mainnet_scale_tests::test_mainnet_scale_wal_crash_recovery ... ok
test tree::mainnet_scale_tests::test_mainnet_scale_insert_read ... ok
test tree::mainnet_scale_tests::test_mainnet_scale_delete_amplification ... ok
test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 93 filtered out; finished in 8.51s
[1m[92m Doc-tests[0m dugite_lsm
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s
Third-Party Licenses
Dugite depends on a number of open-source Rust crates. This page documents all third-party dependencies and their license terms.
Total dependencies: 393
License Summary
| License | Count |
|---|---|
| MIT OR Apache-2.0 | 205 |
| MIT | 71 |
| Apache-2.0 OR MIT | 33 |
| Unicode-3.0 | 18 |
| Apache-2.0 | 17 |
| Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT | 14 |
| Unlicense OR MIT | 6 |
| BSD-3-Clause | 4 |
| Apache-2.0 OR ISC OR MIT | 2 |
| MIT OR Apache-2.0 OR Zlib | 2 |
| BlueOak-1.0.0 | 2 |
| ISC | 2 |
| CC0-1.0 | 2 |
| BSD-2-Clause OR Apache-2.0 OR MIT | 2 |
| BSD-2-Clause | 1 |
| CC0-1.0 OR Apache-2.0 OR Apache-2.0 WITH LLVM-exception | 1 |
| CC0-1.0 OR MIT-0 OR Apache-2.0 | 1 |
| MIT OR Apache-2.0 OR BSD-1-Clause | 1 |
| Apache-2.0 OR MIT | 1 |
| Zlib | 1 |
| MIT OR Apache-2.0 OR LGPL-2.1-or-later | 1 |
| Apache-2.0 AND ISC | 1 |
| Apache-2.0 OR BSL-1.0 | 1 |
| Zlib OR Apache-2.0 OR MIT | 1 |
| (MIT OR Apache-2.0) AND Unicode-3.0 | 1 |
| Unknown | 1 |
| CDLA-Permissive-2.0 | 1 |
Key Dependencies
These are the primary libraries that Dugite directly depends on:
| Crate | Version | License | Description |
|---|---|---|---|
| pallas-codec | 1.0.0-alpha.5 | Apache-2.0 | Pallas common CBOR encoding interface and utilities |
| pallas-crypto | 1.0.0-alpha.5 | Apache-2.0 | Cryptographic primitives for Cardano |
| pallas-primitives | 1.0.0-alpha.5 | Apache-2.0 | Ledger primitives and cbor codec for the different Cardano eras |
| pallas-traverse | 1.0.0-alpha.5 | Apache-2.0 | Utilities to traverse over multi-era block data |
| pallas-addresses | 1.0.0-alpha.5 | Apache-2.0 | Ergonomic library to work with different Cardano addresses |
| pallas-network | 1.0.0-alpha.5 | Apache-2.0 | Ouroboros networking stack using async IO |
| uplc | 1.1.21 | Apache-2.0 | Utilities for working with Untyped Plutus Core |
| tokio | 1.50.0 | MIT | An event-driven, non-blocking I/O platform for writing asynchronous I/O |
| backe... | | | |
| hyper | 1.8.1 | MIT | A protective and efficient HTTP library for all. |
| reqwest | 0.12.28 | MIT OR Apache-2.0 | higher level HTTP client library |
| clap | 4.6.0 | MIT OR Apache-2.0 | A simple to use, efficient, and full-featured Command Line Argument Parser |
| serde | 1.0.228 | MIT OR Apache-2.0 | A generic serialization/deserialization framework |
| serde_json | 1.0.149 | MIT OR Apache-2.0 | A JSON serialization file format |
| bincode | 1.3.3 | MIT | A binary serialization / deserialization strategy that uses Serde for transfo... |
| blake2b_simd | 1.0.4 | MIT | a pure Rust BLAKE2b implementation with dynamic SIMD |
| sha2 | 0.9.9 | MIT OR Apache-2.0 | Pure Rust implementation of the SHA-2 hash function family, including SHA-224,... |
| ed25519-dalek | 2.2.0 | BSD-3-Clause | Fast and efficient ed25519 EdDSA key generations, signing, and verification i... |
| curve25519-dalek | 4.1.3 | BSD-3-Clause | A pure-Rust implementation of group operations on ristretto255 and Curve25519 |
| blst | 0.3.16 | Apache-2.0 | Bindings for blst BLS12-381 library |
| k256 | 0.13.4 | Apache-2.0 OR MIT | secp256k1 elliptic curve library written in pure Rust with support for ECDSA... |
| minicbor | 0.26.5 | BlueOak-1.0.0 | A small CBOR codec suitable for no_std environments. |
| tracing | 0.1.44 | MIT | Application-level tracing for Rust. |
| tracing-subscriber | 0.3.22 | MIT | Utilities for implementing and composing tracing subscribers. |
| dashmap | 6.1.0 | MIT | Blazing fast concurrent HashMap for Rust. |
| crossbeam | 0.8.4 | MIT OR Apache-2.0 | Tools for concurrent programming |
| dashu-int | 0.4.1 | MIT OR Apache-2.0 | A big integer library with good performance |
| memmap2 | 0.9.10 | MIT OR Apache-2.0 | Cross-platform Rust API for memory-mapped file IO |
| lz4 | 1.28.1 | MIT | Rust LZ4 bindings library. |
| zstd | 0.13.3 | MIT | Binding for the zstd compression library. |
| tar | 0.4.44 | MIT OR Apache-2.0 | A Rust implementation of a TAR file reader and writer. This library does not... |
| crc32fast | 1.5.0 | MIT OR Apache-2.0 | Fast, SIMD-accelerated CRC32 (IEEE) checksum computation |
| hex | 0.4.3 | MIT OR Apache-2.0 | Encoding and decoding data into/from hexadecimal representation. |
| bs58 | 0.5.1 | MIT/Apache-2.0 | Another Base58 codec implementation. |
| bech32 | 0.9.1 | MIT | Encodes and decodes the Bech32 format |
| base64 | 0.22.1 | MIT OR Apache-2.0 | encodes and decodes base64 as bytes or utf8 |
| rand | 0.9.2 | MIT OR Apache-2.0 | Random number generators and other randomness functionality. |
| chrono | 0.4.44 | MIT OR Apache-2.0 | Date and time library for Rust |
| uuid | 1.22.0 | Apache-2.0 OR MIT | A library to generate and parse UUIDs. |
| indicatif | 0.17.11 | MIT | A progress bar and cli reporting library for Rust |
| vrf_dalek | 0.1.0 | Unknown | |
All Dependencies
Complete list of all third-party crates used by Dugite, sorted alphabetically.
| Crate | Version | License |
|---|---|---|
| aho-corasick | 1.1.4 | Unlicense OR MIT |
| android_system_properties | 0.1.5 | MIT/Apache-2.0 |
| anes | 0.1.6 | MIT OR Apache-2.0 |
| anstream | 1.0.0 | MIT OR Apache-2.0 |
| anstyle | 1.0.13 | MIT OR Apache-2.0 |
| anstyle-parse | 1.0.0 | MIT OR Apache-2.0 |
| anstyle-query | 1.1.5 | MIT OR Apache-2.0 |
| anstyle-wincon | 3.0.11 | MIT OR Apache-2.0 |
| anyhow | 1.0.102 | MIT OR Apache-2.0 |
| arrayref | 0.3.9 | BSD-2-Clause |
| arrayvec | 0.7.6 | MIT OR Apache-2.0 |
| async-trait | 0.1.89 | MIT OR Apache-2.0 |
| atomic-waker | 1.1.2 | Apache-2.0 OR MIT |
| autocfg | 1.5.0 | Apache-2.0 OR MIT |
| base16ct | 0.2.0 | Apache-2.0 OR MIT |
| base58 | 0.2.0 | MIT |
| base64 | 0.22.1 | MIT OR Apache-2.0 |
| base64ct | 1.8.3 | Apache-2.0 OR MIT |
| bech32 | 0.9.1 | MIT |
| bincode | 1.3.3 | MIT |
| bit-set | 0.8.0 | Apache-2.0 OR MIT |
| bit-vec | 0.8.0 | Apache-2.0 OR MIT |
| bitflags | 2.11.0 | MIT OR Apache-2.0 |
| bitvec | 1.0.1 | MIT |
| blake2 | 0.10.6 | MIT OR Apache-2.0 |
| blake2b_simd | 1.0.4 | MIT |
| blake3 | 1.8.3 | CC0-1.0 OR Apache-2.0 OR Apache-2.0 WITH LLVM-exception |
| block-buffer | 0.9.0 | MIT OR Apache-2.0 |
| blst | 0.3.16 | Apache-2.0 |
| bs58 | 0.5.1 | MIT/Apache-2.0 |
| bumpalo | 3.20.2 | MIT OR Apache-2.0 |
| byteorder | 1.5.0 | Unlicense OR MIT |
| bytes | 1.11.1 | MIT |
| cast | 0.3.0 | MIT OR Apache-2.0 |
| cc | 1.2.56 | MIT OR Apache-2.0 |
| cfg-if | 1.0.4 | MIT OR Apache-2.0 |
| cfg_aliases | 0.2.1 | MIT |
| chrono | 0.4.44 | MIT OR Apache-2.0 |
| ciborium | 0.2.2 | Apache-2.0 |
| ciborium-io | 0.2.2 | Apache-2.0 |
| ciborium-ll | 0.2.2 | Apache-2.0 |
| clap | 4.6.0 | MIT OR Apache-2.0 |
| clap_builder | 4.6.0 | MIT OR Apache-2.0 |
| clap_derive | 4.6.0 | MIT OR Apache-2.0 |
| clap_lex | 1.1.0 | MIT OR Apache-2.0 |
| colorchoice | 1.0.4 | MIT OR Apache-2.0 |
| console | 0.15.11 | MIT |
| const-oid | 0.9.6 | Apache-2.0 OR MIT |
| constant_time_eq | 0.4.2 | CC0-1.0 OR MIT-0 OR Apache-2.0 |
| core-foundation-sys | 0.8.7 | MIT OR Apache-2.0 |
| cpufeatures | 0.2.17 | MIT OR Apache-2.0 |
| crc | 3.4.0 | MIT OR Apache-2.0 |
| crc-catalog | 2.4.0 | MIT OR Apache-2.0 |
| crc32fast | 1.5.0 | MIT OR Apache-2.0 |
| criterion | 0.5.1 | Apache-2.0 OR MIT |
| criterion-plot | 0.5.0 | MIT/Apache-2.0 |
| crossbeam | 0.8.4 | MIT OR Apache-2.0 |
| crossbeam-channel | 0.5.15 | MIT OR Apache-2.0 |
| crossbeam-deque | 0.8.6 | MIT OR Apache-2.0 |
| crossbeam-epoch | 0.9.18 | MIT OR Apache-2.0 |
| crossbeam-queue | 0.3.12 | MIT OR Apache-2.0 |
| crossbeam-utils | 0.8.21 | MIT OR Apache-2.0 |
| crunchy | 0.2.4 | MIT |
| crypto-bigint | 0.5.5 | Apache-2.0 OR MIT |
| crypto-common | 0.1.7 | MIT OR Apache-2.0 |
| cryptoxide | 0.4.4 | MIT/Apache-2.0 |
| curve25519-dalek | 4.1.3 | BSD-3-Clause |
| curve25519-dalek-derive | 0.1.1 | MIT/Apache-2.0 |
| darling | 0.21.3 | MIT |
| darling_core | 0.21.3 | MIT |
| darling_macro | 0.21.3 | MIT |
| dashmap | 6.1.0 | MIT |
| dashu-base | 0.4.1 | MIT OR Apache-2.0 |
| dashu-int | 0.4.1 | MIT OR Apache-2.0 |
| der | 0.7.10 | Apache-2.0 OR MIT |
| deranged | 0.5.8 | MIT OR Apache-2.0 |
| derive_more | 1.0.0 | MIT |
| derive_more-impl | 1.0.0 | MIT |
| digest | 0.9.0 | MIT OR Apache-2.0 |
| displaydoc | 0.2.5 | MIT OR Apache-2.0 |
| dyn-clone | 1.0.20 | MIT OR Apache-2.0 |
| ecdsa | 0.16.9 | Apache-2.0 OR MIT |
| ed25519 | 2.2.3 | Apache-2.0 OR MIT |
| ed25519-dalek | 2.2.0 | BSD-3-Clause |
| either | 1.15.0 | MIT OR Apache-2.0 |
| elliptic-curve | 0.13.8 | Apache-2.0 OR MIT |
| encode_unicode | 1.0.0 | Apache-2.0 OR MIT |
| equivalent | 1.0.2 | Apache-2.0 OR MIT |
| errno | 0.3.14 | MIT OR Apache-2.0 |
| fastrand | 2.3.0 | Apache-2.0 OR MIT |
| ff | 0.13.1 | MIT/Apache-2.0 |
| fiat-crypto | 0.2.9 | MIT OR Apache-2.0 OR BSD-1-Clause |
| filetime | 0.2.27 | MIT/Apache-2.0 |
| find-msvc-tools | 0.1.9 | MIT OR Apache-2.0 |
| fnv | 1.0.7 | Apache-2.0 / MIT |
| foldhash | 0.1.5 | Zlib |
| form_urlencoded | 1.2.2 | MIT OR Apache-2.0 |
| fs2 | 0.4.3 | MIT/Apache-2.0 |
| funty | 2.0.0 | MIT |
| futures | 0.3.32 | MIT OR Apache-2.0 |
| futures-channel | 0.3.32 | MIT OR Apache-2.0 |
| futures-core | 0.3.32 | MIT OR Apache-2.0 |
| futures-executor | 0.3.32 | MIT OR Apache-2.0 |
| futures-io | 0.3.32 | MIT OR Apache-2.0 |
| futures-macro | 0.3.32 | MIT OR Apache-2.0 |
| futures-sink | 0.3.32 | MIT OR Apache-2.0 |
| futures-task | 0.3.32 | MIT OR Apache-2.0 |
| futures-util | 0.3.32 | MIT OR Apache-2.0 |
| generic-array | 0.14.7 | MIT |
| getrandom | 0.4.2 | MIT OR Apache-2.0 |
| glob | 0.3.3 | MIT OR Apache-2.0 |
| group | 0.13.0 | MIT/Apache-2.0 |
| half | 2.7.1 | MIT OR Apache-2.0 |
| hamming | 0.1.3 | MIT/Apache-2.0 |
| hashbrown | 0.16.1 | MIT OR Apache-2.0 |
| heck | 0.5.0 | MIT OR Apache-2.0 |
| hermit-abi | 0.5.2 | MIT OR Apache-2.0 |
| hex | 0.4.3 | MIT OR Apache-2.0 |
| hmac | 0.12.1 | MIT OR Apache-2.0 |
| hostname | 0.3.1 | MIT |
| http | 1.4.0 | MIT OR Apache-2.0 |
| http-body | 1.0.1 | MIT |
| http-body-util | 0.1.3 | MIT |
| httparse | 1.10.1 | MIT OR Apache-2.0 |
| hyper | 1.8.1 | MIT |
| hyper-rustls | 0.27.7 | Apache-2.0 OR ISC OR MIT |
| hyper-util | 0.1.20 | MIT |
| iana-time-zone | 0.1.65 | MIT OR Apache-2.0 |
| iana-time-zone-haiku | 0.1.2 | MIT OR Apache-2.0 |
| icu_collections | 2.1.1 | Unicode-3.0 |
| icu_locale_core | 2.1.1 | Unicode-3.0 |
| icu_normalizer | 2.1.1 | Unicode-3.0 |
| icu_normalizer_data | 2.1.1 | Unicode-3.0 |
| icu_properties | 2.1.2 | Unicode-3.0 |
| icu_properties_data | 2.1.2 | Unicode-3.0 |
| icu_provider | 2.1.1 | Unicode-3.0 |
| id-arena | 2.3.0 | MIT/Apache-2.0 |
| ident_case | 1.0.1 | MIT/Apache-2.0 |
| idna | 1.1.0 | MIT OR Apache-2.0 |
| idna_adapter | 1.2.1 | Apache-2.0 OR MIT |
| indexmap | 2.13.0 | Apache-2.0 OR MIT |
| indicatif | 0.17.11 | MIT |
| ipnet | 2.12.0 | MIT OR Apache-2.0 |
| iri-string | 0.7.10 | MIT OR Apache-2.0 |
| is-terminal | 0.4.17 | MIT |
| is_terminal_polyfill | 1.70.2 | MIT OR Apache-2.0 |
| itertools | 0.13.0 | MIT OR Apache-2.0 |
| itoa | 1.0.17 | MIT OR Apache-2.0 |
| jobserver | 0.1.34 | MIT OR Apache-2.0 |
| js-sys | 0.3.91 | MIT OR Apache-2.0 |
| k256 | 0.13.4 | Apache-2.0 OR MIT |
| lazy_static | 1.5.0 | MIT OR Apache-2.0 |
| leb128fmt | 0.1.0 | MIT OR Apache-2.0 |
| libc | 0.2.183 | MIT OR Apache-2.0 |
| libredox | 0.1.14 | MIT |
| linux-raw-sys | 0.12.1 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| litemap | 0.8.1 | Unicode-3.0 |
| lock_api | 0.4.14 | MIT OR Apache-2.0 |
| log | 0.4.29 | MIT OR Apache-2.0 |
| lru-slab | 0.1.2 | MIT OR Apache-2.0 OR Zlib |
| lz4 | 1.28.1 | MIT |
| lz4-sys | 1.11.1+lz4-1.10.0 | MIT |
| match_cfg | 0.1.0 | MIT/Apache-2.0 |
| matchers | 0.2.0 | MIT |
| memchr | 2.8.0 | Unlicense OR MIT |
| memmap2 | 0.9.10 | MIT OR Apache-2.0 |
| miette | 5.10.0 | Apache-2.0 |
| miette-derive | 5.10.0 | Apache-2.0 |
| minicbor | 0.26.5 | BlueOak-1.0.0 |
| minicbor-derive | 0.16.2 | BlueOak-1.0.0 |
| mio | 1.1.1 | MIT |
| nu-ansi-term | 0.50.3 | MIT |
| num-bigint | 0.4.6 | MIT OR Apache-2.0 |
| num-conv | 0.2.0 | MIT OR Apache-2.0 |
| num-integer | 0.1.46 | MIT OR Apache-2.0 |
| num-modular | 0.6.1 | Apache-2.0 |
| num-order | 1.2.0 | Apache-2.0 |
| num-rational | 0.4.2 | MIT OR Apache-2.0 |
| num-traits | 0.2.19 | MIT OR Apache-2.0 |
| num_cpus | 1.17.0 | MIT OR Apache-2.0 |
| number_prefix | 0.4.0 | MIT |
| once_cell | 1.21.4 | MIT OR Apache-2.0 |
| once_cell_polyfill | 1.70.2 | MIT OR Apache-2.0 |
| oorandom | 11.1.5 | MIT |
| opaque-debug | 0.3.1 | MIT OR Apache-2.0 |
| pallas-addresses | 1.0.0-alpha.5 | Apache-2.0 |
| pallas-codec | 1.0.0-alpha.5 | Apache-2.0 |
| pallas-crypto | 1.0.0-alpha.5 | Apache-2.0 |
| pallas-network | 1.0.0-alpha.5 | Apache-2.0 |
| pallas-primitives | 1.0.0-alpha.5 | Apache-2.0 |
| pallas-traverse | 1.0.0-alpha.5 | Apache-2.0 |
| parking_lot | 0.12.5 | MIT OR Apache-2.0 |
| parking_lot_core | 0.9.12 | MIT OR Apache-2.0 |
| paste | 1.0.15 | MIT OR Apache-2.0 |
| peg | 0.8.5 | MIT |
| peg-macros | 0.8.5 | MIT |
| peg-runtime | 0.8.5 | MIT |
| percent-encoding | 2.3.2 | MIT OR Apache-2.0 |
| pin-project-lite | 0.2.17 | Apache-2.0 OR MIT |
| pin-utils | 0.1.0 | MIT OR Apache-2.0 |
| pkcs8 | 0.10.2 | Apache-2.0 OR MIT |
| pkg-config | 0.3.32 | MIT OR Apache-2.0 |
| plain | 0.2.3 | MIT/Apache-2.0 |
| plotters | 0.3.7 | MIT |
| plotters-backend | 0.3.7 | MIT |
| plotters-svg | 0.3.7 | MIT |
| portable-atomic | 1.13.1 | Apache-2.0 OR MIT |
| potential_utf | 0.1.4 | Unicode-3.0 |
| powerfmt | 0.2.0 | MIT OR Apache-2.0 |
| ppv-lite86 | 0.2.21 | MIT OR Apache-2.0 |
| pretty | 0.11.3 | MIT |
| prettyplease | 0.2.37 | MIT OR Apache-2.0 |
| proc-macro2 | 1.0.106 | MIT OR Apache-2.0 |
| proptest | 1.10.0 | MIT OR Apache-2.0 |
| quick-error | 1.2.3 | MIT/Apache-2.0 |
| quinn | 0.11.9 | MIT OR Apache-2.0 |
| quinn-proto | 0.11.14 | MIT OR Apache-2.0 |
| quinn-udp | 0.5.14 | MIT OR Apache-2.0 |
| quote | 1.0.45 | MIT OR Apache-2.0 |
| r-efi | 6.0.0 | MIT OR Apache-2.0 OR LGPL-2.1-or-later |
| radium | 0.7.0 | MIT |
| rand | 0.9.2 | MIT OR Apache-2.0 |
| rand_chacha | 0.9.0 | MIT OR Apache-2.0 |
| rand_core | 0.9.5 | MIT OR Apache-2.0 |
| rand_xorshift | 0.4.0 | MIT OR Apache-2.0 |
| rayon | 1.11.0 | MIT OR Apache-2.0 |
| rayon-core | 1.13.0 | MIT OR Apache-2.0 |
| redox_syscall | 0.7.3 | MIT |
| ref-cast | 1.0.25 | MIT OR Apache-2.0 |
| ref-cast-impl | 1.0.25 | MIT OR Apache-2.0 |
| regex | 1.12.3 | MIT OR Apache-2.0 |
| regex-automata | 0.4.14 | MIT OR Apache-2.0 |
| regex-syntax | 0.8.10 | MIT OR Apache-2.0 |
| reqwest | 0.12.28 | MIT OR Apache-2.0 |
| rfc6979 | 0.4.0 | Apache-2.0 OR MIT |
| ring | 0.17.14 | Apache-2.0 AND ISC |
| rustc-hash | 2.1.1 | Apache-2.0 OR MIT |
| rustc_version | 0.4.1 | MIT OR Apache-2.0 |
| rustix | 1.1.4 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| rustls | 0.23.37 | Apache-2.0 OR ISC OR MIT |
| rustls-pki-types | 1.14.0 | MIT OR Apache-2.0 |
| rustls-webpki | 0.103.9 | ISC |
| rustversion | 1.0.22 | MIT OR Apache-2.0 |
| rusty-fork | 0.3.1 | MIT/Apache-2.0 |
| ryu | 1.0.23 | Apache-2.0 OR BSL-1.0 |
| same-file | 1.0.6 | Unlicense/MIT |
| schemars | 1.2.1 | MIT |
| scopeguard | 1.2.0 | MIT OR Apache-2.0 |
| sec1 | 0.7.3 | Apache-2.0 OR MIT |
| secp256k1 | 0.26.0 | CC0-1.0 |
| secp256k1-sys | 0.8.2 | CC0-1.0 |
| semver | 1.0.27 | MIT OR Apache-2.0 |
| serde | 1.0.228 | MIT OR Apache-2.0 |
| serde_core | 1.0.228 | MIT OR Apache-2.0 |
| serde_derive | 1.0.228 | MIT OR Apache-2.0 |
| serde_json | 1.0.149 | MIT OR Apache-2.0 |
| serde_spanned | 0.6.9 | MIT OR Apache-2.0 |
| serde_urlencoded | 0.7.1 | MIT/Apache-2.0 |
| serde_with | 3.17.0 | MIT OR Apache-2.0 |
| serde_with_macros | 3.17.0 | MIT OR Apache-2.0 |
| sha2 | 0.9.9 | MIT OR Apache-2.0 |
| sharded-slab | 0.1.7 | MIT |
| shlex | 1.3.0 | MIT OR Apache-2.0 |
| signal-hook-registry | 1.4.8 | MIT OR Apache-2.0 |
| signature | 2.2.0 | Apache-2.0 OR MIT |
| slab | 0.4.12 | MIT |
| smallvec | 1.15.1 | MIT OR Apache-2.0 |
| snap | 1.1.1 | BSD-3-Clause |
| socket2 | 0.6.3 | MIT OR Apache-2.0 |
| spki | 0.7.3 | Apache-2.0 OR MIT |
| stable_deref_trait | 1.2.1 | MIT OR Apache-2.0 |
| static_assertions | 1.1.0 | MIT OR Apache-2.0 |
| strsim | 0.11.1 | MIT |
| strum | 0.26.3 | MIT |
| strum_macros | 0.26.4 | MIT |
| subtle | 2.6.1 | BSD-3-Clause |
| syn | 2.0.117 | MIT OR Apache-2.0 |
| sync_wrapper | 1.0.2 | Apache-2.0 |
| synstructure | 0.13.2 | MIT |
| tap | 1.0.1 | MIT |
| tar | 0.4.44 | MIT OR Apache-2.0 |
| tempfile | 3.27.0 | MIT OR Apache-2.0 |
| thiserror | 2.0.18 | MIT OR Apache-2.0 |
| thiserror-impl | 2.0.18 | MIT OR Apache-2.0 |
| thread_local | 1.1.9 | MIT OR Apache-2.0 |
| threadpool | 1.8.1 | MIT/Apache-2.0 |
| time | 0.3.47 | MIT OR Apache-2.0 |
| time-core | 0.1.8 | MIT OR Apache-2.0 |
| time-macros | 0.2.27 | MIT OR Apache-2.0 |
| tinystr | 0.8.2 | Unicode-3.0 |
| tinytemplate | 1.2.1 | Apache-2.0 OR MIT |
| tinyvec | 1.10.0 | Zlib OR Apache-2.0 OR MIT |
| tinyvec_macros | 0.1.1 | MIT OR Apache-2.0 OR Zlib |
| tokio | 1.50.0 | MIT |
| tokio-macros | 2.6.1 | MIT |
| tokio-rustls | 0.26.4 | MIT OR Apache-2.0 |
| tokio-util | 0.7.18 | MIT |
| toml | 0.8.23 | MIT OR Apache-2.0 |
| toml_datetime | 0.6.11 | MIT OR Apache-2.0 |
| toml_edit | 0.22.27 | MIT OR Apache-2.0 |
| toml_write | 0.1.2 | MIT OR Apache-2.0 |
| tower | 0.5.3 | MIT |
| tower-http | 0.6.8 | MIT |
| tower-layer | 0.3.3 | MIT |
| tower-service | 0.3.3 | MIT |
| tracing | 0.1.44 | MIT |
| tracing-appender | 0.2.4 | MIT |
| tracing-attributes | 0.1.31 | MIT |
| tracing-core | 0.1.36 | MIT |
| tracing-log | 0.2.0 | MIT |
| tracing-serde | 0.2.0 | MIT |
| tracing-subscriber | 0.3.22 | MIT |
| try-lock | 0.2.5 | MIT |
| typed-arena | 2.0.2 | MIT |
| typenum | 1.19.0 | MIT OR Apache-2.0 |
| unarray | 0.1.4 | MIT OR Apache-2.0 |
| unicode-ident | 1.0.24 | (MIT OR Apache-2.0) AND Unicode-3.0 |
| unicode-segmentation | 1.12.0 | MIT OR Apache-2.0 |
| unicode-width | 0.2.2 | MIT OR Apache-2.0 |
| unicode-xid | 0.2.6 | MIT OR Apache-2.0 |
| untrusted | 0.9.0 | ISC |
| uplc | 1.1.21 | Apache-2.0 |
| url | 2.5.8 | MIT OR Apache-2.0 |
| utf8_iter | 1.0.4 | Apache-2.0 OR MIT |
| utf8parse | 0.2.2 | Apache-2.0 OR MIT |
| uuid | 1.22.0 | Apache-2.0 OR MIT |
| valuable | 0.1.1 | MIT |
| version_check | 0.9.5 | MIT/Apache-2.0 |
| vrf_dalek | 0.1.0 | Unknown |
| wait-timeout | 0.2.1 | MIT/Apache-2.0 |
| walkdir | 2.5.0 | Unlicense/MIT |
| want | 0.3.1 | MIT |
| wasi | 0.9.0+wasi-snapshot-preview1 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wasip2 | 1.0.2+wasi-0.2.9 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wasip3 | 0.4.0+wasi-0.3.0-rc-2026-01-06 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wasm-bindgen | 0.2.114 | MIT OR Apache-2.0 |
| wasm-bindgen-futures | 0.4.64 | MIT OR Apache-2.0 |
| wasm-bindgen-macro | 0.2.114 | MIT OR Apache-2.0 |
| wasm-bindgen-macro-support | 0.2.114 | MIT OR Apache-2.0 |
| wasm-bindgen-shared | 0.2.114 | MIT OR Apache-2.0 |
| wasm-encoder | 0.244.0 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wasm-metadata | 0.244.0 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wasm-streams | 0.4.2 | MIT OR Apache-2.0 |
| wasmparser | 0.244.0 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| web-sys | 0.3.91 | MIT OR Apache-2.0 |
| web-time | 1.1.0 | MIT OR Apache-2.0 |
| webpki-roots | 1.0.6 | CDLA-Permissive-2.0 |
| winapi | 0.3.9 | MIT/Apache-2.0 |
| winapi-i686-pc-windows-gnu | 0.4.0 | MIT/Apache-2.0 |
| winapi-util | 0.1.11 | Unlicense OR MIT |
| winapi-x86_64-pc-windows-gnu | 0.4.0 | MIT/Apache-2.0 |
| windows-core | 0.62.2 | MIT OR Apache-2.0 |
| windows-implement | 0.60.2 | MIT OR Apache-2.0 |
| windows-interface | 0.59.3 | MIT OR Apache-2.0 |
| windows-link | 0.2.1 | MIT OR Apache-2.0 |
| windows-result | 0.4.1 | MIT OR Apache-2.0 |
| windows-strings | 0.5.1 | MIT OR Apache-2.0 |
| windows-sys | 0.61.2 | MIT OR Apache-2.0 |
| windows-targets | 0.53.5 | MIT OR Apache-2.0 |
| windows_aarch64_gnullvm | 0.53.1 | MIT OR Apache-2.0 |
| windows_aarch64_msvc | 0.53.1 | MIT OR Apache-2.0 |
| windows_i686_gnu | 0.53.1 | MIT OR Apache-2.0 |
| windows_i686_gnullvm | 0.53.1 | MIT OR Apache-2.0 |
| windows_i686_msvc | 0.53.1 | MIT OR Apache-2.0 |
| windows_x86_64_gnu | 0.53.1 | MIT OR Apache-2.0 |
| windows_x86_64_gnullvm | 0.53.1 | MIT OR Apache-2.0 |
| windows_x86_64_msvc | 0.53.1 | MIT OR Apache-2.0 |
| winnow | 0.7.15 | MIT |
| wit-bindgen | 0.51.0 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wit-bindgen-core | 0.51.0 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wit-bindgen-rust | 0.51.0 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wit-bindgen-rust-macro | 0.51.0 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wit-component | 0.244.0 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| wit-parser | 0.244.0 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| writeable | 0.6.2 | Unicode-3.0 |
| wyz | 0.5.1 | MIT |
| xattr | 1.6.1 | MIT OR Apache-2.0 |
| yoke | 0.8.1 | Unicode-3.0 |
| yoke-derive | 0.8.1 | Unicode-3.0 |
| zerocopy | 0.8.42 | BSD-2-Clause OR Apache-2.0 OR MIT |
| zerocopy-derive | 0.8.42 | BSD-2-Clause OR Apache-2.0 OR MIT |
| zerofrom | 0.1.6 | Unicode-3.0 |
| zerofrom-derive | 0.1.6 | Unicode-3.0 |
| zeroize | 1.8.2 | Apache-2.0 OR MIT |
| zeroize_derive | 1.4.3 | Apache-2.0 OR MIT |
| zerotrie | 0.2.3 | Unicode-3.0 |
| zerovec | 0.11.5 | Unicode-3.0 |
| zerovec-derive | 0.11.2 | Unicode-3.0 |
| zmij | 1.0.21 | MIT |
| zstd | 0.13.3 | MIT |
| zstd-safe | 7.2.4 | MIT OR Apache-2.0 |
| zstd-sys | 2.0.16+zstd.1.5.7 | MIT/Apache-2.0 |
Regenerating This Page
This page is generated from Cargo.lock metadata. To regenerate after dependency changes:
```shell
python3 scripts/generate-licenses.py > docs/src/reference/third-party-licenses.md
```
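The summary table above contains near-duplicate rows ("MIT OR Apache-2.0" vs "Apache-2.0 OR MIT" vs the legacy "MIT/Apache-2.0") because Cargo metadata spells the same dual license several ways. A generator can collapse these by canonicalizing each expression before counting. A minimal sketch in Rust; the normalization rule is an assumption, not what `generate-licenses.py` actually does:

```rust
// Canonicalize a simple license expression so that "MIT OR Apache-2.0",
// "Apache-2.0 OR MIT", and the legacy "MIT/Apache-2.0" all map to one key.
// NOTE: sketch only; it ignores parentheses and AND/WITH grouping.
fn normalize_license(expr: &str) -> String {
    let mut parts: Vec<String> = expr
        .replace('/', " OR ")              // legacy slash form -> OR form
        .split(" OR ")
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect();
    parts.sort();                          // order-independent key
    parts.dedup();
    parts.join(" OR ")
}
```

With this pass, the three spellings of the MIT/Apache dual license would fall into a single summary row.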
Troubleshooting
Common issues and their solutions when running Dugite.
Build Issues
Compilation is slow
The initial build compiles all dependencies from source, which takes several minutes. Subsequent builds are much faster due to cargo caching.
For faster development iteration, use debug builds:
```shell
cargo build   # debug mode, faster compilation
```
Only use `--release` when running against a live network.
Connection Issues
Cannot connect to peers
Symptoms: Node starts but never receives blocks. Logs show connection failures.
Possible causes:
- Firewall blocking outbound connections. Ensure outbound TCP connections to port 3001 are allowed.
- Incorrect network magic. Verify the `NetworkMagic` in your config matches the target network:
  - Mainnet: `764824073`
  - Preview: `2`
  - Preprod: `1`
- DNS resolution failure. If the topology uses hostnames, ensure DNS is working:

  ```shell
  nslookup preview-node.play.dev.cardano.org
  ```

- Stale topology. Peer addresses may change. Download the latest topology from the Cardano Operations Book.
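For reference, the magic values above can be checked mechanically. A minimal sketch; the `network_magic` helper is hypothetical, not part of Dugite's API:

```rust
// Hypothetical helper mirroring the list above: map a network name to its
// NetworkMagic so a misconfigured value is easy to spot in tests.
fn network_magic(network: &str) -> Option<u64> {
    match network {
        "mainnet" => Some(764_824_073),
        "preview" => Some(2),
        "preprod" => Some(1),
        _ => None, // unknown network: fail loudly rather than guess
    }
}
```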
Handshake failures
```
Error: Handshake failed: version mismatch
```
This usually means the peer does not support the protocol version Dugite is requesting (V14+). Ensure you are connecting to an up-to-date cardano-node (version 10.x+).
Socket Issues
Cannot connect to node socket
```
Error: Cannot connect to node socket './node.sock': No such file or directory
```
Solutions:
- Node is not running. Start the node first.
- Wrong socket path. Verify the socket path matches what the node was started with:

  ```shell
  dugite-cli query tip --socket-path /path/to/actual/node.sock
  ```

- Permission denied. Ensure the user running the CLI has read/write access to the socket file.
- Stale socket file. If the node crashed, the socket file may remain. Delete it and restart:

  ```shell
  rm ./node.sock
  dugite-node run ...
  ```
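The stale-socket case can also be detected programmatically: a socket file that exists on disk but refuses connections is a leftover from a crash. A minimal sketch, assuming a Unix platform; this helper is hypothetical, not part of dugite-cli:

```rust
use std::os::unix::net::UnixStream;
use std::path::Path;

// A node socket is "stale" when the file exists on disk but nothing is
// listening on it -- the typical leftover after a crash. A missing file
// is not stale; it just means the node has not started yet.
fn socket_is_stale(path: &Path) -> bool {
    path.exists() && UnixStream::connect(path).is_err()
}
```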
Socket permission denied
```
Error: Permission denied (os error 13)
```
The Unix socket file inherits the permissions of the process that created it. Ensure both the node and CLI processes run as the same user, or adjust the socket file permissions.
Storage Issues
Database corruption
Symptoms: Node crashes on startup with storage errors.
Solution: The safest approach is to delete the database and resync:
```shell
rm -rf ./db-path
dugite-node run ...
```
For faster recovery, use Mithril snapshot import:
```shell
rm -rf ./db-path
dugite-node mithril-import --network-magic 2 --database-path ./db-path
dugite-node run ...
```
Disk space
Cardano databases grow continuously. Approximate sizes:
| Network | Database Size |
|---|---|
| Mainnet | 90-140+ GB |
| Preview | 8-15+ GB |
| Preprod | 20-35+ GB |
Monitor disk usage and ensure adequate free space.
Sync Issues
Sync is slow
Possible causes:
- Single peer. Dugite benefits from multiple peers for block fetching. Ensure your topology includes multiple bootstrap peers or enable ledger-based peer discovery.
- Network latency. The ChainSync protocol has an inherent per-header RTT (~300 ms); high-latency connections reduce throughput.
- Slow disk. Storage performance depends on disk I/O speed; SSDs are strongly recommended. On Linux, enable `io_uring` for improved UTxO storage performance: `cargo build --release --features io-uring`.
- CPU-bound ledger validation. Block processing includes UTxO validation and Plutus script execution, which is CPU-intensive during sync.
Recommendation: Use Mithril snapshot import to bypass the initial sync bottleneck entirely.
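The latency point is worth quantifying: without pipelining, a ~300 ms per-header round trip caps a single peer at roughly 3 headers per second, which is why Dugite pipelines header requests and fetches blocks from several peers in parallel. A back-of-envelope sketch; the in-flight depth is illustrative:

```rust
// Rough ceiling on headers per second from a single peer, given the
// per-header RTT and how many requests are kept in flight (pipelined).
fn headers_per_sec(rtt_ms: f64, in_flight: u32) -> f64 {
    (1000.0 / rtt_ms) * in_flight as f64
}
// With a 300 ms RTT, one request in flight yields ~3.3 headers/s;
// ten in flight raises the ceiling to ~33 headers/s.
```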
Sync stalls
Symptoms: Progress percentage stops increasing, no new blocks logged.
Possible causes:
- Peer disconnected. The node reconnects automatically with exponential backoff; wait a few minutes.
- All peers at the same height. If all configured peers are also syncing, they may have no new blocks to serve. Add more peers to the topology.
- Resource exhaustion. Check for out-of-memory conditions or file descriptor limits.
Memory Issues
Out of memory
Dugite's memory usage depends on:
- UTxO set size (the largest memory consumer)
- Number of connected peers
- VolatileDB (last k=2160 blocks in memory)
For mainnet, expect memory usage of 8-16 GB depending on sync progress.
If running on a memory-constrained system, ensure adequate swap space is configured.
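To see why the UTxO set dominates, the VolatileDB contribution can be bounded directly: k = 2160 blocks at mainnet's maximum block body size (90 112 bytes, assumed here as the worst case) is under 200 MB. A quick sketch:

```rust
// Worst-case VolatileDB footprint: the last k blocks held in memory.
// 90_112 bytes is assumed here as mainnet's max block body size.
fn volatile_db_bytes(k: u64, max_block_bytes: u64) -> u64 {
    k * max_block_bytes
}
// volatile_db_bytes(2160, 90_112) is ~195 MB -- small next to a mainnet
// UTxO set, which is why the UTxO set is the largest memory consumer.
```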
Logging
Increase log verbosity
Use the `RUST_LOG` environment variable:

```shell
# Debug all crates
RUST_LOG=debug dugite-node run ...

# Debug specific crate
RUST_LOG=dugite_network=debug dugite-node run ...

# Trace level (very verbose)
RUST_LOG=trace dugite-node run ...
```
Log to file
Use the built-in file logging:
```shell
dugite-node run --log-output file --log-dir /var/log/dugite ...
```
Log files are rotated daily by default. See Logging for rotation options and multi-target output.
SIGHUP: Topology Reload and Log Verbosity
Sending SIGHUP to the node triggers two live reloads without a restart:
- Topology reload — The node re-reads the topology file and updates the peer manager:

  ```shell
  # Edit topology.json, then:
  kill -HUP $(pidof dugite-node)
  ```

- Log verbosity reload — If `LogDirective` is set in the config file, the per-subsystem log filter is reloaded:

  ```shell
  # Add/update in your config JSON:
  #   "LogDirective": "info,dugite_network=trace"
  #
  # Then send SIGHUP:
  kill -HUP $(pidof dugite-node)
  ```

  This is useful for enabling trace logging for a specific subsystem while the node is running, without disrupting sync or block production.
See Logging for full details.
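The `LogDirective` string uses the same comma-separated `target=level` shape as `RUST_LOG`. A minimal sketch of how such a directive splits into a default level plus per-subsystem overrides; illustrative only, not Dugite's actual parser:

```rust
// Split "info,dugite_network=trace" into a default level plus
// per-target overrides, the same shape RUST_LOG-style filters use.
fn parse_directive(s: &str) -> (Option<String>, Vec<(String, String)>) {
    let mut default = None;
    let mut overrides = Vec::new();
    for part in s.split(',').map(str::trim).filter(|p| !p.is_empty()) {
        match part.split_once('=') {
            // "target=level" form: a per-subsystem override
            Some((target, level)) => {
                overrides.push((target.to_string(), level.to_string()))
            }
            // bare level: the default for everything else
            None => default = Some(part.to_string()),
        }
    }
    (default, overrides)
}
```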
Block Producer Issues
Block producer shows ZERO stake
Cause: Snapshot loaded before the UTxO store was attached, corrupting `pool_stake` values.
Fix: Automatic on restart — `rebuild_stake_distribution` runs after the UTxO store is attached.
Verify: Check the log for "Block producer: pool stake in 'set' snapshot" with a non-zero `pool_stake_lovelace` value.
Node enters reconnection loop after forging
Cause: Forged block lost a slot battle and was persisted to ImmutableDB.
Symptoms: Log shows "intersection fell to Origin" or the node repeatedly reconnects to upstream peers.
Fix: The fork recovery mechanism now handles this automatically. If the issue persists, re-import from Mithril:
```shell
dugite-node mithril-import --network-magic <magic> --database-path <path>
```
See Fork Recovery & ImmutableDB Contamination for details on how the recovery mechanism works.
Epoch & State Issues
Epoch number appears wrong (e.g., epoch 445 instead of 1239)
Cause: Snapshot saved with incorrect `epoch_length` defaults (mainnet 432000 instead of preview 86400).
Fix: Automatic correction on load — the epoch is recalculated from the tip slot using genesis parameters.
Log message: "Snapshot epoch differs from computed epoch — correcting"
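The correction is plain slot arithmetic: for a fixed epoch length, the epoch is the tip slot divided by the genesis `epoch_length`, so interpreting a preview snapshot (epoch length 86 400) with the mainnet default (432 000) yields a much smaller epoch number. A sketch, ignoring the Byron-era offset that the real genesis-based computation must handle:

```rust
// Epoch containing a given slot, for a chain with a fixed epoch length.
// Sketch only: a real computation must also account for the Byron/Shelley
// transition offset on networks that have one.
fn epoch_for_slot(slot: u64, epoch_length: u64) -> u64 {
    slot / epoch_length
}
// The same tip slot lands in very different epochs under the two lengths,
// which is the mismatch the startup correction detects and repairs.
```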
VRF verification failures after restart
Cause: Epoch nonce in snapshot may be stale if saved with wrong epoch boundaries, or the node is replaying blocks in non-strict mode.
Fix: VRF verification is non-fatal during non-strict (initial sync / replay) mode. Once the node reaches the chain tip it enables strict verification and the serialized epoch_nonce from the snapshot is used directly — matching Haskell's behavior.
Getting Help
If you encounter an issue not covered here:
- Check the GitHub issues
- Open a new issue with:
  - Dugite version (`dugite-node --version`)
  - Operating system
  - Configuration files (redact any sensitive info)
  - Relevant log output
  - Steps to reproduce