Introduction

Dugite is a Cardano node implementation written in Rust, aiming for 100% compatibility with cardano-node (Haskell).

Built by Sandstone Pool.

Why Dugite?

The Cardano ecosystem benefits from client diversity. Running multiple independent node implementations strengthens the network by:

  • Resilience — A bug in one implementation does not bring down the entire network.
  • Performance — Rust's zero-cost abstractions and memory safety without garbage collection enable high-throughput block processing.
  • Verification — An independent implementation validates the Cardano specification against the reference Haskell node, catching ambiguities and edge cases.
  • Accessibility — A Rust codebase broadens the pool of developers who can contribute to Cardano infrastructure.

Key Features

  • Full Ouroboros Praos consensus — Slot leader checks, VRF validation, KES period tracking, epoch nonce computation.
  • Multi-era support — Byron, Shelley, Allegra, Mary, Alonzo, Babbage, and Conway eras.
  • Conway governance (CIP-1694) — DRep registration, voting, proposals, constitutional committee, treasury withdrawals.
  • Pipelined multi-peer sync — Header collection from a primary peer with parallel block fetching from multiple peers.
  • Plutus script execution — Plutus V1/V2/V3 evaluation via the uplc CEK machine.
  • Node-to-Node (N2N) protocol — Full Ouroboros mini-protocol suite: ChainSync, BlockFetch, TxSubmission2, KeepAlive, PeerSharing.
  • Node-to-Client (N2C) protocol — Unix domain socket server with LocalChainSync, LocalStateQuery, LocalTxSubmission, and LocalTxMonitor.
  • cardano-cli compatible CLI — Key generation, transaction building, signing, submission, queries, and governance commands.
  • Prometheus metrics — Real-time node metrics on port 12798.
  • P2P networking — Peer manager with cold/warm/hot lifecycle, DNS multi-resolution, ledger-based peer discovery, and inbound rate limiting.
  • Mithril snapshot import — Fast initial sync by importing a Mithril-certified snapshot.
  • SIGHUP topology reload — Update peer configuration without restarting the node.

Project Status

Dugite is under active development. It can sync against both the Cardano mainnet and preview/preprod testnets. The node implements the full N2N and N2C protocol stacks, ledger validation, epoch transitions with stake snapshots and reward distribution, and Conway-era governance.

See the Feature Status section in the repository README for a detailed checklist of implemented and pending features.

License

Dugite is released under the Apache-2.0 License.

Installation

Dugite can be installed from pre-built binaries, container images, or built from source.

Pre-built Binaries

Download the latest release from GitHub Releases:

| Platform | Architecture | Download |
|---|---|---|
| Linux | x86_64 | dugite-x86_64-linux.tar.gz |
| Linux | aarch64 | dugite-aarch64-linux.tar.gz |
| macOS | x86_64 (Intel) | dugite-x86_64-macos.tar.gz |
| macOS | Apple Silicon | dugite-aarch64-macos.tar.gz |

# Example: download and extract for Linux x86_64
curl -LO https://github.com/michaeljfazio/dugite/releases/latest/download/dugite-x86_64-linux.tar.gz
tar xzf dugite-x86_64-linux.tar.gz
sudo mv dugite-node dugite-cli dugite-monitor dugite-config /usr/local/bin/

Verify checksums:

curl -LO https://github.com/michaeljfazio/dugite/releases/latest/download/SHA256SUMS.txt
sha256sum -c SHA256SUMS.txt

Container Image

Multi-architecture container images (amd64 and arm64) are published to GitHub Container Registry:

docker pull ghcr.io/michaeljfazio/dugite:latest

The image uses a distroless base (gcr.io/distroless/cc-debian12:nonroot) for minimal attack surface — no shell, no package manager, runs as nonroot (UID 65532).

Run the node:

docker run -d \
  --name dugite \
  -p 3001:3001 \
  -p 12798:12798 \
  -v dugite-data:/opt/dugite/db \
  ghcr.io/michaeljfazio/dugite:latest

See Kubernetes Deployment for production container deployments.

Building from Source

Prerequisites

Rust Toolchain

Install the latest stable Rust toolchain via rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Verify the installation:

rustc --version
cargo --version

Dugite requires Rust 1.75 or later (edition 2021).

System Dependencies

Dugite's storage layer is pure Rust with no system dependencies beyond the Rust toolchain. Block storage uses append-only chunk files, and the UTxO set uses dugite-lsm, a pure Rust LSM tree. On all platforms, cargo build works out of the box.

Build

Clone the repository:

git clone https://github.com/michaeljfazio/dugite.git
cd dugite

Build in release mode:

cargo build --release

On Linux with kernel 5.1+, you can enable io_uring for improved disk I/O in the UTxO LSM tree:

cargo build --release --features io-uring

This produces four binaries in target/release/:

| Binary | Description |
|---|---|
| dugite-node | The Cardano node |
| dugite-cli | The cardano-cli compatible command-line interface |
| dugite-monitor | Terminal monitoring dashboard (ratatui-based, real-time metrics via Prometheus polling) |
| dugite-config | Interactive TUI configuration editor with tree navigation, inline editing, and diff view |

Install Binaries

To install the binaries into your $CARGO_HOME/bin (typically ~/.cargo/bin/):

cargo install --path crates/dugite-node
cargo install --path crates/dugite-cli
cargo install --path crates/dugite-monitor
cargo install --path crates/dugite-config

Running Tests

Verify everything is working:

cargo test --all

The project enforces a zero-warning policy. You can run the full CI check locally:

cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test --all

Development Build

For faster compilation during development, use the debug profile (the default):

cargo build

Debug builds are significantly faster to compile but produce slower binaries. Always use --release for running a node against a live network.

Quick Start

This guide walks you through getting Dugite running on the Cardano preview testnet.

1. Install

Option A: Pre-built binary (fastest)

curl -LO https://github.com/michaeljfazio/dugite/releases/latest/download/dugite-x86_64-linux.tar.gz
tar xzf dugite-x86_64-linux.tar.gz
sudo mv dugite-node dugite-cli dugite-monitor dugite-config /usr/local/bin/

Option B: Container image

docker pull ghcr.io/michaeljfazio/dugite:latest

Option C: Build from source

git clone https://github.com/michaeljfazio/dugite.git
cd dugite
cargo build --release

2. Import a Mithril Snapshot

Import a Mithril-certified snapshot to skip syncing millions of blocks from genesis:

dugite-node mithril-import \
  --network-magic 2 \
  --database-path ./db-preview

This downloads the latest snapshot from the Mithril aggregator, extracts it, and imports all blocks into the database. On preview testnet this takes approximately 9 minutes (downloading a ~2.7 GB snapshot containing ~4M blocks).

3. Run the Node

Dugite ships with configuration files for all networks. If you built from source, they are in the config/ directory:

dugite-node run \
  --config config/preview-config.json \
  --topology config/preview-topology.json \
  --database-path ./db-preview \
  --socket-path ./node.sock \
  --host-addr 0.0.0.0 \
  --port 3001

Or with Docker:

docker run -d \
  --name dugite \
  -p 3001:3001 \
  -p 12798:12798 \
  -v dugite-data:/opt/dugite/db \
  ghcr.io/michaeljfazio/dugite:latest

The node will:

  1. Load the configuration and genesis files
  2. Replay imported blocks through the ledger (builds UTxO set, protocol params, delegations)
  3. Connect to preview testnet peers
  4. Sync remaining blocks to chain tip

Progress is logged every 5 seconds, showing sync percentage, blocks-per-second throughput, UTxO count, and epoch number. Logs go to stdout by default; add --log-output file --log-dir /var/log/dugite for file logging. See Logging for all options.

4. Query the Node

Once the node is running, query it using the CLI via the Unix domain socket:

# Query the current tip
dugite-cli query tip \
  --socket-path ./node.sock \
  --testnet-magic 2

Example output:

{
    "slot": 106453897,
    "hash": "8498ccda...",
    "block": 4094745,
    "epoch": 1232,
    "era": "Conway",
    "syncProgress": "100.00"
}

# Query protocol parameters
dugite-cli query protocol-parameters \
  --socket-path ./node.sock \
  --testnet-magic 2

# Query mempool
dugite-cli query tx-mempool info \
  --socket-path ./node.sock \
  --testnet-magic 2

5. Check Metrics

Prometheus metrics are served on port 12798:

curl -s http://localhost:12798/metrics | grep sync_progress
# sync_progress_percent 10000

Configuration

Dugite reads a JSON configuration file that controls network settings, genesis file paths, P2P parameters, and tracing options. The format is compatible with the cardano-node configuration format.

Configuration File Format

The configuration file uses PascalCase keys (matching the cardano-node convention):

{
  "Network": "Testnet",
  "NetworkMagic": 2,
  "EnableP2P": true,
  "DiffusionMode": "InitiatorAndResponder",
  "PeerSharing": null,
  "Protocol": {
    "RequiresNetworkMagic": "RequiresMagic"
  },
  "ShelleyGenesisFile": "shelley-genesis.json",
  "ByronGenesisFile": "byron-genesis.json",
  "AlonzoGenesisFile": "alonzo-genesis.json",
  "ConwayGenesisFile": "conway-genesis.json",
  "TargetNumberOfRootPeers": 60,
  "TargetNumberOfActivePeers": 15,
  "TargetNumberOfEstablishedPeers": 40,
  "TargetNumberOfKnownPeers": 85,
  "TargetNumberOfActiveBigLedgerPeers": 5,
  "TargetNumberOfEstablishedBigLedgerPeers": 10,
  "TargetNumberOfKnownBigLedgerPeers": 15,
  "MinSeverity": "Info",
  "TraceOptions": {
    "TraceBlockFetchClient": false,
    "TraceBlockFetchServer": false,
    "TraceChainDb": false,
    "TraceChainSyncClient": false,
    "TraceChainSyncServer": false,
    "TraceForge": false,
    "TraceMempool": false
  }
}

Fields Reference

Network Settings

| Field | Type | Default | Description |
|---|---|---|---|
| Network | string | "Mainnet" | Network identifier: "Mainnet" or "Testnet" |
| NetworkMagic | integer | auto | Network magic number. If omitted, derived from Network (764824073 for mainnet) |
| EnableP2P | boolean | true | Enable P2P networking mode. When true (the default), the peer governor manages peer connections with automatic churn, ledger-based discovery, and peer sharing. When false, the node uses only static topology connections |
| DiffusionMode | string | "InitiatorAndResponder" | Controls inbound connection acceptance. "InitiatorAndResponder" (default): relay mode, accepts inbound N2N connections. "InitiatorOnly": block producer mode, outbound only (no listening port opened) |
| PeerSharing | boolean/null | null | Enable the peer sharing mini-protocol. When null (default), peer sharing is automatically disabled for block producers (when --shelley-kes-key is provided) and enabled for relays. Set explicitly to override |
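The NetworkMagic fallback described above can be pictured as a small lookup. This is an illustrative sketch, not Dugite's actual code; the mainnet value comes from the table:

```rust
/// Sketch: derive the effective network magic when `NetworkMagic` is omitted.
/// Illustrative only; values mirror the field table above.
fn effective_magic(network: &str, explicit: Option<u64>) -> Option<u64> {
    match explicit {
        Some(m) => Some(m),
        None => match network {
            "Mainnet" => Some(764_824_073),
            // "Testnet" has no single magic; it must be set explicitly.
            _ => None,
        },
    }
}

fn main() {
    assert_eq!(effective_magic("Mainnet", None), Some(764_824_073));
    assert_eq!(effective_magic("Testnet", Some(2)), Some(2));
    println!("ok");
}
```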

Protocol

| Field | Type | Default | Description |
|---|---|---|---|
| Protocol.RequiresNetworkMagic | string | "RequiresMagic" | Whether network magic is required in the handshake |

Genesis Files

Genesis file paths are resolved relative to the directory containing the configuration file. For example, if your config is at /opt/cardano/config.json and specifies "ShelleyGenesisFile": "shelley-genesis.json", Dugite will look for /opt/cardano/shelley-genesis.json.

| Field | Type | Default | Description |
|---|---|---|---|
| ShelleyGenesisFile | string | none | Path to Shelley genesis JSON |
| ByronGenesisFile | string | none | Path to Byron genesis JSON |
| AlonzoGenesisFile | string | none | Path to Alonzo genesis JSON |
| ConwayGenesisFile | string | none | Path to Conway genesis JSON |

Tip: Genesis files for each network can be downloaded from the Cardano Operations Book.
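
The relative-path rule above can be sketched with std::path (illustrative, not Dugite's actual code):

```rust
use std::path::{Path, PathBuf};

/// Sketch of the resolution rule: genesis paths are joined onto the config
/// file's parent directory. Note that Path::join passes absolute genesis
/// paths through unchanged.
fn resolve_genesis(config_path: &Path, genesis: &str) -> PathBuf {
    let dir = config_path.parent().unwrap_or_else(|| Path::new("."));
    dir.join(genesis)
}

fn main() {
    let p = resolve_genesis(Path::new("/opt/cardano/config.json"), "shelley-genesis.json");
    assert_eq!(p, PathBuf::from("/opt/cardano/shelley-genesis.json"));
    println!("{}", p.display());
}
```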

P2P Parameters

These parameters control the P2P peer governor's target counts, matching the cardano-node defaults. The governor continuously works to maintain these targets by promoting/demoting peers and discovering new ones.

| Field | Type | Default | Description |
|---|---|---|---|
| TargetNumberOfRootPeers | integer | 60 | Target number of root peers (bootstrap + local + public roots) |
| TargetNumberOfActivePeers | integer | 15 | Target number of active (hot) peers — fully syncing with ChainSync + BlockFetch |
| TargetNumberOfEstablishedPeers | integer | 40 | Target number of established (warm) peers — TCP connected, keepalive running |
| TargetNumberOfKnownPeers | integer | 85 | Target number of known (cold) peers in the peer table |
| TargetNumberOfActiveBigLedgerPeers | integer | 5 | Target number of active big ledger peers (high-stake SPOs, prioritised during sync) |
| TargetNumberOfEstablishedBigLedgerPeers | integer | 10 | Target number of established big ledger peers |
| TargetNumberOfKnownBigLedgerPeers | integer | 15 | Target number of known big ledger peers |
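
The governor's steady-state behaviour can be caricatured as a comparison against these targets. This is a toy sketch under stated assumptions — the real promotion/demotion logic (churn, scoring, big ledger peer handling) is considerably richer:

```rust
/// Toy sketch of a peer governor's decision loop against its targets.
/// The enum names are illustrative, not Dugite's API.
#[derive(Debug, PartialEq)]
enum Action {
    PromoteWarmToHot,
    DemoteHotToWarm,
    DiscoverPeers,
    Steady,
}

fn next_action(hot: usize, warm: usize, target_hot: usize, target_warm: usize) -> Action {
    if hot < target_hot && warm > 0 {
        Action::PromoteWarmToHot // below the active target: activate a warm peer
    } else if hot > target_hot {
        Action::DemoteHotToWarm // above the active target: demote one
    } else if warm < target_warm {
        Action::DiscoverPeers // not enough established peers: find more
    } else {
        Action::Steady
    }
}

fn main() {
    // Defaults from the table above: 15 active (hot), 40 established (warm).
    assert_eq!(next_action(10, 30, 15, 40), Action::PromoteWarmToHot);
    assert_eq!(next_action(15, 40, 15, 40), Action::Steady);
    println!("ok");
}
```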

Tracing

| Field | Type | Default | Description |
|---|---|---|---|
| MinSeverity | string | "Info" | Minimum log severity level |
| TraceOptions.TraceBlockFetchClient | boolean | false | Trace block fetch client activity |
| TraceOptions.TraceBlockFetchServer | boolean | false | Trace block fetch server activity |
| TraceOptions.TraceChainDb | boolean | false | Trace ChainDB operations |
| TraceOptions.TraceChainSyncClient | boolean | false | Trace chain sync client activity |
| TraceOptions.TraceChainSyncServer | boolean | false | Trace chain sync server activity |
| TraceOptions.TraceForge | boolean | false | Trace block forging |
| TraceOptions.TraceMempool | boolean | false | Trace mempool activity |

Log Level Control

The log level can be set via CLI flag or environment variable:

# Via CLI flag
dugite-node run --log-level debug ...

# Via environment variable (takes priority over --log-level)
RUST_LOG=info dugite-node run ...

# Debug only for specific crates
RUST_LOG=dugite_network=debug,dugite_consensus=debug dugite-node run ...

Dugite supports multiple log output targets (stdout, file, journald) and file rotation. See Logging for full details on output configuration.

Minimal Configuration

The smallest viable configuration file specifies only the network:

{
  "Network": "Testnet",
  "NetworkMagic": 2
}

All other fields use sensible defaults. When no genesis files are specified, the node operates with built-in default parameters.
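
The defaulting behaviour can be sketched as a plain Default impl carrying the values from the field tables above (illustrative; Dugite's actual configuration types may differ):

```rust
/// Illustrative P2P target defaults, taken from the field tables above.
struct P2pTargets {
    root_peers: u32,
    active_peers: u32,
    established_peers: u32,
    known_peers: u32,
}

impl Default for P2pTargets {
    fn default() -> Self {
        Self {
            root_peers: 60,
            active_peers: 15,
            established_peers: 40,
            known_peers: 85,
        }
    }
}

fn main() {
    // A minimal config file leaves these unset, so the defaults apply.
    let t = P2pTargets::default();
    assert_eq!(t.active_peers, 15);
    assert_eq!(t.known_peers, 85);
    println!("ok");
}
```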

Format Support

Dugite supports both JSON (.json) and TOML (.toml) configuration files. The format is determined by the file extension. JSON files use the cardano-node compatible PascalCase format shown above.
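
Format selection by extension can be sketched as (illustrative only):

```rust
use std::path::Path;

/// Sketch of format detection by file extension, as described above.
fn config_format(path: &str) -> Option<&'static str> {
    match Path::new(path).extension().and_then(|e| e.to_str()) {
        Some("json") => Some("json"),
        Some("toml") => Some("toml"),
        _ => None,
    }
}

fn main() {
    assert_eq!(config_format("config.json"), Some("json"));
    assert_eq!(config_format("config.toml"), Some("toml"));
    assert_eq!(config_format("config.yaml"), None);
    println!("ok");
}
```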

Configuration Editor (dugite-config)

dugite-config is a standalone TUI tool for creating and editing Dugite configuration files interactively. It provides a full-screen terminal interface with tree navigation, inline editing, type validation, and a diff view — no need to remember field names or look up valid ranges.

Installation

dugite-config is built as part of the standard workspace:

cargo build --release -p dugite-config
cp target/release/dugite-config /usr/local/bin/

Commands

| Command | Description |
|---|---|
| init | Interactively create a new configuration file |
| edit | Launch the full-screen TUI editor for an existing file |
| validate | Validate a configuration file and report all errors |
| get | Print the value of a single field |
| set | Set the value of a single field non-interactively |

init

Create a new configuration file, guided step by step:

dugite-config init --out-file config.json

The init wizard prompts for the network (mainnet/preview/preprod), genesis file paths, P2P targets, and tracing options, then writes a validated JSON file.

edit

Launch the full-screen interactive editor:

dugite-config edit config.json

validate

Check a configuration file for errors without modifying it:

dugite-config validate config.json

Output on success:

config.json: OK (all fields valid)

Output on failure:

config.json: 2 error(s)
  Line 7 — TargetNumberOfActivePeers: value 200 exceeds maximum (100)
  Line 12 — MinSeverity: unknown value "Verbose" (expected: Trace, Debug, Info, Warning, Error)
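
A range check of the kind reported above might look like this. Note that the 100-peer maximum is taken from the example error output and is otherwise an assumption, not a documented limit:

```rust
/// Sketch of a field range check. The maximum of 100 comes from the example
/// validate output above and is assumed, not a documented limit.
fn validate_active_peers(value: i64) -> Result<(), String> {
    const MAX: i64 = 100;
    if value < 0 {
        Err(format!("value {value} is negative"))
    } else if value > MAX {
        Err(format!("value {value} exceeds maximum ({MAX})"))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(validate_active_peers(15).is_ok());
    assert_eq!(
        validate_active_peers(200).unwrap_err(),
        "value 200 exceeds maximum (100)"
    );
    println!("ok");
}
```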

get / set

Non-interactive field access for scripting:

# Get a field
dugite-config get config.json TargetNumberOfActivePeers
# Output: 20

# Set a field
dugite-config set config.json TargetNumberOfActivePeers 30

# Set a nested field
dugite-config set config.json TraceOptions.TraceForge true

Interactive Editor

The interactive editor (dugite-config edit) renders a full-screen TUI with three panes:

┌─ Fields ──────────────────────┬─ Value ───────────┬─ Hints ───────────────────────────┐
│ > Network Settings            │                   │                                   │
│     Network                   │ Testnet           │ Network identifier. Use "Mainnet" │
│     NetworkMagic              │ 2                 │ for mainnet or "Testnet" for      │
│     EnableP2P                 │ true              │ testnets. If omitted, defaults    │
│ > Genesis Files               │                   │ based on Network field.           │
│     ShelleyGenesisFile        │ shelley-gen...    │                                   │
│     ByronGenesisFile          │ byron-genesi...   │                                   │
│     AlonzoGenesisFile         │ alonzo-genes...   │                                   │
│     ConwayGenesisFile         │ conway-genes...   │                                   │
│ > P2P Parameters              │                   │                                   │
│     DiffusionMode             │ InitiatorAn...    │                                   │
│     PeerSharing               │ PeerSharing...    │                                   │
│     TargetNumberOfActivePeers │ 15                │                                   │
└───────────────────────────────┴───────────────────┴───────────────────────────────────┘

| Key | Action |
|---|---|
| Arrow Up / Down | Move between fields |
| Arrow Right / Enter | Expand a group or edit a field |
| Arrow Left / Escape | Collapse a group or cancel edit |
| / | Open search/filter |
| d | Toggle diff view |
| Ctrl+S | Save and exit |
| Ctrl+Q | Discard changes and exit |
| ? | Toggle help overlay |

Inline Editing

Pressing Enter on a field opens it for editing in place. The current value is pre-filled. Type a new value and press Enter to confirm or Escape to cancel.

Type validation runs immediately on confirmation. If the value is invalid (for example, a string where an integer is expected, or a number outside the valid range), an inline error message appears below the field. The cursor stays on the field until a valid value is entered or the edit is cancelled.

Tuning Hints

The right-hand pane shows contextual hints for the selected field, including:

  • A description of what the field controls
  • The valid type and range
  • Practical advice on the impact of different values

For example, TargetNumberOfActivePeers shows advice on the trade-off between connectivity and bandwidth, and notes that values above 50 are rarely beneficial for relay nodes.

Search and Filter

Press / to open the search bar. Typing narrows the visible fields to those whose names match the query. Press Escape to clear the filter and return to the full tree.

Diff View

Press d to toggle the diff view, which shows a side-by-side comparison of the original file and your pending changes. Fields with modified values are highlighted. Use this before saving to confirm your edits. The editor validates the full configuration on save and will not write an invalid file.

Scripted Workflows

dugite-config can be used in deployment scripts for automated configuration management:

#!/usr/bin/env bash
# Example: configure a relay node for preview testnet

CONFIG="config/preview-config.json"

dugite-config init --out-file "$CONFIG" \
  --network Testnet \
  --network-magic 2 \
  --shelley-genesis shelley-genesis.json \
  --byron-genesis byron-genesis.json \
  --alonzo-genesis alonzo-genesis.json \
  --conway-genesis conway-genesis.json

dugite-config set "$CONFIG" EnableP2P true
dugite-config set "$CONFIG" DiffusionMode InitiatorAndResponder
dugite-config set "$CONFIG" TargetNumberOfActivePeers 15
dugite-config set "$CONFIG" TargetNumberOfEstablishedPeers 40
dugite-config set "$CONFIG" TargetNumberOfKnownPeers 85
dugite-config validate "$CONFIG"

Topology

The topology file defines the peers that the node connects to. Dugite supports the full cardano-node 10.x+ P2P topology format.

Topology File Format

{
  "bootstrapPeers": [
    { "address": "backbone.cardano.iog.io", "port": 3001 },
    { "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 },
    { "address": "backbone.mainnet.emurgornd.com", "port": 3001 }
  ],
  "localRoots": [
    {
      "accessPoints": [
        { "address": "192.168.1.100", "port": 3001 }
      ],
      "advertise": false,
      "hotValency": 1,
      "warmValency": 2,
      "trustable": true
    }
  ],
  "publicRoots": [
    {
      "accessPoints": [
        { "address": "relays-new.cardano-mainnet.iohk.io", "port": 3001 }
      ],
      "advertise": false
    }
  ],
  "useLedgerAfterSlot": 177724800
}

Peer Categories

Bootstrap Peers

Trusted peers from founding organizations, used during initial sync. These are the first peers the node contacts when starting.

"bootstrapPeers": [
  { "address": "backbone.cardano.iog.io", "port": 3001 }
]

Set to null or an empty array to disable bootstrap peers:

"bootstrapPeers": null

Local Roots

Peers the node should always maintain connections with. Typically used for:

  • Your block producer (if running a relay)
  • Peer arrangements with other stake pool operators
  • Trusted relay nodes you operate

"localRoots": [
  {
    "accessPoints": [
      { "address": "192.168.1.100", "port": 3001 }
    ],
    "advertise": true,
    "hotValency": 2,
    "warmValency": 3,
    "trustable": true,
    "behindFirewall": false,
    "diffusionMode": "InitiatorAndResponder"
  }
]

| Field | Type | Default | Description |
|---|---|---|---|
| accessPoints | array | required | List of {address, port} entries |
| advertise | boolean | false | Whether to share these peers via the peer sharing protocol |
| valency | integer | 1 | Deprecated. Target number of active connections. Use hotValency instead |
| hotValency | integer | valency | Target number of hot (actively syncing) peers |
| warmValency | integer | hotValency+1 | Target number of warm (connected, not syncing) peers |
| trustable | boolean | false | Whether these peers are trusted for sync. Trusted peers are preferred during initial sync |
| behindFirewall | boolean | false | If true, the node waits for inbound connections from these peers instead of connecting outbound |
| diffusionMode | string | "InitiatorAndResponder" | Per-group diffusion mode. "InitiatorOnly" for unidirectional connections |
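
The valency defaulting rules in the table can be written out as a small sketch of the documented fallbacks:

```rust
/// Sketch of the defaulting rules above: hotValency falls back to valency
/// (default 1), and warmValency falls back to hotValency + 1.
fn effective_valencies(
    valency: Option<u32>,
    hot_valency: Option<u32>,
    warm_valency: Option<u32>,
) -> (u32, u32) {
    let hot = hot_valency.or(valency).unwrap_or(1);
    let warm = warm_valency.unwrap_or(hot + 1);
    (hot, warm)
}

fn main() {
    // Nothing specified: one hot peer, two warm.
    assert_eq!(effective_valencies(None, None, None), (1, 2));
    // Deprecated valency still seeds hotValency.
    assert_eq!(effective_valencies(Some(2), None, None), (2, 3));
    // Explicit values win.
    assert_eq!(effective_valencies(None, Some(1), Some(2)), (1, 2));
    println!("ok");
}
```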

Public Roots

Publicly known nodes (e.g., IOG relays) serving as fallback peers before the node has synced to the useLedgerAfterSlot threshold.

"publicRoots": [
  {
    "accessPoints": [
      { "address": "relays-new.cardano-mainnet.iohk.io", "port": 3001 }
    ],
    "advertise": false
  }
]

Ledger-Based Peer Discovery

After the node syncs past the useLedgerAfterSlot slot, it discovers peers from stake pool registrations in the ledger state. This provides decentralized peer discovery without relying on centralized relay lists.

"useLedgerAfterSlot": 177724800

Set to a negative value or omit to disable ledger peer discovery.
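
The enable/disable rule can be sketched as (illustrative, not Dugite's actual code):

```rust
/// Sketch: ledger peer discovery turns on only after the chain tip passes a
/// non-negative useLedgerAfterSlot threshold; omitted or negative disables it.
fn ledger_peers_active(use_ledger_after_slot: Option<i64>, current_slot: i64) -> bool {
    match use_ledger_after_slot {
        Some(t) if t >= 0 => current_slot > t,
        _ => false,
    }
}

fn main() {
    assert!(ledger_peers_active(Some(177_724_800), 177_724_801));
    assert!(!ledger_peers_active(Some(177_724_800), 100));
    assert!(!ledger_peers_active(Some(-1), 1_000_000)); // negative disables
    assert!(!ledger_peers_active(None, 1_000_000)); // omitted disables
    println!("ok");
}
```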

Peer Snapshot File

Optional path to a big ledger peer snapshot file for Genesis bootstrap:

"peerSnapshotFile": "peer-snapshot.json"

Example Topologies

Preview Testnet Relay

{
  "bootstrapPeers": [
    { "address": "preview-node.play.dev.cardano.org", "port": 3001 }
  ],
  "localRoots": [
    { "accessPoints": [], "advertise": false, "valency": 1 }
  ],
  "publicRoots": [
    { "accessPoints": [], "advertise": false }
  ],
  "useLedgerAfterSlot": 102729600
}

Mainnet Relay

{
  "bootstrapPeers": [
    { "address": "backbone.cardano.iog.io", "port": 3001 },
    { "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 },
    { "address": "backbone.mainnet.emurgornd.com", "port": 3001 }
  ],
  "localRoots": [
    { "accessPoints": [], "advertise": false, "valency": 1 }
  ],
  "publicRoots": [
    { "accessPoints": [], "advertise": false }
  ],
  "useLedgerAfterSlot": 177724800
}

Relay with Block Producer

A relay node that maintains a connection to your block producer:

{
  "bootstrapPeers": [
    { "address": "backbone.cardano.iog.io", "port": 3001 },
    { "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 }
  ],
  "localRoots": [
    {
      "accessPoints": [
        { "address": "10.0.0.10", "port": 3001 }
      ],
      "advertise": false,
      "hotValency": 1,
      "warmValency": 2,
      "trustable": true,
      "behindFirewall": true
    }
  ],
  "publicRoots": [
    { "accessPoints": [], "advertise": false }
  ],
  "useLedgerAfterSlot": 177724800
}

SIGHUP Topology Reload

Dugite supports live topology reloading. Send a SIGHUP signal to the running node process, and it will re-read the topology file and update the peer manager with the new configuration:

kill -HUP $(pidof dugite-node)

This allows you to add or remove peers without restarting the node.

Networks

Dugite can connect to any Cardano network. Each network is identified by a unique magic number used during the N2N handshake.

Network Magic Values

| Network | Magic | Description |
|---|---|---|
| Mainnet | 764824073 | The production Cardano network |
| Preview | 2 | Fast-moving testnet for early feature testing |
| Preprod | 1 | Stable testnet that mirrors mainnet behavior |

Connecting to Mainnet

Create a config-mainnet.json:

{
  "Network": "Mainnet",
  "NetworkMagic": 764824073
}

Create a topology-mainnet.json:

{
  "bootstrapPeers": [
    { "address": "backbone.cardano.iog.io", "port": 3001 },
    { "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 },
    { "address": "backbone.mainnet.emurgornd.com", "port": 3001 }
  ],
  "localRoots": [{ "accessPoints": [], "advertise": false, "valency": 1 }],
  "publicRoots": [{ "accessPoints": [], "advertise": false }],
  "useLedgerAfterSlot": 177724800
}

Run the node:

dugite-node run \
  --config config-mainnet.json \
  --topology topology-mainnet.json \
  --database-path ./db-mainnet \
  --socket-path ./node-mainnet.sock \
  --host-addr 0.0.0.0 \
  --port 3001

Tip: For a faster initial mainnet sync, consider using Mithril snapshot import first.

Connecting to Preview Testnet

Create a config-preview.json:

{
  "Network": "Testnet",
  "NetworkMagic": 2
}

Create a topology-preview.json:

{
  "bootstrapPeers": [
    { "address": "preview-node.play.dev.cardano.org", "port": 3001 }
  ],
  "localRoots": [{ "accessPoints": [], "advertise": false, "valency": 1 }],
  "publicRoots": [{ "accessPoints": [], "advertise": false }],
  "useLedgerAfterSlot": 102729600
}

Run the node:

dugite-node run \
  --config config-preview.json \
  --topology topology-preview.json \
  --database-path ./db-preview \
  --socket-path ./node-preview.sock \
  --host-addr 0.0.0.0 \
  --port 3001

Connecting to Preprod Testnet

Create a config-preprod.json:

{
  "Network": "Testnet",
  "NetworkMagic": 1
}

Create a topology-preprod.json:

{
  "bootstrapPeers": [
    { "address": "preprod-node.play.dev.cardano.org", "port": 3001 }
  ],
  "localRoots": [{ "accessPoints": [], "advertise": false, "valency": 1 }],
  "publicRoots": [{ "accessPoints": [], "advertise": false }],
  "useLedgerAfterSlot": 76924800
}

Run the node:

dugite-node run \
  --config config-preprod.json \
  --topology topology-preprod.json \
  --database-path ./db-preprod \
  --socket-path ./node-preprod.sock \
  --host-addr 0.0.0.0 \
  --port 3001

Official Configuration Files

Official configuration and topology files for each network are maintained in the Cardano Operations Book.

These include the full genesis files (Byron, Shelley, Alonzo, Conway) required for complete protocol parameter initialization.

Using the CLI with Different Networks

When querying a node connected to a testnet, pass the --testnet-magic flag to the CLI:

# Preview
dugite-cli query tip --socket-path ./node-preview.sock --testnet-magic 2

# Preprod
dugite-cli query tip --socket-path ./node-preprod.sock --testnet-magic 1

# Mainnet (default, --testnet-magic not needed)
dugite-cli query tip --socket-path ./node-mainnet.sock

Multiple Nodes

You can run multiple Dugite instances on the same machine by using different ports, database paths, and socket paths:

# Preview on port 3001
dugite-node run --port 3001 --database-path ./db-preview --socket-path ./preview.sock ...

# Preprod on port 3002
dugite-node run --port 3002 --database-path ./db-preprod --socket-path ./preprod.sock ...

Mithril Snapshot Import

Syncing a Cardano node from genesis can take a very long time. Dugite supports importing Mithril-certified snapshots of the immutable database to drastically reduce initial sync time.

How It Works

Mithril is a stake-based threshold multi-signature scheme that produces certified snapshots of the Cardano immutable database. These snapshots are verified by Mithril signers (stake pool operators) and made available through Mithril aggregator endpoints.

The import process:

  1. Queries the Mithril aggregator for the latest available snapshot
  2. Downloads the snapshot archive (compressed with zstandard)
  3. Extracts the cardano-node chunk files
  4. Parses each block using the pallas CBOR decoder
  5. Bulk-imports blocks into Dugite's ImmutableDB (append-only chunk files)

Usage

dugite-node mithril-import \
  --network-magic <magic> \
  --database-path <path>

Arguments

| Argument | Default | Description |
|---|---|---|
| --network-magic | 764824073 | Network magic (764824073 = mainnet, 2 = preview, 1 = preprod) |
| --database-path | db | Path to the database directory |
| --temp-dir | system temp | Temporary directory for download and extraction |

Examples

Mainnet:

dugite-node mithril-import \
  --network-magic 764824073 \
  --database-path ./db-mainnet

Preview testnet:

dugite-node mithril-import \
  --network-magic 2 \
  --database-path ./db-preview

Preprod testnet:

dugite-node mithril-import \
  --network-magic 1 \
  --database-path ./db-preprod

Mithril Aggregator Endpoints

Dugite automatically selects the correct aggregator for each network:

| Network | Aggregator URL |
|---|---|
| Mainnet | https://aggregator.release-mainnet.api.mithril.network/aggregator |
| Preview | https://aggregator.pre-release-preview.api.mithril.network/aggregator |
| Preprod | https://aggregator.release-preprod.api.mithril.network/aggregator |
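
The selection can be sketched as a match on the network magic, mirroring the table above (illustrative only):

```rust
/// Sketch of aggregator selection by network magic, mirroring the table above.
fn aggregator_url(magic: u64) -> Option<&'static str> {
    match magic {
        764_824_073 => Some("https://aggregator.release-mainnet.api.mithril.network/aggregator"),
        2 => Some("https://aggregator.pre-release-preview.api.mithril.network/aggregator"),
        1 => Some("https://aggregator.release-preprod.api.mithril.network/aggregator"),
        _ => None, // unknown networks have no public aggregator
    }
}

fn main() {
    assert!(aggregator_url(764_824_073).unwrap().contains("release-mainnet"));
    assert_eq!(aggregator_url(42), None);
    println!("ok");
}
```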

Resume Support

The import process supports resuming interrupted downloads and imports:

  • If the snapshot archive has already been downloaded (same size), the download is skipped
  • If the archive has already been extracted, extraction is skipped
  • Blocks already present in the database are skipped during import

This means you can safely interrupt the import and restart it later.
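
The first resume check (same-size archive already on disk) amounts to a size comparison, sketched here under the stated assumptions (illustrative; not Dugite's actual logic):

```rust
/// Sketch of the download-resume check above: skip the download when a local
/// archive of the same size already exists. Illustrative only.
fn should_skip_download(local_size: Option<u64>, remote_size: u64) -> bool {
    local_size == Some(remote_size)
}

fn main() {
    assert!(should_skip_download(Some(2_700_000_000), 2_700_000_000));
    assert!(!should_skip_download(Some(1_000), 2_700_000_000)); // partial download
    assert!(!should_skip_download(None, 2_700_000_000)); // nothing downloaded yet
    println!("ok");
}
```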

After Import

Once the import completes, start the node normally. It will detect the imported blocks and resume syncing from where the snapshot left off:

dugite-node run \
  --config config.json \
  --topology topology.json \
  --database-path ./db-mainnet \
  --socket-path ./node.sock \
  --host-addr 0.0.0.0 \
  --port 3001

Disk Space Requirements

Mithril snapshots are large. Approximate sizes (which grow over time):

Network   Compressed Archive   Extracted     Final DB
Mainnet   ~60-90 GB            ~120-180 GB   ~90-140 GB
Preview   ~5-10 GB             ~10-20 GB     ~8-15 GB
Preprod   ~15-25 GB            ~30-50 GB     ~20-35 GB

The temporary directory needs enough space for both the compressed archive and the extracted files. After import, temporary files are automatically cleaned up.

Note: Ensure you have sufficient disk space before starting the import. The --temp-dir flag can be used to direct temporary files to a different volume if needed.
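A pre-flight check along these lines can catch insufficient space before the download starts. The 300 GB figure is an illustrative mainnet budget derived from the table above (archive plus extracted files plus final DB); adjust it for your network.

```shell
# Require roughly 300 GB free in the temp directory before a mainnet import.
REQUIRED_GB=300
AVAIL_GB=$(df -Pk "${TMPDIR:-/tmp}" | awk 'NR==2 {print int($4/1024/1024)}')
if [ "$AVAIL_GB" -lt "$REQUIRED_GB" ]; then
  echo "only ${AVAIL_GB} GB free; need ~${REQUIRED_GB} GB" >&2
else
  echo "OK: ${AVAIL_GB} GB free"
fi
```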

Logging

Dugite uses the tracing ecosystem for structured logging. It supports multiple output targets, structured and human-readable formats, log rotation for file output, and fine-grained level control.

Output Formats

Dugite supports two log formats, selectable via the --log-format flag:

Text (default)

Human-readable compact output with timestamps, level, target module, and structured fields:

dugite-node run --log-format text ...
2026-03-12T12:34:56.789Z  INFO dugite_node::node: Syncing progress="95.42%" epoch=512 block=11283746 tip=11300000 remaining=16254 speed="312 blk/s" utxos=15234892
2026-03-12T12:34:56.790Z  INFO dugite_node::node: Peer connected peer=1.2.3.4:3001 rtt_ms=42

JSON

Structured JSON output, one object per line. Ideal for log aggregation systems (ELK, Loki, Datadog):

dugite-node run --log-format json ...
{"timestamp":"2026-03-12T12:34:56.789Z","level":"INFO","target":"dugite_node::node","fields":{"message":"Syncing","progress":"95.42%","epoch":512,"block":11283746}}

Output Targets

Dugite can log to one or more output targets simultaneously using the --log-output flag. You can specify this flag multiple times to enable multiple targets:

# Stdout only (default)
dugite-node run --log-output stdout ...

# File only
dugite-node run --log-output file ...

# Both stdout and file
dugite-node run --log-output stdout --log-output file ...

# Systemd journal (requires journald feature)
dugite-node run --log-output journald ...

Stdout

The default output target. Logs are written to standard output with ANSI color codes when the output is a terminal. Colors can be disabled with --log-no-color.

File

Logs are written to rotating log files in the directory specified by --log-dir (default: logs/). The rotation strategy is configured with --log-file-rotation:

Strategy   Description
daily      Rotate log files daily (default)
hourly     Rotate log files every hour
never      Write to a single dugite.log file with no rotation

dugite-node run \
  --log-output file \
  --log-dir /var/log/dugite \
  --log-file-rotation daily \
  ...

File output uses non-blocking I/O with buffered writes. The buffer is flushed automatically on shutdown.

Journald

Native systemd journal integration. This requires building Dugite with the journald feature:

cargo build --release --features journald

Then run with:

dugite-node run --log-output journald ...

View logs with journalctl:

journalctl -u dugite-node -f
journalctl -u dugite-node --since "1 hour ago"

Log Levels

The log level can be set via the --log-level CLI flag or the RUST_LOG environment variable. If both are set, RUST_LOG takes priority.

# Via CLI flag
dugite-node run --log-level debug ...

# Via environment variable (takes priority)
RUST_LOG=debug dugite-node run ...

Available levels (from most to least verbose):

Level   Description
trace   Very detailed internal diagnostics
debug   Internal operations: genesis loading, storage ops, network handshakes, epoch transitions
info    Operator-relevant events: sync progress, peer connections, block production (default)
warn    Potential issues: stale snapshots, replay failures
error   Errors that may affect node operation

Per-Crate Filtering

Use RUST_LOG for fine-grained control over which components produce output:

# Debug only for specific crates
RUST_LOG=dugite_network=debug,dugite_consensus=debug dugite-node run ...

# Trace storage operations, debug everything else
RUST_LOG=dugite_storage=trace,debug dugite-node run ...

# Silence noisy crates
RUST_LOG=info,dugite_network=warn dugite-node run ...

CLI Reference

All logging flags are shared between the run and mithril-import subcommands:

Flag                  Default   Description
--log-output          stdout    Log output target: stdout, file, or journald. Can be specified multiple times.
--log-format          text      Log format: text (human-readable) or json (structured).
--log-level           info      Log level: trace, debug, info, warn, error. Overridden by RUST_LOG.
--log-dir             logs      Directory for log files (used with --log-output file).
--log-file-rotation   daily     Log file rotation: daily, hourly, or never.
--log-no-color        false     Disable ANSI colors in stdout output.

Production Recommendations

For production deployments with log aggregation:

dugite-node run \
  --log-output file \
  --log-output journald \
  --log-format json \
  --log-dir /var/log/dugite \
  --log-file-rotation daily \
  ...

This configuration:

  • Writes structured JSON logs to systemd journal for journalctl integration
  • Writes rotated JSON log files for archival and ingestion by log aggregators
  • JSON format ensures all structured fields are machine-parseable

For human operators monitoring the console:

dugite-node run --log-output stdout --log-format text ...

For containerized deployments (Docker, Kubernetes), stdout with JSON is ideal since the container runtime captures output and log drivers can parse the structured format:

dugite-node run --log-output stdout --log-format json ...

Monitoring

Dugite provides two complementary monitoring tools: a terminal dashboard (dugite-monitor) for quick at-a-glance status, and a Prometheus-compatible metrics endpoint for production alerting and dashboards.

Terminal Dashboard (dugite-monitor)

dugite-monitor is a standalone binary that renders a real-time status dashboard in the terminal by polling the node's Prometheus endpoint. It requires no external infrastructure and works over SSH.

# Monitor a local node (default: http://localhost:12798/metrics)
dugite-monitor

# Monitor a remote node
dugite-monitor --metrics-url http://192.168.1.100:12798/metrics

# Custom refresh interval (default: 2 seconds)
dugite-monitor --refresh-interval 5

The dashboard displays four panels:

  • Chain Status — sync progress, current slot/block/epoch, tip age, GSM state
  • Peers — out/in/total connection counts, hot/warm/cold breakdown, EWMA latency
  • Performance — block rate sparkline, replay throughput, transaction counts
  • Governance — treasury balance, DRep count, active proposals, pool count

Color-coded health indicators (green/yellow/red) reflect tip age and sync progress. The block rate sparkline shows the last 30 data points so you can spot throughput trends at a glance.

Keyboard navigation: q to quit, Tab to cycle panels, j/k (vim-style) to scroll within a panel.


Prometheus Metrics Endpoint

Dugite exposes a Prometheus-compatible metrics endpoint for monitoring node health and sync progress.

Metrics Endpoint

The metrics server runs on port 12798 by default and responds to any HTTP request with Prometheus exposition format metrics:

http://localhost:12798/metrics

Example response:

# HELP dugite_blocks_received_total Total blocks received from peers
# TYPE dugite_blocks_received_total gauge
dugite_blocks_received_total 1523847

# HELP dugite_blocks_applied_total Total blocks applied to ledger
# TYPE dugite_blocks_applied_total gauge
dugite_blocks_applied_total 1523845

# HELP dugite_slot_number Current slot number
# TYPE dugite_slot_number gauge
dugite_slot_number 142857392

# HELP dugite_block_number Current block number
# TYPE dugite_block_number gauge
dugite_block_number 11283746

# HELP dugite_epoch_number Current epoch number
# TYPE dugite_epoch_number gauge
dugite_epoch_number 512

# HELP dugite_sync_progress_percent Chain sync progress (0-10000, divide by 100 for %)
# TYPE dugite_sync_progress_percent gauge
dugite_sync_progress_percent 9542

# HELP dugite_utxo_count Number of entries in the UTxO set
# TYPE dugite_utxo_count gauge
dugite_utxo_count 15234892

# HELP dugite_mempool_tx_count Number of transactions in the mempool
# TYPE dugite_mempool_tx_count gauge
dugite_mempool_tx_count 42

# HELP dugite_peers_connected Number of connected peers
# TYPE dugite_peers_connected gauge
dugite_peers_connected 8

Health Endpoint

The metrics server exposes a /health endpoint for monitoring node status:

GET http://localhost:12798/health

Returns JSON with three possible statuses:

  • healthy: Sync progress >= 99.9%
  • syncing: Actively catching up to chain tip
  • stalled: No blocks received for > 5 minutes AND sync < 99%

{
  "status": "healthy",
  "uptime_seconds": 3421,
  "slot": 142857392,
  "block": 11283746,
  "epoch": 512,
  "sync_progress": 99.95,
  "peers": 8,
  "last_block_received": "2026-03-14T12:34:56.789Z"
}
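For scripting and lightweight alerting, the status field can be extracted with a plain sed parse (shown here to avoid a jq dependency; the endpoint and port are the defaults documented above, and the alert action is a placeholder):

```shell
# Poll /health and react to the reported status.
STATUS=$(curl -sf http://localhost:12798/health \
  | sed -n 's/.*"status": *"\([a-z]*\)".*/\1/p')
STATUS=${STATUS:-unknown}
case "$STATUS" in
  healthy)         echo "node healthy" ;;
  syncing)         echo "node catching up" ;;
  stalled|unknown) echo "ALERT: node status is $STATUS" >&2 ;;
esac
```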

Readiness Endpoint

For Kubernetes readiness probes:

GET http://localhost:12798/ready

Returns 200 OK when sync_progress >= 99.9%, 503 Service Unavailable otherwise:

{"ready": true}

or:

{"ready": false, "sync_progress": 75.42}

Available Metrics

Counters

Metric                                        Description
dugite_blocks_received_total                  Total blocks received from peers
dugite_blocks_applied_total                   Total blocks successfully applied to the ledger
dugite_transactions_received_total            Total transactions received
dugite_transactions_validated_total           Total transactions validated
dugite_transactions_rejected_total            Total transactions rejected
dugite_rollback_count_total                   Total number of chain rollbacks
dugite_blocks_forged_total                    Total blocks forged by this node
dugite_leader_checks_total                    Total VRF leader checks performed
dugite_leader_checks_not_elected_total        Leader checks where node was not elected
dugite_forge_failures_total                   Block forge attempts that failed
dugite_blocks_announced_total                 Blocks successfully announced to peers
dugite_n2n_connections_total                  Total N2N (peer-to-peer) connections accepted
dugite_n2c_connections_total                  Total N2C (client) connections accepted
dugite_validation_errors_total{error="..."}   Transaction validation errors, broken down by error type
dugite_protocol_errors_total{error="..."}     Protocol-level errors by type (e.g. handshake failures, connection errors)

Gauges

Metric                                  Description
dugite_peers_connected                  Number of connected peers
dugite_peers_cold                       Number of cold (known but unconnected) peers
dugite_peers_warm                       Number of warm (connected, not syncing) peers
dugite_peers_hot                        Number of hot (actively syncing) peers
dugite_sync_progress_percent            Chain sync progress (0-10000; divide by 100 for percentage)
dugite_slot_number                      Current slot number
dugite_block_number                     Current block number
dugite_epoch_number                     Current epoch number
dugite_utxo_count                       Number of entries in the UTxO set
dugite_mempool_tx_count                 Number of transactions in the mempool
dugite_mempool_bytes                    Size of the mempool in bytes
dugite_delegation_count                 Number of active stake delegations
dugite_treasury_lovelace                Total lovelace in the treasury
dugite_drep_count                       Number of registered DReps
dugite_proposal_count                   Number of active governance proposals
dugite_pool_count                       Number of registered stake pools
dugite_uptime_seconds                   Seconds since node startup
dugite_disk_available_bytes             Available disk space on the database volume
dugite_n2n_connections_active           Currently active N2N connections
dugite_n2c_connections_active           Currently active N2C connections
dugite_p2p_enabled                      Whether P2P networking is active (0 or 1)
dugite_diffusion_mode                   Current diffusion mode (0=InitiatorOnly, 1=InitiatorAndResponder)
dugite_peer_sharing_enabled             Whether peer sharing is active (0 or 1)
dugite_tip_age_seconds                  Seconds since the tip slot time
dugite_chainsync_idle_seconds           Seconds since last ChainSync RollForward event
dugite_ledger_replay_duration_seconds   Duration of last ledger replay in seconds
dugite_mem_resident_bytes               Resident set size (RSS) in bytes

Histograms

Metric                         Buckets (ms)                                               Description
dugite_peer_handshake_rtt_ms   1, 5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000, 10000   Peer N2N handshake round-trip time
dugite_peer_block_fetch_ms     (same)                                                     Per-block fetch latency

Histograms expose _bucket, _count, and _sum suffixes for standard Prometheus histogram queries.

Prometheus Configuration

Add the Dugite node as a scrape target in your prometheus.yml:

scrape_configs:
  - job_name: 'dugite'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:12798']
        labels:
          network: 'mainnet'
          node: 'relay-1'

Grafana Dashboard

Dugite ships with a pre-built Grafana dashboard at config/grafana-dashboard.json. The dashboard covers all node metrics organized into ten sections:

  • Overview — Sync progress gauge, block height, epoch, slot, connected peers, blocks forged
  • Node Health — Uptime, disk available (stat + time series)
  • Sync & Throughput — Sync progress over time, block apply/receive rate (blk/s), block height, rollbacks
  • Peers — Connected peer count over time, peer state breakdown (hot/warm/cold stacked)
  • Mempool & Transactions — Mempool tx count, mempool size (bytes), transaction rate (received/validated/rejected)
  • Ledger State — UTxO set size, stake delegations, treasury balance (ADA), registered stake pools
  • Governance — Registered DReps, active governance proposals
  • Block Production — Total blocks forged, block forge rate (blk/h)
  • Network Latency — Handshake RTT and block fetch latency percentiles (p50/p95/p99), request counts
  • Validation Errors — Error breakdown by type (stacked bars), error totals (bar chart)

Quick Start (Docker)

The fastest way to start a local monitoring stack is with the included script:

# Start Prometheus + Grafana
./scripts/start-monitoring.sh

# Open the dashboard (admin/admin)
open http://localhost:3000/d/dugite-node/dugite-node

# Check status
./scripts/start-monitoring.sh status

# Stop
./scripts/start-monitoring.sh stop

The script starts Prometheus (port 9090) and Grafana (port 3000) as Docker containers, auto-configures the Prometheus datasource, and imports the Dugite dashboard. Prometheus data is persisted in .monitoring-data/ so metrics survive restarts.

Environment variables for port customization:

Variable              Default   Description
PROMETHEUS_PORT       9090      Prometheus web UI port
GRAFANA_PORT          3000      Grafana web UI port
DUGITE_METRICS_PORT   12798     Port where Dugite exposes metrics

Importing the Dashboard

  1. Open Grafana and go to Dashboards > Import
  2. Click Upload JSON file and select config/grafana-dashboard.json
  3. Select your Prometheus data source when prompted
  4. Click Import

The dashboard includes an instance template variable so you can monitor multiple Dugite nodes (relays + block producer) from a single dashboard. It auto-refreshes every 30 seconds.

Provisioning

To auto-provision the dashboard, copy it into your Grafana provisioning directory:

cp config/grafana-dashboard.json /etc/grafana/provisioning/dashboards/dugite.json

Add a dashboard provider in /etc/grafana/provisioning/dashboards/dugite.yaml:

apiVersion: 1
providers:
  - name: Dugite
    folder: Cardano
    type: file
    options:
      path: /etc/grafana/provisioning/dashboards
      foldersFromFilesStructure: false

Quick Start (macOS)

To quickly preview the dashboard locally with Homebrew:

# Install Prometheus and Grafana
brew install prometheus grafana

# Configure Prometheus to scrape Dugite
cat > /opt/homebrew/etc/prometheus.yml << 'EOF'
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: dugite
    static_configs:
      - targets: ['localhost:12798']
EOF

# Provision the datasource
cat > "$(brew --prefix)/opt/grafana/share/grafana/conf/provisioning/datasources/dugite.yaml" << 'EOF'
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
    uid: DS_PROMETHEUS
EOF

# Provision the dashboard
cat > "$(brew --prefix)/opt/grafana/share/grafana/conf/provisioning/dashboards/dugite.yaml" << 'EOF'
apiVersion: 1
providers:
  - name: Dugite
    folder: Cardano
    type: file
    options:
      path: /opt/homebrew/var/lib/grafana/dashboards
EOF

mkdir -p /opt/homebrew/var/lib/grafana/dashboards
sed 's/${DS_PROMETHEUS}/DS_PROMETHEUS/g' config/grafana-dashboard.json \
  > /opt/homebrew/var/lib/grafana/dashboards/dugite.json

# Start services
brew services start prometheus
brew services start grafana

# Open the dashboard (default login: admin/admin)
open "http://localhost:3000/d/dugite-node/dugite-node"

To stop:

brew services stop prometheus grafana

Key Queries

Panel                         PromQL
Sync progress                 dugite_sync_progress_percent / 100
Block throughput              rate(dugite_blocks_applied_total[5m])
Transaction rejection rate    rate(dugite_transactions_rejected_total[5m])
Treasury balance (ADA)        dugite_treasury_lovelace / 1e6
Block forge rate (per hour)   rate(dugite_blocks_forged_total[1h]) * 3600
Handshake RTT p95             histogram_quantile(0.95, rate(dugite_peer_handshake_rtt_ms_bucket[5m]))
Block fetch latency p95       histogram_quantile(0.95, rate(dugite_peer_block_fetch_ms_bucket[5m]))
Validation errors by type     rate(dugite_validation_errors_total[5m])
Protocol errors by type       rate(dugite_protocol_errors_total[5m])
Leader election rate          rate(dugite_leader_checks_total[5m])
Active N2N connections        dugite_n2n_connections_active
Disk available                dugite_disk_available_bytes
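The same expressions translate directly into Prometheus alerting rules. The fragment below is a sketch; the alert names and thresholds are illustrative choices, not Dugite recommendations:

```yaml
groups:
  - name: dugite
    rules:
      - alert: DugiteTipStale
        expr: dugite_tip_age_seconds > 300
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Dugite tip is more than 5 minutes old"
      - alert: DugiteNoPeers
        expr: dugite_peers_connected == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Dugite has no connected peers"
```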

Console Logging

In addition to the Prometheus endpoint, Dugite logs sync progress to the console every 5 seconds. The log output includes:

  • Current slot and block number
  • Epoch number
  • UTxO count
  • Sync percentage
  • Blocks-per-second throughput

Example log line:

2026-03-12T12:34:56.789Z  INFO dugite_node::node: Syncing progress="95.42%" epoch=512 block=11283746 tip=11300000 remaining=16254 speed="312 blk/s" utxos=15234892

Log output can be directed to stdout, file, or systemd journal. See Logging for full details on output targets, file rotation, and log level configuration.

Relay Node

A relay node is the public-facing component of a stake pool deployment. It bridges your block producer to the wider Cardano network while shielding the BP from direct internet exposure.

Role in Stake Pool Architecture

In a properly secured stake pool, the block producer never communicates directly with the public network. Instead, one or more relay nodes handle all external connectivity:

graph LR
    Internet["Cardano Network"] <-->|N2N| Relay1["Relay 1<br/>Public IP"]
    Internet <-->|N2N| Relay2["Relay 2<br/>Public IP"]
    Relay1 <-->|Private| BP["Block Producer<br/>Private IP"]
    Relay2 <-->|Private| BP

  • Relays accept inbound connections from any Cardano peer, discover peers via bootstrap/ledger, and forward blocks to/from the BP.
  • Block producer connects only to your relays, never to the public internet.

Running a Relay

A relay is simply a Dugite node started without block production keys:

dugite-node run \
  --config config.json \
  --topology topology-relay.json \
  --database-path ./db \
  --socket-path ./node.sock \
  --host-addr 0.0.0.0 \
  --port 3001

Tip: For initial sync, use Mithril snapshot import first to skip millions of blocks.

Relay Topology

A relay topology combines public peer discovery with a local root pointing to your block producer.

Preview Testnet Relay

{
  "bootstrapPeers": [
    { "address": "preview-node.play.dev.cardano.org", "port": 3001 }
  ],
  "localRoots": [
    {
      "accessPoints": [
        { "address": "10.0.0.10", "port": 3001 }
      ],
      "advertise": false,
      "hotValency": 1,
      "warmValency": 2,
      "trustable": true,
      "behindFirewall": true
    }
  ],
  "publicRoots": [
    { "accessPoints": [], "advertise": false }
  ],
  "useLedgerAfterSlot": 102729600
}

Mainnet Relay

{
  "bootstrapPeers": [
    { "address": "backbone.cardano.iog.io", "port": 3001 },
    { "address": "backbone.mainnet.cardanofoundation.org", "port": 3001 },
    { "address": "backbone.mainnet.emurgornd.com", "port": 3001 }
  ],
  "localRoots": [
    {
      "accessPoints": [
        { "address": "10.0.0.10", "port": 3001 }
      ],
      "advertise": false,
      "hotValency": 1,
      "warmValency": 2,
      "trustable": true,
      "behindFirewall": true
    }
  ],
  "publicRoots": [
    { "accessPoints": [], "advertise": false }
  ],
  "useLedgerAfterSlot": 177724800
}

Key topology settings for relays:

  • bootstrapPeers — Trusted initial peers for syncing from genesis or after restart.
  • localRoots with behindFirewall: true — Your block producer. The relay waits for inbound connections from the BP rather than connecting outbound, which works correctly when the BP is behind a firewall.
  • useLedgerAfterSlot — Enables ledger-based peer discovery once synced past this slot, providing decentralized peer resolution from on-chain stake pool registrations.
  • advertise — Set to false in the examples above; set it to true if you want your relay to be discoverable via peer sharing.

Multiple Relays

Running two or more relays provides redundancy. If one relay goes down, the block producer stays connected through the other.

To run multiple relays on the same machine, use different ports, database paths, and socket paths:

# Relay 1 on port 3001
dugite-node run \
  --config config.json \
  --topology topology-relay1.json \
  --database-path ./db-relay1 \
  --socket-path ./relay1.sock \
  --host-addr 0.0.0.0 \
  --port 3001

# Relay 2 on port 3002
dugite-node run \
  --config config.json \
  --topology topology-relay2.json \
  --database-path ./db-relay2 \
  --socket-path ./relay2.sock \
  --host-addr 0.0.0.0 \
  --port 3002

Each relay's topology should include the block producer as a local root. The block producer's topology should list all relays (see Block Producer Topology).

For production deployments, run relays on separate machines or in different availability zones for better fault tolerance.

Firewall Configuration

Relay nodes need port 3001 (or your chosen port) open to the public for Cardano N2N traffic. The block producer should only be reachable from your relays.

Relay firewall rules

# Allow inbound Cardano N2N from anywhere
sudo ufw allow 3001/tcp

# Allow SSH (adjust as needed)
sudo ufw allow 22/tcp

sudo ufw enable

Block producer firewall rules

# Allow inbound only from relay IPs
sudo ufw allow from <relay1-ip> to any port 3001
sudo ufw allow from <relay2-ip> to any port 3001

# Allow SSH (adjust as needed)
sudo ufw allow 22/tcp

# Deny everything else
sudo ufw default deny incoming
sudo ufw enable

Important: The block producer should have no public-facing ports. All Cardano traffic flows exclusively through your relays.

Monitoring

Dugite exposes Prometheus metrics on port 12798 by default. Key metrics to watch on a relay:

Metric                         What it tells you
dugite_peers_connected         Number of active peer connections. Should be > 0 at all times.
dugite_sync_progress_percent   Sync progress (10000 = 100%). Must be at 100% for the BP to produce blocks.
dugite_blocks_received_total   Total blocks received from peers. Should increase steadily.
dugite_slot_number             Current slot. Compare against network tip to verify sync.

curl -s http://localhost:12798/metrics | grep -E "peers_connected|sync_progress"

See Monitoring for the full list of available metrics and Grafana dashboard setup.

Next Steps

  • Block Producer — Set up key generation, operational certificates, and block production
  • Topology — Full topology format reference
  • Monitoring — Prometheus metrics and alerting

Block Producer

Dugite can operate as a block-producing node (stake pool). This requires KES keys, VRF keys, and an operational certificate.

Architecture

A block producer is never directly exposed to the public internet. Instead, it sits behind one or more relay nodes that handle all external network connectivity. The relays forward blocks and transactions to the BP over a private network, and the BP announces forged blocks back through the relays.

See the Complete Deployment section at the bottom of this page for the full architecture diagram and setup checklist.

Overview

A block producer is a node that has been registered as a stake pool and is capable of minting new blocks when it is elected as a slot leader. The block production pipeline involves:

  1. Slot leader check — Each slot, the node uses its VRF key and the epoch nonce to determine if it is elected to produce a block.
  2. Block forging — If elected, the node assembles a block from pending mempool transactions, signs it with the KES key, and includes the VRF proof.
  3. Block announcement — The forged block is propagated to connected peers via the N2N protocol.

Required Keys

Cold Keys (Offline)

Cold keys identify the stake pool and should be kept offline (air-gapped) after initial setup.

Generate cold keys using the CLI:

dugite-cli node key-gen \
  --cold-verification-key-file cold.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter

KES Keys (Hot)

KES (Key Evolving Signature) keys are rotated periodically. Each KES key is valid for a limited number of KES periods (typically 62 periods of 129600 slots each on mainnet, approximately 90 days total).
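The roughly-90-day figure follows directly from those parameters, given mainnet's one-second slots:

```shell
# 62 KES evolutions of 129600 slots each, at 1 slot per second.
SLOTS_PER_KES_PERIOD=129600
MAX_KES_EVOLUTIONS=62
TOTAL_SLOTS=$((MAX_KES_EVOLUTIONS * SLOTS_PER_KES_PERIOD))
echo "$TOTAL_SLOTS slots = $((TOTAL_SLOTS / 86400)) days"   # 8035200 slots = 93 days
```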

Generate KES keys:

dugite-cli node key-gen-kes \
  --verification-key-file kes.vkey \
  --signing-key-file kes.skey

VRF Keys

VRF (Verifiable Random Function) keys are used for slot leader election. They are generated once and do not need rotation.

Generate VRF keys:

dugite-cli node key-gen-vrf \
  --verification-key-file vrf.vkey \
  --signing-key-file vrf.skey

Operational Certificate

The operational certificate binds the cold key to the current KES key. It must be regenerated each time the KES key is rotated.

Issue an operational certificate:

dugite-cli node issue-op-cert \
  --kes-verification-key-file kes.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter \
  --kes-period <current-kes-period> \
  --out-file opcert.cert

The --kes-period should be set to the current KES period at the time of issuance. You can calculate the current KES period as:

current_kes_period = current_slot / slots_per_kes_period

On mainnet, slots_per_kes_period is 129600.
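On a synced node, one way to evaluate this is to read the current slot from the Prometheus endpoint and divide. This sketch assumes the metrics port is reachable locally and falls back to 0 if it is not:

```shell
SLOTS_PER_KES_PERIOD=129600   # mainnet value from the Shelley genesis
CURRENT_SLOT=$(curl -sf http://localhost:12798/metrics \
  | awk '/^dugite_slot_number /{print $2}')
CURRENT_SLOT=${CURRENT_SLOT:-0}
echo "current KES period: $((CURRENT_SLOT / SLOTS_PER_KES_PERIOD))"
```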

Running as Block Producer

Pass the key and certificate paths when starting the node:

dugite-node run \
  --config config.json \
  --topology topology.json \
  --database-path ./db \
  --socket-path ./node.sock \
  --host-addr 0.0.0.0 \
  --port 3001 \
  --shelley-kes-key kes.skey \
  --shelley-vrf-key vrf.skey \
  --shelley-operational-certificate opcert.cert

When all three arguments are provided, the node enters block production mode. Without them, it operates as a relay-only node.

Block Producer Topology

A block producer should not be directly exposed to the public internet. Instead, it should connect only to your relay nodes:

{
  "bootstrapPeers": null,
  "localRoots": [
    {
      "accessPoints": [
        { "address": "relay1.example.com", "port": 3001 },
        { "address": "relay2.example.com", "port": 3001 }
      ],
      "advertise": false,
      "hotValency": 2,
      "warmValency": 3,
      "trustable": true
    }
  ],
  "publicRoots": [{ "accessPoints": [], "advertise": false }],
  "useLedgerAfterSlot": -1
}

Key points:

  • No bootstrap peers — The block producer syncs exclusively through your relays.
  • No public roots — No connections to unknown peers.
  • Ledger peers disabled — useLedgerAfterSlot: -1 disables ledger-based peer discovery.
  • Only local roots — All connections are to your own relay nodes.

Leader Schedule

You can compute your pool's leader schedule for an epoch:

dugite-cli query leadership-schedule \
  --vrf-signing-key-file vrf.skey \
  --epoch-nonce <64-char-hex> \
  --epoch-start-slot <slot> \
  --epoch-length 432000 \
  --relative-stake 0.001 \
  --active-slot-coeff 0.05

This outputs all slots where your pool is elected to produce a block in the given epoch.

KES Key Rotation

KES keys must be rotated before they expire. The rotation process:

  1. Generate new KES keys:

    dugite-cli node key-gen-kes \
      --verification-key-file kes-new.vkey \
      --signing-key-file kes-new.skey
    
  2. Issue a new operational certificate with the new KES key (on the air-gapped machine):

    dugite-cli node issue-op-cert \
      --kes-verification-key-file kes-new.vkey \
      --cold-signing-key-file cold.skey \
      --operational-certificate-counter-file opcert.counter \
      --kes-period <current-kes-period> \
      --out-file opcert-new.cert
    
  3. Replace the KES key and certificate on the block producer and restart:

    cp kes-new.skey kes.skey
    cp opcert-new.cert opcert.cert
    # Restart the node
    

Important: Always rotate KES keys before they expire. If a KES key expires, your pool will stop producing blocks until a new key is issued.

Security Recommendations

  • Keep cold keys on an air-gapped machine. They are only needed to issue new operational certificates.
  • Restrict access to the block producer machine. Only your relay nodes should be able to connect.
  • Monitor your pool's block production. Use the Prometheus metrics endpoint to track dugite_blocks_forged_total.
  • Set up KES key rotation reminders well before expiry (2 weeks in advance is a good practice).
  • Use firewalls to ensure the block producer is not reachable from the public internet.

Snapshot Recovery & Block Forging Readiness

When a block producer starts up, several subsystems must be initialized before it can begin forging blocks. The path to readiness depends on how the node was bootstrapped.

Epoch Nonce Establishment

The epoch nonce is critical for VRF leader election. It is derived from accumulated VRF contributions across epoch boundaries:

  • After Mithril import + full replay from genesis: The nonce is computed correctly during replay. If the replay crosses at least one epoch boundary, nonce_established is set to true immediately and forging is enabled once the node reaches the chain tip.
  • After loading a ledger snapshot: The serialized snapshot does not carry live nonce tracking state. The node must observe at least one live epoch transition before nonce_established becomes true. Until then, the node logs Forge: skipping — epoch nonce not yet established and will not attempt to forge.

On preview testnet (epoch length 86,400 slots = 1 day), expect up to one day after snapshot load before forging is enabled.

Pool Stake Reconstruction

On startup, after loading a ledger snapshot, the node rebuilds the stake distribution from the UTxO store to ensure consistency:

  1. rebuild_stake_distribution() recomputes per-pool stake totals from the current UTxO set and delegation map.
  2. recompute_snapshot_pool_stakes() updates the mark/set/go snapshots so that the "set" snapshot (used for leader election) reflects the rebuilt distribution.

This runs automatically when the UTxO store is non-empty. After completion, the node logs the pool's stake in the "set" snapshot:

Block producer: pool stake in 'set' snapshot (used for leader election)
  pool_id=<hash>, pool_stake_lovelace=<n>, total_active_stake_lovelace=<n>, relative_stake=<f>

If your pool shows zero stake after startup, verify:

  • The pool registration certificate transaction is confirmed on-chain.
  • At least one stake address is delegated to the pool and that delegation is confirmed.
  • The UTxO store was properly attached (the node logs Rebuilding stake distribution from UTxO store on startup).
  • The "set" snapshot epoch is recent enough to include your pool's registration and delegation.

Epoch Numbering

Each network has its own epoch length defined in the Shelley genesis configuration:

Network   epoch_length   Approximate Duration
Mainnet   432,000        5 days
Preview   86,400         1 day
Preprod   432,000        5 days

When a ledger snapshot is loaded, the node recalculates the current epoch from the tip slot using the genesis parameters. If the snapshot was saved with incorrect epoch parameters (for example, using mainnet's default 432,000 instead of preview's 86,400), the epoch number baked into the snapshot will be wrong. The node detects this automatically and corrects it:

Snapshot epoch differs from computed epoch — correcting
  snapshot_epoch=<wrong>, correct_epoch=<right>, tip_slot=<slot>

Without this correction, apply_block would attempt to process hundreds of spurious epoch transitions, and the stake snapshots would land at wrong epochs, causing pool_stake=0 for block producers.
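The mismatch is simple arithmetic: for post-Shelley networks the epoch is the tip slot divided by the genesis epoch_length (a real implementation must also account for the Byron-era boundary, which this sketch ignores). The tip slot below is an arbitrary example value:

```python
# Illustrative only: on networks without a Byron era (such as preview),
# the epoch number is pure integer division by epoch_length.

def epoch_of(slot: int, epoch_length: int) -> int:
    return slot // epoch_length

tip_slot = 73_429_851  # example value, not a real preview tip
assert epoch_of(tip_slot, 86_400) == 849    # preview parameters
assert epoch_of(tip_slot, 432_000) == 169   # mainnet parameters: 680 epochs off
```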

Fork Recovery

When a block producer forges a block but another pool wins the slot battle (their block is adopted by the network instead), the forged block becomes orphaned. Dugite detects this situation during chain synchronization and recovers automatically.

How Fork Detection Works

During ChainSync, the node presents historical chain points (up to 10 ancestors, walked backwards through the volatile DB) to the upstream peer. If the local tip is an orphaned forged block that the peer does not recognize, the ancestor blocks provide fallback intersection points.

Recovery Cases

Case A: Full Reset. The intersection falls back to Origin despite having a non-trivial ledger tip. This means no peer recognizes any of the node's chain points. The node:

  1. Clears the volatile DB.
  2. Rolls back the ledger state to Origin.
  3. Disables strict VRF verification (so replay can proceed without rejecting blocks due to stale nonce).
  4. Reconnects and replays from the ImmutableDB.

Case B: Targeted ImmutableDB Replay. The intersection is behind the ledger tip but not at Origin. The node:

  1. Clears the volatile DB.
  2. Detaches the LSM UTxO store (switches to fast in-memory replay).
  3. Replays the ImmutableDB from genesis up to the intersection slot.
  4. Reattaches the UTxO store and resumes syncing from the canonical chain.

In both cases, orphaned forged blocks are not propagated to downstream peers. The node resumes normal operation on the canonical chain after recovery completes.

Troubleshooting Block Producer Issues

"Block producer has ZERO stake"

Block producer has ZERO stake in 'set' snapshot — will not be elected slot leader.

This warning appears at startup when the "set" snapshot contains no stake for your pool. Possible causes:

  • Pool not registered: Submit a pool registration certificate transaction and wait for it to be confirmed.
  • No delegations: At least one stake address must delegate to the pool. Submit a delegation certificate and wait for confirmation.
  • Snapshot too old: The "set" snapshot reflects stake from two epoch boundaries ago. A newly registered pool must wait 2 epoch transitions before appearing in the "set" snapshot.
  • UTxO store not attached: If the node started without a UTxO store, stake reconstruction is skipped. Check for the Rebuilding stake distribution from UTxO store log message.
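The two-epoch delay follows from the snapshot rotation: at each epoch boundary the live stake becomes "mark", the old "mark" becomes "set", and the old "set" becomes "go". A minimal sketch of that rotation (illustrative names, not Dugite's internal representation):

```python
# At each epoch transition: go <- set <- mark <- live stake.
def epoch_transition(snapshots, live_stake):
    mark, set_, go = snapshots
    return (live_stake, mark, set_)

snaps = (None, None, None)             # pool just registered; no snapshot yet
snaps = epoch_transition(snaps, "E0")  # registration-epoch stake enters 'mark'
snaps = epoch_transition(snaps, "E1")  # E0 stake reaches 'set': now electable
assert snaps == ("E1", "E0", None)
```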

"Forge: skipping — epoch nonce not yet established"

Forge: skipping — epoch nonce not yet established

This is expected after loading a ledger snapshot. The node must observe at least one live epoch transition to establish a reliable epoch nonce. On preview testnet, this takes up to 1 day (one epoch). On mainnet, up to 5 days.

No action required — the node will begin forging automatically after the next epoch boundary.

"VRF leader eligibility check failed"

VRF leader check failures during the first few epochs after a full replay are non-fatal and expected. The mark/set/go snapshot rotation means the "set" snapshot needs up to 3 epoch transitions to stabilize with correct stake distributions derived from the replayed state. During this window:

  • The node may compute incorrect leader eligibility for some slots.
  • Block verification for incoming blocks from peers is relaxed (strict VRF verification is disabled until nonce_established is true).
  • Your pool may miss some leader slots — this is temporary and self-correcting.

Pool Registered but No Forge Attempts

If your pool is registered on-chain but the node never logs any forge attempts:

  1. Check the "set" snapshot log: Look for the startup message Block producer: pool stake in 'set' snapshot. Verify that pool_stake_lovelace is greater than zero.
  2. Check nonce establishment: Look for Forge: skipping — epoch nonce not yet established. If present, wait for the next epoch boundary.
  3. Check the "set" snapshot availability: If you see Block producer: no 'set' snapshot available — leader election disabled until epoch transition, the node has not yet completed enough epoch transitions. Wait for at least 2 epoch boundaries.
  4. Verify key files: Ensure --shelley-kes-key, --shelley-vrf-key, and --shelley-operational-certificate are all provided and point to valid files. Without all three, the node runs in relay-only mode.
  5. Check KES period: If the KES key has expired (current KES period exceeds the operational certificate's start period plus maxKESEvolutions), rotate the KES key and issue a new operational certificate.
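The KES expiry check in step 5 can be sketched as follows; the slotsPerKESPeriod and maxKESEvolutions values shown are mainnet's genesis values and are assumptions here, not read from a node:

```python
# Hedged sketch: a KES key is expired once the current KES period reaches
# the opcert's start period plus maxKESEvolutions.

def kes_expired(current_slot, opcert_start_period,
                slots_per_kes_period=129_600, max_kes_evolutions=62):
    current_period = current_slot // slots_per_kes_period
    return current_period >= opcert_start_period + max_kes_evolutions

# An opcert issued at KES period 400 is usable through period 461:
assert not kes_expired(461 * 129_600, 400)
assert kes_expired(462 * 129_600, 400)
```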

Complete Deployment

A full stake pool deployment consists of one block producer and one or more relay nodes working together:

graph TB
    subgraph Public Network
        Peers["Cardano Peers"]
    end

    subgraph Your Infrastructure
        subgraph Relay Tier
            R1["Relay 1<br/>Port 3001<br/>Public IP"]
            R2["Relay 2<br/>Port 3001<br/>Public IP"]
        end

        subgraph Private Network
            BP["Block Producer<br/>Port 3001<br/>Private IP<br/>KES + VRF + OpCert"]
        end
    end

    Peers <-->|N2N| R1
    Peers <-->|N2N| R2
    R1 <-->|Private| BP
    R2 <-->|Private| BP

Deployment Checklist

  1. Set up relay nodes (Relay Node guide)

    • Install Dugite on relay machines
    • Import Mithril snapshot for fast initial sync
    • Configure relay topology with bootstrap peers and BP as local root
    • Open port 3001 to the public
    • Start relay nodes and verify they sync to tip
  2. Set up the block producer (this page)

    • Install Dugite on the BP machine
    • Import Mithril snapshot
    • Generate cold keys, VRF keys, and KES keys
    • Issue an operational certificate
    • Configure BP topology with relays as local roots (no public peers)
    • Restrict firewall to relay IPs only
    • Start the BP node with --shelley-kes-key, --shelley-vrf-key, --shelley-operational-certificate
  3. Register the stake pool on-chain (requires a transaction with pool registration certificate)

  4. Verify block production

    • Confirm sync progress is 100% on all nodes
    • Check peers_connected metrics on relays and BP
    • Monitor blocks_forged metric on the BP after epoch transition
    • Set up monitoring and KES rotation reminders

Kubernetes Deployment

Dugite includes a Helm chart for deploying to Kubernetes as either a relay node or a block producer.

Prerequisites

  • Kubernetes 1.25+
  • Helm 3.x
  • A StorageClass that supports ReadWriteOnce persistent volumes

Quick Start

Deploy a relay node on the preview testnet:

helm install dugite-relay ./charts/dugite-node \
  --set network.name=preview

This will:

  1. Run a Mithril snapshot import (init container) for fast bootstrap
  2. Start the node syncing with the preview testnet
  3. Create a 100Gi persistent volume for the chain database
  4. Expose Prometheus metrics on port 12798

Chart Reference

Node Role

The chart supports two deployment modes:

# Relay node (default)
role: relay

# Block producer
role: producer

Network Selection

network:
  name: preview    # mainnet, preview, or preprod
  port: 3001       # N2N port
  hostAddr: "0.0.0.0"

Network magic is derived automatically from the network name. Override with network.magic if needed.

Persistence

persistence:
  enabled: true
  storageClass: ""    # Use default StorageClass
  size: 100Gi         # 100Gi for testnet, 500Gi+ for mainnet
  accessMode: ReadWriteOnce
  existingClaim: ""   # Use an existing PVC

Resources

resources:
  requests:
    cpu: "1"
    memory: 4Gi
  limits:
    cpu: "4"
    memory: 16Gi

For mainnet, increase memory limits to 24-32Gi during initial sync and ledger replay.

Mithril Import

mithril:
  enabled: true     # Run Mithril import on first startup

The init container is idempotent — it skips the import on subsequent restarts if blocks already exist.

Ledger Replay

ledger:
  replayLimit: null    # null = unlimited (replay all blocks)
  pipelineDepth: 150   # Chain sync pipeline depth

After Mithril import, the node replays all imported blocks through the ledger to build correct UTxO state, delegations, and protocol parameters. Set replayLimit: 0 to skip replay for faster startup (at the cost of incomplete ledger state).

Metrics and Monitoring

metrics:
  enabled: true
  port: 12798
  serviceMonitor:
    enabled: false     # Set true if using Prometheus Operator
    interval: 30s
    labels: {}

When serviceMonitor.enabled is true, the chart creates a ServiceMonitor resource for automatic Prometheus scraping.

Available metrics include sync_progress_percent, blocks_applied_total, utxo_count, epoch_number, peers_connected, and more. See Monitoring for the full list.

Relay Node Deployment

A relay node connects to the Cardano network, syncs blocks, and serves them to connected peers and local clients.

Minimal Relay

helm install dugite-relay ./charts/dugite-node \
  --set network.name=mainnet \
  --set persistence.size=500Gi

Relay with Custom Topology

helm install dugite-relay ./charts/dugite-node \
  --set network.name=mainnet \
  --set persistence.size=500Gi \
  -f relay-values.yaml

relay-values.yaml:

topology:
  bootstrapPeers:
    - address: relays-new.cardano-mainnet.iohk.io
      port: 3001
  localRoots:
    - accessPoints:
        - address: dugite-producer.default.svc.cluster.local
          port: 3001
      advertise: false
      trustable: true
      valency: 1
  publicRoots:
    - accessPoints:
        - address: relays-new.cardano-mainnet.iohk.io
          port: 3001
      advertise: false
  useLedgerAfterSlot: 110332800

Relay with Prometheus Operator

helm install dugite-relay ./charts/dugite-node \
  --set network.name=mainnet \
  --set metrics.serviceMonitor.enabled=true \
  --set metrics.serviceMonitor.labels.release=prometheus

Block Producer Deployment

A block producer creates blocks when elected as slot leader. It requires KES, VRF, and operational certificate keys.

Create Keys Secret

First, create a Kubernetes secret with your block producer keys:

kubectl create secret generic dugite-producer-keys \
  --from-file=kes.skey=kes.skey \
  --from-file=vrf.skey=vrf.skey \
  --from-file=node.cert=node.cert

Deploy the Producer

helm install dugite-producer ./charts/dugite-node \
  --set role=producer \
  --set network.name=mainnet \
  --set producer.existingSecret=dugite-producer-keys \
  --set persistence.size=500Gi

Producer Security

When role=producer, the chart automatically creates a NetworkPolicy that:

  • Restricts N2N ingress to pods labeled app.kubernetes.io/component: relay
  • Allows metrics scraping from any pod in the cluster

Block producers should never be exposed directly to the internet.

Producer + Relay Architecture

A typical production deployment uses one or more relay nodes that shield the block producer:

graph LR
    Internet[Cardano Network] --> R1[Relay 1]
    Internet --> R2[Relay 2]
    R1 --> BP[Block Producer]
    R2 --> BP
    BP -. blocks .-> R1
    BP -. blocks .-> R2

Deploy both:

# Deploy the block producer
helm install dugite-producer ./charts/dugite-node \
  --set role=producer \
  --set network.name=mainnet \
  --set producer.existingSecret=dugite-producer-keys \
  -f producer-values.yaml

# Deploy relay(s) pointing to the producer
helm install dugite-relay ./charts/dugite-node \
  --set role=relay \
  --set network.name=mainnet \
  -f relay-values.yaml

producer-values.yaml:

topology:
  bootstrapPeers: []
  localRoots:
    - accessPoints:
        - address: dugite-relay-dugite-node.default.svc.cluster.local
          port: 3001
      advertise: false
      trustable: true
      valency: 1
  publicRoots: []
  useLedgerAfterSlot: -1

relay-values.yaml:

topology:
  bootstrapPeers:
    - address: relays-new.cardano-mainnet.iohk.io
      port: 3001
  localRoots:
    - accessPoints:
        - address: dugite-producer-dugite-node.default.svc.cluster.local
          port: 3001
      advertise: false
      trustable: true
      valency: 1
  publicRoots:
    - accessPoints:
        - address: relays-new.cardano-mainnet.iohk.io
          port: 3001
      advertise: false
  useLedgerAfterSlot: 110332800

Verifying the Deployment

Check pod status:

kubectl get pods -l app.kubernetes.io/name=dugite-node

View logs:

kubectl logs -f deploy/dugite-relay-dugite-node

Query the node tip:

kubectl exec deploy/dugite-relay-dugite-node -- \
  dugite-cli query tip --testnet-magic 2

Check metrics:

kubectl port-forward svc/dugite-relay-dugite-node 12798:12798
curl -s http://localhost:12798/metrics | grep sync_progress

Configuration Reference

All configurable values with defaults:

Parameter                         Default                         Description
role                              relay                           Node role: relay or producer
image.repository                  ghcr.io/michaeljfazio/dugite    Container image
image.tag                         Chart appVersion                Image tag
network.name                      preview                         Network: mainnet, preview, preprod
network.port                      3001                            N2N port
mithril.enabled                   true                            Run Mithril import on first start
ledger.replayLimit                null                            Max blocks to replay (null = unlimited)
ledger.pipelineDepth              150                             Chain sync pipeline depth
persistence.enabled               true                            Enable persistent storage
persistence.size                  100Gi                           Volume size
metrics.enabled                   true                            Enable Prometheus metrics
metrics.port                      12798                           Metrics port
metrics.serviceMonitor.enabled    false                           Create ServiceMonitor
producer.existingSecret           ""                              Secret with KES/VRF/cert keys
resources.requests.cpu            1                               CPU request
resources.requests.memory         4Gi                             Memory request
resources.limits.memory           16Gi                            Memory limit

CLI Overview

Dugite provides dugite-cli, a cardano-cli compatible command-line interface for interacting with a running Dugite node and managing keys, transactions, and governance.

Binary

dugite-cli [COMMAND] [OPTIONS]

Command Groups

Command          Description
address          Address generation and manipulation
key              Payment and stake key generation
transaction      Transaction building, signing, and submission
query            Node queries (tip, UTxO, protocol parameters, etc.)
stake-address    Stake address registration, delegation, and vote delegation
stake-pool       Stake pool operations (retirement certificates)
governance       Conway governance (DRep, voting, proposals)
node             Node key operations (cold keys, KES, VRF, operational certificates)

Common Patterns

Socket Path

Most commands that interact with a running node require --socket-path to specify the Unix domain socket:

dugite-cli query tip --socket-path ./node.sock

The default socket path is node.sock in the current directory.

Testnet Magic

When querying a node on a testnet, pass the --testnet-magic flag:

dugite-cli query tip --socket-path ./node.sock --testnet-magic 2

For mainnet, --testnet-magic is not needed (defaults to mainnet magic 764824073).

Text Envelope Format

Keys, certificates, and transactions are stored in the cardano-node "text envelope" JSON format:

{
  "type": "PaymentSigningKeyShelley_ed25519",
  "description": "Payment Signing Key",
  "cborHex": "5820..."
}

This format is interchangeable with files produced by cardano-cli.

Output Files

Commands that produce artifacts use --out-file:

dugite-cli transaction build ... --out-file tx.body
dugite-cli transaction sign ... --out-file tx.signed

Help

Every command supports --help:

dugite-cli --help
dugite-cli transaction --help
dugite-cli transaction build --help

dugite-node Reference

dugite-node is the main Dugite node binary. It supports two subcommands: run (start the node) and mithril-import (import a Mithril snapshot for fast initial sync).

run

Start the Dugite node:

dugite-node run [OPTIONS]

Options

Flag                                 Default                         Description
--config                             config/mainnet-config.json      Path to the node configuration file
--topology                           config/mainnet-topology.json    Path to the topology file
--database-path                      db                              Path to the database directory
--socket-path                        node.sock                       Unix domain socket path for N2C (local client) connections
--port                               3001                            TCP port for N2N (node-to-node) connections
--host-addr                          0.0.0.0                         Host address to bind to
--metrics-port                       12798                           Prometheus metrics port (set to 0 to disable)
--shelley-kes-key                                                    Path to the KES signing key (enables block production)
--shelley-vrf-key                                                    Path to the VRF signing key (enables block production)
--shelley-operational-certificate                                    Path to the operational certificate (enables block production)
--log-output                         stdout                          Log output target: stdout, file, or journald. Can be specified multiple times.
--log-format                         text                            Log format: text (human-readable) or json (structured)
--log-level                          info                            Log level (trace, debug, info, warn, error). Overridden by RUST_LOG.
--log-dir                            logs                            Directory for log files (used with --log-output file)
--log-file-rotation                  daily                           Log file rotation strategy: daily, hourly, or never
--log-no-color                       false                           Disable ANSI colors in stdout output
--mempool-max-tx                     16384                           Maximum number of transactions in the mempool
--mempool-max-bytes                  536870912                       Maximum mempool size in bytes (default 512 MB)
--snapshot-max-retained              2                               Maximum number of ledger snapshots to retain on disk
--snapshot-bulk-min-blocks           50000                           Minimum blocks between bulk-sync snapshots
--snapshot-bulk-min-secs             360                             Minimum seconds between bulk-sync snapshots
--storage-profile                    high-memory                     Storage profile: ultra-memory (32GB), high-memory (16GB), low-memory (8GB), or minimal (4GB)
--immutable-index-type                                               Override block index type: in-memory or mmap
--utxo-backend                                                       Override UTxO backend: in-memory or lsm
--utxo-memtable-size-mb                                              Override LSM memtable size in MB
--utxo-block-cache-size-mb                                           Override LSM block cache size in MB
--utxo-bloom-filter-bits                                             Override LSM bloom filter bits per key

Relay Node (default)

Run as a relay node with no block production keys:

dugite-node run \
  --config config/preview-config.json \
  --topology config/preview-topology.json \
  --database-path ./db-preview \
  --socket-path ./node.sock \
  --host-addr 0.0.0.0 \
  --port 3001

Block Producer

Run as a block producer by providing all three key/certificate paths:

dugite-node run \
  --config config/preview-config.json \
  --topology config/preview-topology.json \
  --database-path ./db-preview \
  --socket-path ./node.sock \
  --host-addr 0.0.0.0 \
  --port 3001 \
  --shelley-kes-key ./keys/kes.skey \
  --shelley-vrf-key ./keys/vrf.skey \
  --shelley-operational-certificate ./keys/opcert.cert

When all three block producer flags are provided, the node enters block production mode. The cold signing key is not needed at runtime — the cold verification key is extracted from the operational certificate, matching cardano-node behavior.

If any of the three flags is missing, the node runs in relay-only mode.

Environment Variables

Variable                 Default    Description
DUGITE_PIPELINE_DEPTH    300        ChainSync pipeline depth (number of blocks requested ahead)
RUST_LOG                 info       Log level filter (e.g., debug, info, warn, dugite_node=debug). Overrides --log-level.

See Logging for details on output targets, file rotation, and per-crate filtering.

Configuration File

The --config file follows the same JSON format as cardano-node. Key fields:

{
  "Protocol": "Cardano",
  "RequiresNetworkMagic": "RequiresMagic",
  "ByronGenesisFile": "byron-genesis.json",
  "ShelleyGenesisFile": "shelley-genesis.json",
  "AlonzoGenesisFile": "alonzo-genesis.json",
  "ConwayGenesisFile": "conway-genesis.json"
}

Genesis file paths are resolved relative to the directory containing the config file.

Metrics

When --metrics-port is non-zero, Prometheus metrics are served at http://localhost:<port>/metrics. See Monitoring for the full list of available metrics.

mithril-import

Import a Mithril snapshot for fast initial sync. This downloads and verifies a certified snapshot from a Mithril aggregator, then imports all blocks into the local database.

dugite-node mithril-import [OPTIONS]

Options

Flag                   Default      Description
--network-magic        764824073    Network magic value
--database-path        db           Path to the database directory
--temp-dir                          Temporary directory for download and extraction (uses system temp if omitted)
--log-output           stdout       Log output target: stdout, file, or journald. Can be specified multiple times.
--log-format           text         Log format: text (human-readable) or json (structured).
--log-level            info         Log level (trace, debug, info, warn, error). Overridden by RUST_LOG.
--log-dir              logs         Directory for log files (used with --log-output file)
--log-file-rotation    daily        Log file rotation strategy: daily, hourly, or never
--log-no-color         false        Disable ANSI colors in stdout output

Network Magic Values

Network    Magic
Mainnet    764824073
Preview    2
Preprod    1

Example: Preview Testnet

dugite-node mithril-import \
  --network-magic 2 \
  --database-path ./db-preview

# Then start the node to sync from the snapshot to tip
dugite-node run \
  --config config/preview-config.json \
  --topology config/preview-topology.json \
  --database-path ./db-preview \
  --socket-path ./node.sock

The import process:

  1. Downloads the latest snapshot from the Mithril aggregator
  2. Verifies the snapshot digest (SHA256)
  3. Extracts and parses immutable chunk files
  4. Imports blocks into ChainDB with CRC32 verification
  5. Supports resume — skips blocks already in the database

On preview testnet, importing ~4M blocks takes approximately 2 minutes.

Key Generation

Dugite CLI supports generating all key types needed for Cardano operations.

Payment Keys

Generate an Ed25519 key pair for payments:

dugite-cli key generate-payment-key \
  --signing-key-file payment.skey \
  --verification-key-file payment.vkey

Output files:

  • payment.skey — Payment signing key (keep secret)
  • payment.vkey — Payment verification key (safe to share)

Stake Keys

Generate an Ed25519 key pair for staking:

dugite-cli key generate-stake-key \
  --signing-key-file stake.skey \
  --verification-key-file stake.vkey

Output files:

  • stake.skey — Stake signing key
  • stake.vkey — Stake verification key

Verification Key Hash

Compute the Blake2b-224 hash of any verification key:

dugite-cli key verification-key-hash \
  --verification-key-file payment.vkey

This outputs the 28-byte key hash in hexadecimal, used in addresses and certificates.
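The same hash can be reproduced with any Blake2b implementation — Python's hashlib is shown below, with 32 zero bytes standing in for real key material:

```python
# Blake2b-224 of the raw 32-byte Ed25519 verification key;
# digest_size=28 selects the 224-bit variant.
import hashlib

vkey_bytes = bytes(32)  # placeholder; a real vkey is 32 raw Ed25519 bytes
key_hash = hashlib.blake2b(vkey_bytes, digest_size=28).hexdigest()
assert len(key_hash) == 56  # 28 bytes -> 56 hex characters
```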

DRep Keys

Generate keys for a Delegated Representative (Conway governance):

dugite-cli governance drep key-gen \
  --signing-key-file drep.skey \
  --verification-key-file drep.vkey

Get the DRep ID:

# Bech32 format (default)
dugite-cli governance drep id \
  --drep-verification-key-file drep.vkey

# Hex format
dugite-cli governance drep id \
  --drep-verification-key-file drep.vkey \
  --output-format hex

Node Keys

Cold Keys

Generate cold keys and an operational certificate issue counter:

dugite-cli node key-gen \
  --cold-verification-key-file cold.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter

KES Keys

Generate Key Evolving Signature keys (rotated periodically):

dugite-cli node key-gen-kes \
  --verification-key-file kes.vkey \
  --signing-key-file kes.skey

VRF Keys

Generate Verifiable Random Function keys (for slot leader election):

dugite-cli node key-gen-vrf \
  --verification-key-file vrf.vkey \
  --signing-key-file vrf.skey

Operational Certificate

Issue an operational certificate binding the cold key to the current KES key:

dugite-cli node issue-op-cert \
  --kes-verification-key-file kes.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter \
  --kes-period 400 \
  --out-file opcert.cert

Address Generation

Payment Address

Build a payment address from keys:

# Enterprise address (no staking)
dugite-cli address build \
  --payment-verification-key-file payment.vkey \
  --testnet-magic 2

# Base address (with staking)
dugite-cli address build \
  --payment-verification-key-file payment.vkey \
  --stake-verification-key-file stake.vkey \
  --testnet-magic 2

# Mainnet address
dugite-cli address build \
  --payment-verification-key-file payment.vkey \
  --stake-verification-key-file stake.vkey \
  --mainnet

Key File Format

All keys are stored in the cardano-node text envelope format:

{
  "type": "PaymentSigningKeyShelley_ed25519",
  "description": "Payment Signing Key",
  "cborHex": "5820a1b2c3d4..."
}

The cborHex field contains the CBOR-encoded key bytes. The type field identifies the key type and is used for validation when loading keys.

Key files generated by Dugite are compatible with cardano-cli and vice versa.
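As an illustration, the cborHex of an Ed25519 key decodes as a plain CBOR byte string (the repeated "ab" bytes below are placeholder key material):

```python
# 0x58 = CBOR major type 2 (byte string) with a one-byte length; 0x20 = 32.
cbor = bytes.fromhex("5820" + "ab" * 32)
assert cbor[0] == 0x58 and cbor[1] == 0x20
key_bytes = cbor[2:]
assert len(key_bytes) == 32
```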

Complete Workflow Example

Generate all keys needed for a basic wallet:

# 1. Generate payment keys
dugite-cli key generate-payment-key \
  --signing-key-file payment.skey \
  --verification-key-file payment.vkey

# 2. Generate stake keys
dugite-cli key generate-stake-key \
  --signing-key-file stake.skey \
  --verification-key-file stake.vkey

# 3. Build a testnet address
dugite-cli address build \
  --payment-verification-key-file payment.vkey \
  --stake-verification-key-file stake.vkey \
  --testnet-magic 2

# 4. Get the payment key hash
dugite-cli key verification-key-hash \
  --verification-key-file payment.vkey

Transactions

Dugite CLI supports the full transaction lifecycle: building, signing, submitting, and inspecting transactions.

Building a Transaction

dugite-cli transaction build \
  --tx-in <tx_hash>#<index> \
  --tx-out <address>+<lovelace> \
  --change-address <address> \
  --fee <lovelace> \
  --out-file tx.body

Arguments

Argument                Description
--tx-in                 Transaction input in tx_hash#index format. Can be specified multiple times
--tx-out                Transaction output in address+lovelace format. Can be specified multiple times
--change-address        Address to receive change
--fee                   Fee in lovelace (default: 200000)
--ttl                   Time-to-live slot number (optional)
--certificate-file      Path to a certificate file to include (can be repeated)
--withdrawal            Withdrawal in stake_address+lovelace format (can be repeated)
--metadata-json-file    Path to a JSON metadata file (optional)
--out-file              Output file for the transaction body

Example: Simple ADA Transfer

dugite-cli transaction build \
  --tx-in "abc123...#0" \
  --tx-out "addr_test1qz...+5000000" \
  --change-address "addr_test1qp..." \
  --fee 200000 \
  --ttl 50000000 \
  --out-file tx.body

Multi-Asset Outputs

To include native tokens in an output, use the extended format:

address+lovelace+"policy_id.asset_name quantity"

Example:

dugite-cli transaction build \
  --tx-in "abc123...#0" \
  --tx-out 'addr_test1qz...+2000000+"a1b2c3...d4e5f6.4d79546f6b656e 100"' \
  --change-address "addr_test1qp..." \
  --fee 200000 \
  --out-file tx.body

Multiple tokens can be separated with + inside the quoted string:

"policy1.asset1 100+policy2.asset2 50"

Including Certificates

dugite-cli transaction build \
  --tx-in "abc123...#0" \
  --tx-out "addr_test1qz...+5000000" \
  --change-address "addr_test1qp..." \
  --fee 200000 \
  --certificate-file stake-reg.cert \
  --certificate-file stake-deleg.cert \
  --out-file tx.body

Including Metadata

Create a metadata JSON file with integer keys:

{
  "674": {
    "msg": ["Hello, Cardano!"]
  }
}

dugite-cli transaction build \
  --tx-in "abc123...#0" \
  --tx-out "addr_test1qz...+5000000" \
  --change-address "addr_test1qp..." \
  --fee 200000 \
  --metadata-json-file metadata.json \
  --out-file tx.body

Signing a Transaction

dugite-cli transaction sign \
  --tx-body-file tx.body \
  --signing-key-file payment.skey \
  --out-file tx.signed

Multiple signing keys can be provided:

dugite-cli transaction sign \
  --tx-body-file tx.body \
  --signing-key-file payment.skey \
  --signing-key-file stake.skey \
  --out-file tx.signed

Submitting a Transaction

dugite-cli transaction submit \
  --tx-file tx.signed \
  --socket-path ./node.sock

The node validates the transaction (Phase-1 and Phase-2 for Plutus transactions) and, if valid, adds it to the mempool for propagation.

Viewing a Transaction

dugite-cli transaction view --tx-file tx.signed

Output includes:

  • Transaction type
  • CBOR size
  • Transaction hash
  • Number of inputs and outputs
  • Fee
  • TTL (if set)

Transaction ID

Compute the transaction hash:

dugite-cli transaction txid --tx-file tx.body

Works with both transaction body files and signed transaction files.

Calculate Minimum Fee

dugite-cli transaction calculate-min-fee \
  --tx-body-file tx.body \
  --witness-count 2 \
  --protocol-params-file protocol-params.json

The fee calculation accounts for:

  • Base fee: txFeeFixed + txFeePerByte * tx_size
  • Script execution: executionUnitPrices * total_ExUnits for any Plutus witnesses
  • Reference script surcharge: CIP-0112 tiered fee for reference scripts (25KiB tiers, 1.2x multiplier per tier)
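The components above can be sketched as follows. The parameter values (txFeeFixed=155381, txFeePerByte=44, and a reference-script base cost of 15 lovelace per byte) are assumed mainnet-style values for illustration, not read from a live node, and Plutus execution costs are omitted:

```python
# Hedged sketch of the fee formula: base fee plus the tiered
# reference-script surcharge (25 KiB tiers, 1.2x multiplier per tier).

def ref_script_fee(size_bytes, base_cost_per_byte=15, tier=25_600):
    fee, cost = 0, float(base_cost_per_byte)
    while size_bytes > 0:
        chunk = min(size_bytes, tier)
        fee += int(chunk * cost)   # each successive tier costs 1.2x more
        cost *= 1.2
        size_bytes -= chunk
    return fee

def min_fee(tx_size, tx_fee_fixed=155_381, tx_fee_per_byte=44,
            ref_script_bytes=0):
    return (tx_fee_fixed + tx_fee_per_byte * tx_size
            + ref_script_fee(ref_script_bytes))

assert min_fee(300) == 155_381 + 44 * 300  # no scripts: base fee only
```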

To get the current protocol parameters:

dugite-cli query protocol-parameters \
  --socket-path ./node.sock \
  --out-file protocol-params.json

Calculate Minimum Required UTxO

Compute the minimum lovelace required for a transaction output to satisfy the minUTxOValue protocol parameter:

dugite-cli transaction calculate-min-required-utxo \
  --protocol-params-file protocol-params.json \
  --tx-out "addr_test1qz...+0+\"policy1.asset1 100\""

Output:

Minimum required lovelace: 1724100

This is particularly useful when constructing outputs that carry native tokens, since the minimum lovelace depends on the byte-size of the value bundle (number of policy IDs, asset names, and quantities).
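As a rough sketch of the Babbage-era rule (assuming a utxoCostPerByte of 4310 lovelace, mainnet's value, and the constant 160-byte overhead added per output):

```python
# Hedged sketch: min lovelace = (160 + serialized output size) * utxoCostPerByte.
# The serialized size is what grows with extra policy IDs and asset names.

def min_required_utxo(output_size_bytes, utxo_cost_per_byte=4_310):
    return (160 + output_size_bytes) * utxo_cost_per_byte

# An ada-only output serializes to roughly 67 bytes (illustrative):
assert min_required_utxo(67) == 978_370
```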

Creating Witnesses

For multi-signature workflows, you can create witnesses separately and assemble them:

Create a Witness

dugite-cli transaction witness \
  --tx-body-file tx.body \
  --signing-key-file payment.skey \
  --out-file payment.witness

Assemble a Transaction

dugite-cli transaction assemble \
  --tx-body-file tx.body \
  --witness-file payment.witness \
  --witness-file stake.witness \
  --out-file tx.signed

Policy ID

Compute the policy ID (Blake2b-224 hash) of a native script:

dugite-cli transaction policyid --script-file policy.script

Complete Workflow

# 1. Query UTxOs to find inputs
dugite-cli query utxo \
  --address addr_test1qz... \
  --socket-path ./node.sock \
  --testnet-magic 2

# 2. Get protocol parameters for fee calculation
dugite-cli query protocol-parameters \
  --socket-path ./node.sock \
  --testnet-magic 2 \
  --out-file pp.json

# 3. Build the transaction
dugite-cli transaction build \
  --tx-in "abc123...#0" \
  --tx-out "addr_test1qr...+5000000" \
  --change-address "addr_test1qz..." \
  --fee 200000 \
  --out-file tx.body

# 4. Calculate the exact fee
dugite-cli transaction calculate-min-fee \
  --tx-body-file tx.body \
  --witness-count 1 \
  --protocol-params-file pp.json

# 5. Rebuild with the correct fee (repeat step 3 with updated --fee)

# 6. Sign
dugite-cli transaction sign \
  --tx-body-file tx.body \
  --signing-key-file payment.skey \
  --out-file tx.signed

# 7. Submit
dugite-cli transaction submit \
  --tx-file tx.signed \
  --socket-path ./node.sock

Queries

Dugite CLI provides a comprehensive set of queries against a running node via the N2C (Node-to-Client) protocol over a Unix domain socket.

Chain Tip

Query the current chain tip:

dugite-cli query tip --socket-path ./node.sock

For testnets:

dugite-cli query tip --socket-path ./node.sock --testnet-magic 2

Output:

{
    "slot": 73429851,
    "hash": "a1b2c3d4e5f6...",
    "block": 2847392,
    "epoch": 170,
    "era": "Conway",
    "syncProgress": "99.87"
}

UTxO Query

Query UTxOs at a specific address:

dugite-cli query utxo \
  --address addr_test1qz... \
  --socket-path ./node.sock \
  --testnet-magic 2

Output:

TxHash#Ix                                                            Datum           Lovelace
------------------------------------------------------------------------------------------------
a1b2c3d4...#0                                                           no            5000000
e5f6a7b8...#1                                                          yes           10000000

Total UTxOs: 2

Protocol Parameters

Query current protocol parameters:

# Print to stdout
dugite-cli query protocol-parameters \
  --socket-path ./node.sock

# Save to file
dugite-cli query protocol-parameters \
  --socket-path ./node.sock \
  --out-file protocol-params.json

The output is a JSON object containing all active protocol parameters, including fee settings, execution unit limits, and governance thresholds.

Stake Distribution

Query the stake distribution across all registered pools:

dugite-cli query stake-distribution \
  --socket-path ./node.sock

Output:

PoolId                                                             Stake (lovelace)   Pledge (lovelace)
----------------------------------------------------------------------------------------------------------
pool1abc...                                                        15234892000000      500000000000
pool1def...                                                         8923451000000      250000000000

Total pools: 3200

Stake Address Info

Query delegation and rewards for a stake address:

dugite-cli query stake-address-info \
  --address stake_test1uz... \
  --socket-path ./node.sock \
  --testnet-magic 2

Output:

[
  {
    "address": "stake_test1uz...",
    "delegation": "pool1abc...",
    "rewardAccountBalance": 5234000
  }
]

Stake Pools

List all registered stake pools with their parameters:

dugite-cli query stake-pools \
  --socket-path ./node.sock

Output:

PoolId                                                      Pledge (ADA)    Cost (ADA)   Margin
----------------------------------------------------------------------------------------------------
pool1abc...                                                   500.000000     340.000000    1.00%
pool1def...                                                   250.000000     340.000000    2.50%

Total pools: 3200

Pool Parameters

Query detailed parameters for a specific pool:

dugite-cli query pool-params \
  --socket-path ./node.sock \
  --stake-pool-id pool1abc...

Stake Snapshots

Query the mark/set/go stake snapshots:

dugite-cli query stake-snapshot \
  --socket-path ./node.sock

# Filter by pool
dugite-cli query stake-snapshot \
  --socket-path ./node.sock \
  --stake-pool-id pool1abc...

Governance State (Conway)

Query the overall governance state:

dugite-cli query gov-state --socket-path ./node.sock

Output:

Governance State (Conway)
========================
Treasury:         1234567890 ADA
Registered DReps: 456
Committee Members: 7
Active Proposals: 12

Proposals:
Type                 TxId     Yes     No  Abstain
----------------------------------------------------
InfoAction           a1b2c3#0    42     3        5
TreasuryWithdrawals  d4e5f6#1    28    12        8

DRep State (Conway)

Query registered DReps:

# All DReps
dugite-cli query drep-state --socket-path ./node.sock

# Specific DRep by key hash
dugite-cli query drep-state \
  --socket-path ./node.sock \
  --drep-key-hash a1b2c3d4...

Output:

DRep State (Conway)
===================
Total DReps: 456

Credential Hash                                                    Deposit (ADA)    Epoch
--------------------------------------------------------------------------------------------
a1b2c3d4...                                                                500      412
  Anchor: https://example.com/drep-metadata.json

Committee State (Conway)

Query the constitutional committee:

dugite-cli query committee-state --socket-path ./node.sock

Output:

Constitutional Committee State (Conway)
=======================================
Active Members: 7
Resigned Members: 1

Cold Credential                                                    Hot Credential
--------------------------------------------------------------------------------------------------------------------------------------
a1b2c3d4...                                                        e5f6a7b8...

Resigned:
  d4e5f6a7...

Transaction Mempool

Query the node's transaction mempool:

# Mempool info (size, capacity, tx count)
dugite-cli query tx-mempool info --socket-path ./node.sock

# Check if a specific transaction is in the mempool
dugite-cli query tx-mempool has-tx \
  --socket-path ./node.sock \
  --tx-id a1b2c3d4...

Info output:

Mempool snapshot at slot 73429851:
  Capacity:     2000000 bytes
  Size:         45320 bytes
  Transactions: 12

Treasury

Query the treasury and reserves:

dugite-cli query treasury --socket-path ./node.sock

Output:

Account State
=============
Treasury: 1234567 ADA
Reserves: 9876543 ADA

Constitution (Conway)

Query the current constitution:

dugite-cli query constitution --socket-path ./node.sock

Output:

Constitution
============
URL:         https://constitution.gov/hash.json
Data Hash:   a1b2c3d4e5f6...
Script Hash: none

Ratification State (Conway)

Query the ratification state (enacted/expired proposals from the most recent epoch transition):

dugite-cli query ratify-state --socket-path ./node.sock

Output:

Ratification State
==================
Enacted proposals: 1
  a1b2c3d4e5f6...#0
Expired proposals: 2
  d4e5f6a7b8c9...#1
  e5f6a7b8c9d0...#0
Delayed:           false

Slot Number

Convert a wall-clock time to a Cardano slot number:

dugite-cli query slot-number \
  --socket-path ./node.sock \
  --testnet-magic 2 \
  --utc-time "2026-03-20T12:00:00Z"

Output:

Slot: 73851200

This is useful for computing TTL values or verifying that a specific point in time falls within a given epoch.
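The conversion itself is simple arithmetic once the network's system start and slot length are known. A sketch in Python; the system start below is illustrative only (read the real "systemStart" and "slotLength" values from the network's shelley-genesis.json):

```python
from datetime import datetime, timezone

# Illustrative values; the real ones come from shelley-genesis.json
# ("systemStart" and "slotLength").
SYSTEM_START = datetime(2024, 1, 1, 0, 0, 0, tzinfo=timezone.utc)
SLOT_LENGTH_SECONDS = 1

def utc_time_to_slot(utc_time: str) -> int:
    """Convert an ISO-8601 UTC timestamp to a slot number."""
    t = datetime.fromisoformat(utc_time.replace("Z", "+00:00"))
    elapsed = (t - SYSTEM_START).total_seconds()
    if elapsed < 0:
        raise ValueError("time is before the system start")
    return int(elapsed // SLOT_LENGTH_SECONDS)

print(utc_time_to_slot("2024-01-02T00:00:00Z"))  # 86400 (one day of 1s slots)
```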

KES Period Info

Query KES period information for an operational certificate:

dugite-cli query kes-period-info \
  --socket-path ./node.sock \
  --op-cert-file opcert.cert

Output:

KES Period Info
===============
On-chain: yes
Operational certificate counter on-chain: 3
Certificate issue counter: 3

Current KES period: 418
Operational certificate start KES period: 418
KES max evolutions: 62
KES periods remaining: 62

Node start time: 2026-03-19T08:00:00Z
KES key expiry: 2026-09-14T08:00:00Z

Use this command to verify that a KES key is current and to determine when rotation is needed.
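The figures in this report are related by simple arithmetic. A sketch using the mainnet genesis values (129600 slots per KES period, 62 maximum evolutions) and the numbers from the output above:

```python
SLOTS_PER_KES_PERIOD = 129_600   # mainnet shelley-genesis "slotsPerKESPeriod"
MAX_KES_EVOLUTIONS = 62          # mainnet shelley-genesis "maxKESEvolutions"

def kes_summary(current_slot: int, opcert_start_period: int):
    current_period = current_slot // SLOTS_PER_KES_PERIOD
    # The key expires MAX_KES_EVOLUTIONS periods after the opcert's
    # start period.
    expiry_period = opcert_start_period + MAX_KES_EVOLUTIONS
    remaining = expiry_period - current_period
    return current_period, remaining

# A slot inside period 418, with an opcert issued for period 418:
period, remaining = kes_summary(418 * 129_600 + 5_000, 418)
print(period, remaining)  # 418 62
```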

Leadership Schedule

Compute the leader schedule for a stake pool:

dugite-cli query leadership-schedule \
  --vrf-signing-key-file vrf.skey \
  --epoch-nonce a1b2c3d4... \
  --epoch-start-slot 73000000 \
  --epoch-length 432000 \
  --relative-stake 0.001 \
  --active-slot-coeff 0.05

Output:

Computing leader schedule for epoch starting at slot 73000000...
Epoch length: 432000 slots
Relative stake: 0.001000
Active slot coefficient: 0.05

SlotNo       VRF Output (first 16 bytes)
--------------------------------------------------
73012345     a1b2c3d4e5f6a7b8...
73045678     d4e5f6a7b8c9d0e1...

Total leader slots: 2
Expected: ~22 (f=0.05, stake=0.001000)
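The "Expected" line follows from the Praos leader probability phi = 1 - (1 - f)^sigma, where f is the active slot coefficient and sigma the pool's relative stake. A quick check of the figures above:

```python
def expected_leader_slots(epoch_length: int, f: float, sigma: float) -> float:
    # Praos: a pool with relative stake sigma is leader of a given slot
    # with probability phi = 1 - (1 - f)^sigma.
    phi = 1.0 - (1.0 - f) ** sigma
    return epoch_length * phi

print(round(expected_leader_slots(432_000, 0.05, 0.001)))  # 22
```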

Stake Address Commands

The dugite-cli stake-address subcommands manage stake key generation, reward address construction, and certificate creation for staking operations.

key-gen

Generate a stake key pair:

dugite-cli stake-address key-gen \
  --verification-key-file stake.vkey \
  --signing-key-file stake.skey

Flag                      Required  Description
------------------------  --------  ------------------------------------------
--verification-key-file   Yes       Output path for the stake verification key
--signing-key-file        Yes       Output path for the stake signing key

build

Build a stake (reward) address from a stake verification key:

dugite-cli stake-address build \
  --stake-verification-key-file stake.vkey \
  --network testnet

Flag                           Required  Default  Description
-----------------------------  --------  -------  -----------------------------------------
--stake-verification-key-file  Yes       -        Path to the stake verification key
--network                      No        mainnet  Network: mainnet or testnet
--out-file                     No        -        Output file (prints to stdout if omitted)

registration-certificate

Create a stake address registration certificate:

# Conway era (with deposit)
dugite-cli stake-address registration-certificate \
  --stake-verification-key-file stake.vkey \
  --key-reg-deposit-amt 2000000 \
  --out-file stake-reg.cert

# Legacy Shelley era (no deposit parameter)
dugite-cli stake-address registration-certificate \
  --stake-verification-key-file stake.vkey \
  --out-file stake-reg.cert

Flag                           Required  Description
-----------------------------  --------  ---------------------------------------------------------------------
--stake-verification-key-file  Yes       Path to the stake verification key
--key-reg-deposit-amt          No        Deposit amount in lovelace (Conway era; omit for legacy Shelley cert)
--out-file                     Yes       Output path for the certificate

The deposit amount should match the current stakeAddressDeposit protocol parameter (typically 2 ADA = 2000000 lovelace).

deregistration-certificate

Create a stake address deregistration certificate to reclaim the deposit:

dugite-cli stake-address deregistration-certificate \
  --stake-verification-key-file stake.vkey \
  --key-reg-deposit-amt 2000000 \
  --out-file stake-dereg.cert

Flag                           Required  Description
-----------------------------  --------  --------------------------------------------------------------
--stake-verification-key-file  Yes       Path to the stake verification key
--key-reg-deposit-amt          No        Deposit refund amount (Conway era; omit for legacy Shelley cert)
--out-file                     Yes       Output path for the certificate

delegation-certificate

Create a stake delegation certificate to delegate to a stake pool:

dugite-cli stake-address delegation-certificate \
  --stake-verification-key-file stake.vkey \
  --stake-pool-id pool1abc... \
  --out-file delegation.cert

Flag                           Required  Description
-----------------------------  --------  -------------------------------------
--stake-verification-key-file  Yes       Path to the stake verification key
--stake-pool-id                Yes       Pool ID to delegate to (bech32 or hex)
--out-file                     Yes       Output path for the certificate

vote-delegation-certificate

Create a vote delegation certificate (Conway era) to delegate voting power to a DRep:

# Delegate to a specific DRep
dugite-cli stake-address vote-delegation-certificate \
  --stake-verification-key-file stake.vkey \
  --drep-verification-key-file drep.vkey \
  --out-file vote-deleg.cert

# Delegate to always-abstain
dugite-cli stake-address vote-delegation-certificate \
  --stake-verification-key-file stake.vkey \
  --always-abstain \
  --out-file vote-deleg.cert

# Delegate to always-no-confidence
dugite-cli stake-address vote-delegation-certificate \
  --stake-verification-key-file stake.vkey \
  --always-no-confidence \
  --out-file vote-deleg.cert

Flag                           Required  Description
-----------------------------  --------  ----------------------------------------------------------------------------------------------
--stake-verification-key-file  Yes       Path to the stake verification key
--drep-verification-key-file   No        DRep verification key file (mutually exclusive with --always-abstain/--always-no-confidence)
--always-abstain               No        Use the special always-abstain DRep
--always-no-confidence         No        Use the special always-no-confidence DRep
--out-file                     Yes       Output path for the certificate

Complete Staking Workflow

# 1. Generate stake keys
dugite-cli stake-address key-gen \
  --verification-key-file stake.vkey \
  --signing-key-file stake.skey

# 2. Create registration certificate
dugite-cli stake-address registration-certificate \
  --stake-verification-key-file stake.vkey \
  --key-reg-deposit-amt 2000000 \
  --out-file stake-reg.cert

# 3. Create delegation certificate
dugite-cli stake-address delegation-certificate \
  --stake-verification-key-file stake.vkey \
  --stake-pool-id pool1abc... \
  --out-file delegation.cert

# 4. Submit both in a single transaction
dugite-cli transaction build \
  --tx-in "abc123...#0" \
  --tx-out "addr_test1qz...+5000000" \
  --change-address "addr_test1qp..." \
  --fee 200000 \
  --certificate-file stake-reg.cert \
  --certificate-file delegation.cert \
  --out-file tx.body

dugite-cli transaction sign \
  --tx-body-file tx.body \
  --signing-key-file payment.skey \
  --signing-key-file stake.skey \
  --out-file tx.signed

dugite-cli transaction submit \
  --tx-file tx.signed \
  --socket-path ./node.sock

Stake Pool Commands

The dugite-cli stake-pool subcommands manage stake pool key generation, pool registration, and operational certificate issuance.

key-gen

Generate pool cold keys and an operational certificate counter:

dugite-cli stake-pool key-gen \
  --cold-verification-key-file cold.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter

Flag                                    Required  Description
--------------------------------------  --------  -----------------------------------------
--cold-verification-key-file            Yes       Output path for the cold verification key
--cold-signing-key-file                 Yes       Output path for the cold signing key
--operational-certificate-counter-file  Yes       Output path for the opcert issue counter

id

Get the pool ID (Blake2b-224 hash of the cold verification key):

dugite-cli stake-pool id \
  --cold-verification-key-file cold.vkey

Flag                          Required  Description
----------------------------  --------  ----------------------------------
--cold-verification-key-file  Yes       Path to the cold verification key
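Because the pool ID is just a Blake2b-224 digest of the cold verification key bytes, it can be recomputed independently. A sketch using Python's standard library (the key bytes here are a dummy placeholder):

```python
import hashlib

def pool_id_hex(cold_vkey_bytes: bytes) -> str:
    # Pool ID = Blake2b-224 (28-byte digest) of the raw cold
    # verification key bytes.
    return hashlib.blake2b(cold_vkey_bytes, digest_size=28).hexdigest()

# Dummy 32-byte Ed25519 key for illustration; use your real cold.vkey bytes.
print(len(pool_id_hex(b"\x00" * 32)))  # 56 hex characters (28 bytes)
```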

vrf-key-gen

Generate a VRF key pair:

dugite-cli stake-pool vrf-key-gen \
  --verification-key-file vrf.vkey \
  --signing-key-file vrf.skey

Flag                     Required  Description
-----------------------  --------  ----------------------------------------
--verification-key-file  Yes       Output path for the VRF verification key
--signing-key-file       Yes       Output path for the VRF signing key

kes-key-gen

Generate a KES key pair:

dugite-cli stake-pool kes-key-gen \
  --verification-key-file kes.vkey \
  --signing-key-file kes.skey

Flag                     Required  Description
-----------------------  --------  ----------------------------------------
--verification-key-file  Yes       Output path for the KES verification key
--signing-key-file       Yes       Output path for the KES signing key

issue-op-cert

Issue an operational certificate:

dugite-cli stake-pool issue-op-cert \
  --kes-verification-key-file kes.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter \
  --kes-period 400 \
  --out-file opcert.cert

Flag                                    Required  Description
--------------------------------------  --------  -------------------------------------------
--kes-verification-key-file             Yes       Path to the KES verification key
--cold-signing-key-file                 Yes       Path to the cold signing key
--operational-certificate-counter-file  Yes       Path to the opcert issue counter
--kes-period                            Yes       Current KES period
--out-file                              Yes       Output path for the operational certificate

registration-certificate

Create a stake pool registration certificate:

dugite-cli stake-pool registration-certificate \
  --cold-verification-key-file cold.vkey \
  --vrf-verification-key-file vrf.vkey \
  --pledge 500000000 \
  --cost 340000000 \
  --margin 0.02 \
  --reward-account-verification-key-file stake.vkey \
  --pool-owner-verification-key-file stake.vkey \
  --single-host-pool-relay "relay.example.com:3001" \
  --metadata-url "https://example.com/pool-metadata.json" \
  --metadata-hash "a1b2c3d4..." \
  --out-file pool-reg.cert

Flag                                    Required  Description
--------------------------------------  --------  ----------------------------------------------------------
--cold-verification-key-file            Yes       Path to the cold verification key
--vrf-verification-key-file             Yes       Path to the VRF verification key
--pledge                                Yes       Pledge amount in lovelace
--cost                                  Yes       Fixed cost per epoch in lovelace
--margin                                Yes       Pool margin (0.0 to 1.0)
--reward-account-verification-key-file  Yes       Stake key for the reward account
--pool-owner-verification-key-file      No        Pool owner stake key (can be repeated)
--pool-relay-ipv4                       No        Relay IP address with port (e.g., 1.2.3.4:3001)
--single-host-pool-relay                No        Relay DNS hostname with port (e.g., relay.example.com:3001)
--multi-host-pool-relay                 No        Relay DNS SRV record (e.g., _cardano._tcp.example.com)
--metadata-url                          No        URL to pool metadata JSON
--metadata-hash                         No        Blake2b-256 hash of the metadata file (hex)
--testnet                               No        Use testnet network ID for the reward account
--out-file                              Yes       Output path for the certificate

metadata-hash

Compute the Blake2b-256 hash of a pool metadata file:

dugite-cli stake-pool metadata-hash \
  --pool-metadata-file pool-metadata.json

Output:

a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2

This hash is required when registering a pool. The metadata file must be served at the URL specified in the registration certificate, and its Blake2b-256 hash must match the on-chain value: wallets and pool explorers fetch the file from that URL and reject it if the hash differs.

Example pool metadata file:

{
  "name": "Sandstone Pool",
  "description": "A Cardano stake pool running Dugite",
  "ticker": "SAND",
  "homepage": "https://sandstone.io"
}
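The hash is a plain Blake2b-256 digest of the raw file bytes, so it can be cross-checked with any Blake2b implementation, for example Python's standard library:

```python
import hashlib

def pool_metadata_hash(raw_bytes: bytes) -> str:
    # Blake2b with a 32-byte (256-bit) digest. Hash the exact bytes served
    # at the metadata URL; any whitespace or encoding change alters the hash.
    return hashlib.blake2b(raw_bytes, digest_size=32).hexdigest()

# In practice: pool_metadata_hash(open("pool-metadata.json", "rb").read())
digest = pool_metadata_hash(b'{"name": "Sandstone Pool"}')
print(len(digest))  # 64 hex characters (32 bytes)
```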

retirement-certificate

Create a stake pool retirement certificate:

dugite-cli stake-pool retirement-certificate \
  --cold-verification-key-file cold.vkey \
  --epoch 500 \
  --out-file pool-retire.cert

Flag                          Required  Description
----------------------------  --------  ----------------------------------
--cold-verification-key-file  Yes       Path to the cold verification key
--epoch                       Yes       Epoch at which the pool retires
--out-file                    Yes       Output path for the certificate

Complete Pool Registration Workflow

# 1. Generate all keys
dugite-cli stake-pool key-gen \
  --cold-verification-key-file cold.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter

dugite-cli stake-pool vrf-key-gen \
  --verification-key-file vrf.vkey \
  --signing-key-file vrf.skey

dugite-cli stake-pool kes-key-gen \
  --verification-key-file kes.vkey \
  --signing-key-file kes.skey

# 2. Issue operational certificate
dugite-cli stake-pool issue-op-cert \
  --kes-verification-key-file kes.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter \
  --kes-period 400 \
  --out-file opcert.cert

# 3. Create registration certificate
dugite-cli stake-pool registration-certificate \
  --cold-verification-key-file cold.vkey \
  --vrf-verification-key-file vrf.vkey \
  --pledge 500000000 \
  --cost 340000000 \
  --margin 0.02 \
  --reward-account-verification-key-file stake.vkey \
  --pool-owner-verification-key-file stake.vkey \
  --single-host-pool-relay "relay.example.com:3001" \
  --metadata-url "https://example.com/pool.json" \
  --metadata-hash "a1b2c3..." \
  --out-file pool-reg.cert

# 4. Submit registration in a transaction
dugite-cli transaction build \
  --tx-in "abc123...#0" \
  --tx-out "addr_test1qz...+5000000" \
  --change-address "addr_test1qp..." \
  --fee 200000 \
  --certificate-file pool-reg.cert \
  --out-file tx.body

dugite-cli transaction sign \
  --tx-body-file tx.body \
  --signing-key-file payment.skey \
  --signing-key-file cold.skey \
  --signing-key-file stake.skey \
  --out-file tx.signed

dugite-cli transaction submit \
  --tx-file tx.signed \
  --socket-path ./node.sock

Node Commands

The dugite-cli node subcommands manage cold keys, KES keys, VRF keys, and operational certificates for block producer setup.

key-gen

Generate a cold key pair and an operational certificate issue counter:

dugite-cli node key-gen \
  --cold-verification-key-file cold.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter

Flag                                    Required  Description
--------------------------------------  --------  -----------------------------------------
--cold-verification-key-file            Yes       Output path for the cold verification key
--cold-signing-key-file                 Yes       Output path for the cold signing key
--operational-certificate-counter-file  Yes       Output path for the opcert issue counter

The cold key identifies your stake pool. Keep the signing key offline (air-gapped) after initial setup.

key-gen-kes

Generate a KES (Key Evolving Signature) key pair:

dugite-cli node key-gen-kes \
  --verification-key-file kes.vkey \
  --signing-key-file kes.skey

Flag                     Required  Description
-----------------------  --------  ----------------------------------------
--verification-key-file  Yes       Output path for the KES verification key
--signing-key-file       Yes       Output path for the KES signing key

KES keys are rotated periodically. Each key is valid for a limited number of KES periods (62 periods on mainnet, approximately 90 days total).

key-gen-vrf

Generate a VRF (Verifiable Random Function) key pair:

dugite-cli node key-gen-vrf \
  --verification-key-file vrf.vkey \
  --signing-key-file vrf.skey

Flag                     Required  Description
-----------------------  --------  ----------------------------------------
--verification-key-file  Yes       Output path for the VRF verification key
--signing-key-file       Yes       Output path for the VRF signing key

VRF keys are used for slot leader election and do not need rotation.

issue-op-cert

Issue an operational certificate binding the cold key to the current KES key:

dugite-cli node issue-op-cert \
  --kes-verification-key-file kes.vkey \
  --cold-signing-key-file cold.skey \
  --operational-certificate-counter-file opcert.counter \
  --kes-period 400 \
  --out-file opcert.cert

Flag                                    Required  Description
--------------------------------------  --------  --------------------------------------------------------
--kes-verification-key-file             Yes       Path to the KES verification key
--cold-signing-key-file                 Yes       Path to the cold signing key
--operational-certificate-counter-file  Yes       Path to the opcert issue counter (incremented automatically)
--kes-period                            Yes       Current KES period (current_slot / slots_per_kes_period)
--out-file                              Yes       Output path for the operational certificate

The opcert must be regenerated each time you rotate KES keys. The counter file is incremented each time to prevent replay attacks.

new-counter

Create a new operational certificate issue counter (useful if the original counter is lost):

dugite-cli node new-counter \
  --cold-verification-key-file cold.vkey \
  --counter-value 5 \
  --operational-certificate-counter-file opcert.counter

Flag                                    Required  Description
--------------------------------------  --------  ----------------------------------
--cold-verification-key-file            Yes       Path to the cold verification key
--counter-value                         Yes       Counter value to set
--operational-certificate-counter-file  Yes       Output path for the counter file

Governance

Dugite CLI supports Conway-era governance operations as defined in CIP-1694. This includes DRep management, voting, and governance action creation.

DRep Operations

Generate DRep Keys

dugite-cli governance drep key-gen \
  --signing-key-file drep.skey \
  --verification-key-file drep.vkey

Get DRep ID

# Bech32 format (default)
dugite-cli governance drep id \
  --drep-verification-key-file drep.vkey

# Hex format
dugite-cli governance drep id \
  --drep-verification-key-file drep.vkey \
  --output-format hex

DRep Registration

Create a DRep registration certificate:

dugite-cli governance drep registration-certificate \
  --drep-verification-key-file drep.vkey \
  --key-reg-deposit-amt 500000000 \
  --anchor-url "https://example.com/drep-metadata.json" \
  --anchor-data-hash "a1b2c3d4..." \
  --out-file drep-reg.cert

The --key-reg-deposit-amt should match the current DRep deposit parameter (currently 500 ADA = 500000000 lovelace on mainnet).

DRep Retirement

dugite-cli governance drep retirement-certificate \
  --drep-verification-key-file drep.vkey \
  --deposit-amt 500000000 \
  --out-file drep-retire.cert

DRep Update

Update DRep metadata:

dugite-cli governance drep update-certificate \
  --drep-verification-key-file drep.vkey \
  --anchor-url "https://example.com/drep-metadata-v2.json" \
  --anchor-data-hash "d4e5f6a7..." \
  --out-file drep-update.cert

Voting

Create a Vote

Votes can be cast by DReps, SPOs, or Constitutional Committee members:

DRep vote:

dugite-cli governance vote create \
  --governance-action-tx-id "a1b2c3d4..." \
  --governance-action-index 0 \
  --vote yes \
  --drep-verification-key-file drep.vkey \
  --out-file vote.json

SPO vote:

dugite-cli governance vote create \
  --governance-action-tx-id "a1b2c3d4..." \
  --governance-action-index 0 \
  --vote no \
  --cold-verification-key-file cold.vkey \
  --out-file vote.json

Constitutional Committee vote:

dugite-cli governance vote create \
  --governance-action-tx-id "a1b2c3d4..." \
  --governance-action-index 0 \
  --vote yes \
  --cc-hot-verification-key-file cc-hot.vkey \
  --out-file vote.json

Vote Values

Value    Description
-------  -------------------
yes      Vote in favor
no       Vote against
abstain  Abstain from voting

Vote with Anchor

Attach rationale metadata to a vote:

dugite-cli governance vote create \
  --governance-action-tx-id "a1b2c3d4..." \
  --governance-action-index 0 \
  --vote yes \
  --drep-verification-key-file drep.vkey \
  --anchor-url "https://example.com/vote-rationale.json" \
  --anchor-data-hash "e5f6a7b8..." \
  --out-file vote.json

Governance Actions

Info Action

A governance action that carries no on-chain effect (used for signaling):

dugite-cli governance action create-info \
  --anchor-url "https://example.com/proposal.json" \
  --anchor-data-hash "a1b2c3d4..." \
  --deposit 100000000000 \
  --return-addr "addr_test1qz..." \
  --out-file info-action.json

No Confidence Motion

Express no confidence in the current constitutional committee:

dugite-cli governance action create-no-confidence \
  --anchor-url "https://example.com/no-confidence.json" \
  --anchor-data-hash "a1b2c3d4..." \
  --deposit 100000000000 \
  --return-addr "addr_test1qz..." \
  --prev-governance-action-tx-id "d4e5f6a7..." \
  --prev-governance-action-index 0 \
  --out-file no-confidence.json

New Constitution

Propose a new constitution:

dugite-cli governance action create-constitution \
  --anchor-url "https://example.com/constitution-proposal.json" \
  --anchor-data-hash "a1b2c3d4..." \
  --deposit 100000000000 \
  --return-addr "addr_test1qz..." \
  --constitution-url "https://example.com/constitution.txt" \
  --constitution-hash "e5f6a7b8..." \
  --constitution-script-hash "b8c9d0e1..." \
  --out-file new-constitution.json

Hard Fork Initiation

Propose a protocol version change:

dugite-cli governance action create-hard-fork-initiation \
  --anchor-url "https://example.com/hardfork.json" \
  --anchor-data-hash "a1b2c3d4..." \
  --deposit 100000000000 \
  --return-addr "addr_test1qz..." \
  --protocol-major-version 10 \
  --protocol-minor-version 0 \
  --out-file hardfork.json

Protocol Parameters Update

Propose changes to protocol parameters:

dugite-cli governance action create-protocol-parameters-update \
  --anchor-url "https://example.com/pp-update.json" \
  --anchor-data-hash "a1b2c3d4..." \
  --deposit 100000000000 \
  --return-addr "addr_test1qz..." \
  --protocol-parameters-update pp-changes.json \
  --out-file pp-update.json

The pp-changes.json file contains the parameter fields to change:

{
  "txFeePerByte": 44,
  "txFeeFixed": 155381,
  "maxBlockBodySize": 90112,
  "maxTxSize": 16384
}

Update Committee

Propose changes to the constitutional committee:

dugite-cli governance action create-update-committee \
  --anchor-url "https://example.com/committee-update.json" \
  --anchor-data-hash "a1b2c3d4..." \
  --deposit 100000000000 \
  --return-addr "addr_test1qz..." \
  --remove-cc-cold-verification-key-hash "old_member_hash" \
  --add-cc-cold-verification-key-hash "new_member_hash,500" \
  --threshold "2/3" \
  --out-file committee-update.json

The --add-cc-cold-verification-key-hash uses the format key_hash,expiry_epoch.

Treasury Withdrawal

Propose a withdrawal from the treasury:

dugite-cli governance action create-treasury-withdrawal \
  --anchor-url "https://example.com/withdrawal.json" \
  --anchor-data-hash "a1b2c3d4..." \
  --deposit 100000000000 \
  --return-addr "addr_test1qz..." \
  --funds-receiving-stake-verification-key-file recipient.vkey \
  --transfer 50000000000 \
  --out-file treasury-withdrawal.json

Hash Anchor Data

Compute the Blake2b-256 hash of an anchor data file:

# Binary file
dugite-cli governance action hash-anchor-data \
  --file-binary proposal.json

# Text file
dugite-cli governance action hash-anchor-data \
  --file-text proposal.txt

Submitting Governance Actions

Governance actions and votes are submitted as part of transactions. Include the certificate or vote file when building the transaction:

# Submit a DRep registration
dugite-cli transaction build \
  --tx-in "abc123...#0" \
  --tx-out "addr_test1qz...+5000000" \
  --change-address "addr_test1qp..." \
  --fee 200000 \
  --certificate-file drep-reg.cert \
  --out-file tx.body

dugite-cli transaction sign \
  --tx-body-file tx.body \
  --signing-key-file payment.skey \
  --signing-key-file drep.skey \
  --out-file tx.signed

dugite-cli transaction submit \
  --tx-file tx.signed \
  --socket-path ./node.sock

Architecture Overview

Dugite is organized as a 14-crate Cargo workspace. Each crate has a focused responsibility and well-defined dependencies.

Crate Workspace

Crate                 Description
--------------------  -----------
dugite-primitives     Core types: hashes, blocks, transactions, addresses, values, protocol parameters (Byron through Conway)
dugite-crypto         Ed25519 keys, VRF, KES, text envelope format
dugite-serialization  CBOR encoding/decoding for Cardano wire format via pallas
dugite-lsm            Pure Rust LSM-tree engine with WAL, compaction, bloom filters, and snapshots
dugite-network        Ouroboros mini-protocols (ChainSync, BlockFetch, TxSubmission, KeepAlive), N2N client/server, N2C server, multi-peer block fetch pool
dugite-consensus      Ouroboros Praos, chain selection, epoch transitions, slot leader checks
dugite-ledger         UTxO set (LSM-backed via UTxO-HD), transaction validation, ledger state, certificate processing, native script evaluation, reward calculation
dugite-mempool        Thread-safe transaction mempool with input-conflict checking and TTL sweep
dugite-storage        ChainDB (ImmutableDB append-only chunk files + VolatileDB in-memory)
dugite-node           Main binary, config, topology, pipelined chain sync loop, Mithril import, block forging
dugite-cli            cardano-cli compatible CLI (38+ subcommands)
dugite-monitor        Terminal monitoring dashboard (ratatui-based, real-time metrics via Prometheus polling)
dugite-config         Interactive TUI configuration editor with tree navigation, inline editing, type validation, and diff view

Crate Dependency Graph

graph TD
    NODE[dugite-node] --> NET[dugite-network]
    NODE --> CONS[dugite-consensus]
    NODE --> LEDGER[dugite-ledger]
    NODE --> STORE[dugite-storage]
    NODE --> POOL[dugite-mempool]
    CLI[dugite-cli] --> NET
    CLI --> PRIM[dugite-primitives]
    CLI --> CRYPTO[dugite-crypto]
    CLI --> SER[dugite-serialization]
    MON[dugite-monitor] --> PRIM
    CFG[dugite-config] --> PRIM
    NET --> PRIM
    NET --> CRYPTO
    NET --> SER
    NET --> POOL
    CONS --> PRIM
    CONS --> CRYPTO
    LEDGER --> PRIM
    LEDGER --> CRYPTO
    LEDGER --> SER
    LEDGER --> LSM[dugite-lsm]
    STORE --> PRIM
    STORE --> SER
    POOL --> PRIM
    SER --> PRIM
    CRYPTO --> PRIM

Key Dependencies

Dugite leverages the pallas family of crates (v1.0.0-alpha.5) for Cardano wire-format compatibility:

  • pallas-network — Ouroboros multiplexer and handshake
  • pallas-codec — CBOR encoding/decoding
  • pallas-primitives — Cardano primitive types
  • pallas-traverse — Multi-era block traversal
  • pallas-crypto — Cryptographic primitives
  • pallas-addresses — Address parsing and construction

Other key dependencies:

  • tokio — Async runtime
  • dugite-lsm — Pure Rust LSM tree for the on-disk UTxO set (UTxO-HD)
  • minicbor — CBOR encoding for custom types
  • ed25519-dalek — Ed25519 signatures
  • blake2b_simd — SIMD-accelerated Blake2b hashing
  • uplc — Plutus CEK machine for script evaluation
  • clap — CLI argument parsing
  • tracing — Structured logging

Design Principles

Zero-Warning Policy

All code must compile with RUSTFLAGS="-D warnings" and pass cargo clippy --all-targets -- -D warnings. This is enforced by CI.

Pallas Interoperability

Dugite uses pallas for network protocol handling and block deserialization, ensuring wire-format compatibility with cardano-node. Internal types (in dugite-primitives) are converted from pallas types during deserialization.

Key conversion patterns:

  • Transaction.hash is set during deserialization from pallas tx.hash()
  • ChainSyncEvent::RollForward uses Box<Block> to avoid large enum variant size
  • Invalid transactions (is_valid: false) are skipped during apply_block
  • Pool IDs are Hash28 (Blake2b-224), not Hash32

Multi-Era Support

Dugite handles all Cardano eras from Byron through Conway. The serialization layer handles era-specific block formats transparently, while the ledger layer applies era-appropriate validation rules.

Sync Pipeline

Dugite uses a pipelined multi-peer architecture for block synchronization, separating header collection from block fetching for maximum throughput.

Architecture

flowchart LR
    subgraph Primary Peer
        CS[ChainSync<br/>Header Collection]
    end

    CS -->|headers| HQ[Header Queue]

    subgraph Block Fetch Pool
        BF1[Peer 1<br/>BlockFetch]
        BF2[Peer 2<br/>BlockFetch]
        BF3[Peer N<br/>BlockFetch]
    end

    HQ -->|range 1| BF1
    HQ -->|range 2| BF2
    HQ -->|range N| BF3

    BF1 -->|blocks| BP[Block Processor]
    BF2 -->|blocks| BP
    BF3 -->|blocks| BP

    BP --> CDB[(ChainDB)]
    BP --> LS[Ledger State]

Pipeline Stages

1. Header Collection (ChainSync)

A primary peer is selected for the ChainSync protocol. The node requests block headers sequentially using the N2N ChainSync mini-protocol (V14+). Headers are collected into batches.

The ChainSync protocol involves:

  1. MsgFindIntersect — Find a common point between the node and the peer
  2. MsgRequestNext — Request the next header
  3. MsgRollForward — Receive a new header
  4. MsgRollBackward — Handle a chain reorganization

2. Block Fetch Pool

Collected headers are distributed across multiple peers for parallel block retrieval. The block fetch pool supports up to 4 concurrent peers, each fetching a range of blocks.

The BlockFetch protocol involves:

  1. MsgRequestRange — Request a range of blocks by header hash
  2. MsgBlock — Receive a block
  3. MsgBatchDone — Signal the end of a batch

Blocks are fetched in batches of 500 headers, with sub-batches of 100 headers each. Each sub-batch is decoded on a spawn_blocking task to avoid blocking the async runtime.
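
Offloading CPU-bound decode work per sub-batch can be sketched as follows. This is a minimal illustration using plain `std::thread` workers (the actual node uses tokio's `spawn_blocking`); `decode_block` is a hypothetical stand-in for the real pallas CBOR decoder.

```rust
use std::thread;

/// Hypothetical stand-in for CBOR block decoding (Dugite uses pallas here).
fn decode_block(raw: &[u8]) -> usize {
    raw.len()
}

/// Decode a header batch in sub-batches of 100, each on its own worker
/// thread, so the (async) caller is never blocked by CPU-bound decoding.
pub fn decode_batch(raw_blocks: Vec<Vec<u8>>) -> Vec<usize> {
    let handles: Vec<_> = raw_blocks
        .chunks(100)
        .map(|sub_batch| {
            let sub: Vec<Vec<u8>> = sub_batch.to_vec();
            // In Dugite this is tokio's spawn_blocking; plain threads show the idea.
            thread::spawn(move || sub.iter().map(|b| decode_block(b)).collect::<Vec<usize>>())
        })
        .collect();
    let mut decoded = Vec::with_capacity(raw_blocks.len());
    for handle in handles {
        decoded.extend(handle.join().expect("decode worker panicked"));
    }
    decoded
}
```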

3. Block Processing

Fetched blocks are applied to the ledger state in order:

  1. Deserialization — Raw CBOR bytes are decoded into Dugite's internal Block type using pallas
  2. Ledger validation — Each block is validated against the current ledger state (UTxO checks, fee validation, certificate processing)
  3. Storage — Valid blocks are added to the ChainDB (volatile database first, flushed to immutable when k-deep)
  4. Epoch transitions — At epoch boundaries, stake snapshots are rotated and rewards are calculated

Batched Lock Acquisition

To minimize lock contention, the sync loop acquires a single lock on both the ChainDB and ledger state for each batch of 500 blocks, rather than locking per-block.
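
The batching pattern can be sketched as below. `ChainState` is a hypothetical stand-in for the combined ChainDB and ledger state; the point is the single lock/unlock per batch.

```rust
use std::sync::Mutex;

/// Hypothetical stand-in for the ChainDB + ledger state behind one lock.
pub struct ChainState {
    pub applied: u64,
}

/// Apply a whole batch under a single lock acquisition instead of
/// locking and unlocking once per block (Dugite batches 500 blocks).
pub fn apply_batch(state: &Mutex<ChainState>, blocks: &[u64]) {
    let mut guard = state.lock().expect("lock poisoned");
    for _block in blocks {
        guard.applied += 1; // stand-in for validate + store
    }
    // guard dropped here: one unlock per batch
}
```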

Progress Reporting

Progress is logged every 5 seconds, showing:

  • Current slot and block number
  • Epoch number
  • UTxO count
  • Sync percentage (based on slot vs. wall-clock time)
  • Blocks-per-second throughput metric

Rollback Handling

When the ChainSync peer sends a MsgRollBackward message, the node:

  1. Identifies the rollback point (a slot/hash pair)
  2. Removes rolled-back blocks from the VolatileDB
  3. Reverts the ledger state to the rollback point
  4. Resumes header collection from the new tip

Only blocks in the VolatileDB (the last k=2160 blocks) can be rolled back. Blocks that have been flushed to the ImmutableDB are permanent.

Pipelined ChainSync

Dugite uses pipelined ChainSync to avoid the round-trip latency bottleneck of serial header requests. Instead of waiting for each MsgRollForward before requesting the next header, the node sends up to 300 MsgRequestNext messages concurrently (configurable via DUGITE_PIPELINE_DEPTH).

This bypasses pallas' serial ChainSync state machine in favor of a custom implementation that manages the pipeline depth directly.

Performance Characteristics

  • Header collection is pipelined per peer (up to 300 in-flight requests, configurable via DUGITE_PIPELINE_DEPTH)
  • Block fetching is parallelized across up to 4 concurrent peers
  • Block processing is batched (500 blocks per batch) with single-lock acquisition
  • Throughput depends on network latency, peer count, and block sizes

On preview testnet, full sync from genesis completes in approximately 10 hours, with block replay (from Mithril snapshot) achieving ~13,700 blocks/second.

Storage

Dugite's storage layer is implemented in the dugite-storage and dugite-ledger crates. It closely mirrors the cardano-node architecture with three distinct storage subsystems coordinated by ChainDB.

Storage Architecture

flowchart TD
    CDB[ChainDB] --> VOL[VolatileDB<br/>In-Memory HashMap<br/>Last k=2160 blocks]
    CDB --> IMM[ImmutableDB<br/>Append-Only Chunk Files<br/>Finalized blocks]

    NEW[New Block] -->|add_block| VOL
    VOL -->|flush when > k blocks| IMM

    READ[Block Query] -->|1. check volatile| VOL
    READ -->|2. fallback to immutable| IMM

    ROLL[Rollback] -->|remove from volatile| VOL

    LS[LedgerState] --> UTXO[UtxoStore<br/>dugite-lsm LSM tree<br/>On-disk UTxO set]
    LS --> DIFF[DiffSeq<br/>Last k UTxO diffs<br/>For rollback]

Block Storage

ImmutableDB (Append-Only Chunk Files)

The ImmutableDB stores finalized blocks in append-only chunk files on disk. This matches cardano-node's ImmutableDB design — blocks are simply appended to files and are inherently durable without any snapshot mechanism.

Properties:

  • Always durable — append-only writes survive process crashes without special persistence logic
  • No LSM tree — plain chunk files, no compaction or memtable overhead
  • Sequential access — optimized for the append-heavy, read-sequential block storage workload
  • Secondary indexes — slot-to-offset and hash-to-slot mappings for efficient lookups
  • Memory-mapped block index — on-disk open-addressing hash table (hash_index.dat) provides 3-5x faster lookups than in-memory HashMap while using near-zero RSS

VolatileDB (In-Memory HashMap)

The VolatileDB stores recent blocks (the last k=2160 blocks) in an in-memory HashMap. This enables:

  • Fast reads — no disk I/O for recent blocks
  • Efficient rollback — blocks can be removed without touching disk
  • Simple eviction — when a block becomes k-deep, it is flushed to the ImmutableDB

The VolatileDB has no on-disk representation — it exists only in memory and is rebuilt from the ImmutableDB tip on restart.

ChainDB

ChainDB is the unified interface for block storage. It coordinates the ImmutableDB and VolatileDB:

  1. New blocks arrive from peers and are added to the VolatileDB
  2. Once a block is more than k slots deep (k=2160 for mainnet), it is flushed from the VolatileDB to the ImmutableDB
  3. Flushed blocks are removed from the VolatileDB

When querying for a block:

  1. The VolatileDB is checked first (fast, in-memory)
  2. If not found, the ImmutableDB is consulted (disk-based)
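
The two-step lookup above amounts to a volatile-first fallback chain. A minimal sketch (with a `HashMap` standing in for the on-disk ImmutableDB, and hypothetical type names):

```rust
use std::collections::HashMap;

type Hash = [u8; 32];
type Block = Vec<u8>;

/// Simplified two-tier block store: volatile (recent, in memory) backed
/// by immutable (finalized; a HashMap stands in for the chunk files).
pub struct ChainDbView {
    pub volatile: HashMap<Hash, Block>,
    pub immutable: HashMap<Hash, Block>,
}

impl ChainDbView {
    /// Volatile-first lookup, falling back to the ImmutableDB on a miss.
    pub fn get_block(&self, hash: &Hash) -> Option<&Block> {
        self.volatile.get(hash).or_else(|| self.immutable.get(hash))
    }
}
```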

Block Range Queries

ChainDB supports querying blocks by slot range:

  • VolatileDB scans its HashMap for matching slots
  • ImmutableDB uses secondary indexes for slot range scanning
  • Results from both databases are merged

UTxO Storage (UTxO-HD)

The UTxO set is stored on disk using dugite-lsm, a pure Rust LSM tree. This matches Haskell cardano-node's UTxO-HD architecture, where the UTxO set lives in an LSM-backed on-disk store rather than entirely in memory.

UtxoStore

The UtxoStore (in dugite-ledger) wraps a dugite-lsm LsmTree and provides:

  • Disk-backed UTxO set — the full UTxO set lives on disk, not in memory
  • Efficient point lookups — bloom filters for fast negative lookups
  • Batch writes — UTxO inserts and deletes are batched per block
  • Snapshots — periodic snapshots for crash recovery

dugite-lsm is configured via storage profiles that maximize available system memory:

| Profile | Target System | Memtable | Block Cache | Expected RSS |
|---|---|---|---|---|
| ultra-memory | 32GB | 2GB | 24GB | ~27GB |
| high-memory (default) | 16GB | 1GB | 12GB | ~14GB |
| low-memory | 8GB | 512MB | 5GB | ~6.5GB |
| minimal | 4GB | 256MB | 2GB | ~3GB |


All profiles use 10 bits per key bloom filters and hybrid compaction (tiered L0, leveled L1+).

DiffSeq (Rollback Support)

The DiffSeq (in dugite-ledger) maintains the last k blocks of UTxO diffs, enabling rollback without replaying blocks:

  • Each block produces a UtxoDiff recording which UTxOs were added and removed
  • The DiffSeq holds the last k=2160 diffs
  • On rollback, diffs are applied in reverse to restore the UTxO set

io_uring Support (Linux)

On Linux with kernel 5.1+, enable io_uring for async I/O in the UTxO LSM tree:

cargo build --release --features io-uring

On other platforms (macOS, Windows), the feature flag is accepted, but the node falls back to synchronous I/O automatically.

Snapshot Policy

Dugite uses a time-based snapshot policy matching Haskell's cardano-node:

  • Normal sync: snapshots every 72 minutes (k * 2 seconds, where k=2160)
  • Bulk sync: snapshots every 50,000 blocks plus 6 minutes of wall-clock time
  • Maximum retained: 2 snapshots on disk at any time

Ledger snapshots include the full ledger state (stake distribution, protocol parameters, governance state, etc.). The UTxO set is persisted separately via the UtxoStore's LSM snapshots.

Tip Recovery

When the node restarts:

  1. The ImmutableDB tip is read from the chunk files (always durable)
  2. The VolatileDB starts empty (in-memory state is rebuilt)
  3. The ledger state is restored from the most recent snapshot
  4. The UTxO set is restored from the UtxoStore's LSM snapshot
  5. The node resumes syncing from the recovered tip

Disk Layout

database-path/
  immutable/          # Append-only block chunk files
    chunks/           # Block data files
    index/            # Secondary indexes (slot, hash)
    hash_index.dat    # Mmap block index (open-addressing hash table)
  utxo-store/         # dugite-lsm database (UTxO set)
    active/           # Current SSTables
    snapshots/        # Durable snapshots
  ledger/             # Ledger state snapshots

Performance Considerations

  • Block writes — append-only chunk files provide consistent write performance without compaction pauses
  • UTxO lookups — LSM tree with bloom filters provides efficient point lookups for transaction validation
  • Memory usage — the VolatileDB holds approximately k blocks in memory (typically a few hundred MB). The UTxO set lives on disk, significantly reducing memory pressure compared to an all-in-memory approach
  • Batch size — the flush batch size balances memory usage against write efficiency

Storage Profiles

Dugite provides four storage profiles sized to maximize available system memory:

# Select a profile via CLI
./dugite-node run --storage-profile high-memory ...

# Override individual parameters
./dugite-node run --storage-profile low-memory --utxo-block-cache-size-mb 4096 ...

Profiles can also be set in the node configuration file:

{
  "storage": {
    "profile": "high-memory",
    "utxoBlockCacheSizeMb": 8192
  }
}

Resolution order: profile defaults < config file overrides < CLI overrides.
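
The three-level override resolution can be sketched for a single setting; `resolve_setting` is a hypothetical helper, not the actual API:

```rust
/// Effective value of one storage setting, resolved in Dugite's order:
/// profile default < config file override < CLI override.
pub fn resolve_setting(profile_default: u64, config_file: Option<u64>, cli: Option<u64>) -> u64 {
    cli.or(config_file).unwrap_or(profile_default)
}
```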

Fork Recovery & ImmutableDB Contamination

Problem

When a forged block loses a slot battle, flush_all_to_immutable on graceful shutdown can persist orphaned blocks permanently in the ImmutableDB. Since the ImmutableDB is append-only and designed for finalized blocks, these orphaned blocks contaminate the canonical chain history and can cause intersection failures on reconnect.

sequenceDiagram
    participant Node as Dugite Node
    participant Vol as VolatileDB
    participant Imm as ImmutableDB
    participant Peer as Upstream Peer

    Node->>Vol: Forge block at slot S
    Peer->>Node: Competing block at slot S wins
    Note over Vol: Orphaned forged block still in VolatileDB
    Node->>Imm: flush_all_to_immutable (graceful shutdown)
    Note over Imm: Orphaned block now persisted permanently
    Node->>Peer: Restart — intersection negotiation fails

Detection

  • ChainDB.get_chain_points() walks backwards through volatile blocks via prev_hash links, providing the peer with enough ancestry for intersection even when the tip is orphaned.
  • ImmutableDB.get_historical_points() samples older chunk secondary indexes in reverse order, providing canonical intersection points even when the immutable tip is contaminated.
  • When fork divergence is detected, contaminated ChainDB chain points are excluded from intersection negotiation, preventing the node from advertising orphaned blocks to peers.

Recovery

  • Case A (Origin intersection): The volatile DB is cleared, the ledger state is reset, and the node reconnects from genesis. This is the fallback when no valid intersection can be found.
  • Case B (Intersection behind ledger): A targeted ImmutableDB replay is performed up to the intersection slot using a detached LSM store, achieving approximately 50K blocks/second replay speed. This avoids a full resync while restoring the ledger to a consistent state.

Benchmarks

Run storage benchmarks with:

# Storage benchmarks (block index, ImmutableDB, ChainDB, scaling to 1M entries)
cargo bench -p dugite-storage --bench storage_bench

# UTxO store benchmarks (insert, lookup, apply_tx, LSM configs, scaling to 1M entries)
cargo bench -p dugite-ledger --bench utxo_bench

# Crypto benchmarks (Ed25519, blake2b keyhash)
cargo bench -p dugite-crypto --bench crypto_bench

# Hash benchmarks (blake2b_256, blake2b_224, batch hashing)
cargo bench -p dugite-primitives --bench hash_bench

Results are saved to target/criterion/ with HTML reports. Baseline results are tracked in benches/.

Latest Results (Apple M2 Max, 32GB, 2026-03-14)

Block Index Lookup (500 random lookups, mmap vs in-memory HashMap)

| Size | In-Memory | Mmap | Speedup |
|---|---|---|---|
| 10K | 10.0µs | 2.83µs | 3.5x |
| 100K | 10.1µs | 2.17µs | 4.7x |
| 1M | 10.6µs | 2.01µs | 5.3x |

Mmap lookup advantage grows with scale — at mainnet block counts (~10M), the gap widens further.

UTxO Store Scaling (dugite-lsm LSM tree)

| Size | Insert (per-entry) | Lookup (per-entry) | Total Lovelace Scan |
|---|---|---|---|
| 10K | 455ns | 191ns | 2.38ms |
| 100K | 479ns | 236ns | 29.1ms |
| 1M | 569ns | 308ns | 330ms |

Insert and lookup scale near-linearly. At mainnet scale (~20M UTxOs), estimated full scan ~6.6s.

Crypto & Hashing

| Operation | Time |
|---|---|
| Ed25519 verify (single) | 28.6µs |
| Blake2b-224 keyhash (32B) | 128ns |
| Blake2b-256 tx hash (1KB) | 949ns |

A typical block with 50 witnesses: ~1.4ms for signature verification, ~6.4µs for keyhash computation.

LSM Config Comparison (100K entries)

All storage profiles perform identically at benchmark scale — config differences emerge at mainnet scale (20M+ UTxOs) where working set exceeds cache capacity.

See benches/2026-03-14-all-profiles.md for full results.

Ledger

Dugite's ledger layer (dugite-ledger) implements full Cardano transaction validation, UTxO management, stake distribution, reward calculation, and Conway-era governance. It closely follows the Haskell cardano-ledger STS (State Transition System) rules.

Ledger State

The LedgerState is the complete mutable state of the Cardano ledger at a given point in the chain:

flowchart TD
    LS[LedgerState] --> UTXO[UtxoSet<br/>On-disk via LSM tree]
    LS --> DELEG[Delegations<br/>Stake → Pool mapping]
    LS --> POOLS[Pool Parameters<br/>Registered pools + future updates]
    LS --> REWARDS[Reward Accounts<br/>Per-credential balances]
    LS --> GOV[GovernanceState<br/>DReps, proposals, committee, constitution]
    LS --> SNAP[EpochSnapshots<br/>Mark / Set / Go]
    LS --> PP[Protocol Parameters<br/>Current + previous epoch]
    LS --> FIN[Treasury + Reserves<br/>Financial state]

Key design decisions:

  • Arc-wrapped collections — Large mutable fields (delegations, pool_params, reward_accounts, governance) are wrapped in Arc for copy-on-write semantics. Cloning LedgerState bumps reference counts; mutations via Arc::make_mut() only copy when shared.
  • On-disk UTxO — The UTxO set lives in an LSM tree (dugite-lsm) rather than in memory, matching Haskell's UTxO-HD architecture. At mainnet scale (~20M UTxOs), this avoids multi-gigabyte memory pressure.
  • Exact rational arithmetic — Reward calculations use Rat (backed by num_bigint::BigInt) for lossless intermediate computation, with a single floor operation at the end matching Haskell's rationalToCoinViaFloor.
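
The Arc copy-on-write pattern can be demonstrated with a reduced `LedgerStateLite` (a hypothetical one-field version of the real state):

```rust
use std::collections::HashMap;
use std::sync::Arc;

/// One Arc-wrapped field stands in for the real state's large collections.
#[derive(Clone)]
pub struct LedgerStateLite {
    pub delegations: Arc<HashMap<String, String>>, // stake cred -> pool id
}

impl LedgerStateLite {
    pub fn delegate(&mut self, cred: &str, pool: &str) {
        // make_mut copies the map only if another clone still shares it,
        // so snapshots stay cheap and mutations stay isolated.
        Arc::make_mut(&mut self.delegations).insert(cred.to_string(), pool.to_string());
    }
}
```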

Block Application Pipeline

When a new block arrives, apply_block() processes it through this pipeline:

flowchart TD
    BLK[New Block] --> CONN[Check prev_hash chain]
    CONN --> EPOCH{Epoch boundary?}
    EPOCH -->|Yes| ET[Process epoch transition]
    EPOCH -->|No| TXS[Process transactions]
    ET --> TXS
    TXS --> P1[Phase-1 Validation<br/>Structural + witness checks]
    P1 --> P2{Plutus scripts?}
    P2 -->|Yes| EVAL[Phase-2 Evaluation<br/>uplc CEK machine]
    P2 -->|No| APPLY[Apply UTxO changes]
    EVAL --> APPLY
    APPLY --> CERT[Process certificates]
    CERT --> GOV[Process governance actions]
    GOV --> DIFF[Record UtxoDiff]

Block Validation Modes

| Mode | Plutus Evaluation | Use Case |
|---|---|---|
| ValidateAll | Re-evaluate, verify is_valid flag | New blocks from peers |
| ApplyOnly | Trust is_valid flag | ImmutableDB replay, Mithril import, self-forged blocks |

Invalid transactions (is_valid: false) skip normal input/output processing. Instead, collateral inputs are consumed and collateral return is added.

Transaction Validation

Phase-1 (Structural + Witness)

Phase-1 validation checks structural rules without executing scripts:

  1. Inputs exist — All transaction inputs are present in the UTxO set
  2. Fee sufficient — Fee covers minimum fee based on tx size, execution units, and reference script size (CIP-0112 tiered pricing in Conway)
  3. Value conserved — Inputs = outputs + fee (+ minting/burning for multi-asset)
  4. TTL valid — Transaction has not expired (time-to-live check against current slot)
  5. Witness verification — Ed25519 signatures match required signers from inputs, withdrawals, and certificates
  6. Multi-asset rules — No negative quantities, minting requires policy witness
  7. Reference inputs — All reference inputs exist (not consumed, only read)
  8. Output minimum — Each output meets the minimum lovelace requirement
  9. Transaction size — Does not exceed max transaction size
  10. Network ID — Matches the expected network
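
The value conservation rule (check 3) reduces to one balanced equation per asset. A minimal sketch for a single asset quantity, with widening to `i128` to keep the comparison overflow-safe (hypothetical helper, not the actual validator):

```rust
/// Value-conservation check for one asset quantity:
/// inputs + minted must equal outputs + fee (minted < 0 means burning).
pub fn value_conserved(inputs: u64, outputs: u64, fee: u64, minted: i64) -> bool {
    (inputs as i128) + (minted as i128) == (outputs as i128) + (fee as i128)
}
```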

Phase-2 (Plutus Script Execution)

For transactions containing Plutus scripts (V1/V2/V3):

  1. Script data hash — Matches the hash of redeemers + datums + cost models
  2. Collateral — Sufficient collateral provided (150% of estimated fees in Conway)
  3. Execution units — Each redeemer's CPU and memory within budget
  4. Script evaluation — Each script is executed via the uplc CEK machine with the appropriate cost model
  5. Block budget — Total execution units across all transactions do not exceed block limits

Scripts are evaluated in parallel using rayon when the parallel-verification feature is enabled (default).

Validation Error Types

The ValidationError enum covers 50+ error variants across all categories: structural, UTxO, fees, witnesses, time, scripts, collateral, Plutus, era-gating, certificates, governance, datums, withdrawals, network, and auxiliary data.

Certificate Processing

Dugite processes all Shelley through Conway certificate types:

| Certificate | Description |
|---|---|
| StakeRegistration | Register a stake credential (deposit required) |
| StakeDeregistration | Deregister a stake credential (deposit refunded) |
| StakeDelegation | Delegate stake to a pool |
| PoolRegistration | Register a new stake pool |
| PoolRetirement | Schedule pool retirement at a future epoch |
| RegDRep | Register a delegated representative (Conway) |
| UnregDRep | Deregister a DRep (Conway) |
| UpdateDRep | Update DRep metadata anchor (Conway) |
| VoteDelegation | Delegate voting power to a DRep (Conway) |
| StakeVoteDelegation | Combined stake + vote delegation (Conway) |
| RegStakeDeleg | Combined registration + stake delegation (Conway) |
| RegStakeVoteDeleg | Combined registration + stake + vote delegation (Conway) |
| CommitteeHotAuth | Authorize a hot key for a constitutional committee member (Conway) |
| CommitteeColdResign | Resign a constitutional committee cold key (Conway) |
| MoveInstantaneousRewards | Transfer between treasury and reserves (pre-Conway) |

Governance (CIP-1694)

The GovernanceState tracks all Conway-era governance:

DRep Lifecycle

  • Registration — DReps register with a deposit, becoming eligible to vote
  • Activity tracking — DReps must vote within drepActivity epochs or become inactive
  • Expiration — Inactive DReps' delegated stake counts as abstaining
  • Delegation — Stake credentials delegate voting power to DReps, AlwaysAbstain, or AlwaysNoConfidence

Constitutional Committee

  • Hot key authorization — Cold keys authorize hot keys for voting
  • Member expiration — Each member has an epoch-based term limit
  • Quorum — Threshold fraction of non-expired, non-resigned members must approve

Governance Actions

Seven action types with per-type ratification thresholds:

| Action | DRep Threshold | SPO Threshold | CC Required |
|---|---|---|---|
| ParameterChange | Varies by param group (4 groups) | Varies by param group (5 groups) | Yes |
| HardForkInitiation | DRep threshold | SPO threshold | Yes |
| TreasuryWithdrawals | DRep threshold | No | Yes |
| NoConfidence | DRep threshold | SPO threshold | No |
| UpdateCommittee | DRep threshold | SPO threshold | No (if NoConfidence) |
| NewConstitution | DRep threshold | No | Yes |
| InfoAction | No threshold | No threshold | No |

Ratification

Ratification uses a two-epoch delay: proposals and votes from epoch E are considered at the E+1 → E+2 boundary using a frozen RatificationSnapshot. This prevents mid-epoch voting from affecting the current epoch's ratification. Thresholds use exact rational arithmetic via u128 cross-multiplication.
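
The cross-multiplication trick avoids both floating point and rational types for the threshold check. A minimal sketch (`meets_threshold` is a hypothetical helper name):

```rust
/// Exact ratification check: does yes_stake / total_stake meet the
/// num/den threshold? Widening to u128 makes the products overflow-safe
/// for all u64 inputs, with no floating point involved.
pub fn meets_threshold(yes_stake: u64, total_stake: u64, num: u64, den: u64) -> bool {
    (yes_stake as u128) * (den as u128) >= (num as u128) * (total_stake as u128)
}
```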

Epoch Transitions

At each epoch boundary, process_epoch_transition() follows the Haskell NEWEPOCH STS rule:

flowchart TD
    NE[NEWEPOCH] --> RUPD[Apply pending RUPD<br/>treasury += deltaT<br/>reserves -= deltaR<br/>credit rewards]
    RUPD --> SNAP[SNAP<br/>Rotate mark → set → go<br/>Capture current fees]
    SNAP --> POOLREAP[POOLREAP<br/>Process pool retirements<br/>Refund deposits]
    POOLREAP --> RAT[RATIFY<br/>Governance ratification<br/>Enact approved actions]
    RAT --> RESET[Reset block counters<br/>Clear RUPD state]

Reward Distribution (RUPD)

Rewards follow a deferred schedule matching Haskell's pulsing reward computation:

  1. Epoch E → E+1: Compute RUPD (monetary expansion + fees - treasury cut)
  2. Epoch E+1 → E+2: Apply RUPD (credit rewards to accounts, update treasury/reserves)

The reward calculation uses the "go" snapshot (two epochs old) for stake distribution, ensuring a stable base for computation.

Stake Snapshots

The mark/set/go model ensures different subsystems use consistent, non-overlapping snapshots:

| Snapshot | Age | Used For |
|---|---|---|
| Mark | Current epoch boundary | Future leader election (2 epochs later) |
| Set | Previous epoch boundary | Current epoch leader election |
| Go | Two epochs ago | Current epoch reward distribution |

UTxO Storage

UtxoStore

The persistent UTxO set wraps a dugite-lsm LSM tree:

  • 36-byte keys — 32-byte transaction hash + 4-byte output index (big-endian)
  • Bincode valuesTransactionOutput serialized via bincode
  • Address index — In-memory HashMap<Address, HashSet<TransactionInput>> for N2C LocalStateQuery GetUTxOByAddress efficiency
  • Bloom filters — 10 bits per key (~1% false positive rate) for fast negative lookups during validation
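
The 36-byte key layout can be sketched directly; big-endian index bytes mean all outputs of one transaction sort in index order under the LSM tree's lexicographic key ordering (`utxo_key` is a hypothetical helper name):

```rust
/// 36-byte UTxO key: 32-byte tx hash followed by the output index as
/// 4 big-endian bytes, so keys for one tx sort by index in the LSM tree.
pub fn utxo_key(tx_hash: &[u8; 32], index: u32) -> [u8; 36] {
    let mut key = [0u8; 36];
    key[..32].copy_from_slice(tx_hash);
    key[32..].copy_from_slice(&index.to_be_bytes());
    key
}
```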

DiffSeq (Rollback Support)

Each block produces a UtxoDiff recording inserted and deleted UTxOs. The DiffSeq holds the last k=2160 diffs, enabling O(1) rollback by applying diffs in reverse without reloading snapshots.
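
Reverse diff application can be sketched with simplified types (the real `UtxoDiff` carries full transaction inputs and outputs; the tuple types here are illustrative):

```rust
use std::collections::HashMap;

type TxIn = (u64, u32); // simplified: (tx id, output index)
type TxOut = u64;       // simplified: lovelace only

/// What one block did to the UTxO set; deleted entries keep their old
/// value so the diff can be undone.
pub struct UtxoDiff {
    pub inserted: Vec<(TxIn, TxOut)>,
    pub deleted: Vec<(TxIn, TxOut)>,
}

/// Undo one block: remove what it inserted, restore what it deleted.
/// Rolling back n blocks applies the last n diffs in reverse order.
pub fn revert(utxo: &mut HashMap<TxIn, TxOut>, diff: &UtxoDiff) {
    for (k, _) in &diff.inserted {
        utxo.remove(k);
    }
    for (k, v) in &diff.deleted {
        utxo.insert(*k, *v);
    }
}
```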

LedgerSeq (Anchored State Sequence)

LedgerSeq implements Haskell's V2 LedgerDB architecture:

  • Anchor — One full LedgerState at the immutable tip (persisted to disk)
  • Volatile deltas — Per-block LedgerDelta for the last k blocks
  • Checkpoints — Full state snapshots every ~100 blocks for fast reconstruction
  • Rollback — Drop trailing deltas and reconstruct from the nearest checkpoint

This avoids the 17-34 GB memory overhead of storing k full state copies.

CompositeUtxoView (Mempool Support)

All validate_transaction_* functions accept any UtxoLookup implementation. The CompositeUtxoView layers a mempool overlay on top of the on-chain UTxO set, enabling validation of chained mempool transactions (where one tx spends outputs of another unconfirmed tx) without mutating the live ledger state.

Consensus

Dugite implements the Ouroboros Praos consensus protocol, the proof-of-stake protocol used by Cardano since the Shelley era.

Ouroboros Praos Overview

Ouroboros Praos divides time into fixed-length slots. Each slot, a slot leader is selected based on their stake proportion. The leader is entitled to produce a block for that slot. Key properties:

  • Slot-based — Time is divided into slots (1 second each on mainnet)
  • Epoch-based — Slots are grouped into epochs (432,000 slots, i.e. 5 days, on mainnet)
  • Stake-proportional — The probability of being elected is proportional to the pool's active stake
  • Private leader selection — Only the pool operator knows if they are elected (until they publish the block)

Slot Leader Election

VRF-Based Selection

Each slot, the pool operator evaluates a VRF (Verifiable Random Function) using:

  • Their VRF signing key
  • The slot number
  • The epoch nonce

The VRF produces:

  1. A VRF output — A deterministic pseudo-random value
  2. A VRF proof — A proof that the output was correctly computed

Leader Threshold

The VRF output is compared against a threshold derived from:

  • The pool's relative stake (sigma)
  • The active slot coefficient (f = 0.05 on mainnet)

The threshold is computed using the phi function:

phi(sigma) = 1 - (1 - f)^sigma

A slot leader is elected if VRF_output < phi(sigma).
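
For intuition only, the threshold and leader check can be sketched in floating point. This is an illustration, not the production path: as described in the VRF arithmetic subsection, the real leader check uses exact 34-digit fixed-point arithmetic and never touches `f64`.

```rust
/// phi(sigma) = 1 - (1 - f)^sigma. Illustration only; the production
/// leader check uses exact fixed-point arithmetic, never f64.
pub fn phi(sigma: f64, f: f64) -> f64 {
    1.0 - (1.0 - f).powf(sigma)
}

/// A pool is elected when its (normalized) VRF output falls below phi.
pub fn is_leader(vrf_fraction: f64, sigma: f64, f: f64) -> bool {
    vrf_fraction < phi(sigma, f)
}
```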

VRF Exact Rational Arithmetic

The leader check is a critical consensus operation — any deviation from the Haskell reference implementation would cause a node to disagree on which blocks are valid. Dugite uses exact 34-digit fixed-point arithmetic via dashu-int IBig, matching Haskell's FixedPoint E34 type exactly. No floating-point operations are used anywhere in the VRF computation path.

Era-dependent VRF modes:

| Era | Protocol Version | VRF Output Derivation | certNatMax |
|---|---|---|---|
| Shelley — Alonzo (TPraos) | proto < 7 | Raw 64-byte VRF output | 2^512 |
| Babbage — Conway (Praos) | proto >= 7 | Blake2b-256("L" ∥ output) | 2^256 |

In TPraos mode (Shelley through Alonzo), the raw 64-byte VRF output is used directly for the leader check, with a certNatMax of 2^512 defining the output space. In Praos mode (Babbage onward), the VRF output is hashed with Blake2b-256("L" || output) to produce a 32-byte value, reducing certNatMax to 2^256. The "L" prefix distinguishes the leader VRF output from the nonce VRF output (which uses "N").

Mathematical primitives:

  • ln(1 + x) — Uses the Euler continued fraction expansion, matching Haskell's lncf function. This converges for all x >= 0, unlike a Taylor series, which has a limited radius of convergence.
  • taylorExpCmp — Computes exp() via Taylor series with rigorous error bounds, enabling early termination when the comparison result can be determined without computing the full expansion. This avoids unnecessary precision in the common case where the VRF output is far from the threshold.

Epoch Nonce

The epoch nonce is computed at each epoch boundary:

epoch_nonce = hash(candidate_nonce || lab_nonce)

Where:

  • candidate_nonce is the evolving nonce frozen at the stability window boundary of the previous epoch
  • lab_nonce is a hash derived from the previous epoch's first block (the "lab", or last applied block, nonce)

The initial nonce is derived from the Shelley genesis hash.

Nonce Establishment

The nonce lifecycle follows a precise sequence across epoch boundaries:

  1. Evolving nonce — Accumulates VRF nonce contributions from every block: evolving_nonce = hash(prev_evolving_nonce || hash(vrf_nonce_output))
  2. Candidate nonce — The evolving nonce is frozen (snapshotted) at the stability window boundary within each epoch. After this point, new VRF contributions only affect the evolving nonce, not the candidate.
  3. Epoch nonce — At the epoch boundary, the new epoch nonce is computed as hash(candidate_nonce_from_prev_epoch || lab_nonce).
flowchart LR
    A["Block VRF<br/>contributions"] -->|"accumulated<br/>every block"| B["Evolving<br/>Nonce"]
    B -->|"frozen at<br/>stability window"| C["Candidate<br/>Nonce"]
    C -->|"hash(candidate ∥ lab)"| D["Epoch Nonce<br/>(next epoch)"]

Nonce establishment after startup:

  • After snapshot load or Mithril import: The nonce is not immediately valid. At least one full epoch transition must occur during live operation for nonce_established to become true. The replay of blocks during the first partial epoch counts toward building the evolving nonce.
  • After full genesis replay: The nonce is immediately valid because all VRF contributions from every block have been accumulated during the replay.

Era-Dependent Nonce Stability Window

The stability window determines how early in an epoch the candidate nonce is frozen. This varies by era:

| Era | Protocol Version | Stability Window |
|---|---|---|
| Shelley — Babbage | proto < 10 | 3k/f slots |
| Conway | proto >= 10 | 4k/f slots |

Where k is the security parameter (2160 on mainnet) and f is the active slot coefficient (0.05 on mainnet). The longer Conway window provides additional time for nonce contributions to accumulate, improving randomness quality.
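
Because f is a fraction (1/20 on mainnet), the window size stays in integer arithmetic: c·k/f = c·k·f_den/f_num. A small sketch (hypothetical helper name):

```rust
/// Stability window in slots: c * k / f, where c is 3 before protocol
/// version 10 and 4 from Conway (proto >= 10) onward. f is passed as a
/// fraction (1/20 on mainnet) to stay in integer arithmetic.
pub fn stability_window_slots(proto: u64, k: u64, f_num: u64, f_den: u64) -> u64 {
    let c = if proto >= 10 { 4 } else { 3 };
    c * k * f_den / f_num
}
```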

Chain Selection

When multiple valid chains exist, Ouroboros Praos selects the chain with the most blocks (longest chain rule). Dugite implements:

  1. Chain comparison — Compare the block height of competing chains
  2. Rollback support — Roll back up to k=2160 blocks to switch to a longer chain
  3. Immutability — Blocks deeper than k are considered final

Epoch Transitions

At each epoch boundary, Dugite performs:

Stake Snapshot Rotation

Dugite uses the mark/set/go snapshot model:

  • Mark — The current epoch boundary snapshot (will be used for leader election two epochs from now)
  • Set — The previous epoch's mark (used for leader election in the current epoch)
  • Go — Two epochs ago (used for reward distribution in the current epoch)

At each epoch boundary:

  1. Go becomes the active snapshot for reward distribution
  2. Set moves to go
  3. Mark moves to set
  4. A new mark is taken from the current ledger state
flowchart LR
    subgraph "Epoch N Boundary"
        direction TB
        L["Current Ledger<br/>State"] -->|"snapshot"| M["Mark"]
        M -->|"rotate"| S["Set"]
        S -->|"rotate"| G["Go"]
    end
    S -.- LE["Leader Election<br/>(epoch N)"]
    G -.- RD["Reward Distribution<br/>(epoch N)"]
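
The rotation itself is a three-way shift. A minimal sketch with hypothetical types, using `mem::replace` to move each snapshot one position without cloning:

```rust
/// One stake snapshot; the epoch field tags when it was taken.
#[derive(Clone, Debug, PartialEq)]
pub struct StakeSnapshot {
    pub taken_at_epoch: u64,
}

pub struct EpochSnapshots {
    pub mark: StakeSnapshot,
    pub set: StakeSnapshot,
    pub go: StakeSnapshot,
}

impl EpochSnapshots {
    /// Epoch-boundary rotation: set -> go, mark -> set, fresh -> mark.
    pub fn rotate(&mut self, fresh: StakeSnapshot) {
        self.go = std::mem::replace(&mut self.set, std::mem::replace(&mut self.mark, fresh));
    }
}
```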

Snapshot Establishment

After a node starts, the snapshots are not immediately trustworthy for block production:

  • snapshots_established requires at least 3 live (post-replay) epoch transitions before returning true. This ensures that all three snapshot positions (mark, set, go) have been populated by the running node with precise stake calculations.
  • Replay-built snapshots may contain approximate stake values due to differences in reward calculation during fast replay versus live operation. These are sufficient for validation but not authoritative for forging.
  • VRF leader eligibility failures are non-fatal until snapshots are fully established. During the establishment period, a pool may fail leader checks because the stake distribution in the snapshot does not yet reflect the true on-chain state. The node logs these failures but continues normal operation.

Reward Calculation and Distribution

At each epoch boundary, rewards are calculated and distributed:

  1. Monetary expansion — New ADA is created from the reserves based on the monetary expansion rate
  2. Fee collection — Transaction fees from the epoch are collected
  3. Treasury cut — A fraction (tau) of rewards goes to the treasury
  4. Pool rewards — Remaining rewards are distributed to pools based on their performance
  5. Member distribution — Pool rewards are split between the operator and delegators based on pool parameters (cost, margin, pledge)

Validation Checks

Dugite validates the following consensus-level properties:

KES Period Validation

The KES (Key Evolving Signature) period in the block header must be within the valid range for the operational certificate:

opcert_start_kes_period <= current_kes_period < opcert_start_kes_period + max_kes_evolutions
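
The range check is a half-open window. A minimal sketch (hypothetical helper name; the test assumes mainnet's maxKESEvolutions of 62):

```rust
/// Half-open KES validity window from the operational certificate:
/// start <= current < start + max_evolutions.
pub fn kes_period_valid(current: u64, opcert_start: u64, max_evolutions: u64) -> bool {
    current >= opcert_start && current < opcert_start + max_evolutions
}
```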

VRF Verification

Full VRF verification includes:

  1. VRF key bindingblake2b_256(header.vrf_vkey) must match the pool's registered vrf_keyhash
  2. VRF proof verification — The VRF proof is cryptographically verified against the VRF public key
  3. Leader eligibility — The VRF leader value is checked against the Praos threshold for the pool's relative stake using the phi function

Operational Certificate Verification

The operational certificate's Ed25519 signature is verified against the raw bytes signable format (matching Haskell's OCertSignable):

signable = hot_vkey(32 bytes) || counter(8 bytes BE) || kes_period(8 bytes BE)
signature = sign(cold_skey, signable)

The counter must be monotonically increasing per pool to prevent certificate replay.
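
Building the 48-byte signable message is a straightforward concatenation (hypothetical helper name, following the layout above):

```rust
/// Build the 48-byte OCertSignable message that the cold key signs:
/// hot KES vkey (32 bytes) || counter (8 bytes BE) || kes_period (8 bytes BE).
pub fn ocert_signable(hot_vkey: &[u8; 32], counter: u64, kes_period: u64) -> [u8; 48] {
    let mut msg = [0u8; 48];
    msg[..32].copy_from_slice(hot_vkey);
    msg[32..40].copy_from_slice(&counter.to_be_bytes());
    msg[40..].copy_from_slice(&kes_period.to_be_bytes());
    msg
}
```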

KES Signature Verification

Block headers are signed using the Sum6Kes scheme (depth-6 binary sum composition over Ed25519). The KES key is evolved to the correct period offset from the operational certificate's start period. Verification checks:

  1. The KES signature over the header body bytes is valid
  2. The KES period matches the expected value for the block's slot

Slot Leader Eligibility

The VRF proof is checked to confirm the block producer was indeed elected for the slot, given the epoch nonce and their pool's stake.

Networking

Dugite implements the full Ouroboros network protocol stack, supporting both Node-to-Node (N2N) and Node-to-Client (N2C) communication.

Protocol Stack

flowchart TB
    subgraph N2N ["Node-to-Node (TCP)"]
        HS[Handshake V14/V15]
        CSP[ChainSync<br/>Headers]
        BFP[BlockFetch<br/>Block Bodies]
        TX[TxSubmission2<br/>Transactions]
        KA[KeepAlive<br/>Liveness]
    end

    subgraph N2C ["Node-to-Client (Unix Socket)"]
        HSC[Handshake]
        LCS[LocalChainSync<br/>Block Delivery]
        LSQ[LocalStateQuery<br/>Ledger Queries]
        LTS[LocalTxSubmission<br/>Submit Transactions]
        LTM[LocalTxMonitor<br/>Mempool Queries]
    end

    MUX[Multiplexer] --> N2N
    MUX --> N2C

Relay Node Architecture

flowchart TB
    subgraph Inbound ["Inbound Connections"]
        IN1[Peer A] -->|N2N| MUX_IN[Multiplexer]
        IN2[Peer B] -->|N2N| MUX_IN
        IN3[Wallet] -->|N2C| MUX_N2C[N2C Server]
    end

    subgraph Outbound ["Outbound Connections"]
        MUX_OUT[Multiplexer] -->|ChainSync| PEER1[Bootstrap Peer]
        MUX_OUT -->|BlockFetch| PEER1
        MUX_OUT -->|TxSubmission| PEER1
    end

    subgraph Core ["Node Core"]
        PM[Peer Manager<br/>Cold→Warm→Hot]
        MP[Mempool<br/>Tx Validation]
        CDB[(ChainDB)]
        LS[Ledger State]
        CONS[Consensus<br/>Ouroboros Praos]
    end

    MUX_IN -->|ChainSync| CDB
    MUX_IN -->|BlockFetch| CDB
    MUX_IN -->|TxSubmission| MP
    MUX_N2C -->|LocalStateQuery| LS
    MUX_N2C -->|LocalTxSubmission| MP
    MUX_N2C -->|LocalTxMonitor| MP

    PEER1 -->|blocks| CDB
    CDB --> LS
    LS --> CONS
    PM -->|manage| MUX_OUT
    PM -->|manage| MUX_IN

Node-to-Node (N2N) Protocol

N2N connections use TCP and carry multiple mini-protocols over a multiplexed connection.

Handshake (V14/V15)

The N2N handshake negotiates the protocol version and network parameters:

  • Protocol version V14 (Plomin HF) and V15 (SRV DNS support)
  • Network magic number
  • Diffusion mode: InitiatorOnly or InitiatorAndResponder
  • Peer sharing flags

ChainSync

The ChainSync mini-protocol synchronizes block headers between peers:

  • Client mode: Requests headers sequentially from a peer to track the chain
  • Server mode: Serves headers to connected peers, with per-peer cursor tracking

Key messages:

  • MsgFindIntersect — Find a common chain point
  • MsgRequestNext — Request the next header
  • MsgRollForward — Header delivered
  • MsgRollBackward — Chain reorganization
  • MsgAwaitReply — Peer has no new headers (at tip)

BlockFetch

The BlockFetch mini-protocol retrieves block bodies by hash:

  • Client mode: Requests ranges of blocks from peers
  • Server mode: Serves blocks to peers, validates block existence before serving

Key messages:

  • MsgRequestRange — Request blocks in a slot range
  • MsgBlock — Block delivered
  • MsgNoBlocks — Requested blocks not available
  • MsgBatchDone — End of batch

TxSubmission2

The TxSubmission2 mini-protocol propagates transactions between peers:

  • Bidirectional handshake (MsgInit)
  • Flow-controlled transaction exchange with ack/req counts
  • Inflight tracking per peer
  • Mempool integration for serving transaction IDs and bodies

KeepAlive

The KeepAlive mini-protocol maintains connection liveness with periodic heartbeat messages.

PeerSharing

The PeerSharing mini-protocol enables gossip-based peer discovery. Peers exchange addresses of other known peers to help the network self-organize.

Node-to-Client (N2C) Protocol

N2C connections use Unix domain sockets and serve local clients (wallets, CLI tools). The N2C handshake supports versions V16-V22 (Conway era) with automatic detection of the Haskell bit-15 version encoding used by cardano-cli 10.x.

LocalStateQuery

Supports all 39 Shelley BlockQuery tags (0-38) plus cross-era queries, providing full compatibility with cardano-node. The query protocol uses an acquire/query/release pattern:

  1. MsgAcquire — Lock the ledger state at the current tip
  2. MsgQuery — Execute queries against the locked state
  3. MsgRelease — Release the lock

All BlockQuery messages are wrapped in the Hard Fork Combinator (HFC) envelope. Results from era-specific BlockQuery tags are returned inside an array(1) success wrapper, while QueryAnytime and QueryHardFork results are returned unwrapped.

Shelley BlockQuery Tags 0-38

  Tag  Query — Description

    0  GetLedgerTip — Current slot, hash, and block number
    1  GetEpochNo — Active epoch number
    2  GetCurrentPParams — Live protocol parameters (positional array(31) CBOR encoding matching Haskell ConwayPParams EncCBOR)
    3  GetProposedPParamsUpdates — Proposed parameter updates (empty map in Conway)
    4  GetStakeDistribution — Pool stake distribution with pledge
    5  GetNonMyopicMemberRewards — Estimated rewards per pool for given stake amounts
    6  GetUTxOByAddress — UTxO set filtered by address (Cardano wire format Map<[tx_hash, index], {0: addr, 1: value, 2: datum}>)
    7  GetUTxOWhole — Entire UTxO set (expensive; used by testing tools)
    8  DebugEpochState — Simplified epoch state summary (treasury, reserves, active stake totals)
    9  GetCBOR — Meta-query that wraps the result of an inner query in CBOR tag(24), returning raw bytes
   10  GetFilteredDelegationsAndRewardAccounts — Delegation targets and reward balances for a set of stake credentials
   11  GetGenesisConfig — System start, epoch length, slot length, and security parameter
   12  DebugNewEpochState — Simplified new epoch state summary (epoch number, block count, snapshot state)
   13  DebugChainDepState — Chain-dependent state summary (last applied block, operational certificate counters)
   14  GetRewardProvenance — Reward calculation provenance: reward pot, treasury tax rate, total active stake, per-pool reward breakdown
   15  GetUTxOByTxIn — UTxO set filtered by transaction inputs
   16  GetStakePools — Set of all registered pool key hashes
   17  GetStakePoolParams — Registered pool parameters (owner, cost, margin, pledge, relays, metadata)
   18  GetRewardInfoPools — Per-pool reward breakdown: relative stake, leader and member reward splits, pool margin, fixed cost, and performance metrics
   19  GetPoolState — QueryPoolStateResult encoded as array(4): [poolParams, futurePoolParams, retiring, deposits]
   20  GetStakeSnapshots — Mark/set/go stake snapshots used for leader schedule calculation
   21  GetPoolDistr — Pool stake distribution with VRF verification key hashes
   22  GetStakeDelegDeposits — Deposit amounts per registered stake credential
   23  GetConstitution — Constitution anchor (URL + hash) and optional guardrail script hash
   24  GetGovState — ConwayGovState encoded as array(7) CBOR: active proposals, committee state, constitution, current/previous protocol parameters, future parameters, and DRep pulse state
   25  GetDRepState — Registered DReps with their delegation counts and deposit balances (supports credential filter)
   26  GetDRepStakeDistr — Total delegated stake per DRep (lovelace)
   27  GetCommitteeMembersState — Constitutional committee members, iterating committee_expiration entries with hot_credential_type for each member
   28  GetFilteredVoteDelegatees — Vote delegation map per stake credential
   29  GetAccountState — Treasury and reserves balances
   30  GetSPOStakeDistr — Per-pool stake distribution filtered by a set of pool IDs
   31  GetProposals — Active governance proposals with optional governance action ID filter
   32  GetRatifyState — Enacted and expired proposals along with the ratify_delayed flag
   33  GetFuturePParams — Pending protocol parameter changes scheduled for the next epoch (if any)
   34  GetLedgerPeerSnapshot — SPO relay addresses weighted by relative stake, used for P2P ledger-based peer discovery
   35  QueryStakePoolDefaultVote — Default vote per pool derived from its DRep delegation (AlwaysAbstain, AlwaysNoConfidence, or specific DRep vote)
   36  GetPoolDistr2 — Extended pool distribution including total_active_stake alongside per-pool entries
   37  GetStakeDistribution2 — Extended stake distribution including total_active_stake
   38  GetMaxMajorProtocolVersion — Maximum supported major protocol version (returns 10)

Cross-Era Queries

In addition to the Shelley BlockQuery tags, the following queries operate outside the HFC era-specific envelope:

  GetCurrentEra — Active era (Byron through Conway)
  GetChainBlockNo — Current chain height; WithOrigin encoded as [1, blockNo] for At or [0] for Origin
  GetChainPoint — Current tip point, encoded as [] for Origin or [slot, hash] for a specific point
  GetSystemStart — Network genesis time as UTCTime, encoded as [year, dayOfYear, picosOfDay]
  GetEraHistory — Indefinite array of EraSummary entries (Byron safe_zone = k*2, Shelley+ safe_zone = 3k/f)

CBOR Encoding Notes

  • PParams are encoded as a positional array(31) matching Haskell's EncCBOR instance, not as a map with JSON string keys (parameter updates, by contrast, use sparse maps with integer CBOR field keys).
  • CBOR Sets (e.g., pool IDs, stake key owners) use tag(258) and elements must be sorted for canonical encoding.
  • Value encoding: plain integer for ADA-only UTxOs, [coin, multiasset_map] for multi-asset UTxOs.

LocalTxSubmission

Submits transactions from local clients to the node's mempool:

  MsgSubmitTx — Submit a transaction (era ID + CBOR bytes)
  MsgAcceptTx — Transaction accepted into mempool
  MsgRejectTx — Transaction rejected with reason

Submitted transactions undergo both Phase-1 (structural) and Phase-2 (Plutus script) validation before mempool admission.

LocalTxMonitor

Monitors the transaction mempool:

  MsgAcquire — Acquire a mempool snapshot
  MsgHasTx — Check if a transaction is in the mempool
  MsgNextTx — Get the next transaction from the mempool
  MsgGetSizes — Get mempool capacity, size, and transaction count

P2P Networking

Dugite implements the full Ouroboros P2P peer selection governor, enabled by default (EnableP2P: true). The governor manages peer connections through a target-driven state machine that continuously maintains optimal connectivity.

Diffusion Mode

The DiffusionMode config field controls how the node participates in the network:

  • InitiatorAndResponder (default) — Full relay mode. The node opens a listening port and accepts inbound N2N connections from other peers, in addition to making outbound connections. This is the correct mode for relay nodes.
  • InitiatorOnly — Block producer mode. The node only makes outbound connections to its configured relays and never opens a listening port. This prevents direct internet exposure of block producers.

Peer Sharing

The PeerSharing mini-protocol enables gossip-based peer discovery. When enabled, the node exchanges addresses of known routable peers with connected peers.

Peer sharing behaviour is auto-configured by default:

  • Relays — Peer sharing is enabled, allowing the node to both request and serve peer addresses.
  • Block producers — Peer sharing is disabled (when --shelley-kes-key is provided) to avoid leaking the BP's network position.

Override with the PeerSharing config field (true/false) if needed.

The PeerSharing protocol filters out non-routable addresses (RFC1918, CGNAT, loopback, link-local, IPv6 ULA) before sharing.

Peer Manager

The peer manager classifies peers into three temperature categories following the cardano-node model:

  • Cold — Known but not connected
  • Warm — TCP connected, keepalive running, but not actively syncing
  • Hot — Fully active with ChainSync, BlockFetch, and TxSubmission2

Peer Lifecycle

stateDiagram-v2
    [*] --> Cold: Discovered
    Cold --> Warm: TCP connect + handshake
    Warm --> Hot: Mini-protocols activated (5s dwell)
    Hot --> Warm: Demotion (poor performance / churn)
    Warm --> Cold: Disconnection / backoff
    Cold --> [*]: Evicted (max failures)

Warm peers must dwell for at least 5 seconds before promotion to Hot, preventing rapid cycling.

Peer Sources

Peers enter the Cold pool from four sources:

  Topology — Bootstrap peers, local roots, and public roots from the topology file
  DNS — A/AAAA resolution of hostname-based topology entries
  Ledger — SPO relay addresses from pool registration certificates (after useLedgerAfterSlot)
  PeerSharing — Addresses received via the gossip protocol from connected peers

Peer Selection & Scoring

Peers are ranked using a composite score:

score = 0.4 × reputation + 0.4 × latency_score + 0.2 × failure_score

Where:

  • Reputation — 0.0 (worst) to 1.0 (best), adjusted +0.01 per success, -0.1 per failure
  • Latency score — 1 / (1 + ms/200), based on EWMA latency (smoothing α=0.3)
  • Failure score — max(1.0 - failures×0.1, 0.0); failure counts decay (halving every 5 minutes)

Subnet diversity is enforced: peers from the same /24 (IPv4) or /48 (IPv6) subnet receive a selection penalty.
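The composite score above can be sketched as follows (inputs are assumed pre-computed: EWMA latency in milliseconds and the time-decayed failure count; not Dugite's actual API):

```rust
/// Composite peer score: 0.4*reputation + 0.4*latency + 0.2*failure.
/// `latency_ms` is the EWMA latency, `failures` the decayed failure count.
fn peer_score(reputation: f64, latency_ms: f64, failures: f64) -> f64 {
    let latency_score = 1.0 / (1.0 + latency_ms / 200.0);
    let failure_score = (1.0 - failures * 0.1).max(0.0);
    0.4 * reputation + 0.4 * latency_score + 0.2 * failure_score
}
```

A perfect peer (reputation 1.0, 0 ms latency, no failures) scores 1.0; a 200 ms EWMA latency alone costs 0.2 of the score.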

Failure Handling

  • Exponential backoff on connection failures: 5s → 10s → 20s → 40s → 80s → 160s (capped), with ±2s random fuzz
  • Max cold failures: 5 consecutive failures before a peer is evicted from the peer table
  • Failure decay: Failure counts halve every 5 minutes, allowing peers to recover reputation over time
  • Circuit breaker: Closed → Open → HalfOpen with exponential cooldown
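A deterministic sketch of the backoff schedule (the ±2s random fuzz is omitted for determinism; the function name is illustrative):

```rust
use std::time::Duration;

/// Sketch of the backoff schedule: 5s doubling per consecutive failure,
/// capped at 160s. The ±2s random fuzz a real implementation adds on
/// top is omitted here.
fn backoff(consecutive_failures: u32) -> Duration {
    let secs = 5u64 << consecutive_failures.min(5); // 5, 10, 20, 40, 80, 160
    Duration::from_secs(secs)
}
```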

Inbound Connections

  • Per-IP token bucket rate limiting for DoS protection
  • N2N server handles handshake, ChainSync, BlockFetch, KeepAlive, TxSubmission2, and PeerSharing
  • DiffusionMode controls whether inbound connections are accepted

P2P Governor

The governor runs as a tokio task on a 30-second interval, continuously evaluating peer counts against configured targets and emitting promotion/demotion/connect/disconnect actions.

Target Counts

The governor maintains six independent target counts (matching cardano-node defaults):

  TargetNumberOfKnownPeers (85) — Total peers in the peer table (cold + warm + hot)
  TargetNumberOfEstablishedPeers (40) — Warm + hot peers (TCP connected)
  TargetNumberOfActivePeers (15) — Hot peers (fully syncing)
  TargetNumberOfKnownBigLedgerPeers (15) — Known big ledger peers
  TargetNumberOfEstablishedBigLedgerPeers (10) — Established big ledger peers
  TargetNumberOfActiveBigLedgerPeers (5) — Active big ledger peers

When any target is not met, the governor promotes peers to fill the deficit. When any target is exceeded, the governor demotes the lowest-scoring surplus peers. Local root peers are never demoted.
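The per-target deficit/surplus decision can be sketched as follows (a simplified illustration of the counting logic; Dugite's governor emits richer events):

```rust
/// Hypothetical per-target enforcement decision.
#[derive(Debug, PartialEq)]
enum Action {
    Promote(usize), // promote this many peers to fill a deficit
    Demote(usize),  // demote this many lowest-scoring surplus peers
    Hold,           // target met, nothing to do
}

fn enforce_target(current: usize, target: usize) -> Action {
    if current < target {
        Action::Promote(target - current)
    } else if current > target {
        Action::Demote(current - target)
    } else {
        Action::Hold
    }
}
```

In the real governor the demotion candidates are filtered first, so local root peers are never selected.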

Sync-State-Aware Targeting

The governor adjusts behaviour based on sync state:

  • PreSyncing / Syncing — Big ledger peers are prioritised for fast block download
  • CaughtUp — Normal target enforcement with balanced peer selection

Churn

The governor periodically rotates a subset of peers to discover better alternatives:

  • Configurable churn interval (default: 20% target reduction cycle)
  • Local root peers are exempt from churn
  • Churn ensures the node explores the peer landscape rather than settling on suboptimal connections

Prometheus Metrics

The P2P subsystem exports the following metrics:

  dugite_p2p_enabled — Whether P2P governance is active (gauge: 0 or 1)
  dugite_diffusion_mode — Current diffusion mode (0=InitiatorOnly, 1=InitiatorAndResponder)
  dugite_peer_sharing_enabled — Whether peer sharing is active (gauge: 0 or 1)
  dugite_peers_cold — Number of cold (known, unconnected) peers
  dugite_peers_warm — Number of warm (established) peers
  dugite_peers_hot — Number of hot (active) peers

Peer Discovery

Peers are discovered through multiple channels:

  1. Topology file — Bootstrap peers, local roots, and public roots
  2. PeerSharing protocol — Gossip-based discovery from connected peers
  3. Ledger-based discovery — SPO relay addresses extracted from pool registration certificates

Ledger-Based Peer Discovery

Once the node has synced past the slot threshold configured by useLedgerAfterSlot in the topology file, it activates ledger-based peer discovery. This mechanism extracts SPO relay addresses directly from pool registration parameters (pool_params) stored in the ledger state.

The discovery process runs on a periodic 5-minute interval and works as follows:

  1. Slot check — The current ledger tip slot is compared against useLedgerAfterSlot. If the topology sets this value to a negative number or omits it entirely, ledger peer discovery remains disabled.
  2. Relay extraction — All registered pool parameters are iterated, extracting relay entries of three types:
    • SingleHostAddr — IPv4 address and port
    • SingleHostName — DNS hostname and port
    • MultiHostName — DNS hostname with default port 3001
  3. Sampling — A deterministic subset (up to 20 relays) is sampled from the full relay set to avoid resolving thousands of addresses at once. The sample offset rotates based on the current slot for coverage diversity.
  4. DNS resolution — Hostnames are resolved to socket addresses via async DNS lookup.
  5. Peer manager integration — Resolved addresses are added as cold peers with PeerSource::Ledger classification, alongside existing bootstrap and public root peers.
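Step 3's rotating sample can be sketched as follows (illustrative; Dugite's exact offset rule may differ):

```rust
/// Deterministic rotating sample of up to `max` relays. The start
/// offset advances with the current slot so that successive discovery
/// rounds cover different parts of the relay set.
fn sample_relays<T: Clone>(relays: &[T], slot: u64, max: usize) -> Vec<T> {
    if relays.is_empty() {
        return Vec::new();
    }
    let offset = (slot % relays.len() as u64) as usize;
    relays
        .iter()
        .cycle() // wrap around the end of the relay list
        .skip(offset)
        .take(max.min(relays.len()))
        .cloned()
        .collect()
}
```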

As pool registrations change over time (new pools register, existing pools update relay addresses, pools retire), the ledger peer set evolves dynamically. This provides a protocol-native discovery mechanism that does not depend on any centralized directory.

Block Relay

Dugite implements full relay node behavior, propagating blocks received from upstream peers to all downstream N2N connections. This ensures that blocks flow through the network without requiring every node to sync directly from the block producer.

Broadcast Architecture

Block propagation uses a tokio::sync::broadcast channel with a capacity of 64 announcements. The architecture has three components:

  1. Sender — The node core holds a broadcast::Sender<BlockAnnouncement> obtained from the N2N server at startup. When the sync pipeline processes new blocks or the forge module produces a new block, it sends an announcement containing the slot, block hash, and block number.
  2. Receivers — Each N2N server connection spawns with its own broadcast::Receiver subscription. The connection handler uses tokio::select! to concurrently service mini-protocol messages and listen for block announcements.
  3. Delivery — When a downstream peer is waiting at the tip (having received MsgAwaitReply from ChainSync), an incoming block announcement triggers a MsgRollForward message to that peer, along with the block header. The peer can then fetch the full block body via BlockFetch.

Relay vs. Forger Announcements

Both synced and forged blocks flow through the same broadcast channel:

  • Synced blocks — When the pipelined ChainSync client receives blocks from an upstream peer and the node is following the tip (strict mode), each batch's final block is announced to all downstream connections. This enables relay behavior where blocks received from one upstream peer propagate to all other connected peers.
  • Forged blocks — When the block producer creates a new block, it is announced through the same channel after being written to ChainDB and applied to the ledger.

A parallel broadcast::Sender<RollbackAnnouncement> handles chain rollbacks, sending MsgRollBackward to downstream peers when the node's chain selection switches to a different fork.

Lagged Receivers

If a downstream peer falls behind (e.g., slow network or processing), the broadcast channel's bounded capacity means the receiver may lag. Lagged receivers skip missed announcements and log the gap, ensuring a slow peer does not block propagation to others.

Multiplexer

All mini-protocols run over a single TCP connection (N2N) or Unix socket (N2C), multiplexed by protocol ID:

  ID  Mini-Protocol
   0  Handshake
   2  ChainSync (N2N)
   3  BlockFetch (N2N)
   4  TxSubmission2 (N2N)
   8  KeepAlive (N2N)
  10  PeerSharing (N2N)
   5  LocalChainSync (N2C)
   6  LocalTxSubmission (N2C)
   7  LocalStateQuery (N2C)
   9  LocalTxMonitor (N2C)

The multiplexer uses length-prefixed frames with protocol ID headers, matching the Ouroboros specification.

P2P Governor

This document describes Dugite's peer management architecture, implementing the Ouroboros P2P peer selection governor.


Architecture

Two modules implement peer management in dugite-network:

PeerManager (peer_manager.rs)

The data layer. Tracks every known peer in a flat HashMap<SocketAddr, PeerInfo> together with three HashSets for the cold/warm/hot buckets.

  Cold / Warm / Hot temperature tracking — Three-tier peer classification matching Ouroboros
  PeerCategory — LocalRoot, PublicRoot, BigLedgerPeer, LedgerPeer, Shared, Bootstrap
  ConnectionDirection — Inbound / Outbound tracking
  PeerSource — Config, PeerSharing, Ledger
  PeerPerformance — EWMA handshake RTT + block fetch latency
  Reputation scoring — Composite of latency + volume + reliability + recency
  Circuit breaker — Closed / Open / HalfOpen with exponential cooldown
  Subnet diversity penalty — /24 IPv4, /48 IPv6 penalisation for peer selection
  Trustable-first ordering — Two-tier ordering for peers_to_connect()
  Inbound connection limit — Configurable max inbound connections
  DiffusionMode — InitiatorOnly / InitiatorAndResponder
  Failure-count time decay — Halves every 5 minutes

Governor (governor.rs)

The policy layer. Runs on a 30-second tokio::interval in dugite-node.

  PeerTargets — root/known/established/active + BLP variants
  Sync-state-aware target switching — Adjusts targets for PreSyncing / Syncing / CaughtUp
  Hard/soft connection limits — ConnectionDecision for accept/reject
  Big-ledger-peer promotion priority — BLPs promoted first during sync
  Active (hot) peer target enforcement — Promotes/demotes to meet active target
  Established (warm+hot) target enforcement — Maintains established peer count
  Surplus reduction — Demote/disconnect lowest reputation, local-root protected
  Churn mechanism — 20% target reduction cycle at configurable intervals
  Default targets — active=15, established=40, known=85 (matching cardano-node)

Wiring

The governor runs as a standalone tokio::spawn task in node/mod.rs. Every 30 seconds it:

  1. Acquires a read lock on Arc<RwLock<PeerManager>> and calls governor.evaluate() and governor.maybe_churn().
  2. Acquires a write lock and applies the resulting GovernorEvents by calling promote_to_hot, demote_to_warm, peer_disconnected, and recompute_reputations.
  3. GovernorEvent::Connect is acknowledged but not executed here — outbound connections originate from the main connection loop via peers_to_connect().

Peer Selection State Machine

Peers progress through a formal state machine:

stateDiagram-v2
    [*] --> Cold
    Cold --> Warm: TCP connect + handshake
    Warm --> Hot: Activate mini-protocols
    Hot --> Warm: Deactivate mini-protocols
    Warm --> Cold: Disconnect
    Hot --> Cold: Forceful disconnect

Target Counts

The governor maintains six independent target counts:

  Known peers — 100
  Established peers — 40
  Active peers — 15
  Known big-ledger peers — 15
  Established big-ledger peers — 10
  Active big-ledger peers — 5

When any target is not met, the governor attempts to satisfy the deficit. When any target is exceeded, surplus peers are demoted by lowest reputation.


Local Root Peer Pinning

Local root peers (from localRoots in the topology file) have pinned targets that override the normal target counts. Local roots are never demoted for surplus reduction and are never churned.


Churn

The governor performs periodic churn to rotate peers:

  • Deadline churn (normal mode) — Approximately every 55 minutes, a fraction of established and active peers are replaced.
  • Bulk sync churn — During active block download, churn cycles are more aggressive (~15 minutes) to shed peers with poor block-fetch performance.

Big Ledger Peer Preference During Sync

Big ledger peers (SPOs in the top 90% of stake, obtained via GetLedgerPeerSnapshot) serve as trusted anchors during bulk block download. The governor maintains a separate target bucket for BLPs. When SyncState is Syncing or PreSyncing, BLP targets take priority.


Thread Safety

The PeerManager is wrapped in Arc<RwLock<PeerManager>>. The governor task acquires a read lock for evaluate() and a write lock only for event application, keeping the write-lock window minimal.


Files

  crates/dugite-network/src/governor.rs — Policy decisions and target enforcement
  crates/dugite-network/src/peer_manager.rs — Peer state tracking and reputation
  crates/dugite-node/src/node/mod.rs — Governor task wiring
  crates/dugite-node/src/config.rs — Topology parsing

Ouroboros Genesis Support

Dugite includes a Genesis State Machine (GSM) that tracks the node's sync progression through the Ouroboros Genesis protocol states.

Overview

The GSM implements three states matching the Ouroboros Genesis specification:

  • PreSyncing — Waiting for enough trusted big ledger peers (BLPs). The Historical Availability Assumption (HAA) requires a minimum number of active BLPs before sync begins.
  • Syncing — Active block download with density-based peer evaluation. The GSM monitors chain density across peers and can disconnect peers with insufficient chain density (GDD).
  • CaughtUp — Normal Praos operation. The node is at or near the chain tip and participates in standard consensus.

Enabling Genesis Mode

Genesis mode is opt-in via the --consensus-mode genesis CLI flag:

dugite-node run \
  --consensus-mode genesis \
  --config config/preview-config.json \
  ...

When not enabled (the default praos mode), the GSM immediately enters CaughtUp and all Genesis constraints are disabled. This is the recommended mode for nodes that sync from Mithril snapshots.

State Transitions

stateDiagram-v2
    [*] --> PreSyncing: genesis enabled, no marker
    [*] --> CaughtUp: marker file exists
    PreSyncing --> Syncing: HAA satisfied (enough BLPs)
    Syncing --> CaughtUp: all peers idle + tip fresh
    CaughtUp --> PreSyncing: tip becomes stale

A caught_up.marker file is written to the database directory when the node reaches CaughtUp, enabling fast restart without re-evaluating the Genesis bootstrap.

Features

  • State tracking: PreSyncing/Syncing/CaughtUp with automatic transitions
  • Big Ledger Peer identification: Pools in the top 90% of active stake are classified as BLPs
  • Genesis Density Disconnector (GDD): Compares chain density across peers within the genesis window and disconnects peers with insufficient density
  • Limit on Eagerness (LoE): Computes the maximum immutable tip slot based on candidate chain tips
  • Peer snapshot loading: JSON-based peer snapshot for initial peer discovery

The recommended deployment path uses Mithril snapshot import for fast sync with the default praos consensus mode:

# Import a Mithril snapshot first
dugite-node mithril-import --network-magic 2 --database-path ./db

# Then run in default praos mode
dugite-node run --config config/preview-config.json --database-path ./db ...

Protocol Parameters Reference

Cardano protocol parameters control the behavior of the network, including fees, block sizes, staking mechanics, and governance. These parameters can be queried from a running node and updated through governance actions.

Querying Parameters

dugite-cli query protocol-parameters \
  --socket-path ./node.sock \
  --out-file protocol-params.json

Fee Parameters

  Min fee coefficient (txFeePerByte / minFeeA) — Fee per byte of transaction size. Mainnet default: 44
  Min fee constant (txFeeFixed / minFeeB) — Fixed fee component. Mainnet default: 155381
  Min UTxO value per byte (utxoCostPerByte / adaPerUtxoByte) — Minimum lovelace per byte of UTxO. Mainnet default: 4310

The transaction fee formula is:

fee = txFeePerByte * tx_size_in_bytes + txFeeFixed
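A worked example of the formula (parameter values are the mainnet defaults listed above; the function name is illustrative):

```rust
/// Linear minimum-fee formula: fee = fee_per_byte * tx_size + fee_fixed.
/// All values in lovelace.
fn min_fee(tx_size: u64, fee_per_byte: u64, fee_fixed: u64) -> u64 {
    fee_per_byte * tx_size + fee_fixed
}
```

For a 300-byte transaction with mainnet defaults this yields 44 * 300 + 155381 = 168581 lovelace.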

Block Size Parameters

  Max block body size (maxBlockBodySize) — Maximum block body size in bytes. Mainnet default: 90112
  Max transaction size (maxTxSize) — Maximum transaction size in bytes. Mainnet default: 16384
  Max block header size (maxBlockHeaderSize) — Maximum block header size in bytes. Mainnet default: 1100

Staking Parameters

  Stake address deposit (stakeAddressDeposit / keyDeposit) — Deposit for stake key registration (lovelace). Mainnet default: 2000000
  Pool deposit (stakePoolDeposit / poolDeposit) — Deposit for pool registration (lovelace). Mainnet default: 500000000
  Pool retire max epoch (poolRetireMaxEpoch / eMax) — Maximum future epochs for pool retirement. Mainnet default: 18
  Pool target count (stakePoolTargetNum / nOpt) — Target number of pools (k parameter). Mainnet default: 500
  Min pool cost (minPoolCost) — Minimum fixed pool cost (lovelace). Mainnet default: 170000000

Monetary Policy

  Monetary expansion (rho) — Rate of new ADA creation from reserves per epoch
  Treasury cut (tau) — Fraction of rewards directed to the treasury
  Pledge influence (a0) — How pledge affects reward calculations

Plutus Execution Parameters

  Max tx execution units (maxTxExecutionUnits) — {memory, steps} per transaction. Mainnet default: {14000000, 10000000000}
  Max block execution units (maxBlockExecutionUnits) — {memory, steps} per block. Mainnet default: {62000000, 40000000000}
  Max value size (maxValueSize) — Maximum serialized value size in bytes. Mainnet default: 5000
  Collateral percentage (collateralPercentage) — Collateral % of total tx fee for Plutus txs. Mainnet default: 150
  Max collateral inputs (maxCollateralInputs) — Maximum collateral inputs per tx. Mainnet default: 3

Governance Parameters (Conway)

  DRep deposit (drepDeposit) — Deposit for DRep registration (lovelace). Mainnet default: 500000000
  Gov action deposit (govActionDeposit) — Deposit for governance action submission (lovelace). Mainnet default: 100000000000
  Gov action lifetime (govActionLifetime) — Governance action expiry (epochs). Mainnet default: 6

Voting Thresholds

Different governance action types require different voting thresholds from DReps, SPOs, and the Constitutional Committee:

  No Confidence — DRep: dvtMotionNoConfidence; SPO: pvtMotionNoConfidence; CC: Required
  Update Committee (normal) — DRep: dvtCommitteeNormal; SPO: pvtCommitteeNormal; CC: N/A
  Update Committee (no confidence) — DRep: dvtCommitteeNoConfidence; SPO: pvtCommitteeNoConfidence; CC: N/A
  New Constitution — DRep: dvtUpdateToConstitution; SPO: N/A; CC: Required
  Hard Fork Initiation — DRep: dvtHardForkInitiation; SPO: pvtHardForkInitiation; CC: Required
  Protocol Parameter Update (network) — DRep: dvtPPNetworkGroup; SPO: N/A; CC: Required
  Protocol Parameter Update (economic) — DRep: dvtPPEconomicGroup; SPO: pvtPPEconomicGroup; CC: Required
  Protocol Parameter Update (technical) — DRep: dvtPPTechnicalGroup; SPO: N/A; CC: Required
  Protocol Parameter Update (governance) — DRep: dvtPPGovGroup; SPO: N/A; CC: Required
  Treasury Withdrawal — DRep: dvtTreasuryWithdrawal; SPO: N/A; CC: Required

CBOR Field Numbers

When encoding protocol parameter updates in governance actions, each parameter maps to a CBOR field number:

  CBOR key  Parameter
         0  txFeePerByte / minFeeA
         1  txFeeFixed / minFeeB
         2  maxBlockBodySize
         3  maxTxSize
         4  maxBlockHeaderSize
         5  stakeAddressDeposit / keyDeposit
         6  stakePoolDeposit / poolDeposit
         7  poolRetireMaxEpoch / eMax
         8  stakePoolTargetNum / nOpt
        16  minPoolCost
        17  utxoCostPerByte / adaPerUtxoByte
        20  maxTxExecutionUnits
        21  maxBlockExecutionUnits
        22  maxValueSize
        23  collateralPercentage
        24  maxCollateralInputs
        30  drepDeposit
        31  govActionDeposit
        32  govActionLifetime

Cardano Mini-Protocol Reference

This document is the definitive implementation reference for every Cardano mini-protocol used in node-to-node (N2N) and node-to-client (N2C) communication. It covers the complete state machine, exact CBOR wire format, timing constraints, flow-control rules, and every protocol-error condition for each protocol. The information is derived directly from the Haskell source in the IntersectMBO/ouroboros-network repository.


Connection Model and Multiplexer

All mini-protocols share a single TCP connection per peer, multiplexed by the network-mux layer using 8-byte SDU headers:

  Bytes  Field
  -----  -----
   0-3   timestamp (u32, microseconds, used for RTT measurement)
   4-5   mini_protocol_num (u16; bit 15 = direction: 0=initiator, 1=responder)
   6-7   payload_length (u16, max 65535)

Large messages are fragmented across multiple SDUs transparently. Handshake (protocol 0) runs on the raw socket before the mux is started.
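Header parsing can be sketched as follows, assuming the layout in the Ouroboros network specification, where the direction bit is the most significant bit of the 2-byte protocol-number word:

```rust
/// Parsed fields of one 8-byte mux SDU header.
#[derive(Debug, PartialEq)]
struct SduHeader {
    timestamp_us: u32,
    protocol_num: u16,
    responder: bool,
    payload_len: u16,
}

/// Parse an 8-byte SDU header: 4-byte timestamp, 2-byte direction bit +
/// protocol number, 2-byte payload length (all big-endian).
fn parse_sdu_header(h: &[u8; 8]) -> SduHeader {
    let word = u16::from_be_bytes([h[4], h[5]]);
    SduHeader {
        timestamp_us: u32::from_be_bytes([h[0], h[1], h[2], h[3]]),
        protocol_num: word & 0x7fff, // low 15 bits
        responder: word & 0x8000 != 0, // high bit = direction
        payload_len: u16::from_be_bytes([h[6], h[7]]),
    }
}
```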

Key invariant: if any single mini-protocol thread throws an exception, the entire mux — and therefore the entire TCP connection — is torn down. Protocol errors are fatal to the connection, not just to the affected mini-protocol.

Sources:

  • ouroboros-network/network-mux/src/Network/Mux/Types.hs
  • ouroboros-network/network-mux/src/Network/Mux/Egress.hs

Shared Encoding Primitives

These types are used identically across all protocols.

Point

A Point identifies a position on the chain by slot and header hash.

; CBOR encoding (Haskell: encodePoint / decodePoint)
point = []                         ; Origin — empty definite-length list
      / [slot_no, header_hash]     ; At(slot, hash) — definite-length list of 2

slot_no     = uint     ; word64
header_hash = bstr     ; 32 bytes (Blake2b-256 of header)

Source: ouroboros-network/ouroboros-network/api/lib/Ouroboros/Network/Block.hs
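
The CDDL above is small enough to encode by hand. A sketch without any CBOR crate (function names are ours):

```rust
// Emit a CBOR unsigned integer (major type 0) in minimal-length form.
fn cbor_uint(n: u64, out: &mut Vec<u8>) {
    match n {
        0..=23 => out.push(n as u8),
        24..=0xFF => { out.push(0x18); out.push(n as u8); }
        0x100..=0xFFFF => { out.push(0x19); out.extend((n as u16).to_be_bytes()); }
        0x1_0000..=0xFFFF_FFFF => { out.push(0x1A); out.extend((n as u32).to_be_bytes()); }
        _ => { out.push(0x1B); out.extend(n.to_be_bytes()); }
    }
}

// Encode a Point: None = Origin (empty array), Some = [slot, hash].
fn encode_point(point: Option<(u64, [u8; 32])>) -> Vec<u8> {
    let mut out = Vec::new();
    match point {
        None => out.push(0x80),              // array(0): Origin
        Some((slot, hash)) => {
            out.push(0x82);                  // array(2)
            cbor_uint(slot, &mut out);       // slot_no
            out.push(0x58); out.push(32);    // bstr header, 32 bytes
            out.extend_from_slice(&hash);    // header_hash
        }
    }
    out
}
```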

Tip

A Tip is the chain tip as seen by the server. It is a (Point, BlockNo) pair.

; N2N ChainSync / N2C LocalChainSync
tip = [slot_no, header_hash, block_no]   ; At(pt, blockno)
    / [0]                                ; TipGenesis (Origin point, blockno=0)

block_no = uint   ; word64

Source: ouroboros-network/ouroboros-network/api/lib/Ouroboros/Network/Block.hs (encodeTip / decodeTip)

Byte and Time Limit Constants

These constants appear in state-machine timeout and size-limit tables throughout this document.

  Constant        Value            Source
  --------        -----            ------
  smallByteLimit  65535 bytes      Protocol/Limits.hs:smallByteLimit
  largeByteLimit  2 500 000 bytes  Protocol/Limits.hs:largeByteLimit
  shortWait       10 seconds       Protocol/Limits.hs:shortWait
  longWait        60 seconds       Protocol/Limits.hs:longWait
  waitForever     no timeout       Protocol/Limits.hs:waitForever (= Nothing)

Source: ouroboros-network/ouroboros-network/api/lib/Ouroboros/Network/Protocol/Limits.hs


N2N Mini-Protocol IDs

  Protocol       ID
  --------       --
  Handshake       0
  DeltaQ          1   (reserved, never used)
  ChainSync       2
  BlockFetch      3
  TxSubmission2   4
  KeepAlive       8
  PeerSharing    10
  Peras Cert     16   (future)
  Peras Vote     17   (future)

N2C Mini-Protocol IDs

  Protocol           ID
  --------           --
  Handshake           0
  LocalChainSync      5
  LocalTxSubmission   6
  LocalStateQuery     7
  LocalTxMonitor      9

Protocol Temperatures (N2N)

Protocol temperature determines when each N2N mini-protocol is started during the peer lifecycle (cold → warm → hot).

  Temperature  Protocols                                         Started when
  -----------  ---------                                         ------------
  Established  KeepAlive (8), PeerSharing (10)                   On cold→warm promotion
  Warm         (none currently)
  Hot          ChainSync (2), BlockFetch (3), TxSubmission2 (4)  On warm→hot promotion

Hot protocols use StartOnDemand for the responder side (they wait for the first inbound byte). Initiator sides are started eagerly by startProtocols.

Source: ouroboros-network/cardano-diffusion/lib/Cardano/Network/Diffusion/Peer/


N2N Protocol 0: Handshake

Identity

  • Protocol ID: 0 (runs on raw socket bearer before mux starts)
  • Direction: Initiator sends MsgProposeVersions, responder replies
  • Versions: V14 (Plomin HF, mandatory since 2025-01-29), V15 (SRV DNS)

State Machine

StPropose  (ClientAgency)  -- initiator has agency
    │
    │ MsgProposeVersions
    ▼
StConfirm  (ServerAgency)  -- server chooses version
    │
    ├─── MsgAcceptVersion ──→ StDone
    ├─── MsgRefuse        ──→ StDone
    └─── MsgQueryReply    ──→ StDone

  State      Agency  Meaning
  -----      ------  -------
  StPropose  Client  Initiator must send its version list
  StConfirm  Server  Server must accept, refuse, or query
  StDone     Nobody  Terminal

Terminal state: StDone. On MsgAcceptVersion the handshake has succeeded and (for N2N) the mux is started on the connection; on MsgRefuse or MsgQueryReply the connection is closed.

Wire Format

Source: ouroboros-network/ouroboros-network/framework/lib/Ouroboros/Network/Protocol/Handshake/Codec.hs and cardano-diffusion/protocols/cddl/specs/handshake-node-to-node-v14.cddl

; Every handshake message is a definite-length CBOR array.
MsgProposeVersions = [0, versionTable]
MsgAcceptVersion   = [1, versionNumber, versionData]
MsgRefuse          = [2, refuseReason]
MsgQueryReply      = [3, versionTable]

; versionTable is a CBOR definite-length MAP (not an array).
; Keys are encoded in ascending order.
versionTable = { * versionNumber => versionData }

; N2N version numbers (V14=14, V15=15, V16=16, ...)
; Note: N2N does NOT set bit-15. Only N2C uses bit-15.
versionNumber = 14 / 15 / 16

; Version data for V14/V15: 4-element array
versionData_v14 = [networkMagic, initiatorOnly, peerSharing, query]
; Version data for V16+: 5-element array (adds perasSupport)
versionData_v16 = [networkMagic, initiatorOnly, peerSharing, query, perasSupport]

networkMagic = uint .size 4   ; word32 (mainnet=764824073, preview=2, preprod=1)
initiatorOnly = bool           ; true=InitiatorOnly, false=InitiatorAndResponder
peerSharing   = 0 / 1          ; 0=Disabled, 1=Enabled
query         = bool
perasSupport  = bool

refuseReason
  = [0, [* versionNumber]]           ; VersionMismatch
  / [1, versionNumber, tstr]         ; HandshakeDecodeError
  / [2, versionNumber, tstr]         ; Refused

Version Negotiation Rules

Source: cardano-diffusion/api/lib/Cardano/Network/NodeToNode/Version.hs

  • The responder picks the highest version number that appears in both the initiator's and responder's version tables.
  • If no common version: MsgRefuse with VersionMismatch.
  • networkMagic must match exactly; mismatch → MsgRefuse with Refused.
  • initiatorOnlyDiffusionMode = min(local, remote) — more restrictive wins (i.e., InitiatorOnly if either side is).
  • peerSharing = local <> remote (Semigroup): both must be Enabled for Enabled; any Disabled results in Disabled. InitiatorOnly nodes automatically have Disabled.
  • query = local || remote (logical OR).
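
The rules above can be sketched as pure functions (the enum and function names here are illustrative, not from the Haskell source):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum PeerSharing { Disabled, Enabled }

// Highest version number present in both version tables.
fn pick_version(local: &[u16], remote: &[u16]) -> Option<u16> {
    local.iter().copied().filter(|v| remote.contains(v)).max()
}

// InitiatorOnly wins if either side is initiator-only (more restrictive).
fn combine_initiator_only(local_init_only: bool, remote_init_only: bool) -> bool {
    local_init_only || remote_init_only
}

// Both sides must enable peer sharing; InitiatorOnly forces Disabled.
fn combine_peer_sharing(local: PeerSharing, remote: PeerSharing,
                        initiator_only: bool) -> PeerSharing {
    match (local, remote) {
        (PeerSharing::Enabled, PeerSharing::Enabled) if !initiator_only => PeerSharing::Enabled,
        _ => PeerSharing::Disabled,
    }
}
```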

MsgQueryReply Semantics

When the initiator sends MsgProposeVersions with query=true, the responder must reply with MsgQueryReply (a copy of its own version table) and then close the connection. This is used by cardano-cli for version probing. The mux never starts in this case.

Timeout

Handshake SDU read/write: 10 seconds per SDU. There is no per-state timeout beyond this; the handshake exchange must complete within one SDU read cycle on each side.


N2N Protocol 2: ChainSync

Identity

  • Protocol ID: 2
  • Temperature: Hot (started on warm→hot promotion)
  • Direction: N2N ChainSync streams block headers only (not full blocks). Full blocks are fetched via BlockFetch.
  • Versions: All N2N versions (V7+)

State Machine

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Type.hs

StIdle       (ClientAgency)  -- client requests next update or intersect
    │
    ├─── MsgRequestNext      ──→ StNext(StCanAwait)
    ├─── MsgFindIntersect    ──→ StIntersect
    └─── MsgDone             ──→ StDone

StNext(StCanAwait)  (ServerAgency)  -- server can immediately reply or defer
    │
    ├─── MsgAwaitReply       ──→ StNext(StMustReply)
    ├─── MsgRollForward      ──→ StIdle
    └─── MsgRollBackward     ──→ StIdle

StNext(StMustReply)  (ServerAgency)  -- server MUST reply (already sent await)
    │
    ├─── MsgRollForward      ──→ StIdle
    └─── MsgRollBackward     ──→ StIdle

StIntersect  (ServerAgency)  -- server searching for intersection
    │
    ├─── MsgIntersectFound   ──→ StIdle
    └─── MsgIntersectNotFound ─→ StIdle

StDone (NobodyAgency)

Critical invariant: MsgAwaitReply is only valid in state StNext(StCanAwait). The server transitions to StNext(StMustReply) after sending it. Sending MsgAwaitReply when the client sent a non-blocking variant (Pipeline rather than Request) or when the server has already sent MsgAwaitReply this round is a protocol error (ProtocolErrorRequestNonBlocking). The typed-protocol framework enforces this at compile time; a Rust implementation must enforce it at runtime by tracking which sub-state of StNext is current.

Wire Format

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Codec.hs and cardano-diffusion/protocols/cddl/specs/chain-sync.cddl

MsgRequestNext        = [0]
MsgAwaitReply         = [1]
MsgRollForward        = [2, header, tip]
MsgRollBackward       = [3, point, tip]
MsgFindIntersect      = [4, points]
MsgIntersectFound     = [5, point, tip]
MsgIntersectNotFound  = [6, tip]
MsgDone               = [7]

; points is a DEFINITE-length array (not indefinite)
points = [* point]

N2N header encoding in MsgRollForward: For the CardanoBlock HFC block type, the header is wrapped as:

header = [era_index, serialised_header_bytes]

where era_index is 0=Byron, 1=Shelley, ..., 6=Conway, 7=Dijkstra (see TxSubmission2 section for full table), and serialised_header_bytes is tag(24)(bstr(cbor_encoded_header)) — CBOR-in-CBOR wrapping via wrapCBORinCBOR.

Source: ouroboros-consensus/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Serialisation.hs

Pipelining

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/PipelineDecision.hs

ChainSync uses the pipelineDecisionLowHighMark strategy with default marks lowMark=200, highMark=300 (Dugite uses configurable depth via DUGITE_PIPELINE_DEPTH, default 300).

pipelineDecisionLowHighMark :: Word16 -> Word16 -> MkPipelineDecision

Decision logic (given n outstanding requests, clientTip, serverTip):

  • n=0, clientTip == serverTip      → Request (non-pipelined, triggers await semantics)
  • n=0, clientTip < serverTip       → Pipeline
  • n>0, clientTip + n >= serverTip  → Collect (we're caught up, stop pipelining)
  • n >= highMark                    → Collect (high-water: drain before adding more)
  • n < lowMark                      → CollectOrPipeline (can collect or pipeline)
  • n >= lowMark                     → Collect (above low mark in high state)

When n=0 and clientTip == serverTip: the client sends a non-pipelined Request, the server is at its tip and sends MsgAwaitReply (valid because the client sent a blocking request). This is the "at tip" steady state.
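
A simplified sketch of this decision table as a pure function (the draining flag models the high-water state; the real MkPipelineDecision is a closure that carries this state, and all names here are ours):

```rust
#[derive(Debug, PartialEq)]
enum Decision { Request, Pipeline, CollectOrPipeline, Collect }

// n: outstanding pipelined requests; tips are block numbers.
fn decide(n: u16, draining: bool, client_tip: u64, server_tip: u64,
          low_mark: u16, high_mark: u16) -> Decision {
    let _ = low_mark.min(high_mark); // marks assumed low_mark < high_mark
    if n == 0 {
        // At tip: blocking Request (allows server MsgAwaitReply); else pipeline.
        return if client_tip >= server_tip { Decision::Request } else { Decision::Pipeline };
    }
    if client_tip + n as u64 >= server_tip {
        return Decision::Collect;            // caught up: drain outstanding
    }
    if n >= high_mark {
        return Decision::Collect;            // high-water mark reached
    }
    if draining && n >= low_mark {
        return Decision::Collect;            // keep draining until below low mark
    }
    Decision::CollectOrPipeline
}
```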

Timing

Source: ouroboros-network/cardano-diffusion/protocols/lib/Cardano/Network/Protocol/ChainSync/Codec/TimeLimits.hs

  State                Trusted peer      Untrusted peer
  -----                ------------      --------------
  StIdle               3373 s            3373 s (configurable via ChainSyncIdleTimeout)
  StNext(StCanAwait)   10 s (shortWait)  10 s
  StNext(StMustReply)  waitForever       uniform random 601–911 s
  StIntersect          10 s              10 s

The random range for untrusted StMustReply corresponds to streak-of-empty-slots probabilities between 99.9% and 99.9999% at f=0.05.

Default ChainSyncIdleTimeout = 3373 seconds. Source: cardano-diffusion/lib/Cardano/Network/Diffusion/Configuration.hs:defaultChainSyncIdleTimeout

Ingress Queue Limit

highMark × 1400 bytes × 1.1 safety factor

With highMark=300: approximately 462 000 bytes.


N2N Protocol 3: BlockFetch

Identity

  • Protocol ID: 3
  • Temperature: Hot
  • Purpose: Bulk download of full block bodies, driven by the BlockFetch decision logic after ChainSync supplies candidate chain headers.

State Machine

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Type.hs

BFIdle      (ClientAgency)  -- client decides what to fetch
    │
    ├─── MsgRequestRange  ──→ BFBusy
    └─── MsgClientDone    ──→ BFDone

BFBusy      (ServerAgency)  -- server preparing batch
    │
    ├─── MsgStartBatch    ──→ BFStreaming
    └─── MsgNoBlocks      ──→ BFIdle

BFStreaming  (ServerAgency)  -- server streaming blocks
    │
    ├─── MsgBlock         ──→ BFStreaming  (self-loop, one block per message)
    └─── MsgBatchDone     ──→ BFIdle

BFDone (NobodyAgency)
  State        Agency
  -----        ------
  BFIdle       Client
  BFBusy       Server
  BFStreaming  Server
  BFDone       Nobody

Wire Format

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Codec.hs and cardano-diffusion/protocols/cddl/specs/block-fetch.cddl

MsgRequestRange = [0, lower_point, upper_point]
MsgClientDone   = [1]
MsgStartBatch   = [2]
MsgNoBlocks     = [3]
MsgBlock        = [4, block]
MsgBatchDone    = [5]

MsgRequestRange: Both lower_point and upper_point are inclusive (the range spans from lower to upper, both included). Each point uses the standard point encoding ([] for Origin, [slot, hash] for specific).

Block encoding in MsgBlock: For CardanoBlock, the block is encoded as:

block = [era_index, tag(24)(bstr(cbor_encoded_block))]

The full block (including header and body) is CBOR-serialized, then wrapped in tag(24)(bytes(cbor_bytes)) (CBOR-in-CBOR), then placed in a 2-element array with the HFC era index.

Source: ouroboros-consensus/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Serialisation.hs

Timing

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Codec.hs:timeLimitsBlockFetch

  State        Timeout
  -----        -------
  BFIdle       waitForever
  BFBusy       60 s (longWait)
  BFStreaming  60 s (longWait)

Byte Limits

  State        Limit
  -----        -----
  BFIdle       65535 bytes (smallByteLimit)
  BFBusy       65535 bytes (smallByteLimit)
  BFStreaming  2 500 000 bytes (largeByteLimit)

BlockFetch Decision Loop

The blockFetchLogic thread runs continuously, waking every 10 ms (Praos) or 40 ms (Genesis). It reads candidate chains from ChainSync via STM, computes which block ranges need to be fetched, and issues MsgRequestRange messages.

  Parameter                         Default  Source
  ---------                         -------  ------
  maxInFlightReqsPerPeer            100      blockFetchPipeliningMax
  maxConcurrencyBulkSync            1 peer   bfcMaxConcurrencyBulkSync
  maxConcurrencyDeadline            1 peer   bfcMaxConcurrencyDeadline
  Decision loop interval (Praos)    10 ms    bfcDecisionLoopIntervalPraos
  Decision loop interval (Genesis)  40 ms    bfcDecisionLoopIntervalGenesis

Source: cardano-diffusion/lib/Cardano/Network/Diffusion/Configuration.hs:defaultBlockFetchConfiguration

Ingress Queue Limit

max(10 × 2 097 154, 100 × 90 112) × 1.1 ≈ 22 MB.


N2N Protocol 4: TxSubmission2

Identity

  • Protocol ID: 4
  • Temperature: Hot
  • Direction: Inverted agency — the server (inbound/receiver) has agency first. The server requests transactions; the client replies with them. This is the opposite of most protocols.
  • Versions: All N2N versions. V2 logic (multi-peer decision loop) is enabled server-side when TxSubmissionLogicV2 is configured; V1 is the current default in cardano-node.

State Machine

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Type.hs

StInit  (ClientAgency)   -- client must send MsgInit before anything else
    │
    │ MsgInit
    ▼
StIdle  (ServerAgency)   -- server has agency; requests txids or terminates
    │
    ├─── MsgRequestTxIds(blocking=true)    ──→ StTxIds(StBlocking)
    ├─── MsgRequestTxIds(blocking=false)   ──→ StTxIds(StNonBlocking)
    ├─── MsgRequestTxs                     ──→ StTxs
    └─── MsgDone                           ──→ StDone

StTxIds(StBlocking)   (ClientAgency)   -- client MUST reply, no timeout
    │
    └─── MsgReplyTxIds(NonEmpty list)  ──→ StIdle
         (BlockingReply: list must be non-empty)

StTxIds(StNonBlocking)  (ClientAgency)  -- client must reply within shortWait
    │
    └─── MsgReplyTxIds(possibly empty) ──→ StIdle

StTxs  (ClientAgency)  -- client must reply with requested tx bodies
    │
    └─── MsgReplyTxs(tx list)  ──→ StIdle

StDone (NobodyAgency)

MsgDone constraint: MsgDone can only be sent from StIdle (server side). It is the server's prerogative to terminate, not the client's.

Wire Format

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Codec.hs:encodeTxSubmission2 and cardano-diffusion/protocols/cddl/specs/tx-submission2.cddl

MsgInit           = [6]

MsgRequestTxIds   = [0, blocking:bool, ack:word16, req:word16]
                  ; blocking=true  → StTxIds(StBlocking)
                  ; blocking=false → StTxIds(StNonBlocking)

MsgReplyTxIds     = [1, [_ *[txid, size:word32] ]]
                  ; INDEFINITE-length outer list (encodeListLenIndef)
                  ; Each inner entry is a DEFINITE-length array(2)

MsgRequestTxs     = [2, [_ *txid ]]
                  ; INDEFINITE-length list

MsgReplyTxs       = [3, [_ *tx ]]
                  ; INDEFINITE-length list

MsgDone           = [4]

IMPORTANT: MsgReplyTxIds, MsgRequestTxs, and MsgReplyTxs all use indefinite-length CBOR arrays (encoded with encodeListLenIndef and terminated with encodeBreak). The codec explicitly requires this. Using definite-length arrays is a decoding error.

HFC era-tag wrapping for txids and txs:

For the Cardano HFC instantiation, each txid and each tx is wrapped with the era index before being placed into the list. The wrapping is done by encodeNS in ouroboros-consensus:

; txid (GenTxId) encoding
txid = [era_index:uint8, bstr(32)]
     ; era_index: 0=Byron, 1=Shelley, 2=Allegra, 3=Mary, 4=Alonzo,
     ;            5=Babbage, 6=Conway, 7=Dijkstra
     ; payload:   32 raw bytes = Blake2b-256 hash of tx body (no CBOR tag)

; tx (GenTx) encoding
tx = [era_index:uint8, tag(24)(bstr(cbor_of_tx))]
   ; The transaction CBOR bytes are wrapped in CBOR tag 24 (embedded CBOR)

Example for Conway (era_index=6):

txid = [6, bstr(32_bytes_of_txhash)]
tx   = [6, #6.24(bstr(cbor_bytes_of_transaction))]

Source: ouroboros-consensus/ouroboros-consensus-diffusion/src/.../Consensus/Network/NodeToNode.hs and ouroboros-consensus/src/.../HardFork/Combinator/Serialisation/Common.hs:encodeNS
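
A hand-rolled sketch (no CBOR crate) of the Conway envelopes shown above; the function names are ours:

```rust
// Emit a CBOR byte-string header (major type 2), minimal-length form
// up to u16 lengths, which covers any transaction under maxTxSize.
fn cbor_bstr_header(len: usize, out: &mut Vec<u8>) {
    match len {
        0..=23 => out.push(0x40 | len as u8),
        24..=0xFF => { out.push(0x58); out.push(len as u8); }
        _ => { out.push(0x59); out.extend((len as u16).to_be_bytes()); }
    }
}

// txid = [6, bstr(32)]: array(2), era index 6 (Conway), raw 32-byte hash.
fn encode_txid_conway(hash: &[u8; 32]) -> Vec<u8> {
    let mut out = vec![0x82, 0x06];
    cbor_bstr_header(32, &mut out);
    out.extend_from_slice(hash);
    out
}

// tx = [6, #6.24(bstr(tx_cbor))]: the tx bytes wrapped in CBOR tag 24.
fn encode_tx_conway(tx_cbor: &[u8]) -> Vec<u8> {
    let mut out = vec![0x82, 0x06, 0xD8, 0x18]; // array(2), 6, tag(24)
    cbor_bstr_header(tx_cbor.len(), &mut out);
    out.extend_from_slice(tx_cbor);
    out
}
```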

MsgReplyTxIds — Size Reporting

Each entry in MsgReplyTxIds carries a SizeInBytes (word32) alongside the txid. This size must include the full HFC envelope overhead that the tx will have in MsgReplyTxs. For Conway: 3 bytes overhead (1 byte array-of-2 header, 1 byte era_index word8, CBOR tag 24 header). Mismatches beyond the tolerance threshold (const_MAX_TX_SIZE_DISCREPANCY = 10 bytes in V2 inbound) terminate the connection.

Blocking vs Non-Blocking Rules

In blocking mode (MsgRequestTxIds(blocking=true)):

  • req_count must be >= 1
  • MsgReplyTxIds reply must contain a non-empty list (BlockingReply)
  • No timeout: the client MAY block indefinitely in STM waiting for new mempool entries

In non-blocking mode (MsgRequestTxIds(blocking=false)):

  • At least one of ack_count or req_count must be non-zero
  • MsgReplyTxIds reply may be empty (NonBlockingReply [])
  • Timeout: shortWait (10 seconds)

Acknowledgment semantics: ack_count tells the client how many previously announced txids can now be removed from the outbound window. The client maintains a FIFO of unacknowledgedTxIds. When the server sends MsgRequestTxIds(ack=N, req=M), the client drops the first N entries from the FIFO and adds up to M new txids from the mempool.
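
The window update above can be sketched with a simple FIFO (the type and method names are illustrative):

```rust
use std::collections::VecDeque;

// Client-side outbound window of announced-but-unacknowledged txids.
struct OutboundWindow {
    unacked: VecDeque<[u8; 32]>,
}

impl OutboundWindow {
    // Apply MsgRequestTxIds(ack, req): drop `ack` acknowledged ids from
    // the front of the FIFO, then announce up to `req` fresh ids from
    // the mempool; returns the ids to put in MsgReplyTxIds.
    fn on_request_txids(&mut self, ack: u16, req: u16,
                        mempool: &mut impl Iterator<Item = [u8; 32]>) -> Vec<[u8; 32]> {
        for _ in 0..ack {
            // A real implementation treats over-acknowledgment as a
            // protocol error rather than panicking.
            self.unacked.pop_front().expect("peer acked more than announced");
        }
        let fresh: Vec<[u8; 32]> = mempool.take(req as usize).collect();
        self.unacked.extend(fresh.iter().copied());
        fresh
    }
}
```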

Timing

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Codec.hs:timeLimitsTxSubmission2

  State                   Timeout
  -----                   -------
  StInit                  waitForever
  StIdle                  waitForever
  StTxIds(StBlocking)     waitForever
  StTxIds(StNonBlocking)  10 s (shortWait)
  StTxs                   10 s (shortWait)

V1 Server Constants (current default)

  Parameter               Value
  ---------               -----
  maxTxIdsToRequest       3
  maxTxToRequest          2
  maxUnacknowledgedTxIds  100
  txSubmissionInitDelay   60 s

The 60-second init delay is applied via threadDelay before the V1 server makes its first MsgRequestTxIds. This intentionally avoids requesting transactions during initial chain sync.

V2 Server Constants (experimental)

  Parameter               Value
  ---------               -----
  maxNumTxIdsToRequest    12
  maxUnacknowledgedTxIds  100
  txsSizeInflightPerPeer  6 × 65540 bytes
  txInflightMultiplicity  2
  Decision loop delay     5 ms

Source: ouroboros-network/ouroboros-network/lib/Ouroboros/Network/TxSubmission/Inbound/V2/

MsgInit Requirement

MsgInit (tag=6, one-element array [6]) must be the very first message sent by the client (outbound side) after the mux connection is established for the TxSubmission2 protocol. The server waits for MsgInit in StInit before transitioning to StIdle. Sending any other message first is a protocol error.

Ingress Queue Limit

maxUnacknowledgedTxIds × (44 + 65536) × 1.1

With maxUnacknowledgedTxIds=100: 100 × 65580 × 1.1 ≈ 7 213 800 bytes.


N2N Protocol 8: KeepAlive

Identity

  • Protocol ID: 8
  • Temperature: Established (started on cold→warm, runs for entire connection lifetime)
  • Purpose: Detects connection failure and measures round-trip time for GSV (Good-Spread-Variable) calculations used in BlockFetch prioritization.

State Machine

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Type.hs

StClient  (ClientAgency)  -- client sends keep-alive request
    │
    ├─── MsgKeepAlive(cookie)  ──→ StServer
    └─── MsgDone               ──→ StDone

StServer  (ServerAgency)  -- server must respond with same cookie
    │
    └─── MsgKeepAliveResponse(cookie)  ──→ StClient

StDone (NobodyAgency)

Wire Format

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Codec.hs:codecKeepAlive_v2

MsgKeepAlive         = [0, cookie:word16]
MsgKeepAliveResponse = [1, cookie:word16]
MsgDone              = [2]

Cookie matching: The server must echo back the exact cookie value sent by the client. A mismatch raises KeepAliveCookieMissmatch (note: the Haskell source has the typo "Missmatch" with double-s), which terminates the connection.

Timing

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Codec.hs:timeLimitsKeepAlive

  State     Timeout
  -----     -------
  StClient  97 seconds
  StServer  60 seconds

The asymmetry is intentional: the client side (97 s) is how long the client waits before sending the next keep-alive; the server side (60 s) is how long the server has to respond. The comment in source notes that StServer timeout "should be 10s" (issue #2505) but is currently 60 s.

Byte Limits

Both states: smallByteLimit (65535 bytes).

Protocol Error Condition

KeepAliveCookieMissmatch oldCookie receivedCookie — thrown when MsgKeepAliveResponse cookie does not match the outstanding request cookie. This terminates the connection.


N2N Protocol 10: PeerSharing

Identity

  • Protocol ID: 10
  • Temperature: Established (started on cold→warm)
  • Purpose: Exchange of peer addresses to assist in peer discovery. Only active when both sides negotiated peerSharing=1 in Handshake.

State Machine

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/PeerSharing/Type.hs

StIdle  (ClientAgency)  -- client requests peer addresses or terminates
    │
    ├─── MsgShareRequest(amount)  ──→ StBusy
    └─── MsgDone                  ──→ StDone

StBusy  (ServerAgency)  -- server must reply with peer list
    │
    └─── MsgSharePeers(addrs)  ──→ StIdle

StDone (NobodyAgency)

Wire Format

Source: ouroboros-network/cardano-diffusion/protocols/lib/Cardano/Network/Protocol/PeerSharing/Codec.hs and cardano-diffusion/protocols/cddl/specs/peer-sharing-v14.cddl

MsgShareRequest = [0, amount:word8]
MsgSharePeers   = [1, [* peerAddress]]
MsgDone         = [2]

; Peer address encoding (SockAddr)
peerAddress = [0, ipv4:word32, port:word16]
            ; IPv4: single u32 in network byte order, then port as word16
            / [1, word32, word32, word32, word32, port:word16]
            ; IPv6: four u32s (network byte order), then port as word16
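
A sketch of the IPv4 variant encoder (no CBOR crate; names are ours):

```rust
use std::net::Ipv4Addr;

// peerAddress IPv4 variant: [0, ipv4:word32, port:word16].
fn encode_peer_addr_v4(ip: Ipv4Addr, port: u16) -> Vec<u8> {
    let mut out = vec![0x83, 0x00];               // array(3), tag 0 = IPv4
    let ip_u32 = u32::from_be_bytes(ip.octets()); // network byte order
    // CBOR uint; for brevity we always use the 4-byte form for the address
    out.push(0x1A); out.extend(ip_u32.to_be_bytes());
    // minimal-length CBOR uint for the port
    if port <= 23 { out.push(port as u8) }
    else if port <= 0xFF { out.push(0x18); out.push(port as u8) }
    else { out.push(0x19); out.extend(port.to_be_bytes()) }
    out
}
```

(A strictly minimal encoder would also shorten the address uint for values below 2^16; the fixed 4-byte form is used here only to keep the sketch short.)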

Protocol error condition: If the server replies with more addresses than amount requested, it is a protocol error. The client must request no more than 255 peers (word8 max).

Timing

  State   Timeout
  -----   -------
  StIdle  waitForever
  StBusy  60 s (longWait)

Server Address Selection Policy

The server only shares addresses for peers that satisfy all of:

  • knownPeerAdvertise = DoAdvertisePeer
  • knownSuccessfulConnection = True
  • knownPeerFailCount = 0

Addresses are randomized using a hash with a salt that rotates every 823 seconds to prevent fingerprinting.

Source: ouroboros-network/ouroboros-network/api/lib/Ouroboros/Network/PeerSelection/PeerSharing/Codec.hs and ouroboros-network/ouroboros-network/lib/Ouroboros/Network/PeerSharing.hs

Key Policy Constants

  Constant                          Value
  --------                          -----
  policyMaxInProgressPeerShareReqs  2
  policyPeerShareRetryTime          900 s
  policyPeerShareBatchWaitTime      3 s
  policyPeerShareOverallTimeout     10 s
  policyPeerShareActivationDelay    300 s
  ps_POLICY_PEER_SHARE_STICKY_TIME  823 s (salt rotation)
  ps_POLICY_PEER_SHARE_MAX_PEERS    10

Source: ouroboros-network/ouroboros-network/lib/Ouroboros/Network/Diffusion/Policies.hs


N2C Protocol 0: Handshake (Node-to-Client)

Identity

  • Protocol ID: 0 (same as N2N, runs on raw socket before mux)
  • Direction: Same as N2N: client proposes, server accepts or refuses
  • Versions: V16 (=32784) through V23 (=32791)

Wire Format

Source: cardano-diffusion/protocols/cddl/specs/handshake-node-to-client.cddl. The codec is the same codecHandshake function as N2N, parameterized on the version-number type.

; Messages are identical in structure to N2N handshake
MsgProposeVersions = [0, versionTable]
MsgAcceptVersion   = [1, versionNumber, nodeToClientVersionData]
MsgRefuse          = [2, refuseReason]
MsgQueryReply      = [3, versionTable]

; N2C version numbers have bit 15 set to distinguish from N2N
; V16=32784, V17=32785, V18=32786, V19=32787,
; V20=32788, V21=32789, V22=32790, V23=32791
versionNumber = 32784 / 32785 / 32786 / 32787 / 32788 / 32789 / 32790 / 32791

; Encoding: versionNumber_wire = logical_version | 0x8000
; Decoding: logical_version = wire_value & 0x7FFF (after verifying bit 15 is set)

; Version data (V16+): 2-element array
nodeToClientVersionData = [networkMagic:uint, query:bool]

The versionTable in MsgProposeVersions is a definite-length CBOR map with entries sorted in ascending key order.
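
The bit-15 mapping above as two small helpers (names are ours):

```rust
// Logical N2C version to wire value: set bit 15.
fn n2c_wire(logical_version: u16) -> u16 {
    logical_version | 0x8000
}

// Wire value to logical N2C version; values without bit 15 are
// N2N version numbers, not valid here.
fn n2c_logical(wire_value: u16) -> Option<u16> {
    if wire_value & 0x8000 != 0 { Some(wire_value & 0x7FFF) } else { None }
}
```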

Version Features

  N2C Version  Wire Value  What Changed
  -----------  ----------  ------------
  V16          32784       Conway era; ImmutableTip acquire; GetStakeDelegDeposits
  V17          32785       GetProposals, GetRatifyState
  V18          32786       GetFuturePParams
  V19          32787       GetBigLedgerPeerSnapshot
  V20          32788       QueryStakePoolDefaultVote; MsgGetMeasures in LocalTxMonitor
  V21          32789       New ProtVer codec for Shelley-Babbage; GetPoolDistr2, GetStakeDistribution2, GetMaxMajorProtVersion
  V22          32790       SRV records in GetBigLedgerPeerSnapshot
  V23          32791       GetDRepDelegations; LedgerPeerSnapshot includes block hash + NetworkMagic

Source: cardano-diffusion/api/lib/Cardano/Network/NodeToClient/Version.hs

Version Negotiation

Same rules as N2N:

  • Highest common version wins.
  • networkMagic must match.
  • query = local || remote (logical OR).
  • No initiatorOnlyDiffusionMode or peerSharing fields in N2C version data.

N2C Protocol 5: LocalChainSync

Identity

  • Protocol ID: 5
  • Direction: N2C clients receive full serialized blocks (not just headers). This is the key difference from N2N ChainSync.
  • Versions: All N2C versions

State Machine

Identical state machine to N2N ChainSync (same Type.hs). See that section for the complete state machine diagram.

Wire Format

Message tags are identical to N2N ChainSync (0–7). The key difference is the content of MsgRollForward.

N2C MsgRollForward block encoding:

; N2C LocalChainSync block payload in MsgRollForward
block = [era_id:uint, tag(24)(bstr(cbor_of_full_block))]

The entire block (header + body) is CBOR-encoded, wrapped in CBOR tag(24) (embedded CBOR), and then paired with the era index in a 2-element array.

Era indices: same as TxSubmission2 (0=Byron through 7=Dijkstra).

This matches the same HFC wrapping used by BlockFetch MsgBlock in N2N.

Differences from N2N ChainSync

  Aspect            N2N ChainSync                      N2C LocalChainSync
  ------            -------------                      ------------------
  Payload type      Block headers only                 Full blocks
  Purpose           Chain selection                    Wallet / tool consumption
  Pipelining        Yes (pipelineDecisionLowHighMark)  Typically none
  Source of blocks  Server → client                    Server → client

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Codec.hs (same codec)


N2C Protocol 6: LocalTxSubmission

Identity

  • Protocol ID: 6
  • Direction: Client submits a single transaction; server accepts or rejects.
  • No HFC era-tag wrapping: Unlike N2N TxSubmission2, N2C LocalTxSubmission sends raw transaction CBOR without any HFC era-index prefix.
  • Versions: All N2C versions

State Machine

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxSubmission/Type.hs

StIdle  (ClientAgency)  -- client submits a transaction or terminates
    │
    ├─── MsgSubmitTx(tx)  ──→ StBusy
    └─── MsgDone          ──→ StDone

StBusy  (ServerAgency)  -- server validates and responds
    │
    ├─── MsgAcceptTx   ──→ StIdle
    └─── MsgRejectTx   ──→ StIdle

StDone (NobodyAgency)

Blocking semantics: After sending MsgSubmitTx, the client must wait for MsgAcceptTx or MsgRejectTx before sending another transaction. This protocol processes one transaction at a time. This is intentional: N2C is only used by local trusted clients (wallets, CLI), so throughput is not a concern.

Wire Format

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxSubmission/Codec.hs:encodeLocalTxSubmission and cardano-diffusion/protocols/cddl/specs/local-tx-submission.cddl

MsgSubmitTx = [0, tx]
MsgAcceptTx = [1]
MsgRejectTx = [2, rejectReason]
MsgDone     = [3]

Transaction encoding (tx): Raw transaction CBOR, exactly as produced by toCBOR on the ledger's Tx type. No HFC wrapper, no era tag, no tag(24). The server determines the era from the ledger state.

Rejection reason (rejectReason): The full ApplyTxError encoded via the ledger's EncCBOR instance. For Conway, this is a nested structure of ConwayLedgerPredFailure variants. The exact encoding is era-specific and defined in cardano-ledger.

Source: cardano-ledger/eras/conway/impl/src/Cardano/Ledger/Conway/Rules/


N2C Protocol 7: LocalStateQuery

Identity

  • Protocol ID: 7
  • Direction: Client acquires a ledger state snapshot and submits queries; server responds with query results.
  • Versions: All N2C versions. Some queries require specific minimum versions (see Shelley query tag table).

State Machine

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalStateQuery/Type.hs

StIdle      (ClientAgency)   -- client acquires a state or terminates
    │
    ├─── MsgAcquire(target)  ──→ StAcquiring
    └─── MsgDone             ──→ StDone

StAcquiring  (ServerAgency)  -- server acquiring the requested state
    │
    ├─── MsgAcquired         ──→ StAcquired
    └─── MsgFailure(reason)  ──→ StIdle

StAcquired   (ClientAgency)  -- client can query or release
    │
    ├─── MsgQuery(query)     ──→ StQuerying
    ├─── MsgRelease          ──→ StIdle
    └─── MsgReAcquire(target)──→ StAcquiring

StQuerying   (ServerAgency)  -- server computing query result
    │
    └─── MsgResult(result)   ──→ StAcquired

StDone (NobodyAgency)

Re-acquire: MsgReAcquire transitions from StAcquired directly back to StAcquiring, allowing the client to acquire a new state without going through StIdle. This avoids a round trip.

Acquire Targets

Three targets exist for MsgAcquire and MsgReAcquire:

  Target         CBOR        Semantics                                        Min Version
  ------         ----        ---------                                        -----------
  SpecificPoint  [0, point]  Acquire the state at a specific slot/hash point  V8+ (any)
  VolatileTip    [8]         Acquire the current tip of the volatile chain    V8+
  ImmutableTip   [10]        Acquire the tip of the immutable chain           N2C V16+

For MsgReAcquire the corresponding tags are SpecificPoint=[6, point], VolatileTip=[9], and ImmutableTip=[11] (V16+).

VolatileTip and ImmutableTip cannot fail (they always succeed with MsgAcquired). SpecificPoint can fail if the point is not in the volatile chain window (yields MsgFailure).

Acquire Failure Codes

AcquireFailurePointTooOld     = 0
AcquireFailurePointNotOnChain = 1

Wire Format

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalStateQuery/Codec.hs:codecLocalStateQuery and cardano-diffusion/protocols/cddl/specs/local-state-query.cddl

; Acquire / Re-acquire
MsgAcquire(SpecificPoint pt)  = [0, point]
MsgAcquire(VolatileTip)       = [8]
MsgAcquire(ImmutableTip)      = [10]    ; V16+ only

MsgAcquired                   = [1]

MsgFailure(reason)            = [2, failure_code:uint]
                              ; 0=PointTooOld, 1=PointNotOnChain

MsgQuery(query)               = [3, query_encoding]
MsgResult(result)             = [4, result_encoding]
MsgRelease                    = [5]

MsgReAcquire(SpecificPoint pt)= [6, point]
MsgReAcquire(VolatileTip)     = [9]
MsgReAcquire(ImmutableTip)    = [11]    ; V16+ only

MsgDone                       = [7]
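
An illustrative encoder for the acquire/re-acquire tags above; the point argument is pre-encoded CBOR (see the Point section), and all names are ours:

```rust
enum Target {
    SpecificPoint(Vec<u8>), // pre-encoded CBOR point
    VolatileTip,
    ImmutableTip,
}

fn acquire_msg(target: &Target, reacquire: bool) -> Vec<u8> {
    match (target, reacquire) {
        // array(2) with tag 0 (acquire) or 6 (re-acquire), then the point
        (Target::SpecificPoint(pt), false) => { let mut m = vec![0x82, 0x00]; m.extend_from_slice(pt); m }
        (Target::SpecificPoint(pt), true)  => { let mut m = vec![0x82, 0x06]; m.extend_from_slice(pt); m }
        // array(1) with the bare tag
        (Target::VolatileTip,  false) => vec![0x81, 0x08],
        (Target::VolatileTip,  true)  => vec![0x81, 0x09],
        (Target::ImmutableTip, false) => vec![0x81, 0x0A], // V16+ only
        (Target::ImmutableTip, true)  => vec![0x81, 0x0B], // V16+ only
    }
}
```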

Query Encoding (Three-Level HFC Wrapping)

Queries are wrapped in three layers. The outermost layer is the consensus-level Query type (in Ouroboros.Consensus.Ledger.Query):

; Outermost consensus layer
query = [2, tag=0, wrapped_block_query]   ; BlockQuery — delegates to HFC
      / [1, tag=1]                         ; GetSystemStart
      / [1, tag=2]                         ; GetChainBlockNo (V16+ / QueryVersion2)
      / [1, tag=3]                         ; GetChainPoint  (V16+ / QueryVersion2)
      / [1, tag=4]                         ; DebugLedgerConfig (V20+ / QueryVersion3)

For BlockQuery (tag=0), the next layer is the HFC query:

; HFC (Hard Fork Combinator) layer
hfc_query = [2, tag=0, era_query]   ; QueryIfCurrent — query current era
          / [3, tag=1, era_query, era_index]   ; QueryAnytime
          / [2, tag=2, hf_specific]            ; QueryHardFork

For QueryIfCurrent, the era index is determined by dispatch; there is no explicit era tag in the message. The era_query is the era-level query:

; Era-level query (Shelley BlockQuery tags)
; These are 1-element or 2-element arrays with a numeric tag
era_query = [1, tag=0]    ; GetLedgerTip
          / [1, tag=1]    ; GetEpochNo
          / [2, tag=2, ..] ; GetNonMyopicMemberRewards
          / [1, tag=3]    ; GetCurrentPParams
          ; ... (see full table below)
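Putting the three layers together: the wire bytes for MsgQuery(GetEpochNo) in the current era can be written out by hand. The sketch below (illustrative, not Dugite's codec) assumes the query layers nest as plain CBOR items inside MsgQuery's [3, query_encoding] frame:

```rust
/// Hand-rolled bytes for MsgQuery(GetEpochNo) under the three-level wrapping.
fn msg_query_get_epoch_no() -> Vec<u8> {
    vec![
        0x82, 0x03, // MsgQuery:  array(2), tag 3
        0x82, 0x00, // consensus: array(2), tag 0 (BlockQuery)
        0x82, 0x00, // HFC:       array(2), tag 0 (QueryIfCurrent)
        0x81, 0x01, // era:       array(1), tag 1 (GetEpochNo)
    ]
}

fn main() {
    assert_eq!(
        msg_query_get_epoch_no(),
        [0x82, 0x03, 0x82, 0x00, 0x82, 0x00, 0x81, 0x01]
    );
}
```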

Shelley BlockQuery Tag Table

Tag  Query Name                                 Min N2C Version
0    GetLedgerTip                               V8
1    GetEpochNo                                 V8
2    GetNonMyopicMemberRewards                  V8
3    GetCurrentPParams                          V8
4    GetProposedPParamsUpdates                  V8
5    GetStakeDistribution                       V8 (removed in V21)
6    GetUTxOByAddress                           V8
7    GetUTxOWhole                               V8
8    DebugEpochState                            V8
9    GetCBOR (wraps inner query in tag(24))     V8
10   GetFilteredDelegationsAndRewardAccounts    V8
11   GetGenesisConfig                           V8
12   DebugNewEpochState                         V8
13   DebugChainDepState                         V8
14   GetRewardProvenance                        V9
15   GetUTxOByTxIn                              V10
16   GetStakePools                              V11
17   GetStakePoolParams                         V11
18   GetRewardInfoPools                         V11
19   GetPoolState                               V11
20   GetStakeSnapshots                          V11
21   GetPoolDistr                               V11 (removed in V21)
22   GetStakeDelegDeposits                      V16
23   GetConstitution                            V16
24   GetGovState                                V16
25   GetDRepState                               V16
26   GetDRepStakeDistr                          V16
27   GetCommitteeMembersState                   V16
28   GetFilteredVoteDelegatees                  V16
29   GetAccountState                            V16
30   GetSPOStakeDistr                           V16
31   GetProposals                               V17
32   GetRatifyState                             V17
33   GetFuturePParams                           V18
34   GetLedgerPeerSnapshot                      V19
35   QueryStakePoolDefaultVote                  V20
36   GetPoolDistr2                              V21
37   GetStakeDistribution2                      V21
38   GetMaxMajorProtVersion                     V21
39   GetDRepDelegations                         V23

Source: cardano-diffusion/api/lib/Cardano/Network/NodeToClient/Version.hs and ouroboros-consensus/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Cardano.hs

MsgResult Wrapping

For QueryIfCurrent queries, the result is wrapped in an EitherMismatch type to indicate whether the query was applied to the correct era:

; QueryIfCurrent result encoding
result = [result_value]          ; Success: definite-length array(1) wrapping the value
       / [era_mismatch_info]     ; Era mismatch: see EraEraMismatch encoding

A successful QueryIfCurrent result is wrapped in a 1-element definite-length array. This is easy to miss and causes decoding failures if omitted.

QueryAnytime and QueryHardFork results are not wrapped in this extra array.
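Peeling off that 1-element wrapper can be sketched as follows (a simplified illustration, not Dugite's decoder; it only handles a well-formed definite-length success reply and treats anything else as "inspect further"):

```rust
/// Strip the outer 1-element definite-length array (0x81) from a
/// successful QueryIfCurrent reply, returning the inner result bytes.
/// Era-mismatch replies and indefinite-length arrays are out of scope
/// for this sketch.
fn unwrap_if_current(reply: &[u8]) -> Option<&[u8]> {
    match reply.split_first() {
        Some((&0x81, inner)) => Some(inner),
        _ => None,
    }
}

fn main() {
    // [uint 500] -> inner bytes are the CBOR uint 500 (0x19 0x01 0xF4)
    assert_eq!(
        unwrap_if_current(&[0x81, 0x19, 0x01, 0xF4]),
        Some(&[0x19, 0x01, 0xF4][..])
    );
    assert_eq!(unwrap_if_current(&[0x19, 0x01, 0xF4]), None);
}
```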


N2C Protocol 9: LocalTxMonitor

Identity

  • Protocol ID: 9
  • Direction: Client monitors the node's mempool contents.
  • Versions: All N2C versions. MsgGetMeasures/MsgReplyGetMeasures require N2C V20+.

State Machine

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxMonitor/Type.hs

StIdle      (ClientAgency)   -- client can acquire a snapshot or terminate
    │
    ├─── MsgAcquire  ──→ StAcquiring
    └─── MsgDone     ──→ StDone

StAcquiring  (ServerAgency)  -- server captures mempool snapshot
    │
    └─── MsgAcquired(slotNo)  ──→ StAcquired

StAcquired   (ClientAgency)  -- client queries snapshot or releases
    │
    ├─── MsgNextTx          ──→ StBusy(NextTx)
    ├─── MsgHasTx(txid)     ──→ StBusy(HasTx)
    ├─── MsgGetSizes        ──→ StBusy(GetSizes)
    ├─── MsgGetMeasures     ──→ StBusy(GetMeasures)   ; V20+ only
    ├─── MsgAwaitAcquire    ──→ StAcquiring           ; refresh snapshot
    └─── MsgRelease         ──→ StIdle

StBusy(NextTx)      (ServerAgency)
    └─── MsgReplyNextTx(maybe tx)  ──→ StAcquired

StBusy(HasTx)       (ServerAgency)
    └─── MsgReplyHasTx(bool)       ──→ StAcquired

StBusy(GetSizes)    (ServerAgency)
    └─── MsgReplyGetSizes(sizes)   ──→ StAcquired

StBusy(GetMeasures) (ServerAgency)   ; V20+
    └─── MsgReplyGetMeasures(m)    ──→ StAcquired

StDone (NobodyAgency)

Snapshot semantics: After MsgAcquired, the client holds a fixed snapshot of the mempool as of the slotNo returned. The snapshot does not change even if new transactions arrive or are removed. MsgAwaitAcquire refreshes the snapshot without going through StIdle.

Wire Format

Source: ouroboros-network/ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxMonitor/Codec.hs and cardano-diffusion/protocols/cddl/specs/local-tx-monitor.cddl

MsgDone             = [0]

MsgAcquire          = [1]          ; same tag for initial acquire from StIdle
MsgAwaitAcquire     = [1]          ; same tag for re-acquire from StAcquired

MsgAcquired         = [2, slotNo:word64]

MsgRelease          = [3]

MsgNextTx           = [5]          ; note: tag 4 is unused
MsgReplyNextTx      = [6]          ; no more txs in the snapshot
                    / [6, tx]      ; next transaction in snapshot

MsgHasTx            = [7, txId]
MsgReplyHasTx       = [8, bool]

MsgGetSizes         = [9]
MsgReplyGetSizes    = [10, [capacityInBytes:word32,
                            sizeInBytes:word32,
                            numberOfTxs:word32]]

MsgGetMeasures      = [11]         ; V20+ only
MsgReplyGetMeasures = [12, txCount:word32, {* tstr => [integer, integer]}]
                    ; V20+ only

Tag 4 is intentionally unused. Tags jump from 3 (MsgRelease) to 5 (MsgNextTx).

MsgReplyNextTx: Uses the same tag (6) for both the no-tx and has-tx cases, distinguished by array length: [6] (len=1) means no more txs; [6, tx] (len=2) means a tx follows.

MsgAcquire and MsgAwaitAcquire use the same wire tag [1]. The protocol state (StIdle vs StAcquired) determines which message is being decoded. This is handled by the state token in the codec.

Transaction encoding: Same as LocalTxSubmission — raw CBOR with no HFC wrapping.

txId encoding: Raw 32-byte Blake2b-256 hash as CBOR bytes primitive.
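The length-based discrimination for MsgReplyNextTx can be sketched directly on the frame bytes (an illustrative helper, not Dugite's decoder; it assumes a definite-length outer array and small tags):

```rust
/// Classify a MsgReplyNextTx frame by its CBOR array length:
/// [6]      -> 0x81 0x06           : snapshot exhausted
/// [6, tx]  -> 0x82 0x06 <tx CBOR> : a transaction follows
/// Returns None if the frame is not a MsgReplyNextTx at all.
fn reply_next_tx(frame: &[u8]) -> Option<Option<&[u8]>> {
    match frame {
        [0x81, 0x06] => Some(None),              // len=1: no more txs
        [0x82, 0x06, tx @ ..] => Some(Some(tx)), // len=2: tx bytes follow
        _ => None,
    }
}

fn main() {
    assert_eq!(reply_next_tx(&[0x81, 0x06]), Some(None));
    assert_eq!(reply_next_tx(&[0x82, 0x06, 0xA0]), Some(Some(&[0xA0][..])));
    assert_eq!(reply_next_tx(&[0x81, 0x05]), None);
}
```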


Initialization Sequence

N2N Connection Startup

After the TCP connection is established:

  1. Handshake (protocol 0): Both sides send MsgProposeVersions simultaneously (simultaneous open). The side with the lower socket address keeps the outbound role; the other keeps the inbound role. Each side processes the other's proposal, and the higher-address side sends MsgAcceptVersion or MsgRefuse. The connection proceeds only if both sides determine the same version.

  2. Mux starts: After successful handshake, the mux multiplexer and demultiplexer threads are started. Protocol threads are started based on peer temperature.

  3. Cold→Warm: KeepAlive (8) and PeerSharing (10) initiator threads start eagerly.

  4. Warm→Hot: ChainSync (2), BlockFetch (3), TxSubmission2 (4) initiator threads start eagerly. Responder threads start on-demand (when first inbound bytes arrive).

  5. TxSubmission2 MsgInit: The TxSubmission2 client (outbound side) must send MsgInit ([6]) as its very first message. Without this, the server stays in StInit indefinitely (waitForever timeout).

N2C Connection Startup

  1. Handshake (protocol 0): Same mechanism, but using N2C version numbers (with bit 15 set). The local client proposes; the node accepts.

  2. Mux starts: All N2C mini-protocols start eagerly on both sides.

  3. No mandatory initial messages: Unlike N2N TxSubmission2, no N2C protocol requires a mandatory initial message before the first client request. The client may begin with MsgAcquire (LocalStateQuery), MsgSubmitTx (LocalTxSubmission), or MsgAcquire (LocalTxMonitor) immediately.


HFC Era Index Table

This table applies to all N2N protocols (ChainSync headers, BlockFetch blocks, TxSubmission2 txids/txs) and N2C LocalChainSync blocks.

Era Index   Era
0           Byron
1           Shelley (TPraos)
2           Allegra (TPraos)
3           Mary (TPraos)
4           Alonzo (TPraos)
5           Babbage (Praos)
6           Conway (Praos)
7           Dijkstra (Praos, future)

Source: ouroboros-consensus/ouroboros-consensus-cardano/src/unstable-cardano-consensus/Ouroboros/Consensus/Cardano/Block.hs
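The table above maps onto a simple enum. A minimal sketch (illustrative names, not Dugite's actual types):

```rust
/// Eras in HFC wire-index order (see table above).
#[derive(Debug, PartialEq)]
enum Era {
    Byron, Shelley, Allegra, Mary, Alonzo, Babbage, Conway, Dijkstra,
}

fn era_from_index(ix: u8) -> Option<Era> {
    use Era::*;
    Some(match ix {
        0 => Byron,
        1 => Shelley,
        2 => Allegra,
        3 => Mary,
        4 => Alonzo,
        5 => Babbage,
        6 => Conway,
        7 => Dijkstra,
        _ => return None, // unknown era index
    })
}

fn main() {
    assert_eq!(era_from_index(6), Some(Era::Conway));
    assert_eq!(era_from_index(8), None);
}
```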


Summary: Protocol Error Triggers

This table lists the most common protocol violations that terminate the connection.

Protocol          Error Condition                   Trigger
Handshake         VersionMismatch                   No common version in propose
Handshake         Refused                           Magic mismatch, policy rejection
Handshake         HandshakeDecodeError              Failed to decode version params
ChainSync         Agency violation                  Client sends MsgRollForward (server-only message)
ChainSync         ProtocolErrorRequestNonBlocking   Server sends MsgAwaitReply but StNext(StMustReply) was active (not StCanAwait)
BlockFetch        Agency violation                  Client sends MsgBlock (server-only message)
TxSubmission2     Protocol error                    Any message before MsgInit is processed
TxSubmission2     BlockingReply empty               Server sends MsgRequestTxIds(blocking=true) and client replies with empty list
TxSubmission2     Size mismatch                     Reported SizeInBytes deviates >10 bytes from actual tx wire size (V2 inbound)
KeepAlive         KeepAliveCookieMissmatch          Response cookie != request cookie
PeerSharing       Protocol error                    Server replies with more peers than requested
LocalStateQuery   AcquireFailurePointTooOld         SpecificPoint is outside the volatile window
LocalStateQuery   AcquireFailurePointNotOnChain     SpecificPoint not on the node's chain
LocalStateQuery   ImmutableTip on old version       Attempting MsgAcquire(ImmutableTip) before N2C V16
Any               Byte limit exceeded               Ingress queue overflow (per-state byte limits)
Any               Timeout exceeded                  Per-state timing limits (see per-protocol tables)

Source File Index

All files are in the IntersectMBO/ouroboros-network repository (main branch) unless otherwise noted.

Protocol / Topic                        File
N2N Handshake Type                      ouroboros-network/framework/lib/Ouroboros/Network/Protocol/Handshake/Type.hs
N2N Handshake Codec                     ouroboros-network/framework/lib/Ouroboros/Network/Protocol/Handshake/Codec.hs
N2N Handshake CDDL                      cardano-diffusion/protocols/cddl/specs/handshake-node-to-node-v14.cddl
N2C Handshake CDDL                      cardano-diffusion/protocols/cddl/specs/handshake-node-to-client.cddl
N2N Version data v14 CDDL               cardano-diffusion/protocols/cddl/specs/node-to-node-version-data-v14.cddl
N2N Version data v16 CDDL               cardano-diffusion/protocols/cddl/specs/node-to-node-version-data-v16.cddl
N2C Version enum                        cardano-diffusion/api/lib/Cardano/Network/NodeToClient/Version.hs
N2N Version enum                        cardano-diffusion/api/lib/Cardano/Network/NodeToNode/Version.hs
ChainSync Type                          ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Type.hs
ChainSync Codec                         ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/Codec.hs
ChainSync TimeLimits                    cardano-diffusion/protocols/lib/Cardano/Network/Protocol/ChainSync/Codec/TimeLimits.hs
ChainSync CDDL                          cardano-diffusion/protocols/cddl/specs/chain-sync.cddl
ChainSync Pipelining                    ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/ChainSync/PipelineDecision.hs
BlockFetch Type                         ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Type.hs
BlockFetch Codec                        ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/BlockFetch/Codec.hs
BlockFetch CDDL                         cardano-diffusion/protocols/cddl/specs/block-fetch.cddl
TxSubmission2 Type                      ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Type.hs
TxSubmission2 Codec                     ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/TxSubmission2/Codec.hs
TxSubmission2 CDDL                      cardano-diffusion/protocols/cddl/specs/tx-submission2.cddl
KeepAlive Type                          ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Type.hs
KeepAlive Codec                         ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/KeepAlive/Codec.hs
KeepAlive CDDL                          cardano-diffusion/protocols/cddl/specs/keep-alive.cddl
PeerSharing Type                        ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/PeerSharing/Type.hs
PeerSharing Codec (Cardano)             cardano-diffusion/protocols/lib/Cardano/Network/Protocol/PeerSharing/Codec.hs
PeerSharing CDDL                        cardano-diffusion/protocols/cddl/specs/peer-sharing-v14.cddl
LocalStateQuery Type                    ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalStateQuery/Type.hs
LocalStateQuery Codec                   ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalStateQuery/Codec.hs
LocalStateQuery CDDL                    cardano-diffusion/protocols/cddl/specs/local-state-query.cddl
LocalTxSubmission Type                  ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxSubmission/Type.hs
LocalTxSubmission Codec                 ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxSubmission/Codec.hs
LocalTxSubmission CDDL                  cardano-diffusion/protocols/cddl/specs/local-tx-submission.cddl
LocalTxMonitor Type                     ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxMonitor/Type.hs
LocalTxMonitor Codec                    ouroboros-network/protocols/lib/Ouroboros/Network/Protocol/LocalTxMonitor/Codec.hs
LocalTxMonitor CDDL                     cardano-diffusion/protocols/cddl/specs/local-tx-monitor.cddl
Protocol Limits (byte/time constants)   ouroboros-network/api/lib/Ouroboros/Network/Protocol/Limits.hs
Diffusion Configuration                 cardano-diffusion/lib/Cardano/Network/Diffusion/Configuration.hs
Mux SDU framing                         ouroboros-network/network-mux/src/Network/Mux/Types.hs
HFC era encoding (encodeNS)             ouroboros-consensus repo: src/.../HardFork/Combinator/Serialisation/Common.hs
network.base.cddl                       cardano-diffusion/protocols/cddl/specs/network.base.cddl

Nightly Benchmark Results — 2026-04-06

Machine: GitHub Actions ubuntu-latest
Branch: main (0c4920a)

Storage Benchmarks

Benchmarking chaindb/sequential_insert/10k_20kb
Benchmarking chaindb/sequential_insert/10k_20kb: Warming up for 3.0000 s

Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 46.5s.
Benchmarking chaindb/sequential_insert/10k_20kb: Collecting 10 samples in estimated 46.544 s (10 iterations)
Benchmarking chaindb/sequential_insert/10k_20kb: Analyzing
chaindb/sequential_insert/10k_20kb
                        time:   [4.1789 s 4.3073 s 4.4968 s]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high severe

Benchmarking chaindb/random_read/by_hash/10000blks
Benchmarking chaindb/random_read/by_hash/10000blks: Warming up for 3.0000 s

Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 400.3s, or reduce sample count to 10.
Benchmarking chaindb/random_read/by_hash/10000blks: Collecting 100 samples in estimated 400.29 s (100 iterations)
Benchmarking chaindb/random_read/by_hash/10000blks: Analyzing
chaindb/random_read/by_hash/10000blks
                        time:   [31.748 ms 31.962 ms 32.175 ms]
Benchmarking chaindb/random_read/by_hash/100000blks
Benchmarking chaindb/random_read/by_hash/100000blks: Warming up for 3.0000 s

Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 4402.7s, or reduce sample count to 10.
Benchmarking chaindb/random_read/by_hash/100000blks: Collecting 100 samples in estimated 4402.7 s (100 iterations)
Benchmarking chaindb/random_read/by_hash/100000blks: Analyzing
chaindb/random_read/by_hash/100000blks
                        time:   [356.32 ms 361.79 ms 367.51 ms]
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild

Benchmarking chaindb/tip_query
Benchmarking chaindb/tip_query: Warming up for 3.0000 s

Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 361.2s, or reduce sample count to 10.
Benchmarking chaindb/tip_query: Collecting 100 samples in estimated 361.19 s (100 iterations)
Benchmarking chaindb/tip_query: Analyzing
chaindb/tip_query       time:   [30.860 ms 31.845 ms 33.675 ms]
Found 3 outliers among 100 measurements (3.00%)
  2 (2.00%) high mild
  1 (1.00%) high severe

Benchmarking chaindb/has_block
Benchmarking chaindb/has_block: Warming up for 3.0000 s

Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 368.9s, or reduce sample count to 10.
Benchmarking chaindb/has_block: Collecting 100 samples in estimated 368.92 s (100 iterations)
Benchmarking chaindb/has_block: Analyzing
chaindb/has_block       time:   [30.780 ms 30.958 ms 31.138 ms]

Benchmarking chaindb/slot_range_100
Benchmarking chaindb/slot_range_100: Warming up for 3.0000 s

Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 370.3s, or reduce sample count to 10.
Benchmarking chaindb/slot_range_100: Collecting 100 samples in estimated 370.26 s (100 iterations)
Benchmarking chaindb/slot_range_100: Analyzing
chaindb/slot_range_100  time:   [31.206 ms 31.390 ms 31.582 ms]
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild

Benchmarking chaindb/flush_to_immutable/k_2160_blocks_20kb/2160
Benchmarking chaindb/flush_to_immutable/k_2160_blocks_20kb/2160: Warming up for 3.0000 s

Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 7.9s.
Benchmarking chaindb/flush_to_immutable/k_2160_blocks_20kb/2160: Collecting 10 samples in estimated 7.8944 s (10 iterations)
Benchmarking chaindb/flush_to_immutable/k_2160_blocks_20kb/2160: Analyzing
chaindb/flush_to_immutable/k_2160_blocks_20kb/2160
                        time:   [6.5466 ms 6.7124 ms 6.8934 ms]

Benchmarking chaindb/profile_comparison/insert_10k_20kb/in_memory
Benchmarking chaindb/profile_comparison/insert_10k_20kb/in_memory: Warming up for 3.0000 s

Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 36.1s.
Benchmarking chaindb/profile_comparison/insert_10k_20kb/in_memory: Collecting 10 samples in estimated 36.136 s (10 iterations)
Benchmarking chaindb/profile_comparison/insert_10k_20kb/in_memory: Analyzing
chaindb/profile_comparison/insert_10k_20kb/in_memory
                        time:   [3.6752 s 3.7145 s 3.7530 s]
Benchmarking chaindb/profile_comparison/insert_10k_20kb/mmap
Benchmarking chaindb/profile_comparison/insert_10k_20kb/mmap: Warming up for 3.0000 s

Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 36.2s.
Benchmarking chaindb/profile_comparison/insert_10k_20kb/mmap: Collecting 10 samples in estimated 36.188 s (10 iterations)
Benchmarking chaindb/profile_comparison/insert_10k_20kb/mmap: Analyzing
chaindb/profile_comparison/insert_10k_20kb/mmap
                        time:   [3.6515 s 3.6790 s 3.7089 s]
Benchmarking chaindb/profile_comparison/read_500/in_memory
Benchmarking chaindb/profile_comparison/read_500/in_memory: Warming up for 3.0000 s

Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 37.8s.
Benchmarking chaindb/profile_comparison/read_500/in_memory: Collecting 10 samples in estimated 37.751 s (10 iterations)
Benchmarking chaindb/profile_comparison/read_500/in_memory: Analyzing
chaindb/profile_comparison/read_500/in_memory
                        time:   [30.004 ms 30.470 ms 30.927 ms]
Benchmarking chaindb/profile_comparison/read_500/mmap
Benchmarking chaindb/profile_comparison/read_500/mmap: Warming up for 3.0000 s

Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 37.1s.
Benchmarking chaindb/profile_comparison/read_500/mmap: Collecting 10 samples in estimated 37.105 s (10 iterations)
Benchmarking chaindb/profile_comparison/read_500/mmap: Analyzing
chaindb/profile_comparison/read_500/mmap
                        time:   [29.574 ms 30.173 ms 30.837 ms]

**immutabledb/open** (100 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory/10000 | 20.753 ms | 20.834 ms | 20.915 ms | 1 low mild |
| mmap_cached/10000 | 200.19 µs | 200.39 µs | 200.66 µs | 3 high mild, 2 high severe |
| mmap_cold_rebuild/10000 | 3.0467 ms | 3.1954 ms | 3.4748 ms | 3 high mild, 3 high severe |
| in_memory/100000 | 869.31 ms | 870.33 ms | 871.48 ms | 1 high mild, 6 high severe |
| mmap_cached/100000 | 1.6160 ms | 1.6259 ms | 1.6367 ms | — |
| mmap_cold_rebuild/100000 | 26.439 ms | 26.602 ms | 26.785 ms | 1 high mild, 1 high severe |
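Criterion reports every timing as three values: the lower bound, point estimate, and upper bound of the mean iteration time's confidence interval. A hypothetical helper (not part of Dugite) that parses a raw `time: [...]` line into microseconds, e.g. for regression tracking in CI:

```rust
/// Hypothetical helper (not part of Dugite): parse a Criterion
/// `time: [lower estimate upper]` line. The three values are the
/// lower bound, point estimate, and upper bound of the mean
/// iteration time, normalized here to microseconds.
fn parse_time_line(line: &str) -> Option<[f64; 3]> {
    let start = line.find('[')? + 1;
    let end = line.find(']')?;
    let mut out = [0.0_f64; 3];
    let mut parts = line[start..end].split_whitespace();
    for slot in out.iter_mut() {
        let value: f64 = parts.next()?.parse().ok()?;
        // Normalize every unit to microseconds.
        let scale = match parts.next()? {
            "ns" => 1e-3,
            "µs" | "us" => 1.0,
            "ms" => 1e3,
            "s" => 1e6,
            _ => return None,
        };
        *slot = value * scale;
    }
    Some(out)
}

fn main() {
    let line = "time:   [20.753 ms 20.834 ms 20.915 ms]";
    let [lo, est, hi] = parse_time_line(line).expect("well-formed line");
    assert!(lo <= est && est <= hi);
    println!("estimate: {est:.0} µs");
}
```

A narrow interval (bounds close to the estimate) indicates a stable measurement; Criterion separately flags samples outside the expected spread as mild or severe outliers.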

**immutabledb/lookup** (100 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory/10000 | 10.768 ms | 10.814 ms | 10.861 ms | 1 high mild |
| mmap/10000 | 11.058 ms | 11.114 ms | 11.175 ms | 4 high mild, 1 high severe |

**immutabledb/has_block** (100 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory | 34.470 µs | 34.483 µs | 34.496 µs | 2 high mild, 1 high severe |
| mmap | 34.483 µs | 34.494 µs | 34.504 µs | 2 high mild, 1 high severe |

**immutabledb/append/1k_blocks_20kb** (100 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory | 15.298 ms | 15.346 ms | 15.395 ms | — |
| mmap | 16.519 ms | 16.554 ms | 16.588 ms | 2 low mild, 3 high mild |

**immutabledb/slot_range/range_100** (100 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory | 326.17 µs | 327.03 µs | 327.95 µs | 3 low severe, 1 low mild, 5 high mild, 1 high severe |
| mmap | 328.42 µs | 329.92 µs | 331.58 µs | 1 low severe, 4 low mild, 5 high mild, 3 high severe |

**block_index/insert** (100 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory/10000 | 795.78 µs | 796.33 µs | 796.84 µs | 1 high mild, 2 high severe |
| mmap/10000 | 6.2209 ms | 6.2816 ms | 6.3535 ms | 5 high severe |
| in_memory/50000 | 3.4683 ms | 3.4738 ms | 3.4799 ms | 6 high mild, 1 high severe |
| mmap/50000 | 46.040 ms | 46.371 ms | 46.767 ms | 2 high mild, 3 high severe |
| in_memory/100000 | 7.5499 ms | 7.6126 ms | 7.6780 ms | 1 high mild |
| mmap/100000 | 88.699 ms | 89.185 ms | 89.737 ms | 7 high mild, 3 high severe |

**block_index/lookup** (100 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory/10000 | 15.672 µs | 15.695 µs | 15.729 µs | 4 high mild, 9 high severe |
| mmap/10000 | 27.751 µs | 27.771 µs | 27.801 µs | 3 high mild, 2 high severe |
| in_memory/50000 | 16.094 µs | 16.122 µs | 16.167 µs | 4 high mild, 8 high severe |
| mmap/50000 | 20.512 µs | 20.520 µs | 20.529 µs | 4 high mild, 3 high severe |
| in_memory/100000 | 16.328 µs | 16.341 µs | 16.354 µs | 3 high mild, 2 high severe |
| mmap/100000 | 20.002 µs | 20.035 µs | 20.085 µs | 3 high mild, 3 high severe |

**block_index/contains_miss** (100 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory | 11.780 µs | 11.789 µs | 11.805 µs | 5 high mild, 4 high severe |
| mmap | 41.900 µs | 41.923 µs | 41.954 µs | 1 low mild, 4 high mild, 7 high severe |

**scaling/block_index_insert** (10 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory/10000 | 782.85 µs | 785.43 µs | 789.67 µs | 1 high severe |
| mmap/10000 | 6.3196 ms | 6.3656 ms | 6.4264 ms | — |
| in_memory/50000 | 3.4491 ms | 3.4549 ms | 3.4633 ms | — |
| mmap/50000 | 45.067 ms | 45.340 ms | 45.924 ms | — |
| in_memory/100000 | 7.2050 ms | 7.3078 ms | 7.4111 ms | 1 high mild |
| mmap/100000 | 88.885 ms | 89.859 ms | 90.479 ms | — |
| in_memory/250000 | 27.609 ms | 27.849 ms | 28.279 ms | — |
| mmap/250000 | 179.60 ms | 180.91 ms | 182.60 ms | 1 high severe |
| in_memory/500000 | 64.324 ms | 65.047 ms | 65.949 ms | — |
| mmap/500000 | 312.57 ms | 314.26 ms | 316.22 ms | 1 low mild, 1 high mild, 1 high severe |
| in_memory/1000000 | 140.79 ms | 141.98 ms | 142.93 ms | — |
| mmap/1000000 | 556.69 ms | 569.88 ms | 584.27 ms | — |
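For context, the in_memory point estimates above work out to a per-entry insert cost in the tens-to-low-hundreds of nanoseconds across two orders of magnitude of index size. A back-of-the-envelope derivation (not part of the benchmark suite):

```rust
/// Back-of-the-envelope only: per-entry insert cost derived from the
/// in_memory point estimates reported above (total time / entry count).
fn per_entry_ns(total_ms: f64, entries: u64) -> f64 {
    total_ms * 1e6 / entries as f64
}

fn main() {
    // (entry count, point estimate in ms) taken from the scaling results.
    for (n, ms) in [(10_000u64, 0.78543), (100_000, 7.3078), (1_000_000, 141.98)] {
        // Prints roughly 78.5, 73.1, and 142.0 ns/entry respectively.
        println!("{n} entries: {:.1} ns/entry", per_entry_ns(ms, n));
    }
}
```

The mild growth at 1M entries suggests the cost is near-linear overall rather than strictly constant per entry.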

**scaling/block_index_lookup** (10 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory/10000 | 17.760 µs | 17.776 µs | 17.791 µs | 1 low severe, 1 high mild, 1 high severe |
| mmap/10000 | 27.267 µs | 27.300 µs | 27.347 µs | 1 high mild, 1 high severe |
| in_memory/50000 | 18.340 µs | 18.360 µs | 18.392 µs | 1 high mild |
| mmap/50000 | 20.184 µs | 20.247 µs | 20.336 µs | 1 high severe |
| in_memory/100000 | 18.443 µs | 18.454 µs | 18.463 µs | — |
| mmap/100000 | 19.694 µs | 19.701 µs | 19.707 µs | 1 high mild |
| in_memory/250000 | 18.506 µs | 18.538 µs | 18.575 µs | 1 high mild |
| mmap/250000 | 19.587 µs | 19.603 µs | 19.632 µs | 1 high mild, 1 high severe |
| in_memory/500000 | 18.656 µs | 18.686 µs | 18.709 µs | 1 low severe, 1 high mild |
| mmap/500000 | 19.282 µs | 19.288 µs | 19.292 µs | — |
| in_memory/1000000 | 18.463 µs | 18.501 µs | 18.627 µs | 1 high severe |
| mmap/1000000 | 19.209 µs | 19.217 µs | 19.222 µs | — |

**scaling/immutabledb_open** (10 samples per benchmark)

| Benchmark | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| in_memory/10000 | 21.216 ms | 21.443 ms | 21.658 ms | — |
| mmap_cached/10000 | 201.12 µs | 201.35 µs | 201.45 µs | — |
| in_memory/50000 | 437.61 ms | 438.15 ms | 438.72 ms | — |
| mmap_cached/50000 | 746.90 µs | 749.14 µs | 751.96 µs | 1 high severe |
| in_memory/100000 | 867.49 ms | 868.21 ms | 869.04 ms | 1 high mild |
| mmap_cached/100000 | 1.7063 ms | 1.7474 ms | 1.7789 ms | — |
| in_memory/250000 | 2.1551 s | 2.1599 s | 2.1658 s | 2 high mild |
| mmap_cached/250000 | 5.1499 ms | 5.1868 ms | 5.2331 ms | — |
| in_memory/500000 | 22.970 s | 23.249 s | 23.756 s | 2 high severe |
| mmap_cached/500000 | 9.8168 ms | 9.8734 ms | 9.9400 ms | — |

**scaling/chaindb_insert/default_20kb** (10 samples per benchmark)

| Blocks | Lower | Estimate | Upper | Outliers |
|---|---|---|---|---|
| 10000 | 3.6082 s | 3.6286 s | 3.6520 s | — |
| 50000 | 18.577 s | 18.753 s | 18.951 s | 1 high mild |
| 100000 | 37.282 s | 37.491 s | 37.736 s | 1 high mild |
| 250000 | 93.674 s | 93.951 s | 94.218 s | 1 low mild |
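These point estimates imply a sustained ChainDB insert rate of roughly 2,700 blocks/s, or about 53–55 MB/s if the nominal 20 kB block size is taken as 20,000 bytes (an assumption for this quick derivation, which is not part of the benchmark suite):

```rust
/// Rough derivation only: sustained ChainDB insert throughput from the
/// point estimates above, assuming a 20 kB block is 20,000 bytes.
fn throughput(blocks: u64, secs: f64) -> (f64, f64) {
    let blocks_per_sec = blocks as f64 / secs;
    let mb_per_sec = blocks_per_sec * 20_000.0 / 1e6;
    (blocks_per_sec, mb_per_sec)
}

fn main() {
    // (block count, point estimate in seconds) from the table above.
    for (blocks, secs) in [(10_000u64, 3.6286_f64), (50_000, 18.753), (100_000, 37.491), (250_000, 93.951)] {
        let (bps, mbps) = throughput(blocks, secs);
        println!("{blocks} blocks: {bps:.0} blocks/s (~{mbps:.1} MB/s)");
    }
}
```

Throughput stays nearly flat from 10k to 250k blocks, which is the main point of the scaling group: insert cost grows linearly with chain length.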

UTxO Benchmarks

     Running benches/utxo_bench.rs
Benchmarking utxo_store/insert/default/1000000
Benchmarking utxo_store/insert/default/1000000: Warming up for 3.0000 s

Warning: Unable to complete 10 samples in 5.0s. You may wish to increase target time to 27.2s.
Benchmarking utxo_store/insert/default/1000000: Collecting 10 samples in estimated 27.241 s (10 iterations)
Benchmarking utxo_store/insert/default/1000000: Analyzing
utxo_store/insert/default/1000000
                        time:   [2.6234 s 2.6407 s 2.6590 s]

utxo_store/lookup/hit/1000000
                        time:   [578.09 µs 579.75 µs 581.71 µs]
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe

utxo_store/lookup/miss/1000000
                        time:   [336.69 µs 337.18 µs 337.77 µs]
Found 7 outliers among 100 measurements (7.00%)
  5 (5.00%) high mild
  2 (2.00%) high severe

utxo_store/contains/hit time:   [411.35 µs 411.80 µs 412.34 µs]
Found 5 outliers among 100 measurements (5.00%)
  2 (2.00%) high mild
  3 (3.00%) high severe

utxo_store/contains/miss
                        time:   [313.99 µs 314.43 µs 314.91 µs]
Found 6 outliers among 100 measurements (6.00%)
  4 (4.00%) high mild
  2 (2.00%) high severe

utxo_store/remove/sequential/1000000
                        time:   [2.6603 s 2.6724 s 2.6863 s]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high mild

utxo_store/apply_tx/block_50tx_3in_2out
                        time:   [257.74 ms 265.53 ms 272.25 ms]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) low mild

utxo_store/apply_tx/block_300tx_2in_2out
                        time:   [266.18 ms 268.68 ms 271.43 ms]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high mild

utxo_store/multi_asset/insert_mixed_30pct/1000000
                        time:   [3.3259 s 3.3424 s 3.3590 s]
Found 2 outliers among 10 measurements (20.00%)
  1 (10.00%) low mild
  1 (10.00%) high mild

utxo_store/multi_asset/lookup_mixed_30pct/1000000
                        time:   [124.51 ms 131.62 ms 137.82 ms]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) low mild

utxo_store/total_lovelace/scan/1000000
                        time:   [293.23 ms 293.95 ms 294.64 ms]
Found 6 outliers among 100 measurements (6.00%)
  1 (1.00%) low severe
  4 (4.00%) high mild
  1 (1.00%) high severe

utxo_store/rebuild_address_index/rebuild/1000000
                        time:   [607.74 ms 615.34 ms 622.93 ms]
Found 2 outliers among 10 measurements (20.00%)
  1 (10.00%) low mild
  1 (10.00%) high mild

utxo_store/insert_configs/low_8gb/1000000
                        time:   [2.6027 s 2.6114 s 2.6219 s]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high mild

utxo_store/insert_configs/mid_16gb/1000000
                        time:   [2.5256 s 2.5381 s 2.5493 s]

utxo_store/insert_configs/high_32gb/1000000
                        time:   [2.5284 s 2.5448 s 2.5608 s]

utxo_store/insert_configs/high_bloom_16gb/1000000
                        time:   [2.6244 s 2.6429 s 2.6611 s]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) low mild

utxo_store/insert_configs/legacy_small/1000000
                        time:   [2.5963 s 2.6103 s 2.6258 s]

utxo_store/lookup_configs/low_8gb/1000000
                        time:   [451.67 µs 452.44 µs 453.20 µs]
Found 4 outliers among 100 measurements (4.00%)
  1 (1.00%) low mild
  2 (2.00%) high mild
  1 (1.00%) high severe

utxo_store/lookup_configs/mid_16gb/1000000
                        time:   [448.06 µs 448.62 µs 449.22 µs]
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe

utxo_store/lookup_configs/high_32gb/1000000
                        time:   [453.48 µs 454.43 µs 455.86 µs]
Found 13 outliers among 100 measurements (13.00%)
  5 (5.00%) high mild
  8 (8.00%) high severe

utxo_store/lookup_configs/high_bloom_16gb/1000000
                        time:   [451.65 µs 452.65 µs 453.67 µs]
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) high mild
  2 (2.00%) high severe

utxo_store/lookup_configs/legacy_small/1000000
                        time:   [449.66 µs 450.48 µs 451.41 µs]
Found 8 outliers among 100 measurements (8.00%)
  6 (6.00%) high mild
  2 (2.00%) high severe

utxo_scaling/insert/default/100000
                        time:   [214.92 ms 216.41 ms 217.93 ms]

utxo_scaling/insert/default/500000
                        time:   [1.1929 s 1.2019 s 1.2116 s]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high mild

utxo_scaling/insert/default/1000000
                        time:   [2.5693 s 2.5789 s 2.5881 s]
Found 4 outliers among 10 measurements (40.00%)
  1 (10.00%) low severe
  1 (10.00%) low mild
  2 (20.00%) high mild

utxo_scaling/lookup/hit/100000
                        time:   [368.75 µs 370.69 µs 373.21 µs]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high mild

utxo_scaling/lookup/hit/500000
                        time:   [416.01 µs 416.95 µs 417.84 µs]

utxo_scaling/lookup/hit/1000000
                        time:   [445.80 µs 446.62 µs 447.79 µs]

utxo_scaling/apply_tx/block_50tx_3in_2out/100000
                        time:   [16.701 ms 18.142 ms 19.578 ms]

utxo_scaling/apply_tx/block_50tx_3in_2out/500000
                        time:   [123.08 ms 125.53 ms 127.91 ms]

utxo_scaling/apply_tx/block_50tx_3in_2out/1000000
                        time:   [267.98 ms 270.93 ms 274.09 ms]

utxo_scaling/total_lovelace/scan/100000
                        time:   [29.491 ms 29.525 ms 29.559 ms]
Found 3 outliers among 10 measurements (30.00%)
  1 (10.00%) low severe
  2 (20.00%) high mild

utxo_scaling/total_lovelace/scan/500000
                        time:   [148.65 ms 148.94 ms 149.24 ms]

utxo_scaling/total_lovelace/scan/1000000
                        time:   [294.89 ms 297.83 ms 299.93 ms]
Found 2 outliers among 10 measurements (20.00%)
  1 (10.00%) low severe
  1 (10.00%) high mild

utxo_large_scale/insert/default/5000000
                        time:   [15.597 s 15.660 s 15.722 s]

utxo_large_scale/insert/default/10000000
                        time:   [33.098 s 33.276 s 33.431 s]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) low mild

utxo_large_scale/lookup/hit/5000000
                        time:   [1.3285 ms 1.3377 ms 1.3509 ms]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high mild

utxo_large_scale/lookup/hit/10000000
                        time:   [1.9549 ms 2.3566 ms 2.8296 ms]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high severe

utxo_large_scale/total_lovelace/scan/5000000
                        time:   [1.5498 s 1.5658 s 1.5823 s]

utxo_large_scale/total_lovelace/scan/10000000
                        time:   [3.1112 s 3.1446 s 3.1849 s]
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) high mild

LSM Stress Tests

running 3 tests
test tree::mainnet_scale_tests::test_mainnet_scale_wal_crash_recovery ... ok
test tree::mainnet_scale_tests::test_mainnet_scale_insert_read ... ok
test tree::mainnet_scale_tests::test_mainnet_scale_delete_amplification ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 93 filtered out; finished in 8.45s

Third-Party Licenses

Dugite depends on a number of open-source Rust crates. This page documents all third-party dependencies and their license terms.

Total dependencies: 393

License Summary

| License | Count |
|---------|-------|
| MIT OR Apache-2.0 | 205 |
| MIT | 71 |
| Apache-2.0 OR MIT | 33 |
| Unicode-3.0 | 18 |
| Apache-2.0 | 17 |
| Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT | 14 |
| Unlicense OR MIT | 6 |
| BSD-3-Clause | 4 |
| Apache-2.0 OR ISC OR MIT | 2 |
| MIT OR Apache-2.0 OR Zlib | 2 |
| BlueOak-1.0.0 | 2 |
| ISC | 2 |
| CC0-1.0 | 2 |
| BSD-2-Clause OR Apache-2.0 OR MIT | 2 |
| BSD-2-Clause | 1 |
| CC0-1.0 OR Apache-2.0 OR Apache-2.0 WITH LLVM-exception | 1 |
| CC0-1.0 OR MIT-0 OR Apache-2.0 | 1 |
| MIT OR Apache-2.0 OR BSD-1-Clause | 1 |
| Apache-2.0 OR MIT | 1 |
| Zlib | 1 |
| MIT OR Apache-2.0 OR LGPL-2.1-or-later | 1 |
| Apache-2.0 AND ISC | 1 |
| Apache-2.0 OR BSL-1.0 | 1 |
| Zlib OR Apache-2.0 OR MIT | 1 |
| (MIT OR Apache-2.0) AND Unicode-3.0 | 1 |
| Unknown | 1 |
| CDLA-Permissive-2.0 | 1 |
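The summary is simply a tally of identical license expressions across the per-crate dependency list that follows. A minimal sketch of that aggregation, using a few illustrative rows rather than all 393:

```python
from collections import Counter

# A few illustrative (crate, license) rows taken from the dependency list.
deps = [
    ("aho-corasick", "Unlicense OR MIT"),
    ("anyhow", "MIT OR Apache-2.0"),
    ("bitflags", "MIT OR Apache-2.0"),
    ("blst", "Apache-2.0"),
    ("bytes", "MIT"),
]

# The License Summary table is a count of identical license expressions.
summary = Counter(expr for _, expr in deps)
for expr, count in summary.most_common():
    print(f"{expr}: {count}")
```

Note that distinct expressions such as `MIT OR Apache-2.0` and `Apache-2.0 OR MIT` are counted separately, which is why both appear as rows in the summary above.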

Key Dependencies

These are the primary libraries that Dugite directly depends on:

| Crate | Version | License | Description |
|-------|---------|---------|-------------|
| pallas-codec | 1.0.0-alpha.5 | Apache-2.0 | Pallas common CBOR encoding interface and utilities |
| pallas-crypto | 1.0.0-alpha.5 | Apache-2.0 | Cryptographic primitives for Cardano |
| pallas-primitives | 1.0.0-alpha.5 | Apache-2.0 | Ledger primitives and cbor codec for the different Cardano eras |
| pallas-traverse | 1.0.0-alpha.5 | Apache-2.0 | Utilities to traverse over multi-era block data |
| pallas-addresses | 1.0.0-alpha.5 | Apache-2.0 | Ergonomic library to work with different Cardano addresses |
| pallas-network | 1.0.0-alpha.5 | Apache-2.0 | Ouroboros networking stack using async IO |
| uplc | 1.1.21 | Apache-2.0 | Utilities for working with Untyped Plutus Core |
| tokio | 1.50.0 | MIT | An event-driven, non-blocking I/O platform for writing asynchronous I/O backe... |
| hyper | 1.8.1 | MIT | A protective and efficient HTTP library for all. |
| reqwest | 0.12.28 | MIT OR Apache-2.0 | higher level HTTP client library |
| clap | 4.6.0 | MIT OR Apache-2.0 | A simple to use, efficient, and full-featured Command Line Argument Parser |
| serde | 1.0.228 | MIT OR Apache-2.0 | A generic serialization/deserialization framework |
| serde_json | 1.0.149 | MIT OR Apache-2.0 | A JSON serialization file format |
| bincode | 1.3.3 | MIT | A binary serialization / deserialization strategy that uses Serde for transfo... |
| blake2b_simd | 1.0.4 | MIT | a pure Rust BLAKE2b implementation with dynamic SIMD |
| sha2 | 0.9.9 | MIT OR Apache-2.0 | Pure Rust implementation of the SHA-2 hash function family including SHA-224,... |
| ed25519-dalek | 2.2.0 | BSD-3-Clause | Fast and efficient ed25519 EdDSA key generations, signing, and verification i... |
| curve25519-dalek | 4.1.3 | BSD-3-Clause | A pure-Rust implementation of group operations on ristretto255 and Curve25519 |
| blst | 0.3.16 | Apache-2.0 | Bindings for blst BLS12-381 library |
| k256 | 0.13.4 | Apache-2.0 OR MIT | secp256k1 elliptic curve library written in pure Rust with support for ECDSA... |
| minicbor | 0.26.5 | BlueOak-1.0.0 | A small CBOR codec suitable for no_std environments. |
| tracing | 0.1.44 | MIT | Application-level tracing for Rust. |
| tracing-subscriber | 0.3.22 | MIT | Utilities for implementing and composing tracing subscribers. |
| dashmap | 6.1.0 | MIT | Blazing fast concurrent HashMap for Rust. |
| crossbeam | 0.8.4 | MIT OR Apache-2.0 | Tools for concurrent programming |
| dashu-int | 0.4.1 | MIT OR Apache-2.0 | A big integer library with good performance |
| memmap2 | 0.9.10 | MIT OR Apache-2.0 | Cross-platform Rust API for memory-mapped file IO |
| lz4 | 1.28.1 | MIT | Rust LZ4 bindings library. |
| zstd | 0.13.3 | MIT | Binding for the zstd compression library. |
| tar | 0.4.44 | MIT OR Apache-2.0 | A Rust implementation of a TAR file reader and writer. This library does not... |
| crc32fast | 1.5.0 | MIT OR Apache-2.0 | Fast, SIMD-accelerated CRC32 (IEEE) checksum computation |
| hex | 0.4.3 | MIT OR Apache-2.0 | Encoding and decoding data into/from hexadecimal representation. |
| bs58 | 0.5.1 | MIT/Apache-2.0 | Another Base58 codec implementation. |
| bech32 | 0.9.1 | MIT | Encodes and decodes the Bech32 format |
| base64 | 0.22.1 | MIT OR Apache-2.0 | encodes and decodes base64 as bytes or utf8 |
| rand | 0.9.2 | MIT OR Apache-2.0 | Random number generators and other randomness functionality. |
| chrono | 0.4.44 | MIT OR Apache-2.0 | Date and time library for Rust |
| uuid | 1.22.0 | Apache-2.0 OR MIT | A library to generate and parse UUIDs. |
| indicatif | 0.17.11 | MIT | A progress bar and cli reporting library for Rust |
| vrf_dalek | 0.1.0 | Unknown | |

All Dependencies

Complete list of all third-party crates used by Dugite, sorted alphabetically.
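When auditing a list like this, the rows worth a second look are those with an unknown or copyleft-leaning expression (for example `vrf_dalek`, whose license is listed as Unknown above). A minimal sketch of such a check over a few sample rows from these tables:

```python
# Sample (crate, version, license) rows drawn from the dependency tables.
deps = [
    ("aho-corasick", "1.1.4", "Unlicense OR MIT"),
    ("foldhash", "0.1.5", "Zlib"),
    ("vrf_dalek", "0.1.0", "Unknown"),
]

def needs_review(expr: str) -> bool:
    # Flag unknown expressions and anything mentioning a GPL-family
    # license, even when it is only one option of an OR expression.
    return expr == "Unknown" or "GPL" in expr.upper()

flagged = [(name, ver) for name, ver, expr in deps if needs_review(expr)]
print(flagged)  # only vrf_dalek is flagged in this sample
```

A dual-licensed `X OR LGPL-…` expression is still usable under `X` alone, so a flag here means "read the expression", not "incompatible".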

| Crate | Version | License |
|-------|---------|---------|
| aho-corasick | 1.1.4 | Unlicense OR MIT |
| android_system_properties | 0.1.5 | MIT/Apache-2.0 |
| anes | 0.1.6 | MIT OR Apache-2.0 |
| anstream | 1.0.0 | MIT OR Apache-2.0 |
| anstyle | 1.0.13 | MIT OR Apache-2.0 |
| anstyle-parse | 1.0.0 | MIT OR Apache-2.0 |
| anstyle-query | 1.1.5 | MIT OR Apache-2.0 |
| anstyle-wincon | 3.0.11 | MIT OR Apache-2.0 |
| anyhow | 1.0.102 | MIT OR Apache-2.0 |
| arrayref | 0.3.9 | BSD-2-Clause |
| arrayvec | 0.7.6 | MIT OR Apache-2.0 |
| async-trait | 0.1.89 | MIT OR Apache-2.0 |
| atomic-waker | 1.1.2 | Apache-2.0 OR MIT |
| autocfg | 1.5.0 | Apache-2.0 OR MIT |
| base16ct | 0.2.0 | Apache-2.0 OR MIT |
| base58 | 0.2.0 | MIT |
| base64 | 0.22.1 | MIT OR Apache-2.0 |
| base64ct | 1.8.3 | Apache-2.0 OR MIT |
| bech32 | 0.9.1 | MIT |
| bincode | 1.3.3 | MIT |
| bit-set | 0.8.0 | Apache-2.0 OR MIT |
| bit-vec | 0.8.0 | Apache-2.0 OR MIT |
| bitflags | 2.11.0 | MIT OR Apache-2.0 |
| bitvec | 1.0.1 | MIT |
| blake2 | 0.10.6 | MIT OR Apache-2.0 |
| blake2b_simd | 1.0.4 | MIT |
| blake3 | 1.8.3 | CC0-1.0 OR Apache-2.0 OR Apache-2.0 WITH LLVM-exception |
| block-buffer | 0.9.0 | MIT OR Apache-2.0 |
| blst | 0.3.16 | Apache-2.0 |
| bs58 | 0.5.1 | MIT/Apache-2.0 |
| bumpalo | 3.20.2 | MIT OR Apache-2.0 |
| byteorder | 1.5.0 | Unlicense OR MIT |
| bytes | 1.11.1 | MIT |
| cast | 0.3.0 | MIT OR Apache-2.0 |
| cc | 1.2.56 | MIT OR Apache-2.0 |
| cfg-if | 1.0.4 | MIT OR Apache-2.0 |
| cfg_aliases | 0.2.1 | MIT |
| chrono | 0.4.44 | MIT OR Apache-2.0 |
| ciborium | 0.2.2 | Apache-2.0 |
| ciborium-io | 0.2.2 | Apache-2.0 |
| ciborium-ll | 0.2.2 | Apache-2.0 |
| clap | 4.6.0 | MIT OR Apache-2.0 |
| clap_builder | 4.6.0 | MIT OR Apache-2.0 |
| clap_derive | 4.6.0 | MIT OR Apache-2.0 |
| clap_lex | 1.1.0 | MIT OR Apache-2.0 |
| colorchoice | 1.0.4 | MIT OR Apache-2.0 |
| console | 0.15.11 | MIT |
| const-oid | 0.9.6 | Apache-2.0 OR MIT |
| constant_time_eq | 0.4.2 | CC0-1.0 OR MIT-0 OR Apache-2.0 |
| core-foundation-sys | 0.8.7 | MIT OR Apache-2.0 |
| cpufeatures | 0.2.17 | MIT OR Apache-2.0 |
| crc | 3.4.0 | MIT OR Apache-2.0 |
| crc-catalog | 2.4.0 | MIT OR Apache-2.0 |
| crc32fast | 1.5.0 | MIT OR Apache-2.0 |
| criterion | 0.5.1 | Apache-2.0 OR MIT |
| criterion-plot | 0.5.0 | MIT/Apache-2.0 |
| crossbeam | 0.8.4 | MIT OR Apache-2.0 |
| crossbeam-channel | 0.5.15 | MIT OR Apache-2.0 |
| crossbeam-deque | 0.8.6 | MIT OR Apache-2.0 |
| crossbeam-epoch | 0.9.18 | MIT OR Apache-2.0 |
| crossbeam-queue | 0.3.12 | MIT OR Apache-2.0 |
| crossbeam-utils | 0.8.21 | MIT OR Apache-2.0 |
| crunchy | 0.2.4 | MIT |
| crypto-bigint | 0.5.5 | Apache-2.0 OR MIT |
| crypto-common | 0.1.7 | MIT OR Apache-2.0 |
| cryptoxide | 0.4.4 | MIT/Apache-2.0 |
| curve25519-dalek | 4.1.3 | BSD-3-Clause |
| curve25519-dalek-derive | 0.1.1 | MIT/Apache-2.0 |
| darling | 0.21.3 | MIT |
| darling_core | 0.21.3 | MIT |
| darling_macro | 0.21.3 | MIT |
| dashmap | 6.1.0 | MIT |
| dashu-base | 0.4.1 | MIT OR Apache-2.0 |
| dashu-int | 0.4.1 | MIT OR Apache-2.0 |
| der | 0.7.10 | Apache-2.0 OR MIT |
| deranged | 0.5.8 | MIT OR Apache-2.0 |
| derive_more | 1.0.0 | MIT |
| derive_more-impl | 1.0.0 | MIT |
| digest | 0.9.0 | MIT OR Apache-2.0 |
| displaydoc | 0.2.5 | MIT OR Apache-2.0 |
| dyn-clone | 1.0.20 | MIT OR Apache-2.0 |
| ecdsa | 0.16.9 | Apache-2.0 OR MIT |
| ed25519 | 2.2.3 | Apache-2.0 OR MIT |
| ed25519-dalek | 2.2.0 | BSD-3-Clause |
| either | 1.15.0 | MIT OR Apache-2.0 |
| elliptic-curve | 0.13.8 | Apache-2.0 OR MIT |
| encode_unicode | 1.0.0 | Apache-2.0 OR MIT |
| equivalent | 1.0.2 | Apache-2.0 OR MIT |
| errno | 0.3.14 | MIT OR Apache-2.0 |
| fastrand | 2.3.0 | Apache-2.0 OR MIT |
| ff | 0.13.1 | MIT/Apache-2.0 |
| fiat-crypto | 0.2.9 | MIT OR Apache-2.0 OR BSD-1-Clause |
| filetime | 0.2.27 | MIT/Apache-2.0 |
| find-msvc-tools | 0.1.9 | MIT OR Apache-2.0 |
| fnv | 1.0.7 | Apache-2.0 / MIT |
| foldhash | 0.1.5 | Zlib |
| form_urlencoded | 1.2.2 | MIT OR Apache-2.0 |
| fs2 | 0.4.3 | MIT/Apache-2.0 |
| funty | 2.0.0 | MIT |
| futures | 0.3.32 | MIT OR Apache-2.0 |
| futures-channel | 0.3.32 | MIT OR Apache-2.0 |
| futures-core | 0.3.32 | MIT OR Apache-2.0 |
| futures-executor | 0.3.32 | MIT OR Apache-2.0 |
| futures-io | 0.3.32 | MIT OR Apache-2.0 |
| futures-macro | 0.3.32 | MIT OR Apache-2.0 |
| futures-sink | 0.3.32 | MIT OR Apache-2.0 |
| futures-task | 0.3.32 | MIT OR Apache-2.0 |
| futures-util | 0.3.32 | MIT OR Apache-2.0 |
| generic-array | 0.14.7 | MIT |
| getrandom | 0.4.2 | MIT OR Apache-2.0 |
| glob | 0.3.3 | MIT OR Apache-2.0 |
| group | 0.13.0 | MIT/Apache-2.0 |
| half | 2.7.1 | MIT OR Apache-2.0 |
| hamming | 0.1.3 | MIT/Apache-2.0 |
| hashbrown | 0.16.1 | MIT OR Apache-2.0 |
| heck | 0.5.0 | MIT OR Apache-2.0 |
| hermit-abi | 0.5.2 | MIT OR Apache-2.0 |
| hex | 0.4.3 | MIT OR Apache-2.0 |
| hmac | 0.12.1 | MIT OR Apache-2.0 |
| hostname | 0.3.1 | MIT |
| http | 1.4.0 | MIT OR Apache-2.0 |
| http-body | 1.0.1 | MIT |
| http-body-util | 0.1.3 | MIT |
| httparse | 1.10.1 | MIT OR Apache-2.0 |
| hyper | 1.8.1 | MIT |
| hyper-rustls | 0.27.7 | Apache-2.0 OR ISC OR MIT |
| hyper-util | 0.1.20 | MIT |
| iana-time-zone | 0.1.65 | MIT OR Apache-2.0 |
| iana-time-zone-haiku | 0.1.2 | MIT OR Apache-2.0 |
| icu_collections | 2.1.1 | Unicode-3.0 |
| icu_locale_core | 2.1.1 | Unicode-3.0 |
| icu_normalizer | 2.1.1 | Unicode-3.0 |
| icu_normalizer_data | 2.1.1 | Unicode-3.0 |
| icu_properties | 2.1.2 | Unicode-3.0 |
| icu_properties_data | 2.1.2 | Unicode-3.0 |
| icu_provider | 2.1.1 | Unicode-3.0 |
| id-arena | 2.3.0 | MIT/Apache-2.0 |
| ident_case | 1.0.1 | MIT/Apache-2.0 |
| idna | 1.1.0 | MIT OR Apache-2.0 |
| idna_adapter | 1.2.1 | Apache-2.0 OR MIT |
| indexmap | 2.13.0 | Apache-2.0 OR MIT |
| indicatif | 0.17.11 | MIT |
| ipnet | 2.12.0 | MIT OR Apache-2.0 |
| iri-string | 0.7.10 | MIT OR Apache-2.0 |
| is-terminal | 0.4.17 | MIT |
| is_terminal_polyfill | 1.70.2 | MIT OR Apache-2.0 |
| itertools | 0.13.0 | MIT OR Apache-2.0 |
| itoa | 1.0.17 | MIT OR Apache-2.0 |
| jobserver | 0.1.34 | MIT OR Apache-2.0 |
| js-sys | 0.3.91 | MIT OR Apache-2.0 |
| k256 | 0.13.4 | Apache-2.0 OR MIT |
| lazy_static | 1.5.0 | MIT OR Apache-2.0 |
| leb128fmt | 0.1.0 | MIT OR Apache-2.0 |
| libc | 0.2.183 | MIT OR Apache-2.0 |
| libredox | 0.1.14 | MIT |
| linux-raw-sys | 0.12.1 | Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT |
| litemap | 0.8.1 | Unicode-3.0 |
| lock_api | 0.4.14 | MIT OR Apache-2.0 |
| log | 0.4.29 | MIT OR Apache-2.0 |
| lru-slab | 0.1.2 | MIT OR Apache-2.0 OR Zlib |
| lz4 | 1.28.1 | MIT |
| lz4-sys | 1.11.1+lz4-1.10.0 | MIT |
| match_cfg | 0.1.0 | MIT/Apache-2.0 |
| matchers | 0.2.0 | MIT |
| memchr | 2.8.0 | Unlicense OR MIT |
| memmap2 | 0.9.10 | MIT OR Apache-2.0 |
| miette | 5.10.0 | Apache-2.0 |
| miette-derive | 5.10.0 | Apache-2.0 |
| minicbor | 0.26.5 | BlueOak-1.0.0 |
| minicbor-derive | 0.16.2 | BlueOak-1.0.0 |
| mio | 1.1.1 | MIT |
| nu-ansi-term | 0.50.3 | MIT |
| num-bigint | 0.4.6 | MIT OR Apache-2.0 |
| num-conv | 0.2.0 | MIT OR Apache-2.0 |
| num-integer | 0.1.46 | MIT OR Apache-2.0 |
| num-modular | 0.6.1 | Apache-2.0 |
| num-order | 1.2.0 | Apache-2.0 |
| num-rational | 0.4.2 | MIT OR Apache-2.0 |
| num-traits | 0.2.19 | MIT OR Apache-2.0 |
| num_cpus | 1.17.0 | MIT OR Apache-2.0 |
| number_prefix | 0.4.0 | MIT |
| once_cell | 1.21.4 | MIT OR Apache-2.0 |
| once_cell_polyfill | 1.70.2 | MIT OR Apache-2.0 |
| oorandom | 11.1.5 | MIT |
| opaque-debug | 0.3.1 | MIT OR Apache-2.0 |
| pallas-addresses | 1.0.0-alpha.5 | Apache-2.0 |
| pallas-codec | 1.0.0-alpha.5 | Apache-2.0 |
| pallas-crypto | 1.0.0-alpha.5 | Apache-2.0 |
| pallas-network | 1.0.0-alpha.5 | Apache-2.0 |
| pallas-primitives | 1.0.0-alpha.5 | Apache-2.0 |
pallas-traverse1.0.0-alpha.5Apache-2.0
parking_lot0.12.5MIT OR Apache-2.0
parking_lot_core0.9.12MIT OR Apache-2.0
paste1.0.15MIT OR Apache-2.0
peg0.8.5MIT
peg-macros0.8.5MIT
peg-runtime0.8.5MIT
percent-encoding2.3.2MIT OR Apache-2.0
pin-project-lite0.2.17Apache-2.0 OR MIT
pin-utils0.1.0MIT OR Apache-2.0
pkcs80.10.2Apache-2.0 OR MIT
pkg-config0.3.32MIT OR Apache-2.0
plain0.2.3MIT/Apache-2.0
plotters0.3.7MIT
plotters-backend0.3.7MIT
plotters-svg0.3.7MIT
portable-atomic1.13.1Apache-2.0 OR MIT
potential_utf0.1.4Unicode-3.0
powerfmt0.2.0MIT OR Apache-2.0
ppv-lite860.2.21MIT OR Apache-2.0
pretty0.11.3MIT
prettyplease0.2.37MIT OR Apache-2.0
proc-macro21.0.106MIT OR Apache-2.0
proptest1.10.0MIT OR Apache-2.0
quick-error1.2.3MIT/Apache-2.0
quinn0.11.9MIT OR Apache-2.0
quinn-proto0.11.14MIT OR Apache-2.0
quinn-udp0.5.14MIT OR Apache-2.0
quote1.0.45MIT OR Apache-2.0
r-efi6.0.0MIT OR Apache-2.0 OR LGPL-2.1-or-later
radium0.7.0MIT
rand0.9.2MIT OR Apache-2.0
rand_chacha0.9.0MIT OR Apache-2.0
rand_core0.9.5MIT OR Apache-2.0
rand_xorshift0.4.0MIT OR Apache-2.0
rayon1.11.0MIT OR Apache-2.0
rayon-core1.13.0MIT OR Apache-2.0
redox_syscall0.7.3MIT
ref-cast1.0.25MIT OR Apache-2.0
ref-cast-impl1.0.25MIT OR Apache-2.0
regex1.12.3MIT OR Apache-2.0
regex-automata0.4.14MIT OR Apache-2.0
regex-syntax0.8.10MIT OR Apache-2.0
reqwest0.12.28MIT OR Apache-2.0
rfc69790.4.0Apache-2.0 OR MIT
ring0.17.14Apache-2.0 AND ISC
rustc-hash2.1.1Apache-2.0 OR MIT
rustc_version0.4.1MIT OR Apache-2.0
rustix1.1.4Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
rustls0.23.37Apache-2.0 OR ISC OR MIT
rustls-pki-types1.14.0MIT OR Apache-2.0
rustls-webpki0.103.9ISC
rustversion1.0.22MIT OR Apache-2.0
rusty-fork0.3.1MIT/Apache-2.0
ryu1.0.23Apache-2.0 OR BSL-1.0
same-file1.0.6Unlicense/MIT
schemars1.2.1MIT
scopeguard1.2.0MIT OR Apache-2.0
sec10.7.3Apache-2.0 OR MIT
secp256k10.26.0CC0-1.0
secp256k1-sys0.8.2CC0-1.0
semver1.0.27MIT OR Apache-2.0
serde1.0.228MIT OR Apache-2.0
serde_core1.0.228MIT OR Apache-2.0
serde_derive1.0.228MIT OR Apache-2.0
serde_json1.0.149MIT OR Apache-2.0
serde_spanned0.6.9MIT OR Apache-2.0
serde_urlencoded0.7.1MIT/Apache-2.0
serde_with3.17.0MIT OR Apache-2.0
serde_with_macros3.17.0MIT OR Apache-2.0
sha20.9.9MIT OR Apache-2.0
sharded-slab0.1.7MIT
shlex1.3.0MIT OR Apache-2.0
signal-hook-registry1.4.8MIT OR Apache-2.0
signature2.2.0Apache-2.0 OR MIT
slab0.4.12MIT
smallvec1.15.1MIT OR Apache-2.0
snap1.1.1BSD-3-Clause
socket20.6.3MIT OR Apache-2.0
spki0.7.3Apache-2.0 OR MIT
stable_deref_trait1.2.1MIT OR Apache-2.0
static_assertions1.1.0MIT OR Apache-2.0
strsim0.11.1MIT
strum0.26.3MIT
strum_macros0.26.4MIT
subtle2.6.1BSD-3-Clause
syn2.0.117MIT OR Apache-2.0
sync_wrapper1.0.2Apache-2.0
synstructure0.13.2MIT
tap1.0.1MIT
tar0.4.44MIT OR Apache-2.0
tempfile3.27.0MIT OR Apache-2.0
thiserror2.0.18MIT OR Apache-2.0
thiserror-impl2.0.18MIT OR Apache-2.0
thread_local1.1.9MIT OR Apache-2.0
threadpool1.8.1MIT/Apache-2.0
time0.3.47MIT OR Apache-2.0
time-core0.1.8MIT OR Apache-2.0
time-macros0.2.27MIT OR Apache-2.0
tinystr0.8.2Unicode-3.0
tinytemplate1.2.1Apache-2.0 OR MIT
tinyvec1.10.0Zlib OR Apache-2.0 OR MIT
tinyvec_macros0.1.1MIT OR Apache-2.0 OR Zlib
tokio1.50.0MIT
tokio-macros2.6.1MIT
tokio-rustls0.26.4MIT OR Apache-2.0
tokio-util0.7.18MIT
toml0.8.23MIT OR Apache-2.0
toml_datetime0.6.11MIT OR Apache-2.0
toml_edit0.22.27MIT OR Apache-2.0
toml_write0.1.2MIT OR Apache-2.0
tower0.5.3MIT
tower-http0.6.8MIT
tower-layer0.3.3MIT
tower-service0.3.3MIT
tracing0.1.44MIT
tracing-appender0.2.4MIT
tracing-attributes0.1.31MIT
tracing-core0.1.36MIT
tracing-log0.2.0MIT
tracing-serde0.2.0MIT
tracing-subscriber0.3.22MIT
try-lock0.2.5MIT
typed-arena2.0.2MIT
typenum1.19.0MIT OR Apache-2.0
unarray0.1.4MIT OR Apache-2.0
unicode-ident1.0.24(MIT OR Apache-2.0) AND Unicode-3.0
unicode-segmentation1.12.0MIT OR Apache-2.0
unicode-width0.2.2MIT OR Apache-2.0
unicode-xid0.2.6MIT OR Apache-2.0
untrusted0.9.0ISC
uplc1.1.21Apache-2.0
url2.5.8MIT OR Apache-2.0
utf8_iter1.0.4Apache-2.0 OR MIT
utf8parse0.2.2Apache-2.0 OR MIT
uuid1.22.0Apache-2.0 OR MIT
valuable0.1.1MIT
version_check0.9.5MIT/Apache-2.0
vrf_dalek0.1.0Unknown
wait-timeout0.2.1MIT/Apache-2.0
walkdir2.5.0Unlicense/MIT
want0.3.1MIT
wasi0.9.0+wasi-snapshot-preview1Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wasip21.0.2+wasi-0.2.9Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wasip30.4.0+wasi-0.3.0-rc-2026-01-06Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wasm-bindgen0.2.114MIT OR Apache-2.0
wasm-bindgen-futures0.4.64MIT OR Apache-2.0
wasm-bindgen-macro0.2.114MIT OR Apache-2.0
wasm-bindgen-macro-support0.2.114MIT OR Apache-2.0
wasm-bindgen-shared0.2.114MIT OR Apache-2.0
wasm-encoder0.244.0Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wasm-metadata0.244.0Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wasm-streams0.4.2MIT OR Apache-2.0
wasmparser0.244.0Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
web-sys0.3.91MIT OR Apache-2.0
web-time1.1.0MIT OR Apache-2.0
webpki-roots1.0.6CDLA-Permissive-2.0
winapi0.3.9MIT/Apache-2.0
winapi-i686-pc-windows-gnu0.4.0MIT/Apache-2.0
winapi-util0.1.11Unlicense OR MIT
winapi-x86_64-pc-windows-gnu0.4.0MIT/Apache-2.0
windows-core0.62.2MIT OR Apache-2.0
windows-implement0.60.2MIT OR Apache-2.0
windows-interface0.59.3MIT OR Apache-2.0
windows-link0.2.1MIT OR Apache-2.0
windows-result0.4.1MIT OR Apache-2.0
windows-strings0.5.1MIT OR Apache-2.0
windows-sys0.61.2MIT OR Apache-2.0
windows-targets0.53.5MIT OR Apache-2.0
windows_aarch64_gnullvm0.53.1MIT OR Apache-2.0
windows_aarch64_msvc0.53.1MIT OR Apache-2.0
windows_i686_gnu0.53.1MIT OR Apache-2.0
windows_i686_gnullvm0.53.1MIT OR Apache-2.0
windows_i686_msvc0.53.1MIT OR Apache-2.0
windows_x86_64_gnu0.53.1MIT OR Apache-2.0
windows_x86_64_gnullvm0.53.1MIT OR Apache-2.0
windows_x86_64_msvc0.53.1MIT OR Apache-2.0
winnow0.7.15MIT
wit-bindgen0.51.0Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wit-bindgen-core0.51.0Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wit-bindgen-rust0.51.0Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wit-bindgen-rust-macro0.51.0Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wit-component0.244.0Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
wit-parser0.244.0Apache-2.0 WITH LLVM-exception OR Apache-2.0 OR MIT
writeable0.6.2Unicode-3.0
wyz0.5.1MIT
xattr1.6.1MIT OR Apache-2.0
yoke0.8.1Unicode-3.0
yoke-derive0.8.1Unicode-3.0
zerocopy0.8.42BSD-2-Clause OR Apache-2.0 OR MIT
zerocopy-derive0.8.42BSD-2-Clause OR Apache-2.0 OR MIT
zerofrom0.1.6Unicode-3.0
zerofrom-derive0.1.6Unicode-3.0
zeroize1.8.2Apache-2.0 OR MIT
zeroize_derive1.4.3Apache-2.0 OR MIT
zerotrie0.2.3Unicode-3.0
zerovec0.11.5Unicode-3.0
zerovec-derive0.11.2Unicode-3.0
zmij1.0.21MIT
zstd0.13.3MIT
zstd-safe7.2.4MIT OR Apache-2.0
zstd-sys2.0.16+zstd.1.5.7MIT/Apache-2.0

Regenerating This Page

This page is generated from Cargo.lock metadata. To regenerate after dependency changes:

python3 scripts/generate-licenses.py > docs/src/reference/third-party-licenses.md

Troubleshooting

Common issues and their solutions when running Dugite.

Build Issues

Compilation is slow

The initial build compiles all dependencies from source, which takes several minutes. Subsequent builds are much faster thanks to Cargo's incremental build cache.

For faster development iteration, use debug builds:

cargo build  # debug mode, faster compilation

Only use --release when running against a live network.
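If debug binaries are too slow even for local testing, one optional middle ground is Cargo's documented profile-override mechanism: compile your own crates without optimization but optimize dependencies. This is a generic Cargo tip, not a setting Dugite ships with; add it to the workspace `Cargo.toml` if it suits your workflow:

```toml
# Optimize only dependencies in debug builds; your own crates
# still compile quickly and remain debuggable.
[profile.dev.package."*"]
opt-level = 2
```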

Connection Issues

Cannot connect to peers

Symptoms: Node starts but never receives blocks. Logs show connection failures.

Possible causes:

  1. Firewall blocking outbound connections. Cardano relays conventionally listen on TCP port 3001; ensure outbound connections to that port are allowed.

  2. Incorrect network magic. Verify the NetworkMagic in your config matches the target network:

    • Mainnet: 764824073
    • Preview: 2
    • Preprod: 1
  3. DNS resolution failure. If topology uses hostnames, ensure DNS is working:

    nslookup preview-node.play.dev.cardano.org
    
  4. Stale topology. Peer addresses may change. Download the latest topology from the Cardano Operations Book.
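To rule out cause 3 without external tools, a hostname can be resolved the same way a Rust process would. This is an illustrative standalone check, not part of Dugite; the hostname shown is just the public preview relay from the example above:

```rust
use std::net::{SocketAddr, ToSocketAddrs};

// Resolve a "host:port" string to every A/AAAA address it maps to,
// similar to the multi-resolution a node performs for DNS relay entries.
fn resolve_all(host_port: &str) -> std::io::Result<Vec<SocketAddr>> {
    Ok(host_port.to_socket_addrs()?.collect())
}

fn main() {
    match resolve_all("preview-node.play.dev.cardano.org:3001") {
        Ok(addrs) if !addrs.is_empty() => {
            for a in &addrs {
                println!("resolved: {a}");
            }
        }
        Ok(_) => eprintln!("hostname resolved to no addresses"),
        Err(e) => eprintln!("DNS resolution failed: {e}"),
    }
}
```

If this fails while `nslookup` succeeds, suspect the resolver configuration seen by the node process (e.g. a container with no `/etc/resolv.conf`).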

Handshake failures

Error: Handshake failed: version mismatch

This usually means the peer does not support the protocol version Dugite is requesting (V14+). Ensure you are connecting to an up-to-date cardano-node (version 10.x+).

Socket Issues

Cannot connect to node socket

Error: Cannot connect to node socket './node.sock': No such file or directory

Solutions:

  1. Node is not running. Start the node first.

  2. Wrong socket path. Verify the socket path matches what the node was started with:

    dugite-cli query tip --socket-path /path/to/actual/node.sock
    
  3. Permission denied. Ensure the user running the CLI has read/write access to the socket file.

  4. Stale socket file. If the node crashed, the socket file may remain. Delete it and restart:

    rm ./node.sock
    dugite-node run ...
    

Socket permission denied

Error: Permission denied (os error 13)

The Unix socket file inherits the permissions of the process that created it. Ensure both the node and CLI processes run as the same user, or adjust the socket file permissions.
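The three socket failure modes above map onto distinct `std::io::ErrorKind` values, which is how a tool can tell them apart. A hedged sketch (not dugite-cli's actual code): connecting to a missing path yields `NotFound`, an unreadable socket yields `PermissionDenied`, and a leftover socket file with no listener behind it yields `ConnectionRefused`:

```rust
use std::io::ErrorKind;
use std::os::unix::net::UnixStream;

// Map a Unix-socket connection error to the likely cause.
// Illustrative only; the real CLI may report errors differently.
fn diagnose(kind: ErrorKind) -> &'static str {
    match kind {
        ErrorKind::NotFound => "node not running, or wrong --socket-path",
        ErrorKind::PermissionDenied => "CLI user lacks access to the socket file",
        ErrorKind::ConnectionRefused => "stale socket file: delete it and restart the node",
        _ => "unexpected error; check the node logs",
    }
}

fn main() {
    if let Err(e) = UnixStream::connect("./node.sock") {
        println!("{}", diagnose(e.kind()));
    } else {
        println!("socket is live");
    }
}
```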

Storage Issues

Database corruption

Symptoms: Node crashes on startup with storage errors.

Solution: The safest approach is to delete the database and resync:

rm -rf ./db-path
dugite-node run ...

For faster recovery, use Mithril snapshot import:

rm -rf ./db-path
dugite-node mithril-import --network-magic 2 --database-path ./db-path
dugite-node run ...

Disk space

Cardano databases grow continuously. Approximate sizes:

| Network | Database Size |
|---------|---------------|
| Mainnet | 90-140+ GB |
| Preview | 8-15+ GB |
| Preprod | 20-35+ GB |

Monitor disk usage and ensure adequate free space.

Sync Issues

Sync is slow

Possible causes:

  1. Single peer. Dugite benefits from multiple peers for block fetching. Ensure your topology includes multiple bootstrap peers or enable ledger-based peer discovery.

  2. Network latency. The ChainSync protocol has an inherent per-header RTT (~300ms). High-latency connections will reduce throughput.

  3. Slow disk. Storage performance depends on disk I/O speed. SSDs are strongly recommended. On Linux, enable io_uring for improved UTxO storage performance: cargo build --release --features io-uring.

  4. CPU-bound during ledger validation. Block processing includes UTxO validation and Plutus script execution. This is CPU-intensive during sync.

Recommendation: Use Mithril snapshot import to bypass the initial sync bottleneck entirely.
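The impact of cause 2 is easy to quantify. Un-pipelined, each header costs one round trip, so throughput is capped at `1000 / RTT_ms` headers per second; pipelining multiplies that by the number of requests kept in flight. A back-of-envelope sketch using the ~300 ms figure above (the in-flight count of 50 is illustrative, not a Dugite constant):

```rust
// Serial ChainSync: one header per round trip.
fn serial_headers_per_sec(rtt_ms: f64) -> f64 {
    1000.0 / rtt_ms
}

// Pipelined: `in_flight` requests overlap on the same connection.
fn pipelined_headers_per_sec(rtt_ms: f64, in_flight: f64) -> f64 {
    in_flight * 1000.0 / rtt_ms
}

fn main() {
    let rtt = 300.0;
    println!("serial:    {:.1} headers/s", serial_headers_per_sec(rtt));
    println!("pipelined: {:.1} headers/s (50 in flight)", pipelined_headers_per_sec(rtt, 50.0));
}
```

At 300 ms RTT, serial sync manages only ~3.3 headers/s; with 50 requests in flight the cap rises to ~167/s, which is why multi-peer pipelined fetching matters on high-latency links.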

Sync stalls

Symptoms: Progress percentage stops increasing, no new blocks logged.

Possible causes:

  1. Peer disconnected. The node will reconnect automatically with exponential backoff. Wait a few minutes.

  2. All peers at same height. If all configured peers are also syncing, they may not have new blocks to serve. Add more peers to the topology.

  3. Resource exhaustion. Check for out-of-memory or file descriptor limits.

Memory Issues

Out of memory

Dugite's memory usage depends on:

  • UTxO set size (the largest memory consumer)
  • Number of connected peers
  • VolatileDB (last k=2160 blocks in memory)

For mainnet, expect memory usage of 8-16 GB depending on sync progress.

If running on a memory-constrained system, ensure adequate swap space is configured.

Logging

Increase log verbosity

Use the RUST_LOG environment variable:

# Debug all crates
RUST_LOG=debug dugite-node run ...

# Debug specific crate
RUST_LOG=dugite_network=debug dugite-node run ...

# Trace level (very verbose)
RUST_LOG=trace dugite-node run ...

Log to file

Use the built-in file logging:

dugite-node run --log-output file --log-dir /var/log/dugite ...

Log files are rotated daily by default. See Logging for rotation options and multi-target output.

SIGHUP Topology Reload

To update topology without restarting:

# Edit topology.json
kill -HUP $(pidof dugite-node)

The node will log that the topology was reloaded and update the peer manager with the new configuration.

Block Producer Issues

Block producer shows ZERO stake

Cause: Snapshot loaded before UTxO store was attached, corrupting pool_stake values.

Fix: Automatic on restart — rebuild_stake_distribution runs after UTxO store attachment.

Verify: Check the log for "Block producer: pool stake in 'set' snapshot" with a non-zero pool_stake_lovelace value.

Node enters reconnection loop after forging

Cause: Forged block lost a slot battle and was persisted to ImmutableDB.

Symptoms: Log shows "intersection fell to Origin" or the node repeatedly reconnects to upstream peers.

Fix: The fork recovery mechanism now handles this automatically. If the issue persists, re-import from Mithril:

dugite-node mithril-import --network-magic <magic> --database-path <path>

See Fork Recovery & ImmutableDB Contamination for details on how the recovery mechanism works.

Epoch & State Issues

Epoch number appears wrong (e.g., epoch 445 instead of 1239)

Cause: Snapshot saved with incorrect epoch_length defaults (mainnet 432000 instead of preview 86400).

Fix: Automatic correction on load — the epoch is recalculated from the tip slot using genesis parameters.

Log message: "Snapshot epoch differs from computed epoch — correcting"
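Ignoring the Byron-era slot offset (omitted here for clarity; real networks fold it in), the recalculation reduces to integer division of the tip slot by the Shelley epoch length, which is exactly why a wrong `epoch_length` default produces a wildly wrong epoch number. A minimal sketch with a hypothetical tip slot:

```rust
// Epoch as a pure function of the tip slot and the Shelley epoch length.
// Real genesis handling also accounts for the Byron-era offset.
fn epoch_of(tip_slot: u64, epoch_length: u64) -> u64 {
    tip_slot / epoch_length
}

fn main() {
    let tip_slot = 107_000_000; // hypothetical preview tip slot
    // Preview uses 86_400-slot epochs; the mainnet default is 432_000.
    println!("preview length: epoch {}", epoch_of(tip_slot, 86_400));
    println!("mainnet length: epoch {}", epoch_of(tip_slot, 432_000));
}
```

The same slot interpreted with the mainnet epoch length lands several hundred epochs low, matching the symptom above.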

VRF verification failures after restart

Cause: Epoch nonce in snapshot may be stale if saved with wrong epoch boundaries.

Fix: VRF verification is non-fatal (nonce_established=false) until at least one epoch transition is observed. The node will sync normally; forging is enabled once the nonce is established.

Getting Help

If you encounter an issue not covered here:

  1. Check the GitHub issues
  2. Open a new issue with:
    • Dugite version (dugite-node --version)
    • Operating system
    • Configuration files (redact any sensitive info)
    • Relevant log output
    • Steps to reproduce