Running in production
Database
By default, CometBFT uses the syndtr/goleveldb package for its in-process
key-value database. If you want maximal performance, it may be best to install
the real C implementation of LevelDB and compile CometBFT to use it via
make build COMETBFT_BUILD_OPTIONS=cleveldb. See the install
instructions for details.
CometBFT keeps multiple distinct databases in $CMTHOME/data:
- blockstore.db: keeps the entire blockchain - stores blocks, block commits, and block metadata, each indexed by height. Used to sync new peers.
- evidence.db: stores all verified evidence of misbehavior.
- state.db: stores the current blockchain state (i.e. height, validators, consensus params). Only grows if consensus params or validators change. Also used to temporarily store intermediate results during block processing.
- tx_index.db: indexes transactions by tx hash and height. The tx results are indexed if they are added to the FinalizeBlock response in the application.
Logging
The default logging level (log_level = "main:info,state:info,statesync:info,*:error") should suffice for
normal operation. Read this
post
for details on how to configure the log_level config variable. Some of the
modules can be found here. If
you’re trying to debug CometBFT, or are asked to provide logs at debug
level, you can do so by running CometBFT with
--log_level="*:debug".
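The same setting can be made persistent in config.toml instead of passing it on the command line; the key below follows the log_level format quoted above:

```toml
# config.toml: verbose logging for debugging
# (equivalent to running with --log_level="*:debug")
log_level = "*:debug"
```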
Write Ahead Logs (WAL)
CometBFT uses write ahead logs for the consensus (cs.wal) and the mempool
(mempool.wal). Both WALs have a max size of 1GB and are automatically rotated.
Consensus WAL
The consensus.wal is used to ensure we can recover from a crash at any point
in the consensus state machine.
It writes all consensus messages (timeouts, proposals, block parts, and votes)
to a single file, flushing to disk before processing messages from its own
validator. Since CometBFT validators are expected to never sign conflicting votes, the
WAL ensures we can always recover deterministically to the latest state of the consensus without
using the network or re-signing any consensus messages.
If your consensus.wal is corrupted, see below.
Mempool WAL
The mempool.wal logs all incoming transactions before running CheckTx, but is
otherwise not used in any programmatic way. It is just a kind of manual
safeguard. Note that the mempool provides no durability guarantees - a tx sent to one or many nodes
may never make it into the blockchain if those nodes crash before being able to
propose it. Clients must monitor their transactions by subscribing over websockets,
polling for them, or using /broadcast_tx_commit. In the worst case, transactions can be
resent from the mempool WAL manually.
For the above reasons, the mempool.wal is disabled by default. To enable, set
mempool.wal_dir to where you want the WAL to be located (e.g.
data/mempool.wal).
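In config.toml that looks like the following (the path is the example from above):

```toml
[mempool]
# enable the mempool WAL by giving it a location (disabled when empty)
wal_dir = "data/mempool.wal"
```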
DoS Exposure and Mitigation
Validators are expected to set up a Sentry Node Architecture to prevent Denial-of-Service attacks.
P2P
The core of the CometBFT peer-to-peer system is MConnection. Each
connection has MaxPacketMsgPayloadSize, the maximum packet
size, as well as bounded send & receive queues. One can impose restrictions on the
send & receive rate per connection (SendRate, RecvRate).
The number of open P2P connections can become quite large, and hit the operating system’s open
file limit (since TCP connections are considered files on UNIX-based systems). Nodes should be
given a sizable open file limit, e.g. 8192, via ulimit -n 8192 or other deployment-specific
mechanisms.
RPC
Attack Exposure and Mitigation
It is generally not recommended for RPC endpoints to be exposed publicly, and especially so if the node in question is a validator, as the CometBFT RPC does not currently provide advanced security features. Public exposure of RPC endpoints without appropriate protection can make the associated node vulnerable to a variety of attacks. It is entirely up to operators to ensure, if nodes’ RPC endpoints have to be exposed publicly, that appropriate measures have been taken to mitigate against attacks. Some examples of mitigation measures include, but are not limited to:
- Never publicly exposing the RPC endpoints of validators (i.e. if the RPC endpoints absolutely have to be exposed, ensure you do so only on full nodes and with appropriate protection)
- Correct usage of rate-limiting, authentication and caching (e.g. as provided by reverse proxies like nginx and/or DDoS protection services like Cloudflare)
- Only exposing the specific endpoints absolutely necessary for the relevant use cases (configurable via nginx/Cloudflare/etc.)
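As an illustrative sketch (not a vetted production config), a reverse proxy in front of the RPC could rate-limit clients and expose only a single endpoint; the upstream address below assumes the default RPC port 26657, and the server name is hypothetical:

```nginx
# hypothetical nginx config: rate-limit and expose only /status
limit_req_zone $binary_remote_addr zone=rpc:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name rpc.example.com;

    location = /status {
        limit_req zone=rpc burst=20 nodelay;
        proxy_pass http://127.0.0.1:26657/status;
    }
    # every other RPC endpoint stays unexposed
}
```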
Endpoints Returning Multiple Entries
Endpoints returning multiple entries are limited by default to return 30 elements (100 max). See the RPC Documentation for more information.
Debugging CometBFT
If you ever have to debug CometBFT, the first thing you should probably do is check out the logs. See How to read logs, where we explain what certain log statements mean. If, after skimming through the logs, things are still not clear, the next thing to try is querying the /status RPC endpoint. It provides the necessary info:
whether the node is syncing or not, what height it is on, etc.
/dump_consensus_state will give you a detailed overview of the consensus
state (proposer, latest validators, peers states). From it, you should be able
to figure out why, for example, the network had halted.
/consensus_state returns
just the votes seen at the current height.
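As a sketch, the sync status can also be checked programmatically by parsing the /status response. The field names below follow the shape of CometBFT's result.sync_info object; the values in the sample are made up for illustration, so verify against your node's actual output:

```python
import json

# Illustrative /status response fragment (fields follow CometBFT's
# result.sync_info shape; the values are invented for this example).
sample = json.loads("""
{
  "result": {
    "sync_info": {
      "latest_block_height": "12345",
      "catching_up": false
    }
  }
}
""")

sync = sample["result"]["sync_info"]
height = int(sync["latest_block_height"])  # heights arrive as strings
syncing = sync["catching_up"]              # True while the node is still syncing
print(height, syncing)
```

A monitoring script could alert whenever catching_up stays true for too long, or whenever latest_block_height stops advancing.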
If, after consulting the logs and the above endpoints, you still have no idea
what’s happening, consider using the cometbft debug kill sub-command. This
command will scrape all the available info and kill the process. See
Debugging for the exact format.
You can inspect the resulting archive yourself or create an issue on
GitHub. Before opening an issue,
however, be sure to check whether an
issue already exists.
Monitoring CometBFT
Each CometBFT instance has a standard /health RPC endpoint, which responds
with 200 (OK) if everything is fine, and with 500 (or no response) if something is
wrong.
Other useful endpoints include the previously mentioned /status, /net_info and
/validators.
CometBFT also can report and serve Prometheus metrics. See
Metrics.
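A minimal Prometheus scrape config might look like the following, assuming metrics are enabled and served on port 26660 (check prometheus_listen_addr in config.toml for your node's actual address):

```yaml
scrape_configs:
  - job_name: "cometbft"
    static_configs:
      - targets: ["localhost:26660"]  # prometheus_listen_addr
```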
cometbft debug dump sub-command can be used to periodically dump useful
information into an archive. See Debugging for more
information.
What happens when my app dies
You are supposed to run CometBFT under a process supervisor (like systemd or runit), which will ensure CometBFT is always running (despite possible errors). Getting back to the original question: if your application dies, CometBFT will panic. After a process supervisor restarts your application, CometBFT should be able to reconnect successfully. The order of restart does not matter.
Signal handling
We catch SIGINT and SIGTERM and try to clean up nicely. For other signals we use the default behavior in Go: Default behavior of signals in Go programs.
Corruption
NOTE: Make sure you have a backup of the CometBFT data directory.
Possible causes
Remember that most corruption is caused by hardware issues:
- RAID controllers with faulty / worn out battery backup, and an unexpected power loss
- Hard disk drives with write-back cache enabled, and an unexpected power loss
- Cheap SSDs with insufficient power-loss protection, and an unexpected power-loss
- Defective RAM
- Defective or overheating CPU(s)
- Database systems configured with fsync=off and an OS crash or power loss
- Filesystems configured to use write barriers plus a storage layer that ignores write barriers. LVM is a particular culprit.
- CometBFT bugs
- Operating system bugs
- Admin error (e.g., directly modifying CometBFT data-directory contents)
WAL Corruption
If the consensus WAL is corrupted at the latest height and you are trying to start CometBFT, replay will fail with a panic. Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take:
- Delete the WAL file and restart CometBFT. It will attempt to sync with other peers.
- Try to repair the WAL file manually:
  1. Create a backup of the corrupted WAL file:
  2. Use ./scripts/wal2json to create a human-readable version:
  3. Search for a “CORRUPTED MESSAGE” line.
  4. By looking at the previous message, the message after the corrupted one,
     and the logs, try to rebuild the message. If the subsequent
     messages are marked as corrupted too (this may happen if the length header
     got corrupted or some writes did not make it to the WAL ~ truncation),
     then remove all the lines starting from the corrupted one and restart
     CometBFT.
  5. After editing, convert this file back into binary form by running:
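The commands for steps 1, 2 and 5 might look roughly like the following, assuming the default WAL location under $CMTHOME and the wal2json/json2wal helper scripts shipped in the CometBFT repository (all paths are illustrative - adjust them to your setup):

```shell
# 1. back up the corrupted WAL (illustrative path)
cp "$CMTHOME/data/cs.wal/wal" /tmp/corrupted_wal_backup

# 2. convert to a human-readable form for inspection and editing
./scripts/wal2json/wal2json "$CMTHOME/data/cs.wal/wal" > /tmp/wal.json

# steps 3-4: edit /tmp/wal.json by hand, removing corrupted lines

# 5. convert the edited file back into binary form
./scripts/json2wal/json2wal /tmp/wal.json "$CMTHOME/data/cs.wal/wal"
```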
Hardware
Processor and Memory
While actual specs vary depending on the load and validator count, the minimal requirements are:
- 1GB RAM
- 25GB of disk space
- 1.4 GHz CPU
Recommended:
- 2GB RAM
- 100GB SSD
- x64 2.0 GHz 2v CPU
Validator signing on 32 bit architectures (or ARM)
Both our ed25519 and secp256k1 implementations require constant-time
uint64 multiplication. Non-constant-time crypto can (and has) leaked
private keys on both ed25519 and secp256k1. This doesn’t exist in hardware
on 32 bit x86 platforms (source), and it
depends on the compiler to enforce that it is constant time. It’s unclear at
this point whether the Go compiler does this correctly for all
implementations.
We do not support nor recommend running a validator on 32 bit architectures OR
the “VIA Nano 2000 Series”, and the architectures in the ARM section rated
“S-”.
Operating Systems
CometBFT can be compiled for a wide range of operating systems thanks to the Go language (the list of $OS/$ARCH pairs can be found here). While we do not favor any operating system, more secure and stable Linux server distributions (like CentOS) should be preferred over desktop operating systems (like macOS).
Miscellaneous
NOTE: if you are going to use CometBFT in a public domain, make sure you read the hardware recommendations for a validator in the Cosmos network.
Configuration parameters
p2p.flush_throttle_timeout
p2p.max_packet_msg_payload_size
p2p.send_rate
p2p.recv_rate
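These parameters live in the [p2p] section of config.toml; the values below are merely illustrative, so check your node's generated defaults before tuning:

```toml
[p2p]
flush_throttle_timeout = "100ms"    # how long to wait before flushing queued messages
max_packet_msg_payload_size = 1024  # maximum packet payload size, in bytes
send_rate = 5120000                 # send rate limit, bytes/second
recv_rate = 5120000                 # receive rate limit, bytes/second
```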
mempool.recheck
mempool.recheck=false.
mempool.broadcast
consensus.skip_timeout_commit
Set skip_timeout_commit=false when there is economics on the line,
because proposers should wait to hear more votes. But if you don’t
care about that and want the fastest consensus, you can skip it. It will
be kept false by default for public deployments (e.g. Cosmos
Hub), while for enterprise
applications, setting it to true is not a problem.
consensus.peer_gossip_sleep_duration
consensus.timeout_commit
timeout_commit (time we sleep before
proposing the next block).
p2p.addr_book_strict
addr_book_strict
to false (turn it off).
rpc.max_open_connections
ulimit -n 8192.
…for N connections, such as 50k:
rpc.grpc_max_open_connections.