| CEPH-BLUESTORE-TOOL(8) | Ceph | CEPH-BLUESTORE-TOOL(8) |
NAME
ceph-bluestore-tool - bluestore administrative tool
SYNOPSIS
ceph-bluestore-tool command [ --dev device ... ] [ -i osd_id ] [ --path osd path ] [ --out-dir dir ] [ --log-file | -l filename ] [ --deep ]
ceph-bluestore-tool fsck|repair --path osd path [ --deep ]
ceph-bluestore-tool qfsck --path osd path
ceph-bluestore-tool allocmap --path osd path
ceph-bluestore-tool restore_cfb --path osd path
ceph-bluestore-tool show-label --dev device ...
ceph-bluestore-tool show-label-at --dev device --offset lba ...
ceph-bluestore-tool prime-osd-dir --dev device --path osd path
ceph-bluestore-tool bluefs-export --path osd path --out-dir dir
ceph-bluestore-tool bluefs-bdev-new-wal --path osd path --dev-target new-device
ceph-bluestore-tool bluefs-bdev-new-db --path osd path --dev-target new-device
ceph-bluestore-tool bluefs-bdev-migrate --path osd path --dev-target new-device --devs-source device1 [--devs-source device2]
ceph-bluestore-tool free-dump|free-score --path osd path [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
ceph-bluestore-tool bluefs-stats --path osd path
ceph-bluestore-tool bluefs-files --path osd path
ceph-bluestore-tool reshard --path osd path --sharding new sharding [ --resharding-ctrl control string ]
ceph-bluestore-tool show-sharding --path osd path
ceph-bluestore-tool trim --path osd path
ceph-bluestore-tool zap-device --dev dev path
ceph-bluestore-tool revert-wal-to-plain --path osd path
DESCRIPTION
ceph-bluestore-tool is a utility to perform low-level administrative operations on a BlueStore instance.
COMMANDS
help
fsck [ --deep (on|off|yes|no|1|0|true|false) ]
repair
qfsck
allocmap
restore_cfb
bluefs-export
bluefs-bdev-sizes --path osd path
bluefs-bdev-expand --path osd path
bluefs-bdev-new-wal --path osd path --dev-target new-device
bluefs-bdev-new-db --path osd path --dev-target new-device
bluefs-bdev-migrate --dev-target new-device --devs-source device1 [--devs-source device2]
- if the source list includes the DB volume, the target device replaces it.
- if the source list includes the WAL volume, the target device replaces it.
- if the source list contains only the slow volume, the operation is not permitted; an explicit allocation via a new-DB/new-WAL command is required first.
show-label --dev device [...]
show-label-at --dev device --offset lba [...]
free-dump --path osd path [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
free-score --path osd path [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
bluefs-stats --path osd path
bluefs-files --path osd path
reshard --path osd path --sharding new sharding [ --resharding-ctrl control string ]
show-sharding --path osd path
trim --path osd path
An SSD that has been used heavily may experience performance degradation. This operation uses TRIM / discard to free unused blocks from BlueStore and BlueFS block devices, and allows the drive to perform more efficient internal housekeeping. If BlueStore runs with discard enabled, this option may not be useful.
zap-device --dev dev path
Zeros all device label locations. This effectively makes the device appear empty.
revert-wal-to-plain --path osd path
Changes WAL files from envelope mode back to the legacy plain mode. Useful for downgrades, or when you want to disable the envelope feature (bluefs_wal_envelope_mode).
OPTIONS
- --dev *device*
- Add device to the list of devices to consider
- -i *osd_id*
- Operate as OSD osd_id. Connects to the monitor for OSD-specific options. If the monitor is unavailable, add --no-mon-config to read from ceph.conf instead.
- --devs-source *device*
- Add device to the list of devices to consider as sources for migrate operation
- --dev-target *device*
- Specify the target device for the migrate operation, or the device to add when creating a new DB/WAL.
- --path *osd path*
- Specify an osd path. In most cases, the device list is inferred from the symlinks present in osd path. This is usually simpler than explicitly specifying the device(s) with --dev. Not necessary if -i osd_id is provided.
- --out-dir *dir*
- Output directory for bluefs-export
- -l, --log-file *log file*
- file to log to
- --log-level *num*
- debug log level. Default is 30 (extremely verbose), 20 is very verbose, 10 is verbose, and 1 is not very verbose.
- --deep
- deep scrub/repair (read and validate object data, not just metadata)
- --allocator *name*
- Useful for free-dump and free-score actions. Selects allocator(s).
- --resharding-ctrl *control string*
- Provides control over the resharding process. Specifies how often the RocksDB iterator is refreshed, and how large the commit batch may grow before it is committed to RocksDB. Option format is: <iterator_refresh_bytes>/<iterator_refresh_keys>/<batch_commit_bytes>/<batch_commit_keys> Default: 10000000/10000/1000000/1000
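To illustrate the --resharding-ctrl format, the following sketch reshards RocksDB using the values from the default control string (the OSD path and the sharding specification are placeholders; the sharding shown follows the BlueStore default, but verify it for your release):

```shell
# Inspect the sharding currently applied, then reshard with explicit control:
# refresh the iterator every 10 MB or 10000 keys; commit every 1 MB or 1000 keys.
ceph-bluestore-tool show-sharding --path /var/lib/ceph/osd/ceph-0
ceph-bluestore-tool reshard --path /var/lib/ceph/osd/ceph-0 \
    --sharding "m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" \
    --resharding-ctrl 10000000/10000/1000000/1000
```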
ADDITIONAL CEPH.CONF OPTIONS
Any configuration option accepted by the OSD can also be passed to ceph-bluestore-tool. This is useful for providing required configuration options when access to the monitor/ceph.conf is impossible and the -i option cannot be used.
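For example, an OSD configuration option can be supplied directly on the command line (the option and value below are illustrative; --no-mon-config skips monitor access):

```shell
# Run fsck with an OSD option passed in-line instead of via monitor/ceph.conf.
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 \
    --no-mon-config --bluefs_buffered_io=false
```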
DEVICE LABELS
Every BlueStore block device has a block label at the beginning of the device. The main device may optionally have additional labels at different locations for the sake of OSD robustness. You can dump the contents of the label with:
ceph-bluestore-tool show-label --dev *device*
The main device will have a lot of metadata, including information that used to be stored in small files in the OSD data directory. The auxiliary devices (db and wal) will only have the minimum required fields (OSD UUID, size, device type, birth time). The main device contains additional label copies at offsets: 1GiB, 10GiB, 100GiB and 1000GiB. Corrupted labels are fixed as part of repair:
ceph-bluestore-tool repair --dev *device*
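Given the backup offsets listed above, individual label copies on the main device can be inspected with show-label-at (a sketch; the device path is a placeholder, and the offset is assumed to be in bytes):

```shell
# Primary label at the start of the device.
ceph-bluestore-tool show-label --dev /dev/sdb

# Backup label copy at the 1 GiB offset (1073741824 bytes).
ceph-bluestore-tool show-label-at --dev /dev/sdb --offset 1073741824
```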
OSD DIRECTORY PRIMING
You can generate the content for an OSD data directory that can start up a BlueStore OSD with the prime-osd-dir command:
ceph-bluestore-tool prime-osd-dir --dev *main device* --path /var/lib/ceph/osd/ceph-*id*
BLUEFS LOG RESCUE
Some versions of BlueStore were susceptible to the BlueFS log growing extremely large, to the point where booting the OSD becomes impossible. This state is indicated by a boot that takes very long and fails in the _replay function.
This can be fixed by:
ceph-bluestore-tool fsck --path osd path --bluefs_replay_recovery=true
It is advised to first check whether the rescue process would be successful:
ceph-bluestore-tool fsck --path osd path --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true
If the above fsck is successful, the fix procedure can be applied.
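Putting the two invocations in the advised order, a minimal rescue sketch (the OSD path is a placeholder; the OSD must be stopped):

```shell
OSD_PATH=/var/lib/ceph/osd/ceph-2   # placeholder

# First, verify that replay recovery would succeed without touching the log.
if ceph-bluestore-tool fsck --path "$OSD_PATH" \
      --bluefs_replay_recovery=true \
      --bluefs_replay_recovery_disable_compact=true
then
    # Check passed: apply the actual fix (the BlueFS log gets compacted).
    ceph-bluestore-tool fsck --path "$OSD_PATH" --bluefs_replay_recovery=true
fi
```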
AVAILABILITY
ceph-bluestore-tool is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at https://docs.ceph.com for more information.
SEE ALSO
ceph-osd(8)
COPYRIGHT
2010-2014, Inktank Storage, Inc. and contributors. Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)
| December 17, 2025 | dev |