Feature #13942
ceph-disk: support bluestore
Status: Resolved
Priority: High
Assignee:
Category: -
Target version: 12/01/2015
% Done: 0%
Source: other
Tags:
Backport:
Reviewed:
Affected Versions:
Description
bluestore (newstore) is based on a very small file system (with osd metadata, like the keyring, features, etc.) and one or more block devices.
These devices are symlinked from the data directory, similar to how 'journal' is a symlink for the current FileStore.
ceph-disk create:
- create a small partition for osd_data
- create a large partition (remainder of disk, by default) for data
- symlink from $osd_data/block
- [optional] create a mid-size partition for metadata (rocksdb).
user probably needs to specify this, since it'll probably be 1/Nth of their available SSD space on the host.
- symlink from $osd_data/block.db
- [optional] create a small partition for the write-ahead-log (basically the journal).
default size of 128MB is sufficient.
- symlink from $osd_data/block.wal
(note that block.db is preferable to block.wal as the space will be used for both the wal and sst files.
both would be used if the host has HDD, SSD, and NVME or NVRAM.)
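The symlink layout that prepare would leave behind can be sketched in Python; the function name and arguments here are hypothetical, not the real ceph-disk code, which also has to create the partitions themselves:

```python
import os

def link_block_devices(osd_data, block_dev, db_dev=None, wal_dev=None):
    """Create the bluestore symlinks described above inside osd_data.

    db_dev and wal_dev are optional, mirroring the optional
    block.db (rocksdb) and block.wal (write-ahead-log) partitions.
    """
    os.symlink(block_dev, os.path.join(osd_data, 'block'))
    if db_dev:
        os.symlink(db_dev, os.path.join(osd_data, 'block.db'))
    if wal_dev:
        os.symlink(wal_dev, os.path.join(osd_data, 'block.wal'))
```

With only a main device, just the `block` symlink is created, matching the simplest layout above.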
ceph-disk activate:
I think we can fully generalize this to re-use the journal UUID for any subsidiary block device (s/journal/block/ or similar).
Then, make activate simply require that all symlinks in $osd_data resolve to devices before activating the OSD.
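That "all symlinks resolve" precondition could be sketched as follows; this is an illustrative check, not the actual activate code:

```python
import os

def devices_ready(osd_data):
    """Return True if every symlink in osd_data (block, block.db,
    block.wal, journal, ...) resolves to something that exists,
    i.e. the subsidiary devices have all shown up."""
    for name in os.listdir(osd_data):
        path = os.path.join(osd_data, name)
        if not os.path.islink(path):
            continue  # regular files like 'fsid' or 'whoami'
        if not os.path.exists(os.path.realpath(path)):
            return False  # dangling symlink: device not present yet
    return True
```

activate would then simply wait (or defer to the next udev event) until this returns True before starting the OSD.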
The missing piece is that ceph-disk needs to figure out the uuid from a journal device in order to map it back to the parent osd_data device.
Right now it does
out = _check_output(
    args=[
        'ceph-osd',
        '-i', '0',            # this is ignored
        '--get-journal-uuid',
        '--osd-journal',
        journal,              # path to the journal device
        ],
    close_fds=True,
    )
but I think we need to replace this with some generic-ish way of identifying which OSD the device belongs to.
For bluestore I can just stuff the uuid in the first block of the device?
And then we can make a --get-device-uuid command that either parses the FileJournal header or a bluestore first-block-has-uuid header?
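The "uuid in the first block" idea could look something like this sketch; the magic string, block size, and function names are invented for illustration and are not the real bluestore on-disk header:

```python
import uuid

BLOCK_SIZE = 4096                  # assumed size of the stamped first block
MAGIC = b'osd-device-uuid:'        # hypothetical label, 16 bytes

def write_device_uuid(dev_path, dev_uuid):
    """Stamp a uuid into the first block of the device at mkfs time."""
    buf = MAGIC + dev_uuid.bytes
    buf += b'\0' * (BLOCK_SIZE - len(buf))
    with open(dev_path, 'r+b') as f:
        f.write(buf)

def read_device_uuid(dev_path):
    """Counterpart of a hypothetical --get-device-uuid command:
    return the uuid from the first block, or None if unlabeled."""
    with open(dev_path, 'rb') as f:
        block = f.read(BLOCK_SIZE)
    if not block.startswith(MAGIC):
        return None
    return uuid.UUID(bytes=block[len(MAGIC):len(MAGIC) + 16])
```

A real --get-device-uuid would additionally need the FileJournal-header fallback mentioned above, so the same command works for both object stores.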
Related issues
Blocked by Ceph - : bluestore broken in current master
Updated by
Status changed from New to Verified
fixes the block device probing part
Updated by
Status changed from Verified to In Progress
Assignee set to Loic Dachary
Updated by
Status changed from In Progress to Verified
Assignee deleted (Loic Dachary)
For the 'ceph-disk prepare' part, I think we should keep it simple initially:
ceph-disk --osd-objectstore bluestore maindev[:dbdev[:waldev]]
and teach ceph-disk how to do the partitioning for bluestore (no generic way to ask ceph-osd that).
We can leave off the db/wal devices initially, and then make activate work, so that there is something functional.
Then add dbdev and waldev support last.
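Parsing the proposed maindev[:dbdev[:waldev]] argument is straightforward; a minimal sketch (the function name is made up, and real device specs would also need validation):

```python
def parse_bluestore_devs(spec):
    """Split 'maindev[:dbdev[:waldev]]' into (main, db, wal).

    Missing optional parts come back as None, matching the plan of
    supporting the main device first and db/wal devices later.
    """
    parts = spec.split(':')
    if not 1 <= len(parts) <= 3:
        raise ValueError('expected maindev[:dbdev[:waldev]]')
    parts += [None] * (3 - len(parts))
    return tuple(parts)
```

So '/dev/sdb' yields only a main device, while '/dev/sdb:/dev/ssd1' adds a db device.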
Updated by
encryption support will need to extend to block as well as osd-data since the data is no longer in the osd-data partition
Updated by
Status changed from Verified to In Progress
Assignee set to Loic Dachary
Updated by
Assignee deleted (Loic Dachary)
Updated by
Assignee set to Loic Dachary
Updated by
Updated by
Rebase to master complete, make check passes, working on ceph-disk suite problems now.
Updated by
bluestore fails to initialize on a ceph-disk prepared device (no external journal).
Updated by
ceph.conf has
enable experimental unrecoverable data corrupting features = *
bluestore fsck on mount = true
bluestore block db size =
bluestore block wal size =
bluestore block size =
osd objectstore = bluestore
ceph-disk prepare + activate via udev lead to /var/lib/ceph/osd/ceph-2
-rw-r--r--. 1 root root 187 Jan 28 06:27 activate.monmap
-rw-r--r--. 1 ceph ceph   3 Jan 28 06:27 active
lrwxrwxrwx. 1 ceph ceph  58 Jan 28 06:27 block -> /dev/disk/by-partuuid/1-a21d3d78c6e
-rw-r--r--. 1 ceph ceph      Jan 28 06:27 block.db
-rw-r--r--. 1 ceph ceph  37 Jan 28 06:27 block_uuid
-rw-r--r--. 1 ceph ceph      Jan 28 06:27 block.wal
-rw-r--r--. 1 ceph ceph   2 Jan 28 06:27 bluefs
-rw-r--r--. 1 ceph ceph  37 Jan 28 06:27 ceph_fsid
-rw-r--r--. 1 ceph ceph  37 Jan 28 06:27 fsid
-rw-------. 1 ceph ceph  56 Jan 28 06:27 keyring
-rw-r--r--. 1 ceph ceph   8 Jan 28 06:27 kv_backend
-rw-r--r--. 1 ceph ceph  21 Jan 28 06:27 magic
-rw-r--r--. 1 ceph ceph   6 Jan 28 06:27 ready
-rw-r--r--. 1 root root   0 Jan 28 06:27 systemd
-rw-r--r--. 1 ceph ceph  10 Jan 28 06:27 type
-rw-r--r--. 1 ceph ceph   2 Jan 28 06:27 whoami
which shows as expected with ceph-disk list
/dev/vda :
/dev/vda1 other, xfs, mounted on /
/dev/vdb :
/dev/vdb3 ceph block, for /dev/vdb1
/dev/vdb1 ceph data, active, cluster ceph, osd.2, block /dev/vdb3
/dev/vdc other, unknown
/dev/vdd other, unknown
but the osd fails with
07:03:50.f9278afc7c0  0 ceph version 10.0.2-1092-gffcedda (ffcedda1c4986ab66bbf4dc70fe89), process ceph-osd, pid 27432
07:03:50.f9278afc7c0  5 object store type is bluestore
07:03:50.f9278afc7c0 -1 WARNING: experimental feature 'bluestore' is enabled
Please be aware that this feature is experimental, untested,
unsupported, and may result in data corruption, data loss,
and/or irreparable damage to your cluster. Do not use
this feature with important data.
07:03:50.f9278afc7c0  1 accepter.accepter.bind my_inst.addr is 0.0.0.0: need_addr=1
07:03:50.f9278afc7c0  1 accepter.accepter.bind my_inst.addr is 0.0.0.0: need_addr=1
07:03:50.f9278afc7c0  1 accepter.accepter.bind my_inst.addr is 0.0.0.0: need_addr=1
07:03:50.f9278afc7c0  1 accepter.accepter.bind my_inst.addr is 0.0.0.0: need_addr=1
07:03:50.f9278afc7c0 -1 write_pid_file: failed to open pid file 'osd.2.pid': (13) Permission denied
07:03:50.f9278afc7c0 -1 WARNING: the following dangerous and experimental features are enabled: *
07:03:50.f9278afc7c0 10 ErasureCodePluginSelectJerasure: load: jerasure_sse4
07:03:50.f9278afc7c0 10 load: jerasure load: lrc load: isa
07:03:50.f9278afc7c0  1 bluestore(/var/lib/ceph/osd/ceph-2) _open_path using fs driver 'generic'
07:03:50.f9278afc7c0  1 -- 0.0.0.0: messenger.start
07:03:50.f9278afc7c0  1 -- :/0 messenger.start
07:03:50.f9278afc7c0  1 -- 0.0.0.0: messenger.start
07:03:50.f9278afc7c0  1 -- 0.0.0.0: messenger.start
07:03:50.f9278afc7c0  1 -- 0.0.0.0: messenger.start
07:03:50.f9278afc7c0  1 -- :/0 messenger.start
07:03:50.f9278afc7c0  2 osd.2 0 mounting /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
07:03:50.f9278afc7c0  1 bluestore(/var/lib/ceph/osd/ceph-2) mount path /var/lib/ceph/osd/ceph-2
07:03:50.f9278afc7c0  1 bluestore(/var/lib/ceph/osd/ceph-2) fsck
07:03:50.f9278afc7c0  1 bluestore(/var/lib/ceph/osd/ceph-2) _open_path using fs driver 'generic'
07:03:50.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
07:03:50.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block) open size (10240 MB) block_size
07:03:50.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block.db) open path /var/lib/ceph/osd/ceph-2/block.db
07:03:50.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block.db) open size (65536 kB) block_size
07:03:50.f9278afc7c0  1 bluefs add_block_device bdev 0 path /var/lib/ceph/osd/ceph-2/block.db size 65536 kB
07:03:50.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
07:03:50.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block) open size (10240 MB) block_size
07:03:50.f9278afc7c0  1 bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 10240 MB
07:03:50.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block.wal) open path /var/lib/ceph/osd/ceph-2/block.wal
07:03:50.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block.wal) open size (128 MB) block_size
07:03:50.f9278afc7c0  1 bluefs add_block_device bdev 2 path /var/lib/ceph/osd/ceph-2/block.wal size 128 MB
07:03:50.f9278afc7c0  1 bluefs mount
07:03:50.f9278afc7c0 -1 WARNING: experimental feature 'rocksdb' is enabled
Please be aware that this feature is experimental, untested,
unsupported, and may result in data corruption, data loss,
and/or irreparable damage to your cluster. Do not use
this feature with important data.
07:03:50.f9278afc7c0 set rocksdb option compression = kNoCompression
07:03:50.f9278afc7c0 set rocksdb option max_write_buffer_number = 16
07:03:50.f9278afc7c0 set rocksdb option min_write_buffer_number_to_merge = 3
07:03:50.f9278afc7c0 set rocksdb option recycle_log_file_num = 16
07:03:50.f9278afc7c0 set rocksdb option compression = kNoCompression
07:03:50.f9278afc7c0 set rocksdb option max_write_buffer_number = 16
07:03:50.f9278afc7c0 set rocksdb option min_write_buffer_number_to_merge = 3
07:03:50.f9278afc7c0 set rocksdb option recycle_log_file_num = 16
07:03:50.f9278afc7c0  4 rocksdb: RocksDB version: 4.3.0
07:03:50.f9278afc7c0  4 rocksdb: Git sha rocksdb_build_git_sha:
07:03:50.f9278afc7c0  4 rocksdb: Compile date Jan 27 2016
07:03:50.f9278afc7c0  4 rocksdb: DB SUMMARY
07:03:50.f9278afc7c0  4 rocksdb: CURRENT file:
07:03:50.f9278afc7c0  4 rocksdb: IDENTITY file:
07:03:50.f9278afc7c0  4 rocksdb: MANIFEST file: MANIFEST-000008 size: 110 Bytes
07:03:50.f9278afc7c0  2 rocksdb: Error when reading /var/lib/ceph/osd/ceph-2/db dir
07:03:50.f9278afc7c0  2 rocksdb: Error when reading /var/lib/ceph/osd/ceph-2/db.slow dir
07:03:50.f9278afc7c0  4 rocksdb: Write Ahead Log file in db.wal: 000009.log size: 253 ;
07:03:50.f9278afc7c0  4 rocksdb: Options.error_if_exists: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.create_if_missing: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.paranoid_checks: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.env: 0x7f
07:03:50.f9278afc7c0  4 rocksdb: Options.info_log: 0x7f
07:03:50.f9278afc7c0  4 rocksdb: Options.max_open_files: 5000
07:03:50.f9278afc7c0  4 rocksdb: Options.max_file_opening_threads: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_total_wal_size: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.disableDataSync: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.use_fsync: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.max_log_file_size: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.max_manifest_file_size:
07:03:50.f9278afc7c0  4 rocksdb: Options.log_file_time_to_roll: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.keep_log_file_num: 1000
07:03:50.f9278afc7c0  4 rocksdb: Options.recycle_log_file_num: 16
07:03:50.f9278afc7c0  4 rocksdb: Options.allow_os_buffer: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.allow_mmap_reads: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.allow_fallocate: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.allow_mmap_writes: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.create_missing_column_families: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.db_log_dir:
07:03:50.f9278afc7c0  4 rocksdb: Options.wal_dir: db.wal
07:03:50.f9278afc7c0  4 rocksdb: Options.table_cache_numshardbits: 4
07:03:50.f9278afc7c0  4 rocksdb: Options.delete_obsolete_files_period_micros:
07:03:50.f9278afc7c0  4 rocksdb: Options.max_background_compactions: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_subcompactions: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_background_flushes: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.WAL_ttl_seconds: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.WAL_size_limit_MB: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.manifest_preallocation_size: 4194304
07:03:50.f9278afc7c0  4 rocksdb: Options.allow_os_buffer: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.allow_mmap_reads: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.allow_mmap_writes: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.is_fd_close_on_exec: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.stats_dump_period_sec: 600
07:03:50.f9278afc7c0  4 rocksdb: Options.advise_random_on_open: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.db_write_buffer_size: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.access_hint_on_compaction_start: NORMAL
07:03:50.f9278afc7c0  4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_readahead_size: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.random_access_max_buffer_size: 1048576
07:03:50.f9278afc7c0  4 rocksdb: Options.writable_file_max_buffer_size: 1048576
07:03:50.f9278afc7c0  4 rocksdb: Options.use_adaptive_mutex: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.rate_limiter: (nil)
07:03:50.f9278afc7c0  4 rocksdb: Options.delete_scheduler.rate_bytes_per_sec: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.bytes_per_sync: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.wal_bytes_per_sync: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.wal_recovery_mode: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.enable_thread_tracking: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.row_cache: None
07:03:50.f9278afc7c0  4 rocksdb: Options.wal_filter: None
07:03:50.f9278afc7c0  4 rocksdb: Compression algorithms supported:
07:03:50.f9278afc7c0  4 rocksdb: Snappy supported: 1
07:03:50.f9278afc7c0  4 rocksdb: Zlib supported: 1
07:03:50.f9278afc7c0  4 rocksdb: Bzip supported: 0
07:03:50.f9278afc7c0  4 rocksdb: LZ4 supported: 0
07:03:50.f9278afc7c0  4 rocksdb: Fast CRC32 supported: 0
07:03:50.f9278afc7c0  4 rocksdb: Recovering from manifest file: MANIFEST-000008
07:03:50.f9278afc7c0  4 rocksdb: --------------- Options for column family [default]:
07:03:50.f9278afc7c0  4 rocksdb: Options.comparator: rocksdb.InternalKeyComparator:leveldb.BytewiseComparator
07:03:50.f9278afc7c0  4 rocksdb: Options.merge_operator: None
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_filter: None
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_filter_factory: None
07:03:50.f9278afc7c0  4 rocksdb: Options.memtable_factory: SkipListFactory
07:03:50.f9278afc7c0  4 rocksdb: Options.table_factory: BlockBasedTable
07:03:50.f9278afc7c0  4 rocksdb: table_factory options:
flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x7ff0)
cache_index_and_filter_blocks: 0
index_type: 0
hash_index_allow_collision: 1
checksum: 1
no_block_cache: 0
block_cache: 0x7f
block_cache_size: 8388608
block_cache_compressed: (nil)
block_size: 4096
block_size_deviation: 10
block_restart_interval: 16
filter_policy: nullptr
whole_key_filtering: 1
skip_table_builder_flush: 0
format_version: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.write_buffer_size: 4194304
07:03:50.f9278afc7c0  4 rocksdb: Options.max_write_buffer_number: 16
07:03:50.f9278afc7c0  4 rocksdb: Options.compression: NoCompression
07:03:50.f9278afc7c0  4 rocksdb: Options.prefix_extractor: nullptr
07:03:50.f9278afc7c0  4 rocksdb: Options.num_levels: 7
07:03:50.f9278afc7c0  4 rocksdb: Options.min_write_buffer_number_to_merge: 3
07:03:50.f9278afc7c0  4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.compression_opts.window_bits: -14
07:03:50.f9278afc7c0  4 rocksdb: Options.compression_opts.level: -1
07:03:50.f9278afc7c0  4 rocksdb: Options.compression_opts.strategy: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.level0_file_num_compaction_trigger: 4
07:03:50.f9278afc7c0  4 rocksdb: Options.level0_slowdown_writes_trigger: 20
07:03:50.f9278afc7c0  4 rocksdb: Options.level0_stop_writes_trigger: 24
07:03:50.f9278afc7c0  4 rocksdb: Options.target_file_size_base: 2097152
07:03:50.f9278afc7c0  4 rocksdb: Options.target_file_size_multiplier: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_bytes_for_level_base:
07:03:50.f9278afc7c0  4 rocksdb: Options.level_compaction_dynamic_level_bytes: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.max_bytes_for_level_multiplier: 10
07:03:50.f9278afc7c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_sequential_skip_in_iterations: 8
07:03:50.f9278afc7c0  4 rocksdb: Options.expanded_compaction_factor: 25
07:03:50.f9278afc7c0  4 rocksdb: Options.source_compaction_factor: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.max_grandparent_overlap_factor: 10
07:03:50.f9278afc7c0  4 rocksdb: Options.arena_block_size: 524288
07:03:50.f9278afc7c0  4 rocksdb: Options.soft_pending_compaction_bytes_limit: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.hard_pending_compaction_bytes_limit: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.rate_limit_delay_max_milliseconds: 1000
07:03:50.f9278afc7c0  4 rocksdb: Options.disable_auto_compactions: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.filter_deletes: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.verify_checksums_in_compaction: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_style: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_pri: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_options_universal.size_ratio: 1
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_options_universal.max_merge_width:
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_options_fifo.max_table_files_size:
07:03:50.f9278afc7c0  4 rocksdb: Options.table_properties_collectors:
07:03:50.f9278afc7c0  4 rocksdb: Options.inplace_update_support: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.inplace_update_num_locks: 10000
07:03:50.f9278afc7c0  4 rocksdb: Options.min_partial_merge_operands: 2
07:03:50.f9278afc7c0  4 rocksdb: Options.memtable_prefix_bloom_bits: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.memtable_prefix_bloom_probes: 6
07:03:50.f9278afc7c0  4 rocksdb: Options.memtable_prefix_bloom_huge_page_tlb_size: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.bloom_locality: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.max_successive_merges: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.optimize_fllters_for_hits: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.paranoid_file_checks: 0
07:03:50.f9278afc7c0  4 rocksdb: Options.compaction_measure_io_stats: 0
07:03:50.f9278afc7c0  2 rocksdb: Unable to load table properties for file 4 --- NotFound:
07:03:50.f9278afc7c0  4 rocksdb: Recovered from manifest file:db/MANIFEST-000008 succeeded,manifest_file_number is 8, next_file_number is 10, last_sequence is 2, log_number is 0,prev_log_number is 0,max_column_family is 0
07:03:50.f9278afc7c0  4 rocksdb: Column family [default] (ID 0), log number is 7
07:03:50.f9278afc7c0 -1 rocksdb: Corruption: Can't access /000004.sst: NotFound:
07:03:50.f9278afc7c0 -1 bluestore(/var/lib/ceph/osd/ceph-2) _open_db erroring opening db:
07:03:50.f9278afc7c0  1 bluefs umount
07:03:50.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block.db) close
07:03:51.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block) close
07:03:51.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block.wal) close
07:03:51.f9278afc7c0  1 bdev(/var/lib/ceph/osd/ceph-2/block) close
07:03:51.f9278afc7c0 -1 osd.2 0 OSD:init: unable to mount object store
07:03:51.f9278afc7c0 -1  ** ERROR: osd init failed: (5) Input/output error
Updated by
Now fails with
ceph.conf has
enable experimental unrecoverable data corrupting features = *
bluestore fsck on mount = true
bluestore block size =
osd objectstore = bluestore
the data was populated with
-rw-r--r--. 1 root root 187 Jan 29 06:18 activate.monmap
lrwxrwxrwx. 1 ceph ceph  58 Jan 29 06:18 block -> /dev/disk/by-partuuid/f04cc152-13bd-4ef0-b4c1-940d564cfa58
-rw-r--r--. 1 ceph ceph  37 Jan 29 06:18 block_uuid
-rw-r--r--. 1 ceph ceph   2 Jan 29 06:18 bluefs
-rw-r--r--. 1 ceph ceph  37 Jan 29 06:18 ceph_fsid
-rw-r--r--. 1 ceph ceph  37 Jan 29 06:18 fsid
-rw-r--r--. 1 ceph ceph   8 Jan 29 06:18 kv_backend
-rw-r--r--. 1 ceph ceph  21 Jan 29 06:18 magic
-rw-r--r--. 1 ceph ceph  10 Jan 29 06:18 type
-rw-r--r--. 1 ceph ceph   2 Jan 29 06:18 whoami
where the block symlink was done by ceph-disk, not ceph-osd mkfs.
command_check_call(
    [
        'ceph-osd',
        '--cluster', cluster,
        '--mkkey',
        '-i', osd_id,
        '--monmap', monmap,
        '--osd-data', path,
        '--osd-uuid', fsid,
        '--keyring', os.path.join(path, 'keyring'),
        '--setuser', get_ceph_user(),
        '--setgroup', get_ceph_user(),
    ],
)
# ceph-disk list
/dev/vda :
/dev/vda1 other, xfs, mounted on /
/dev/vdb :
/dev/vdb3 ceph block, for /dev/vdb1
/dev/vdb1 ceph data, active, cluster ceph, osd.2, block /dev/vdb3
/dev/vdc other, unknown
/dev/vdd other, unknown
# sgdisk --print /dev/vdb
Disk /dev/vdb: sectors, 10.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): CADDC7C-21543
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Start (sector)
End (sector)
ceph block
Updated by
<frickler> loicd: regarding http://tracker.ceph.com/issues/13942#note-12, I'm seeing the same error in current master with my CBT-based testing
<frickler> loicd: jewel is working fine for me, however, at least in that regard
<loicd> frickler: ah, interesting! thanks for sharing. Did you ask sage about it?
<frickler> loicd: not yet, I just tested that reverting https://github.com/ceph/ceph/pull/7223 seems to fix it, though
<loicd> frickler: good intel :-)
Updated by
Blocked by : bluestore broken in current master added
Updated by
Status changed from In Progress to Resolved