
ceph-base-16.2.14.66+g7aa6ce9419f-3.2 RPM for ppc64le

From openSUSE Ports Tumbleweed for ppc64le

Name: ceph-base Distribution: openSUSE Tumbleweed
Version: 16.2.14.66+g7aa6ce9419f Vendor: openSUSE
Release: 3.2 Build date: Sat Feb 3 04:31:25 2024
Group: System/Filesystems Build host: obs-power9-12
Size: 44813054 Source RPM: ceph-16.2.14.66+g7aa6ce9419f-3.2.src.rpm
Packager: http://bugs.opensuse.org
Url: http://ceph.com/
Summary: Ceph Base Package
ceph-base is the package that includes all the files shared amongst Ceph servers
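The Version field above encodes the upstream base release, the number of commits applied on top of it, and the abbreviated git commit (here 16.2.14, 66 commits, commit 7aa6ce9419f — matching the "Update to 16.2.14-66-g7aa6ce9419f" changelog entry below). A minimal shell sketch, assuming this `<base>.<ncommits>+g<hash>` layout, splitting such a string into its parts:

```shell
# Split an openSUSE ceph version string of the form <base>.<ncommits>+g<hash>
ver="16.2.14.66+g7aa6ce9419f"
upstream="${ver%%+*}"          # strip the +g<hash> suffix -> 16.2.14.66
commit="${ver#*+g}"            # keep only the commit hash -> 7aa6ce9419f
base="${upstream%.*}"          # drop the trailing commit count -> 16.2.14
ncommits="${upstream##*.}"     # commit count on top of the base -> 66
echo "$base $ncommits $commit"
```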

License

LGPL-2.1 and LGPL-3.0 and CC-BY-SA-3.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT

Changelog

* Thu Jan 25 2024 Dominique Leuenberger <dimstar@opensuse.org>
  - Advertised user/groups that are generated by the pre scripts:
    * package cephadm generates user/group cephadm
    * package ceph-common generates user/group ceph
* Tue Dec 19 2023 Dominique Leuenberger <dimstar@opensuse.org>
  - Add ceph-cmake-3.28.patch: Fix build with cmake 3.28 and no git
    command found (https://github.com/ceph/ceph/pull/54963,
    boo#1218111).
* Mon Sep 11 2023 Tim Serong <tserong@suse.com>
  - Update to 16.2.14-66-g7aa6ce9419f:
    + (bsc#1207765) rgw/rados: check_quota() uses real bucket owner
    + (bsc#1212559) pacific: os/bluestore: cumulative bluefs backport
      This notably includes:
    * os/bluestore: BlueFS fine grain locking
    * os/bluestore/bluefs: Fix improper vselector tracking in _flush_special()
    * os/bluestore: enable 4K allocation unit for BlueFS
    * os/bluestore/bluefs: Fix sync compaction
    + (bsc#1213217) ceph.spec.in: Require fmt-devel < 10
    + ceph.spec.in: enable build on riscv64 for openSUSE Factory
    + ceph.spec.in: Require Cython >= 0.29 but < 3
    + cephadm: update to the latest container images:
    * registry.suse.com/ses/7.1/ceph/prometheus-server:2.37.6
    * registry.suse.com/ses/7.1/ceph/prometheus-node-exporter:1.5.0
    * registry.suse.com/ses/7.1/ceph/grafana:8.5.22
    * registry.suse.com/ses/7.1/ceph/haproxy:2.0.31
  - Drop ceph-test.changes (no longer necessary since using _multibuild)
* Wed Aug 16 2023 Ana Guerrero <ana.guerrero@suse.com>
  - restrict to older Cython
* Tue Jun 27 2023 Tim Serong <tserong@suse.com>
  - Remove _constraints file, add README-constraints.txt and pre_checkin.env
* Wed May 24 2023 Tim Serong <tserong@suse.com>
  - Add "#!BuildConstraint" to spec files for compatibility with _multibuild
* Thu May 11 2023 Tim Serong <tserong@suse.com>
  - Update to 16.2.13-66-g54799ee0666:
    + (bsc#1199880) mgr: don't dump global config holding gil
    + (bsc#1209621) cephadm: fix NFS haproxy failover if active node disappears
    + (bsc#1210153) mgr/cephadm: fix handling of mgr upgrades with 3 or more mgrs
    + (bsc#1210243, bsc#1210314) ceph-volume: fix regression in activate
    + (bsc#1210719) cephadm: mount host /etc/hosts for daemon containers in podman deployments
    + (bsc#1210784) mgr/dashboard: Fix SSO error: 'str' object has no attribute 'decode'
    + (bsc#1210944) cmake: patch boost source to support python 3.11
    + (bsc#1211090) fix FTBFS on s390x
* Thu May 04 2023 Frederic Crozat <fcrozat@suse.com>
  - Add _multibuild to define additional spec files as additional
    flavors.  Eliminates the need for source package links in OBS.
* Tue Mar 28 2023 Tim Serong <tserong@suse.com>
  - Update to 16.2.11-65-g8b7e6fc0182:
    + (bsc#1201088) test/librados: fix FTBFS on gcc 13
    + (bsc#1208820) mgr/dashboard: allow to pass controls on iscsi disk create
* Fri Mar 10 2023 Tim Serong <tserong@suse.com>
  - Update to 16.2.11-62-gce6291a3463:
    + (bsc#1201088) fix FTBFS on gcc 13
* Tue Feb 07 2023 Tim Serong <tserong@suse.com>
  - Update to 16.2.11-58-g38d6afd3b78:
    + test/CMakeLists.txt: move 'APPEND rgw_libs Boost::filesystem' to top level
* Fri Jan 27 2023 Tim Serong <tserong@suse.com>
  - Update to 16.2.11-57-g9be7fb44a33:
    + ceph.spec.in: Replace %usrmerged macro with regular version check
  - checkin.sh: default to ses7p branch
* Fri Jan 27 2023 Michael Fritch <michael.fritch@suse.com>
  - Update to 16.2.11-56-gc067055f8f5:
    + (bsc#1199183) osd, tools, kv: non-aggressive, on-line trimming of accumulated dups
    + (bsc#1200262) ceph-volume: fix fast device alloc size on multiple device
    + (bsc#1200501) cephadm: update monitoring container images
    + (bsc#1200978) mgr/dashboard: prevent alert redirect
    + (bsc#1201797) mgr/volumes: Add subvolumegroup resize cmd
    + (bsc#1201837) mgr/volumes: Fix subvolume discover during upgrade (CVE-2022-0670)
    + (bsc#1201976) monitoring/ceph-mixin: add RGW host to label info
    + (bsc#1202077) mgr/dashboard: enable addition of custom Prometheus alerts
    + (bsc#1203375) python-common: Add 'KB' to supported suffixes in SizeMatcher
    + (bsc#1204430) ceph-crash: drop privileges to run as "ceph" user, rather than root (CVE-2022-3650)
    + (bsc#1205025) rgw: Guard against malformed bucket URLs (CVE-2022-3854)
    + (bsc#1205436) mgr/dashboard: fix rgw connect when using ssl
* Thu Oct 06 2022 Tim Serong <tserong@suse.com>
  - Update to 16.2.9-539-gea74dd900cd:
    + (bsc#1202292) ceph.spec.in: Add -DFMT_DEPRECATED_OSTREAM to CXXFLAGS
* Tue Jul 26 2022 Tim Serong <tserong@suse.com>
  - Update to 16.2.9-538-g9de83fa4064:
    + (bsc#1201604) cephfs-shell: move source to separate subdirectory
* Wed Jul 13 2022 Tim Serong <tserong@suse.com>
  - Update to 16.2.9-536-g41a9f9a5573:
    + (bsc#1195359, bsc#1200553) rgw: check bucket shard init status in RGWRadosBILogTrimCR
    + (bsc#1194131) ceph-volume: honour osd_dmcrypt_key_size option (CVE-2021-3979)
    + (bsc#1196046) mgr/cephadm: try to get FQDN for configuration files
* Thu Jun 09 2022 Tim Serong <tserong@suse.com>
  - Update to 16.2.9-158-gd93952c7eea:
    + cmake: check for python(\d)\.(\d+) when building boost
    + make-dist: patch boost source to support python 3.10
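The first fix in the entry above has cmake match interpreter names against a pattern like `python(\d)\.(\d+)` instead of a hard-coded version, so boost builds against whatever Python is present. A hedged Python sketch of that matching idea (only the pattern comes from the changelog entry; the surrounding function is illustrative):

```python
import re

# Pattern from the changelog entry: one major digit, a dot, one-or-more minor digits
PY_RE = re.compile(r"python(\d)\.(\d+)$")

def parse_python_name(name):
    """Return (major, minor) for names like 'python3.10', else None."""
    m = PY_RE.match(name)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(parse_python_name("python3.10"))  # (3, 10)
print(parse_python_name("python3"))     # None (no minor version)
```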
* Thu Jun 02 2022 Stefen Allen <stefen.allen@suse.com>
  - Update to ceph-16.2.9-58-ge2e5cb80063:
    + (bsc#1200064, pr#480) Remove last vestiges of docker.io image paths
* Mon May 23 2022 Michael Fritch <michael.fritch@suse.com>
  - Update to 16.2.9.50-g7d9f12156fb:
    + (jsc#SES-2515) High-availability NFS export
    + (bsc#1196044) cephadm: prometheus: The generatorURL in alerts is only using hostname
    + (bsc#1196785) cephadm: avoid crashing on expected non-zero exit
    + (bsc#1187748) When an RBD is mapped, it is attempted to be deployed as an OSD.
* Tue Apr 19 2022 Michael Fritch <michael.fritch@suse.com>
  - Update to 16.2.7-969-g6195a460d89
    + (jsc#SES-2515) High-availability NFS export
* Thu Mar 31 2022 Stefen Allen <stefen.allen@suse.com>
  - Update to v16.2.7-654-gd5a90ff46f0
    + (bsc#1196733) remove build directory during %clean
* Wed Mar 30 2022 Stefen Allen <stefen.allen@suse.com>
  - Update to v16.2.7-652-gf5dc462fdb5
    + (bsc#1194875) [SES7P] include/buffer: include <memory>
* Thu Mar 24 2022 Stefen Allen <stefen.allen@suse.com>
  - Update to 16.2.7-650-gd083eaa3886
    + (pr#469) cephadm: update image paths to registry.suse.com
    + (pr#468) cephadm: use snmp-notifier image from registry.suse.de
    + (pr#467) cephadm: infer the default container image during pull
    + (pr#465) mgr/cephadm: try to get FQDN for inventory address
    + Sync _constraints file for IBS and OBS
* Tue Mar 15 2022 Stefen Allen <stefen.allen@suse.com>
  - Update to 16.2.7-640-gceb23c7491b
    + (bsc#1194875) common: fix FTBFS due to dout & need_dynamic on GCC-12
    + (bsc#1196938) cephadm: preserve authorized_keys file during upgrade
* Tue Mar 08 2022 Stefen Allen <stefen.allen@suse.com>
  - Update to 16.2.7-596-g7d574789716
    + Update Prometheus Container image paths (pr #459)
    + mgr/dashboard: Fix documentation URL (pr #456)
    + mgr/dashboard: Adapt downstream branded navigation page (pr #454)
* Fri Mar 04 2022 Stefen Allen <stefen.allen@suse.com>
  - Update to 16.2.7-577-g3e3603b5dd1
    + Update prometheus-server version
* Mon Jan 10 2022 Stefen Allen <stefen.allen@suse.com>
  - Update to 16.2.7-37-gb3be69440db:
    + (bsc#1194353) Downstream branding breaks dashboard npm build
    + (bsc#1188911) OSD marked down causes wrong backfill_toofull
* Mon Nov 22 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.6-463-g22e7612f9ad:
    + (bsc#1178073) mgr/dashboard: fix downstream NFS doc links
* Wed Nov 10 2021 Nathan Cutler <ncutler@suse.com>
  - Preservation of Bugzilla, Jira and CVE citations from earlier incarnations of
    this changes file after double-checking that none of these fixes got lost in
    the pacific rebase:
    + bsc#1163764 (--container-init feature cherry-picked to octopus)
    + bsc#1170200 (mgr/dashboard: Fix for CrushMap viewer items getting compressed vertically)
    + bsc#1172926 (mgr/orchestrator: Sort 'ceph orch device ls' by host)
    + bsc#1173079 (mgr/devicehealth: device_health_metrics pool gets created even without any OSDs in the cluster)
    + bsc#1174466 (mon: have 'mon stat' output json as well)
    + bsc#1174526 (mgr/dashboard: allow getting fresh inventory data from the orchestrator)
    + bsc#1174529 (rpm: on SUSE, podman is required for cephadm to work)
    + bsc#1174644 (cephadm: log to file)
    + bsc#1175120 (downstream branding)
    + bsc#1175161 (downstream branding)
    + bsc#1175169 (downstream branding)
    + bsc#1176390 (mgr/dashboard: enable different URL for users of browser to Grafana)
    + bsc#1176451 (Drop patch "rpm: on SUSE, podman is required for cephadm to work")
    + bsc#1176489 (mgr/cephadm: lock multithreaded access to OSDRemovalQueue)
    + bsc#1176499 (mgr/cephadm: fix RemoveUtil.load_from_store())
    + bsc#1176638 (ceph-volume: batch: call the right prepare method)
    + bsc#1176679 (mgr/dashboard: enable different URL for users of browser to Grafana)
    + bsc#1176828 (cephadm: command_unit: call systemctl with verbose=True)
    + bsc#1177078 (mgr/dashboard: Fix bugs in a unit test and i18n translation)
    + bsc#1177151 (python-common: do not skip unavailable devices)
    + bsc#1177319 (--container-init feature cherry-picked to octopus)
    + bsc#1177344 (mgr/dashboard: support Orchestrator and user-defined Ganesha cluster)
    + bsc#1177360 (cephadm: silence "Failed to evict container" log msg)
    + bsc#1177450 (ceph-volume: don't exit before empty report can be printed)
    + bsc#1177643 (Revert "spec: Podman (temporarily) requires apparmor-abstractions on suse")
    + bsc#1177676 (cephadm: allow uid/gid == 0 in copy_tree, copy_files, move_files)
    + bsc#1177843 (CVE-2020-25660)
    + bsc#1177857 (mgr/cephadm: upgrade: fail gracefully, if daemon redeploy fails)
    + bsc#1177933 (cephadm: configure journald as the logdriver)
    + bsc#1178531 (cephadm: set default container_image to registry.suse.com/ses/7/ceph/ceph)
    + bsc#1178837 (rgw: cls/user: set from_index for reset stats calls)
    + bsc#1178860 (mgr/dashboard: Disable TLS 1.0 and 1.1)
    + bsc#1178905 (CVE-2020-25678)
    + bsc#1178932 (cephadm: reference the last local image by digest)
    + bsc#1179016 (rpm: require smartmontools on SUSE)
    + bsc#1179452 (mgr/insights: Test environment requires 'six')
    + bsc#1179526 (rgw: during GC defer, prevent new GC enqueue)
    + bsc#1179569 (cephadm: reference the last local image by digest)
    + bsc#1179802 (CVE-2020-27781)
    + bsc#1179997 (CVE-2020-27839)
    + bsc#1180107 (ceph-volume: pass --filter-for-batch from drive-group subcommand)
    + bsc#1180155 (CVE-2020-27781)
    + bsc#1181291 (mgr/cephadm: alias rgw-nfs -> nfs)
    + bsc#1182766 (cephadm: fix 'inspect' and 'pull')
    + bsc#1183074 (CVE-2021-20288)
    + bsc#1183561 (mgr/cephadm: on ssh connection error, advice chmod 0600)
    + bsc#1183899 (bluestore: fix huge reads/writes at BlueFS)
    + bsc#1184231 (cephadm: Allow to use paths in all <_devices> drivegroup sections)
    + bsc#1184517 (cls/rgw: look for plain entries in non-ascii plain namespace too)
    + bsc#1185246 (rgw: check object locks in multi-object delete)
    + bsc#1185619 (CVE-2021-3524)
    + bsc#1186020 (CVE-2021-3531)
    + bsc#1186021 (CVE-2021-3509)
    + bsc#1186348 (mgr/zabbix: adapt zabbix_sender default path)
    + bsc#1188979 ('mgr/cephadm: pass --container-init to "cephadm deploy" if specified' and 'Revert "cephadm: default container_init to False"')
    + bsc#1189173 (downstream branding)
    + jsc#SES-1071 (ceph-volume: major batch refactor - upstream PR#34740)
    + jsc#SES-185 (SES support with cache software)
    + jsc#SES-704 (mgr/snap_schedule)
* Fri Nov 05 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.6-462-g5fefbbf8888:
    + rebased on top of upstream commit SHA1 dd7139c66c1d36da50475ec97d8d6b54b07d1dea
    * (bsc#1191751) rgw/tracing: unify SO version numbers within librgw2 package
    * spec: make selinux scriptlets respect CEPH_AUTO_RESTART_ON_UPGRADE
* Mon Sep 20 2021 Stefen Allen <sallen@suse.com>
  - Update to 16.2.6.45+g8fda9838398:
    + rebased on top of upstream commit SHA1 dbc87327c37d0f305c2107e487cb98a072ae858b
      upstream 16.2.6 release
      https://ceph.io/releases/v16-2-6-pacific-released/
* Thu Sep 02 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.5-504-g6a3a59bd19e:
    + rebased on top of upstream commit SHA1 0d1e1f2973cae7645126fc88a72743367c790d9d
    + (bsc#1189605) cmake: exclude "grafonnet-lib" target from "all"
* Fri Jul 30 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.5-113-g8b5bda7684e:
    + (bsc#1188741) compression/snappy: use uint32_t to be compatible with 1.1.9
      (improved version of an earlier patch that did not work as intended)
* Tue Jul 27 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.5-111-ga5b472dfcf8:
    + (bsc#1188741) compression/snappy: use uint32_t to be compatible with 1.1.9
* Thu Jul 22 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.5-110-gc5d9c915c46:
    + rebased on top of upstream commit SHA1 7feddc9819ca05586f230accd67b4e26a328e618
    + (bsc#1186348) mgr/zabbix: adapt zabbix_sender default path
* Thu Jul 08 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.5-29-g97c2c82c2f5:
    + rebased on top of upstream commit SHA1 0883bdea7337b95e4b611c768c0279868462204a
      upstream 16.2.5 release
      https://ceph.io/releases/v16-2-5-pacific-released/
    + cherry-pick fix for bsc#1188111:
    * include/denc: include used header
    * mon,osd: always init local variable
    * common/Formatter: include used header
* Sat Jun 26 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.4-564-g9689286366a:
    + rebased on top of upstream commit SHA1 e57defcbcc91e67aac958c4a52d657a7a907e8ef
* Thu Jun 24 2021 Nathan Cutler <ncutler@suse.com>
  - Update _constraints: only honor physical memory, not 'any memory'
    (e.g. swap). But then, be happy with 8GB (bumping the current
    x86_64 worker pool from 16 to 64). (Dominique Leuenberger)
* Fri May 14 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.4-26-g555d38aa5a5:
    + rebased on top of v16.2.4 tag
      https://ceph.io/releases/v16-2-4-pacific-released/
    * mgr/dashboard: fix base-href: revert it to previous approach
    * (bsc#1186021) mgr/dashboard: fix cookie injection issue (CVE-2021-3509)
    * mgr/dashboard: fix set-ssl-certificate{,-key} commands
    * (bsc#1186020) rgw: RGWSwiftWebsiteHandler::is_web_dir checks empty subdir_name (CVE-2021-3531)
    * (bsc#1185619) rgw: sanitize \r in s3 CORSConfiguration’s ExposeHeader (CVE-2021-3524)
    * systemd: remove ProtectClock=true for ceph-osd@.service
* Thu May 06 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.3-26-g422932e923:
    + rebased on top of upstream pacific SHA1 381b476cb3900f9a92eb95d03b4850b953cfd79a
      Pacific v16.2.3 release
      see https://ceph.io/releases/v16-2-3-pacific-released/
    * cephadm: normalize image digest in 'ls' output too
      Pacific v16.2.2 release
      see https://ceph.io/releases/v16-2-2-pacific-released/
* Wed May 05 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.1-283-g9f37a4bec4:
    + rebased on top of upstream pacific SHA1 717ce59b76c659aaef8c5aec1355c0ac5cef7234
      Pacific v16.2.1 release
      see https://ceph.io/releases/v16-2-1-pacific-released/
    * (bsc#1183074) - (CVE-2021-20288) ceph: Unauthorized global_id reuse
    * (bsc#1184231) cephadm: Allow to use paths in all <_devices> drivegroup sections
* Tue Apr 13 2021 Nathan Cutler <ncutler@suse.com>
  - _constraints: raise s390x disk constraint to 42G after seeing a build fail
    with "write error: No space left on device"
* Thu Apr 08 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.0-91-g24bd0c4acf:
    + rebase on top of upstream pacific SHA1 4cbaf866034715d053e6259dcd5bd8e4e1d1e1ed
* Thu Apr 01 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.2.0-31-g5922b2b9c1:
    + rebase on top of upstream v16.2.0 (first stable release in Pacific series)
      see https://ceph.io/releases/v16-2-0-pacific-released/
    + (bsc#1192838) cephadm: Fix iscsi client caps (allow mgr <service status> calls)
    + (bsc#1200317) mgr/cephadm: fix and improve osd draining
    + (bsc#1206158) add iscsi and nfs to upgrade process
* Fri Mar 26 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.1.0-1217-g8e1da7347e:
    + rpm: drop extraneous explicit sqlite-libs runtime dependency
* Thu Mar 25 2021 Nathan Cutler <ncutler@suse.com>
  - pre_checkin.sh: add README-packaging.txt as a source file to ceph-test.spec
    (to pacify obs-service-source_validator)
* Thu Mar 25 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.1.0-1216-gbaca20b112:
    + spec: prepare openSUSE usrmerge (boo#1029961)
* Thu Mar 25 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.1.0-1215-gd99465b6ba
    + rebase on top of upstream commit 3eb70cf622aace689e45749e8a92fce033d3d55c
      (tip of "pacific" branch)
    * introduce libpmem and libpmemobj dependencies for the RBD_RWL and
      RBD_SSD_CACHE features backed by system PMDK
    * introduce libcephsqlite
* Thu Mar 25 2021 Nathan Cutler <ncutler@suse.com>
  - Add README-packaging.txt
* Wed Jan 27 2021 Nathan Cutler <ncutler@suse.com>
  - Update to 16.1.0-46-g571704f730
    + rebase on top of upstream v16.1.0 (Pacific release candidate)
    + (bsc#1192840) mgr/mgr_module.py: CLICommand: Fix parsing of kwargs arguments
    + drop obsolete downstream patches that were causing conflicts:
    * cephadm: use registry.suse.com by default
    * cephadm: add global flag --container-init
    * mgr/cephadm: append --container-init to basecommand
    * cephadm: remove container-init subparser from "deploy"

Files

/etc/logrotate.d/ceph
/etc/sudoers.d/ceph-smartctl
/usr/bin/ceph-crash
/usr/bin/ceph-dencoder
/usr/bin/ceph-kvstore-tool
/usr/bin/ceph-run
/usr/bin/cephfs-data-scan
/usr/bin/cephfs-journal-tool
/usr/bin/cephfs-table-tool
/usr/bin/crushtool
/usr/bin/monmaptool
/usr/bin/osdmaptool
/usr/lib/python3.11/site-packages/ceph_volume
/usr/lib/python3.11/site-packages/ceph_volume-1.0.0-py3.11.egg-info
/usr/lib/python3.11/site-packages/ceph_volume-1.0.0-py3.11.egg-info/PKG-INFO
/usr/lib/python3.11/site-packages/ceph_volume-1.0.0-py3.11.egg-info/SOURCES.txt
/usr/lib/python3.11/site-packages/ceph_volume-1.0.0-py3.11.egg-info/dependency_links.txt
/usr/lib/python3.11/site-packages/ceph_volume-1.0.0-py3.11.egg-info/entry_points.txt
/usr/lib/python3.11/site-packages/ceph_volume-1.0.0-py3.11.egg-info/not-zip-safe
/usr/lib/python3.11/site-packages/ceph_volume-1.0.0-py3.11.egg-info/requires.txt
/usr/lib/python3.11/site-packages/ceph_volume-1.0.0-py3.11.egg-info/top_level.txt
/usr/lib/python3.11/site-packages/ceph_volume/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/__pycache__/configuration.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/__pycache__/decorators.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/__pycache__/exceptions.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/__pycache__/log.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/__pycache__/main.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/__pycache__/process.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/__pycache__/terminal.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/activate
/usr/lib/python3.11/site-packages/ceph_volume/activate/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/activate/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/activate/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/activate/__pycache__/main.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/activate/main.py
/usr/lib/python3.11/site-packages/ceph_volume/api
/usr/lib/python3.11/site-packages/ceph_volume/api/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/api/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/api/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/api/__pycache__/lvm.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/api/lvm.py
/usr/lib/python3.11/site-packages/ceph_volume/configuration.py
/usr/lib/python3.11/site-packages/ceph_volume/decorators.py
/usr/lib/python3.11/site-packages/ceph_volume/devices
/usr/lib/python3.11/site-packages/ceph_volume/devices/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/devices/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/activate.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/batch.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/common.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/create.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/deactivate.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/listing.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/main.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/migrate.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/prepare.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/trigger.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/__pycache__/zap.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/activate.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/batch.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/common.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/create.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/deactivate.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/listing.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/main.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/migrate.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/prepare.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/trigger.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/lvm/zap.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/__pycache__/activate.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/__pycache__/common.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/__pycache__/list.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/__pycache__/main.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/__pycache__/prepare.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/activate.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/common.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/list.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/main.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/raw/prepare.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/__pycache__/activate.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/__pycache__/main.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/__pycache__/scan.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/__pycache__/trigger.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/activate.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/main.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/scan.py
/usr/lib/python3.11/site-packages/ceph_volume/devices/simple/trigger.py
/usr/lib/python3.11/site-packages/ceph_volume/drive_group
/usr/lib/python3.11/site-packages/ceph_volume/drive_group/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/drive_group/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/drive_group/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/drive_group/__pycache__/main.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/drive_group/main.py
/usr/lib/python3.11/site-packages/ceph_volume/exceptions.py
/usr/lib/python3.11/site-packages/ceph_volume/inventory
/usr/lib/python3.11/site-packages/ceph_volume/inventory/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/inventory/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/inventory/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/inventory/__pycache__/main.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/inventory/main.py
/usr/lib/python3.11/site-packages/ceph_volume/log.py
/usr/lib/python3.11/site-packages/ceph_volume/main.py
/usr/lib/python3.11/site-packages/ceph_volume/process.py
/usr/lib/python3.11/site-packages/ceph_volume/systemd
/usr/lib/python3.11/site-packages/ceph_volume/systemd/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/systemd/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/systemd/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/systemd/__pycache__/main.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/systemd/__pycache__/systemctl.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/systemd/main.py
/usr/lib/python3.11/site-packages/ceph_volume/systemd/systemctl.py
/usr/lib/python3.11/site-packages/ceph_volume/terminal.py
/usr/lib/python3.11/site-packages/ceph_volume/tests
/usr/lib/python3.11/site-packages/ceph_volume/tests/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/tests/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/__pycache__/conftest.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/__pycache__/test_configuration.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/__pycache__/test_decorators.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/__pycache__/test_inventory.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/__pycache__/test_main.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/__pycache__/test_process.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/__pycache__/test_terminal.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/conftest.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/__pycache__/test_zap.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_activate.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_batch.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_common.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_create.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_deactivate.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_listing.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_migrate.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_prepare.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_trigger.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/__pycache__/test_zap.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_activate.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_batch.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_common.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_create.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_deactivate.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_listing.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_migrate.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_prepare.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_trigger.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/lvm/test_zap.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/raw
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/raw/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/raw/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/raw/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/raw/__pycache__/test_list.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/raw/__pycache__/test_prepare.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/raw/test_list.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/raw/test_prepare.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/devices/test_zap.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/test_configuration.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/test_decorators.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/test_inventory.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/test_main.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/test_process.py
/usr/lib/python3.11/site-packages/ceph_volume/tests/test_terminal.py
/usr/lib/python3.11/site-packages/ceph_volume/util
/usr/lib/python3.11/site-packages/ceph_volume/util/__init__.py
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/__init__.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/arg_validators.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/constants.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/device.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/disk.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/encryption.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/lsmdisk.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/prepare.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/system.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/__pycache__/templates.cpython-311.pyc
/usr/lib/python3.11/site-packages/ceph_volume/util/arg_validators.py
/usr/lib/python3.11/site-packages/ceph_volume/util/constants.py
/usr/lib/python3.11/site-packages/ceph_volume/util/device.py
/usr/lib/python3.11/site-packages/ceph_volume/util/disk.py
/usr/lib/python3.11/site-packages/ceph_volume/util/encryption.py
/usr/lib/python3.11/site-packages/ceph_volume/util/lsmdisk.py
/usr/lib/python3.11/site-packages/ceph_volume/util/prepare.py
/usr/lib/python3.11/site-packages/ceph_volume/util/system.py
/usr/lib/python3.11/site-packages/ceph_volume/util/templates.py
/usr/lib/systemd/system-preset/50-ceph.preset
/usr/lib/systemd/system/ceph-crash.service
/usr/lib/systemd/system/ceph.target
/usr/lib64/ceph
/usr/lib64/ceph/compressor
/usr/lib64/ceph/compressor/libceph_lz4.so
/usr/lib64/ceph/compressor/libceph_lz4.so.2
/usr/lib64/ceph/compressor/libceph_lz4.so.2.0.0
/usr/lib64/ceph/compressor/libceph_snappy.so
/usr/lib64/ceph/compressor/libceph_snappy.so.2
/usr/lib64/ceph/compressor/libceph_snappy.so.2.0.0
/usr/lib64/ceph/compressor/libceph_zlib.so
/usr/lib64/ceph/compressor/libceph_zlib.so.2
/usr/lib64/ceph/compressor/libceph_zlib.so.2.0.0
/usr/lib64/ceph/compressor/libceph_zstd.so
/usr/lib64/ceph/compressor/libceph_zstd.so.2
/usr/lib64/ceph/compressor/libceph_zstd.so.2.0.0
/usr/lib64/ceph/crypto
/usr/lib64/ceph/crypto/libceph_crypto_openssl.so
/usr/lib64/ceph/erasure-code
/usr/lib64/ceph/erasure-code/libec_clay.so
/usr/lib64/ceph/erasure-code/libec_jerasure.so
/usr/lib64/ceph/erasure-code/libec_jerasure_generic.so
/usr/lib64/ceph/erasure-code/libec_lrc.so
/usr/lib64/ceph/erasure-code/libec_shec.so
/usr/lib64/ceph/erasure-code/libec_shec_generic.so
/usr/lib64/libos_tp.so
/usr/lib64/libos_tp.so.1
/usr/lib64/libos_tp.so.1.0.0
/usr/lib64/libosd_tp.so
/usr/lib64/libosd_tp.so.1
/usr/lib64/libosd_tp.so.1.0.0
/usr/lib64/rados-classes
/usr/lib64/rados-classes/libcls_2pc_queue.so
/usr/lib64/rados-classes/libcls_2pc_queue.so.1
/usr/lib64/rados-classes/libcls_2pc_queue.so.1.0.0
/usr/lib64/rados-classes/libcls_cas.so
/usr/lib64/rados-classes/libcls_cas.so.1
/usr/lib64/rados-classes/libcls_cas.so.1.0.0
/usr/lib64/rados-classes/libcls_cephfs.so
/usr/lib64/rados-classes/libcls_cephfs.so.1
/usr/lib64/rados-classes/libcls_cephfs.so.1.0.0
/usr/lib64/rados-classes/libcls_cmpomap.so
/usr/lib64/rados-classes/libcls_cmpomap.so.1
/usr/lib64/rados-classes/libcls_cmpomap.so.1.0.0
/usr/lib64/rados-classes/libcls_fifo.so
/usr/lib64/rados-classes/libcls_fifo.so.1
/usr/lib64/rados-classes/libcls_fifo.so.1.0.0
/usr/lib64/rados-classes/libcls_hello.so
/usr/lib64/rados-classes/libcls_hello.so.1
/usr/lib64/rados-classes/libcls_hello.so.1.0.0
/usr/lib64/rados-classes/libcls_journal.so
/usr/lib64/rados-classes/libcls_journal.so.1
/usr/lib64/rados-classes/libcls_journal.so.1.0.0
/usr/lib64/rados-classes/libcls_kvs.so
/usr/lib64/rados-classes/libcls_kvs.so.1
/usr/lib64/rados-classes/libcls_kvs.so.1.0.0
/usr/lib64/rados-classes/libcls_lock.so
/usr/lib64/rados-classes/libcls_lock.so.1
/usr/lib64/rados-classes/libcls_lock.so.1.0.0
/usr/lib64/rados-classes/libcls_log.so
/usr/lib64/rados-classes/libcls_log.so.1
/usr/lib64/rados-classes/libcls_log.so.1.0.0
/usr/lib64/rados-classes/libcls_lua.so
/usr/lib64/rados-classes/libcls_lua.so.1
/usr/lib64/rados-classes/libcls_lua.so.1.0.0
/usr/lib64/rados-classes/libcls_numops.so
/usr/lib64/rados-classes/libcls_numops.so.1
/usr/lib64/rados-classes/libcls_numops.so.1.0.0
/usr/lib64/rados-classes/libcls_otp.so
/usr/lib64/rados-classes/libcls_otp.so.1
/usr/lib64/rados-classes/libcls_otp.so.1.0.0
/usr/lib64/rados-classes/libcls_queue.so
/usr/lib64/rados-classes/libcls_queue.so.1
/usr/lib64/rados-classes/libcls_queue.so.1.0.0
/usr/lib64/rados-classes/libcls_rbd.so
/usr/lib64/rados-classes/libcls_rbd.so.1
/usr/lib64/rados-classes/libcls_rbd.so.1.0.0
/usr/lib64/rados-classes/libcls_refcount.so
/usr/lib64/rados-classes/libcls_refcount.so.1
/usr/lib64/rados-classes/libcls_refcount.so.1.0.0
/usr/lib64/rados-classes/libcls_rgw.so
/usr/lib64/rados-classes/libcls_rgw.so.1
/usr/lib64/rados-classes/libcls_rgw.so.1.0.0
/usr/lib64/rados-classes/libcls_rgw_gc.so
/usr/lib64/rados-classes/libcls_rgw_gc.so.1
/usr/lib64/rados-classes/libcls_rgw_gc.so.1.0.0
/usr/lib64/rados-classes/libcls_sdk.so
/usr/lib64/rados-classes/libcls_sdk.so.1
/usr/lib64/rados-classes/libcls_sdk.so.1.0.0
/usr/lib64/rados-classes/libcls_timeindex.so
/usr/lib64/rados-classes/libcls_timeindex.so.1
/usr/lib64/rados-classes/libcls_timeindex.so.1.0.0
/usr/lib64/rados-classes/libcls_user.so
/usr/lib64/rados-classes/libcls_user.so.1
/usr/lib64/rados-classes/libcls_user.so.1.0.0
/usr/lib64/rados-classes/libcls_version.so
/usr/lib64/rados-classes/libcls_version.so.1
/usr/lib64/rados-classes/libcls_version.so.1.0.0
/usr/libexec/ceph
/usr/libexec/ceph/ceph_common.sh
/usr/sbin/ceph-create-keys
/usr/share/fillup-templates/sysconfig.ceph
/usr/share/man/man8/ceph-create-keys.8.gz
/usr/share/man/man8/ceph-deploy.8.gz
/usr/share/man/man8/ceph-kvstore-tool.8.gz
/usr/share/man/man8/ceph-run.8.gz
/usr/share/man/man8/crushtool.8.gz
/usr/share/man/man8/monmaptool.8.gz
/usr/share/man/man8/osdmaptool.8.gz
/var/lib/ceph/bootstrap-mds
/var/lib/ceph/bootstrap-mgr
/var/lib/ceph/bootstrap-osd
/var/lib/ceph/bootstrap-rbd
/var/lib/ceph/bootstrap-rbd-mirror
/var/lib/ceph/bootstrap-rgw
/var/lib/ceph/crash
/var/lib/ceph/crash/posted
/var/lib/ceph/tmp


Generated by rpm2html 1.8.1

Fabrice Bellet, Tue Apr 9 10:54:05 2024