
rook-integration-1.6.2+git0.ge8fd65f08-3.3 RPM for x86_64

From openSUSE Tumbleweed for x86_64

Name: rook-integration
Version: 1.6.2+git0.ge8fd65f08
Release: 3.3
Group: System/Benchmark
Size: 41325489 bytes
Distribution: openSUSE Tumbleweed
Vendor: openSUSE
Build date: Fri Feb 23 12:54:51 2024
Build host: reproducible
Source RPM: rook-1.6.2+git0.ge8fd65f08-3.3.src.rpm
Packager: https://bugs.opensuse.org
Url: https://rook.io/
Summary: Application which runs Rook integration tests
This package is intended to be used only for testing. Please don't install it
in production environments.

Rook's integration tests conveniently get built into a standalone binary. The
tests require a running Kubernetes cluster, and the image being tested must be
pushed to all Kubernetes cluster nodes as 'rook/ceph:master'. They also require
that 'kubectl' works without additional connection arguments from the system
which will run the binary. The integration tests can be flaky and are best run
on a Kubernetes cluster which has not previously run the integration tests.
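
For example, a minimal sketch of preparing such a cluster, assuming the image
was built locally with Docker and that 'node1' and 'node2' stand in for the
actual cluster node names:

    # Confirm kubectl reaches the cluster without extra connection arguments
    kubectl cluster-info
    kubectl get nodes

    # Tag the locally built image (the source tag here is a placeholder)
    # and copy it to every cluster node
    docker tag rook/ceph:local-build rook/ceph:master
    for node in node1 node2; do
        docker save rook/ceph:master | ssh "$node" docker load
    done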

The list of available integration test suites can be obtained from the
integration binary with the argument [-test.list '.*']. A subset of test suites
can be run by specifying a regular expression (or a specific test suite name)
as an argument to [-test.run]. All Ceph test suites can be run with the
argument [-test.run 'TestCeph'].
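
For example, with this package installed, the test binary is
/usr/bin/rook-integration (see the Files section below):

    # List all available integration test suites
    rook-integration -test.list '.*'

    # Run only the Ceph test suites
    rook-integration -test.run 'TestCeph'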

Provides

Requires

License

Apache-2.0

Changelog

* Fri Feb 23 2024 Dominique Leuenberger <dimstar@opensuse.org>
  - Use %patch -P N instead of deprecated %patchN.
* Thu Feb 10 2022 Dirk Müller <dmueller@suse.com>
  - avoid bashism in post scripts (bsc#1195391)
* Fri May 07 2021 Stefan Haas <stefan.haas@suse.com>
  - Update to v1.6.2
    * Set base Ceph operator image and example deployments to v16.2.2
    * Update snapshot APIs from v1beta1 to v1
    * Documentation for creating static PVs
    * Allow setting primary-affinity for the OSD
    * Remove unneeded debug log statements
    * Preserve volume claim template annotations during upgrade
    * Allow re-creating erasure coded pool with different settings
    * Double mon failover timeout during a node drain
    * Remove unused volumesource schema from CephCluster CRD
    * Set the device class on raw mode osds
    * External cluster schema fix to allow not setting mons
    * Add phase to the CephFilesystem CRD
    * Generate full schema for volumeClaimTemplates in the CephCluster CRD
    * Automate upgrades for the MDS daemon to properly scale down and scale up
    * Add Vault KMS support for object stores
    * Ensure object store endpoint is initialized when creating an object user
    * Support for OBC operations when RGW is configured with TLS
    * Preserve the OSD topology affinity during upgrade for clusters on PVCs
    * Unify timeouts for various Ceph commands
    * Allow setting annotations on RGW service
    * Expand PVC size of mon daemons if requested
  - Update to v1.6.1
    * Disable host networking by default in the CSI plugin with option to enable
    * Fix the schema for erasure-coded pools so replication size is not required
    * Improve node watcher for adding new OSDs
    * Operator base image updated to v16.2.1
    * Deployment examples updated to Ceph v15.2.11
    * Update Ceph-CSI to v3.3.1
    * Allow any device class for the OSDs in a pool instead of restricting the schema
    * Fix metadata OSDs for Ceph Pacific
    * Allow setting the initial CRUSH weight for an OSD
    * Fix object store health check in case SSL is enabled
    * Upgrades now ensure latest config flags are set for MDS and RGW
    * Suppress noisy RGW log entry for radosgw-admin commands
  - Update to v1.6.0
    * Removed Storage Providers
    * CockroachDB
    * EdgeFS
    * YugabyteDB
    * Ceph
    * Support for creating OSDs via Drive Groups was removed.
    * Ceph Pacific (v16) support
    * CephFilesystemMirror CRD to support mirroring of CephFS volumes with Pacific
    * Ceph CSI Driver
    * CSI v3.3.0 driver enabled by default
    * Volume Replication Controller for improved RBD replication support
    * Multus support
    * GRPC metrics disabled by default
    * Ceph RGW
    * Extended the support of vault KMS configuration
    * Scale with multiple daemons with a single deployment instead of a separate deployment for each rgw daemon
    * OSDs
    * LVM is no longer used to provision OSDs
    * More efficient updates for multiple OSDs at the same time
    * Multiple Ceph mgr daemons are supported for stretch clusters
      and other clusters where HA of the mgr is critical (set count: 2 under mgr in the CephCluster CR)
    * Pod Disruption Budgets (PDBs) are enabled by default for Mon,
      RGW, MDS, and OSD daemons. See the disruption management settings.
    * Monitor failover can be disabled, for scenarios where
      maintenance is planned and automatic mon failover is not desired
    * CephClient CRD has been converted to use the controller-runtime library
* Wed Apr 21 2021 Stefan Haas <stefan.haas@suse.com>
  - Update to v1.5.10
    * Ceph
    * Update Ceph-CSI to v3.2.1 (#7506)
    * Use latest Ceph API for setting dashboard and rgw credentials (#7641)
    * Redact secret info from reconcile diffs in debug logs (#7630)
    * Continue to get available devices if failed to get a device info (#7608)
    * Include RGW pods in list for rescheduling from failed node (#7537)
    * Enforce pg_auto_scaler on rgw pools (#7513)
    * Prevent voluntary mon drain while another mon is failing over (#7442)
    * Avoid restarting all encrypted OSDs on cluster growth (#7489)
    * Set secret type on external cluster script (#7473)
    * Fix init container "expand-encrypted-bluefs" for encrypted OSDs (#7466)
    * Fail pool creation if the sub failure domain is the same as the failure domain (#7284)
    * Set default backend for vault and remove temp key for encrypted OSDs (#7454)
* Wed Mar 03 2021 Stefan Haas <stefan.haas@suse.com>
  - Update to v1.5.7
    * Ceph
    * CSI Troubleshooting Guide (#7157)
    * Print device information in OSD prepare logs (#7194)
    * Expose vault curl error in the OSD init container for KCS configurations (#7193)
    * Prevent re-using a device to configure an OSD on PVC from a previous cluster (#7170)
    * Remove crash collector if all Ceph pods moved off a node (#7160)
    * Add helm annotation to keep CRDs in the helm chart during uninstall (#7162)
    * Bind mgr modules to all interfaces instead of pod ip (#7151)
    * Check for orchestration cancellation while waiting for all OSDs to start (#7112)
    * Skip pdb reconcile on create and delete events (#7155)
    * Silence harmless errors in log when the operator is still initializing (#7056)
    * Add --extra-create-metadata flag to the CSI driver (#7147)
    * Add deviceClass to the object store schema (#7132)
    * Simplify the log-collector container name (#7133)
    * Skip csi detection if CSI is disabled (#6866)
    * Remove Rook pods stuck in terminating state on a failed node (#6999)
    * Timeout for rgw configuration to prevent stuck object store when no healthy OSDs (#7075)
    * Update lib bucket provisioner for OBCs (#7086)
  - Drop csi-images-SUSE.patch
* Wed Nov 18 2020 Mike Latimer <mlatimer@suse.com>
  - Derive CSI and sidecar image versions from code defaults rather
    than images found in the build service
* Fri Nov 06 2020 Mike Latimer <mlatimer@suse.com>
  - Update to v1.4.7
    * Ceph
    * Log warning about v14.2.13 being an unsupported Ceph version due to
      errors creating new OSDs (#6545)
    * Disaster recovery guide for PVCs (#6452)
    * Set the deviceClass for OSDs in non-PVC clusters (#6545)
    * External cluster script to fail if prometheus port is not default (#6504)
    * Remove the osd pvc from the osd purge job (#6533)
    * External cluster script added additional checks for monitoring
      endpoint (#6473)
    * Ignore Ceph health error MDS_ALL_DOWN during reconciliation (#6494)
    * Add optional labels to mon pods (#6515)
    * Assert type for logging errors before using it (#6503)
    * Check for orphaned mon resources with every reconcile (#6493)
    * Update the mon PDBs if the maxUnavailable changed (#6469)
    * NFS
    * Update documentation and examples (#6455)
* Wed Oct 28 2020 Mike Latimer <mlatimer@suse.com>
  - Drop OFFSET from cephcsi image tag
* Mon Oct 26 2020 Mike Latimer <mlatimer@suse.com>
  - Update helm chart to use appropriate version prefix for the final registry
    destination (e.g. registry.suse.com or registry.opensuse.org)
  - Improve consistency with image tags
* Tue Oct 20 2020 Mike Latimer <mlatimer@suse.com>
  - Update to v1.4.6
    * Support IPv6 single-stack (#6283)
    * Only start a single CSI provisioner in single-node test clusters (#6437)
    * Raw mode OSD on LV-backed PVC (#6184)
    * Capture ceph-volume detailed log in non-pvc scenario on failure (#6426)
    * Add --upgrade option to external cluster script (#6392)
    * Capture stderr when executing ceph commands and write to log (#6395)
    * Reduce the retry count for the bucket health check for more accurate
      status (#6408)
    * Prevent closing of monitoring channel more than once (#6369)
    * Check underlying block status for encrypted OSDs (#6367)
  - Add 'latest' and appVersion tags to helm chart
  - Include sample manifests in helm chart

Files

/usr/bin/rook-integration
/usr/share/rook-integration
/usr/share/rook-integration/ceph-csi-image-name
/usr/share/rook-integration/ceph-image-name
/usr/share/rook-integration/rook-image-name

