Creation of ceph_test_rados_io_sequence application #1

Closed
Changes from all commits (70 commits)
7830383
doc/dev/developer_guide/testing_integration_tests: Document the rando…
mchangir Nov 29, 2022
56504d2
kv/rocksdb: return error for dump_objectstore_kv_stats asok command
ifed01 Jul 22, 2024
9a46c52
qa: do the set/get attribute on the remote filesystem
joscollin Aug 8, 2024
5639fa2
mds: dump log segment end along with offset
vshankar Aug 1, 2024
e5728c4
mds: dump log segment in segment expiry callback
vshankar Aug 1, 2024
d1d3a8c
mds: batch backtrace updates by pool-id when expiring a log segment
vshankar Feb 2, 2024
9f27bde
qa/cephfs: add test to verify backtrace update failure on deleted dat…
vshankar May 3, 2024
9c70adf
mgr/nfs: add cmount_path
avanthakkar Oct 10, 2023
1abb411
mgr/nfs: adopt API & unit tests for nfs exports
avanthakkar Aug 27, 2024
d38858d
doc: cmount_path documentation for CEPHFS nfs exports
avanthakkar Aug 27, 2024
af63b47
doc: Update pendingreleasenotes for CephFS NFS exports
avanthakkar Aug 27, 2024
d02b94d
mgr/nfs: ensure user_id for create_export_from_dict
avanthakkar Aug 28, 2024
b8f3db5
mgr/nfs: add additional tests for cmount_path & user_id deletion
avanthakkar Aug 29, 2024
ac998b5
qa/task: update alertmanager endpoints version
nizamial09 Aug 21, 2024
ef68253
qa: relocate subvol creation overrides and test
mchangir May 14, 2024
7323164
crimson/osd: more detailed debug logs
xxhdx1985126 Aug 31, 2024
4af5134
mgr/dashboard: Select no device by default in EC profile
afreen23 Sep 2, 2024
25e0a32
mgr/cephadm: renaming whitelist_domains field to allowlist_domains
rkachach Sep 12, 2024
b61c761
mgr/dashboard: fix table column pipe transform
ivoalmeida Sep 13, 2024
3bb41eb
mgr/dashboard: fix start time format
ivoalmeida Sep 11, 2024
e3d8a37
orch: refactor boolean handling in drive group spec
guits Sep 12, 2024
86a0a80
mgr/dashboard: fix table filter
ivoalmeida Sep 13, 2024
fc41513
crimson/osd/pg: rollback ops by copying obc beforehand and recover after
xxhdx1985126 Sep 8, 2024
b9ef436
crimson/osd/ops_executor: simplify OpsExecuter::rollback_obc_if_modified
xxhdx1985126 Sep 11, 2024
4ed5005
crimson/osd/ops_executer: revoke OpsExecuter::get_obc()
xxhdx1985126 Sep 14, 2024
1716224
crimson/osd/pg: make "PG::submit_error_log()" and
xxhdx1985126 Sep 14, 2024
36c620b
doc/README.md: create selectable commands
zdover23 Sep 14, 2024
e3953d3
mgr/dashboard: add service management for oauth2-proxy
Pegonzal Aug 21, 2024
6174c65
Merge pull request #59379 from rhcs-dashboard/oauth2-proxy-ui
Pegonzal Sep 16, 2024
4d96f43
Merge pull request #57459 from mchangir/mds-snap_schedule-relocate-te…
rishabh-d-dave Sep 16, 2024
bc830a3
mgr/dashboard: add service management for mgmt-gateway
Pegonzal Jul 16, 2024
a376752
mgr/dashboard: add SSO through oauth2 protocol
Pegonzal Jul 8, 2024
858fcd5
Merge pull request #45807 from mchangir/doc-document-random-selection…
anthonyeleven Sep 16, 2024
8123035
Merge pull request #59557 from afreen23/wip-ec-profile
afreen23 Sep 16, 2024
bda1c7f
mon/NVMeofGw*:
leonidc Sep 8, 2024
b7543d6
Merge pull request #59766 from rkachach/fix_issue_68052
adk3798 Sep 16, 2024
94418d9
mgr/dashboard: fix UI modal issues
nizamial09 Sep 9, 2024
421e7c5
Merge pull request #59752 from guits/drive-group-spec-bool-args
adk3798 Sep 16, 2024
82e3d59
Merge pull request #58456 from rhcs-dashboard/auth2-sso
Pegonzal Sep 16, 2024
a2c6b20
Merge pull request #58618 from rhcs-dashboard/mgmt-gateway-ui
Pegonzal Sep 16, 2024
e0e467a
mgr/dashboard: Adding group and pool name to service name
afreen23 Sep 11, 2024
1d01d04
mgr/dashboard: Add mTLS support
afreen23 Sep 14, 2024
e1934d5
Merge pull request #59668 from rhcs-dashboard/fix-modal-hidden-in-nav
afreen23 Sep 16, 2024
b895e59
doc: nit fixes for nfs doc
avanthakkar Sep 3, 2024
2b69864
Merge pull request #59781 from ivoalmeida/table-filter-fix
afreen23 Sep 16, 2024
fb471bd
Merge pull request #59805 from afreen23/wip-nvme-mtls
afreen23 Sep 17, 2024
a08ddab
mgr/volumes: add earmarking for subvol
avanthakkar Sep 17, 2024
d2f8d10
qa/cephfs: update tests for test_volumes & unit-test for earmarking
avanthakkar Sep 17, 2024
174b9d4
doc: document earmark option for subvolume and new commands
avanthakkar Sep 17, 2024
1234ac5
Merge pull request #59727 from ivoalmeida/snap-schedule-start-time-fix
afreen23 Sep 17, 2024
3fdbc16
rbd-mirror: allow mirroring to a different namespace
nbalacha Aug 23, 2024
21454d0
mgr/dashboard: remove orch required decorator from host UI router (list)
Sep 17, 2024
48b0a20
rbd: display mirror uuid for mirror pool info output
nbalacha Sep 17, 2024
8ccb652
Merge pull request #59777 from ivoalmeida/table-column-pipe-transform…
afreen23 Sep 17, 2024
b57eb17
Merge pull request #59093 from joscollin/wip-fix-get-set-dirty-snap-id
vshankar Sep 17, 2024
8bd63f8
Merge PR #55421 into main
vshankar Sep 17, 2024
528f09c
mgr/dashboard: fix minor issues in carbon tables
nizamial09 Sep 12, 2024
08cec69
Merge pull request #58728 from ifed01/wip-ifed-ret-error-kv-stats
ifed01 Sep 17, 2024
4adee01
Merge pull request #54277 from rhcs-dashboard/nfs-export-apply-fix
adk3798 Sep 17, 2024
e0b3453
Merge pull request #59788 from zdover23/wip-doc-2024-09-14-README-md-…
zdover23 Sep 17, 2024
5ecc740
Merge pull request #59373 from rhcs-dashboard/test-ms-alertmanager-v2
nizamial09 Sep 17, 2024
ee9bb3d
Merge pull request #59765 from rhcs-dashboard/carbon-ui-cleanups
nizamial09 Sep 18, 2024
d0a3655
Merge PR #59111 into main
vshankar Sep 18, 2024
82de5f0
Merge pull request #59823 from rhcs-dashboard/remove-orch-required-de…
nizamial09 Sep 18, 2024
4c7c472
Merge pull request #59543 from xxhdx1985126/wip-63844
Matan-B Sep 18, 2024
2ae674d
Merge pull request #59417 from nbalacha/wip-nbalacha-ns-mirroring
idryomov Sep 18, 2024
ea93cad
Merge pull request #59667 from leonidc/gw-no-subsystems
leonidc Sep 18, 2024
afb6399
src/test/osd/ceph_test_rados_io_sequence
JonBailey1993 Sep 2, 2024
81c8c8c
common/io_exerciser: performance, readability and safety improvements
JonBailey1993 Sep 19, 2024
d313974
common/io_exerciser: code enhancements to ceph_test_rados_io_sequence
JonBailey1993 Sep 27, 2024
4 changes: 4 additions & 0 deletions PendingReleaseNotes
@@ -279,6 +279,10 @@ CephFS: Disallow delegating preallocated inode ranges to clients. Config
* RGW: in bucket notifications, the `principalId` inside `ownerIdentity` now contains
complete user id, prefixed with tenant id

* NFS: The export create/apply of CephFS-based exports now accepts an additional parameter `cmount_path` under the FSAL block,
which specifies the path within CephFS to mount this export on. If this and the other
`EXPORT { FSAL {} }` options are the same across multiple exports, those exports will share a single CephFS client. If not specified, the default is `/`.

>=18.0.0

* The RGW policy parser now rejects unknown principals by default. If you are
8 changes: 4 additions & 4 deletions README.md
@@ -81,15 +81,15 @@ To build Ceph, follow this procedure:
contains `do_cmake.sh` and `CONTRIBUTING.rst`.
2. Run the `do_cmake.sh` script:

``./do_cmake.sh``
./do_cmake.sh

``do_cmake.sh`` by default creates a "debug build" of Ceph, which can be
up to five times slower than a non-debug build. Pass
``-DCMAKE_BUILD_TYPE=RelWithDebInfo`` to ``do_cmake.sh`` to create a
non-debug build.
3. Move into the `build` directory:

``cd build``
cd build
4. Use the `ninja` buildsystem to build the development environment:

ninja -j3
@@ -120,11 +120,11 @@ To build Ceph, follow this procedure:

To build only certain targets, run a command of the following form:

``ninja [target name]``
ninja [target name]

5. Install the vstart cluster:

``ninja install``
ninja install

### CMake Options

39 changes: 38 additions & 1 deletion doc/cephfs/fs-volumes.rst
@@ -276,7 +276,7 @@ Use a command of the following form to create a subvolume:

.. prompt:: bash #

ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes>] [--group_name <subvol_group_name>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>] [--namespace-isolated]
ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes>] [--group_name <subvol_group_name>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>] [--namespace-isolated] [--earmark <earmark>]


The command succeeds even if the subvolume already exists.
@@ -289,6 +289,15 @@ The subvolume can be created in a separate RADOS namespace by specifying the
default subvolume group with an octal file mode of ``755``, a uid of its
subvolume group, a gid of its subvolume group, a data pool layout of its parent
directory, and no size limit.
You can also assign an earmark to a subvolume using the ``--earmark`` option.
The earmark is a unique identifier that tags the subvolume for specific purposes,
such as NFS or SMB services. By default, no earmark is set, allowing for flexible
assignment based on administrative needs. An empty string ("") can be used to remove
any existing earmark from a subvolume.

The earmarking mechanism ensures that subvolumes are correctly tagged and managed,
helping to avoid conflicts and keeping each subvolume associated
with its intended service or use case.
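
For example, a subvolume intended for consumption over NFS could be created with an earmark in a
single step. This is only an illustrative sketch; the volume name, subvolume name, and earmark
value below are placeholders:

.. prompt:: bash #

ceph fs subvolume create myvol mysubvol --earmark nfs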

Removing a subvolume
~~~~~~~~~~~~~~~~~~~~
@@ -418,6 +427,7 @@ The output format is JSON and contains the following fields.
* ``pool_namespace``: RADOS namespace of the subvolume
* ``features``: features supported by the subvolume
* ``state``: current state of the subvolume
* ``earmark``: earmark of the subvolume

If a subvolume has been removed but its snapshots have been retained, the
output contains only the following fields.
@@ -522,6 +532,33 @@ subvolume using the metadata key:
Using the ``--force`` flag allows the command to succeed when it would
otherwise fail (if the metadata key did not exist).

Getting earmark of a subvolume
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use a command of the following form to get the earmark of a subvolume:

.. prompt:: bash #

ceph fs subvolume earmark get <vol_name> <subvol_name> [--group_name <subvol_group_name>]

Setting earmark of a subvolume
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use a command of the following form to set the earmark of a subvolume:

.. prompt:: bash #

ceph fs subvolume earmark set <vol_name> <subvol_name> [--group_name <subvol_group_name>] <earmark>

Removing earmark of a subvolume
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use a command of the following form to remove the earmark of a subvolume:

.. prompt:: bash #

ceph fs subvolume earmark rm <vol_name> <subvol_name> [--group_name <subvol_group_name>]
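
As a quick illustrative sketch (the volume, subvolume, and earmark values below are placeholders),
setting, reading, and then removing an earmark looks like this:

.. prompt:: bash #

ceph fs subvolume earmark set myvol mysubvol nfs
ceph fs subvolume earmark get myvol mysubvol
ceph fs subvolume earmark rm myvol mysubvol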

Creating a Snapshot of a Subvolume
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -317,16 +317,16 @@ directory tree within the ``suites/`` subdirectory of the `ceph/qa sub-directory
The set of all tests defined by a given subdirectory of ``suites/`` is
called an "integration test suite", or a "teuthology suite".

Combination of yaml facets is controlled by special files (``%`` and
``+``) that are placed within the directory tree and can be thought of as
operators. The ``%`` file is the "convolution" operator and ``+``
signifies concatenation.
Combination of YAML facets is controlled by special files (``%``, ``+`` and ``$``)
that are placed within the directory tree and can be thought of as
operators. The ``%`` file is the "convolution" operator, ``+`` signifies
concatenation and ``$`` is the "random selection" operator.

Convolution operator
^^^^^^^^^^^^^^^^^^^^
Convolution operator - ``%``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The convolution operator, implemented as a (typically empty) file called ``%``,
tells teuthology to construct a test matrix from yaml facets found in
tells teuthology to construct a test matrix from YAML facets found in
subdirectories below the directory containing the operator.

For example, the `ceph-deploy suite
@@ -421,8 +421,8 @@ tests will still preserve the correct numerator (subset of subsets).
You can disable nested subsets using the ``--no-nested-subset`` argument to
``teuthology-suite``.

Concatenation operator
^^^^^^^^^^^^^^^^^^^^^^
Concatenation operator - ``+``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For even greater flexibility in sharing yaml files between suites, the
special file plus (``+``) can be used to concatenate files within a
@@ -561,6 +561,15 @@ rest of the cluster (``5-finish-upgrade.yaml``).
The last stage is requiring the updated release (``ceph require-osd-release quincy``,
``ceph osd set-require-min-compat-client quincy``) and running the ``final-workload``.

Random selection operator - ``$``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The presence of a file named ``$`` tells teuthology to randomly include one of
the YAML fragments from that directory in the test. This is typically used when
one of several flavors of a feature or option should be exercised, with the
flavor chosen at random.
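
For instance, a facet directory shaped like the following hypothetical layout (the directory and
file names here are made up for illustration) causes teuthology to include exactly one of the two
YAML fragments in each generated job::

    some_facet/
        $
        flavor_a.yaml
        flavor_b.yaml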


Position Independent Linking
----------------------------

7 changes: 4 additions & 3 deletions doc/man/8/rbd.rst
@@ -543,8 +543,9 @@ Commands

:command:`mirror pool info` [*pool-name*]
Show information about the pool or namespace mirroring configuration.
For a pool, it includes mirroring mode, peer UUID, remote cluster name,
and remote client name. For a namespace, it includes only mirroring mode.
For both pools and namespaces, it includes the mirroring mode, mirror UUID
and remote namespace. For pools, it additionally includes the site name,
peer UUID, remote cluster name, and remote client name.

:command:`mirror pool peer add` [*pool-name*] *remote-cluster-spec*
Add a mirroring peer to a pool.
@@ -555,7 +556,7 @@
This requires mirroring to be enabled on the pool.

:command:`mirror pool peer remove` [*pool-name*] *uuid*
Remove a mirroring peer from a pool. The peer uuid is available
Remove a mirroring peer from a pool. The peer UUID is available
from ``mirror pool info`` command.

:command:`mirror pool peer set` [*pool-name*] *uuid* *key* *value*
19 changes: 14 additions & 5 deletions doc/mgr/nfs.rst
@@ -283,7 +283,7 @@ Create CephFS Export

.. code:: bash

$ ceph nfs export create cephfs --cluster-id <cluster_id> --pseudo-path <pseudo_path> --fsname <fsname> [--readonly] [--path=/path/in/cephfs] [--client_addr <value>...] [--squash <value>] [--sectype <value>...]
$ ceph nfs export create cephfs --cluster-id <cluster_id> --pseudo-path <pseudo_path> --fsname <fsname> [--readonly] [--path=/path/in/cephfs] [--client_addr <value>...] [--squash <value>] [--sectype <value>...] [--cmount_path <value>]

This creates export RADOS objects containing the export block, where

@@ -318,6 +318,12 @@ values may be separated by a comma (example: ``--sectype krb5p,krb5i``). The
server will negotiate a supported security type with the client, preferring
the supplied methods left-to-right.

``<cmount_path>`` specifies the path within CephFS to mount this export on. It may be
any complete path hierarchy between ``/`` and the ``EXPORT { path }`` (for example, if the ``EXPORT { path }`` parameter is ``/foo/bar``, then ``cmount_path`` could be ``/``, ``/foo``, or ``/foo/bar``).

.. note:: If this and the other ``EXPORT { FSAL {} }`` options are the same between multiple exports, those exports will share a single CephFS client.
If not specified, the default is ``/``.
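
For example (the cluster ID, filesystem name, paths, and pseudo-paths below are illustrative), the
following two exports would share a single CephFS client, because their FSAL options, including
``cmount_path``, are identical:

.. code:: bash

$ ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /a --fsname myfs --path /foo/a --cmount_path /foo
$ ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /b --fsname myfs --path /foo/b --cmount_path /foo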

.. note:: Specifying values for sectype that require Kerberos will only function on servers
that are configured to support Kerberos. Setting up NFS-Ganesha to support Kerberos
is outside the scope of this document.
@@ -477,9 +483,9 @@ For example,::
],
"fsal": {
"name": "CEPH",
"user_id": "nfs.mynfs.1",
"fs_name": "a",
"sec_label_xattr": ""
"sec_label_xattr": "",
"cmount_path": "/"
},
"clients": []
}
@@ -494,6 +500,9 @@ as when creating a new export), with the exception of the
authentication credentials, which will be carried over from the
previous state of the export where possible.

.. note:: The ``user_id`` in the ``fsal`` block should not be modified or included in the JSON file, as it is auto-generated for
   CephFS exports in the format ``nfs.<cluster_id>.<fs_name>.<hash_id>``.

::

$ ceph nfs export apply mynfs -i update_cephfs_export.json
@@ -514,9 +523,9 @@ previous state of the export where possible.
],
"fsal": {
"name": "CEPH",
"user_id": "nfs.mynfs.1",
"fs_name": "a",
"sec_label_xattr": ""
"sec_label_xattr": "",
"cmount_path": "/"
},
"clients": []
}
1 change: 0 additions & 1 deletion qa/suites/fs/functional/subvol_versions/.qa

This file was deleted.

This file was deleted.

This file was deleted.

Empty file.
Empty file.
@@ -61,6 +61,6 @@ tasks:
curl -s http://${PROM_IP}:9095/api/v1/alerts
curl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e '.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"'
# check alertmanager endpoints are responsive and mon down alert is active
curl -s http://${ALERTM_IP}:9093/api/v1/status
curl -s http://${ALERTM_IP}:9093/api/v1/alerts
curl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e '.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"'
curl -s http://${ALERTM_IP}:9093/api/v2/status
curl -s http://${ALERTM_IP}:9093/api/v2/alerts
curl -s http://${ALERTM_IP}:9093/api/v2/alerts | jq -e '.[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"'
26 changes: 26 additions & 0 deletions qa/tasks/cephfs/test_backtrace.py
@@ -100,3 +100,29 @@ def test_backtrace(self):
# we don't update the layout in all the old pools whenever it changes
old_pool_layout = self.fs.read_layout(file_ino, pool=old_data_pool_name)
self.assertEqual(old_pool_layout['object_size'], 4194304)

def test_backtrace_flush_on_deleted_data_pool(self):
"""
That the MDS does not go read-only when handling backtrace update errors
when backtrace updates are batched and flushed to RADOS (during journal trim)
and some of the pools have been removed.
"""
data_pool = self.fs.get_data_pool_name()
extra_data_pool_name_1 = data_pool + '_extra1'
self.fs.add_data_pool(extra_data_pool_name_1)

self.mount_a.run_shell(["mkdir", "dir_x"])
self.mount_a.setfattr("dir_x", "ceph.dir.layout.pool", extra_data_pool_name_1)
self.mount_a.run_shell(["touch", "dir_x/file_x"])
self.fs.flush()

extra_data_pool_name_2 = data_pool + '_extra2'
self.fs.add_data_pool(extra_data_pool_name_2)
self.mount_a.setfattr("dir_x/file_x", "ceph.file.layout.pool", extra_data_pool_name_2)
self.mount_a.run_shell(["setfattr", "-x", "ceph.dir.layout", "dir_x"])
self.run_ceph_cmd("fs", "rm_data_pool", self.fs.name, extra_data_pool_name_1)
self.fs.flush()

# quick test to check if the mds has handled backtrace update failure
# on the deleted data pool without going read-only.
self.mount_a.run_shell(["mkdir", "dir_y"])
13 changes: 13 additions & 0 deletions qa/tasks/cephfs/test_mirroring.py
@@ -1505,9 +1505,22 @@ def test_get_set_mirror_dirty_snap_id(self):
"""
That get/set ceph.mirror.dirty_snap_id attribute succeeds in a remote filesystem.
"""
log.debug('reconfigure client auth caps')
self.get_ceph_cmd_result(
'auth', 'caps', "client.{0}".format(self.mount_b.client_id),
'mds', 'allow rw',
'mon', 'allow r',
'osd', 'allow rw pool={0}, allow rw pool={1}'.format(
self.backup_fs.get_data_pool_name(),
self.backup_fs.get_data_pool_name()))
log.debug(f'mounting filesystem {self.secondary_fs_name}')
self.mount_b.umount_wait()
self.mount_b.mount_wait(cephfs_name=self.secondary_fs_name)
log.debug('setting ceph.mirror.dirty_snap_id attribute')
self.mount_b.run_shell(["mkdir", "-p", "d1/d2/d3"])
attr = str(random.randint(1, 10))
self.mount_b.setfattr("d1/d2/d3", "ceph.mirror.dirty_snap_id", attr)
log.debug('getting ceph.mirror.dirty_snap_id attribute')
val = self.mount_b.getfattr("d1/d2/d3", "ceph.mirror.dirty_snap_id")
self.assertEqual(attr, val, f"Mismatch for ceph.mirror.dirty_snap_id value: {attr} vs {val}")
