
Un special-case system drive btrfs-in-partition treatment #2824 #2835

Conversation

phillxnet
Member

Removing legacy system drive special treatment that was originally intended solely to present a partitioned system pool member as if it were a whole disk: the only entity we understood prior to our data pool btrfs-in-partition capability. This normalises/simplifies our lowest-level scan_disks() procedure, predominantly as we now only migrate info in one direction: from partition to parent, rather than the prior system-drive-only special case of from parent to partition.

Fixes #2824
Fixes #2749

Includes:
Updating all scan_disks() unittests to reflect:

  • The proposed new behaviour from un special-casing of the system drive.
  • Our more recent 5GB default (was 1 GB): where existing test data allows.
  • Fix recent test data/results re the lsblk -p option (dev path). And:
  • Major refactoring to improve scan_disks() and _update_disk_state() readability.
  • Adapt _update_disk_state() re un-special-case OS btrfs-in-partition. Here we drop the auto-creation of our prior 'special case' OS Pool. Instead, by adding an auto redirect role (if "/" is in a partition), we can manage this pool, including its import, by our regular data pool means.
  • Update disks_table.jst to enable info and import icons for OS drive.
  • Simplify/speed-up _update_disk_state()
  • Adapt Pools & Shares overview pages to the no-managed-Pools case.
  • Initial introduction (no Pools) now explains Pool as btrfs volume, & Share as btrfs subvolume. Where Share = portion of a Pool.
  • Web-UI contextual link to Pool import via Disk member.
  • Explicit Web-UI use of 'Un Managed' to reference an unimported pool.
  • Comments to clarify our use of DB Disk.btrfs_uuid field.
  • Add TODO re attached.uuid use for Pool member attribution.
  • Ensure OS Pool import sets in-progress Pool.role="root". This role is used internally by the Pool model, and by btrfs.py shares_info(), to account for boot-to-snapshot system Pool mount accommodations.

Incidental functional change:

  • Enable nvme SMART as now supported by smartmontools.
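The one-directional partition-to-parent migration described above can be sketched as follows. This is a minimal illustration only: the dict fields (`name`, `parent`, `fstype`, `uuid`, `redirect`) and the helper are simplified assumptions, not the actual scan_disks() data structures.

```python
# Hypothetical sketch: migrate btrfs info from a partition up to its
# parent device, mirroring the single-direction flow described above.
# Field names here are assumptions, not the real scan_disks() layout.

def migrate_partition_info(devices):
    """devices: list of dicts, each a block device as lsblk might report."""
    by_name = {d["name"]: d for d in devices}
    for dev in devices:
        parent_name = dev.get("parent")
        if parent_name and dev.get("fstype") == "btrfs":
            parent = by_name[parent_name]
            # Promote the partition's btrfs identity to its parent disk,
            # so the whole disk is presented as the (redirected) member.
            parent["fstype"] = "btrfs"
            parent["uuid"] = dev["uuid"]
            parent["redirect"] = dev["name"]
    return devices

disks = migrate_partition_info([
    {"name": "sda", "parent": None, "fstype": None, "uuid": None},
    {"name": "sda3", "parent": "sda", "fstype": "btrfs", "uuid": "abcd-1234"},
])
print(disks[0])  # parent now carries the partition's btrfs info
```

With no system-drive special case, the same routine applies to OS and data drives alike, which is what allows the later removal of the parent-to-partition path.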
@phillxnet
Member Author

Developmental context/history is available in the preceding draft PR:

Un special-case system drive btrfs-in-partition treatment #2824 #2830

Testing

An rpm was successfully built and installed on a Leap 15.5 host. So we have all unit tests passing.

We also now have NO Pools on initial install (fresh installs only, such as from our installer, which does not yet include this rpm). And our OS drive is now represented as the parent device, like all data drives: i.e. we only manage partitions via a redirect role, which is now auto-assigned to the OS drive:

OS-drive-as-parent-and-unimported

We also now discourage the import of this inevitably still slightly special Pool. Rockstor never fully manages this pool, as we do not mount it, unlike all other pools: our installer establishes this pool and sets up fstab/systemd to manage it. All other pools are mounted by Rockstor, so this OS Pool import is now designated "Advanced Users only". Note that when imported, as per all data Pools, we create a mount at the top level of the entire Pool in order to have 'say' over it: but for a boot-to-snapshot OS pool, such as our installer sets up, this includes all snapper-managed snapshots, which we purposefully do not surface. That simplification again influences the decisions here: no auto-mount of the OS Pool, and its discouraged import. All in all this will serve the vast majority of users better, as we then further encourage an OS/data separation.
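The auto-assigned redirect role mentioned above could, in simplified form, look something like the following. This is a hedged sketch only: the regex, naming convention, and JSON role format are illustrative assumptions, not Rockstor's actual _update_disk_state() code.

```python
import json
import re

def auto_redirect_role(root_device):
    """If "/" is on a partition (e.g. /dev/sda3 or /dev/nvme0n1p3),
    return a (parent, role) pair redirecting the parent disk to that
    partition; otherwise None. Naming convention is an assumption."""
    part_match = re.match(r"(/dev/(?:[a-z]+|nvme\d+n\d+p))(\d+)$", root_device)
    if part_match is None:
        return None  # "/" is on a whole disk: no redirect needed
    prefix, _part_num = part_match.groups()
    # nvme partitions carry a "p" separator (nvme0n1p3); strip it to
    # recover the parent device name.
    parent = prefix[:-1] if prefix.endswith("p") and "nvme" in prefix else prefix
    # In this sketch a role is stored as JSON, keyed on its purpose.
    role = json.dumps({"redirect": root_device})
    return parent, role

print(auto_redirect_role("/dev/sda3"))
```

Once such a role exists, the OS pool can be handled (including import) by the same code paths as any redirect-role data pool member, which is the point of the change.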

@phillxnet
Member Author

Testing continued

We now have the following on the Pool & Share overview pages:

Pools Overview

No managed Pool - disks available

no-pools-disks-available

Managed Pool/s - disks available

managed-data-pool-disk-available

Managed Pool - NO disks available

managed-pools-no-disks-available

Shares Overview:

No Shares - no managed Pool

no-shares-no-managed-pool

No Shares - but a managed pool exists:

(unchanged from prior behaviour)

no-shares-but-managed-pool-exists

@phillxnet
Member Author

phillxnet commented Apr 18, 2024

OS drive ('ROOT' Pool) imported:

(now strongly discouraged)

ROOT-Pool-imported

Consequent import of the previously surfaced /home subvol

consequent-home-share-import
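When the OS Pool is imported as above, the Pool.role="root" accommodation mentioned in the description could be sketched as follows. This is hypothetical: the filtering predicate and subvolume paths are assumptions, not the real shares_info() logic.

```python
# Hypothetical sketch of the boot-to-snapshot accommodation: for a Pool
# with role="root", subvolumes under snapper's snapshot tree are not
# surfaced as Shares. Names and paths are illustrative assumptions.

def filter_shares(subvols, pool_role):
    if pool_role != "root":
        return list(subvols)
    # Skip snapper-managed snapshots (e.g. ".snapshots/1/snapshot") and
    # the snapshot root itself, which we purposefully do not surface.
    return [s for s in subvols if not s.startswith(".snapshots")]

shares = filter_shares(
    ["home", "opt", ".snapshots", ".snapshots/1/snapshot"], "root"
)
print(shares)  # → ['home', 'opt']
```

This is why the previously surfaced /home subvol imports cleanly while the snapper snapshot tree stays hidden.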

@phillxnet
Member Author

Upgrade from prior Stable installer

Our prior (not last) stable rpm was 4.1.0. An install instance was instantiated from our prior installer (Leap 15.3), subscribed to stable, and all updates applied. This install was then zypper dup'ed via our docs howto instructions:

https://rockstor.com/docs/howtos/15-3_to_15-4.html

Resulting in a Leap 15.4 (was 15.3) now 4.6.1-0 (last stable) Rockstor instance.

An unpublished rpm v5.0.8-2835 was built & signed from this development pull request's branch and placed in a local signed, LAN-accessible repo; which was then added to both zypper and dnf on the above, now 15.4, stable instance. A Web-UI update was then initiated.

Prior to Web-UI update:

An original 4.1.0-0 installer (Leap 15.3) instance, subscribed to stable and moved to 4.6.1-0 (Leap 15.4) via the referenced HowTo. Default system Pool, with a 3-member managed data Pool:

disks-4 1 0-1-leap15 3-to-4 6 1-0-leap15 4

Post Web-UI update:

  • Login again as required.
  • N.B. Ctrl+Shift+r in browser: as recommended in Web-UI.

disks-4 1 0-1-leap15 3-to-4 6 1-0-leap15 4-to-5 0 8-2835

@phillxnet
Member Author

As of our last PR merged into testing (#2829 (comment)), our testing channel was no longer able to produce a successful rpmbuild. Merging this PR: the indicated issues it addresses, and the prior testing channel's intentional breakage, are all resolved by this merge. This puts us back on target for our current next stable milestone:

https://github.com/rockstor/rockstor-core/milestone/27

@phillxnet phillxnet merged commit d53f147 into rockstor:testing Apr 18, 2024
@phillxnet phillxnet deleted the 2824-Un-special-case-system-drive-btrfs-in-partition-treatment branch April 18, 2024 15:28