
Distribution installers (from iso) should do more ZFS #14355

Open
mcladams opened this issue Jan 6, 2023 · 17 comments
Labels
Type: Feature Feature request or new feature

Comments

@mcladams

mcladams commented Jan 6, 2023

Edit: the text below describes distribution actions outside of OpenZFS.

OpenZFS should facilitate ZFS as an option, as btrfs is, on multiple distributions, without requiring an entire drive to be wiped.

Describe the feature you would like to see added to OpenZFS

Users can install to root-on-ZFS via multiple distribution live installer ISOs without wiping an entire disk.

How will this feature improve OpenZFS?

Users will no longer be afraid to install a distribution to ZFS due to the current behaviour of requiring a full disk.

Additional context

Distribution installers that actually provide ZFS as an option should not need an entire drive.

@mcladams mcladams added the Type: Feature Feature request or new feature label Jan 6, 2023
@rincebrain
Contributor

rincebrain commented Jan 6, 2023

It's not clear to me what you're referring to when you say "requires an entire drive". Are you thinking of a specific installer which does this? Because OpenZFS certainly doesn't require that.

@mcladams
Author

mcladams commented Jan 6, 2023

I replied, but it may have been lost.
The few distribution installers that offer to install to ZFS require a full disk.
Am I "thinking of a specific distribution installer that requires a full disk"? All of them.
I'll be happily proved wrong.

@mskarbek
Contributor

mskarbek commented Jan 6, 2023

@mclad and what exactly do you expect from the OpenZFS maintainers? That is a distribution/installer issue. OpenZFS will happily create a pool on a partition if the installer provides one, but it is the installer's job to do that.
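For context, creating a pool on a single partition rather than a whole disk is ordinary ZFS usage. A minimal sketch follows; the device path, pool name, and dataset layout are hypothetical examples, the commands require root, and they will destroy any data on the target partition:

```sh
# Given an existing GPT partition (e.g. /dev/sda3), create the pool on it
# directly, leaving the other partitions on the disk untouched.
zpool create -o ashift=12 \
    -O compression=lz4 -O acltype=posixacl -O xattr=sa \
    -O mountpoint=none \
    rpool /dev/sda3

# Root dataset for the distribution; canmount=noauto lets the
# initramfs/bootloader decide which boot environment to mount at /.
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/distro
```

Nothing here asks for the whole disk; an installer that insists on one is imposing its own constraint, not an OpenZFS one.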

@mcladams
Author

mcladams commented Jan 6, 2023 via email

@mcladams
Author

mcladams commented Jan 6, 2023 via email

@ryao
Contributor

ryao commented Jan 7, 2023

While servers pay the bills for me right now, my original reason for doing ZFS development was for home use. Many of my early patches were aimed at that. Many of the patches I do today benefit both equally. I still make extensive use of ZFS in my home.

That said, a younger version of myself had dismissed the idea of in place filesystem conversion since the two different filesystems should have different alignment requirements and I was only thinking about the btrfs tool that supposedly left all data in place. However, thinking about it now: if we assume moving things around is okay as long as restoring it to a consistent ext4 state works, then I suspect that it is doable. If I think about Microsoft's one way FAT to NTFS conversion that does not allow for going back, then it seems even more doable, even though trying to fabricate a merkle tree that is preinitialized with data is likely to be a pain.

It would still need some background research that I cannot immediately do, especially since I already have a few projects taking my attention right now, but I will say this. You have piqued my interest.

@mcladams
Author

mcladams commented Jan 7, 2023 via email

@mskarbek
Contributor

mskarbek commented Jan 7, 2023

@mclad still, what do you expect from the OpenZFS maintainers? You can't expect dialogue when there is no will for one from the beginning. What you describe is possible without any changes on the OpenZFS side. The only changes needed are on the installer side, so if you want better support in, for example, Ubuntu, go and talk to Canonical. Just remember that because of licensing "issues" only a few distributions have attempted to integrate OpenZFS, and even fewer have brought that work to a somewhat usable state. Don't expect the OpenZFS maintainers to maintain forks of every distribution installer. They won't. It's the installer maintainers' job to add proper support; they have everything they need from the OpenZFS side to do so.

@mcladams
Author

mcladams commented Jan 7, 2023

@mskarbek

Firstly, I expect dialogue; hence this is a feature request, not a bug report.

(And just for reference, it's late in GMT+8 and I'm Australian, so some comments may be overly jaunty.)
Zeroly, the presumption that, as I'm unknown, I am only now making issues on GitHub and elsewhere and replying to others does not preclude the fact that I've read almost everything from the docs sites and links, and have been testing ZFS edge cases for many years.
Your suggestion:

still, what do you expect from the OpenZFS maintainers? You can't expect dialogue when there is no will for one from the beginning. What you describe is possible without any changes on the OpenZFS side. The only changes needed are on the installer side, so if you want better support in, for example, Ubuntu, go and talk to Canonical.

So because I test a dozen distributions on ZFS 2.1.7 on root with zfsbootmenu... I, as an unknown, relatively unschooled person, should approach Canonical and another dozen distributions? Is that not something the ZFS devs would have the contacts for?

Licensing issues are a f..rigging smokescreen of excuses because it's too hard.

FFS, I installed btrfs on a RAID6 setup in the mid-'00s. btrfs, although not technically comparable in my opinion, is now a breeze to install to mirror or RAID setups compared with back then, and with ZFS now.

Here's zls from one testing box I'm on currently. zfs list falls short in not exposing the canmount and mounted properties by default. My use case is testing various distributions.
(alias zls='zfs list -o name,used,referenced,canmount,mounted,mountpoint')

root@lunar-gamer:~# zls -r zroot
NAME                                 USED     REFER  CANMOUNT  MOUNTED  MOUNTPOINT
zroot                                121G      244K  off       no       none
zroot/DATA                          3.25G      288K  off       no       /data
zroot/DATA/media                    2.19G     2.09G  on        yes      /data/media
zroot/DATA/projects                  480M      303M  on        yes      /data/projects
zroot/DATA/projects/ref              192K      192K  on        yes      /data/projects/ref
zroot/DATA/storage                   606M      606M  -         -        -
zroot/DATA/vm                        192K      192K  on        yes      /data/vm
zroot/DATA/zvol                      192K      192K  off       no       none
zroot/LINUX                         5.81G      192K  off       no       /
zroot/LINUX/opt                     3.85G     3.67G  on        yes      /opt
zroot/LINUX/srv                      272K      192K  on        yes      /srv
zroot/LINUX/usr                      245M      192K  off       no       /usr
zroot/LINUX/usr/local                245M      241M  on        yes      /usr/local
zroot/LINUX/var                     1.72G      192K  off       no       /var
zroot/LINUX/var/lib                 1.72G      192K  off       no       /var/lib
zroot/LINUX/var/lib/containers       192K      192K  off       no       /var/lib/containers
zroot/LINUX/var/lib/snapd           1.72G      520K  noauto    no       /var/lib/snapd
zroot/LINUX/var/lib/snapd/snaps     1.72G     1.72G  noauto    no       /var/lib/snapd/snaps
zroot/ROOT                           111G      192K  off       no       none
zroot/ROOT/debian11                 24.8G      192K  off       no       none
zroot/ROOT/debian11/console         1.19G     1.06G  noauto    no       /
zroot/ROOT/debian11/home            4.09G     4.00G  noauto    no       /home
zroot/ROOT/debian11/mx21-fluxbox    1.49G     2.35G  noauto    no       /
zroot/ROOT/debian11/pve-console     2.34G     3.18G  noauto    no       /
zroot/ROOT/debian11/pve-mystery     3.29G     3.29G  noauto    no       /
zroot/ROOT/debian11/pve30-cli       5.02G     5.61G  noauto    no       /
zroot/ROOT/debian11/pve30-gnm       6.82G     6.88G  noauto    no       /
zroot/ROOT/debian11/root             527M      525M  noauto    no       /root
zroot/ROOT/debtesting               18.8G      192K  off       no       none
zroot/ROOT/debtesting/console       1.36G     1.20G  noauto    no       /
zroot/ROOT/debtesting/home           548M      492M  noauto    no       /home
zroot/ROOT/debtesting/kaisen_kde    87.7M     14.6G  on        no       none
zroot/ROOT/debtesting/kaisen_lxqt   16.8G     14.6G  noauto    no       /
zroot/ROOT/debtesting/root          44.7M     33.5M  noauto    no       /root
zroot/ROOT/fedora36                 12.5G      192K  off       no       none
zroot/ROOT/fedora36/home             400M      400M  noauto    no       /home
zroot/ROOT/fedora36/nobara          12.2G     8.89G  noauto    no       /
zroot/ROOT/fedora36/root             584K      308K  noauto    no       /root
zroot/ROOT/ubuntu2204               8.48G      192K  off       no       /
zroot/ROOT/ubuntu2204/gnome         7.17G     5.33G  noauto    no       /
zroot/ROOT/ubuntu2204/home           612M      333M  noauto    no       /home
zroot/ROOT/ubuntu2204/root           201M      199M  noauto    no       /root
zroot/ROOT/ubuntu2204/server         519M     5.98G  noauto    no       /
zroot/ROOT/ubuntu2304               41.3G      192K  off       no       none
zroot/ROOT/ubuntu2304/gnome-nosnap  25.7G     15.1G  noauto    no       /
zroot/ROOT/ubuntu2304/home          6.27G     2.41G  noauto    yes      /home
zroot/ROOT/ubuntu2304/root          20.9M     14.6M  noauto    yes      /root
zroot/ROOT/ubuntu2304/studio-kde    9.29G     11.2G  noauto    yes      /
zroot/ROOT/void                     5.31G      192K  off       no       none
zroot/ROOT/void/home                 192K      192K  noauto    no       /home
zroot/ROOT/void/root                 192K      192K  noauto    no       /root
zroot/ROOT/void/void-xcfe           5.31G     5.31G  noauto    no       /

And I have a function zlsm() { zls "$@" | grep -e ' on ' -e ' yes ' ; }

root@lunar-gamer:~# zlsm
vault/data/media                    4.39G      302M  on        yes      /data/media
vault/data/opt                        96K       96K  on        yes      /data/opt
vault/devops/PVE/vz                 89.1G     5.01G  on        yes      /var/lib/vz
vault/media/APP/downloads           53.0G     53.0G  on        yes      /share/downloads
vault/media/APP/glob                20.6G      104G  on        yes      /share/glob
vault/media/APP/library_pc           176G      176G  on        yes      /share/library_pc
vault/media/LINUX/lxsteam           2.08G     1.58G  on        yes      /home/mike/.local/Steam
vault/media/MUSIC/dj_bylabel         167G      167G  on        yes      /share/dj_bylabel
vault/media/video/library            139G      139G  on        yes      /share/library
zroot/DATA/media                    2.19G     2.09G  on        yes      /data/media
zroot/DATA/projects                  480M      303M  on        yes      /data/projects
zroot/DATA/projects/ref              192K      192K  on        yes      /data/projects/ref
zroot/DATA/vm                        192K      192K  on        yes      /data/vm
zroot/LINUX/opt                     3.85G     3.67G  on        yes      /opt
zroot/LINUX/srv                      272K      192K  on        yes      /srv
zroot/LINUX/usr/local                245M      241M  on        yes      /usr/local
zroot/ROOT/debtesting/kaisen_kde    87.7M     14.6G  on        no       none
zroot/ROOT/ubuntu2304/home          6.27G     2.41G  noauto    yes      /home
zroot/ROOT/ubuntu2304/root          20.9M     14.6M  noauto    yes      /root
zroot/ROOT/ubuntu2304/studio-kde    9.29G     11.2G  noauto    yes      /
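The grep filter in zlsm can be sanity-checked against canned input, without a live pool; the two rows below are hypothetical samples in the same column layout, and the filter keeps any row with canmount on or mounted yes:

```shell
# Two sample zls rows (hypothetical, not a live pool): one mounted, one not.
sample='zroot/LINUX/opt      3.85G  3.67G  on      yes  /opt
zroot/ROOT/void/home  192K   192K   noauto  no   /home'

# Same filter as zlsm: keep rows containing " on " or " yes ".
printf '%s\n' "$sample" | grep -e ' on ' -e ' yes '
```

Only the /opt row survives the filter; the noauto/no row is dropped, which matches the live output above.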

@mcladams
Author

mcladams commented Jan 7, 2023

Off-topic:
Other test boxes are more Arch, Void and Nix focused. Let's face it, Canonical has killed Ubuntu for many with zsys and snapd; and given Arch + KDE on the Steam Deck, Arch will eventually win.

#1 on DistroWatch is MX Linux, which does not use systemd by default
#2 is EndeavourOS, based on Arch
#whatever is Void Linux, which natively installs zfsbootmenu, which I use on every system by default after some testing last year. I can't explain the pleasure I felt finally being able to run apt purge zsys

@Fabian-Gruenbichler
Contributor

Proxmox dev here - our installer only supports full disks as an installation target on purpose. It's a simple, fast, straightforward bare-metal installer that is not supposed to cover every use case under the sun - its purpose is to get a usable, sane install onto your server in a few minutes without having to answer hundreds of questions. You can always use a live CD + debootstrap if you want a custom/niche setup that is fully under your control, or re-use the more customizable Debian installer and install Proxmox products on top.

@mcladams
Author

mcladams commented Jan 9, 2023 via email

@mcladams
Author

mcladams commented Jan 9, 2023 via email

@mcladams
Author

mcladams commented Jan 9, 2023 via email

@mcladams
Author

mcladams commented Jan 9, 2023 via email

@grahamperrin
Contributor

FreeBSD versions 13.1 and greater install to OpenZFS by default.

https://docs.freebsd.org/en/books/handbook/book/#bsdinstall-part-zfs


#14355 (comment)

… should not need an entire drive

https://docs.freebsd.org/en/books/handbook/book/#bsdinstall-part-manual

@mcladams
Author

mcladams commented Jan 21, 2023 via email
