Distribution installers (from iso) should do more ZFS #14355
Comments
It's not clear to me what you're referring to when you say "requires an entire drive". Are you thinking of a specific installer which does this? Because OpenZFS certainly doesn't require that. |
I replied, but it may have been lost. |
@mclad and what exactly do you expect from the OpenZFS maintainers? That is a distribution/installer issue. OpenZFS will happily create a pool on a partition if installers provide one, but it is the installer's job to do that. |
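For reference, a minimal sketch of what an installer (or a user) can already do with an existing partition; the device, pool, and dataset names here are illustrative only, not anything OpenZFS mandates:

```sh
# Assumes /dev/sdx2 is an empty partition set aside for ZFS.
zpool create -o ashift=12 -O compression=lz4 -O mountpoint=none rpool /dev/sdx2
# A root dataset an installer could populate; canmount=noauto lets a boot
# manager such as zfsbootmenu choose which boot environment to mount.
zfs create -p -o mountpoint=/ -o canmount=noauto rpool/ROOT/distro
```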
OK, if you think your work is done... why can't I install onto /dev/sdx with ZFS?
Make up your minds whether you (plural) are only ZFS for massive server installations, or whether you provide equivalent support and respect to desktop users.
What do I expect from the ZFS maintainers?
I guess better dialogue with the major distributions, so that their installers can install to ZFS root on a partition of /dev/sdx, or in mirror or raidz configurations.
The Proxmox VE installer is perhaps the best example of what's possible, yet it still requires entire disks.
**Install to root ZFS should be as well supported, and no more complex, than install to e.g. btrfs.**
|
Additionally, and in particular: how many OpenZFS maintainers come from a desktop background as opposed to servers? Yes, ZFS is incredible with dRAID or massive vdev sets of mirrors across a hundred 14 TB drives. But open tools like zfsbootmenu and sanoid bring that functionality to desktops, and this use case will only grow.
On my testing box I have 12 distributions booting seamlessly with zfsbootmenu, for dev and pen-testing purposes. But in every single case I installed to an ext4 partition, then rsynced to waiting ZFS datasets, then chrooted, ran genfstab, installed or built ZFS, updated the initramfs, and so on (roughly the steps sketched below). Use cases such as mine will become more frequent.
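In rough outline, that manual migration looks something like the sketch below; the pool, dataset, and mount paths are placeholders for whatever the box already has:

```sh
# Assumptions: the freshly installed distro sits on ext4 mounted at /mnt/src,
# and the pool "rpool" already exists. All names are examples.
zfs create -p -o mountpoint=/ -o canmount=noauto rpool/ROOT/distro
mkdir -p /mnt/dst
mount -t zfs -o zfsutil rpool/ROOT/distro /mnt/dst
rsync -aHAXx --info=progress2 /mnt/src/ /mnt/dst/
# Then bind-mount /dev, /proc and /sys into /mnt/dst, chroot, regenerate fstab,
# install or build the zfs module for that distro's kernel, and rebuild its
# initramfs so it can import and mount the root dataset at boot.
```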
Advanced installation options would be bliss, such as: install to zroot/ROOT/distro/bootenv.
Meta and off topic: or even just install to a /target I have prepared and mounted previously with whatever firestarter.
|
While servers pay the bills for me right now, my original reason for doing ZFS development was for home use. Many of my early patches were aimed at that. Many of the patches I do today benefit both equally. I still make extensive use of ZFS in my home. That said, a younger version of myself had dismissed the idea of in-place filesystem conversion, since the two filesystems should have different alignment requirements, and I was only thinking about the btrfs tool that supposedly left all data in place. However, thinking about it now: if we assume moving things around is okay as long as restoring to a consistent ext4 state works, then I suspect it is doable. If I think about Microsoft's one-way FAT to NTFS conversion, which does not allow going back, then it seems even more doable, even though trying to fabricate a Merkle tree that is preinitialized with data is likely to be a pain. It would still need some background research that I cannot immediately do, especially since I already have a few projects taking my attention right now, but I will say this: you have piqued my interest. |
Offtopic:
The first thing I thought of when I learned of ZFS, only late last decade, was that I needed it for data integrity. Followed almost immediately by: this will be perfect for my multiboot requirements.
I have an aversion to testing something properly in a VM; I feel I need to run it on metal. ZFS snapshots and now zfsbootmenu make that easy. I'm just waiting until my zroot/ROOT/distro/bootenv hierarchy can be joined by Windows up in there, for cases like legacy VBA code... and an odd game or two.
And when I can run any bootable-on-metal dataset under KVM from whichever other distribution... wake me, I'm obviously dreaming.
|
@mclad still, what do you expect from the OpenZFS maintainers? You can't expect dialogue when there is no will for one to begin with. What you describe is possible without any changes on the OpenZFS side; the only changes needed are on the installer side. So if you want better support in, for example, Ubuntu, go and talk to Canonical. Just remember that because of licensing "issues" only a few distributions have attempted to integrate OpenZFS, and even fewer have brought that work to a somewhat usable state. Don't expect the OpenZFS maintainers to maintain forks of every distribution installer. They won't. It's the installer maintainers' job to add proper support, and they have everything they need from the OpenZFS side to do so. |
Firstly, I expect dialogue; hence this is a feature request, not a bug report. (And just for reference, it's late in GMT+8 and I'm Australian, so some comments may be overly jaunty.)
So because I test a dozen distributions on ZFS 2.1.7 on root with zfsbootmenu... I, as an unknown and relatively unschooled user, should approach Canonical and another dozen distributions? Is that not something the ZFS devs would have the contacts for? Licensing issues are a frigging smokescreen over excuses because it's too hard. FFS, I installed btrfs on a RAID 6 setup in the mid-00s. Btrfs, although not technically comparable in my opinion, is now a breeze to install to mirror or RAID setups compared to back then, and compared to ZFS now. Here's zls from one testing box I'm on currently; zfs list falls short in not exposing the canmount and mounted properties by default. My use case is testing various distributions.
And I have the function zlsm() { zls "$@" | grep -e ' on ' -e ' yes ' ; }
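(zls itself isn't defined anywhere in the thread; presumably it is a small wrapper along the lines below, which adds the missing columns. The exact property list is an assumption.)

```sh
# Hypothetical definition of the zls helper referenced above: zfs list with the
# canmount and mounted columns that the default output omits.
zls()  { zfs list -o name,used,avail,refer,mountpoint,canmount,mounted "$@" ; }
# zlsm then keeps only datasets that are mountable ("on") or mounted ("yes").
zlsm() { zls "$@" | grep -e ' on ' -e ' yes ' ; }
```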
|
Offtopic: #1 on DistroWatch is MX Linux, which does not use systemd by default. |
Proxmox dev here - our installer only supports full disks as installation target on purpose. It's a simple, fast, straight-forward bare metal installer that is not supposed to cover every use case under the sun - its purpose is to get a usable, sane install onto your server in a few minutes without having to answer hundreds of questions. You can always use a live-CD + debootstrap if you want a custom/niche setup that is fully under your control, or re-use the more customizable Debian installer and install Proxmox products on top. |
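For readers landing here, the live-CD + debootstrap route mentioned above is roughly the following shape; the pool layout, target device, and Debian release are placeholders, not a supported Proxmox procedure:

```sh
# From a live environment that ships the zfs userland and kernel module.
zpool create -o ashift=12 -O mountpoint=none -R /mnt rpool /dev/sdx2
zfs create -p -o mountpoint=/ rpool/ROOT/debian
debootstrap bookworm /mnt http://deb.debian.org/debian
# Afterwards: chroot into /mnt, install a kernel plus zfs-initramfs, configure
# networking and a bootloader, then export the pool and reboot.
```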
Nice. So much respect for Proxmox. It's massively used in academic communities, i.e. by poor students such as I.
I know it's an edge use, like most things I do: I install Proxmox to ZFS on the smallest SSD I have, using the option to leave most of the disk unformatted, then make it portable via a GParted copy, Clonezilla, or zfs send/recv operations.
Or I just install proxmox-ve on something such as LMDE 5 on ext4, then rsync to ZFS, which works lovely for a dev box.
Outwith Proxmox, being able to handle firmware such as Nvidia or recent amdgpu makes installers just a little easier than debootstrap/mmdebstrap.
My idea is that an advanced installer would just say: you've mounted /target? Fine, I'll install there; then, advanced user, do what you need in the chroot before rebooting.
Cheers, Mike
|
Offtopic:
With anything Proxmox, the first thing I do is put /var/lib/pve-cluster on its own ZFS dataset. Then I can test PVE with different boot environments while keeping that directory, and hence /etc/pve, the same.
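For illustration, that might look like the following; the pool name is an assumption about this particular box:

```sh
# Keep the pmxcfs backing store on its own dataset, outside any boot
# environment, so every boot environment sees the same /etc/pve content.
zfs create -o mountpoint=/var/lib/pve-cluster rpool/pve-cluster
```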
|
Elsewhere I gave a detailed explanation of how I can have any distribution, even Ubuntu Server 18.04 or Fedora 36, or any Arch, eventually running the latest OpenZFS and zfsbootmenu.
Easy for me now, but not for the inexperienced. All the issue suggestions I make on ZFS come from many, many late nights of experimenting and failing until I don't.
|
Off-topic:
Final offtopic. Fabian, I'll find you or other Proxmox devs elsewhere, but I'll just leave this here briefly while I have time: after installation and dataset creation, or mounting /var/lib/pve-cluster, there is /var/lib/vz/templates for KVM and LXC, which is filled with symlinks to ISOs and tar.gz files from wherever; an old box on iSCSI currently.
I've made the latest OMV run with the latest PVE in LXC rather than KVM, but it's an inelegant hack not worth the effort. I'll reply as much in the PVE and OMV forums, where "can we run OMV in LXC on ZFS" is a never-ending, repeated question.
|
FreeBSD versions 13.1 and greater install to OpenZFS by default. https://docs.freebsd.org/en/books/handbook/book/#bsdinstall-part-zfs
https://docs.freebsd.org/en/books/handbook/book/#bsdinstall-part-manual |
Thanks for the reply. But I quote from that (by the way, excellent) manual:
2.6.4 Guided ZFS partitioning/installation
"This partitioning mode only works with whole disks and will erase the contents of the entire disk."
The Proxmox VE installer is slightly better because it gives the option to leave however much unpartitioned space at the end of the disk(s) the user wants. I like having a recovery distro and swap installed there.
https://pve.proxmox.com/wiki/Installation
Advanced ZFS Configuration Options:
"The installer creates the ZFS pool rpool. No swap space is created, but you can reserve some unpartitioned space on the install disks for swap. You can also create a swap zvol after the installation, although this can lead to problems." (see https://pve.proxmox.com/wiki/ZFS_on_Linux#zfs_swap)
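For completeness, the swap zvol that the wiki cautions about is typically created along these lines; the size and pool name are assumptions, and swap on a zvol can still misbehave under memory pressure, which is the kind of problem alluded to above:

```sh
# Sketch only: an 8 GiB swap zvol on an assumed pool "rpool", using the
# properties commonly recommended for swap workloads.
zfs create -V 8G -b "$(getconf PAGESIZE)" \
    -o compression=zle -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o secondarycache=none rpool/swap
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
```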
|
Edit: the below describes distribution actions outwith OpenZFS.
OpenZFS should facilitate ZFS as an option, as btrfs is, on multiple distributions, without requiring an entire drive to be wiped.
Describe the feature you would like to see added to OpenZFS
Users can install to root ZFS, without wiping an entire disk, via multiple distributions' live installer ISOs.
How will this feature improve OpenZFS?
Users will no longer fear installing a distribution to ZFS because of the current behaviour of requiring a full disk.
Additional context
Distribution installers that actually provide ZFS as an option should not need an entire drive.