expandsize=16.0E #1468
Please see thread on ZoL list with the same subject for possible reason.
Where does this stand? Is this still an issue or was it explained?
It's still an issue.
Have you manually run
I have now, and it's still 16.0E.
Somewhere it's definitely being miscalculated, we'll need to run it down.
There is currently a subtle bug in the SA implementation which can crop up and which prevents us from safely using multiple variable-length SAs in one object. Fortunately, the only existing use case for this is symlinks with SA-based xattrs. Therefore, until the root cause in the SA code can be identified and fixed, we prevent adding SA xattrs to symlinks.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue openzfs#1468
I can't reproduce this on my test rig with latest HEAD.
I ran into this today.
I have exactly the same problem: I am running zol-0.6.5.8 and, after migrating a file-based zpool to a mirror zpool (the files were 60GB, the drives are 64GB), I set autoexpand=on for the pool and ran `zpool online -e` for both disks. Can this issue be reopened?
Reopening. It would be helpful if someone could come up with a simple test case which reproduces this.
As a follow-up, mine no longer shows this way.
Some data on the same problem, same steps and procedure, on different pools and disks.
Commands and data:
This may be fixed in 2e215fe.
Closing, this should be fixed.
I am experiencing this on the current Git (0.8.0-rc2_22_gd649604).
It's worth noting there's actually a lot of disk size variation. I have some 4TB disks and some 8TB disks. Nine raidz2 arrays are made of 4TB disks, and three are made of 8TB disks. One of the "4TB raidz2" arrays is made of 9x 4TB disks and 1x 8TB disk because... well, that happened. The special mirror disks also have different sizes.
The issue is caused by a small discrepancy between how userland creates the partition layout and how the kernel estimates available space:

* zpool command: subtract 9M from the usable device size, then align to a 1M boundary. 9M is the sum of the 1M "start" partition alignment and the 8M EFI "reserved" partition.
* kernel module: subtract 10M from the device size. 10M is the sum of the 1M "start" partition alignment, the 1M "end" partition alignment, and the 8M EFI "reserved" partition.

For devices where the number of sectors is not a multiple of the alignment size, the zpool command will create a partition layout which reserves less than 1M after the 8M EFI "reserved" partition:

```
Disk /dev/sda: 1024 MiB, 1073739776 bytes, 2097148 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 49811D40-16F4-4E41-84A9-387703950D7F

Device       Start     End Sectors  Size Type
/dev/sda1     2048 2078719 2076672 1014M Solaris /usr & Apple ZFS
/dev/sda9  2078720 2095103   16384    8M Solaris reserved 1
```

When the kernel module vdev_open()s the device, its max_asize ends up being slightly smaller than asize: this results in a huge number (16E) being reported by metaslab_class_expandable_space(). This change prevents bdev_max_capacity() from returning a size smaller than bdev_capacity().

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: Sara Hartse <sara.hartse@delphix.com>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #1468
Closes #8391
I have (and had) autoexpand=on, but still get expandsize=16.0E. I replaced
three 1.5TB disks with three 3TB disks about a month ago, and I did get extra
space (I'm not 100% sure I got the size I expected, but...).
How much space is this actually, and how do I find out where this available
space is?
I found Illumos #1948 (ZoL pull #908), but that's closed and doesn't seem to
have been merged into ZoL. From that, it seems that '16E' just means undefined. Do I
actually have free/unallocated space or not?