
Some tests fail to cleanup when existing zpools are imported #6627

Closed
stewartadam opened this issue Sep 10, 2017 · 2 comments · Fixed by #6632
Labels
Component: Test Suite Indicates an issue with the test framework or a test case

Comments


stewartadam commented Sep 10, 2017

System information

Type Version/Name
Distribution Name Fedora
Distribution Version 26
Linux Kernel 4.12.9-300.fc26.x86_64
Architecture x86_64
ZFS Version 0.7.1-1 (with encryption patches)
SPL Version 0.7.1-1

I've installed SPL from the upstream f26 repo and compiled 0.7.1 from source, patching in:

  • Add libtpool (thread pools) (46364cb)
  • Send / Recv Fixes following b52563 (9b84076)
  • Native Encryption for ZFS on Linux (b525630)

While running the test suite to troubleshoot #6621, I observed several failures that were not present when I ran the same RPM packages in a VM with no other pools available. For example, functional/acl/posix/cleanup failed to clean up due to a grep: Unmatched ( or \( error, which appeared to impact subsequent tests:

SUCCESS: userdel staff1
SUCCESS: groupdel zfsgrp
grep: Unmatched ( or \(
grep: write error: Broken pipe
rm: cannot remove '/var/tmp/testdir': Device or resource busy
ERROR: rm -rf /var/tmp/testdir exited 1
NOTE: Performing test-fail callback (/usr/share/zfs/zfs-tests/callbacks/zfs_dbgmsg.ksh)

This is the snippet from /usr/share/zfs/zfs-tests/include/libtest.shlib that appears to cause the issue:

exclude=`eval echo \"'(${KEEP})'\"`
ALL_POOLS=$(zpool list -H -o name \
    | grep -v "$NO_POOLS" | egrep -v "$exclude")

In my case I had three pools, so KEEP was set to "data data-raidz ssd-scratch".
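A space-separated KEEP value interpolated into an eval-built egrep pattern is fragile: pool names and the generated "(...)" wrapper get interpreted as regex syntax. A hypothetical, self-contained sketch (no zpool required; the pool names are taken from this report) shows how filtering with fixed-string, whole-line matching sidesteps the problem entirely:

```shell
#!/bin/bash
# Hypothetical sketch: filter a pool list against a space-separated KEEP
# variable using fixed-string matching, so pool names are never parsed
# as regex metacharacters. This is an illustration, not the actual fix
# that landed in #6632.
KEEP="data data-raidz ssd-scratch"
pools="data
data-raidz
ssd-scratch
testpool1"

# -F: treat patterns literally, -x: match whole lines, -v: invert,
# -f: read one pattern per line (fed via bash process substitution).
remaining=$(printf '%s\n' $pools | grep -Fxvf <(printf '%s\n' $KEEP))
echo "$remaining"
```

With this approach, a pool name containing characters like "(" or "." would still be matched literally rather than triggering a grep syntax error.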


stewartadam commented Sep 10, 2017

functional/bootfs/bootfs_004_neg also logs errors because the test file path is not prefixed with the test directory in its cleanup step:

13:27:11.67 Invalid pool names are rejected by zpool set bootfs
13:27:11.67 NOTE: Performing local cleanup via log_onexit (cleanup)
13:27:11.67 rm: cannot remove '/bootfs_004.4047.dat': No such file or directory

Code:

function cleanup {
	if poolexists $POOL; then
		log_must zpool destroy $POOL
	fi
	rm /bootfs_004.$$.dat
}

That should read rm $TESTDIR/bootfs_004.$$.dat to match the earlier mkfile invocation.
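A minimal, self-contained sketch of the corrected path handling (mktemp stands in for the ZTS-provided $TESTDIR, and touch stands in for the earlier mkfile call; poolexists/log_must are omitted since they only exist inside the test suite):

```shell
#!/bin/bash
# Sketch of the corrected cleanup path logic. TESTDIR here is a mktemp
# stand-in for the ZTS variable of the same name, and touch stands in
# for the mkfile invocation in the test's setup.
TESTDIR=$(mktemp -d)
datafile="$TESTDIR/bootfs_004.$$.dat"
touch "$datafile"     # setup creates the file under $TESTDIR

# Cleanup must use the same $TESTDIR prefix; the unprefixed
# rm /bootfs_004.$$.dat in the original code looks in / instead.
rm -f "$datafile"
[ ! -e "$datafile" ] && echo "cleanup ok"
rmdir "$TESTDIR"
```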

@loli10K loli10K added the Component: Test Suite Indicates an issue with the test framework or a test case label Sep 11, 2017
@behlendorf
Contributor

@stewartadam I'd definitely believe there are still some lingering issues like this in the test suite. The ZTS was adapted from Illumos, so it may carry slightly different assumptions about the environment, and of course bugs. It would be great if you could open PRs for any problems you uncover so we can get those issues addressed.

behlendorf pushed a commit that referenced this issue Sep 25, 2017
* Add 'zfs bookmark' coverage (zfs_bookmark_cliargs)

 * Add OpenZFS 8166 coverage (zpool_scrub_offline_device)

 * Fix "busy" zfs_mount_remount failures

 * Fix bootfs_003_pos, bootfs_004_neg, zdb_005_pos local cleanup

 * Update usage of $KEEP variable, add get_all_pools() function

 * Enable history_008_pos and rsend_019_pos (non-32bit builders)

 * Enable zfs_copies_005_neg, update local cleanup

 * Fix zfs_send_007_pos (large_dnode + OpenZFS 8199)

 * Fix rollback_003_pos (use dataset name, not mountpoint, to unmount)

 * Update default_raidz_setup() to work properly with more than 3 disks

 * Use $TEST_BASE_DIR instead of hardcoded (/var)/tmp for file VDEVs

 * Update usage of /dev/random to /dev/urandom

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Issue #6086 
Closes #5658 
Closes #6143 
Closes #6421 
Closes #6627 
Closes #6632
FransUrbo pushed a commit to FransUrbo/zfs that referenced this issue Apr 28, 2019