
pool_name related changes in run_sync_test in uzfs_test_sync.sh #42

Merged 16 commits on May 4, 2018

Conversation

@mynktl (Member) commented Apr 25, 2018

Changes in pool_name in uzfs_test_sync.sh
   - To avoid conflicts between tests.
Changing the build config to USER for ZFS in the travis userspace build
   - This change reduces travis build time for the userspace build.
Avoid running ZTEST in parallel with ZVOL-related tests run by uzfs_test
   - ZTEST and uzfs_test both use vdev files from the /tmp directory. Running
     ZTEST and uzfs_test in parallel leads to two issues:
            1. Panic in ZTEST. With ZTEST and uzfs_test running in
                parallel, MMP writes in ZTEST are delayed significantly,
                which leads to a panic because the spa mode in ZTEST is
                set to PANIC.

             2. ZTEST has a test case that replaces a disk file in a RAIDZ
                pool. In uzfs_zvol_test we import the pool through the CLI
                by scanning files in the /tmp directory. When a replace is
                in progress in ZTEST and an import is done through ZPOOL,
                we get an assertion failure in spa_config_valid for the
                ztest pool, because the on-disk config (at the vdev level)
                and zpool.cache differ.
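The serialization described above can be sketched as a small wrapper that takes an exclusive lock before either binary touches the shared vdev files. This is a minimal illustration, not code from the PR; the lock path and runner invocations are hypothetical.

```shell
#!/bin/sh
# Minimal sketch: serialize ZTEST and uzfs_test so they never touch the
# vdev files under /tmp at the same time. The lock file path and the
# command names in the usage note are illustrative only.
run_serialized() {
    (
        flock -x 9        # block until any other serialized runner finishes
        "$@"              # run the requested test binary while holding the lock
    ) 9>/tmp/uzfs_ztest.lock
}

# Illustrative usage:
#   run_serialized ./ztest
#   run_serialized ./uzfs_test -t sync
```

The PR itself simply avoids scheduling the two in parallel; an explicit lock like this would make the mutual exclusion enforceable rather than conventional.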

mayank added 10 commits April 11, 2018 12:47
- removing jobs related to test_type in Travis file.
- setting build branch to zfs-0.7-release for Travis build
- removing pool/dataset creation in rebuild test

Signed-off-by: mayank <mayank.patel@cloudbyte.com>
…iff.

- fixing a SIGSEGV in uzfs_spa_fini.
	- If the pool is busy, we should not destroy spa->spa_us.
- fixing a race condition in uzfs_spa_fini.
	- If two threads call spa_close at the same time, there is a
	  chance of a race condition in uzfs_spa_fini.
- Changes in test_uzfs.sh
	- display the log if any test case fails
	- removed the init_test/close_test functions

Signed-off-by: mayank <mayank.patel@cloudbyte.com>
Signed-off-by: mayank <mayank.patel@cloudbyte.com>
	txg_update_thread cleanup is handled in uzfs_spa_fini, which
	was being called after spa_deactivate in spa_evict_all.
	In txg_update_thread, dsl_sync_task opens the pool, which performs
	spa_open->uzfs_spa_init if the spa state is POOL_STATE_UNINITIALIZED.
	Since the pool state has already been set to POOL_STATE_UNINITIALIZED
	by spa_deactivate, txg_update_thread executes uzfs_spa_init, which
	causes uzfs_spa_fini to wait forever for the txg_update_thread cleanup.

- Added a restart of zrepl before fio_test in test_uzfs.sh to avoid a bind error in fio
- releasing spa_namespace_lock in uzfs_spa_fini before sleeping to avoid a deadlock with
  txg_update_thread

Signed-off-by: mayank <mayank.patel@cloudbyte.com>
Signed-off-by: mayank <mayank.patel@cloudbyte.com>
- enabling parallel build in travis

Signed-off-by: mayank <mayank.patel@cloudbyte.com>
- changes in test_uzfs.sh
       - exporting the pool in test_uzfs.sh before executing the UZFS_TEST
	 binary, as a pool should be used by only one process.
       - added start_time and end_time to the test_uzfs.sh logs.
       - fixed the run_uzfs_test function, as some tests were not being
	 executed due to an improper if..else statement.
- fix in spa_evict_all regarding uzfs_spa_fini. Since uzfs_spa_init is
  protected by the _KERNEL macro, uzfs_spa_fini should also be protected
  by the _KERNEL macro.
- enabling ztest in the travis kernel build.

Signed-off-by: mayank <mayank.patel@cloudbyte.com>
Signed-off-by: mayank <mayank.patel@cloudbyte.com>
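The if..else fix mentioned in the commit above can be sketched as an explicit dispatch chain. The function body and test-type names here are hypothetical, not taken from test_uzfs.sh.

```shell
#!/bin/sh
# Hypothetical sketch of the run_uzfs_test dispatch fix: an explicit
# if/elif/else chain runs exactly one branch per test type, whereas
# mismatched nested if blocks can silently skip later tests.
run_uzfs_test() {
    test_type="$1"
    if [ "$test_type" = "sync" ]; then
        echo "running sync test"
    elif [ "$test_type" = "rebuild" ]; then
        echo "running rebuild test"
    else
        echo "unknown test type: $test_type" >&2
        return 1
    fi
}
```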
- enabling user config build for zfs in travis userspace build

Signed-off-by: mayank <mayank.patel@cloudbyte.com>
@mynktl mynktl requested review from jkryl and vishnuitta April 25, 2018 17:13
@vishnuitta left a comment

run_zrepl_uzfs_test and run_zrepl_rebuild_uzfs_test need to do an export. They use the same pool and, I think, do not run in parallel, so using the same pool should be fine.

@mynktl (Member, Author) commented Apr 26, 2018

@vishnuitta
In run_zrepl_uzfs_test and run_zrepl_rebuild_uzfs_test, we are sending IOs over the network from uzfs_test to zrepl, so I haven't exported the pool in either test case.

@vishnuitta

You are right, @mynktl.
In run_uzfs_test, pool6 is used twice. It should be OK since we do cleanup, but it may be worth changing its name.
As part of cleanup, can you add 'zpool labelclear' and write zeros to the initial 100 MB of the disk?
And for which pool name did we get the segfault in travis?

@mynktl (Member, Author) commented Apr 26, 2018

@vishnuitta
Thanks for pointing out the duplicate pool_name in run_uzfs_test. I will change the pool_name for that test.
As part of cleanup, we destroy the disk itself in cleanup_uzfs_test, so zpool labelclear may not be needed in this case.
In travis, the script exited with only the assertion error logs; we couldn't figure out which pool_name triggered the assertion.

@vishnuitta commented Apr 26, 2018

Mayank, it's still better to do 'zpool labelclear' and zero out the first 100 MB during cleanup. We can't be sure how ext4 provides the truncated file, i.e., whether it is created in an altogether new place or given the same place on disk without being zeroed. Also, let's print the pool name in our scripts, so that we can find the pool name for which the issue occurs.
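The cleanup step being discussed could look roughly like the sketch below. The file path is illustrative, not the actual name used by cleanup_uzfs_test, and the 'zpool labelclear' call is left commented out since it requires a live zpool binary.

```shell
#!/bin/sh
# Rough sketch of the proposed cleanup: clear any stale ZFS label and
# zero the first 100 MB of the vdev file, so a recycled ext4 extent
# cannot leak an old label into the next test run. DISK_FILE is a
# hypothetical path for illustration.
DISK_FILE="${DISK_FILE:-/tmp/uzfs_test_disk.img}"

cleanup_disk() {
    # zpool labelclear -f "$DISK_FILE"   # where a zpool binary is available
    dd if=/dev/zero of="$DISK_FILE" bs=1M count=100 conv=notrunc 2>/dev/null
}
```

conv=notrunc overwrites the start of the file in place instead of truncating it first, matching the "zero the beginning of the disk" intent.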

@mynktl (Member, Author) commented Apr 26, 2018

Ok.

mayank added 3 commits April 27, 2018 00:00
      - changes related to pool_name
      - adding labelclear + zeroing of the initial 100 MB
        of the disk/file in cleanup_uzfs_test

Signed-off-by: mayank <mayank.patel@cloudbyte.com>
…nsumed by test_uzfs.sh

Signed-off-by: mayank <mayank.patel@cloudbyte.com>
Signed-off-by: mayank <mayank.patel@cloudbyte.com>
@vishnuitta left a comment

These changes are good. But I don't see the change to the import API call in the uzfs_test framework that sets import_args.

@mynktl (Member, Author) commented May 4, 2018

@vishnuitta @jkryl I have updated the PR. Please re-review the changes.

@vishnuitta vishnuitta merged commit 5c1b54b into mayadata-io:zfs-0.7-release May 4, 2018
@vishnuitta: Changes are good.

jkryl pushed a commit that referenced this pull request Jun 26, 2018
* Use of RTE ring buffer API from DPDK in vdev disk AIO backend