Exclude special allocation class buffers from L2ARC #12285
Conversation
Force-pushed from a08c926 to 95c93d2
Thanks for proposing a fix for this, a few thoughts inline.
Force-pushed from 95c93d2 to 427da3d
Force-pushed from 0550f15 to 0fd076f
This looks good to me, although I didn't test it locally to confirm it behaves as intended.
It seems special allocation class buffers are still dumped to L2ARC eventually after continuous reading.
In a VM where all buffers are stored on a special vdev, L2ARC caches no buffers only when both … Regarding the …
I believe we should not be caching buffers with … The …
It looks like the previous behavior was to cache holes, but I agree there's not much value in that. Let's also refactor …
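The point about holes above can be sketched in code. This is an illustrative simplification, not the actual OpenZFS implementation: `blkptr_t` is reduced to a single field, and `bp_is_hole()` only mirrors the spirit of the real `BP_IS_HOLE()` macro.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the on-disk block pointer; the real blkptr_t is larger. */
typedef struct blkptr {
	unsigned long long blk_birth;	/* 0 for a hole (unallocated range) */
} blkptr_t;

/* Mirrors the spirit of BP_IS_HOLE(): a hole has no allocated birth txg. */
static bool
bp_is_hole(const blkptr_t *bp)
{
	return (bp == NULL || bp->blk_birth == 0);
}

/* Holes carry no data, so there is nothing worth writing to the L2ARC. */
static bool
should_l2cache(const blkptr_t *bp)
{
	return (!bp_is_hole(bp));
}
```

A hole caches nothing useful, so excluding it costs nothing and saves L2ARC space.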
Force-pushed from c5af38e to 30a88a3
30a88a3: Refactored …
At first glance I wondered if we could combine these three functions, but really they're just different enough to make that pretty awkward. So I think this is fine.
Thanks for the change; I hope it will be integrated soon and land in the 2.1.x branch as well! I patched the change in and am testing it on my system. I noticed some odd behavior, which I asked about on the zfs-discuss@ mailing list; I don't know whether it is related to these changes or not. Interestingly: …

Update: after a reboot, l2_hdr_size looks correct: 44575872 for 68G of L2ARC usage with a recordsize of 128K.

Update 2: since l2_hdr_size counts only the bare l2arc headers, the numbers make sense to me now. All good, thanks for bringing this feature in.
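For reference, l2arc_exclude_special is an ordinary ZFS module parameter, and the l2_hdr_size counter discussed above lives in arcstats. The paths below assume a Linux system with the zfs module loaded; this is a usage sketch, not part of the patch.

```shell
# Exclude special/dedup allocation class buffers from L2ARC (1 = exclude).
echo 1 > /sys/module/zfs/parameters/l2arc_exclude_special

# Watch the L2ARC header accounting mentioned in the comment above.
grep -E 'l2_hdr_size|l2_size' /proc/spl/kstat/zfs/arcstats
```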
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) that allows the special and dedup allocation classes to specify that the L2ARC should not be used. However, we pass the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call to dbuf_is_l2cacheable() used db->db_blkptr directly, so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into the function. If the BP passed is NULL, the default behavior of dbuf_is_l2cacheable() remains the same: it just uses db->db_blkptr.

However, the test failure was caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2ARC. This caused Direct IO reads to take place, filling the L2ARC only with indirect blocks instead of data blocks. This ultimately led to a failure for this test, with verify_trim_io reporting: "Too few trim IOs issued 2/5". So I updated the test case to not use Direct IO, since we want to fill up the L2ARC with data buffers using random_reads.fio. The logic that checks the number of trims is now correct.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
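The NULL-fallback described in the commit message can be sketched as below. This is a simplified illustration, not the real OpenZFS code: `dmu_buf_impl_t` and `blkptr_t` are cut down to one field each, and `blk_on_special` is a hypothetical stand-in for the real "is this BP on a special-class vdev" check.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins; the real structures are much larger. */
typedef struct blkptr {
	bool blk_on_special;	/* hypothetical: allocated on a special vdev */
} blkptr_t;

typedef struct dmu_buf_impl {
	blkptr_t *db_blkptr;
} dmu_buf_impl_t;

/* Module parameter introduced by PR openzfs#12285. */
static int l2arc_exclude_special = 1;

/*
 * Sketch of the fixed dbuf_is_l2cacheable(): the caller may pass the BP
 * it actually read (e.g. for Direct IO), and passing NULL falls back to
 * the dbuf's own block pointer, preserving the previous behavior.
 */
static bool
dbuf_is_l2cacheable(dmu_buf_impl_t *db, blkptr_t *bp)
{
	if (bp == NULL)
		bp = db->db_blkptr;	/* default: old behavior */
	if (bp == NULL)
		return (false);
	if (l2arc_exclude_special && bp->blk_on_special)
		return (false);
	return (true);
}
```

The key design point is that the explicit `bp` argument lets dbuf_read_impl() pass the block pointer it is actually reading, while all existing callers can pass NULL and keep the original semantics.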
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to allow for special or dedup allocation class to specify the L2 ARC should not be used. However, we pass along the BP for the dbuf from dbuf_read() to dbuf_read_impl() to account for Direct IO writes. The previous call for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not take into account the BP passed from dbuf_read_impl(). I updated this so the BP is now passed into this function. If the BP passed is NULL, then the default behavior of dbuf_is_l2cacheable() remains the same by just using the db->db_blkptr. However, the test failure was being caused by trim_l2arc.ksh setting DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This caused Direct IO reads to take place only filling up the L2 ARC with indirect blocks instead of data blocks. This ultimately led to a failure for this test due to verify_trim_io getting: Too few trim IOs issued 2/5 So I update the test case to not use Direct IO as we are wanting to fill up the L2 ARC with data buffers using random_reads.fio. This allows for the logic of checking the number of trims to be correct now. Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
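The BP-fallback behavior described above can be sketched as follows. This is a minimal illustration, not the actual OpenZFS implementation: the struct layouts, the on_special_vdev field, and the _sketch function name are all simplifications introduced here.

```c
#include <stddef.h>

/* Simplified stand-ins for the real OpenZFS structures (hypothetical). */
typedef struct blkptr { int on_special_vdev; } blkptr_t;
typedef struct dbuf   { blkptr_t *db_blkptr; } dbuf_t;

/* Module parameter added by PR openzfs#12285. */
int l2arc_exclude_special = 1;

/*
 * If the caller supplies a BP (e.g. dbuf_read_impl() passing one down for
 * a Direct IO write), use it; if the BP is NULL, fall back to the dbuf's
 * own block pointer, preserving the previous default behavior.
 */
static int
dbuf_is_l2cacheable_sketch(dbuf_t *db, blkptr_t *bp)
{
	if (bp == NULL)
		bp = db->db_blkptr;
	if (bp == NULL)
		return (0);
	if (l2arc_exclude_special && bp->on_special_vdev)
		return (0);
	return (1);
}
```

The key point is only the first branch: the explicit BP takes priority, so a Direct IO read path that resolves a different BP than db->db_blkptr is still checked against the exclusion rule.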
Motivation and Context
Closes #11761.
Description
Special allocation class vdevs may have roughly the same performance as
L2ARC vdevs, so caching their contents in L2ARC gains little. Exclude buffers stored on special allocation class vdevs from being cacheable in L2ARC.
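The exclusion check over allocation classes can be sketched like this. The enum, function name, and constants below are illustrative only, not the actual OpenZFS definitions; l2arc_exclude_special is the real module parameter this PR adds.

```c
/* Hypothetical allocation-class enum for illustration. */
typedef enum {
	CLASS_NORMAL,
	CLASS_SPECIAL,
	CLASS_DEDUP
} vdev_class_t;

/* Module parameter added by this PR; nonzero enables the exclusion. */
int l2arc_exclude_special = 1;

/*
 * Buffers whose block lives on a special or dedup class vdev are not
 * worth caching in L2ARC when the tunable is set, since those vdevs
 * are typically as fast as the cache device itself.
 */
static int
class_is_l2cacheable(vdev_class_t class)
{
	if (l2arc_exclude_special &&
	    (class == CLASS_SPECIAL || class == CLASS_DEDUP))
		return (0);
	return (1);
}
```

Setting the tunable to 0 restores the previous behavior, where all classes remain eligible for L2ARC.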
How Has This Been Tested?
Types of changes
Checklist:
Signed-off-by