
Exclude special allocation class buffers from L2ARC #12285

Merged (1 commit, Nov 11, 2021)

Conversation

gamanakis
Contributor

@gamanakis gamanakis commented Jun 26, 2021

Motivation and Context

Closes #11761.

Description

Special allocation class vdevs may have roughly the same performance as
L2ARC devices, so caching their buffers in L2ARC provides little benefit.
Exclude those buffers from being cacheable on L2ARC.
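The idea of the change can be sketched in a few lines of C. This is a simplified, hypothetical illustration, not the actual ZFS code: the types, the `buf_is_l2cacheable()` helper, and the `ALLOC_BIAS_*` enum are made up for the example; only the `l2arc_exclude_special` tunable name comes from the PR itself.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative allocation-class bias, loosely mirroring the special
 * and dedup vdev classes discussed in this PR. */
typedef enum {
	ALLOC_BIAS_NONE,
	ALLOC_BIAS_SPECIAL,
	ALLOC_BIAS_DEDUP
} alloc_bias_t;

/* Stands in for the new l2arc_exclude_special module parameter. */
static int l2arc_exclude_special = 1;

/* A buffer is L2ARC-eligible only if secondarycache allows it and,
 * when the tunable is set, its data does not live on a special or
 * dedup allocation class vdev. */
static bool
buf_is_l2cacheable(alloc_bias_t bias, bool secondarycache_on)
{
	if (!secondarycache_on)
		return (false);
	if (l2arc_exclude_special &&
	    (bias == ALLOC_BIAS_SPECIAL || bias == ALLOC_BIAS_DEDUP))
		return (false);
	return (true);
}
```

With the tunable cleared, special-class buffers become eligible again, matching the opt-in nature of a module parameter.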

How Has This Been Tested?

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

@gamanakis gamanakis marked this pull request as ready for review June 26, 2021 18:11
@gamanakis gamanakis marked this pull request as draft June 26, 2021 23:09
@gamanakis gamanakis changed the title Exclude special allocation class buffers from L2ARC [WIP] Exclude special allocation class buffers from L2ARC Jun 26, 2021
@gamanakis gamanakis force-pushed the l2arc_exclude_special branch 5 times, most recently from a08c926 to 95c93d2 Compare June 28, 2021 10:28
@gamanakis gamanakis changed the title [WIP] Exclude special allocation class buffers from L2ARC Exclude special allocation class buffers from L2ARC Jun 28, 2021
@gamanakis gamanakis marked this pull request as ready for review June 28, 2021 16:35
Contributor

@behlendorf behlendorf left a comment


Thanks for proposing a fix for this, a few thoughts inline.

@behlendorf behlendorf added the Status: Code Review Needed Ready for review and testing label Jun 28, 2021
@gamanakis gamanakis force-pushed the l2arc_exclude_special branch from 95c93d2 to 427da3d Compare June 28, 2021 20:09
@gamanakis gamanakis force-pushed the l2arc_exclude_special branch 3 times, most recently from 0550f15 to 0fd076f Compare June 29, 2021 10:07
Contributor

@behlendorf behlendorf left a comment


This looks good to me, although I didn't test it locally to confirm it behaves as intended.

@behlendorf behlendorf requested a review from don-brady June 29, 2021 19:34
@gamanakis
Contributor Author

gamanakis commented Jun 30, 2021

It seems special allocation class buffers are still eventually written to L2ARC after continuous reading.

@gamanakis
Contributor Author

gamanakis commented Jun 30, 2021

In a VM where all buffers are stored on a special vdev, L2ARC stops caching buffers only when both dbuf_is_l2cacheable() and
DNODE_LEVEL_IS_L2CACHEABLE() are commented out.

Regarding dbuf_is_l2cacheable(): it still caches hole block pointers, i.e. buffers where BP_IS_HOLE(bp) is true and the DVA is empty.

@gamanakis
Contributor Author

I believe we should not be caching buffers with BP_IS_HOLE() in L2ARC. In that case DVA_GET_VDEV(bp->blk_dva) returns 0 while rvd->vdev_children[0] != NULL, which makes the vdev_allocation_bias checks meaningless.

The blkptr_t of the buffer to be prefetched is known in dbuf_prefetch_impl(), so we could refactor DNODE_LEVEL_IS_L2CACHEABLE() in the same way as dbuf_is_l2cacheable() and avoid caching buffers with BP_IS_HOLE() there too.

@behlendorf
Contributor

It looks like the previous behavior was to cache holes, but I agree there's not much value in that. Let's also refactor DNODE_LEVEL_IS_L2CACHEABLE in a similar way to dbuf_is_l2cacheable() to handle these cases.
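The hole problem discussed above can be sketched as follows. This is a simplified illustration with made-up types (fake_bp_t, bp_is_l2cacheable() are not the real ZFS structures): a hole block pointer has an empty DVA, so a vdev lookup would spuriously land on child 0, and any allocation-bias check on that vdev would be meaningless. Checking for the hole first sidesteps the issue entirely.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a block pointer: a hole has an empty DVA, so the
 * vdev id read out of it is 0 regardless of any real topology. */
typedef struct {
	uint64_t dva_vdev;	/* 0 for a hole: the DVA is empty */
	bool is_hole;
} fake_bp_t;

/* Reject NULL and hole block pointers before any per-vdev
 * allocation-class checks, so those checks only ever see a
 * meaningful vdev id. */
static bool
bp_is_l2cacheable(const fake_bp_t *bp)
{
	if (bp == NULL || bp->is_hole)
		return (false);
	/* ... allocation-bias checks on bp->dva_vdev would follow ... */
	return (true);
}
```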

@gamanakis gamanakis force-pushed the l2arc_exclude_special branch 2 times, most recently from c5af38e to 30a88a3 Compare July 2, 2021 09:13
@gamanakis
Contributor Author

30a88a3: Refactored DNODE_LEVEL_IS_L2CACHEABLE() and DMU_OS_IS_L2CACHEABLE() in a similar way to dbuf_is_l2cacheable().

Contributor

@behlendorf behlendorf left a comment


At first glance I wondered if we could combine these three functions, but really they're just different enough to make that pretty awkward. So I think this is fine.

@behlendorf behlendorf requested a review from mmaybee September 2, 2021 19:46
@IvanVolosyuk
Contributor

IvanVolosyuk commented Sep 6, 2021

Thanks for the change. I hope it will be integrated soon and make it into the 2.1.x branch as well!

I applied the change as a patch and am testing it on my system. I noticed some odd behavior, which I asked about on the zfs-discuss@ mailing list:
https://zfsonlinux.topicbox.com/groups/zfs-discuss/T4bc9cd0b4cf1a157/l2arc-is-not-read-from

I don't know whether it is related to this change or not. Interestingly:
l2_hdr_size 4 1532352
I have only 64k and 128k objects outside of the special allocation class, so that gives me <2G of cached bytes, which is very close to the state when I switched from ZFS-2.0.5 to ZFS-2.1.0 with this commit patched in.

Update: after a reboot, l2_hdr_size looks correct: 44575872 for 68G of L2ARC usage with a recordsize of 128K.

Update 2: since l2_hdr_size counts only the bare L2ARC headers, the numbers make sense to me now. All good, thanks for bringing this feature in.
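The reported numbers can be sanity-checked with simple arithmetic. This sketch (the helper name is made up) assumes one in-core header per cached record: 68 GiB of L2ARC at a 128 KiB recordsize is 557056 records, and 44575872 bytes of l2_hdr_size divided by that count comes out to roughly 80 bytes per header, a plausible size for the stripped-down L2ARC-only headers.

```c
#include <assert.h>
#include <stdint.h>

/* Approximate number of cached records: total L2ARC bytes divided
 * by the recordsize, assuming full-sized records. */
static uint64_t
l2arc_header_count(uint64_t l2_bytes, uint64_t recordsize)
{
	return (l2_bytes / recordsize);
}
```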

bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Jul 18, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow the special and dedup allocation classes to specify that the L2 ARC
should not be used. However, we pass the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous
dbuf_is_l2cacheable() call used db->db_blkptr directly, so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into the function. If the passed BP is NULL, the
default behavior of dbuf_is_l2cacheable() remains the same: it just uses
db->db_blkptr.

However, the test failure was caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place, filling the L2 ARC only with
indirect blocks instead of data blocks, which ultimately made the test
fail with verify_trim_io reporting:
Too few trim IOs issued 2/5
So I updated the test case to not use Direct IO, since we want to fill
up the L2 ARC with data buffers using random_reads.fio. The logic
checking the number of trims is now correct.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
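The BP-precedence fallback described in the commit message above can be sketched like this. The types and the dbuf_effective_bp() helper are simplified stand-ins, not the real ZFS structures: an explicitly passed block pointer (as from dbuf_read_impl() for a Direct IO write) takes precedence, while a NULL argument falls back to the dbuf's own db_blkptr, preserving the old behavior.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for blkptr_t and dmu_buf_impl_t. */
typedef struct { int id; } fake_blkptr_t;
typedef struct { const fake_blkptr_t *db_blkptr; } fake_dbuf_t;

/* Prefer the caller-supplied BP; fall back to the dbuf's own
 * block pointer when none is given. */
static const fake_blkptr_t *
dbuf_effective_bp(const fake_dbuf_t *db, const fake_blkptr_t *bp)
{
	return (bp != NULL ? bp : db->db_blkptr);
}
```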
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Jul 23, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Jul 24, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Jul 26, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Jul 27, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Jul 27, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Jul 31, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 1, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 5, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 6, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 7, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 7, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 8, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 9, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 13, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 14, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 15, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 19, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 19, 2024
PR openzfs#12285 introduced a new module parameter (l2arc_exclude_special) to
allow for special or dedup allocation class to specify the L2 ARC should
not be used. However, we pass along the BP for the dbuf from dbuf_read()
to dbuf_read_impl() to account for Direct IO writes. The previous call
for dbuf_is_l2cacheable() directly used the db->db_blkptr so it did not
take into account the BP passed from dbuf_read_impl(). I updated this so
the BP is now passed into this function. If the BP passed is NULL, then
the default behavior of dbuf_is_l2cacheable() remains the same by just
using the db->db_blkptr.

However, the test failure was being caused by trim_l2arc.ksh setting
DIRECT=1 before calling random_reads.fio to fill up the L2 ARC. This
caused Direct IO reads to take place only filling up the L2 ARC with
indirect blocks instead of data blocks. This ultimately led to a failure
for this test due to verify_trim_io getting:
Too few trim IOs issued 2/5
So I update the test case to not use Direct IO as we are wanting to fill
up the L2 ARC with data buffers using random_reads.fio. This allows for
the logic of checking the number of trims to be correct now.

Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 24, 2024
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 26, 2024
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 27, 2024
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 27, 2024
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 27, 2024
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Aug 28, 2024
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Sep 4, 2024
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Sep 9, 2024
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Sep 10, 2024
bwatkinson added a commit to bwatkinson/zfs that referenced this pull request Sep 12, 2024
Labels
Status: Accepted Ready to integrate (reviewed, tested)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Enable "small file" data to be excluded from L2ARC
7 participants