lintian: DEP5 conformance messages for Ubuntu 12.04 Precise Pangolin #32

Closed
dajhorn opened this issue Apr 22, 2012 · 0 comments

dajhorn commented Apr 22, 2012

The lintian in Ubuntu 12.04 is returning new DEP5 conformance warnings for the source package:

W: zfs-linux source: missing-license-paragraph-in-dep5-copyright gpl-2 (paragraph at line 145)
W: zfs-linux source: missing-license-paragraph-in-dep5-copyright gpl-2+ (paragraph at line 1170)
W: zfs-linux source: missing-license-paragraph-in-dep5-copyright cddl-1.0 (paragraph at line 1166)

And errors for the binary packages:

E: libzfs-dev: copyright-should-refer-to-common-license-file-for-gpl
E: libuutil1: copyright-should-refer-to-common-license-file-for-gpl
E: libzpool1: copyright-should-refer-to-common-license-file-for-gpl
E: libnvpair1: copyright-should-refer-to-common-license-file-for-gpl
E: zfs-initramfs: copyright-should-refer-to-common-license-file-for-gpl
E: zfs-dkms: copyright-should-refer-to-common-license-file-for-gpl
E: zfsutils: copyright-should-refer-to-common-license-file-for-gpl
E: libzfs1: copyright-should-refer-to-common-license-file-for-gpl
ghost assigned dajhorn Apr 22, 2012
dajhorn referenced this issue Sep 15, 2012
Satisfy the missing-license-paragraph-in-dep5-copyright warning and
copyright-should-refer-to-common-license-file-for-gpl error by adding
separate license blocks for the GPL and CDDL in the debian/copyright file.

Closes: dajhorn/pkg-zfs#32
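
For reference, the shape of that fix: each license short name used in the Files paragraphs (gpl-2, gpl-2+, cddl-1.0) needs its own standalone License paragraph, and the GPL ones should point at the copy shipped under /usr/share/common-licenses rather than reproducing the text. A rough, illustrative sketch only; the exact short names and wording must match what debian/copyright actually uses:

License: GPL-2
 This package is free software; you can redistribute it and/or modify it
 under the terms of the GNU General Public License version 2 as published
 by the Free Software Foundation.
 .
 On Debian systems, the complete text of the GNU General Public License
 version 2 can be found in "/usr/share/common-licenses/GPL-2".

License: CDDL-1.0
 [full text of the CDDL-1.0 reproduced here; it has no copy under
 /usr/share/common-licenses, so it cannot be referenced by path]
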
dajhorn closed this as completed Sep 15, 2012
dajhorn pushed a commit that referenced this issue Mar 24, 2016
Add a test designed to generate contention on the taskq spinlock by
using a large number of threads (100) to perform a large number (131072)
of trivial work items from a single queue.  This simulates conditions
that may occur with the zio free taskq when a 1TB file is removed from a
ZFS filesystem, for example.  This test should always pass.  Its purpose
is to provide a benchmark to easily measure the effectiveness of taskq
optimizations using statistics from the kernel lock profiler.
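
The dispatch pattern such a test hammers looks roughly like the sketch below, written against the SPL taskq API; the names, counts, and error handling are illustrative stand-ins, not the actual splat test code.

#include <sys/taskq.h>

/* Trivial work item: an empty body, so nearly all of the cost is taskq
 * bookkeeping under tq->tq_lock. */
static void
contention_noop(void *arg)
{
}

/* Illustrative sketch only: create a 100-thread queue, dispatch 131072
 * no-op items at it, then wait for the queue to drain. */
static int
contention_sketch(void)
{
	taskq_t *tq;
	int i;

	tq = taskq_create("contention", 100, maxclsyspri, 50, 131072,
	    TASKQ_PREPOPULATE);
	if (tq == NULL)
		return (-1);

	for (i = 0; i < 131072; i++)
		(void) taskq_dispatch(tq, contention_noop, NULL, TQ_SLEEP);

	taskq_wait(tq);
	taskq_destroy(tq);

	return (0);
}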

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #32
dajhorn pushed a commit that referenced this issue Mar 24, 2016
Testing has shown that tq->tq_lock can be highly contended when a
large number of small work items are dispatched.  The lock hold time
is reduced by the following changes:

1) Use exclusive threads in the work_waitq

When a single work item is dispatched we only need to wake a single
thread to service it.  The current implementation uses non-exclusive
threads so all threads are woken when the dispatcher calls wake_up().
If a large number of threads are in the queue this overhead can become
non-negligible.

2) Conditionally add/remove threads from work waitq outside of tq_lock

Taskq threads need only add themselves to the work wait queue if there
are no pending work items.  Furthermore, the add and remove function
calls can be made outside of the taskq lock since the wait queues are
protected from concurrent access by their own spinlocks.

3) Call wake_up() outside of tq->tq_lock

Again, the wait queues are protected by their own spinlock, so the
dispatcher functions can drop tq->tq_lock before calling wake_up().
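
Changes 1 and 3 combine into a dispatcher path along the lines of the following sketch; the struct and field names here are hypothetical stand-ins, not the actual SPL taskq layout.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/* Hypothetical queue layout for illustration; the real spl taskq_t differs. */
struct sketch_taskq {
	spinlock_t		lock;		/* stands in for tq->tq_lock */
	struct list_head	pending;	/* pending work items        */
	wait_queue_head_t	work_waitq;	/* worker threads sleep here */
};

/* Queue the item under the lock, but call wake_up() only after dropping
 * it.  The wait queue head is protected by its own spinlock, so the
 * dispatcher never needs tq_lock for the wakeup, and because workers
 * queued themselves with add_wait_queue_exclusive(), wake_up() wakes a
 * single worker per item rather than all of them. */
static void
sketch_dispatch(struct sketch_taskq *tq, struct list_head *item)
{
	unsigned long flags;

	spin_lock_irqsave(&tq->lock, flags);
	list_add_tail(item, &tq->pending);
	spin_unlock_irqrestore(&tq->lock, flags);

	wake_up(&tq->work_waitq);
}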

A new splat test taskq:contention was added in a prior commit to measure
the impact of these changes.  The following table summarizes the
results using data from the kernel lock profiler.

                        tq_lock time    %diff   Wall clock (s)  %diff
original:               39117614.10     0       41.72           0
exclusive threads:      31871483.61     18.5    34.2            18.0
unlocked add/rm waitq:  13794303.90     64.7    16.17           61.2
unlocked wake_up():     1589172.08      95.9    16.61           60.2

Each row reflects the average result over 5 test runs.
/proc/lock_stat was zeroed out before and collected after each run.
Column 1 is the cumulative hold time in microseconds for tq->tq_lock.
The tests are cumulative; each row reflects the code changes of the
previous rows.  %diff is calculated with respect to "original" as
100*(orig-new)/orig.
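
(As a check on the exclusive-threads row: 100*(39117614.10 - 31871483.61)/39117614.10 ≈ 18.5
and 100*(41.72 - 34.2)/41.72 ≈ 18.0, matching the %diff columns.)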

Although calling wake_up() outside of the taskq lock dramatically
reduced the taskq lock hold time, the test actually took slightly more
wall clock time.  This is because the point of contention shifts from
the taskq lock to the wait queue lock.  But the change still seems
worthwhile since it removes our taskq implementation as a bottleneck,
assuming the small increase in wall clock time to be statistical
noise.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #32
dajhorn pushed a commit that referenced this issue Mar 24, 2016
This reverts commit ec2b410.

A race condition was introduced by which a wake_up() call can be lost
after the taskq thread determines there are no pending work items,
leading to deadlock:

1. taskq thread enables interrupts
2. dispatcher thread runs, queues a work item, calls wake_up()
3. taskq thread runs, adds itself to the waitq, sleeps

This could easily happen if an interrupt for an IO completion was
outstanding at the point where the taskq thread reenables interrupts,
just before the call to add_wait_queue_exclusive().  The handler would
run immediately within the race window.
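
The standard kernel idiom that closes this window is to queue onto the wait queue and set the task state before re-checking for pending work, so a wake_up() that races with the check just returns the task to TASK_RUNNING instead of being lost. A sketch of that idiom follows; it is illustrative only, not necessarily the exact fix that later landed, and the waitq/pending-list parameters are hypothetical.

#include <linux/list.h>
#include <linux/sched.h>
#include <linux/wait.h>

/* Sleep until work is queued, without losing a concurrent wake_up().
 * The thread is already on the wait queue and marked TASK_INTERRUPTIBLE
 * before the final emptiness check, so a wakeup that lands between the
 * check and schedule() simply makes schedule() return promptly. */
static void
sketch_wait_for_work(wait_queue_head_t *waitq, struct list_head *pending)
{
	DECLARE_WAITQUEUE(wait, current);

	add_wait_queue_exclusive(waitq, &wait);
	set_current_state(TASK_INTERRUPTIBLE);

	if (list_empty(pending))	/* re-check only after queueing */
		schedule();

	__set_current_state(TASK_RUNNING);
	remove_wait_queue(waitq, &wait);
}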

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #32
dajhorn pushed a commit that referenced this issue Mar 24, 2016
Testing has shown that tq->tq_lock can be highly contended when a
large number of small work items are dispatched.  The lock hold time
is reduced by the following changes:

1) Use exclusive threads in the work_waitq

When a single work item is dispatched we only need to wake a single
thread to service it.  The current implementation uses non-exclusive
threads so all threads are woken when the dispatcher calls wake_up().
If a large number of threads are in the queue this overhead can become
non-negligible.

2) Conditionally add/remove threads from work waitq

Taskq threads need only add themselves to the work wait queue if
there are no pending work items.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #32