
setgid permissions broken #262

Closed

kkoning opened this issue Jun 3, 2011 · 1 comment

kkoning commented Jun 3, 2011

The setgid bit does not work on ZFS directories: a file created inside a setgid directory does not inherit the directory's group. Steps to reproduce:

kkoning@atlantis:/scratch$ mkdir test
kkoning@atlantis:/scratch$ chown kkoning:home test
kkoning@atlantis:/scratch$ chmod g+ws test
kkoning@atlantis:/scratch$ ls -l
total 2
drwxrwsr-x 2 kkoning home 2 2011-06-03 04:04 test
kkoning@atlantis:/scratch$ cd test
kkoning@atlantis:/scratch/test$ touch file
kkoning@atlantis:/scratch/test$ ls -l
total 1
-rw-r--r-- 1 kkoning kkoning 0 2011-06-03 04:04 file
kkoning@atlantis:/scratch/test$
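
With setgid semantics honored, file should be created with group home (the group of the setgid directory) rather than the creator's primary group kkoning. The following is a minimal C sketch of the standard POSIX/Linux inheritance rule the transcript above expects; the function name and types are illustrative and are not the actual ZFS code.

/*
 * Minimal sketch of the POSIX/Linux setgid-directory rule; the helper
 * name and types are hypothetical, not the ZFS implementation.
 */
#include <sys/stat.h>
#include <sys/types.h>

/* Decide the group and mode for a new node created under `parent`. */
static void
inherit_group(const struct stat *parent, gid_t creator_gid,
              mode_t requested_mode, int is_directory,
              gid_t *new_gid, mode_t *new_mode)
{
    if (parent->st_mode & S_ISGID) {
        /* setgid directory: the new node takes the directory's group... */
        *new_gid = parent->st_gid;
        /* ...and a new subdirectory keeps the setgid bit itself. */
        *new_mode = is_directory ?
            (requested_mode | S_ISGID) : requested_mode;
    } else {
        /* otherwise it takes the creating process's group */
        *new_gid = creator_gid;
        *new_mode = requested_mode;
    }
}

Applied to the transcript: the parent directory test has group home and S_ISGID set, so the expected listing for file is "kkoning home", not "kkoning kkoning".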

behlendorf (Contributor) commented

Thanks for reporting the bug and providing a simple test case; we'll look into it.

fuhrmannb pushed a commit to fuhrmannb/cstor that referenced this issue Nov 3, 2020
sdimitro pushed a commit to sdimitro/zfs that referenced this issue May 23, 2022
…tanding allocations (ZettaCache::write_slots) (openzfs#262)

We've observed that under heavy read workloads, we aren't keeping the
disks busy ingesting the new blocks.  We only allocate and write up to
ZettaCache::write_slots concurrently (default of 32 per disk).  Under
some workloads, it seems that the tasks related to insertion and writing
are not scheduled frequently enough to ensure that we always have writes
outstanding.  In other words, we complete the 32*NDISKS writes that have
been allocated, but don't allocate and issue more writes in time to keep
the disks busy.

This commit increases the number of "write slots" by introducing a new
tunable OUTSTANDING_ALLOCATIONS_PER_DISK, which can be set higher than
DISK_WRITE_MAX_QUEUE_DEPTH such that we will always have blocks
allocated and waiting in the Disk::writer_tx channel for the writer
threads to pick up when the previous write completes.

Note that this tunable is a trade-off between higher ingest throughput
and a longer wait for the outstanding_writes lock in
flush_checkpoint() (with the state lock held).

Uncached reads with 32 threads:

before:
987MiB/s to application
1600MB/s from S3 (5x read inflation, taking into account 3x compression)
ingest 700 blocks/sec (2MB/s)
wait 0ms for outstanding_writes

after:
886MiB/s to application (-10%)
1500MB/s from S3 (5x read inflation, taking into account 3x compression)
ingest 10,000 blocks/sec (30MB/s) (+1300%)
wait ~30ms for outstanding_writes
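
The scheme described in that commit message, keeping more blocks allocated than there are concurrent disk writes so the writer threads never go idle, can be pictured with two counting semaphores. This is an illustrative C sketch, not the ZettaCache implementation (which is Rust): the tunable names come from the commit message above, but the value 64 and all function names here are assumed for the example, and the actual block queue is elided.

/*
 * Sketch only: an allocator keeps up to OUTSTANDING_ALLOCATIONS_PER_DISK
 * blocks allocated while at most DISK_WRITE_MAX_QUEUE_DEPTH writer threads
 * issue the disk writes, so a completing write always finds another block
 * already allocated and waiting.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define DISK_WRITE_MAX_QUEUE_DEPTH       32  /* concurrent writes per disk (default above) */
#define OUTSTANDING_ALLOCATIONS_PER_DISK 64  /* example value, larger than the queue depth */

static sem_t alloc_slots;   /* limits blocks allocated but not yet written */
static sem_t ready_blocks;  /* blocks allocated and waiting for a writer */

static void *allocator(void *arg)
{
    for (int block = 0; block < 1000; block++) {
        sem_wait(&alloc_slots);   /* stop once enough blocks are outstanding */
        /* ... allocate cache space for this block here ... */
        sem_post(&ready_blocks);  /* hand it to the writer threads */
    }
    return NULL;
}

static void *writer(void *arg)
{
    for (;;) {
        sem_wait(&ready_blocks);  /* a block is already allocated and waiting */
        usleep(100);              /* ... issue and complete the disk write ... */
        sem_post(&alloc_slots);   /* free an allocation slot for the next block */
    }
    return NULL;
}

int main(void)
{
    pthread_t alloc_thread, writers[DISK_WRITE_MAX_QUEUE_DEPTH];

    sem_init(&alloc_slots, 0, OUTSTANDING_ALLOCATIONS_PER_DISK);
    sem_init(&ready_blocks, 0, 0);

    pthread_create(&alloc_thread, NULL, allocator, NULL);
    for (int i = 0; i < DISK_WRITE_MAX_QUEUE_DEPTH; i++)
        pthread_create(&writers[i], NULL, writer, NULL);

    pthread_join(alloc_thread, NULL);
    sleep(1);                         /* let the writers drain what is queued */
    return 0;
}

When the allocation limit equals the writer count (the old behavior), a completed write can leave a writer idle until the allocator is scheduled again; raising the allocation count above the queue depth keeps ready_blocks non-empty, at the cost of more outstanding writes to wait on in flush_checkpoint(), which is the trade-off the commit message notes.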