
Just update #1 (Open)

wants to merge 8,568 commits into base: master
Conversation

@sharego sharego (Owner) commented Sep 29, 2013

No description provided.

Zuul and others added 29 commits June 25, 2024 20:37
The proxy may use a sub-request to fetch a container listing response,
for example for updating shard ranges. If it then fails to parse the
response, it should use the sub-request path when logging a warning or
error. Before, the proxy would use the original request path which may
have been to an object.

Change-Id: Id904f4cc0f911f9e6e53d4b3ad7f98aacd2b335d
Something about them threw off the exit-immediately-on-non-zero-exit
behavior when building.

Related-Bug: #2070029
Change-Id: I3e40ebd78a9f8e93c905b24a12f5f54b2d27c2d9
Continue also tagging it "py3" so any users of that tag don't become
stuck in time.

Closes-Bug: #2037268
Closes-Bug: #2070029
Change-Id: I38d9469238d2eb6647414c1107e68ff6f3a15797
The ssl.wrap_socket method has been removed in 3.12.
SSLContext.wrap_socket should now be used.

Change-Id: I6119e054289eac263ff5448d7d118209f98678d9
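The replacement pattern can be sketched as follows (certificate paths in the comments are illustrative):

```python
import ssl

# ssl.wrap_socket() is gone in Python 3.12; build an SSLContext and use
# its wrap_socket method instead.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# In real use:
#   context.load_cert_chain(certfile='cert.pem', keyfile='key.pem')
#   tls_sock = context.wrap_socket(plain_sock, server_side=True)
assert hasattr(context, 'wrap_socket')
```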
Previously, `twine check <swift wheel>` would report

   WARNING  `long_description_content_type` missing.
   defaulting to `text/x-rst`.

Change-Id: I5d004218515d17b8cbace46b2c1e646f734779f3
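The warning goes away once the content type is declared explicitly in the packaging metadata; a hypothetical setup.cfg fragment:

```ini
[metadata]
long_description = file: README.rst
long_description_content_type = text/x-rst
```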
Change-Id: I022404cd90d9755a09c20619c3a72588d3367467
When the proxy passes the container-update headers to the object server,
it now includes the db_state, which it already had in hand. This will be
written to async_pending and allow the object-updater to know more about
a container rather than just relying on the container_path attribute.

This patch also cleans up the PUT, POST and DELETE _get_update_target
paths, refactoring the call into _backend_requests (only used by these
methods) so it only happens once.

Change-Id: Ie665e5c656c7fb27b45ee7427fe4b07ad466e3e2
It is currently possible to configure "X-Container-Meta-Quota-Bytes"
and "X-Container-Meta-Quota-Count" on containers, as well as
"X-Account-Meta-Quota-Bytes" on accounts. However, there is no way
to configure an account quota limit for the number of files or
"quota-count". This limitation could allow a user to exhaust
filesystem inodes.

This change introduces the "X-Account-Meta-Quota-Count" header,
allowing users with the ResellerAdmin role to add a quota-count
limit on accounts. The middleware will enforce this limit similarly
to the existing quota-byte limit.

Co-authored-by: Azmain Adib <adib1905@gmail.com>
Co-authored-by: Daanish Khan <daanish1337@gmail.com>
Co-authored-by: Mohamed Hassaneen <mohammedashoor89@gmail.com>
Co-authored-by: Nada El-Mestkawy <nadamaged05@gmail.com>
Co-authored-by: Tra Bui <trabui.0517@gmail.com>

Change-Id: I1d11b2974de8649a111f2462a8fef51039e35d7f
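A rough sketch of how such a check might behave, mirroring the existing byte-quota logic (the helper function below is hypothetical, not the middleware's actual code):

```python
def exceeds_quota_count(account_meta, object_count, new_objects=1):
    """Hypothetical sketch: reject a request if it would push the
    account's object count past X-Account-Meta-Quota-Count."""
    quota = account_meta.get('X-Account-Meta-Quota-Count')
    if quota is None:
        return False
    try:
        quota = int(quota)
    except ValueError:
        return False  # unenforceable value; real middleware validates on PUT
    return object_count + new_objects > quota

# An account capped at 2 objects that already holds 2:
meta = {'X-Account-Meta-Quota-Count': '2'}
print(exceeds_quota_count(meta, object_count=2))  # True
```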
See https://review.opendev.org/c/openstack/swift/+/918365 for motivation.

A handful of [files]scripts entries still remain, but they're more
involved to change over. These were all fairly mechanical to fix.

Related-Change: Ifcc8138e7b55d5b82bea0d411ec6bfcca2c77c83
Change-Id: Ia43d7cd3921bc6c9ff0cee3100ef5f486fd3edcb
I *think* swift.common.manager is a reasonable place for it?

Related-Change: Ifcc8138e7b55d5b82bea0d411ec6bfcca2c77c83
Change-Id: I48a345eedbf2369d481d18c0e2db37845673b647
Related-Change: Ifcc8138e7b55d5b82bea0d411ec6bfcca2c77c83
Change-Id: I7f479359f94f102414f30e8aac87f377d10d34ee
Add unit tests to verify the precedence of access_log_ and log_
prefixes to options.

Add pointers from proxy_logging sections in other sample config files
to the proxy-server.conf-sample file.

Change-Id: Id18176d3790fd187e304f0e33e3f74a94dc5305c
Prior to the Related-Change, non_negative_int would raise a ValueError
if it was passed a positive float. It now casts the float to an int,
which is consistent with the docstring.

This patch adds test assertions to verify this behavior, and similar
behavior of config_positive_int_value.

Change-Id: I5d62e14c228544af634190f16321ee97a8c4211c
Related-Change: I06508279d59fa57296dd85548f271a7812aeb45f
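The docstring-consistent behavior can be sketched like this (a hypothetical stand-in, not the code in swift.common.utils; how non-integral floats are handled is a detail of the real implementation):

```python
def non_negative_int(value):
    """Sketch: cast the value to an int and reject negatives. Note that
    an integral float such as 2.0 is now accepted rather than raising
    ValueError."""
    int_value = int(value)
    if int_value < 0:
        raise ValueError('value must be non-negative')
    return int_value

print(non_negative_int(2.0))  # 2
```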
Recent OSes are using ISO format for their kernel logs.

Closes-Bug: #2072609
Change-Id: I92fce513d06d8b0875dabf9e9a1b2c5a3a79d9b5
Unfortunately, kotori had to be switched to a whitelist,
so it's not appropriate for a public address list anymore.

Change-Id: I86f10fcf4fd0bc5902c3dad49c6f3019fa159b64
tipabu and others added 30 commits January 8, 2025 18:21
Historically, the object-server would validate the ETag of an object
whenever it was streaming the complete object. This minimizes the
possibility of returning corrupted data to clients, but

- Clients that only ever make ranged requests get no benefit and
- MD5 can be rather CPU-intensive; this is especially noticeable
  in all-flash clusters/policies where Swift is not disk-constrained.

Add a new `etag_validate_pct` option to tune down this validation.
This takes values from 100 (default; all whole-object downloads are
validated) down to 0 (none are).

Note that even with etag validation turned off, the object-auditor
should eventually detect and quarantine corrupted objects. However,
transient read errors may cause clients to download corrupted data.

Hat-tip to Jianjian for all the profiling work!

Co-Authored-By: Jianjian Huo <jhuo@nvidia.com>
Change-Id: Iae48e8db642f6772114c0ae7c6bdd9c653cd035b
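The percentage gate can be sketched as a simple random check (the function name is illustrative, not Swift's):

```python
import random

def should_validate_etag(etag_validate_pct):
    """Sketch of a percentage-gated decision: 100 validates every
    whole-object download, 0 validates none, values between are a
    per-request coin flip."""
    return random.random() * 100 < etag_validate_pct

print(should_validate_etag(100))  # True  (always validate)
print(should_validate_etag(0))    # False (never validate)
```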
Change-Id: Ic66b9ae89837afe31929ce07cc625dfc28314ea3
They were removed upstream recently, so now Bandit is complaining about
the unknown test.

See PyCQA/bandit#1212

Change-Id: Ie668d49a56c0a6542d28128656cfd44f7c089ec4
The available bandit tests change with time (e.g. the
Related-Change). We shouldn't try to maintain the list.

Related-Change: Ie668d49a56c0a6542d28128656cfd44f7c089ec4
Change-Id: I6eb106abbac28ffbb9a3f64e8aa60218cbe75682
... just like we would do in a normal container. Previously, we'd try to
read a byte from the client which, due to a bug in eventlet HTTP framing,
would either hang until we hit a timeout or worse read from the next
pipelined request.

This required that we reset a (repeatedly-reused!) request in s3api to
have an empty body, or it would start triggering 411s, too.

See also: eventlet/eventlet#985

Closes-Bug: #2081103
Change-Id: I56c1ecc4edb953c0bade8744e4bed584099f29c7
Add a new tunable, `stale_worker_timeout`, defaulting to 86400 (i.e. 24
hours). Once this time elapses following a reload, the manager process
will issue SIGKILLs to any remaining stale workers.

This gives operators a way to configure a limit for how long old code
and configs may still be running in their cluster.

To enable this, the temporary reload child (which waits for the reload
to complete then closes the accept socket on all the old workers) has
grown the ability to send state to the re-exec'ed manager. Currently,
this is limited to just the set of pre-re-exec child PIDs and their
reload times, though it was designed to be reasonably extensible.

This allows the new manager to recognize stale workers as they exit
instead of logging

   Ignoring wait() result from unknown PID ...

With the improved knowledge of subprocesses, we can kick the log level
for the above message up from info to warning; we no longer expect it
to trigger in practice.

Drive-by: Add logging to ServersPerPortStrategy.register_worker_exit
that's comparable to what WorkersStrategy does.

Change-Id: I8227939d04fda8db66fb2f131f2c71ce8741c7d9
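The timeout behavior described above can be sketched as follows (an illustrative stand-in for the manager's logic, not Swift's actual code; the fake kill callable keeps the example safe to run):

```python
import os
import signal
import time

STALE_WORKER_TIMEOUT = 86400  # seconds; the default from this commit

def kill_stale_workers(stale_workers, now=None, kill=os.kill):
    """Sketch: stale_workers maps each pre-re-exec child PID to its
    reload time; any worker older than the timeout gets a SIGKILL."""
    now = time.time() if now is None else now
    killed = []
    for pid, reload_time in stale_workers.items():
        if now - reload_time > STALE_WORKER_TIMEOUT:
            try:
                kill(pid, signal.SIGKILL)
            except ProcessLookupError:
                continue  # worker already exited on its own
            killed.append(pid)
    return killed

# 1234 reloaded long ago; 5678 reloaded recently enough to survive.
print(kill_stale_workers({1234: 0.0, 5678: 90000.0}, now=100000.0,
                         kill=lambda pid, sig: None))  # [1234]
```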
Related-Change: I8227939d04fda8db66fb2f131f2c71ce8741c7d9
Change-Id: I149a2df2d942bba02049947865b000c9cf1a89bc
Previously, test_object_move_no_such_object_no_tombstone_ancient
would fail intermittently, with an assertion that two timestamps
were almost (but not quite) equal.

This probably comes down to the fact that it's passing floats as
timestamps down into FakeInternalClient's parse(); specifically,
values like 1738046018.2900746 and 1738045066.1442454 are known
to previously fail.

Just fixing the usage doesn't fix the foot-gun, though -- so fix
up parse() to be internally consistent, even if passed a float.

Change-Id: Ide1271dc4ef54b64d2dc99ef658e8340abb0b6ce
Add the storage_policy attribute to the metadata returned
when listing containers using the GET account API function.

The storage policy of a container is a very useful attribute
for telemetry and billing purposes, as it determines the location
and method/redundancy of on-disk storage for the objects in the
container. Ceilometer currently cannot define the storage policy as a
metadata attribute in Gnocchi as GET account, the most efficient way
of discovering all containers in an account, does not return the
storage policy for each container.

Returning the storage policy for each container in GET account
is the ideal way of resolving this issue, as it allows Ceilometer
to find all containers' storage policies without performing additional
costly API calls.

Special care has been taken to ensure the change is backwards
compatible when migrating from pre-storage policy versions
of Swift, even though those versions are quite old now.
This special handling can be removed if support for migrating
from older versions is discontinued.

Closes-bug: #2097074
Change-Id: I52b37cfa49cac8675f5087bcbcfe18db0b46d887
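A sketch of what consuming the enriched listing might look like (the JSON body below is a hypothetical example of a `GET /v1/AUTH_test?format=json` response after this change; only the `storage_policy` key comes from the commit):

```python
import json

# Hypothetical account-listing body: each container entry now carries
# its storage policy alongside the usual name/count/bytes fields.
body = json.dumps([
    {"name": "images", "count": 10, "bytes": 1024,
     "last_modified": "2025-01-01T00:00:00.000000",
     "storage_policy": "gold"},
])

# Telemetry can now map container -> policy with no extra HEAD calls.
policies = {c["name"]: c.get("storage_policy") for c in json.loads(body)}
print(policies)  # {'images': 'gold'}
```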
The following configuration options are deprecated:

 * expiring_objects_container_divisor
 * expiring_objects_account_name

The upstream maintainers are not aware of any clusters where these have
been configured to non-default values.

UpgradeImpact:

Operators are encouraged to remove their "container_divisor" setting and
use the default value of 86400.

If a cluster was deployed with a non-standard "account_name", operators
should remove the option from all configs so they are using a supported
configuration going forward, but will need to deploy stand-alone expirer
processes with a legacy expirer config to clean up old expiration tasks
from the previously configured account name.

Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Jianjian Huo <jhuo@nvidia.com>
Change-Id: I5ea9e6dc8b44c8c5f55837debe24dd76be7d6248
The encrypter middleware uses an update_footers callback to send
request footers. Previously, FakeSwift combined footers with captured
request headers in a single dict. Tests could not therefore
specifically assert that *footers* had been captured rather than
headers.

This patch modifies FakeSwift to capture footers separately for each
request. Footers are still merged with the request headers in order to
synthesise GET or HEAD response headers when a previously uploaded
object is returned.

Unfortunately the change cannot be as simple as adding another
attribute to the FakeSwiftCall namedtuple. A list of these namedtuples
is returned by FakeSwift.calls_with_headers. Some tests cast the
namedtuples to 3-tuples and will break if the length of the namedtuple
changes.  Other tests access the attributes of the namedtuples by name
and will break if the list values are changed to plain 3-tuples.

Some test churn is therefore inevitable:

* FakeSwiftCall is changed from a namedtuple to a class. This prevents
  future tests assuming it is a fixed length tuple. It also supports a
  headers_and_footers property to return the combination of uploaded
  headers and footer that was previously (confusingly) returned by
  FakeSwiftCall.headers.

* A new property FakeSwift.call_list has been added which returns a
  list of FakeSwiftCalls.

* FakeSwift.calls_with_headers now returns a 3-tuple. Tests that
  previously assumed this was a namedtuple have been changed to use
  FakeSwift.call_list instead, which gives them objects with the same
  named attributes as the previous namedtuple. Tests that previously
  treated the namedtuple as a 3-tuple do not need to be changed.

* Tests that access the 'private' FakeSwift._calls have been changed
  to use FakeSwift.call_list.

Change-Id: If24b6fa50f1d67a7bbbf9a1794c70d37c41971f7
Make encrypter unit test assertions more explicit by using assertions
on the named attributes of FakeSwiftCall rather than assertions on
position-indexed call tuples.

Change-Id: I871ddcc4ba559e7e4c0d0e28464780c6cd669797
Change-Id: I9a793e4c352cd40a073326315a6ae144a14de1e0
Closes-Bug: #1674543
Change-Id: Ic74dbcaf6d8293ae41984d5cd61f0326c91988e2
This implementation uses abstract sockets for process notifications,
similar to systemd's notify sockets, but notifiers use a PID-specific
name from a well-known namespace and listeners are assumed to be
ephemeral.

Update swift-reload to use these instead of polling child processes to
determine when a server reload has completed.  Bonus: it also acts as a
non-blocking lock to prevent two swift-reload commands from reloading a
process at the same time.

Closes-Bug: #2098405
Related-Change: Ib2dd9513d3bb7c7686e6fa35485317bbad915876
Change-Id: I5f36aba583650bddddff5e55ac557302d023ea1b
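The mechanism can be sketched with Linux abstract-namespace sockets (the leading NUL byte keeps the name out of the filesystem; the name format and payload below are illustrative, not Swift's):

```python
import os
import socket

def notify_socket_name(pid):
    """Hypothetical PID-derived name in the abstract namespace."""
    return '\0swift-notify-%d' % pid

# Listener (e.g. swift-reload) binds; the notifying server process
# sends a datagram to the well-known, PID-specific name.
listener = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
listener.bind(notify_socket_name(os.getpid()))

notifier = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
notifier.sendto(b'RELOADED=1', notify_socket_name(os.getpid()))

print(listener.recv(1024))  # b'RELOADED=1'
listener.close()
notifier.close()
```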
...at least, provided the client sent an X-Amz-Content-SHA256 header.
Apparently Content-MD5 is no longer strictly required by AWS? Or maybe
it never was, provided the client sent a SHA256 of the content.

This also allows us to test with newer boto3, botocore, s3transfer.

Related-Bug: #2098529
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: Ifbcde9820bee72d80cab0fe3e67ea0f5817df949