Let FreeListPageResource hold FreeList in sync #925
Closed
Conversation
No `bind_freelist`. No references to `FreeList` from multiple places. Enumerate free lists from each space instead of from global arrays. If `Map64::allocate_contiguous_chunks` wants to update a `RawMemoryFreeList`, we pass a `&mut RawMemoryFreeList` to it. If `Map32::finalize_static_space_map` wants to update the start addresses of spaces (now the page resources'), we give it a callback so that we can call a method of the plan to update the starts at the right time.
If a function creates a `RawMemoryFreeList`, it must know how much space the `RawMemoryFreeList` occupies at the start of the `Space`, and it should report this amount.
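The calling convention described above can be sketched as follows. This is an illustrative simplification, not the actual mmtk-core API: the struct fields and the return value are hypothetical stand-ins, and the real `allocate_contiguous_chunks` has a different signature. The point is only the ownership pattern: the caller that owns the free list passes `Option<&mut RawMemoryFreeList>` instead of `Map64` looking the free list up in a global map.

```rust
/// Hypothetical stand-in for the real RawMemoryFreeList.
pub struct RawMemoryFreeList {
    pub grown_chunks: usize,
}

pub struct Map64;

impl Map64 {
    /// Illustrative signature: the caller passes the free list in
    /// explicitly, so no global `fl_map` lookup is needed.
    pub fn allocate_contiguous_chunks(
        &self,
        chunks: usize,
        maybe_raw_fl: Option<&mut RawMemoryFreeList>,
    ) -> usize {
        // Only spaces actually backed by a RawMemoryFreeList need it
        // updated here; other spaces simply pass `None`.
        if let Some(fl) = maybe_raw_fl {
            fl.grown_chunks += chunks;
        }
        chunks // placeholder result for the demo
    }
}
```

A caller that uses an `IntArrayFreeList` would pass `None`; only the space that owns a `RawMemoryFreeList` passes `Some(&mut fl)`.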
github-merge-queue bot pushed a commit that referenced this pull request on Sep 4, 2023:
The main part of this PR is adding a `HasSpaces` trait that enumerates spaces in a plan. This trait can be automatically generated using a derive macro and attributes, saving the effort of enumerating spaces by hand.

- This PR adds a `HasSpaces` trait that enumerates the spaces of a plan. Plans (including NoGC) are now required to implement this trait.
- This PR implements the derive macro for `HasSpaces`.
- The `#[trace]` and `#[fallback_trace]` attributes are replaced by the `#[space]` and `#[parent]` attributes for generality. A dedicated `#[copy_semantics(xxxxx)]` attribute is added to specify copy semantics for `trace_object`.
- The manually written `Plan::get_spaces()` method is removed in favor of the automatically generated `HasSpaces::for_each_space` method.
- Added a `Plan::verify_side_metadata_sanity` that calls `Space::verify_side_metadata_sanity` for all spaces in the plan. This replaces the manual invocations of `Space::verify_side_metadata_sanity` in the constructors of plans.

This PR also slightly refactors the mmtk-macros crate.

- The `syn` dependency is one major version behind the latest. This PR bumps the version and refactors the code to adapt to this change.
- `mmtk-core/macros/src/lib.rs` is cleaned up so that it only contains functions with minimal bodies. Those functions for derive macros are required to be in the root module of the crate, so their function bodies are off-loaded to other modules.

This PR is a stepping stone for #925. It makes it possible to enumerate the spaces of a plan without resorting to unsafe global maps.
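A minimal hand-written sketch of what the `HasSpaces` idea amounts to is shown below. The real trait is generated by the derive macro from `#[space]`-annotated fields; the names here (`Space`, `NoGCPlan`, `space_names`) are illustrative stand-ins, not the actual mmtk-core definitions.

```rust
/// Simplified stand-in for the real Space trait.
pub trait Space {
    fn name(&self) -> &'static str;
}

pub struct ImmortalSpace;
impl Space for ImmortalSpace {
    fn name(&self) -> &'static str { "immortal" }
}

pub struct LargeObjectSpace;
impl Space for LargeObjectSpace {
    fn name(&self) -> &'static str { "los" }
}

/// Roughly what `#[derive(HasSpaces)]` generates an impl for.
pub trait HasSpaces {
    fn for_each_space(&self, f: &mut dyn FnMut(&dyn Space));
}

pub struct NoGCPlan {
    pub immortal: ImmortalSpace,   // would carry #[space]
    pub los: LargeObjectSpace,     // would carry #[space]
}

// The derive macro would emit this impl from the attributes above.
impl HasSpaces for NoGCPlan {
    fn for_each_space(&self, f: &mut dyn FnMut(&dyn Space)) {
        f(&self.immortal);
        f(&self.los);
    }
}

/// Example consumer: collect the names of all spaces in a plan.
pub fn space_names(plan: &dyn HasSpaces) -> Vec<&'static str> {
    let mut names = vec![];
    plan.for_each_space(&mut |s| names.push(s.name()));
    names
}
```

Because each plan enumerates its own fields, no global registry of spaces is needed, which is the property the later PRs build on.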
This PR tries to do too many things at the same time, for example, refactoring Map32 to properly synchronise its operations. I am splitting this into multiple PRs:
github-merge-queue bot pushed a commit that referenced this pull request on May 10, 2024:
This PR removes the `shared_fl_map` field from `Map32`, and removes the `fl_map` and `fl_page_resource` fields from `Map64`. These global maps were used to enumerate and update the starting addresses of spaces (stored in `CommonFreeListPageResource` instances) for several different purposes.

1. `Map32` can only decide the starting address of discontiguous spaces after all contiguous spaces are placed, so it needs to modify the start address of all discontiguous `FreeListPageResource` instances.
   - `shared_fl_map` was used to locate `CommonFreeListPageResource` instances by the space ordinal.
   - With this PR, we use `Plan::for_each_space_mut` to enumerate spaces, and use the newly added `Space::maybe_get_page_resource_mut` method to get the page resource (if it has one) in order to update its starting address.
2. `Map64` allocates a `RawMemoryFreeList` at the beginning of each space that uses `FreeListPageResource`. It needs to update the start address of each space to take the address range occupied by the `RawMemoryFreeList` into account.
   - `fl_page_resource` was used to locate `CommonFreeListPageResource` instances by the space ordinal.
   - `fl_map` was used to locate the underlying `RawMemoryFreeList` instances of the corresponding `FreeListPageResource` by the space ordinal.
   - With this PR, we initialize the start address of the `FreeListPageResource` immediately after a `RawMemoryFreeList` is created, taking the limit of the `RawMemoryFreeList` into account. Therefore, it is no longer necessary to update the starting address later.

Because those global maps are removed, several changes are made to `VMMap` and its implementations `Map32` and `Map64`.

- `VMMap::boot` and `VMMap::bind_freelist` no longer serve any purpose, and are removed.
- `Map64::finalize_static_space_map` now does nothing but set `finalized` to true.

The `CommonFreeListPageResource` data structure was introduced for the sole purpose of being referenced by those global maps.
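The enumeration pattern that replaces `shared_fl_map` can be sketched as below. The trait and method names follow the commit message (`for_each_space_mut`, `maybe_get_page_resource_mut`), but the types and the driver function are simplified, hypothetical stand-ins rather than the real mmtk-core signatures.

```rust
/// Simplified stand-in for a page resource whose start address
/// must be fixed up once space placement is known.
pub struct PageResource {
    pub start: usize,
}

pub trait Space {
    /// Not every space has a free-list page resource, hence Option.
    fn maybe_get_page_resource_mut(&mut self) -> Option<&mut PageResource>;
}

pub trait Plan {
    fn for_each_space_mut(&mut self, f: &mut dyn FnMut(&mut dyn Space));
}

/// Conceptually what finalize_static_space_map now does: walk the
/// plan's own spaces instead of consulting a global array.
pub fn update_discontiguous_starts(plan: &mut dyn Plan, new_start: usize) {
    plan.for_each_space_mut(&mut |space| {
        if let Some(pr) = space.maybe_get_page_resource_mut() {
            pr.start = new_start;
        }
    });
}

// --- Demo implementations for illustration only ---

pub struct DemoSpace {
    pub pr: Option<PageResource>,
}
impl Space for DemoSpace {
    fn maybe_get_page_resource_mut(&mut self) -> Option<&mut PageResource> {
        self.pr.as_mut()
    }
}

pub struct DemoPlan {
    pub spaces: Vec<DemoSpace>,
}
impl Plan for DemoPlan {
    fn for_each_space_mut(&mut self, f: &mut dyn FnMut(&mut dyn Space)) {
        for s in &mut self.spaces {
            f(s);
        }
    }
}
```

Spaces without a page resource simply return `None` and are skipped, so no per-ordinal bookkeeping is required.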
This PR removes `CommonFreeListPageResource` and `FreeListPageResourceInner`, and relocates their fields.

- `common: CommonPageResource` is now a field of `FreeListPageResource`.
- The `free_list: Box<dyn FreeList>` and `start: Address` fields are moved into `FreeListPageResourceSync` because they have always been accessed while holding the mutex `FreeListPageResource::sync`.
  - Note that the method `FreeListPageResource::release_pages` used to call `free_list.size()` without holding the `sync` mutex, and subsequently called `free_list.free()` while holding the `sync` mutex. The algorithm of `FreeList` guarantees that `free()` will not race with `size()`. But the `Mutex` forbids calling even the read-only operation `size()` without holding the lock, because in general a read-write operation may race with read-only operations, too. For now, we extended the scope of the mutex to cover the whole function.

After the changes above, `FreeListPageResource` no longer needs `UnsafeCell`.

This PR is one step towards the greater goal of removing unsafe operations related to FreeList, PageResource and VMMap. See: #853

Related PRs:

- #925: This PR is a subset of that PR.
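The lock-protected layout described above can be sketched as follows. This is a minimal illustration, assuming simplified stand-in types (`FreeList`, `BumpList`, a 4 KiB page, `usize` in place of `Address`); the real `FreeListPageResource` is more involved. The key idea is that the fields only ever touched under the lock live inside the `Mutex`, so interior mutability comes from the `Mutex` itself and no `UnsafeCell` or `&T`-to-`&mut T` cast is needed.

```rust
use std::sync::Mutex;

/// Simplified free-list interface for the demo.
pub trait FreeList: Send {
    /// Allocate `units` units, returning the first unit index.
    fn alloc(&mut self, units: i32) -> i32;
}

/// Trivial stand-in free list: hands out consecutive unit indices.
pub struct BumpList {
    pub next: i32,
}
impl FreeList for BumpList {
    fn alloc(&mut self, units: i32) -> i32 {
        let unit = self.next;
        self.next += units;
        unit
    }
}

/// Fields accessed only under the lock live inside the Mutex.
struct FreeListPageResourceSync {
    free_list: Box<dyn FreeList>, // moved in from the page resource
    start: usize,                 // start address, updated under the lock
}

pub struct FreeListPageResource {
    sync: Mutex<FreeListPageResourceSync>,
}

impl FreeListPageResource {
    pub fn new(free_list: Box<dyn FreeList>, start: usize) -> Self {
        Self {
            sync: Mutex::new(FreeListPageResourceSync { free_list, start }),
        }
    }

    /// `&self` suffices: mutation happens through the MutexGuard,
    /// so callers never need `&mut FreeListPageResource`.
    pub fn alloc_pages(&self, pages: i32) -> usize {
        let mut sync = self.sync.lock().unwrap();
        let unit = sync.free_list.alloc(pages);
        sync.start + unit as usize * 4096 // assume 4 KiB pages
    }
}
```

Because both `free_list` and `start` sit behind the same `Mutex`, any operation that reads one and writes the other is automatically consistent, which is exactly the property the commit message relies on when it widens the lock scope in `release_pages`.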
DRAFT: This is highly experimental, and currently it breaks 32-bit systems completely.
This PR does the following:

- Move the `FreeList` instance into `FreeListPageResource::sync`.
- Create the `RawMemoryFreeList` at the start of each space when creating the `FreeListPageResource`.
- Remove the need for `Map64::allocate_contiguous_chunks` to look up the `RawMemoryFreeList` from an address using `Map64::fl_map`. Instead, the caller will pass `Some(&mut RawMemoryFreeList)` to `Map64::allocate_contiguous_chunks` if the space actually uses a `RawMemoryFreeList`.
- Give `Map32::finalize_static_space_map` a callback so that it can update the start addresses of page resources once the location of discontiguous spaces is determined.

This PR does not attempt to refactor `IntArrayFreeList`. It is relatively independent, so I'll leave it to a different PR.

Description
Moving `FreeList` into `sync`

`FreeList` instances are not thread-safe and always require a lock to access. Whenever `FreeListPageResource` accesses the `FreeList`, it always holds the mutex `FreeListPageResource::sync`. This PR moves the `FreeList` into `sync` to make the access safe. By doing this, we eliminate the need to cast `&FreeListPageResource` to `&mut FreeListPageResource`.

We observe that the `limit` of a `RawMemoryFreeList` is determined when it is created, so we let it return the limit directly and no longer need to update the starting address later. The new `FreeList::maybe_get_limit()` returns `Some(limit)` if the free list is a `RawMemoryFreeList`, and `None` if it is an `IntArrayFreeList`. The caller can then decide whether to bind the freelist. I plan to revamp the plan creation and space allocation process so that we won't need a separate `finalize_static_space_map`.
Eliminating the need to register FreeListPageResources or RawMemoryFreeLists in global maps

The global maps `Map32::shared_fl_map` and `Map64::{fl_page_resources, fl_map}` were inherited from JikesRVM. They provided ways to enumerate `FreeListPageResource` and `RawMemoryFreeList` instances, which is unsafe in Rust. We found them unnecessary: we can enumerate page resources from plans, and we can determine the size of a `RawMemoryFreeList` eagerly.

Related issues
#853