Let FreeListPageResource hold FreeList in sync #925

Closed
wks wants to merge 17 commits

Conversation

@wks (Collaborator) commented Aug 28, 2023

DRAFT: This is highly experimental, and currently it breaks 32-bit systems completely.

This PR does the following

  • Move the FreeList instance into FreeListPageResource::sync.
  • Eliminate the need to "bind_freelist" and "boot" for both 32-bit and 64-bit systems.
  • Eliminate the need for "finalize_static_space_map" on 64-bit systems. Instead, it eagerly handles the size of RawMemoryFreeList at the start of each space when creating the FreeListPageResource.
  • Eliminate the need for Map64::allocate_contiguous_chunks to look up the RawMemoryFreeList from an address using Map64::fl_map. Instead, the caller passes Some(&mut RawMemoryFreeList) to Map64::allocate_contiguous_chunks if the space actually uses a RawMemoryFreeList (see the sketch after these lists).
  • Let plans enumerate spaces mutably, and let each space enumerate its PageResource mutably (if it has one), so that on 32-bit systems, Map32::finalize_static_space_map can update their start addresses once the locations of discontiguous spaces are determined.

This PR does not attempt to

  • Eliminate the unsafety due to the two-level structure of IntArrayFreeList. It's relatively independent. I'll leave it to a different PR.
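
The allocate_contiguous_chunks change above can be pictured with a minimal sketch. All names and signatures below are simplified stand-ins for the real mmtk-core types, not the actual API:

```rust
// Simplified stand-ins for mmtk-core types; illustrative only.
type Address = usize;

struct RawMemoryFreeList {
    units: usize, // bookkeeping capacity, in chunks (simplified)
}

struct Map64;

impl Map64 {
    // New shape of the call: instead of looking the free list up in a
    // global `fl_map`, the caller hands it in explicitly, or passes
    // `None` if the space does not use a `RawMemoryFreeList`.
    fn allocate_contiguous_chunks(
        &mut self,
        chunks: usize,
        maybe_raw_memory_freelist: Option<&mut RawMemoryFreeList>,
    ) -> Address {
        let start: Address = 0x1000_0000; // placeholder chunk address
        if let Some(fl) = maybe_raw_memory_freelist {
            fl.units += chunks; // grow the list to cover the new chunks
        }
        start
    }
}
```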

Description

Moving FreeList in sync

FreeList instances are not thread-safe, and always require a lock to access. Whenever FreeListPageResource accesses the FreeList, it holds the mutex FreeListPageResource::sync. This PR moves the FreeList into sync to make that access safe. By doing this, we eliminate the need to cast &FreeListPageResource to &mut FreeListPageResource.
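
A minimal sketch of the layout described above; the trait and field names follow the PR description but are simplified, not the actual mmtk-core definitions:

```rust
use std::sync::Mutex;

// The free list now lives behind the mutex, so locking is the only way in.
trait FreeList {
    fn alloc(&mut self, units: usize) -> i32;
}

struct FreeListPageResourceSync {
    free_list: Box<dyn FreeList>, // moved in from FreeListPageResource itself
    // ... other fields already guarded by this lock
}

struct FreeListPageResource {
    sync: Mutex<FreeListPageResourceSync>,
}

impl FreeListPageResource {
    fn alloc_pages(&self, pages: usize) -> i32 {
        // Locking `sync` yields mutable access to the free list through a
        // shared `&self`; no cast from `&FreeListPageResource` to
        // `&mut FreeListPageResource` is needed.
        let mut sync = self.sync.lock().unwrap();
        sync.free_list.alloc(pages)
    }
}
```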

We observe that the limit of a RawMemoryFreeList is determined when it is created, so we let it return the limit directly and no longer need to update the starting address later. FreeList::maybe_get_limit() returns Some(limit) for a RawMemoryFreeList, and None for an IntArrayFreeList. The caller can decide whether to bind the freelist. I plan to revamp the plan creation and space allocation process so that we won't need a separate finalize_static_space_map.
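
A sketch of the maybe_get_limit idea, with Address simplified to a plain integer (not the real mmtk types):

```rust
type Address = usize; // stand-in for mmtk's Address type

trait FreeList {
    // Default: this free list does not occupy memory inside the space.
    fn maybe_get_limit(&self) -> Option<Address> {
        None
    }
}

struct RawMemoryFreeList {
    limit: Address, // fully determined when the list is created
}

impl FreeList for RawMemoryFreeList {
    fn maybe_get_limit(&self) -> Option<Address> {
        Some(self.limit)
    }
}

struct IntArrayFreeList;
impl FreeList for IntArrayFreeList {} // keeps the default `None`

// The caller can then set the page resource's start address eagerly,
// right when the space is created.
fn initial_start(space_start: Address, fl: &dyn FreeList) -> Address {
    fl.maybe_get_limit().unwrap_or(space_start)
}
```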

Eliminating the need to register FreeListPageResources or RawMemoryFreeLists in global maps

The global maps Map32::shared_fl_map and Map64::{fl_page_resources,fl_map} were inherited from JikesRVM. They provided ways to enumerate FreeListPageResource and RawMemoryFreeList instances, but holding such aliased global references is unsafe in Rust.

We found them unnecessary. We can enumerate page resources from plans, and we can determine the size of RawMemoryFreeList eagerly.

Related issues

#853

wks added 9 commits August 29, 2023 20:46
No bind_freelist.

Avoid holding references to FreeList from multiple places.

Enumerate free lists from each space instead of from global arrays.

If Map64::allocate_contiguous_chunks wants to update a RawMemoryFreeList,
we pass a &mut RawMemoryFreeList to it.

If Map32::finalize_static_space_map wants to update the start addresses
of spaces (now the page resources'), we give it a callback so that we can
call a method of the plan to update the starts at the right time.

If a function creates a RawMemoryFreeList, it must know how much space
the RawMemoryFreeList occupies at the start of the Space. That function
should report this amount.
github-merge-queue bot pushed a commit that referenced this pull request Sep 4, 2023
The main part of this PR is adding a `HasSpaces` trait that enumerates
spaces in a plan. This trait can be automatically generated using a derive
macro and attributes, saving the effort of enumerating spaces by hand.

- This PR adds a `HasSpaces` trait that enumerates the spaces of a plan.
Plans (including NoGC) are now required to implement this trait. (A sketch
of the idea follows this list.)
- This PR implements a derive macro for `HasSpaces`.
- The `#[trace]` and `#[fallback_trace]` attributes are replaced by the
`#[space]` and `#[parent]` attributes for generality. A dedicated
`#[copy_semantics(xxxxx)]` attribute is added to specify copy semantics
for `trace_object`.
- The manually written `Plan::get_spaces()` method is removed in favor
of the automatically generated `HasSpaces::for_each_space` method.
- Added a `Plan::verify_side_metadata_sanity` method that calls
`Space::verify_side_metadata_sanity` for all spaces in the plan. This
replaces the manual invocations of `Space::verify_side_metadata_sanity`
in the constructors of plans.
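
A hand-written illustration of what the derived `HasSpaces` impl amounts to. The real mmtk-core trait is generic over the VM and the impl is generated from `#[space]` attributes; the names here are simplified:

```rust
trait Space {
    fn verify_side_metadata_sanity(&self);
}

trait HasSpaces {
    fn for_each_space(&self, f: &mut dyn FnMut(&dyn Space));
    fn for_each_space_mut(&mut self, f: &mut dyn FnMut(&mut dyn Space));
}

struct ImmortalSpace;
impl Space for ImmortalSpace {
    fn verify_side_metadata_sanity(&self) { /* elided */ }
}

// In mmtk-core this would be `#[derive(HasSpaces)]` with a `#[space]`
// attribute on the field; here the impl is written out by hand.
struct NoGC {
    nogc_space: ImmortalSpace,
}

impl HasSpaces for NoGC {
    fn for_each_space(&self, f: &mut dyn FnMut(&dyn Space)) {
        f(&self.nogc_space);
    }
    fn for_each_space_mut(&mut self, f: &mut dyn FnMut(&mut dyn Space)) {
        f(&mut self.nogc_space);
    }
}

// `Plan::verify_side_metadata_sanity` can then be written generically:
fn verify_all(plan: &dyn HasSpaces) {
    plan.for_each_space(&mut |space| space.verify_side_metadata_sanity());
}
```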

This PR also slightly refactors the mmtk-macros crate.
- The `syn` dependency is one major version behind the latest. This PR
bumps the version and refactors the code to adapt to this change.
- `mmtk-core/macros/src/lib.rs` is cleaned up so that it only contains
functions with minimal bodies. The functions for derive macros are
required to be in the root module of the crate, so their bodies are
offloaded to other modules.

This PR is a stepping stone for
#925. This PR makes it possible to
enumerate spaces of a plan without resorting to unsafe global maps.
@wks (Collaborator, Author) commented Sep 12, 2023

This PR tries to do too many things at once, for example, refactoring Map32 to properly synchronise its operations. I am splitting it into multiple PRs.

wks closed this Sep 12, 2023
github-merge-queue bot pushed a commit that referenced this pull request May 10, 2024
This PR removes the `shared_fl_map` field from `Map32`, and removes the
`fl_map` and `fl_page_resource` fields from `Map64`. These global maps
were used to enumerate and update the starting addresses of spaces
(stored in `CommonFreeListPageResource` instances) for several different
purposes.

1. `Map32` can only decide the starting address of discontiguous spaces
after all contiguous spaces are placed; therefore it needs to modify the
start addresses of all discontiguous `FreeListPageResource` instances.
- `shared_fl_map` was used to locate `CommonFreeListPageResource`
instances by the space ordinal.
- With this PR, we use `Plan::for_each_space_mut` to enumerate spaces,
and use the newly added `Space::maybe_get_page_resource_mut` method to
get each space's page resource (if it has one) in order to update its
starting address (see the sketch after this list).
2. `Map64` allocates a `RawMemoryFreeList` at the beginning of each space
that uses `FreeListPageResource`. It needs to update the start address
of each space to take the address range occupied by the
`RawMemoryFreeList` into account.
- `fl_page_resource` was used to locate `CommonFreeListPageResource`
instances by the space ordinal.
- `fl_map` was used to locate the underlying `RawMemoryFreeList`
instances of the corresponding `FreeListPageResource` by the space
ordinal.
- With this PR, we initialize the start address of the
`FreeListPageResource` immediately after its `RawMemoryFreeList` is
created, taking the limit of the `RawMemoryFreeList` into account.
Therefore, there is no longer any need to update the starting address
later.
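
A sketch of point 1 above: once `Map32` knows where the discontiguous range starts, the plan's spaces are walked mutably and each page resource is updated in place. All signatures are simplified stand-ins, not the real mmtk-core API:

```rust
type Address = usize;

trait PageResource {
    fn update_discontiguous_start(&mut self, start: Address);
}

trait Space {
    // Returns `None` for spaces that have no page resource to update.
    fn maybe_get_page_resource_mut(&mut self) -> Option<&mut dyn PageResource>;
}

// With `Plan::for_each_space_mut`, this loop body becomes the closure
// passed to the plan.
fn update_starts(spaces: &mut [Box<dyn Space>], discontig_start: Address) {
    for space in spaces.iter_mut() {
        if let Some(pr) = space.maybe_get_page_resource_mut() {
            pr.update_discontiguous_start(discontig_start);
        }
    }
}
```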

Because those global maps are removed, several changes are made to
`VMMap` and its implementations `Map32` and `Map64`.

- `VMMap::boot` and `VMMap::bind_freelist` no longer serve any purpose,
and are removed.
- `Map64::finalize_static_space_map` now does nothing but set
`finalized` to true.

The `CommonFreeListPageResource` data structure was introduced for the
sole purpose of being referenced by those global maps. This PR removes
`CommonFreeListPageResource` and `FreeListPageResourceInner`, and
relocates their fields.

- `common: CommonPageResource` is now a field of `FreeListPageResource`.
- The `free_list: Box<dyn FreeList>` and `start: Address` fields are moved
into `FreeListPageResourceSync` because they have always been accessed
while holding the mutex `FreeListPageResource::sync`.
- Note that the method `FreeListPageResource::release_pages` used to
call `free_list.size()` without holding the `sync` mutex, and then
called `free_list.free()` while holding it. The algorithm of `FreeList`
guarantees that `free()` will not race with `size()`. But the `Mutex`
forbids calling the read-only operation `size()` without holding the
lock, because in general a read-write operation may race with read-only
operations, too. For now, we extended the critical section to cover the
whole function.
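
A sketch of the "extend the critical section" fix described above: both the `size()` read and the `free()` write now happen under one lock guard. The `FreeList` methods here are simplified stand-ins:

```rust
use std::sync::Mutex;

trait FreeList {
    fn size(&self, unit: i32) -> i32; // read-only query
    fn free(&mut self, unit: i32);    // read-write operation
}

struct FreeListPageResourceSync {
    free_list: Box<dyn FreeList>,
}

struct FreeListPageResource {
    sync: Mutex<FreeListPageResourceSync>,
}

impl FreeListPageResource {
    fn release_pages(&self, first_unit: i32) {
        // One guard covers both calls, so the read cannot race with any
        // concurrent read-write operation on the free list.
        let mut sync = self.sync.lock().unwrap();
        let _pages = sync.free_list.size(first_unit); // read under the lock
        sync.free_list.free(first_unit);              // write, same lock
    }
}
```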

After the changes above, the `FreeListPageResource` no longer needs
`UnsafeCell`.

This PR is one step toward the greater goal of removing unsafe operations
related to FreeList, PageResource and VMMap. See:
#853

Related PRs:
- #925: This PR is a subset of
that PR.