Mimalloc Allocator #643

Merged 735 commits on Dec 5, 2022
Changes from 250 commits
Commits
8e6aec6
fix bulk zeroing
paigereeves Nov 15, 2021
7392d41
fix returning blocks
paigereeves Nov 16, 2021
940373f
doubly linked listed, skeleton for block list metadata
paigereeves Nov 23, 2021
9cb00b7
fix copying blocklist and circular lists
paigereeves Nov 29, 2021
47276d6
single threaded block sweep
paigereeves Dec 1, 2021
c27becf
eager sweeping
paigereeves Dec 1, 2021
7a485de
remove freelistmarksweep plan
paigereeves Dec 2, 2021
b3fe5d3
locking lists
paigereeves Dec 2, 2021
e95d2b5
no block level sweep for eager sweeping
paigereeves Dec 8, 2021
7f39090
Interact with mimalloc
paigereeves May 23, 2021
f9b16b2
reset
paigereeves Jun 16, 2021
b0eee1a
free list allocator and mark sweep space skeletons
paigereeves Jun 16, 2021
eefef24
alloc (untested)
paigereeves Jun 16, 2021
3ee9d8d
free list allocator slow path (untested)
paigereeves Jun 16, 2021
a9e1aea
get_size_class, make_free_list, init_size_classes (basic, untested)
paigereeves Jun 16, 2021
28c6ae4
acquiring space
paigereeves Jun 16, 2021
2e50599
added free lists struct (broken)
paigereeves Jun 21, 2021
5a474db
cleaning
paigereeves Jun 21, 2021
f1cdcd0
fix segfault
paigereeves Jun 21, 2021
620a6f6
restructure
paigereeves Jun 22, 2021
9188ca1
free list allocator fix
paigereeves Jun 23, 2021
a4aca0d
Send large objects to the immortal space
paigereeves Jun 23, 2021
c2cecc7
boxed allocator
paigereeves Jun 24, 2021
5b63b96
get_bin
paigereeves Jun 24, 2021
df09254
_mi_bin
paigereeves Jun 29, 2021
353b995
start on unoptimised blocks array
paigereeves Jul 1, 2021
79f9de9
new_chunk bug
paigereeves Jul 1, 2021
df8bca5
use metadata
paigereeves Jul 7, 2021
6b89a1b
fix segfaults
paigereeves Jul 13, 2021
df69b46
refactor fast and slow paths to match mimalloc
paigereeves Jul 13, 2021
087be13
local frees
paigereeves Jul 13, 2021
e70eaf5
add thread free list, define free, refactor metadataspec access, add …
paigereeves Jul 19, 2021
5200c37
tls metadata on the side, some progress on marksweep eager gc
paigereeves Jul 20, 2021
9c9bdb7
lazy sweep
paigereeves Jul 23, 2021
8ead449
reset allocators upon gc
paigereeves Jul 23, 2021
52a678d
block level sweep (sigsegv)
paigereeves Jul 23, 2021
977ffd1
merge
paigereeves Aug 1, 2021
63ad8f8
begin using chunkmap from immix
paigereeves Aug 1, 2021
6fa13a0
revert nogc; sweeping from immix
paigereeves Aug 2, 2021
409b208
refactor metadata, coarse grain sweep
paigereeves Aug 5, 2021
89524b3
block lists in the allocator
paigereeves Aug 8, 2021
897157f
fix bug in block_free_collect
paigereeves Aug 22, 2021
e3e4e6d
added assertions and removed dead code. null block bug in acquire_blo…
paigereeves Aug 22, 2021
7dfe4c9
remove big in alloc_slow_once, use is_zero
paigereeves Aug 24, 2021
3e7f86d
refactoring, new bug with consumed list disappearing
paigereeves Aug 24, 2021
7579a0a
fix disappearing blocklists, other bug still there
paigereeves Aug 25, 2021
55781a4
defensive assertions, tracing object, fix small bug in block_free_col…
paigereeves Sep 13, 2021
cf79260
use common plan
paigereeves Sep 14, 2021
2aa1e89
Interact with mimalloc
paigereeves May 23, 2021
fe1a14b
reset
paigereeves Jun 16, 2021
50357eb
free list allocator and mark sweep space skeletons
paigereeves Jun 16, 2021
291b8a4
alloc (untested)
paigereeves Jun 16, 2021
0e5fd9e
free list allocator slow path (untested)
paigereeves Jun 16, 2021
1609d17
get_size_class, make_free_list, init_size_classes (basic, untested)
paigereeves Jun 16, 2021
89ae347
acquiring space
paigereeves Jun 16, 2021
080e677
added free lists struct (broken)
paigereeves Jun 21, 2021
fbb91b6
cleaning
paigereeves Jun 21, 2021
fb61c82
fix segfault
paigereeves Jun 21, 2021
0a7dfc9
restructure
paigereeves Jun 22, 2021
32306fd
free list allocator fix
paigereeves Jun 23, 2021
fe219ed
Send large objects to the immortal space
paigereeves Jun 23, 2021
80c3199
boxed allocator
paigereeves Jun 24, 2021
182164f
get_bin
paigereeves Jun 24, 2021
3e89a9d
_mi_bin
paigereeves Jun 29, 2021
4be3f2c
start on unoptimised blocks array
paigereeves Jul 1, 2021
95f472c
new_chunk bug
paigereeves Jul 1, 2021
efbe78b
use metadata
paigereeves Jul 7, 2021
768e11f
fix segfaults
paigereeves Jul 13, 2021
9b5e5df
refactor fast and slow paths to match mimalloc
paigereeves Jul 13, 2021
f6f2654
local frees
paigereeves Jul 13, 2021
e6dd4d0
add thread free list, define free, refactor metadataspec access, add …
paigereeves Jul 19, 2021
1660308
tls metadata on the side, some progress on marksweep eager gc
paigereeves Jul 20, 2021
308aa27
lazy sweep
paigereeves Jul 23, 2021
5a27dad
reset allocators upon gc
paigereeves Jul 23, 2021
207cd7f
block level sweep (sigsegv)
paigereeves Jul 23, 2021
7afb78a
merge
paigereeves Aug 1, 2021
50ffba7
begin using chunkmap from immix
paigereeves Aug 1, 2021
589a1c8
revert nogc; sweeping from immix
paigereeves Aug 2, 2021
c0f58e7
refactor metadata, coarse grain sweep
paigereeves Aug 5, 2021
3135d1d
block lists in the allocator
paigereeves Aug 8, 2021
b111fef
fix bug in block_free_collect
paigereeves Aug 22, 2021
1019293
added assertions and removed dead code. null block bug in acquire_blo…
paigereeves Aug 22, 2021
cb5d6d1
remove big in alloc_slow_once, use is_zero
paigereeves Aug 24, 2021
94f4633
refactoring, new bug with consumed list disappearing
paigereeves Aug 24, 2021
b0501c6
fix disappearing blocklists, other bug still there
paigereeves Aug 25, 2021
f61fc43
defensive assertions, tracing object, fix small bug in block_free_col…
paigereeves Sep 13, 2021
5161540
use common plan
paigereeves Sep 14, 2021
af9249b
use immortal space instead of los
paigereeves Sep 15, 2021
94f1620
rebase
paigereeves Sep 15, 2021
0e8d75c
rebase
paigereeves Sep 15, 2021
40ed0e6
eager sweeping
paigereeves Oct 4, 2021
3260c92
Interact with mimalloc
paigereeves May 23, 2021
af84c04
reset
paigereeves Jun 16, 2021
4665691
free list allocator and mark sweep space skeletons
paigereeves Jun 16, 2021
b68ec5a
alloc (untested)
paigereeves Jun 16, 2021
b0c9e66
free list allocator slow path (untested)
paigereeves Jun 16, 2021
ffc5916
restructure
paigereeves Jun 22, 2021
e20e16e
Send large objects to the immortal space
paigereeves Jun 23, 2021
7caa344
boxed allocator
paigereeves Jun 24, 2021
aa2e38f
get_bin
paigereeves Jun 24, 2021
9fd14e2
_mi_bin
paigereeves Jun 29, 2021
4ee2a80
start on unoptimised blocks array
paigereeves Jul 1, 2021
a99bec4
use metadata
paigereeves Jul 7, 2021
35bb776
fix segfaults
paigereeves Jul 13, 2021
806b5c4
local frees
paigereeves Jul 13, 2021
593c745
add thread free list, define free, refactor metadataspec access, add …
paigereeves Jul 19, 2021
8d79346
tls metadata on the side, some progress on marksweep eager gc
paigereeves Jul 20, 2021
297c290
lazy sweep
paigereeves Jul 23, 2021
6341eab
reset allocators upon gc
paigereeves Jul 23, 2021
98e626b
block level sweep (sigsegv)
paigereeves Jul 23, 2021
c5e7e3d
merge
paigereeves Aug 1, 2021
b1924ae
begin using chunkmap from immix
paigereeves Aug 1, 2021
e7961b1
revert nogc; sweeping from immix
paigereeves Aug 2, 2021
f242223
refactor metadata, coarse grain sweep
paigereeves Aug 5, 2021
599148c
block lists in the allocator
paigereeves Aug 8, 2021
b84920d
added assertions and removed dead code. null block bug in acquire_blo…
paigereeves Aug 22, 2021
7b33375
remove big in alloc_slow_once, use is_zero
paigereeves Aug 24, 2021
37140df
refactoring, new bug with consumed list disappearing
paigereeves Aug 24, 2021
fa02ffd
fix disappearing blocklists, other bug still there
paigereeves Aug 25, 2021
df00c07
defensive assertions, tracing object, fix small bug in block_free_col…
paigereeves Sep 13, 2021
5d570d2
use common plan
paigereeves Sep 14, 2021
fe9e579
Interact with mimalloc
paigereeves May 23, 2021
1ab7136
reset
paigereeves Jun 16, 2021
2c0413d
free list allocator and mark sweep space skeletons
paigereeves Jun 16, 2021
7be739c
added free lists struct (broken)
paigereeves Jun 21, 2021
1327eb1
fix segfault
paigereeves Jun 21, 2021
0c3d012
restructure
paigereeves Jun 22, 2021
0b67e24
free list allocator fix
paigereeves Jun 23, 2021
ed5d463
Send large objects to the immortal space
paigereeves Jun 23, 2021
de304e4
use metadata
paigereeves Jul 7, 2021
4d28650
fix segfaults
paigereeves Jul 13, 2021
d12e0ac
local frees
paigereeves Jul 13, 2021
bb42af9
add thread free list, define free, refactor metadataspec access, add …
paigereeves Jul 19, 2021
19e639a
tls metadata on the side, some progress on marksweep eager gc
paigereeves Jul 20, 2021
6046265
lazy sweep
paigereeves Jul 23, 2021
9a72ec2
block level sweep (sigsegv)
paigereeves Jul 23, 2021
d51efc8
merge
paigereeves Aug 1, 2021
47916a1
begin using chunkmap from immix
paigereeves Aug 1, 2021
d95156d
revert nogc; sweeping from immix
paigereeves Aug 2, 2021
4fab008
refactor metadata, coarse grain sweep
paigereeves Aug 5, 2021
a568490
block lists in the allocator
paigereeves Aug 8, 2021
b8a1919
refactoring, new bug with consumed list disappearing
paigereeves Aug 24, 2021
bff3f25
fix disappearing blocklists, other bug still there
paigereeves Aug 25, 2021
9708a18
defensive assertions, tracing object, fix small bug in block_free_col…
paigereeves Sep 13, 2021
b1a41f8
use common plan
paigereeves Sep 14, 2021
a2215fe
use immortal space instead of los
paigereeves Sep 15, 2021
c56d0f3
rebase
paigereeves Sep 15, 2021
70e9eff
rebase
paigereeves Sep 15, 2021
bab9631
eager sweeping
paigereeves Oct 4, 2021
164856a
rebase
paigereeves Oct 6, 2021
f97aae8
bulk zeroing
paigereeves Oct 21, 2021
18fc83f
remove old trace outputs
paigereeves Oct 27, 2021
10c9d87
properly send to LOS
paigereeves Oct 28, 2021
f5b07fb
fix bulk zeroing
paigereeves Nov 15, 2021
c0db6c5
fix returning blocks
paigereeves Nov 16, 2021
3adfb09
doubly linked listed, skeleton for block list metadata
paigereeves Nov 23, 2021
5c483c2
fix copying blocklist and circular lists
paigereeves Nov 29, 2021
babcc04
single threaded block sweep
paigereeves Dec 1, 2021
096b192
eager sweeping
paigereeves Dec 1, 2021
548a1a7
remove freelistmarksweep plan
paigereeves Dec 2, 2021
4ed5e86
locking lists
paigereeves Dec 2, 2021
c44976e
no block level sweep for eager sweeping
paigereeves Dec 8, 2021
e38ab51
lock block lists in lazy reset
paigereeves Dec 13, 2021
e2354f7
replicate free list allocator in openjdk
paigereeves Dec 22, 2021
ab2775d
Let plans specify behaviour on mutator destruction
paigereeves Jan 19, 2022
e0bf5cf
fix and refactor abandoned block acquiration
paigereeves Jan 19, 2022
1b195b1
fix stress OOM
paigereeves Feb 9, 2022
f477f62
fix malloc allocator thread destruction method
paigereeves Feb 9, 2022
9d5e936
optimise sweeping, remove alloc bit, remove surplus bins
paigereeves Feb 23, 2022
c7d98b8
refactor finding blocks slightly
paigereeves Feb 23, 2022
1c06c8e
debugging
paigereeves May 1, 2022
1f00d1a
Merge remote-tracking branch 'upstream/master'
paigereeves May 29, 2022
c4f3778
Merge branch 'master' of github.com:paigereeves/mmtk-core
paigereeves May 29, 2022
b0b6620
update edge processing
paigereeves May 30, 2022
57b0343
fix iterations SIGSEGV bug and non-zeroing space
paigereeves Jun 21, 2022
36ac5ed
Merge remote-tracking branch 'upstream/master'
paigereeves Jun 21, 2022
c229949
merge
paigereeves Jun 21, 2022
b9d4b26
small changes
paigereeves Aug 17, 2022
d7013a0
delete head files
paigereeves Aug 17, 2022
f60a1f4
Merge branch 'master' into mimalloc
qinsoon Sep 20, 2022
a7d86bc
Fix build and tests
qinsoon Sep 26, 2022
4af402d
Fix warnings and style
qinsoon Sep 26, 2022
a26b6a7
Merge branch 'master' into mimalloc
qinsoon Sep 26, 2022
6624d93
Revert the changes about destroy_mutator
qinsoon Sep 29, 2022
413ebe3
Merge branch 'master' into mimalloc
qinsoon Sep 29, 2022
53dc9d3
Fix build and tests
qinsoon Sep 29, 2022
a67541e
Merge the malloc feature with malloc_mark_sweep
qinsoon Sep 29, 2022
ac9d878
Change ci-test: no need to test mark sweep with malloc_mark_sweep.
qinsoon Sep 29, 2022
f633402
Malloc mark sweep uses MS_ACTIVE_CHUNK. MarksweepSpace should not clear
qinsoon Sep 29, 2022
8414a6c
Remove jar files that were checked in.
qinsoon Sep 29, 2022
0303c11
Fix style
qinsoon Sep 29, 2022
23cc82c
Clear alloc bit in sweep_block
qinsoon Sep 30, 2022
3d58d34
MS_FREE spec region is address.
qinsoon Sep 30, 2022
21dcc82
Respect align/offset in freelist allocator. Add tests for align/offset.
qinsoon Oct 4, 2022
fb8007e
Remove unnecessary VM type parameter in Block
qinsoon Oct 4, 2022
0cc899a
mi_bin returns usize. No need to cast to usize
qinsoon Oct 4, 2022
1356613
Get stress GC work properly with free list allocator
qinsoon Oct 5, 2022
6a2b313
Merge branch 'master' into master
qinsoon Oct 5, 2022
fd02823
Fix build
qinsoon Oct 5, 2022
5df931d
Reset ms mutator instead of rebind.
qinsoon Oct 5, 2022
377dd5a
Clean up release/sweep code
qinsoon Oct 6, 2022
b9162fe
Revert changes about BlockList.lock()
qinsoon Oct 6, 2022
25f352e
Bulk zero side mark bit in prepare()
qinsoon Oct 6, 2022
799ee4b
Refactor chunk/chunk_map for immix and marksweep. Fix a bug in the
qinsoon Oct 6, 2022
3860f2d
Minor clean up for chunk map
qinsoon Oct 6, 2022
8ee6c8f
Allow allow_slow_once_preciese_test to return Address::ZERO. Move
qinsoon Oct 7, 2022
b272e1b
Remove the metadata module from marksweepspace
qinsoon Oct 10, 2022
91a58f2
Remove policy-specific mark sweep code from the plan.
qinsoon Oct 10, 2022
9bae214
Add on_mutator_destroy for Allocator. Remove destroy_mutator from plan.
qinsoon Oct 10, 2022
38aa26c
Move mallocspace to marksweepspace as a module.
qinsoon Oct 10, 2022
d9b6d4c
Remove some build configs from Cargo.toml
qinsoon Oct 10, 2022
b007bbd
For MallocSpace, global specs need to be consistent for all the spaces
qinsoon Oct 10, 2022
3c3250d
Merge branch 'master' into mimalloc
qinsoon Oct 10, 2022
2de41db
Some cleanup: revert MAX_PHASES, remove wrong allocatr/space mapping for
qinsoon Oct 11, 2022
b3d2f60
Get mi_bin work for 32 bits. Update max object size. Add tests for
qinsoon Oct 11, 2022
f72d7f9
Only clear block-level metadata
qinsoon Oct 12, 2022
c8b9a9f
Add ObjectModel::OBJECT_REF_OFFSET_FROM_ALLOCATION. Rewrite
qinsoon Oct 14, 2022
3302f5f
OBJECT_REF_OFFSET constants define a range. Some refactoring around
qinsoon Oct 16, 2022
03cfc06
Add VM::USE_ALLOCATION_OFFSET. Do simple sweep for MS block if possible.
qinsoon Oct 17, 2022
2bee90b
Comments for MS block
qinsoon Oct 17, 2022
92a14a2
Fix dummyVM build.
qinsoon Oct 17, 2022
9b80744
Minor fix
qinsoon Oct 17, 2022
9d91dc0
Use Block::containing() to get the block that contains the object, and
qinsoon Oct 18, 2022
56d7ff7
Add n_free_list to ReservedAllocators. Refactor abandoned blocks in mark
qinsoon Oct 25, 2022
e854ead
Cleanup
qinsoon Oct 25, 2022
c2481b3
Merge branch 'master' into master
qinsoon Oct 25, 2022
9190727
Fix API check. Minor cleanup.
qinsoon Oct 25, 2022
abc4c31
Move block lists to the policy so it is not public
qinsoon Oct 25, 2022
f52d003
Remove unnecessary #[allow(unused)]
qinsoon Nov 1, 2022
3df0b07
Fix allocator selector dedup. Fix duplicate store lbock in block_list
qinsoon Nov 8, 2022
3b34e76
Use non-zero Block and Option<Block>
qinsoon Nov 10, 2022
ff1be54
Minor fix on documentation
qinsoon Nov 10, 2022
56c408a
Add set_zero and set_zero_atomic to SideMetadataSpec
qinsoon Nov 10, 2022
0a73add
Address reviews for the allocator
qinsoon Nov 10, 2022
d9bd805
Remove the export of is_alloced_by_malloc
qinsoon Nov 10, 2022
1da4ba6
Add an assertion to check if the allocation is within the current cell.
qinsoon Nov 10, 2022
3084ce7
Merge branch 'master' into mimalloc
qinsoon Nov 10, 2022
f4c36f6
Add #Safety for set_zero
qinsoon Nov 10, 2022
1964e73
Add comments for non-zero blocks, and chunks.
qinsoon Nov 10, 2022
d78575c
Change an assert to a debug_assert
qinsoon Nov 10, 2022
40243df
Remove Block.is_zero(). Turn usize to a Block with map. Minor comments
qinsoon Nov 10, 2022
6e65bea
Add comments for lock
qinsoon Nov 10, 2022
161a030
Reclaim memory in dummyvm's destroy_mutator. Change set/is_set to
qinsoon Nov 10, 2022
3895121
Merge branch 'master' into mimalloc
qinsoon Nov 11, 2022
25897b8
Merge branch 'master' into mimalloc
qinsoon Nov 28, 2022
b2ec1b8
Remove object_ref_guard
qinsoon Nov 28, 2022
7002d5f
Fix grammar in doc
qinsoon Nov 29, 2022
9b65404
Merge branch 'master' into mimalloc
qinsoon Dec 2, 2022
9a3e758
Prefer using methods from the alloc_bit module
qinsoon Dec 2, 2022
c06bd5f
The alloc bit may not be set when we sweep an empty cell
qinsoon Dec 4, 2022
7 changes: 1 addition & 6 deletions .github/scripts/ci-test.sh
@@ -37,12 +37,7 @@ for fn in $(ls src/tests/*.rs); do

# Run the test with each plan it needs.
for MMTK_PLAN in $PLANS; do
# Deal with mark sweep specially, we only have malloc mark sweep, and we need to enable the feature to make it work.
if [[ $MMTK_PLAN == 'MarkSweep' ]]; then
env MMTK_PLAN=$MMTK_PLAN cargo test --features "malloc_mark_sweep,$FEATURES" -- $t;
else
env MMTK_PLAN=$MMTK_PLAN cargo test --features "$FEATURES" -- $t;
fi
env MMTK_PLAN=$MMTK_PLAN cargo test --features "$FEATURES" -- $t;
done
done
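With `malloc_mark_sweep` folded into the regular `MarkSweep` plan, the CI loop above no longer special-cases that plan. A minimal sketch of the simplified loop — the plan list, feature list, and test name here are placeholders, not the script's real values:

```shell
#!/bin/sh
# Placeholder values; the real ci-test.sh derives these elsewhere.
PLANS="NoGC MarkSweep SemiSpace Immix"
FEATURES="vm_space"
t="basic_alloc"

CMDS=""
for MMTK_PLAN in $PLANS; do
  # Every plan now gets the same invocation; no MarkSweep special case.
  CMD="env MMTK_PLAN=$MMTK_PLAN cargo test --features $FEATURES -- $t"
  CMDS="$CMDS$CMD ; "
  echo "$CMD"
done
```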

18 changes: 13 additions & 5 deletions Cargo.toml
@@ -42,6 +42,7 @@ atomic_refcell = "0.1.7"
strum = "0.24"
strum_macros = "0.24"
cfg-if = "1.0"
itertools = "0.10.5"

[dev-dependencies]
rand = "0.7.3"
@@ -115,11 +116,6 @@ work_packet_stats = []
# Count the malloc'd memory into the heap size
malloc_counted_size = []

# Use library malloc as the freelist allocator for mark sweep. This will make mark sweep slower. As malloc may return addresses outside our
# normal heap range, we will have to use a chunk-based SFT table. Turning on this feature will use a different SFT map implementation on 64 bits,
# and will affect all the plans in the build. Please be aware of the consequences; this is only meant for experimental use.
malloc_mark_sweep = []

# Do not modify the following line - ci-common.sh matches it
# -- Mutally exclusive features --
# Only one feature from each group can be provided. Otherwise build will fail.
@@ -131,6 +127,18 @@ malloc_mark_sweep = []
malloc_mimalloc = ["mimalloc-sys"]
malloc_jemalloc = ["jemalloc-sys"]
malloc_hoard = ["hoard-sys"]
# Use the native mimalloc allocator for malloc. This is not tested by me (Yi) yet, and it is only used to make sure that some code
# is not compiled in default builds.
malloc_native_mimalloc = []

# If there are more groups, they should be inserted above this line
# Group:end

# Group:marksweepallocation
# default is native allocator with lazy sweeping
eager_sweeping = []
# Use library malloc as the freelist allocator for mark sweep. This will make mark sweep slower. As malloc may return addresses outside our
# normal heap range, we will have to use a chunk-based SFT table. Turning on this feature will use a different SFT map implementation on 64 bits,
# and will affect all the plans in the build. Please be aware of the consequences; this is only meant for experimental use.
malloc_mark_sweep = []
# Group:end
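The new `marksweepallocation` group means a binding chooses the sweeping strategy through Cargo features. A hypothetical binding's `Cargo.toml` might select one of the options like this (the dependency source shown is illustrative):

```toml
[dependencies]
# Default: native free-list allocator with lazy sweeping.
mmtk = { git = "https://github.com/mmtk/mmtk-core" }

# Or, eager sweeping with the native allocator:
# mmtk = { git = "https://github.com/mmtk/mmtk-core", features = ["eager_sweeping"] }

# Or, the experimental library-malloc backend (slower; chunk-based SFT):
# mmtk = { git = "https://github.com/mmtk/mmtk-core", features = ["malloc_mark_sweep"] }
```

Only one feature from the group can be enabled at a time; the build fails otherwise.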
21 changes: 16 additions & 5 deletions src/memory_manager.rs
@@ -84,8 +84,11 @@ pub fn mmtk_init<VM: VMBinding>(builder: &MMTKBuilder) -> Box<MMTK<VM>> {
Box::new(mmtk)
}

/// Request MMTk to create a mutator for the given thread. For performance reasons, A VM should
/// store the returned mutator in a thread local storage that can be accessed efficiently.
/// Request MMTk to create a mutator for the given thread. The ownership
/// of the returned boxed mutator is transferred to the binding, and the binding needs to take care of its
/// lifetime. For performance reasons, a VM should store the returned mutator in thread-local storage
/// that can be accessed efficiently. A VM may also copy and embed the mutator structure into a thread-local data
/// structure, and use that as a reference to the mutator (it is okay to drop the box once the content is copied).
///
/// Arguments:
/// * `mmtk`: A reference to an MMTk instance.
@@ -103,12 +106,14 @@ pub fn bind_mutator<VM: VMBinding>(
mutator
}

/// Reclaim a mutator that is no longer needed.
/// Report to MMTk that a mutator is no longer needed. A binding should not attempt
/// to use the mutator after this call. MMTk will not attempt to reclaim the memory for the
/// mutator, so a binding should properly reclaim the memory for the mutator after this call.
///
/// Arguments:
/// * `mutator`: A reference to the mutator to be destroyed.
pub fn destroy_mutator<VM: VMBinding>(mutator: Box<Mutator<VM>>) {
drop(mutator);
pub fn destroy_mutator<VM: VMBinding>(mutator: &mut Mutator<VM>) {
mutator.on_destroy();
}

/// Flush the mutator's local states.
@@ -144,6 +149,12 @@ pub fn alloc<VM: VMBinding>(
// If you plan to use MMTk with a VM with its object size smaller than MMTk's min object size, you should
// meet the min object size in the fastpath.
debug_assert!(size >= MIN_OBJECT_SIZE);
// Assert alignment
debug_assert!(align >= VM::MIN_ALIGNMENT);
debug_assert!(align <= VM::MAX_ALIGNMENT);
// Assert offset
debug_assert!(VM::USE_ALLOCATION_OFFSET || offset == 0);

mutator.alloc(size, align, offset, semantics)
}

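The assertions added to `alloc` bound each request before it reaches the fastpath: the alignment must fall within the VM's declared range, and a non-zero offset is only legal when the VM opts into allocation offsets. A standalone sketch of that check — the constants below are placeholders, not real `VMBinding` values:

```rust
// Placeholder stand-ins for VM::MIN_ALIGNMENT, VM::MAX_ALIGNMENT,
// VM::USE_ALLOCATION_OFFSET, and MIN_OBJECT_SIZE; real values come
// from the VMBinding and mmtk-core.
const MIN_ALIGNMENT: usize = 4;
const MAX_ALIGNMENT: usize = 16;
const USE_ALLOCATION_OFFSET: bool = false;
const MIN_OBJECT_SIZE: usize = 4;

/// Mirrors the debug assertions in `alloc`: reject requests that the
/// fastpath is not prepared to handle.
fn alloc_args_ok(size: usize, align: usize, offset: usize) -> bool {
    size >= MIN_OBJECT_SIZE
        && align >= MIN_ALIGNMENT
        && align <= MAX_ALIGNMENT
        && (USE_ALLOCATION_OFFSET || offset == 0)
}
```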
2 changes: 1 addition & 1 deletion src/plan/global.rs
@@ -81,7 +81,7 @@ pub fn create_plan<VM: VMBinding>(
vm_map, mmapper, options, scheduler,
)) as Box<dyn Plan<VM = VM>>,
PlanSelector::MarkSweep => Box::new(crate::plan::marksweep::MarkSweep::new(
vm_map, mmapper, options,
vm_map, mmapper, options, scheduler,
)) as Box<dyn Plan<VM = VM>>,
PlanSelector::Immix => Box::new(crate::plan::immix::Immix::new(
vm_map, mmapper, options, scheduler,
74 changes: 3 additions & 71 deletions src/plan/marksweep/gc_work.rs
@@ -1,77 +1,9 @@
use crate::policy::mallocspace::metadata::is_chunk_mapped;
use crate::policy::mallocspace::metadata::is_chunk_marked_unsafe;
use crate::policy::mallocspace::MallocSpace;
use crate::scheduler::{GCWork, GCWorker, WorkBucketStage};
use crate::util::heap::layout::vm_layout_constants::BYTES_IN_CHUNK;
use crate::util::Address;
use crate::vm::VMBinding;
use crate::MMTK;
use std::sync::atomic::Ordering;

use super::MarkSweep;

/// Simple work packet that just sweeps a single chunk
pub struct MSSweepChunk<VM: VMBinding> {
ms: &'static MallocSpace<VM>,
// starting address of a chunk
chunk: Address,
}

impl<VM: VMBinding> GCWork<VM> for MSSweepChunk<VM> {
#[inline]
fn do_work(&mut self, _worker: &mut GCWorker<VM>, _mmtk: &'static MMTK<VM>) {
self.ms.sweep_chunk(self.chunk);
}
}

/// Work packet that generates sweep jobs for gc workers. Each chunk is given its own work packet
pub struct MSSweepChunks<VM: VMBinding> {
plan: &'static MarkSweep<VM>,
}

impl<VM: VMBinding> MSSweepChunks<VM> {
pub fn new(plan: &'static MarkSweep<VM>) -> Self {
Self { plan }
}
}

impl<VM: VMBinding> GCWork<VM> for MSSweepChunks<VM> {
#[inline]
fn do_work(&mut self, _worker: &mut GCWorker<VM>, mmtk: &'static MMTK<VM>) {
let ms = self.plan.ms_space();
let mut work_packets: Vec<Box<dyn GCWork<VM>>> = vec![];
let mut chunk = unsafe { Address::from_usize(ms.chunk_addr_min.load(Ordering::Relaxed)) }; // XXX: have to use AtomicUsize to represent an Address
let end = unsafe { Address::from_usize(ms.chunk_addr_max.load(Ordering::Relaxed)) }
+ BYTES_IN_CHUNK;

// Since only a single thread generates the sweep work packets as well as it is a Stop-the-World collector,
// we can assume that the chunk mark metadata is not being accessed by anything else and hence we use
// non-atomic accesses
while chunk < end {
if is_chunk_mapped(chunk) && unsafe { is_chunk_marked_unsafe(chunk) } {
work_packets.push(Box::new(MSSweepChunk { ms, chunk }));
}

chunk += BYTES_IN_CHUNK;
}

debug!("Generated {} sweep work packets", work_packets.len());
#[cfg(debug_assertions)]
{
ms.total_work_packets
.store(work_packets.len() as u32, Ordering::SeqCst);
ms.completed_work_packets.store(0, Ordering::SeqCst);
ms.work_live_bytes.store(0, Ordering::SeqCst);
}

mmtk.scheduler.work_buckets[WorkBucketStage::Release].bulk_add(work_packets);
}
}

pub struct MSGCWorkContext<VM: VMBinding>(std::marker::PhantomData<VM>);
use crate::policy::gc_work::DEFAULT_TRACE;
use crate::scheduler::gc_work::PlanProcessEdges;
use crate::scheduler::gc_work::*;
use crate::vm::VMBinding;

pub struct MSGCWorkContext<VM: VMBinding>(std::marker::PhantomData<VM>);
impl<VM: VMBinding> crate::scheduler::GCWorkContext for MSGCWorkContext<VM> {
type VM = VM;
type PlanType = MarkSweep<VM>;
91 changes: 49 additions & 42 deletions src/plan/marksweep/global.rs
@@ -1,44 +1,50 @@
use crate::plan::global::BasePlan;
use crate::plan::global::CommonPlan;
use crate::plan::global::GcStatus;
use crate::plan::marksweep::gc_work::{MSGCWorkContext, MSSweepChunks};
use crate::plan::marksweep::gc_work::MSGCWorkContext;
use crate::plan::marksweep::mutator::ALLOCATOR_MAPPING;
use crate::plan::AllocationSemantics;
use crate::plan::Plan;
use crate::plan::PlanConstraints;
use crate::policy::mallocspace::metadata::ACTIVE_CHUNK_METADATA_SPEC;
use crate::policy::mallocspace::MallocSpace;
use crate::policy::space::Space;
use crate::scheduler::*;
use crate::scheduler::GCWorkScheduler;
use crate::util::alloc::allocators::AllocatorSelector;
#[cfg(not(feature = "global_alloc_bit"))]
use crate::util::alloc_bit::ALLOC_SIDE_METADATA_SPEC;
use crate::util::heap::layout::heap_layout::Mmapper;
use crate::util::heap::layout::heap_layout::VMMap;
use crate::util::heap::HeapMeta;
use crate::util::heap::VMRequest;
use crate::util::metadata::side_metadata::{SideMetadataContext, SideMetadataSanity};
use crate::util::options::Options;
use crate::util::VMWorkerThread;
use crate::vm::VMBinding;
use enum_map::EnumMap;
use mmtk_macros::PlanTraceObject;
use std::sync::Arc;

use enum_map::EnumMap;
#[cfg(feature = "malloc_mark_sweep")]
pub type MarkSweepSpace<VM> = crate::policy::marksweepspace::malloc_ms::MallocSpace<VM>;
#[cfg(feature = "malloc_mark_sweep")]
use crate::policy::marksweepspace::malloc_ms::MAX_OBJECT_SIZE;

use mmtk_macros::PlanTraceObject;
#[cfg(not(feature = "malloc_mark_sweep"))]
pub type MarkSweepSpace<VM> = crate::policy::marksweepspace::native_ms::MarkSweepSpace<VM>;
#[cfg(not(feature = "malloc_mark_sweep"))]
use crate::policy::marksweepspace::native_ms::MAX_OBJECT_SIZE;

#[derive(PlanTraceObject)]
pub struct MarkSweep<VM: VMBinding> {
#[fallback_trace]
common: CommonPlan<VM>,
#[trace]
ms: MallocSpace<VM>,
ms: MarkSweepSpace<VM>,
}

pub const MS_CONSTRAINTS: PlanConstraints = PlanConstraints {
moves_objects: false,
gc_header_bits: 2,
gc_header_words: 0,
num_specialized_scans: 1,
max_non_los_default_alloc_bytes: MAX_OBJECT_SIZE,
may_trace_duplicate_edges: true,
..PlanConstraints::default()
};
@@ -56,7 +62,6 @@ impl<VM: VMBinding> Plan for MarkSweep<VM> {
self.base().set_collection_kind::<Self>(self);
self.base().set_gc_status(GcStatus::GcPrepare);
scheduler.schedule_common_work::<MSGCWorkContext<VM>>(self);
scheduler.work_buckets[WorkBucketStage::Prepare].add(MSSweepChunks::<VM>::new(self));
}

fn get_allocator_mapping(&self) -> &'static EnumMap<AllocationSemantics, AllocatorSelector> {
@@ -65,11 +70,11 @@

fn prepare(&mut self, tls: VMWorkerThread) {
self.common.prepare(tls, true);
// Dont need to prepare for MallocSpace
self.ms.prepare();
}

fn release(&mut self, tls: VMWorkerThread) {
trace!("Marksweep: Release");
self.ms.release();
self.common.release(tls, true);
}

@@ -95,47 +100,49 @@ impl<VM: VMBinding> MarkSweep<VM> {
}

impl<VM: VMBinding> MarkSweep<VM> {
pub fn new(vm_map: &'static VMMap, mmapper: &'static Mmapper, options: Arc<Options>) -> Self {
let heap = HeapMeta::new(&options);
// if global_alloc_bit is enabled, ALLOC_SIDE_METADATA_SPEC will be added to
// SideMetadataContext by default, so we don't need to add it here.
#[cfg(feature = "global_alloc_bit")]
let global_metadata_specs =
SideMetadataContext::new_global_specs(&[ACTIVE_CHUNK_METADATA_SPEC]);
// if global_alloc_bit is NOT enabled,
// we need to add ALLOC_SIDE_METADATA_SPEC to SideMetadataContext here.
#[cfg(not(feature = "global_alloc_bit"))]
let global_metadata_specs = SideMetadataContext::new_global_specs(&[
ALLOC_SIDE_METADATA_SPEC,
ACTIVE_CHUNK_METADATA_SPEC,
]);

let res = MarkSweep {
ms: MallocSpace::new(global_metadata_specs.clone()),
common: CommonPlan::new(
pub fn new(
vm_map: &'static VMMap,
mmapper: &'static Mmapper,
options: Arc<Options>,
scheduler: Arc<GCWorkScheduler<VM>>,
) -> Self {
let mut heap = HeapMeta::new(&options);
let mut global_metadata_specs = SideMetadataContext::new_global_specs(&[]);
MarkSweepSpace::<VM>::extend_global_side_metadata_specs(&mut global_metadata_specs);

let res = {
let ms = MarkSweepSpace::new(
"MarkSweepSpace",
false,
VMRequest::discontiguous(),
global_metadata_specs.clone(),
vm_map,
mmapper,
&mut heap,
scheduler,
);

let common = CommonPlan::new(
vm_map,
mmapper,
options,
heap,
&MS_CONSTRAINTS,
global_metadata_specs,
),
};
);

// Use SideMetadataSanity to check if each spec is valid. This is also needed for check
// side metadata in extreme_assertions.
{
let mut side_metadata_sanity_checker = SideMetadataSanity::new();
res.common
.verify_side_metadata_sanity(&mut side_metadata_sanity_checker);
res.ms
.verify_side_metadata_sanity(&mut side_metadata_sanity_checker);
}
MarkSweep { common, ms }
};

let mut side_metadata_sanity_checker = SideMetadataSanity::new();
res.common
.verify_side_metadata_sanity(&mut side_metadata_sanity_checker);
res.ms
.verify_side_metadata_sanity(&mut side_metadata_sanity_checker);
res
}

pub fn ms_space(&self) -> &MallocSpace<VM> {
pub fn ms_space(&self) -> &MarkSweepSpace<VM> {
&self.ms
}
}
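The plan now resolves `MarkSweepSpace` at compile time: a feature-gated type alias picks either the malloc-backed space or the native mark-sweep space, and the rest of the plan code is written against the alias. The same pattern in miniature — module and feature names here are hypothetical, not mmtk-core's:

```rust
// Two interchangeable backends behind one alias, selected by a Cargo feature.
mod native_impl {
    pub struct Space;
    impl Space {
        pub fn new() -> Self { Space }
        pub fn kind(&self) -> &'static str { "native" }
    }
}
mod malloc_impl {
    pub struct Space;
    impl Space {
        pub fn new() -> Self { Space }
        pub fn kind(&self) -> &'static str { "malloc" }
    }
}

// Hypothetical feature name; mmtk-core's real feature is `malloc_mark_sweep`.
#[cfg(feature = "malloc_backend")]
pub type MarkSweepSpace = malloc_impl::Space;
#[cfg(not(feature = "malloc_backend"))]
pub type MarkSweepSpace = native_impl::Space;
```

Callers such as `ms_space()` only ever see the alias, so switching backends requires no change to the plan itself.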
2 changes: 1 addition & 1 deletion src/plan/marksweep/mod.rs
@@ -1,4 +1,4 @@
//! Plan: marksweep (currently using malloc as its freelist allocator)
//! Plan: marksweep

mod gc_work;
mod global;