Add alloc_no_gc #1218

Open · wants to merge 9 commits into base: master
Changes from 1 commit
90 changes: 74 additions & 16 deletions src/memory_manager.rs
@@ -18,7 +18,7 @@ use crate::plan::{Mutator, MutatorContext};
use crate::scheduler::WorkBucketStage;
use crate::scheduler::{GCWork, GCWorker};
use crate::util::alloc::allocators::AllocatorSelector;
use crate::util::constants::{LOG_BYTES_IN_PAGE, MIN_OBJECT_SIZE};
use crate::util::constants::LOG_BYTES_IN_PAGE;
use crate::util::heap::layout::vm_layout::vm_layout;
use crate::util::opaque_pointer::*;
use crate::util::{Address, ObjectReference};
@@ -140,7 +140,19 @@ pub fn flush_mutator<VM: VMBinding>(mutator: &mut Mutator<VM>) {
mutator.flush()
}

/// Allocate memory for an object. For performance reasons, a VM should
/// Allocate memory for an object. This function is a GC safepoint. If MMTk fails
/// to allocate memory, it will attempt a GC to free up some memory and retry
/// the allocation.
///
/// This function in most cases returns a valid memory address.
/// This function may return a zero address iif 1. MMTk attempts at least one GC,
Collaborator (review comment):
Suggested change:
-/// This function may return a zero address iif 1. MMTk attempts at least one GC,
+/// This function may return a zero address if 1. MMTk attempts at least one GC,

Member Author (reply):
Actually this paragraph should be removed.

/// 2. the GC does not free up enough memory, 3. MMTk calls [`crate::vm::Collection::out_of_memory`]
/// to the binding, and 4. the binding returns from [`crate::vm::Collection::out_of_memory`]
/// instead of throwing an exception/error.
///
/// * Note
///
/// For performance reasons, a VM should
/// implement the allocation fast-path on their side rather than just calling this function.
///
/// If the VM provides a non-zero `offset` parameter, then the returned address will be
@@ -159,24 +171,48 @@ pub fn alloc<VM: VMBinding>(
offset: usize,
semantics: AllocationSemantics,
) -> Address {
// MMTk has assumptions about minimal object size.
// We need to make sure that all allocations comply with the min object size.
// Ideally, we check the allocation size, and if it is smaller, we transparently allocate the min
// object size (the VM does not need to know this). However, for the VM bindings we support at the moment,
// their object sizes are all larger than MMTk's min object size, so we simply put an assertion here.
// If you plan to use MMTk with a VM with its object size smaller than MMTk's min object size, you should
// meet the min object size in the fastpath.
debug_assert!(size >= MIN_OBJECT_SIZE);
// Assert alignment
debug_assert!(align >= VM::MIN_ALIGNMENT);
debug_assert!(align <= VM::MAX_ALIGNMENT);
// Assert offset
debug_assert!(VM::USE_ALLOCATION_OFFSET || offset == 0);
#[cfg(debug_assertions)]
crate::util::alloc::allocator::asset_allocation_args::<VM>(size, align, offset);

mutator.alloc(size, align, offset, semantics)
}
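
For illustration (a sketch, not part of this diff): a minimal binding-side wrapper around `alloc`. The generic `VM: VMBinding` bound and the alignment/offset values (8, 0) are assumptions; a real binding derives them from its object model.

use mmtk::memory_manager;
use mmtk::util::Address;
use mmtk::vm::VMBinding;
use mmtk::{AllocationSemantics, Mutator};

// Illustrative binding-side helper: allocate with the default semantics.
fn vm_allocate<VM: VMBinding>(mutator: &mut Mutator<VM>, size: usize) -> Address {
    // `alloc` may trigger a GC and retry internally, so in the common case
    // the returned address is valid.
    let addr = memory_manager::alloc(mutator, size, 8, 0, AllocationSemantics::Default);
    // The binding would construct its object at `addr` and then call
    // `memory_manager::post_alloc` to initialize MMTk-side object metadata.
    addr
}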

/// Invoke the allocation slow path. This is only intended for use when a binding implements the fastpath on
/// Allocate memory for an object. This function is NOT a GC safepoint. If MMTk fails
/// to allocate memory, it will not attempt a GC, nor call [`crate::vm::Collection::out_of_memory`];
/// it will instead return a zero address.
///
/// Generally [`alloc`] is preferred over this function. This function should only be used
/// when the binding does not want a GC to happen during this particular allocation request,
/// and is willing to deal with the allocation failure.
///
/// Notes on [`alloc`] also apply to this function.
///
/// Arguments:
/// * `mutator`: The mutator to perform this allocation request.
/// * `size`: The number of bytes required for the object.
/// * `align`: Required alignment for the object.
/// * `offset`: Offset associated with the alignment.
/// * `semantics`: The allocation semantic required for the allocation.
pub fn alloc_no_gc<VM: VMBinding>(
mutator: &mut Mutator<VM>,
size: usize,
align: usize,
offset: usize,
semantics: AllocationSemantics,
) -> Address {
#[cfg(debug_assertions)]
crate::util::alloc::allocator::asset_allocation_args::<VM>(size, align, offset);

mutator.alloc_no_gc(size, align, offset, semantics)
}
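
As a usage sketch for this new entry point (assumptions: the generic `VM: VMBinding` bound and the placeholder alignment/offset values), the zero-address check is the important part:

use mmtk::memory_manager;
use mmtk::util::Address;
use mmtk::vm::VMBinding;
use mmtk::{AllocationSemantics, Mutator};

// Illustrative: try to allocate without permitting a GC.
fn try_allocate_no_gc<VM: VMBinding>(mutator: &mut Mutator<VM>, size: usize) -> Option<Address> {
    let addr = memory_manager::alloc_no_gc(mutator, size, 8, 0, AllocationSemantics::Default);
    if addr.is_zero() {
        // No GC was attempted and out_of_memory was not called. The binding
        // may retry later with `alloc` at a point where a GC is acceptable.
        None
    } else {
        Some(addr)
    }
}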

/// Invoke the allocation slow path. This function is a GC safepoint. If MMTk fails
/// to allocate memory, it will attempt a GC to free up some memory and retry
/// the allocation. See [`alloc`] for more details.
///
/// * Notes
///
/// This is only intended for use when a binding implements the fastpath on
/// the binding side. When the binding handles fast path allocation and the fast path fails, it can use this
/// method for slow path allocation. Calling it before exhausting the fast path allocation buffer will lead to bad
/// performance.
@@ -197,6 +233,28 @@ pub fn alloc_slow<VM: VMBinding>(
mutator.alloc_slow(size, align, offset, semantics)
}

/// Invoke the allocation slow path. This function is NOT a GC safepoint. If MMTk fails
/// to allocate memory, it will not attempt a GC, nor call [`crate::vm::Collection::out_of_memory`];
/// it will instead return a zero address. See [`alloc_no_gc`] for more details.
///
/// Notes on [`alloc_slow`] also apply to this function.
///
/// Arguments:
/// * `mutator`: The mutator to perform this allocation request.
/// * `size`: The number of bytes required for the object.
/// * `align`: Required alignment for the object.
/// * `offset`: Offset associated with the alignment.
/// * `semantics`: The allocation semantic required for the allocation.
pub fn alloc_slow_no_gc<VM: VMBinding>(
mutator: &mut Mutator<VM>,
size: usize,
align: usize,
offset: usize,
semantics: AllocationSemantics,
) -> Address {
mutator.alloc_slow_no_gc(size, align, offset, semantics)
}
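
To illustrate the intended calling pattern for the slow-path entry points (a sketch: the `BumpBuffer` state and the `allow_gc` flag are hypothetical binding-side constructs, not part of this PR):

use mmtk::memory_manager;
use mmtk::util::Address;
use mmtk::vm::VMBinding;
use mmtk::{AllocationSemantics, Mutator};

// Hypothetical thread-local bump-pointer buffer owned by the binding.
struct BumpBuffer {
    cursor: Address,
    limit: Address,
}

// Binding-side fast path; calls into MMTk only when the buffer is exhausted.
fn fastpath_alloc<VM: VMBinding>(
    buf: &mut BumpBuffer,
    mutator: &mut Mutator<VM>,
    size: usize,
    allow_gc: bool,
) -> Address {
    let new_cursor = buf.cursor + size;
    if new_cursor <= buf.limit {
        // Common case: bump the cursor without calling into MMTk.
        let result = buf.cursor;
        buf.cursor = new_cursor;
        return result;
    }
    if allow_gc {
        // May block for a GC and retry; normally returns a valid address.
        memory_manager::alloc_slow(mutator, size, 8, 0, AllocationSemantics::Default)
    } else {
        // May return a zero address; the caller must handle the failure.
        memory_manager::alloc_slow_no_gc(mutator, size, 8, 0, AllocationSemantics::Default)
    }
}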

/// Perform post-allocation actions, usually initializing object metadata. For many allocators none are
/// required. For performance reasons, a VM should implement the post alloc fast-path on their side
/// rather than just calling this function.
82 changes: 74 additions & 8 deletions src/plan/mutator_context.rs
@@ -113,11 +113,29 @@ impl<VM: VMBinding> MutatorContext<VM> for Mutator<VM> {
offset: usize,
allocator: AllocationSemantics,
) -> Address {
unsafe {
let allocator = unsafe {
self.allocators
.get_allocator_mut(self.config.allocator_mapping[allocator])
}
.alloc(size, align, offset)
};
// The value should be default/unset at the beginning of an allocation request.
debug_assert!(!allocator.get_context().is_no_gc_on_fail());
allocator.alloc(size, align, offset)
}

fn alloc_no_gc(
&mut self,
size: usize,
align: usize,
offset: usize,
allocator: AllocationSemantics,
) -> Address {
let allocator = unsafe {
self.allocators
.get_allocator_mut(self.config.allocator_mapping[allocator])
};
// The value should be default/unset at the beginning of an allocation request.
debug_assert!(!allocator.get_context().is_no_gc_on_fail());
allocator.alloc_no_gc(size, align, offset)
}

fn alloc_slow(
@@ -127,11 +145,29 @@ impl<VM: VMBinding> MutatorContext<VM> for Mutator<VM> {
offset: usize,
allocator: AllocationSemantics,
) -> Address {
unsafe {
let allocator = unsafe {
self.allocators
.get_allocator_mut(self.config.allocator_mapping[allocator])
}
.alloc_slow(size, align, offset)
};
// The value should be default/unset at the beginning of an allocation request.
debug_assert!(!allocator.get_context().is_no_gc_on_fail());
allocator.alloc_slow(size, align, offset)
}

fn alloc_slow_no_gc(
&mut self,
size: usize,
align: usize,
offset: usize,
allocator: AllocationSemantics,
) -> Address {
let allocator = unsafe {
self.allocators
.get_allocator_mut(self.config.allocator_mapping[allocator])
};
// The value should be default/unset at the beginning of an allocation request.
debug_assert!(!allocator.get_context().is_no_gc_on_fail());
allocator.alloc_slow_no_gc(size, align, offset)
}

// Note that this method is slow, and we expect VM bindings that care about performance to implement allocation fastpath sequence in their bindings.
@@ -264,7 +300,7 @@ pub trait MutatorContext<VM: VMBinding>: Send + 'static {
fn prepare(&mut self, tls: VMWorkerThread);
/// Do the release work for this mutator.
fn release(&mut self, tls: VMWorkerThread);
/// Allocate memory for an object.
/// Allocate memory for an object. This is a GC safepoint. GC will be triggered if the allocation fails.
///
/// Arguments:
/// * `size`: the number of bytes required for the object.
@@ -278,7 +314,24 @@ pub trait MutatorContext<VM: VMBinding>: Send + 'static {
offset: usize,
allocator: AllocationSemantics,
) -> Address;
/// The slow path allocation. This is only useful when the binding
/// Allocate memory for an object. This is NOT a GC safepoint. If the allocation fails,
/// the function returns a zero value without triggering a GC.
///
/// Arguments:
/// * `size`: the number of bytes required for the object.
/// * `align`: required alignment for the object.
/// * `offset`: offset associated with the alignment. The result plus the offset will be aligned to the given alignment.
/// * `allocator`: the allocation semantic used for this object.
fn alloc_no_gc(
&mut self,
size: usize,
align: usize,
offset: usize,
allocator: AllocationSemantics,
) -> Address;
/// The slow path allocation. This is a GC safepoint. GC will be triggered if the allocation fails.
///
/// This is only useful when the binding
/// implements the fast path allocation, and would like to explicitly
/// call the slow path after the fast path allocation fails.
fn alloc_slow(
@@ -288,6 +341,19 @@ pub trait MutatorContext<VM: VMBinding>: Send + 'static {
offset: usize,
allocator: AllocationSemantics,
) -> Address;
/// The slow path allocation. This is NOT a GC safepoint. If the allocation fails,
/// the function returns a zero value without triggering a GC.
///
/// This is only useful when the binding
/// implements the fast path allocation, and would like to explicitly
/// call the slow path after the fast path allocation fails.
fn alloc_slow_no_gc(
&mut self,
size: usize,
align: usize,
offset: usize,
allocator: AllocationSemantics,
) -> Address;
/// Perform post-allocation actions. For many allocators none are
/// required.
///
4 changes: 2 additions & 2 deletions src/policy/immix/immixspace.rs
@@ -516,8 +516,8 @@ impl<VM: VMBinding> ImmixSpace<VM> {
}

/// Allocate a clean block.
pub fn get_clean_block(&self, tls: VMThread, copy: bool) -> Option<Block> {
let block_address = self.acquire(tls, Block::PAGES);
pub fn get_clean_block(&self, tls: VMThread, copy: bool, no_gc_on_fail: bool) -> Option<Block> {
let block_address = self.acquire(tls, Block::PAGES, no_gc_on_fail);
if block_address.is_zero() {
return None;
}
4 changes: 2 additions & 2 deletions src/policy/largeobjectspace.rs
@@ -303,8 +303,8 @@ impl<VM: VMBinding> LargeObjectSpace<VM> {
}

/// Allocate an object
pub fn allocate_pages(&self, tls: VMThread, pages: usize) -> Address {
self.acquire(tls, pages)
pub fn allocate_pages(&self, tls: VMThread, pages: usize, no_gc_on_fail: bool) -> Address {
self.acquire(tls, pages, no_gc_on_fail)
}

/// Test if the object's mark bit is the same as the given value. If it is not the same,
8 changes: 6 additions & 2 deletions src/policy/lockfreeimmortalspace.rs
@@ -135,7 +135,7 @@ impl<VM: VMBinding> Space<VM> for LockFreeImmortalSpace<VM> {
data_pages + meta_pages
}

fn acquire(&self, _tls: VMThread, pages: usize) -> Address {
fn acquire(&self, _tls: VMThread, pages: usize, no_gc_on_fail: bool) -> Address {
trace!("LockFreeImmortalSpace::acquire");
let bytes = conversions::pages_to_bytes(pages);
let start = self
@@ -145,7 +145,11 @@ impl<VM: VMBinding> Space<VM> for LockFreeImmortalSpace<VM> {
})
.expect("update cursor failed");
if start + bytes > self.limit {
panic!("OutOfMemory")
if no_gc_on_fail {
return Address::ZERO;
} else {
panic!("OutOfMemory");
}
}
if self.slow_path_zeroing {
crate::util::memory::zero(start, bytes);
10 changes: 8 additions & 2 deletions src/policy/marksweepspace/native_ms/global.rs
@@ -402,7 +402,13 @@ impl<VM: VMBinding> MarkSweepSpace<VM> {
crate::util::metadata::vo_bit::bzero_vo_bit(block.start(), Block::BYTES);
}

pub fn acquire_block(&self, tls: VMThread, size: usize, align: usize) -> BlockAcquireResult {
pub fn acquire_block(
&self,
tls: VMThread,
size: usize,
align: usize,
no_gc_on_fail: bool,
) -> BlockAcquireResult {
{
let mut abandoned = self.abandoned.lock().unwrap();
let bin = mi_bin::<VM>(size, align);
@@ -424,7 +430,7 @@ }
}
}

let acquired = self.acquire(tls, Block::BYTES >> LOG_BYTES_IN_PAGE);
let acquired = self.acquire(tls, Block::BYTES >> LOG_BYTES_IN_PAGE, no_gc_on_fail);
if acquired.is_zero() {
BlockAcquireResult::Exhausted
} else {
41 changes: 30 additions & 11 deletions src/policy/space.rs
@@ -80,8 +80,12 @@ pub trait Space<VM: VMBinding>: 'static + SFT + Sync + Downcast {
false
}

fn acquire(&self, tls: VMThread, pages: usize) -> Address {
trace!("Space.acquire, tls={:?}", tls);
fn acquire(&self, tls: VMThread, pages: usize, no_gc_on_fail: bool) -> Address {
trace!(
"Space.acquire, tls={:?}, no_gc_on_fail={:?}",
tls,
no_gc_on_fail
);

debug_assert!(
!self.will_oom_on_acquire(tls, pages << LOG_BYTES_IN_PAGE),
@@ -95,7 +99,7 @@ pub trait Space<VM: VMBinding>: 'static + SFT + Sync + Downcast {
VM::VMActivePlan::is_mutator(tls) && VM::VMCollection::is_collection_enabled();
// Is a GC allowed here? If we should poll but are not allowed to poll, we will panic.
// initialize_collection() has to be called so we know GC is initialized.
let allow_gc = should_poll && self.common().global_state.is_initialized();
let allow_gc = should_poll && self.common().global_state.is_initialized() && !no_gc_on_fail;

trace!("Reserving pages");
let pr = self.get_page_resource();
@@ -104,17 +108,25 @@
trace!("Polling ..");

if should_poll && self.get_gc_trigger().poll(false, Some(self.as_space())) {
// Clear the request
pr.clear_request(pages_reserved);

// If we do not want GC on fail, just return zero.
if no_gc_on_fail {
return Address::ZERO;
}

// Otherwise do GC here
debug!("Collection required");
assert!(allow_gc, "GC is not allowed here: collection is not initialized (did you call initialize_collection()?).");

// Clear the request, and inform GC trigger about the pending allocation.
pr.clear_request(pages_reserved);
// Inform GC trigger about the pending allocation.
self.get_gc_trigger()
.policy
.on_pending_allocation(pages_reserved);

VM::VMCollection::block_for_gc(VMMutatorThread(tls)); // We have checked that this is mutator
unsafe { Address::zero() }

// Return zero -- the caller will handle re-allocation
Address::ZERO
} else {
debug!("Collection not required");

@@ -211,6 +223,14 @@
Err(_) => {
drop(lock); // drop the lock immediately

// Clear the request
pr.clear_request(pages_reserved);

// If we do not want GC on fail, just return zero.
if no_gc_on_fail {
return Address::ZERO;
}

// We thought we had memory to allocate, but somehow failed the allocation. Will force a GC.
assert!(
allow_gc,
@@ -220,14 +240,13 @@
let gc_performed = self.get_gc_trigger().poll(true, Some(self.as_space()));
debug_assert!(gc_performed, "GC not performed when forced.");

// Clear the request, and inform GC trigger about the pending allocation.
pr.clear_request(pages_reserved);
// Inform GC trigger about the pending allocation.
self.get_gc_trigger()
.policy
.on_pending_allocation(pages_reserved);

VM::VMCollection::block_for_gc(VMMutatorThread(tls)); // We asserted that this is mutator.
unsafe { Address::zero() }
Address::ZERO
}
}
}
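
For context on the "caller will handle re-allocation" comment above, a simplified sketch of the caller-side loop around `Space::acquire` (assumptions: the import paths and this free-standing loop are illustrative; the real retry logic lives in the allocators under src/util/alloc):

use mmtk::policy::space::Space;
use mmtk::util::{Address, VMThread};
use mmtk::vm::VMBinding;

// Illustrative: a zero result from acquire() means either "a GC ran while
// the mutator was blocked; retry" or, with no_gc_on_fail, "fail without GC".
fn acquire_with_retry<VM: VMBinding>(
    space: &dyn Space<VM>,
    tls: VMThread,
    pages: usize,
    no_gc_on_fail: bool,
) -> Address {
    loop {
        let addr = space.acquire(tls, pages, no_gc_on_fail);
        if !addr.is_zero() {
            return addr;
        }
        if no_gc_on_fail {
            // acquire() gave up without triggering a GC; propagate the
            // failure so alloc_no_gc()/alloc_slow_no_gc() can return zero.
            return Address::ZERO;
        }
        // Otherwise the mutator was blocked for a GC inside acquire();
        // loop and retry the request.
    }
}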