The following is the current implementation of the ProcessRegionModBuf work packet:
```rust
impl<E: ProcessEdgesWork> GCWork<E::VM> for ProcessRegionModBuf<E> {
    fn do_work(&mut self, worker: &mut GCWorker<E::VM>, mmtk: &'static MMTK<E::VM>) {
        // Scan modbuf only if the current GC is a nursery GC
        if mmtk.plan.generational().unwrap().is_current_gc_nursery() {
            // Collect all the entries in all the slices
            let mut edges = vec![]; // NOTE: one single vector.
            for slice in &self.modbuf {
                for edge in slice.iter_edges() {
                    edges.push(edge);
                }
            }
            // Forward entries
            GCWork::do_work(&mut E::new(edges, false, mmtk), worker, mmtk)
        }
    }
}
```
This two-level for-loop packs all edges from all slices in self.modbuf into one single Vec, and creates a single E: ProcessEdgesWork instance from it. The size of each slice is not limited. In extreme cases, there can be more than four million edges across all slices (for example, in the jython benchmark from the DaCapo Chopin benchmark suite).
We should split those edges into smaller vectors and create multiple ProcessEdgesWork instances so that the processing can be parallelised across multiple GC workers (see the sketch below).
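A minimal sketch of that fix: flush the accumulating vector into a new work packet whenever it reaches a bound. This assumes ProcessEdgesWork's CAPACITY constant and GCWorker::add_work are the right bound and submission mechanism; the bucket stage chosen here is a guess.

```rust
// Sketch only: split the edges into bounded chunks, spawning one
// ProcessEdgesWork packet per full chunk so that other GC workers
// can pick them up in parallel.
impl<E: ProcessEdgesWork> GCWork<E::VM> for ProcessRegionModBuf<E> {
    fn do_work(&mut self, worker: &mut GCWorker<E::VM>, mmtk: &'static MMTK<E::VM>) {
        if mmtk.plan.generational().unwrap().is_current_gc_nursery() {
            let mut edges = Vec::with_capacity(E::CAPACITY);
            for slice in &self.modbuf {
                for edge in slice.iter_edges() {
                    edges.push(edge);
                    if edges.len() >= E::CAPACITY {
                        // Hand a full chunk off as its own work packet.
                        let chunk =
                            std::mem::replace(&mut edges, Vec::with_capacity(E::CAPACITY));
                        worker.add_work(WorkBucketStage::Closure, E::new(chunk, false, mmtk));
                    }
                }
            }
            // Process the final (possibly partial) chunk on this worker.
            if !edges.is_empty() {
                GCWork::do_work(&mut E::new(edges, false, mmtk), worker, mmtk);
            }
        }
    }
}
```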
Update: It is also advisable to create a ProcessMemorySliceWork as a complement to ProcessEdgesWork. Each ProcessMemorySliceWork can hold one or more slices to be processed. This makes the iteration of a MemorySlice lazy (postponing it to the dedicated work packet), and avoids unpacking a slice into a vector of edges just to create a ProcessEdgesWork instance. ProcessMemorySliceWork can be implemented by wrapping a ProcessEdgesWork inside and calling ProcessEdgesWork::process_edge for each edge in the MemorySlice.
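A sketch of what that could look like, assuming the VMBinding::VMMemorySlice associated type and ProcessEdgesWork's set_worker/process_edge/flush helpers (the struct and its field are illustrative):

```rust
// Sketch: a work packet that owns slices rather than edges, so slice
// iteration is deferred until the packet itself runs on a worker.
pub struct ProcessMemorySliceWork<E: ProcessEdgesWork> {
    slices: Vec<<E::VM as VMBinding>::VMMemorySlice>,
}

impl<E: ProcessEdgesWork> GCWork<E::VM> for ProcessMemorySliceWork<E> {
    fn do_work(&mut self, worker: &mut GCWorker<E::VM>, mmtk: &'static MMTK<E::VM>) {
        if mmtk.plan.generational().unwrap().is_current_gc_nursery() {
            // Wrap a ProcessEdgesWork and drive it one edge at a time;
            // no intermediate Vec of all edges is materialised here.
            let mut process_edges = E::new(vec![], false, mmtk);
            process_edges.set_worker(worker);
            for slice in &self.slices {
                for edge in slice.iter_edges() {
                    process_edges.process_edge(edge);
                }
            }
            process_edges.flush();
        }
    }
}
```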
Why are we not enforcing the max packet size with assertions or something?
Well, that's a good idea. I previously considered the max packet size a hint rather than a strict rule, and we didn't treat it as a violation if we sometimes added more items to a packet. But it should be OK to enforce the max packet size so that packet sizes remain bounded.
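For example (a sketch; E::CAPACITY stands in for whatever limit the scheduler treats as the maximum packet size):

```rust
// Sketch: fail fast in debug builds if a packet is built oversized.
debug_assert!(
    edges.len() <= E::CAPACITY,
    "ProcessEdgesWork packet has {} edges, exceeding the bound of {}",
    edges.len(),
    E::CAPACITY
);
```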