Don't create a malloc'd block each time we start tracing.
When we detect a possibly closed loop in the end user's program, we need to
determine whether the current thread is the one tracing it or not. We previously
allocated a chunk of memory when we started tracing and used its address as a
temporary thread ID.
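
In outline, the old scheme was (a simplified sketch, not the exact code; the
phase-tag plumbing is elided):

    fn main() {
        // The freshly malloc'd byte's address doubles as a temporary
        // per-thread ID for the duration of a trace.
        let loc_id: *mut u8 = Box::into_raw(Box::new(0u8));
        // ... `loc_id as usize` is stored in the Location, tagged as "tracing" ...
        // Every path that finishes or abandons the trace must remember to
        // reclaim the block, or it leaks:
        unsafe { drop(Box::from_raw(loc_id)) };
    }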

Ideally we would use the system-provided thread ID to determine this, but that
turns out to be a non-portable minefield. pthreads provides the pthread_t ID
type, but that's notionally opaque (on OpenBSD, as far as I can tell from a
quick search, it's really an opaque struct about which no guarantees are
provided, though some programs such as Chrome do seem to rely on it being
equivalent to a usize). Some platforms (at least Linux and OpenBSD) also use
the process ID type pid_t for threads; that's guaranteed to be a signed
integer, though not, it seems, of any guaranteed size (on both platforms it's
actually a C int, so 32 bits in practice). Rust provides a ThreadId type and an
unstable as_u64 function (rust-lang/rust#67939), but that provides no
guarantees about how much of the 64-bit integer space is used.
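
For illustration, the unstable API looks roughly like this (nightly-only,
behind the thread_id_value feature gate; the exact return type has changed
over time):

    #![feature(thread_id_value)] // unstable: rust-lang/rust#67939

    use std::thread;

    fn main() {
        // A 64-bit value, but std makes no promise about which part of the
        // 64-bit space is used, so it can't safely share a usize with tag bits.
        let tid = thread::current().id().as_u64();
        println!("thread ID: {}", tid);
    }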

The challenge is that whatever ID we use must fit into
numbits(usize) - numbits(PHASE_TAG) bits, i.e. we have 62 bits for the ID on a
64-bit system. If we rely on the system thread IDs, it feels like we're storing
up a portability time bomb that might explode one day in the dim and distant
future. At least for now, the (minimal) extra performance we might get from
them doesn't seem worth the danger.
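
Concretely, the ID must leave the low tag bits clear. A minimal sketch of the
packing, with illustrative constant values rather than the real ones from
mt.rs:

    const PHASE_NUM_BITS: usize = 2;                    // low bits hold the phase tag
    const PHASE_TAG: usize = (1 << PHASE_NUM_BITS) - 1; // 0b11
    const PHASE_TRACING: usize = 0b01;                  // illustrative tag value

    fn pack(tid: *mut u8) -> usize {
        // On a 64-bit system the ID gets the remaining 62 bits, so its low
        // two bits must be zero; in practice a malloc'd address is aligned
        // well beyond that.
        debug_assert_eq!(tid as usize & PHASE_TAG, 0);
        tid as usize | PHASE_TRACING
    }

    fn unpack(lp: usize) -> *mut u8 {
        (lp & !PHASE_TAG) as *mut u8
    }

    fn main() {
        let tid = Box::into_raw(Box::new(0u8));
        assert_eq!(unpack(pack(tid)), tid);
        unsafe { drop(Box::from_raw(tid)) };
    }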

This commit is thus a relatively simple change. Rather than allocating a
malloc'd block every time we start tracing a thread, which means we have to be
quite careful about freeing memory, we allocate one block per thread at thread
construction time. This simplifies the code slightly and, since the block has
an aligned address, we can feel fairly safe that it plays nicely with
PHASE_TAG. We could, if we wanted, allocate this block lazily the first time
we trace, but that feels like an unnecessary optimisation at this point.
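
In outline, the scheme after this commit (a minimal sketch; field and function
names mirror the diff below, with everything else elided):

    struct MTThreadInner {
        /// One malloc'd byte per thread; its address doubles as the thread ID
        /// for the whole lifetime of the thread.
        tid: *mut u8,
    }

    impl MTThreadInner {
        fn init() -> Self {
            MTThreadInner { tid: Box::into_raw(Box::new(0u8)) }
        }
    }

    impl Drop for MTThreadInner {
        fn drop(&mut self) {
            // The single place the block is freed, however many traces this
            // thread started or abandoned.
            unsafe { drop(Box::from_raw(self.tid)) };
        }
    }

    fn main() {
        let inner = MTThreadInner::init();
        let _id = inner.tid as usize; // usable as an ID while `inner` lives
        drop(inner); // frees the block exactly once
    }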
ltratt committed Dec 5, 2020
1 parent ae5c029 commit 63ce9d5
Showing 1 changed file with 14 additions and 18 deletions.
ykrt/src/mt.rs
@@ -238,17 +238,15 @@ impl MTThread {
                     // count further.
                     return None;
                 }
-                let loc_id = Box::into_raw(Box::new(0u8));
-                let new_pack = loc_id as usize | PHASE_TRACING;
+                let new_pack = self.inner.tid as usize | PHASE_TRACING;
                 if loc.pack.compare_and_swap(lp, new_pack, Ordering::Release) == lp {
                     Rc::get_mut(&mut self.inner).unwrap().tracer =
-                        Some((start_tracing(self.inner.tracing_kind), loc_id));
+                        Some((start_tracing(self.inner.tracing_kind), self.inner.tid));
                     return None;
-                } else {
-                    // We raced with another thread that's also trying to trace this
-                    // Location, so free the malloc'd block.
-                    unsafe { Box::from_raw(loc_id) };
                 }
+                // We raced with another thread that's (probably) trying to trace this
+                // Location or (less likely) has already compiled it so we go around the
+                // loop again to see what we should do.
             } else {
                 let new_pack = PHASE_COUNTING | ((count + 1) << PHASE_NUM_BITS);
                 if loc.pack.compare_and_swap(lp, new_pack, Ordering::Release) == lp {
@@ -259,18 +257,17 @@ impl MTThread {
                 }
             }
             PHASE_TRACING => {
-                let loc_id = if let Some((_, loc_id)) = self.inner.tracer {
+                if let Some((_, tid)) = self.inner.tracer {
                     // This thread is tracing something...
-                    if loc_id != ((lp & !PHASE_TAG) as *mut u8) {
+                    if tid != ((lp & !PHASE_TAG) as *mut u8) {
                         // ...but we didn't start at the current Location.
                         return None;
                     }
-                    // ...and we started at this Location, so we've got a complete loop!
-                    loc_id
+                    // ...and we started at this Location, so we've got a complete loop!
                 } else {
                     // Another thread is tracing this location.
                     return None;
-                };
+                }

                 let sir_trace = Rc::get_mut(&mut self.inner)
                     .unwrap()
@@ -291,8 +288,6 @@ impl MTThread {

                 let new_pack = ptr | PHASE_COMPILED;
                 loc.pack.store(new_pack, Ordering::Release);
-                // Free the small block of memory we used as a Location ID.
-                unsafe { Box::from_raw(loc_id) };
                 Rc::get_mut(&mut self.inner).unwrap().tracer = None;
                 return None;
             }
@@ -316,6 +311,9 @@ impl MTThread {
 /// The innards of a meta-tracer thread.
 struct MTThreadInner {
     mt: MT,
+    /// A value that uniquely identifies a thread and that is guaranteed not to overlap with
+    /// PHASE_TAG. For simplicity's sake this is a pointer to a malloc'd chunk of memory.
+    tid: *mut u8,
     hot_threshold: HotThreshold,
     #[allow(dead_code)]
     tracing_kind: TracingKind,
@@ -331,6 +329,7 @@ impl MTThreadInner {
         let tracing_kind = mt.tracing_kind();
         let inner = MTThreadInner {
             mt,
+            tid: Box::into_raw(Box::new(0u8)),
             hot_threshold,
             tracing_kind,
             tracer: None,
@@ -343,10 +342,7 @@ impl MTThreadInner {

 impl Drop for MTThreadInner {
     fn drop(&mut self) {
-        if let Some((_, loc_id)) = self.tracer {
-            // We were trying to trace something.
-            unsafe { Box::from_raw(loc_id) };
-        }
+        unsafe { Box::from_raw(self.tid) };
     }
 }
