Use-after-free when using a ThreadRng from a std::thread_local destructor #968

Closed
nathdobson opened this issue Apr 25, 2020 · 29 comments · Fixed by #1035
@nathdobson

A ThreadRng is just a raw pointer to the std::thread_local variable THREAD_RNG_KEY. If a client reuses a ThreadRng after THREAD_RNG_KEY is dropped (e.g. inside another std::thread_local destructor), this is undefined behavior.

On targets with thread local support in the linker, the memory is typically freed after all destructors are run, so the use-after-destroy is hard to detect. However, other targets destroy and free at the same time, causing a use-after-free. This test can reproduce the use-after-free on OSX.
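A minimal sketch of such a repro (an illustration, not the exact linked test): a second thread-local whose destructor calls thread_rng after THREAD_RNG_KEY may already have been dropped. Whether it actually triggers depends on the unspecified destructor order:

use rand::{thread_rng, Rng};

struct UseRngOnDrop;

impl Drop for UseRngOnDrop {
    fn drop(&mut self) {
        // If THREAD_RNG_KEY's destructor has already run, ThreadRng's raw
        // pointer dangles; on targets that free each thread-local as it is
        // destroyed, this access is a use-after-free.
        let x: u64 = thread_rng().gen();
        println!("random value in TLS destructor: {}", x);
    }
}

thread_local! {
    static GUARD: UseRngOnDrop = UseRngOnDrop;
}

fn main() {
    // Touch both thread-locals so both destructors run at thread exit.
    std::thread::spawn(|| {
        GUARD.with(|_| ());
        let _: u64 = thread_rng().gen();
    })
    .join()
    .unwrap();
}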

On targets that support it, THREAD_RNG_KEY could use the unstable #[thread_local] instead of std::thread_local. Other targets would probably have to use an Rc.

I'm happy to write a PR once we confirm the intended direction.

@josephlr
Member

@nathdobson thanks for pointing this out. We can't use #[thread_local] as that attribute is unstable and very few users of rand enable the nightly feature flag. Rc has the downside that the memory we allocate for every thread can potentially leak (as std::thread_local does not guarantee we will always run destructors).

After looking into the alternatives, I don't think we have a whole lot of options here. We either:

  • Say you shouldn't use a ThreadRng from (thread local) destructors.
  • Try to detect if you're using a ThreadRng from a thread local destructor and panic.
  • Use Rc where:
    thread_local!(
        static THREAD_RNG_KEY: Rc<ReseedingRng<Core, OsRng>> = { ... }
    );
    pub struct ThreadRng {
        rng: Rc<ReseedingRng<Core, OsRng>>,
    }
    pub fn thread_rng() -> ThreadRng {
        let rng = THREAD_RNG_KEY.with(|t| t.clone());
        ThreadRng { rng }
    }
    impl RngCore for ThreadRng {
        fn fill_bytes(&mut self, dest: &mut [u8]) {
            // NB: sketch only; getting &mut access through a plain Rc needs
            // interior mutability (e.g. RefCell, as in option (4) below).
            self.rng.fill_bytes(dest)
        }
    }
    • This makes ThreadRng no longer Copy, which is a breaking change.
    • This can leak the ReseedingRng if either:
      • the thread_local destructor is never run, or
      • any user leaks a ThreadRng.
    • Perf for this isn't too bad, as the deref is essentially free and clone just increments a count.

@burdges
Contributor

burdges commented Apr 26, 2020

You could add some Cell<usize> counter which avoids leaks. It'd still break ThreadRng: Copy unless you give ThreadRng a lifetime somehow.

We obviously drop THREAD_RNG_KEY here, since Rc gives use-after-free too, so a drop flag in THREAD_RNG_KEY suffices, while preserving ThreadRng: Copy.

use core::cell::UnsafeCell;
use core::marker::PhantomData;
use core::ptr::NonNull;
use secrecy::{Secret, ExposeSecret};  // Drop calls Zeroize but reimplement if you like

#[derive(Copy, Clone, Debug)]
pub struct ThreadRng( PhantomData<NonNull<ReseedingRng<Core, OsRng>>> );

thread_local! {
    static THREAD_RNG_KEY: (Secret<bool>, UnsafeCell<ReseedingRng<Core, OsRng>>) = (
        Secret::new(true),  // drop flag: zeroized to false when the key is dropped
        UnsafeCell::new({ ... }),
    );
}

pub fn thread_rng() -> ThreadRng { ThreadRng(PhantomData) }

fn real_thread_rng() -> NonNull<ReseedingRng<Core, OsRng>> {
    let raw = THREAD_RNG_KEY.with(|t| {
        assert!(*t.0.expose_secret());  // key not yet dropped
        t.1.get()
    });
    NonNull::new(raw).unwrap()
}

We reduce ThreadRng to zero bytes with this approach too. I'd prefer pub struct ThreadRng; here, but maybe users want to default-initialize with struct Foo<T, R = ThreadRng> { .. } anyway (sketched below), and maybe #![feature(negative_impls)] remains unstable even for autotraits?
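For reference, the default-type-parameter pattern mentioned above would look something like this (Foo is hypothetical):

use rand::rngs::ThreadRng;
use rand::thread_rng;

// Hypothetical: the RNG type parameter defaults to ThreadRng, so most
// users never have to name R explicitly.
struct Foo<T, R = ThreadRng> {
    value: T,
    rng: R,
}

impl<T> Foo<T> {
    fn new(value: T) -> Self {
        Foo { value, rng: thread_rng() }
    }
}

fn main() {
    let _f = Foo::new(42u32); // R defaults to ThreadRng
}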

@burdges
Contributor

burdges commented Apr 26, 2020

We'd still have a use-after-free on the Cell<usize> or Secret<bool> itself in both those approaches, oops.

@dhardy
Member

dhardy commented Apr 26, 2020

destroy and free at the same time

I wasn't aware that this was possible. Can you provide more context? (Is it not a bug of the platform implementation?) As @josephlr says, it sounds like we don't have a good option here.

@josephlr
Member

josephlr commented Apr 26, 2020

(Is it not a bug of the platform implementation?)

This was my initial reaction as well, and I don't think it is. If a thread has multiple thread-local objects, on thread termination you could either (a) run the destructor for every object and then free all of the thread-local memory at once, or (b) run the destructor for one object, free its memory, then destroy+free the next object, and so on.

Most platforms do (a), but this bug is easier to spot with (b). Either is fine, as Rust just says that the destructor runs and then the memory gets freed. With either (a) or (b) we could still hit this bug, as the thread-local RNG could get destroyed while other thread-local variables still have a handle to it. The std::thread_local implementation includes checks in LocalKey::with to handle this scenario.
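Those checks are observable via LocalKey::try_with, which returns an AccessError once the key's destructor has run, instead of handing out a dangling reference. A small standalone illustration (the Probe type is hypothetical; which arm runs depends on the unspecified destruction order):

use std::cell::Cell;

thread_local! {
    static KEY: Cell<u32> = Cell::new(0);
}

struct Probe;

impl Drop for Probe {
    fn drop(&mut self) {
        // If KEY was destroyed before PROBE, try_with reports an error
        // rather than yielding a reference to freed state.
        match KEY.try_with(|k| k.get()) {
            Ok(v) => println!("KEY still alive: {}", v),
            Err(e) => println!("KEY already destroyed: {}", e),
        }
    }
}

thread_local! {
    static PROBE: Probe = Probe;
}

fn main() {
    std::thread::spawn(|| {
        PROBE.with(|_| ());
        KEY.with(|k| k.set(1));
    })
    .join()
    .unwrap();
}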

One idea could be to just use LocalKey::with all the time, like:

thread_local!(
    static THREAD_RNG_KEY: RefCell<ReseedingRng<Core, OsRng>> = { ... }
);

// ThreadRng is zero-sized; the raw-pointer marker makes it !Send + !Sync
pub struct ThreadRng(PhantomData<*const ()>);

pub fn thread_rng() -> ThreadRng {
    ThreadRng(PhantomData)
}
impl RngCore for ThreadRng {
    fn fill_bytes(&mut self, dest: &mut [u8]) {
        THREAD_RNG_KEY.with(|rng| rng.borrow_mut().fill_bytes(dest))
    }
}

This is really simple, but might be less performant, as you have to do a thread-local lookup each time, instead of only once at ThreadRng creation time. We would need to look at benchmarks.

@bjorn3
Contributor

bjorn3 commented Apr 26, 2020

LLVM may be able to optimize the TLS use by merging multiple accesses together. When Rust was still using green threads, TLS access was problematic, as LLVM would sometimes keep a pointer alive across yield points that could potentially make the current function get executed on a different native thread.

@dhardy
Member

dhardy commented Apr 26, 2020

Thanks for the explanation @josephlr.

Try to detect if you're using a ThreadRng from a thread local destructor and panic.

Is there a reliable way to do this? If so, then simply disallowing usage in destructors with a check in debug code might be enough, otherwise I guess we need to use the Rc option.

Use-after-free is potentially a security issue. Is this exploitable? ChaChaRng reads from a buffer much of the time, so this could potentially be used to read from unrelated memory, but only if that memory is allocated between when the thread's memory is freed and when the destructor finishes, and only in-process. So the worst (non-crash) case is likely that the RNG returns an incorrect result?

@bjorn3
Contributor

bjorn3 commented Apr 26, 2020

ChaChaRng also writes to that buffer, right? That means that you can overwrite some unrelated memory by asking for a random number in a TLS destructor.

@burdges
Contributor

burdges commented Apr 26, 2020

It's exploitable: any transactional update jazz, or maybe some complex handshake code, could involve RAII guards that do ECDSA signatures in a destructor, so attackers could compromise secret keys if the PRNG lacks enough entropy. Avoid ECDSA etc., but yes, exploitable.

@bjorn3
Contributor

bjorn3 commented Apr 26, 2020

Wrong issue?

Edit: I now get what you meant.

@burdges
Contributor

burdges commented Apr 26, 2020

I think LocalKey::with looks heavily optimized already, so we've good odds it suffices, especially with @bjorn3's note. If you want faster, then add some guard type for those special cases:

#[derive(Copy, Clone, Debug)]
pub struct ThreadRng(PhantomData<*const ()>);

pub fn thread_rng() -> ThreadRng {
    ThreadRng(PhantomData)
}

thread_local! {
    static THREAD_RNG_KEY: RefCell<ReseedingRng<Core, OsRng>> = RefCell::new({
        ...
    });
}

impl ThreadRng {
    fn as_ptr(&self) -> NonNull<ReseedingRng<Core, OsRng>> {
        let raw = THREAD_RNG_KEY.with(|t| t.as_ptr());
        NonNull::new(raw).unwrap()
    }

    pub fn faster<'a>(&'a self) -> ThreadRngRef<'a> {
        // NB: sketch; this borrow cannot actually escape `with` (see below)
        let b = THREAD_RNG_KEY.with(|t| t.borrow());
        ThreadRngRef(b)
    }
}

#[derive(Debug)]
pub struct ThreadRngRef<'a>( cell::Ref<'a, ReseedingRng<Core, OsRng>> );

impl<'a> ThreadRngRef<'a> {
    fn as_ptr(&self) -> NonNull<ReseedingRng<Core, OsRng>> {
        NonNull::new(&*self.0 as *const _ as *mut _).unwrap()
    }
}

All their RngCore methods invoke ThreadRng::as_ptr or ThreadRngRef::as_ptr respectively, like now. We've no allocations that leak in this guard scheme. Almost all autotraits were disabled for cell::Ref, which presumably makes this safe. If however cell::Ref still causes use-after-free here, then you can achieve the same use-after-free without unsafe code: all our unsafe code promotes immutable references to mutable references, but we only require reads for exploitability by my ECDSA example.

@josephlr
Member

josephlr commented Apr 27, 2020

TL;DR: Using LocalKey::with every time has a ~16% performance cost in the worst case, even with UnsafeCell tricks. If we're willing to have ThreadRng not be Copy, we can get back to our previous performance.

I ran the thread_rng_u32 and thread_rng_u64 benchmarks against a bunch of implementations. Those two benchmarks represent the worst-case scenario, where we call RngCore methods in a tight loop. For the exact code used, see my fork, branches wip0 through wip8. All benchmarks were run on my x86_64 i7-7700K desktop. For all the trials/results, see this spreadsheet. Results are compared against the current implementation (branch wip0).
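The benchmarks in question are of roughly this shape (a sketch, not the exact wipN code; the nightly test harness and the RAND_BENCH_N loop count are assumptions modeled on rand's benches directory):

#![feature(test)]
extern crate test;

use rand::{thread_rng, RngCore};
use test::Bencher;

const RAND_BENCH_N: u64 = 1000;

// Worst-case shape: RngCore calls in a tight loop, so any per-call
// TLS-lookup overhead shows up directly in the numbers.
#[bench]
fn thread_rng_u32(b: &mut Bencher) {
    let mut rng = thread_rng();
    b.iter(|| {
        let mut accum = 0u32;
        for _ in 0..RAND_BENCH_N {
            accum = accum.wrapping_add(rng.next_u32());
        }
        accum
    });
    b.bytes = 4 * RAND_BENCH_N;
}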

(1): LocalKey::with + RefCell

  • Performance (see branch wip1):
    • thread_rng_u32: 37.9% slower
    • thread_rng_u64: 30.7% slower
  • No unsafe code
  • ThreadRng can be Copy
thread_local!(
    static THREAD_RNG_KEY: RefCell<ReseedingRng<Core, OsRng>> = { ... }
);

pub struct ThreadRng(PhantomData<*const ()>);
pub fn thread_rng() -> ThreadRng {
    ThreadRng(PhantomData)
}
impl RngCore for ThreadRng {
    #[inline(always)]
    fn next_u32(&mut self) -> u32 {
        THREAD_RNG_KEY.with(|rng| rng.borrow_mut().next_u32())
    }
}

(2): LocalKey::with + UnsafeCell + "inner" access

  • Performance (see branch wip2):
    • thread_rng_u32: 12.4% slower
    • thread_rng_u64: 29.3% slower
  • Needs unsafe code
  • ThreadRng can be Copy
  • Running the methods inside the LocalKey::with closure makes use-after-free impossible (but makes optimization harder).
thread_local!(
    static THREAD_RNG_KEY: UnsafeCell<ReseedingRng<Core, OsRng>> = { ... }
);

pub struct ThreadRng(PhantomData<*const ()>);
pub fn thread_rng() -> ThreadRng {
    ThreadRng(PhantomData)
}
impl RngCore for ThreadRng {
    #[inline(always)]
    fn next_u32(&mut self) -> u32 {
        THREAD_RNG_KEY.with(|rng| unsafe { &mut *rng.get() }.next_u32())
    }
}

(3): LocalKey::with + UnsafeCell + "outer" access

  • Performance (see branch wip3):
    • thread_rng_u32: 16.8% slower
    • thread_rng_u64: 16.4% slower
  • Needs unsafe code
  • ThreadRng can be Copy
  • Essentially the implementation @burdges proposed
  • This implementation has a risk that the THREAD_RNG_KEY destructor could be run between when the pointer is fetched and the method executed. However, this is extremely unlikely.
thread_local!(
    static THREAD_RNG_KEY: UnsafeCell<ReseedingRng<Core, OsRng>> = { ... }
);

pub struct ThreadRng(PhantomData<*const ()>);
pub fn thread_rng() -> ThreadRng {
    ThreadRng(PhantomData)
}
impl ThreadRng {
    #[inline(always)]
    fn rng(&mut self) -> &mut ReseedingRng<Core, OsRng> {
        let ptr = THREAD_RNG_KEY.with(|rng| rng.get());
        unsafe { &mut *ptr }
    }
}
impl RngCore for ThreadRng {
    #[inline(always)]
    fn next_u32(&mut self) -> u32 {
        self.rng().next_u32()
    }
}

(4): Rc<RefCell>

  • Performance (see branch wip4):
    • thread_rng_u32: 3.1% slower
    • thread_rng_u64: 4.7% slower
  • No unsafe code
  • ThreadRng cannot be Copy
  • ThreadRng destructors must be run or memory is leaked.
thread_local!(
    static THREAD_RNG_KEY: Rc<RefCell<ReseedingRng<Core, OsRng>>> = { ... }
);

pub struct ThreadRng {
    rng: Rc<RefCell<ReseedingRng<Core, OsRng>>>,
}
pub fn thread_rng() -> ThreadRng {
    ThreadRng { rng: THREAD_RNG_KEY.with(|rng| rng.clone()) }
}
impl RngCore for ThreadRng {
    #[inline(always)]
    fn next_u32(&mut self) -> u32 {
        self.rng.borrow_mut().next_u32()
    }
}

Using Weak references (see branch wip5) is slower and doesn't really get us anything.

(5): Rc<UnsafeCell>

  • Performance (see branch wip7): approx. the same speed
  • Needs unsafe code
  • ThreadRng cannot be Copy
  • ThreadRng destructors must be run or memory is leaked.
thread_local!(
    static THREAD_RNG_KEY: Rc<UnsafeCell<ReseedingRng<Core, OsRng>>> = { ... }
);

pub struct ThreadRng {
    rng: Rc<UnsafeCell<ReseedingRng<Core, OsRng>>>,
}
pub fn thread_rng() -> ThreadRng {
    ThreadRng { rng: THREAD_RNG_KEY.with(|rng| rng.clone()) }
}
impl ThreadRng {
    #[inline(always)]
    fn rng(&mut self) -> &mut ReseedingRng<Core, OsRng> {
        unsafe { &mut *self.rng.get() }
    }
}
impl RngCore for ThreadRng {
    #[inline(always)]
    fn next_u32(&mut self) -> u32 {
        self.rng().next_u32()
    }
}

Using Weak references (see branch wip6) is slower and doesn't really get us anything.

(6): Manual reference counting

  • Performance (see branch wip8): approx. the same speed
  • Needs unsafe code
  • ThreadRng cannot be Copy
  • Avoids memory allocation/leaking problems of (5)
struct RngKey {
    rng: ReseedingRng<Core, OsRng>,
    count: usize,
}
impl Drop for RngKey {
    fn drop(&mut self) {
        if self.count != 0 {
            panic!("thread_local RNG dropped with outstanding references")
        }
    }
}
thread_local!(
    static THREAD_RNG_KEY: UnsafeCell<RngKey> = { ... }
);

pub struct ThreadRng {
    rng: NonNull<RngKey>,
}
pub fn thread_rng() -> ThreadRng {
    let raw = THREAD_RNG_KEY.with(|t| t.get());
    let mut rng = NonNull::new(raw).unwrap();
    unsafe { rng.as_mut() }.count += 1;
    ThreadRng { rng }
}
impl Drop for ThreadRng {
    fn drop(&mut self) {
        unsafe { self.rng.as_mut()}.count -= 1;
    }
}
impl ThreadRng {
    #[inline(always)]
    fn rng(&mut self) -> &mut ReseedingRng<Core, OsRng> {
        unsafe { &mut self.rng.as_mut().rng }
    }
}
impl RngCore for ThreadRng {
    #[inline(always)]
    fn next_u32(&mut self) -> u32 {
        self.rng().next_u32()
    }
}

@josephlr
Member

josephlr commented Apr 27, 2020

Given the above results, my proposal would be:

  • Switch to implementation (3) right now, fixing this bug in a rand 0.7 point release.
  • For rand 0.8, switch to implementation (5) and make ThreadRng: !Copy.
    • If the memory allocation Rc does is an issue, we could switch to implementation (6).
    • Regardless of which implementation we choose for 0.8, we should stop having ThreadRng be Copy, allowing us to change the implementation in the future without breaking changes.

Does that sound reasonable to people?

@vks
Collaborator

vks commented Apr 27, 2020

Option (4) might be a good choice, too. The performance impact is moderate, and anyone needing more performance can (and probably should) use StdRng directly.

@dhardy
Member

dhardy commented Apr 27, 2020

Agreed. And the differences between (4) and (5) don't appear very significant.

Interesting point about inner-vs-outer access (2 vs 3). Assuming thread boundaries are respected, I think the only way code could be called between the rng accessor and next_* is (a) a recursive call from the RNG or (b) an interrupt, and neither of these can destruct a thread-local object?

If the memory allocation Rc does is an issue

There is a risk such an issue would go undetected until it is hard to fix. On the other hand, it seems (6) may cause a panic in the originally-reported behaviour only on some platforms, thus being a portability issue, so neither is perfect.

Are there any other reasons we should/should not allow use of thread_rng in a thread-local destructor?

@burdges
Contributor

burdges commented Apr 27, 2020

I'd kinda favor (6) partially because we could almost achieve (6) using cell::{RefCell,Ref} alone:

use core::cell::{RefCell, Ref};

type Inner = ReseedingRng<Core, OsRng>;

struct RngKey( RefCell<Inner> );

impl RngKey {
    fn static_borrow(&'static self) -> Ref<'static, Inner> { self.0.borrow() }
}

impl Drop for RngKey {
    fn drop(&mut self) {
        self.0.try_borrow_mut().expect("thread_local RNG dropped with outstanding references");
    }
}

thread_local! {
    static THREAD_RNG_KEY: RngKey = RngKey(RefCell::new({ ... }));
}

pub fn thread_rng() -> ThreadRng {
    ThreadRng(THREAD_RNG_KEY.with(RngKey::static_borrow))
}

#[derive(Debug)]
pub struct ThreadRng( Ref<'static, Inner> );

impl ThreadRng {
    #[inline(always)]
    fn rng(&mut self) -> &mut Inner {
        unsafe { &mut *(&*self.0 as *const Inner as *mut Inner) }
    }
}

If this worked, we'd require unsafe code only in ThreadRng::rng, so if this provokes use-after-free then similar safe code should do so, which would mean RefCell inside a LocalKey is broken.

I think this code fails only because LocalKey::with wants a closure with a free lifetime, which static_borrow cannot satisfy. We could however execute static_borrow without LocalKey to obtain Ref<'static, _>, so (6) enabling use-after-free would still mean RefCell enables use-after-free.

We give up all autotraits except Unpin under this RefCell scheme, which makes me worry that some of (6) et al. keep some autotraits incorrectly.

I'm unsure why Ref costs two usizes instead of only one (https://doc.rust-lang.org/src/core/cell.rs.html#1161-1164), so that may be worth understanding, but it's presumably avoidable via manual (6).

@nathdobson
Author

A possible issue with (6) is that panicking in a thread local destructor will abort on many targets (or at least OSX).
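That behavior is easy to demonstrate with a destructor that panics (a sketch; whether the process aborts is target-dependent):

struct PanicOnDrop;

impl Drop for PanicOnDrop {
    fn drop(&mut self) {
        // A panic unwinding out of a TLS destructor cannot be caught by the
        // terminating thread; on some targets (e.g. macOS, per the report
        // above) it aborts the whole process rather than just this thread.
        panic!("thread_local RNG dropped with outstanding references");
    }
}

thread_local! {
    static KEY: PanicOnDrop = PanicOnDrop;
}

fn main() {
    let _ = std::thread::spawn(|| KEY.with(|_| ())).join();
    println!("only reached if the panic did not abort the process");
}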

@burdges
Contributor

burdges commented Apr 27, 2020

We must choose between panic (1, 2, 3, 6) or leak (4, 5) under this scenario, right? I think panic provides a safer, more observable default.

If they happen, async destructors (poll_drop) should avoid both panics and leaks, btw.

@dhardy
Member

dhardy commented Apr 28, 2020

If they happen, async destructors (poll_drop) should avoid both panics and leaks, btw.

You mean via an impl like (6) but which does not return Ready until count == 0? This does depend on platforms supporting async destructors in thread-local state, and it could hang if any ThreadRng handles leak.

dhardy added a commit to dhardy/rand that referenced this issue Apr 28, 2020
@burdges
Contributor

burdges commented Apr 28, 2020

I created a branch in which to play with the pure RefCell trick, which errors as expected. I suspect this indicates some flaw in RefCell.

We can eliminate the first unsafe by using a Box<RefCell<..>> or other tricks, I think; not sure if 'static is even required. We cannot eliminate the second unsafe in rand per se, but we could mock up another trick that exploits this, like reading some string. At this point, if we remove the try_borrow_mut check then we seemingly have a use-after-free in RefCell, no? Is RefCell sound? What am I missing here?

@RalfJung
Contributor

RalfJung commented Jul 8, 2020

Switch to implementation (3) right now, fixing this bug in a rand 0.7 point release.

Could you explain in a bit more detail the failure case with (3), i.e. this:

This implementation has a risk that the THREAD_RNG_KEY destructor could be run between when the pointer is fetched and the method executed. However, this is extremely unlikely.

This sounds like a data race, shouldn't that be impossible with thread-local variables?

@nagisa
Contributor

nagisa commented Jul 19, 2020

Could you explain in a bit more detail the failure case with (3)

I’m also curious. As the sample implementation is written now, the example will just panic if thread_rng is no longer usable. From what I can tell there's no difference between implementations (3) and (2).

Unless the intent was to say that users can misuse the ThreadRng::rng method to obtain and retain the pointer?


In my opinion the possibility of a panic (from LocalKey::with) inside of a destructor is also a problem, but IIRC the rand APIs don't really allow for anything else. Could we perhaps fall back to something slower, like getrandom, if the TLS key can no longer be accessed?
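Such a fallback could hang off LocalKey::try_with; a sketch under the assumption of option (2)'s UnsafeCell key, with ChaCha20Rng standing in for ReseedingRng<Core, OsRng>:

use std::cell::UnsafeCell;

use rand::rngs::OsRng;
use rand::{RngCore, SeedableRng};
use rand_chacha::ChaCha20Rng;

thread_local! {
    // Stand-in for THREAD_RNG_KEY: UnsafeCell<ReseedingRng<Core, OsRng>>.
    static THREAD_RNG_KEY: UnsafeCell<ChaCha20Rng> =
        UnsafeCell::new(ChaCha20Rng::from_entropy());
}

fn fill_bytes_with_fallback(dest: &mut [u8]) {
    // try_with returns Err(AccessError) once the key has been destroyed;
    // fall back to the OS RNG instead of panicking inside someone else's
    // TLS destructor.
    THREAD_RNG_KEY
        .try_with(|rng| unsafe { (*rng.get()).fill_bytes(dest) })
        .unwrap_or_else(|_| OsRng.fill_bytes(dest));
}

fn main() {
    let mut buf = [0u8; 16];
    fill_bytes_with_fallback(&mut buf);
    println!("{:?}", buf);
}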

@nagisa
Contributor

nagisa commented Jul 19, 2020

It is not clear to me that (6) is a viable solution. mem::forget is safe; if somebody had a good reason to forget a ThreadRng, they would end up with a panic in the TLS destructor, which can possibly abort the entire program for no good reason.

In particular, I think something along the lines of the following snippet is fairly plausible:

let thread_rng = Box::new(thread_rng());
let ptr = Box::into_raw(thread_rng);
// Something here could panic; if it does, you end up with a leaked reference.
// Eventually, on thread termination, the TLS destructor may run and may abort
// if the thread termination is due to this or a different panic.
let thread_rng = unsafe { Box::from_raw(ptr) };
// carry on...

@dhardy
Member

dhardy commented Jul 20, 2020

@RalfJung I thought running dtors & freeing TLS at the same time was impossible, but apparently not (see the top of this issue). But I expect you know better than I do on this topic.

@nagisa given how thread_local works, I agree, there wouldn't appear to be a difference between (2) and (3).

Good point that we shouldn't assume a ThreadRng handle won't leak.


I guess this means that the best option is still (3) for a 0.7 patch release and (5) for 0.8.

@RalfJung
Contributor

I thought running dtors & freeing TLS at the same time was impossible, but apparently not (see the top of this issue). But I expect you know better than I do on this topic.

I am not very knowledgeable on TLS things, I just looked a bit at the APIs when they came up for Miri.^^ I was just curious what kind of code flow you are imagining.

If I understand correctly, everything discussed here is actually layered on top of the thread_local! macro in libstd? In that case you should only rely on what is actually documented for its behavior, and request clarification / stronger guarantees where needed.

Right now, the docs don't give any indication for when it is legal to "leak" the TLS pointer out of LocalKey::(try_)with, which implies that it is never legal. It would help a lot if you could try to precisely summarize the guarantees you need from libstd, and then discuss with T-libs whether those guarantees can be added to the docs.

@dhardy
Member

dhardy commented Jul 31, 2020

Let's evaluate any safety issues again.

Options (1) and (4) do not use unsafe.

All other options use unsafe to cast the result of UnsafeCell::get. This method has the constraint that when casting to &mut T, access must be unique: this implies that the implementation of ReseedingRng<Core, OsRng> must never call thread_rng (recursion). We control all applicable code, so we can tolerate this "safety leakage".
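Concretely, the uniqueness requirement rules out the following shape (a hypothetical minimal sketch of the hazard, not rand code):

use std::cell::UnsafeCell;

thread_local! {
    static KEY: UnsafeCell<u64> = UnsafeCell::new(0);
}

fn next() -> u64 {
    KEY.with(|cell| {
        let state = unsafe { &mut *cell.get() }; // first &mut into the cell
        // If the code below called back into next() (the analogue of
        // ReseedingRng calling thread_rng), a second &mut to the same cell
        // would be created while `state` is live: aliasing mutable
        // references, which is UB even on a single thread.
        *state = state.wrapping_add(1);
        *state
    })
}

fn main() {
    println!("{}", next());
}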

The current code and options (3) and (5) all leak a reference to the object handed out by LocalKey::with. This violates the following constraint from the LocalKey documentation:

The with method yields a reference to the contained value which cannot be sent across threads or escape the given closure.

As @RalfJung says we could try to clarify this with the libs team. Alternatively we may as well simply choose (2) over (3) and a similar variation on (5).

Option (6) has a serious issue.


I think that's everything?

In that case, I believe our options are:

  1. Clarify with the libs team about the current code and about option (3) which leak a reference from LocalKey::with.
  2. Aim to use the thread_local attribute — but there is no guarantee this will ever be supported.
  3. Use option (1) or preferably (2) for rand 0.7.x, with additional options (4) or a variant on (5) for version 0.8.

I'm inclined to take the paths of least resistance: (2) for 0.7 and either (4) or a variant of (5) for 0.8. I believe we can create PRs now?

@dhardy
Member

dhardy commented Sep 2, 2020

Fixed in the master branch. Now, the question is do we fix for 0.7 too? I'm inclined not to (since the perf hit may be more significant than the issue itself and 0.8 should not be too long now).

@vks
Collaborator

vks commented Apr 22, 2021

@dhardy Can we close this now that 0.8 is out? Should we file a RustSec advisory?

@dhardy
Member

dhardy commented Apr 22, 2021

Yes, we should close it. I don't think there is any exploit to report.
