deps: use `heapless::Vec` for fixed-capacity `Vec` #24
base: main
Conversation
Switches to the actively maintained `heapless` crate for a `no_std`-compatible, fixed-capacity `Vec` implementation.
What are the concrete advantages of using `heapless`? I see that […]. This would somewhat bloat the Firefox and Servo builds, for example, which already have other dependencies on […]. I'm not opposed to this change, especially if lack of maintenance of […] is a concern.
My main thoughts are around […].

I agree that the dependency graph and build times increasing that much adds a burden. Considering that […]:

```rust
pub struct LruVec<T, const N: usize> {
    arr: [T; N],
}

impl<T, const N: usize> LruVec<T, N> {
    // ...
}
```

At first glance, there are only a handful of methods to implement. I can look into doing the work. Is this something you would like? It would have the benefits of dropping a dependency and likely decreasing compile times. A possible downside would be if other parts of Servo/Firefox wanted to use […].
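To illustrate how small that handful of methods could be, here is a minimal sketch of such a fixed-capacity vector. The `FixedVec` name, the `T: Copy + Default` bounds (used so the backing array can be trivially initialized without `unsafe`), and the exact method set are assumptions for illustration; a production implementation like `heapless::Vec` uses `MaybeUninit` internally to drop those bounds.

```rust
/// A minimal fixed-capacity vector sketch (hypothetical; not the actual
/// implementation proposed in this PR). `T: Copy + Default` keeps the
/// backing array trivially initialized and avoids any `unsafe` code.
pub struct FixedVec<T, const N: usize> {
    arr: [T; N],
    len: usize,
}

impl<T: Copy + Default, const N: usize> FixedVec<T, N> {
    pub fn new() -> Self {
        Self { arr: [T::default(); N], len: 0 }
    }

    /// Appends an element, handing it back on overflow instead of panicking,
    /// mirroring the fallible `push` style of `heapless::Vec`.
    pub fn push(&mut self, value: T) -> Result<(), T> {
        if self.len == N {
            return Err(value);
        }
        self.arr[self.len] = value;
        self.len += 1;
        Ok(())
    }

    pub fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            None
        } else {
            self.len -= 1;
            Some(self.arr[self.len])
        }
    }

    pub fn len(&self) -> usize {
        self.len
    }

    pub fn is_empty(&self) -> bool {
        self.len == 0
    }
}

fn main() {
    let mut v: FixedVec<u32, 2> = FixedVec::new();
    assert_eq!(v.push(1), Ok(()));
    assert_eq!(v.push(2), Ok(()));
    assert_eq!(v.push(3), Err(3)); // capacity reached, value handed back
    assert_eq!(v.pop(), Some(2));
    assert_eq!(v.len(), 1);
}
```

The fallible `push` signature is what makes this usable in `no_std` contexts: overflow is an ordinary `Result` rather than an allocation or a panic.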
Another option is to use optional dependencies to let users choose between the two implementations.
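A sketch of what that could look like in `Cargo.toml`, assuming a hypothetical `heapless-vec` feature name (and a `heapless` version picked only for illustration): the external crate is pulled in only when the feature is enabled, and the crate falls back to a hand-rolled type otherwise via `#[cfg(feature = "heapless-vec")]`.

```toml
# Hypothetical feature layout for choosing the Vec backend.
[dependencies]
heapless = { version = "0.8", optional = true }

[features]
default = []                        # hand-rolled fixed-capacity vector
heapless-vec = ["dep:heapless"]     # opt in to the external crate
```

This keeps the default dependency graph empty while still letting builds that already depend on `heapless` share the implementation.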
This might be the best compromise. Also, the only required dependency in […]. Even when compiling […].
The increase in compile time is small in absolute terms (less than 1 second of wall clock time), but it illustrates the change in the total amount of code downloaded and compiled.
I honestly think that is worth it for the advantage of using a well-maintained dependency. There may also be some possibility of making […]. I'm not familiar enough with the internals of […].