Add some basic examples #2

Closed

wants to merge 8 commits into from

Conversation

@arlyon (Contributor) commented Jul 14, 2022

This adds prelude-free examples for cpp using your local toolchain, and for go using a precompiled toolchain with a particular version / architecture. I have not had a chance to confirm whether multiple arches work correctly, but may attempt to do so on the M1 macOS machine.

There are some more examples that have not been included in this PR directly, such as a semi-functional cpp toolchain example using LLVM precompiled releases and a cpp static-linking example that needs some more polish.

  • cpp binary
  • cpp library
  • cpp static linked binary (using the lib)
  • cpp toolchain
  • go toolchain
  • rustc

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jul 14, 2022
```
compiler_src = cmd_args(download[0].default_outputs[0], format = "{}/go/bin/go")
ctx.actions.run(["ln", "-srf", compiler_src, compiler_dst.as_output()], category = "cp_compiler")

# ctx.actions.symlink_file(compiler_dst, compiler_src)
```
@ndmitchell I couldn't get it to symlink files so I just set it up to use the underlying OS ln.

@facebook-github-bot

@arlyon has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.


@arlyon arlyon deleted the feat/examples branch July 28, 2022 09:10
facebook-github-bot pushed a commit that referenced this pull request Aug 16, 2022
Summary:
Users can now enable incremental compilation by running:
```
buck2 build -c rust.incremental=relative/path/to/target <TARGET>
```
This will generate an `incremental/<BUILD_MODE>/` directory which can be reused between builds. Since `rustc` is quite opinionated about incremental artifact hashing, file naming and placement, we enforce incremental compilation to run locally by skipping RE.

## Benchmarking (quick)

From a few local experiments running builds with and without incremental compilation after making minor code changes, I see a *>80%* reduction in rebuild times when using incremental compilation:

**example #1**: `//common/rust/tools/rust-expand:rust-expand`
```
|---------------------+-------------------+-------------------------|
| Build type          | 1st Build (clean) | 2nd Build (with change) |
|---------------------+-------------------+-------------------------|
| Without incremental | 9.2sec            | 14.2sec                 |
| With incremental    | 15.2sec           | 2.1sec                  |
|---------------------+-------------------+-------------------------|
```

**example #2**: `//common/rust/tools/rustfix2:buck`
```
|---------------------+-------------------+-------------------------|
| Build type          | 1st Build (clean) | 2nd Build (with change) |
|---------------------+-------------------+-------------------------|
| Without incremental | 7.6sec            | 3.4sec                  |
| With incremental    | 9sec              | 0.72sec                 |
|---------------------+-------------------+-------------------------|
```

For both examples above, 1st build gets run & timed like so:
```
for i in {1..5}; do buck2 clean && time buck2 build --show-full-output //common/rust/tools/rustfix2:buck; done
```
and 2nd build gets run & timed like so:
```
cd ~/fbcode
for i in {1..5}; do sed -i 's/println!("some some something 1");/&\n    println!("some some something 2");/' common/rust/tools/rustfix2/src/buck.rs && time buck2 build --show-full-output //common/rust/tools/rustfix2:buck; done
```
adding and removing `-c rust.incremental` to generate results with and without incremental compilation.

Reviewed By: krallin

Differential Revision: D38374692

fbshipit-source-id: 0f7412249da6e3dad2a381f0945e8cc8f9412065
facebook-github-bot pushed a commit that referenced this pull request Oct 13, 2022
…ingle file when the archive has no bitcode object.

Summary:
The current opt step checks whether each object in an archive is bitcode, then:
1. If it is, runs an actual opt to generate an ELF.
2. If it's not bitcode (already an ELF), triggers a copy action to copy the file to the destination.

In the case of a prebuilt library, since all the objects are already ELF, it's more efficient to just copy the folder than to do #2.

This change brings down the total number of actions from 186.5k -> 144k (22.7%) for unicorn:index_server
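The per-object dispatch described above can be sketched as follows (a toy model with hypothetical helper callbacks, not the actual buck2 rule code): when no object in the archive is bitcode, a single directory copy replaces one copy action per object.

```python
def process_archive(objects, is_bitcode, run_opt, copy_file, copy_dir, archive_dir):
    """Return the list of actions needed to 'opt' one archive."""
    # Prebuilt-library fast path: nothing is bitcode, so copying the
    # whole folder replaces one copy action per object.
    if not any(is_bitcode(o) for o in objects):
        return [copy_dir(archive_dir)]
    actions = []
    for obj in objects:
        if is_bitcode(obj):
            # 1. Bitcode: run an actual opt to generate an ELF.
            actions.append(run_opt(obj))
        else:
            # 2. Already an ELF: just copy it to the destination.
            actions.append(copy_file(obj))
    return actions
```

With a mixed archive this emits one action per object; with an all-ELF archive it emits exactly one action, which is where the action-count reduction comes from.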

{F780561315}

Reviewed By: christycylee

Differential Revision: D40319823

fbshipit-source-id: 4650cf8f6ef5170d2c6fb4d456e7042de77a4f2f
facebook-github-bot pushed a commit that referenced this pull request Dec 13, 2022
Summary:
Going from this stack trace: P570153247. Relevant threads are:

```
Thread 31 (LWP 16695):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow ()
#2  0x000000000808cc3e in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map ()
#3  0x00000000082c49fe in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll ()

Thread 29 (LWP 16693):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow ()
#2  0x000000000808a737 in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::eval_projection_task ()
#3  0x00000000082c4ae3 in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll ()

Thread 19 (LWP 16683):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow ()
#2  0x0000000008093a3b in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>> as dice::introspection::graph::EngineForIntrospection>::currently_running_key_count ()

Thread 15 (LWP 16679):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow ()
#2  0x000000000808cc3e in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map ()
#3  0x00000000082c49fe in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll ()

Thread 3 (LWP 16667):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x0000000008762aac in <parking_lot::raw_rwlock::RawRwLock>::wait_for_readers ()
#2  0x0000000008760f13 in <parking_lot::raw_rwlock::RawRwLock>::lock_exclusive_slow ()
#3  0x000000000808cc64 in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map ()
#4  0x00000000082c49fe in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll ()
```

The comments in the code suggest that releasing an entry will release the
lock, but it's not clear to me that this is actually supposed to work (the map
itself is still held; you could ask it for another entry).

That said, it appears pretty clear from thread 19, which is just
running the introspection task, that the map as a whole is locked.

This patch updates the code to make it extremely clear that the
currently_running guard is indeed released when we call the projection task,
since it's in its own scope.

It's a bit unclear to me exactly where the deadlock is. I suspect it happens
because `currently_running` is locked when we enter `eval_projection_task`,
which means that this function cannot acquire the lock again here:

```
        let sent = tx.send(node.dupe());
        assert!(sent.is_ok(), "receiver is still alive");

        if let Some(running_map) = self
            .currently_running
            .read()
            .get(&transaction_ctx.get_version())
        {
            let removed = running_map.remove(k);
            assert!(removed.is_some());
        }

```

That seems pretty clearly wrong, but the bit that's unclear to me is
how this can ever work.

Still theorizing, I suspect the reason is:

Normally, it works, because:

- eval_projection_versioned takes a read lock
- eval_projection_task also takes a read lock

But, if you ever get concurrent commands running, what will happen is:

- Thread 1: eval_projection_versioned takes a read lock
- Thread 2: attempts to take a write lock
- Thread 1: eval_projection_task also takes a read lock, can't have it because we block new read locks when a writer is waiting.

The reason I suspect I'm right is this thread, which is indeed attempting to acquire a write lock:

```
#1  0x0000000008762aac in <parking_lot::raw_rwlock::RawRwLock>::wait_for_readers ()
#2  0x0000000008760f13 in <parking_lot::raw_rwlock::RawRwLock>::lock_exclusive_slow ()
#3  0x000000000808cc64 in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map ()

```
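The shape of the bug — calling into a task while still holding the lock it will re-acquire — can be sketched with a plain non-reentrant `threading.Lock` in Python (an illustrative analogy only: a plain lock always blocks on re-acquisition, whereas the `RwLock` here only blocks a second read when a writer is waiting):

```python
import threading

lock = threading.Lock()

def eval_projection_task():
    # The task internally takes the lock again (the "second read lock").
    # A short timeout stands in for "blocks forever" so the sketch terminates.
    acquired = lock.acquire(timeout=0.1)
    if acquired:
        lock.release()
    return acquired

def broken_caller():
    # Holds the guard across the call, like the original code path: deadlock.
    with lock:
        return eval_projection_task()

def fixed_caller():
    # The patch's shape: the guard lives in its own scope and is
    # released before the task runs.
    with lock:
        pass  # guard dropped here
    return eval_projection_task()
```

`broken_caller()` times out (returns `False`) while `fixed_caller()` succeeds, mirroring why scoping the `currently_running` guard before calling the projection task removes the deadlock.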

Reviewed By: ndmitchell

Differential Revision: D41996701

fbshipit-source-id: ba00e1e1052272ddd1a44b6c4b9d8bb924043ebb
facebook-github-bot pushed a commit that referenced this pull request Feb 2, 2023
Summary:
Attempt #2:

Refactor a lot of this to reuse `get_configured_node` already provided by the `DiceComputations`. `get()` for a `TargetExpr<ConfiguredTargetNode>` will now return a `Vec<anyhow::Result<MaybeCompatible<ConfiguredTargetNode>>>`, and all callers of this `get()` will need to call `filter_incompatible` on the result in order to get the `TargetSet`. cc bobyangyf

Previous summary:

This diff fixes the issue mentioned in T142816564, [this post](https://fb.workplace.com/groups/617497306123691/posts/666171784589576), and [this post](https://fb.workplace.com/groups/buck2appledogfooding/posts/683405846449916/?comment_id=683948533062314&reply_comment_id=727320665391767).

Reviewed By: bobyangyf

Differential Revision: D42923562

fbshipit-source-id: c3ea50b1cbc6b299f3c4bcca2be8551debf65d3f
facebook-github-bot pushed a commit that referenced this pull request Mar 16, 2023
Summary:
**Context**

Supporting fat and non-fat toolchains becomes problematic because a fat toolchain usually combines tools using `command_alias()`. So if we set the platform constraints for the individual tools (which we need, so that the thin toolchain gets the correct platform), we end up producing conflicting constraints.

```
   ┌─────────────────┐          ┌───────────────┐
   │ linux-toolchain │          │ fat-toolchain │
   └─────────────────┘          └───────────────┘
            │                           │
            │                           │
            │                           │
            │                           │
"tool" = attrs.exec_dep()   "tool" = attrs.exec_dep()
            │                           │
            │                           │
            │                           │
            │                           │
            │                           │
            │                           │
            │                           ▼
            │          ┌────────────────────────────────┐
            │          │command_alias(name = "fat-tool")│
            │          └────────────────────────────────┘
            │                           │
            │                      platform_exe
            │                           │
            │                           │
            │         ┌─────────────────┴─────────────────────┐
            │         │                                       │
            │         │                                       │
            │         │                                       │
            │         │                                       │
            ▼         ▼                                       ▼
     ┌─────────────────────────────────┐    ┌───────────────────────────────────┐
     │sh_binary(name="linux-only-tool")│    │sh_binary(name="windows-only-tool")│
     └─────────────────────────────────┘    └───────────────────────────────────┘
```

If we were to set the `target_compatible_with` constraints on `linux-only-tool` and `windows-only-tool`, then `fat-tool` is not resolved to any exec platform, as no exec platform satisfies *both* Windows and Linux.
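The conflict can be modeled as set intersection (a toy model of constraint resolution, not Buck's actual resolver): each dependency's constraints restrict the candidate exec platforms, and the fat tool's combined dependencies leave nothing.

```python
def resolve_exec_platform(platforms, dep_constraints):
    """Keep only the platforms satisfying every dependency's constraint set."""
    return [p for p in platforms
            if all(p in allowed for allowed in dep_constraints)]

platforms = ["linux-x86_64", "windows-x86_64"]

# A thin toolchain's tool constrains to a single platform and resolves fine:
thin = resolve_exec_platform(platforms, [{"linux-x86_64"}])

# fat-tool transitively needs linux-only-tool AND windows-only-tool,
# so the intersection of viable exec platforms is empty:
fat = resolve_exec_platform(platforms, [{"linux-x86_64"}, {"windows-x86_64"}])
```

Here `thin` resolves to the Linux platform while `fat` is empty, which is the "not resolved to any exec platform" failure described above.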

**Solution/Workaround**

There are two solutions to the problem:

1. Set up mirrored aliases of all underlying tools
2. Set up a placeholder tool on `(apple|cxx|swift)_toolchain()` for the purpose of setting exec platform constraints

We opt for solution #2 as it's much simpler and does not require extensive changes to existing toolchain setup. Since the tools are optional, they can be unused without any downsides.

Reviewed By: rmaz

Differential Revision: D44090642

fbshipit-source-id: 399cfbfdf4dcbb4b334e22545cfea5e86e31c837
diliop added a commit to diliop/buck2 that referenced this pull request Jun 10, 2023
facebook-github-bot pushed a commit that referenced this pull request Jun 10, 2023
Summary: Upgrading nom from `6.1.2` to `7.1.3` to accommodate a new nightly version

Reviewed By: dtolnay

Differential Revision: D46619142

fbshipit-source-id: 8b315360692079fce10e134d6de65eece4306ea1
facebook-github-bot pushed a commit that referenced this pull request Jun 15, 2023
Summary:
The existing `_get_argsfile_output` does two things:
1. creates the argsfile subtarget DefaultInfo provider
2. sets the `by_ext` attribute for use in xcode_data

Up the stack, #2 is replaced by the `CompileArgsfiles` record which contains a map of extension to `CompileArgsfile`.

Extracting out #1 allows us to restructure the code (later on) to
1. define the argsfile subtarget, DefaultInfo and action only for targets that want it
2. extract it out of compilation so that compilation code is simpler

Reviewed By: milend

Differential Revision: D46743454

fbshipit-source-id: 31a108410e49fb85851d91334ed598a10731e7d9
facebook-github-bot pushed a commit that referenced this pull request Aug 7, 2023
Summary:
This is another version of D47988498, with the minor change that fixes the bug that broke a few of our tests.

The problem was that in the `haskell_ghci` rule, we were generating directory names using the suffix based on the profiling attribute of the `haskell_ghci` target, instead of the one considered when building the actual library.

This meant that the interface files were not found for some packages and their modules couldn't be imported.
---------
# Summary from D47988498

## This stack
See D47987891.

## This diff

Change the names of generated artifacts (e.g. package directories, lib names) to consider the status of profiling.

This is needed to (a) make it clear for every artifact whether it was built with or without profiling and (b) to fix buck2 errors from declaring outputs with the same name or running actions with the same category name (e.g. "haskell_link_...").

Reviewed By: helfper

Differential Revision: D48110784

fbshipit-source-id: e12ec14aacb406d2334bfcd22e5ba81b82c847eb
facebook-github-bot pushed a commit that referenced this pull request Oct 9, 2023
Summary: I convinced myself that creating a link to the ocaml standard library directory, as extracted from `ocamlopt.opt -config`, was necessary and that we can't just assume `$OPAM_SWITCH/lib/ocaml`, but I have failed to prove that to myself today. Together with ocaml-scripts issue [#2](facebook/ocaml-scripts#2) and similar past questions from vsiles, I'm motivated to attempt to remove it so that we create but one link into .opam.

Differential Revision: D50082445
facebook-github-bot pushed a commit that referenced this pull request Oct 9, 2023
Summary:
Pull Request resolved: #442

I convinced myself that creating a link to the ocaml standard library directory, as extracted from `ocamlopt.opt -config`, was necessary and that we can't just assume `$OPAM_SWITCH/lib/ocaml`, but I have failed to prove that to myself today. Together with ocaml-scripts issue [#2](facebook/ocaml-scripts#2) and similar past questions from vsiles, I'm motivated to attempt to remove it so that we create but one link into .opam.

Reviewed By: vsiles

Differential Revision: D50082445

fbshipit-source-id: e46f2515c8c436cc5291550f6453f200defe0521
facebook-github-bot pushed a commit that referenced this pull request Feb 15, 2024
Summary:
The stack around D42099161 introduced the ability to dynamically set the log level via a debug-command. This requires using `tracing_subscriber::reload::Layer`. The implementation of that machinery is backed by a `RwLock` [here](https://fburl.com/code/4xv0ihpn). This `RwLock` is read-locked once on every single `#[instrument]` call to check whether the instrumentation is active or not.

On top of that, something in our fbsource third-party config enables the `parking_lot` feature of `tracing_subscriber`. This means that this is a `parking_lot::RwLock`, not a `std::sync::RwLock`, which is bad because it's well known that parking lot has problems with excessive spinning.

What all that adds up to is that when you put `#[instrument]` on a super hot function in dice, you get dozens of threads that are all simultaneously doing this:

```
  thread #23, name = 'buck2-rt', stop reason = signal SIGSTOP
    frame #0: 0x000000000c94a1fa buck2`<parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow + 122
    frame #1: 0x0000000007a80b54 buck2`<tracing_subscriber::layer::layered::Layered<tracing_subscriber::reload::Layer<tracing_subscriber::filter::layer_filters::Filtered<tracing_subscriber::fmt::fmt_layer::Layer<tracing_subscriber::registry::sharded::Registry, tracing_subscriber::fmt::format::DefaultFields, tracing_subscriber::fmt::format::Format, std::io::stdio::stderr>, tracing_subscriber::filter::env::EnvFilter, tracing_subscriber::registry::sharded::Registry>, tracing_subscriber::registry::sharded::Registry>, tracing_subscriber::registry::sharded::Registry> as tracing_core::subscriber::Subscriber>::enabled + 292
    frame #2: 0x000000000a172d77 buck2`<dice::legacy::incremental::dep_trackers::internals::Dep<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>> as dice::legacy::incremental::graph::dependencies::Dependency>::recompute::{closure#0} + 183
    frame #3: 0x000000000a03606c buck2`<futures_util::stream::futures_unordered::FuturesUnordered<core::pin::Pin<alloc::boxed::Box<dyn core::future::future::Future<Output = core::result::Result<(alloc::boxed::Box<dyn dice::legacy::incremental::graph::dependencies::ComputedDependency>, alloc::sync::Arc<dyn dice::legacy::incremental::graph::GraphNodeDyn>), dice::api::error::DiceError>> + core::marker::Send>>> as futures_util::stream::stream::StreamExt>::poll_next_unpin + 444
    frame #4: 0x0000000009fc4755 buck2`<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::compute_whether_dependencies_changed::{closure#0} (.llvm.4229350879637282184) + 1733
    frame #5: 0x0000000009ef30d4 buck2`<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::compute_whether_versioned_dependencies_changed::{closure#0}::{closure#0} + 228
    frame #6: 0x0000000009f194ec buck2`<buck2_futures::cancellable_future::CancellableFuture<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}> as core::future::future::Future>::poll (.llvm.11181184606289051452) + 3500
    frame #7: 0x0000000009f04bbf buck2`<futures_util::future::future::map::Map<buck2_futures::cancellable_future::CancellableFuture<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}>, buck2_futures::spawn::spawn_inner<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}, dice::api::user_data::UserComputationData, futures_util::future::ready::Ready<()>>::{closure#0}> as core::future::future::Future>::poll + 31
    frame #8: 0x0000000009ff0339 buck2`<futures_util::future::future::flatten::Flatten<futures_util::future::future::Map<futures_util::future::ready::Ready<()>, buck2_futures::spawn::spawn_inner<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}, dice::api::user_data::UserComputationData, futures_util::future::ready::Ready<()>>::{closure#1}>, <futures_util::future::future::Map<futures_util::future::ready::Ready<()>, buck2_futures::spawn::spawn_inner<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}, dice::api::user_data::UserComputationData, futures_util::future::ready::Ready<()>>::{closure#1}> as core::future::future::Future>::Output> as core::future::future::Future>::poll + 201
    frame #9: 0x00000000093f9f22 buck2`<tokio::task::task_local::TaskLocalFuture<buck2_events::dispatch::EventDispatcher, core::pin::Pin<alloc::boxed::Box<dyn core::future::future::Future<Output = alloc::boxed::Box<dyn core::any::Any + core::marker::Send>> + core::marker::Send>>> as core::future::future::Future>::poll + 130
    frame #10: 0x000000000939fdcb buck2`<tracing::instrument::Instrumented<<buck2_build_api::spawner::BuckSpawner as buck2_futures::spawner::Spawner<dice::api::user_data::UserComputationData>>::spawn::{closure#0}> as core::future::future::Future>::poll + 139
    frame #11: 0x00000000091ca5a9 buck2`<tokio::runtime::task::core::Core<tracing::instrument::Instrumented<<buck2_build_api::spawner::BuckSpawner as buck2_futures::spawner::Spawner<dice::api::user_data::UserComputationData>>::spawn::{closure#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle>>>::poll + 169
    frame #12: 0x00000000091c1b22 buck2`<tokio::runtime::task::harness::Harness<tracing::instrument::Instrumented<<buck2_build_api::spawner::BuckSpawner as buck2_futures::spawner::Spawner<dice::api::user_data::UserComputationData>>::spawn::{closure#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle>>>::poll + 146
    frame #13: 0x000000000c920f6f buck2`<tokio::runtime::scheduler::multi_thread::worker::Context>::run_task + 895
    frame #14: 0x000000000c91f847 buck2`<tokio::runtime::scheduler::multi_thread::worker::Context>::run + 2103
    frame #15: 0x000000000c932264 buck2`<tokio::runtime::context::scoped::Scoped<tokio::runtime::scheduler::Context>>::set::<tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure#0}, ()> + 52
    frame #16: 0x000000000c932169 buck2`tokio::runtime::context::runtime::enter_runtime::<tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}, ()> + 441
    frame #17: 0x000000000c91efa6 buck2`tokio::runtime::scheduler::multi_thread::worker::run + 70
    frame #18: 0x000000000c906a50 buck2`<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>> as core::future::future::Future>::poll + 160
    frame #19: 0x000000000c8f8af9 buck2`<tokio::loom::std::unsafe_cell::UnsafeCell<tokio::runtime::task::core::Stage<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>>>>::with_mut::<core::task::poll::Poll<()>, <tokio::runtime::task::core::Core<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>, tokio::runtime::blocking::schedule::BlockingSchedule>>::poll::{closure#0}> + 153
    frame #20: 0x000000000c90166b buck2`<tokio::runtime::task::core::Core<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>, tokio::runtime::blocking::schedule::BlockingSchedule>>::poll + 43
    frame #21: 0x000000000c90d9b8 buck2`<tokio::runtime::task::harness::Harness<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>, tokio::runtime::blocking::schedule::BlockingSchedule>>::poll + 152
    frame #22: 0x000000000c90b848 buck2`<tokio::runtime::blocking::pool::Inner>::run + 216
    frame #23: 0x000000000c9002ab buck2`std::sys_common::backtrace::__rust_begin_short_backtrace::<<tokio::runtime::blocking::pool::Spawner>::spawn_thread::{closure#0}, ()> + 187
    frame #24: 0x000000000c90042e buck2`<<std::thread::Builder>::spawn_unchecked_<<tokio::runtime::blocking::pool::Spawner>::spawn_thread::{closure#0}, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 158
    frame #25: 0x000000000757e4b5 buck2`std::sys::unix::thread::Thread::new::thread_start::h618b0b8e7bda0b2b + 37
    frame #26: 0x00007f2ba1e9abaf
```

This hit an open source user super hard over in #555.

This diff approaches this issue by putting all the `#[instrument]`s in dice behind `#[cfg(debug_assertions)]`. This allows them to still be used in debug mode, but keeps them out of any hot paths. I do not have numbers to show that most of these matter. The only one I know matters for sure is `recompute`.
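The gating pattern, transposed to Python for illustration (my own sketch; the actual change is `#[cfg(debug_assertions)]` on Rust `#[instrument]` attributes): in release mode the instrumentation decorator is a no-op, so hot paths touch no shared state per call.

```python
import functools

DEBUG = True  # stand-in for the build mode, like Rust's cfg(debug_assertions)

def instrument(fn):
    # Release build: return the function untouched -- no wrapper, and no
    # shared lock taken on every call in a hot path.
    if not DEBUG:
        return fn

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1  # stand-in for tracing-span bookkeeping
        return fn(*args, **kwargs)

    wrapper.calls = 0
    return wrapper

@instrument
def recompute(x):
    # Imagine this is the hot dice function named above.
    return x + 1
```

With `DEBUG = True` the wrapper counts calls; with `DEBUG = False` the decorator costs nothing, which is the trade-off the diff makes for the dice hot paths.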

Alternatives are to either try and tailor this to the callsites we know are hot (that sounds error prone) or to revert the stack that inadvertently introduced the RwLock. Even if we did that though, it's probably still safer to keep `#[instrument]` out of super hot paths like this.

Reviewed By: stepancheg

Differential Revision: D53744041

fbshipit-source-id: 85bce9af2fec8ad1d50adc241d3b8e2bfc5cec87
facebook-github-bot pushed a commit that referenced this pull request Jul 23, 2024
Summary:
Add some additional error context when cert check fails instead of overwriting it.

Reason #1 is that our cert check could be wrong, so if it is, at least we show the actual error.

Reason #2: if #1 isn't true, it lets us collect some data to see what errors caused by invalid certs would look like, which could be useful.

Reviewed By: JakobDegen

Differential Revision: D59988439

fbshipit-source-id: a2355ea4ac39eaa1f55a431c5fbc951fa63b62f0
facebook-github-bot pushed a commit that referenced this pull request Sep 19, 2024
Summary:
This makes lives easier by not needing to passthrough it.
Original diff D62916369
Landed the wrong version, :(

Reviewed By: IanChilds

Differential Revision: D62984484

fbshipit-source-id: 0805c94e694a722edd13e9b91cc73bad8c11e2b7
facebook-github-bot pushed a commit that referenced this pull request Dec 19, 2024
…cross all host platforms

Summary:
### Motivation

My team has a concrete need for buck to generate 100% matching zip files for the same sets of inputs on all host platforms (macOS, Linux, Windows). Current limitations:
1. File order can differ on file systems with different case sensitivity.
2. Windows can't write the correct posix mode (i.e. permissions) for any entries.

Although the entries themselves might fully match, those discrepancies result in different metadata, which results in a different zip file.

See D67149264 for an in-depth explanation of the use case that requires this level of determinism.

### Tentative solution #1

In D66386385, I made it so the asset generation rule was only executable from Linux. Paired with buck cross builds, it made so that outputs from macOS and Linux matched, but did not work on Windows [due to some lower level buck problem](https://fb.workplace.com/groups/930797200910874/posts/1548299102494011) (still unresolved).

### Tentative solution #2

In D66404381, I wrote my own Python script to create zip files. I got all the files and metadata to match everywhere, but I could not get around differences in the compression results. Decided not to pursue because compression is important for file size.

### Tentative solution #3

In D67149264, I duplicated and tweaked buck's zip binary. It did work, but IanChilds rightfully pointed out that I'd be making maintenance on those libraries more difficult and that the team is even planning on deleting those, at some point.

### Tentative solution #4 (this diff!)

IanChilds advised me to try to fix buck itself to produce consistent results, so this is me giving it a try.

Because the root problem could not have been fixed in a backwards-compatible way (the file permissions, specifically; see inlined comment), I decided to use an argument to control whether the zip tool should strive to produce deterministic output or not, at the expense of some loss of metadata. The changes are simple and backwards compatible, but any feedback on the root problem, idea, and execution is welcome.
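The determinism levers can be illustrated with Python's stdlib `zipfile` (my own sketch of the general technique, not buck2's zip tool): sort entries case-insensitively and pin the timestamp and posix mode instead of reading them from the host file system.

```python
import io
import zipfile

def deterministic_zip(files):
    """files: dict of archive name -> bytes. Returns the zip as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # Fixed, case-insensitive ordering: independent of whether the
        # host file system is case sensitive.
        for name in sorted(files, key=str.lower):
            # Pin the timestamp (ZIP epoch) and the posix mode (0644)
            # explicitly, rather than taking them from the file system,
            # which Windows can't report correctly.
            info = zipfile.ZipInfo(name, date_time=(1980, 1, 1, 0, 0, 0))
            info.external_attr = 0o644 << 16
            info.compress_type = zipfile.ZIP_DEFLATED
            zf.writestr(info, files[name])
    return buf.getvalue()
```

For the same inputs this produces byte-identical archives regardless of insertion order (within one zlib version; as noted under solution #2, compressor differences across toolchains are a separate problem).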

Reviewed By: christolliday

Differential Revision: D67301945

fbshipit-source-id: c42ef7a52efd235b43509337913d905bcbaf3782