Add examples for using the prelude. #4
Closed
Conversation
Reviewed By: krallin Differential Revision: D38854059 fbshipit-source-id: fb8c42f62dc4c5adf83984da139a77a3874932a4
Summary: Sandcastle has a lot of messages like:

```
[2022-08-18T08:30:46.802-07:00] Waiting on daemon...
[2022-08-18T08:30:53.802-07:00] Waiting on daemon...
[2022-08-18T08:31:00.902-07:00] Waiting on daemon...
[2022-08-18T08:31:07.902-07:00] Waiting on daemon...
[2022-08-18T08:31:14.903-07:00] Waiting on daemon...
[2022-08-18T08:31:22.002-07:00] Waiting on daemon...
[2022-08-18T08:31:29.102-07:00] Waiting on daemon...
[2022-08-18T08:31:36.202-07:00] Waiting on daemon...
[2022-08-18T08:31:43.302-07:00] Waiting on daemon...
[2022-08-18T08:31:50.402-07:00] Waiting on daemon...
```

Make it clearer that we are waiting for buck2, and not for any other process that may be running on the same server (buck1, watchman, hg?). Ideally, together with this message, we should print what the daemon is doing. Differential Revision: D38932648 fbshipit-source-id: 63625044ebd98c3b9fec0a3e8142daac5feab1a5
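The kind of wait loop behind these messages can be sketched as follows (a minimal Python illustration with made-up names, not buck2's actual client code) — the point is simply to name the process being waited on in every line:

```python
import time

def wait_for_daemon(is_ready, name="buck2 daemon", poll_s=7.0, log=print):
    """Poll until `is_ready()` turns true, naming the daemon explicitly
    in every message (illustrative sketch only)."""
    attempts = 0
    while not is_ready():
        attempts += 1
        log(f"Waiting on {name} to start (attempt {attempts})...")
        time.sleep(poll_s)
    return attempts
```

A natural extension, as the summary suggests, would be to have `log` also include what the daemon reports it is currently doing.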
Reviewed By: krallin Differential Revision: D38854063 fbshipit-source-id: b4ca2efac40a9d000a7f4b8141c249f8d7a6b310
Reviewed By: krallin Differential Revision: D38854061 fbshipit-source-id: fd65d730403af3e7a8c1335be7d427e4b11e1753
Reviewed By: krallin Differential Revision: D38854066 fbshipit-source-id: f788c84ee451a59cc4dcecc61c1d9685a72db96e
Reviewed By: krallin Differential Revision: D38854068 fbshipit-source-id: eaed04c1a189a8822660bc2b83b52cb0f128ec68
Summary: These types need to be moved to `buck2_common` (D38854872) to be shared between client and server, so pick less generic names. Reviewed By: jaspercb Differential Revision: D38854874 fbshipit-source-id: 8c40f2220b205a891c4ecc5421d104d147bc6844
Summary: These types need to be shared between the `buck2_server` and `buck2_client` crates. `buck2_common` is not exactly the appropriate crate, because for example `buck2_build_api` depends on it, and it doesn't need to know about isolation. Probably another base crate like `buck2_cli_common` is needed, but move them to `buck2_common` for now. Reviewed By: krallin Differential Revision: D38854872 fbshipit-source-id: f2f04e48ab639a51c4a82946db981128e0d519f0
Summary: Code cleanup. Reviewed By: bobyangyf Differential Revision: D38896588 fbshipit-source-id: dde77f5a74b726ce6531c0646d7d5984219e7723
Summary: `buck2_query_parser` should not depend on `buck2_events`. This change reduces critical path from 9 nodes to 7 (but I don't expect compilation time to be significantly affected). Before: {F762636002} After: {F762635684} Reviewed By: krallin Differential Revision: D38888857 fbshipit-source-id: 5f49558f2d5085c676aa5139095c2c8c5263aed1
Reviewed By: krallin Differential Revision: D38888867 fbshipit-source-id: 5750a91f48debc5dadeff53208853ef706b1596f
Summary: Code cleanup. Reviewed By: krallin Differential Revision: D38889957 fbshipit-source-id: a57505c767cee85bb0a39a1b2a16da0d069b7708
Summary: Dead code. Reviewed By: krallin Differential Revision: D38889526 fbshipit-source-id: 3fa7b3082bc807d1ef6fb314cc3c4ce7f408ad70
Summary: Refactoring for the following diff D38898223. Reviewed By: bobyangyf Differential Revision: D38898371 fbshipit-source-id: b08e5c3470bbc983c2b21fe773437936d6e0a6c4
Summary: Add a `read_dir_include_ignored()` function Reviewed By: bobyangyf Differential Revision: D38741995 fbshipit-source-id: 2daa55698ddac908153c0f79dfa33673894485eb
Summary: Make `dynamic_output` temporarily take in both named and positional args. See next diff for migration steps Reviewed By: bobyangyf Differential Revision: D38744368 fbshipit-source-id: d372b2ee1b8097015b7b0f7e6f322c133f472f33
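The transitional API shape can be sketched in Python (parameter names here are illustrative, not the real `dynamic_output` signature): accept the legacy positional call alongside the new keyword-only form during the migration window.

```python
def dynamic_output(*args, dynamic=None, inputs=None, outputs=None, f=None):
    # Transitional shim: allow dynamic_output(dynamic, inputs, outputs, f)
    # as well as the keyword form while callers are migrated.
    if args:
        if any(v is not None for v in (dynamic, inputs, outputs, f)):
            raise TypeError("cannot mix positional and named arguments")
        if len(args) != 4:
            raise TypeError("expected exactly 4 positional arguments")
        dynamic, inputs, outputs, f = args
    return {"dynamic": dynamic, "inputs": inputs, "outputs": outputs, "f": f}
```

Once all call sites use the named form, the `*args` branch can be deleted, leaving a keyword-only signature.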
Summary: This refactors the Omnibus implementation in preparation for adding a hinting mechanism that would help us share roots across Omnibus targets. The changes here are:

In C++:
- Reorder C++ library providers so that the `native_link_target` is available when we add the linkable graph provider. This will be useful to add "hinted" roots (that is, libraries that should _always_ be treated as roots).
- `cxx_link` uses the short path for the argsfile instead of the base path so that I can have both `libfoo.so.argsfile` and `omnibus/libfoo.so.argsfile`.

In Omnibus:
- I moved the symbol extraction logic into `_create_root`. So, instead of taking all roots and extracting all their symbols, we declare the symbol extraction when declaring the root. The same artifacts end up produced, but they're declared elsewhere.
- We now store a table of "lib dispositions" in the OmnibusSpec. This is just a map that tells us where a given lib ended up, so we can compare with the expectations of the shared table.
- We now store the root libraries in a separate map (called `root_products`) rather than in the `roots` attribute. This lets us declare them later in `_create_root`.
- I extracted the code that figures out what is default-excluded into a function.

The end goal of this is that it'll allow `_create_root` to decide whether it can reuse a shared root (using the dispositions), and if so return that from `_create_root` instead of creating one. Reviewed By: IanChilds Differential Revision: D38628582 fbshipit-source-id: 2e7b8933a1f0ff1d3a124221f76d921c19cabf2c
Summary: This needs to be here if we want to be able to reuse symbols from a shared root. Reviewed By: IanChilds Differential Revision: D38653801 fbshipit-source-id: 55b347a244c3a36a8fa7b593840ad0cb66008ba3
Summary: Same as the previous diff. This needs to be there if we want to reuse this from a shared root. Reviewed By: IanChilds Differential Revision: D38653804 fbshipit-source-id: 5b660e1439a9dd1ef83db3b2e6ed9b9e4d5e14b1
Summary: We are going to use this to check if a given shared root can be reused or not. Reviewed By: IanChilds Differential Revision: D38653805 fbshipit-source-id: 2fb5070de1cc1dc1c944c79b3daefc2ca1f6deea
Reviewed By: IanChilds Differential Revision: D38628581 fbshipit-source-id: 5051d9500dab55da641d127d266f93413896f6fa
Summary: Next steps: * Land D38744368 (facebook/buck2@a6fef16) and this one at the same time * Wait for buck2 version bump, and then wait 2 days * Update all dynamic_output and dynamic lambda references to use the updated API * Clean up the code and tests (will remove the `separate_artifacts_outputs_check` added in this diff) Reviewed By: bobyangyf Differential Revision: D38786257 fbshipit-source-id: 75b21c7289650c7100ca0dfe3ea5123acaea731e
Summary: `cells` utility creates more than needed and pulls too many dependencies. Take only what's needed. Following diff D38896611 moves coercion into a separate crate. Reviewed By: scottcao Differential Revision: D38896618 fbshipit-source-id: 9ed0d55465b3f6be66b35d242e105949fd0dbd1e
Summary: So `request_metadata` can be called from crates which do not depend on `buck2_server` or `cli` crates (`buck2_test` or `buck_bxl`). Reviewed By: krallin Differential Revision: D38887006 fbshipit-source-id: 766dae9756b5829a3a992d436f4ae7d39a550496
Summary: So it can be used from `buck2_bxl` and `buck2_test` crates which do not depend on `buck2_server`. Reviewed By: krallin Differential Revision: D38887443 fbshipit-source-id: d83eed381ecab702db8970314c081cfff419c37b
Summary: Fewer dependencies between modules. Reviewed By: krallin Differential Revision: D38854873 fbshipit-source-id: 644da5fdb5cd591eee457889e3e1136f21a467d1
Summary: Code cleanup. Reviewed By: IanChilds Differential Revision: D38895423 fbshipit-source-id: c1df468fcec0eb5d736c2c00b653d0e533e8db55
Summary: The dependency is specified only in `Cargo.toml`. For `TARGETS` we have a linter which would have found it. Reviewed By: krallin Differential Revision: D38888883 fbshipit-source-id: 54d5c230e4ee0e2b41528451738984372c4350e8
Summary: Fewer dependencies between modules. Reviewed By: krallin Differential Revision: D38855452 fbshipit-source-id: 226068ad10f33328877b0c0804b381b50706c4de
Summary: These utilities are needed by both client and server. Reviewed By: krallin Differential Revision: D38855513 fbshipit-source-id: 0d9306c695ec129e0c77bbe1a636b5aedcd36063
Summary: The macOS `strip` doesn't accept the `--strip-debug` flag, so use `-S`, which works the same on both Linux and macOS. https://fb.prod.workplace.com/groups/buck2users/posts/3220308031558889/ Reviewed By: ndmitchell Differential Revision: D38993081 fbshipit-source-id: 982152c7fcea0fb72c8eaba75e364ce8badafa71
Summary: Adding a helper function to assist users with manual configuration and also hopefully simplify auto configuration in the future. #forceTDHashing skip-frl-buck-targeting Reviewed By: luciang Differential Revision: D38843115 fbshipit-source-id: 78f38a3a7a859f449c8041ab645bf6e89dd183ff
Summary: Currently for matching artifact optimization, we check that `ArtifactValue` is equal, which checks equality on both `entry` and `deps` field of `ArtifactValue`. In reality, we just need to check if `entry` is equal, and we can update `deps` whenever the materialized entry gets redeclared. This diff does the following: - Make `ArtifactMaterializationData` hold `entry` and `deps` directly instead of holding just `ArtifactValue`. - On declare, for matching artifact optimization, just check that `entry` is equal. If `entry` is equal, declare now updates `deps` field of `ArtifactMaterializationData` (instead of doing nothing). Reviewed By: krallin Differential Revision: D38136394 fbshipit-source-id: 61b98b56b4bd7e8d64d4feacf856836fd3b23535
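The declare-side change described above can be sketched in Python (class and field names are simplifications of the real Rust types): the matching-artifact check is decided on `entry` alone, and a match now refreshes `deps` instead of doing nothing.

```python
class ArtifactMaterializationData:
    """Holds `entry` and `deps` directly, not a combined ArtifactValue."""
    def __init__(self, entry, deps):
        self.entry = entry
        self.deps = deps

def declare(existing, entry, deps):
    # Matching-artifact optimization: compare `entry` only.
    if existing is not None and existing.entry == entry:
        existing.deps = deps   # refresh deps on redeclare instead of no-op
        return existing        # reuse the already-materialized artifact
    return ArtifactMaterializationData(entry, deps)
```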
…th deps Summary: `file_content_path` should return materialized path for all materialized artifacts, not just materialized artifacts without deps Also deleted outdated docstring on `file_content_path`. Reviewed By: krallin Differential Revision: D38183109 fbshipit-source-id: cab56de4be40d1f7174ab3997da86ba46c876ce3
Summary: It's been rolled out for 2 months and we haven't spotted any issues. To simplify the code in the materializer, let's make this baked into deferred materializer as a feature and get rid of the buckconfig check. Reviewed By: krallin Differential Revision: D38892434 fbshipit-source-id: 960153d43bd570ad7064488ab0b597de8e0fee0a
Summary: Move declare logic in deferred materializer's command processor to a separate function. Logic is unchanged. Reviewed By: stepancheg Differential Revision: D39035856 fbshipit-source-id: 31d419b6d8d7b05b1b32134b4e117c8c055d2e8c
Summary: `get_if_already_materialized` is getting a little awkward to use because we need to pull specific items out of the materialized stage if the artifact is materialized in future diffs. Merge this function back into `declare`. Logic should remain unchanged. Reviewed By: stepancheg Differential Revision: D39037836 fbshipit-source-id: 1d6e671d00b6bbe6b5b9c2e0923465decf47c5a7
…f entire artifact entry Summary: Once an artifact is materialized, we don't need to hold the entire `entry`, just enough information to check for matching artifacts. This diff adds an `ArtifactEntryMetadata` struct that holds enough metadata about the artifact to check matching artifact optimization. For directory, this is the fingerprint. For files, symlinks, and external symlinks, we reuse `ActionDirectoryMember`. Instead of having `ArtifactMaterializationData` hold `entry`, this diff makes the Declared stage hold an `entry` and the Materialized stage hold just an `ArtifactEntryMetadata`. Matching artifact optimization should still work the same way, just by comparing `ArtifactEntryMetadata` instead of comparing `entry`. TODO(scottcao): Benchmark how much memory we save by doing this before landing this diff. Reviewed By: krallin Differential Revision: D38183107 fbshipit-source-id: 7fb381d49bd6d6710464ab92fb6db06e935be9cf
Summary: LinkableGraph is currently manually constructed via:
- `create_merged_linkable_graph`, which merges the data off LinkableGraph on any of a target's deps
- `add_linkable_node`, where targets can add themselves to the merged graph
- Adding of roots and excluded nodes for omnibus linking

Using a tset saves memory and allows deferring the aggregation until it's actually required. But to do so, we need to change the API and update all of its uses (which unfortunately are many):
- move roots and excluded nodes into LinkableNode to make it easier to propagate this data in a tset
- this requires updating the python code a bit
- use a tset reduce to get the roots/excluded nodes

Reviewed By: krallin Differential Revision: D38289350 fbshipit-source-id: a9b853c9a7b069fcd64352dcebefa90073c9d913
Summary: Using an `any` and creating a temporary array in `matches_target` uses extra memory. {F763623207} {F763623206} Iterating with a plain loop instead saves ~3.1 GB of allocated memory for FBIOS's group mapping. {F763623247} {F763623246} Reviewed By: d16r Differential Revision: D38961908 fbshipit-source-id: 70490c2c1956d6b160fd42241c63ad14a64c875b
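The difference can be illustrated in Python (with a hypothetical pattern predicate standing in for the real matcher): `any` over a list comprehension materializes the entire temporary list before checking anything, while a plain loop allocates nothing and exits on the first match.

```python
def matches_target_eager(patterns, target):
    # Builds a full temporary list before any() even starts.
    return any([p(target) for p in patterns])

def matches_target_lazy(patterns, target):
    # Plain loop: no temporary list, early exit on the first match.
    for p in patterns:
        if p(target):
            return True
    return False
```

(In Python specifically, `any(p(target) for p in patterns)` with a generator would also avoid the temporary list; the commit's point is the general allocation pattern.)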
Summary: Currently, we generate the link and resource graph maps and pass it in as an argument when computing link/resource group maps. In the case of cxx_libraries which are not linked as a dynamic library (link group), computing the map is wasteful and accounts for ~1.2 GB of allocated memory (total 11.5 GB): {F763647373} {F763647374} By deferring the graph map calculation until needed, we can save most of this allocation. For both linkable and resource graph, create a wrapper method get_x_map_func which returns a function that gets the map. After total allocated is 9.8 GB. {F763647810} {F763647811} Reviewed By: d16r Differential Revision: D38980909 fbshipit-source-id: 25303cccc1681d63b56a74a354b4542299f0c7d7
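The deferral described above can be sketched in Python (the name `get_map_func` approximates the commit's `get_x_map_func` helpers): return a memoizing thunk so rules that never need the map never pay for building it.

```python
def get_map_func(build_map, graph):
    """Wrap an expensive map computation in a memoizing thunk so callers
    that never link dynamically never pay for it (illustrative sketch)."""
    cache = []
    def get_map():
        if not cache:
            cache.append(build_map(graph))  # computed at most once
        return cache[0]
    return get_map
```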
Summary: Add noop rules to avoid failing builds with target patterns. Reviewed By: scottcao Differential Revision: D39068617 fbshipit-source-id: 3987d6a936887fc4fc638f65e5bdeb6c6cd99724
Summary: When dumping nodes and edges, dump currently running nodes into a separate file too. Reviewed By: bobyangyf Differential Revision: D39037831 fbshipit-source-id: 98f5bd5e268b39becf1e91803d7fb7e3c5d360db
arlyon force-pushed the feat/prelude-examples branch from e363e98 to 6e207ee on August 27, 2022 10:58
@arlyon has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
facebook-github-bot force-pushed the main branch from c53c46c to 4f0e2ec on August 29, 2022 10:08
facebook-github-bot pushed a commit that referenced this pull request on Sep 19, 2022:
Summary: This will probably need to wait on some changes to how the prelude is exported before it works correctly. Pull Request resolved: #4 Reviewed By: ndmitchell Differential Revision: D38942570 Pulled By: arlyon fbshipit-source-id: d9270277a185eefd61f61efec8a10946d7704a91
facebook-github-bot pushed a commit that referenced this pull request on Dec 13, 2022:
Summary: Going from this stack trace: P570153247. Relevant threads are: ``` Thread 31 (LWP 16695): #0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 #1 0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow () #2 0x000000000808cc3e in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map () #3 0x00000000082c49fe in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll () Thread 29 (LWP 16693): #0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 #1 0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow () #2 0x000000000808a737 in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::eval_projection_task () #3 0x00000000082c4ae3 in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll () Thread 19 (LWP 16683): #0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 #1 0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow () #2 0x0000000008093a3b in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>> as dice::introspection::graph::EngineForIntrospection>::currently_running_key_count () Thread 15 (LWP 16679): #0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 #1 0x0000000008761b14 in <parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow () #2 
0x000000000808cc3e in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map () #3 0x00000000082c49fe in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll () Thread 3 (LWP 16667): #0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 #1 0x0000000008762aac in <parking_lot::raw_rwlock::RawRwLock>::wait_for_readers () #2 0x0000000008760f13 in <parking_lot::raw_rwlock::RawRwLock>::lock_exclusive_slow () #3 0x000000000808cc64 in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map () #4 0x00000000082c49fe in <core::future::from_generator::GenFuture<<dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::do_recompute_projection::{closure#0}> as core::future::future::Future>::poll () ``` The comments in the code suggest releasing an entry will release the lock, but it's not clear to me that this is actually supposed to work (the map itself is still held, you could ask it for another entry). That said, it appears somewhat pretty clear from thread 19, which is just running the introspection task, that the map is overall locked. This patch updates the code to make it extremely clear that the currently_running guard is indeed released when we call the projection task, since it's in its own scope. It's a bit unclear to me exactly where the deadlock is. 
I suspect it happens because `currently_running` is locked when we enter `eval_projection_task`, which means that this function cannot acquire the lock again here: ``` let sent = tx.send(node.dupe()); assert!(sent.is_ok(), "receiver is still alive"); if let Some(running_map) = self .currently_running .read() .get(&transaction_ctx.get_version()) { let removed = running_map.remove(k); assert!(removed.is_some()); } ``` That seems pretty clearly wrong, but the bit that's unclear to me is how can this ever work? Still theorizing, I suspect the reason is: Normally, it works, because: - eval_projection_versioned takes a read lock - eval_projection_task also takes a read lock But, if you ever get concurrent commands running, what will happen is: - Thread 1: eval_projection_versioned takes a read lock - Thread 2: attempts to take a write lock - Thread 1: eval_projection_task also takes a read lock, can't have it because we block new read locks when a writer is waiting. The reason I suspect I'm right is this thread, which is indeed attempting to acquire a write lock: ``` #1 0x0000000008762aac in <parking_lot::raw_rwlock::RawRwLock>::wait_for_readers () #2 0x0000000008760f13 in <parking_lot::raw_rwlock::RawRwLock>::lock_exclusive_slow () #3 0x000000000808cc64 in <dice::incremental::IncrementalEngine<dice::projection::ProjectionKeyProperties<buck2_common::legacy_configs::dice::LegacyBuckConfigCellNamesKey>>>::get_running_map () ``` Reviewed By: ndmitchell Differential Revision: D41996701 fbshipit-source-id: ba00e1e1052272ddd1a44b6c4b9d8bb924043ebb
facebook-github-bot pushed a commit that referenced this pull request on Feb 10, 2023:
Summary: X-link: facebook/buck2-prelude#4 This open-sources the pyre BXL script that powers buck2 support for pyre. I'm SHRUG on this exact implementation, if there's a better or more preferred way of doing this (from either the buck2 or pyre perspective) please feel free to let me know. Reviewed By: zsol Differential Revision: D43024586 fbshipit-source-id: 79acbdb9781650ace22282f6400d6fb2d5ebe3db
facebook-github-bot pushed a commit that referenced this pull request on Feb 15, 2024:
Summary: The stack around D42099161 introduced the ability to dynamically set the log level via a debug-command. This requires using `tracing_subscriber::reload::Layer`. The implementation of that machinery is backed by a `RwLock` [here](https://fburl.com/code/4xv0ihpn). This `RwLock` is read locked once on every single `#[instrumentation]` call to check whether the instrumentation is active or not. On top of that, something in our fbsource third-party config enables the `parking_lot` feature of `tracing_subscriber`. This means that this is a `parking_lot::RwLock`, not a `std::sync::RwLock`, which is bad because it's well known that parking lot has problems with excessive spinning. What all that adds up to is that when you put `#[instrument]` on a super hot function in dice, you get dozens of threads that are all simultaneously doing this: ``` thread #23, name = 'buck2-rt', stop reason = signal SIGSTOP frame #0: 0x000000000c94a1fa buck2`<parking_lot::raw_rwlock::RawRwLock>::lock_shared_slow + 122 frame #1: 0x0000000007a80b54 buck2`<tracing_subscriber::layer::layered::Layered<tracing_subscriber::reload::Layer<tracing_subscriber::filter::layer_filters::Filtered<tracing_subscriber::fmt::fmt_layer::Layer<tracing_subscriber::registry::sharded::Registry, tracing_subscriber::fmt::format::DefaultFields, tracing_subscriber::fmt::format::Format, std::io::stdio::stderr>, tracing_subscriber::filter::env::EnvFilter, tracing_subscriber::registry::sharded::Registry>, tracing_subscriber::registry::sharded::Registry>, tracing_subscriber::registry::sharded::Registry> as tracing_core::subscriber::Subscriber>::enabled + 292 frame #2: 0x000000000a172d77 buck2`<dice::legacy::incremental::dep_trackers::internals::Dep<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>> as dice::legacy::incremental::graph::dependencies::Dependency>::recompute::{closure#0} + 183 frame #3: 0x000000000a03606c 
buck2`<futures_util::stream::futures_unordered::FuturesUnordered<core::pin::Pin<alloc::boxed::Box<dyn core::future::future::Future<Output = core::result::Result<(alloc::boxed::Box<dyn dice::legacy::incremental::graph::dependencies::ComputedDependency>, alloc::sync::Arc<dyn dice::legacy::incremental::graph::GraphNodeDyn>), dice::api::error::DiceError>> + core::marker::Send>>> as futures_util::stream::stream::StreamExt>::poll_next_unpin + 444 frame #4: 0x0000000009fc4755 buck2`<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::compute_whether_dependencies_changed::{closure#0} (.llvm.4229350879637282184) + 1733 frame #5: 0x0000000009ef30d4 buck2`<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::compute_whether_versioned_dependencies_changed::{closure#0}::{closure#0} + 228 frame #6: 0x0000000009f194ec buck2`<buck2_futures::cancellable_future::CancellableFuture<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}> as core::future::future::Future>::poll (.llvm.11181184606289051452) + 3500 frame #7: 0x0000000009f04bbf buck2`<futures_util::future::future::map::Map<buck2_futures::cancellable_future::CancellableFuture<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}>, buck2_futures::spawn::spawn_inner<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}, dice::api::user_data::UserComputationData, futures_util::future::ready::Ready<()>>::{closure#0}> as core::future::future::Future>::poll + 31 frame #8: 0x0000000009ff0339 
buck2`<futures_util::future::future::flatten::Flatten<futures_util::future::future::Map<futures_util::future::ready::Ready<()>, buck2_futures::spawn::spawn_inner<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}, dice::api::user_data::UserComputationData, futures_util::future::ready::Ready<()>>::{closure#1}>, <futures_util::future::future::Map<futures_util::future::ready::Ready<()>, buck2_futures::spawn::spawn_inner<<dice::legacy::incremental::IncrementalEngine<dice::legacy::key::StoragePropertiesForKey<buck2_build_api::actions::calculation::BuildKey>>>::new_dice_task::{closure#0}, dice::api::user_data::UserComputationData, futures_util::future::ready::Ready<()>>::{closure#1}> as core::future::future::Future>::Output> as core::future::future::Future>::poll + 201 frame #9: 0x00000000093f9f22 buck2`<tokio::task::task_local::TaskLocalFuture<buck2_events::dispatch::EventDispatcher, core::pin::Pin<alloc::boxed::Box<dyn core::future::future::Future<Output = alloc::boxed::Box<dyn core::any::Any + core::marker::Send>> + core::marker::Send>>> as core::future::future::Future>::poll + 130 frame #10: 0x000000000939fdcb buck2`<tracing::instrument::Instrumented<<buck2_build_api::spawner::BuckSpawner as buck2_futures::spawner::Spawner<dice::api::user_data::UserComputationData>>::spawn::{closure#0}> as core::future::future::Future>::poll + 139 frame #11: 0x00000000091ca5a9 buck2`<tokio::runtime::task::core::Core<tracing::instrument::Instrumented<<buck2_build_api::spawner::BuckSpawner as buck2_futures::spawner::Spawner<dice::api::user_data::UserComputationData>>::spawn::{closure#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle>>>::poll + 169 frame #12: 0x00000000091c1b22 buck2`<tokio::runtime::task::harness::Harness<tracing::instrument::Instrumented<<buck2_build_api::spawner::BuckSpawner as 
buck2_futures::spawner::Spawner<dice::api::user_data::UserComputationData>>::spawn::{closure#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle>>>::poll + 146 frame #13: 0x000000000c920f6f buck2`<tokio::runtime::scheduler::multi_thread::worker::Context>::run_task + 895 frame #14: 0x000000000c91f847 buck2`<tokio::runtime::scheduler::multi_thread::worker::Context>::run + 2103 frame #15: 0x000000000c932264 buck2`<tokio::runtime::context::scoped::Scoped<tokio::runtime::scheduler::Context>>::set::<tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure#0}, ()> + 52 frame #16: 0x000000000c932169 buck2`tokio::runtime::context::runtime::enter_runtime::<tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}, ()> + 441 frame #17: 0x000000000c91efa6 buck2`tokio::runtime::scheduler::multi_thread::worker::run + 70 frame #18: 0x000000000c906a50 buck2`<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>> as core::future::future::Future>::poll + 160 frame #19: 0x000000000c8f8af9 buck2`<tokio::loom::std::unsafe_cell::UnsafeCell<tokio::runtime::task::core::Stage<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>>>>::with_mut::<core::task::poll::Poll<()>, <tokio::runtime::task::core::Core<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>, tokio::runtime::blocking::schedule::BlockingSchedule>>::poll::{closure#0}> + 153 frame #20: 0x000000000c90166b buck2`<tokio::runtime::task::core::Core<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>, tokio::runtime::blocking::schedule::BlockingSchedule>>::poll + 
43 frame #21: 0x000000000c90d9b8 buck2`<tokio::runtime::task::harness::Harness<tracing::instrument::Instrumented<tokio::runtime::blocking::task::BlockingTask<<tokio::runtime::scheduler::multi_thread::worker::Launch>::launch::{closure#0}>>, tokio::runtime::blocking::schedule::BlockingSchedule>>::poll + 152 frame #22: 0x000000000c90b848 buck2`<tokio::runtime::blocking::pool::Inner>::run + 216 frame #23: 0x000000000c9002ab buck2`std::sys_common::backtrace::__rust_begin_short_backtrace::<<tokio::runtime::blocking::pool::Spawner>::spawn_thread::{closure#0}, ()> + 187 frame #24: 0x000000000c90042e buck2`<<std::thread::Builder>::spawn_unchecked_<<tokio::runtime::blocking::pool::Spawner>::spawn_thread::{closure#0}, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 158 frame #25: 0x000000000757e4b5 buck2`std::sys::unix::thread::Thread::new::thread_start::h618b0b8e7bda0b2b + 37 frame #26: 0x00007f2ba1e9abaf ``` This hit an open source user super hard over in #555 . This diff approaches this issue by putting all the `#[instrument]`s in dice behind `#[cfg(debug_assertions)]`. This allows them to still be used in debug mode, but keeps them out of any hot paths. I do not have numbers to show that most of these matter. The only one I know matters for sure is `recompute`. Alternatives are to either try and tailor this to the callsites we know are hot (that sounds error prone) or to revert the stack that inadvertently introduced the RwLock. Even if we did that though, it's probably still safer to keep `#[instrument]` out of super hot paths like this. Reviewed By: stepancheg Differential Revision: D53744041 fbshipit-source-id: 85bce9af2fec8ad1d50adc241d3b8e2bfc5cec87
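The gating pattern (Rust's `#[cfg(debug_assertions)]`) translated into a hedged Python sketch: instrumentation returns the function untouched unless a debug flag is on, so hot paths pay literally nothing in release mode.

```python
import functools

DEBUG = False  # stand-in for Rust's cfg(debug_assertions)

def instrument(fn):
    """Return fn unchanged in release mode; only wrap (and pay the
    tracing cost) when DEBUG is set. Illustrative analogy only."""
    if not DEBUG:
        return fn
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        print(f"enter {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapped
```

The key property, mirroring the diff, is that the check happens once at decoration time, not on every call.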
facebook-github-bot pushed a commit that referenced this pull request on Nov 21, 2024:
Summary: Four specific parts of this diff, I'll break them down 1. Pass in "alternate" flag so `IncompatiblePlatformReason` error display prints newlines after each dependency 2. Add a bit of extra whitespace to final output ("is incompatible with...") to make it look a bit prettier, per the suggestion in the linked task 3. (minor) update code pointer in comment to point to the right place (previous code pointer must be the wrong commit, or code moved or something). Intentionally not shortening the URL so it's clearer where the link is going to go and you can open it in your IDE more easily if you want. 4. Update tests for new output string(s) NOTE: #4 (updating tests) is still a work in progress, I'm publishing this diff early for reviewing the other logic and will fix the tests before landing ## help needed: Is this push safe; updating buck core code and www code (the Sandcastle tests) in the same diff? This error makes it seem like no > fbs-validate-www-changes > PUSH UNSAFE - Landing together does not mean they are deployed together. Changes to fbsource/www and fbsource in the same diff are currently an opt-in experience. If you understand the risks as outlined here https://www.internalfb.com/.../When_to_use_megarepo_0/ Please join this group (https://fb.workplace.com/groups/757968966024362) and rerun the job. That said, this is just tests and also putting the test changes in a separate diff doesn't seem any better than doing it all in one diff? Reviewed By: blackm00n Differential Revision: D66050821 fbshipit-source-id: e288909af5489aaa2ec451b1ab19dcfd8748b406
facebook-github-bot pushed a commit that referenced this pull request on Dec 19, 2024:
…cross all host platforms Summary: ### Motivation My team has a concrete need for buck to generate 100% matching zip files for the same sets of inputs on all host platforms (macOS, Linux, Windows). Current limitations: 1. File order can be different on file system with different case sensitivity. 2. Windows can't write correct posix mode (i.e. permissions) for any entries. Although the entries themselves might fully match, those discrepancies result in different metadata, which results in a different zip file. See D67149264 for an in-depth explanation of the use case that requires this level of determinism. ### Tentative solution #1 In D66386385, I made it so the asset generation rule was only executable from Linux. Paired with buck cross builds, it made so that outputs from macOS and Linux matched, but did not work on Windows [due to some lower level buck problem](https://fb.workplace.com/groups/930797200910874/posts/1548299102494011) (still unresolved). ### Tentative solution #2 In D66404381, I wrote my own Python script to create zip files. I got all the files and metadata to match everywhere, but I could not get around differences in the compression results. Decided not to pursue because compression is important for file size. ### Tentative solution #3 In D67149264, I duplicated and tweaked buck's zip binary. It did work, but IanChilds rightfully pointed out that I'd be making maintenance on those libraries more difficult and that the team is even planning on deleting those, at some point. ### Tentative solution #4 (this diff!) IanChilds advised me to try to fix buck itself to produce consistent results, so this is me giving it a try. Because the root problem could not have been done in a backwards compatible way (the file permissions, specifically; see inlined comment), I decided to use an argument to control whether the zip tool should strive to produce a deterministic output or not, at the expense of some loss of metadata. 
The changes are simple and backwards compatible, but any feedback on the root problem, idea and execution are welcome. Reviewed By: christolliday Differential Revision: D67301945 fbshipit-source-id: c42ef7a52efd235b43509337913d905bcbaf3782
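The idea can be sketched with Python's stdlib `zipfile` (this is an illustration of the technique, not buck2's actual zip writer): sort the entries and pin timestamps, permissions, and the creator system, so identical inputs produce byte-identical archives regardless of host platform or file-system ordering.

```python
import io
import zipfile

FIXED_DATE = (1980, 1, 1, 0, 0, 0)  # earliest timestamp the zip format allows

def deterministic_zip(files):
    """files: dict of archive name -> bytes; returns the archive as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name in sorted(files):              # fixed order, not FS order
            info = zipfile.ZipInfo(name, date_time=FIXED_DATE)
            info.external_attr = 0o644 << 16    # fixed posix mode, even on Windows
            info.create_system = 3              # pin "unix" so all hosts agree
            zf.writestr(info, files[name],
                        compress_type=zipfile.ZIP_DEFLATED)
    return buf.getvalue()
```

As the summary notes, this trades away real metadata (true mtimes and modes) for determinism, which is why it makes sense behind an opt-in flag rather than as the default behavior.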
Labels: CLA Signed
This will probably need to wait on some changes to how the prelude is exported before it works correctly.