From 5ffa9d08d8dd7763025ffc0e783b3168078cacf9 Mon Sep 17 00:00:00 2001 From: Mateusz Tarnaski Date: Wed, 4 May 2022 14:39:15 +0200 Subject: [PATCH] chore: rebase development (#131) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * test: fix coverage action (#4036) Description --- Fix test coverage. I had to remove the wasm tests. It looks like a known [open issue](https://github.com/rust-lang/rust/issues/81684) How Has This Been Tested? --- Working (excluding the last reporting step) locally using `act -j coverage` * ci: move npm audit to development only (#4055) Description --- Separate `npm audit` into its own CI file and run it only on `development` branch merges * chore(deps): bump async from 2.6.3 to 2.6.4 in /applications/launchpad/gui-vue (#4053) Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
Changelog

Sourced from async's changelog.

v2.6.4

  • Fix potential prototype pollution exploit (#1828)
Commits
Maintainer changes

This version was pushed to npm by hargasinski, a new releaser for async since your current version.


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=async&package-manager=npm_and_yarn&previous-version=2.6.3&new-version=2.6.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/tari-project/tari/network/alerts).
* feat(tari_explorer): add total hashrate chart (#4054) Description --- Reused the hashrate data already provided by the gRPC client to plot the total in the Tari text explorer. Motivation and Context --- Recently we fixed the total estimated hashrate calculation and included it in the Tari text explorer. We added charts for both Monero and SHA3 hashrates, but we didn't include a chart for the total hashrate. The motivation of this PR is to add the total hashrate chart to the explorer: ![Screenshot 2022-04-26 at 17 19 13](https://user-images.githubusercontent.com/47919901/165349284-95ec2b18-5aa8-405a-b48b-46657446f89c.png) How Has This Been Tested? --- Manually running the text explorer * chore(deps): bump ejs from 3.1.6 to 3.1.7 in /applications/tari_web_extension (#4056) Bumps [ejs](https://github.com/mde/ejs) from 3.1.6 to 3.1.7.
Release notes

Sourced from ejs's releases.

v3.1.7

Version 3.1.7

Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ejs&package-manager=npm_and_yarn&previous-version=3.1.6&new-version=3.1.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
* chore: remove unused else (#4051) Description --- Remove an empty, unused else branch * refactor(base-node): use existing peer feature methods to check if a peer is a base or client node (#4048) Description --- Refactored the `list-peers` and `list-connections` base node console commands to reuse the existing peer feature methods to check whether a peer is a base or client node. Motivation and Context --- Up until now, the `list-peers` and `list-connections` base node console commands check if a node is a client/node by comparing values against enums, but there are already some convenient methods to check that. How Has This Been Tested? --- We don't have unit or integration tests on console commands, so I tested by manually running `list-peers` and `list-connections` and checking that the results are correct. * fix: makes header consensus encoding infallible (#4045) Description --- - removes `BlockHeader::consensus_encode` impl errors Motivation and Context --- The ConsensusEncode contract states that the implementation should never be the source of errors; only the writer may return errors (a sketch follows the dependency notes below). How Has This Been Tested? --- Existing tests, manual testing, base node sync * fix(wallet): do not prompt for password if given in config (#4040) Description --- - uses the CLI password, falling back to the configured password if specified (a sketch follows the dependency notes below) Motivation and Context --- The wallet should not prompt for a password if one is given in config How Has This Been Tested? --- Manually * test: cucumber saf test (#3135) ## Description Integration test for SAF. This test has some long sleeps, because we need to time out the connections. Once we solve that issue we can shorten the sleeps (I've added a TODO in the code). * chore: obscure grpc error response (#3995) Description --- Obscure errors returned by gRPC on the base node. The flag for obscuring comes from config (default is false). * test(covenant): improve test coverage (#4052) Description --- - adds more tests for covenant modules - fixes typo in `CovenantReadExt` trait - returns error on invalid `Option` types in `is_eq` Motivation and Context --- Previous coverage: (image in PR). Coverage is now: PENDING How Has This Been Tested? --- Tests pass * fix: weird behaviour of dates in base node banned peers (#4037) Description --- * Changed the peer display format to output `offline_at` and `banned_until` as local time instead of UTC, fixing the “time in the past” issue * Improved handling and display of permanent bans (when invoking `ban-peer` with no params). Note that _existing_ permabans will still display odd times; the changes affect only new permabans * The scope of the changes only seems to affect log messages in other applications Motivation and Context --- There is some weird behaviour regarding the display of date times on banned peers. - Times are shown in UTC but without the timezone, so if the user is in UTC+N the ban end time may look like it's "in the past" for some values - The default ban time is permanent, and this makes the "banned until" display the max datetime. How Has This Been Tested? --- Manually tested on a base node, by banning peers with different values * chore(deps): bump async from 2.6.3 to 2.6.4 in /applications/tari_web_extension (#4059) Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
Changelog

Sourced from async's changelog.

v2.6.4

  • Fix potential prototype pollution exploit (#1828)
Commits
Maintainer changes

This version was pushed to npm by hargasinski, a new releaser for async since your current version.


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=async&package-manager=npm_and_yarn&previous-version=2.6.3&new-version=2.6.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
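Regarding `fix: makes header consensus encoding infallible` (#4045) above, here is a minimal sketch of the contract it relies on: the encoder itself introduces no error paths, so the only `Err` a caller can ever see comes from the writer. The trait name and signature below are illustrative assumptions, not the exact `tari_core` definitions.

```rust
use std::io::{self, Write};

/// Illustrative stand-in for the project's consensus-encoding trait (assumed shape).
trait ConsensusEncoding {
    /// Encode `self` into `writer`. The implementation must not invent its own
    /// errors; any `Err` returned here originates from the writer.
    fn consensus_encode<W: Write>(&self, writer: &mut W) -> Result<(), io::Error>;
}

struct ToyHeader {
    version: u16,
    height: u64,
}

impl ConsensusEncoding for ToyHeader {
    fn consensus_encode<W: Write>(&self, writer: &mut W) -> Result<(), io::Error> {
        // Fixed-width little-endian fields: nothing here can fail except the write itself.
        writer.write_all(&self.version.to_le_bytes())?;
        writer.write_all(&self.height.to_le_bytes())?;
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let header = ToyHeader { version: 1, height: 42 };
    let mut buf = Vec::new();
    // Writing into a Vec<u8> cannot fail, so this call never produces an error.
    header.consensus_encode(&mut buf)?;
    assert_eq!(buf.len(), 2 + 8);
    Ok(())
}
```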
* chore(deps): bump async from 3.2.1 to 3.2.3 in /integration_tests (#4035) Bumps [async](https://github.com/caolan/async) from 3.2.1 to 3.2.3.
Changelog

Sourced from async's changelog.

v3.2.3

  • Fix bugs in comment parsing in autoInject. (#1767, #1780)

v3.2.2

  • Fix potential prototype pollution exploit
Commits

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=async&package-manager=npm_and_yarn&previous-version=3.2.1&new-version=3.2.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
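For `fix(wallet): do not prompt for password if given in config` (#4040) above, the behaviour reduces to a simple precedence: CLI password first, then the configured password, and an interactive prompt only when neither is present. A minimal sketch of that fallback with hypothetical names (this is not the console wallet's actual API):

```rust
/// Resolve the wallet password: CLI flag first, then the configured value,
/// and only prompt interactively when neither is present.
/// (Hypothetical helper for illustration, not the console wallet's API.)
fn resolve_password(
    cli_password: Option<String>,
    config_password: Option<String>,
    prompt: impl FnOnce() -> String,
) -> String {
    cli_password.or(config_password).unwrap_or_else(prompt)
}

fn main() {
    // A password is configured, so the prompt closure is never invoked.
    let password = resolve_password(None, Some("tari".to_string()), || {
        unreachable!("must not prompt when a password is configured")
    });
    assert_eq!(password, "tari");
}
```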
* docs: add key manager docs (#4050) Adds/updates and fixes documentation for the key manager service * fix: only count base nodes in peers count in base node status (#4039) Description --- Filter the connection count in the status command to only count base nodes Motivation and Context --- We only want to show the connection count to base nodes in the base node status line. Currently it counts all connected peers regardless of type. How Has This Been Tested? --- Manually tested by listing all connections and then comparing that to the status line to check that it filters for base nodes * docs(comms): adds documentation for comms public interface (#4033) Description --- - adds docs on comms public interfaces Motivation and Context --- Documentation How Has This Been Tested? --- `cargo doc --no-deps` * refactor(dht): use CipherKey new type for diffie-hellman key (#4038) Description --- - Document the DHT public interface - Move forward layer to `outbound` module - Remove unnecessary allocation from `encrypt` helper - Return `CipherKey` new type from Diffie-Hellman helper function - Changes `encrypt` and `decrypt` to take in the `CipherKey` newtype - implements zeroize on drop for the `CipherKey` new type (a sketch follows the dependency notes below) Motivation and Context --- Documentation. `generate_ecdh_secret` returned a public key, which implies that it is safe to share publicly. The `CipherKey` new type will zero its contents before releasing its memory. The forward layer didn't have anything to do with SAF, so it shouldn't be located in that module. How Has This Been Tested? --- Existing tests pass. Manual testing, no breaking changes. `cargo doc --no-deps` * refactor(rpc-macros): split into smaller functions (clippy) (#4063) Description --- Split RPC function parsing into smaller functions Motivation and Context --- fixes the `too_many_lines` Clippy lint How Has This Been Tested? --- No `clippy::too_many_lines` error * fix: update daily test configuration (#4049) Description --- - adds an options parameter for the base node and wallet process start functions and updates calls as necessary - updates the base node sync daily test config to use the dibbler network - configures the base node process using cli overrides instead of environment variables Motivation and Context --- Configuration was refactored in #4005. Cucumber processes use localnet as the default. This PR sets the network for dailies to dibbler. How Has This Been Tested? --- Ran daily sync and recovery tests locally and checked that sync/recovery started. * refactor(comms): reduce length of long functions (clippy) (#4065) Description --- Reduces LOC for the `handle_request` and `handle_connection_manager_event` methods. Allows a long function for `NoiseSocket::poll_write_or_flush` Motivation and Context --- These methods exceeded the linter's maximum of 100 LOC. `poll_write_or_flush` is a poll function, so handling the continue-loop case vs the return case makes putting it in a separate function a little unwieldy and would possibly incur a small performance cost. How Has This Been Tested? --- Code compiles, existing tests pass, running a base node * test(cucumber): use separate FFI target dir (#4067) Description --- Uses `./temp/ffi-target` as the cargo target dir for FFI compilation. Motivation and Context --- The wallet and wallet FFI need to be recompiled for each run of the integration tests. This is because the compilation target differs between native and ffi, so previous compilations are overwritten. This PR sets a separate target dir for FFI so that subsequent runs do not need to recompile the wallet and wallet FFI.
How Has This Been Tested? --- Locally, the wallet and FFI do not need to be recompiled after the initial run. * fix(key-manager): remove floating point math from mnemonic code (#4064) Description --- - removes floating-point math from `mnemonic::from_bytes` (a sketch follows the dependency notes below) - masks the "last" byte before conversion to u8 (clippy) Motivation and Context --- This fixes clippy cast warnings and avoids the overhead of working with floating-point arithmetic How Has This Been Tested? --- This module is well tested and those tests pass. * feat(p2p): adds tor.forward_address setting (#4070) Description --- Adds `tor.forward_address` to instruct tor to forward traffic to a custom address Motivation and Context --- This setting is useful for docker setups where tor and the base node listener are accessible through DNS addresses. How Has This Been Tested? --- Manually: setting `tor.forward_address` and checking that traffic is forwarded through that address. * ci: fix coverage (#4071) Add llvm-tools-preview to the toolchain * chore: remove deprecated ExtendBytes, update EpochTime (#3914) Description --- - Removes references to `ExtendBytes` - `EpochTime` is no longer converted into `chrono` types Motivation and Context --- Ref https://github.com/tari-project/tari_utilities/pull/25 How Has This Been Tested? --- * chore(deps): bump ejs from 3.1.6 to 3.1.7 in /applications/tari_collectibles/web-app (#4057) Bumps [ejs](https://github.com/mde/ejs) from 3.1.6 to 3.1.7.
Release notes

Sourced from ejs's releases.

v3.1.7

Version 3.1.7

Commits
  • 820855a Version 3.1.7
  • 076dcb6 Don't use template literal
  • faf8b84 Skip test -- error message vary depending on JS runtime
  • c028c34 Update packages
  • e4180b4 Merge pull request #629 from markbrouwer96/main
  • d5404d6 Updated jsdoc to 3.6.7
  • 7b0845d Merge pull request #609 from mde/dependabot/npm_and_yarn/glob-parent-5.1.2
  • 32fb8ee Bump glob-parent from 5.1.1 to 5.1.2
  • f21a9e4 Merge pull request #603 from mde/mde-null-proto-where-possible
  • a50e46f Merge pull request #606 from akash-55/main
  • Additional commits viewable in compare view

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ejs&package-manager=npm_and_yarn&previous-version=3.1.6&new-version=3.1.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
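For `refactor(dht): use CipherKey new type for diffie-hellman key` (#4038) above, here is a minimal sketch of the newtype idea: the derived key is wrapped so it can no longer be confused with a shareable public key, and its bytes are wiped on drop via the `zeroize` crate (which the Cargo.lock hunk below adds to the DHT crate). The struct shape and methods are assumptions for illustration, not the actual `tari_comms_dht` type.

```rust
use zeroize::Zeroize;

/// Symmetric key derived via Diffie-Hellman. Unlike a public key it must never
/// be shared, and its bytes are wiped when it goes out of scope.
/// (Illustrative shape only, not the actual tari_comms_dht type.)
pub struct CipherKey([u8; 32]);

impl CipherKey {
    pub fn new(bytes: [u8; 32]) -> Self {
        CipherKey(bytes)
    }

    /// Expose the key material only to the encrypt/decrypt helpers.
    pub fn as_bytes(&self) -> &[u8; 32] {
        &self.0
    }
}

impl Drop for CipherKey {
    fn drop(&mut self) {
        // Zero the key material before the memory is released.
        self.0.zeroize();
    }
}

fn main() {
    let key = CipherKey::new([7u8; 32]);
    assert_eq!(key.as_bytes().len(), 32);
    // `key` is zeroed here as it is dropped.
}
```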
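For `fix(key-manager): remove floating point math from mnemonic code` (#4064) above, the kind of change involved can be sketched with integer arithmetic: the number of 11-bit word indices is a ceiling division, and values are masked before narrowing to `u8`. This is an illustrative assumption about the arithmetic, not the actual `mnemonic::from_bytes` implementation.

```rust
/// Number of 11-bit mnemonic word indices needed to cover `byte_len` bytes:
/// ceil(8 * byte_len / 11), computed without floating point.
fn word_count(byte_len: usize) -> usize {
    (8 * byte_len + 10) / 11
}

/// Keep only the low `bits` bits of a value before narrowing it to u8,
/// instead of relying on a lossy cast (assumed shape of the clippy fix).
fn mask_to_u8(value: u32, bits: u32) -> u8 {
    (value & ((1u32 << bits) - 1)) as u8
}

fn main() {
    // A 32-byte (256-bit) seed needs ceil(256 / 11) = 24 word indices.
    assert_eq!(word_count(32), 24);
    assert_eq!(mask_to_u8(0x1FF, 8), 0xFF);
}
```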
* feat(collectibles): add list assets command (#3908) Add the ability to list assets via the cli. Run `tari_collectibles --help` for details * fix: support safe non-interactive mode (#4072) Non-interactive mode should NEVER prompt the user. This PR clears one path where this happens: if a wallet does not exist and a password has not been provided so that we can auto-create one, the application should just exit. Providing passwords on the command line is VERY bad practice, since anyone with access to the machine can see the password in plaintext by inspecting the running jobs. The ability to read the password from the command line, config file or envar was removed in a previous PR, and this is rectified here. How Has This Been Tested? --- Manually, with various CLI and envar combinations * feat: allow network to be set by TARI_NETWORK env var (#4073) Description --- Allows the base directory to be set using the `TARI_BASE_DIR` environment variable. Allows the current network to be set by the `TARI_NETWORK` environment variable. Allows the password to be passed in using the environment variable `TARI_WALLET_PASSWORD` in `tari_console_wallet`. Motivation and Context --- In some environments, it is desirable to use environment variables instead of passing cli arguments. How Has This Been Tested? --- Setting env var `TARI_NETWORK=mainnet` and `dibbler` on the base node and wallet. Setting env var `TARI_WALLET_PASSWORD = 'xxx'` on the console wallet * chore: update launchpad backend (#4017) * feat: react_based launchpad frontend This is a long-lived feature branch. Do not merge it until complete. This commit disables many of the long-running CI tasks that are not relevant to this branch. HOWEVER, check the TODOs in the .github folder when this branch gets merged into `development` and remove them, OR cherry-pick to revert this specific commit. * Remove deprecated useBootstrapper option `useBootstrapper` is no longer a valid configuration option in tauri.conf.json, so we remove it. * Switch to public minideb base package The quay.io version of bitnami's minideb package does not seem to work anymore. The commit also leverages `install_packages` which removes the boilerplate of having to `apt update && apt install .. & rm /var/dpkg/...` Add platform (arm64 / amd64) to docker tag and switch minideb source Add the platform marker to all docker image tags * Add architecture-dependent image resolution Add a function to use the appropriate docker image based on CPU architecture, e.g. latest-arm64 for M1 chips, or latest-amd64 for Intel chips. * Configure static base node parameters in config file With the new config setup, some config setting names are no longer network-dependent, so they can be moved into the launchpad config.toml template. * Update docker configs for wallet and mm_proxy Moves static config variables to the config.toml file, and dynamic variables are updated according to the config variable names. * Pull out wallet command Keep the interface consistent with the rest of the code and provide a `wallet_cmd` function for the docker execution command arguments. Fixes a small envar string typo. * Update SHA3 miner image config * Update the config.toml [miner] section so that it connects to the wallet and base node * Pull command line arguments into their own procedure, in line with other images * Add multiple monero stagenet urls as default * fix: issues with launchpad backend (#4074) Description --- Fixes issues with launchpad backend. See individual commit messages for changes.
How Has This Been Tested? --- Manually * test: unignore working tests (#4020) Description --- - unignores `store_and_retrieve_blocks_from_contents` and `test_transaction_cancellation` - fixes flaky `store_and_retrieve_blocks_from_contents` test Motivation and Context --- These tests work How Has This Been Tested? --- Tests pass * Set up new Launchpad v2 with Tauri and CRA * port (copy) backend from previous launchpad to launchpad_v2 * call tari backend for list of available docker images * remove legacy main.rs from src-tauri * added .gitkeep to cra build directory to avoid CI panic * changed javascript build to build launchpad_v2 * Set up React project tree * Add ESLint * Add no-console eslint warning * Remove package-json.lock * Switch tabs to spaces * Add SVG icon components for extracted Figma icon set (#53) * Adding svg icon components for extracted Figma icon set * Add initial content to the Readme.md (#52) * Add initial content to the Readme.md * Add techs to Readme * add information about gui directory structure to readme Co-authored-by: tarnas * Adding styled components, replacing yarn.lock with package-lock * Updating README to use npm * Add colors & gradients, linting icon component files * Add theme files, refactoring * Add color variables to gradients where possible * feat: launchpad CI (#55) * Add CI job for launchpad * Update .circleci/config.yml Co-authored-by: Mateusz Tarnaski Co-authored-by: Mateusz Tarnaski * quick and dirty one-file dropdown * added label to select * extracted value and options and made it configurable in the select * extract Select to a reusable dumb component * refactor WithTheme HoC to correctly set displayName * extract transparent background to theme * fix types for styledComponents in Select * add basic tests to Select component * typescript children: ReactNode typing * change darkBackground prop name to inverted * use default export in Select component * move type declarations to separate types.ts files * fix test to use correct import * feat: add typography and fonts * add jsdoc to Select component * improve jsdoc * feat: main layout (#64) * Main Layout, tests and Prettier. 
- set up Main Layout - set up unit tests - add Prettier - add primitive UI components: Button, Switch and Logo * Fixes according to the PR review * improved styling with the inverted values * fix Select tests - pass correct theme * remove last darkBackground reference * avoid using optional chaining to not anger the CI gods * hooked up prettier to eslint and fixed all * rename MyListboxProps to SelectProps * feat(primitive): add text component refer to issue #74 * refactor: move globalStyles, set default Text type * remove unnecessary fragment, formatting * chore: eslint rule update and readme addition (#79) * add eslint curly spaces rule * add development practices to README * add examples of acceptable JSDoc for react component * Add missing property to jsdoc example Co-authored-by: Tom Co-authored-by: Tom * Fixed CI tests, refactor d.ts files * Removed GlobalStyle, moved fonts to App.css * lint fix with prettier after merge * dirty one file implementation of a box * move Box component to separate files * add all styles declaration to DefaultTheme in custom.d.ts * add jsdoc to Box component * add basic test for Box component * update Select tests to conform to test standard * fix Box test * Launchpad on Github Actions (#88) * Add tag component files, update theme * Add unit tests * More unit tests, changes from PR comments * Adjust the audit job (#91) * feat: footer & keyboardkeys (#86) * Add Footer and KeyboardKeys components * Fix lint issues * Fix typo * Improve Switch component (#89) * chore: tests for icons (#97) * Add tests for Icons * Replace require with import * layout for inactive state of base node * prepared layout for dark (running) base node * extract view component from base node container * connect base node container to store; improve styling * cleanup and document Loading component * fix unused payload in base node slice * add placeholder for "running" tag on basenode * add Running tag to a running base node container * add tests for Loading indicator * move base node store slice to /store/baseNode * reverse Network type import * feat: tabs and ts issue (#94) * Improve Tabs component and its usage in DashboardContainer * Upgrade Tabs component and fix TS * Fix text style type * Remove unused props and imports (#99) * Fix box sizing of the main container (#102) * Add Text test (#106) * Change the size of large tags to 26px (#104) * Polishing Select component: text color and spacing (#105) * prepared ui for password input * include tari signed in wallet password page * prepared main wallet layout (without inputs) * add loading prop to button * connect wallet to store * fix tabs component to memoize tabs content * move lines from wallet components to locales * remove Send funds button * add password input and disabling submit button below certain password length threshold * border-box sizing for box * change loading indicator to be relatively positioned in button * cleaned up Button disabled styling; removed unnecessary variants and types * allowing loading and disabled to be controlled separately * rename Chart icon component * add WalletContainer basic tests * make sure buttons always have the same height/width * Add password strength + smiley icons * Add TextInput component files, fix password strength icons, update themes * Add unit tests * Add JSDoc * Add unit tests for new icons, delete icon images from assets * Modal component * extracted modal styling to separate files * add modal tests * add tari wallet box with emoji toggle * extract wallet components and styles to 
separate files * justify base node to center * introduce CenteredLayout component to layout basenode and wallet containers centered * Add copy/paste/select keyboard functionality * dont clear password field * fix tari wallet id box * extract reusable Input component and built TextInput on top of it * add PasswordInput and use it in wallet password box * disabling input icon when whole component is disabled * set up static layout of settings modal * make the cancel button on settings "secondary" * fix buttons in icon for settings button in title bar * created static layout for wallet settings * extracted components to separate files, connected SettingsContainer to store * allow opening settings on specific page * connecting wallet settings page component to store * showing `running` indicator on wallet box conditionally * pending indicator on button in wallet settings instead of running tag * extracted styled components from CopyBox * cleaned up WalletSettings component with extracted styles and locales * dont block wallet settings if wallet is locked * Fixing Tag component background styling, App.tsx typo * Fixing svg icon colour attributes * feat: mining dashboard (#108) * Set up the Mining dashboard layout * Add handling mining node states * Add default mining box style and config * Fix typo and mining header styling * add JSDoc for copybox * remove Link component, use Button with href on wallet settings * moved react gui over to launchpad * update github actions to point to gui-react directory * remove launchpad_v2 workspace from root Cargo.toml * remove reference to tauri-apps/cli from gui-react (not needed) * Remove useBootstrapper * added `dev-vue` package.json script to launch vue version of the application in development mode * remove useBoostrapper flag from tauri.vue.conf.json after bumping to rc9 * remove tauri scripts from gui-react * Fix typography letter-spacing Co-authored-by: Martin Stefcek <35243812+Cifko@users.noreply.github.com> Co-authored-by: Mike the Tike Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: mrnaveira <47919901+mrnaveira@users.noreply.github.com> Co-authored-by: SW van Heerden Co-authored-by: Stan Bondi Co-authored-by: Cayle Sharrock Co-authored-by: tomaszantas Co-authored-by: Cormac Quaid Co-authored-by: Cormac Quaid <69508715+corquaid@users.noreply.github.com> --- .github/workflows/ci.yml | 29 -- .github/workflows/coverage.yml | 11 +- .github/workflows/development-ci.yml | 44 ++ Cargo.lock | 31 +- .../daily_tests/automatic_recovery_test.js | 4 +- .../daily_tests/automatic_sync_test.js | 17 +- .../launchpad/backend/assets/config.toml | 76 ++-- .../launchpad/backend/src/docker/settings.rs | 123 ++---- .../launchpad/backend/src/docker/workspace.rs | 14 +- applications/launchpad/build_images.sh | 19 +- .../launchpad/docker_rig/base_node.Dockerfile | 4 +- .../docker_rig/console_wallet.Dockerfile | 2 +- .../launchpad/docker_rig/mm_proxy.Dockerfile | 2 +- .../launchpad/docker_rig/monerod.Dockerfile | 4 +- .../docker_rig/sha3_miner.Dockerfile | 2 +- .../gui-react/src/components/Text/index.tsx | 2 +- .../gui-react/src/styles/themes/dark.ts | 2 + .../gui-react/src/styles/themes/light.ts | 1 - .../launchpad/gui-vue/package-lock.json | 7 +- applications/launchpad/gui-vue/src/store.js | 2 +- applications/launchpad/versions.txt | 2 +- applications/tari_app_grpc/Cargo.toml | 4 +- .../src/conversions/block_header.rs | 2 +- .../src/conversions/output_features.rs | 2 +- .../src/conversions/transaction_kernel.rs | 2 +- 
applications/tari_app_utilities/Cargo.toml | 6 +- .../tari_app_utilities/src/common_cli_args.rs | 3 +- .../src/identity_management.rs | 4 +- applications/tari_base_node/Cargo.toml | 6 +- applications/tari_base_node/src/builder.rs | 4 + applications/tari_base_node/src/cli.rs | 4 +- .../src/commands/command/header_stats.rs | 6 +- .../src/commands/command/list_connections.rs | 3 +- .../src/commands/command/list_peers.rs | 10 +- .../src/commands/command/status.rs | 13 +- applications/tari_base_node/src/config.rs | 2 + .../src/grpc/base_node_grpc_server.rs | 402 ++++++++++++++---- .../tari_collectibles/src-tauri/Cargo.toml | 16 +- .../tari_collectibles/src-tauri/src/cli.rs | 102 ++++- .../tari_collectibles/src-tauri/src/config.rs | 4 +- .../tari_collectibles/src-tauri/src/main.rs | 140 +----- .../web-app/package-lock.json | 130 +++--- applications/tari_console_wallet/Cargo.toml | 6 +- .../src/automation/command_parser.rs | 2 +- applications/tari_console_wallet/src/cli.rs | 4 +- .../src/grpc/wallet_grpc_server.rs | 4 +- .../tari_console_wallet/src/init/mod.rs | 10 +- applications/tari_console_wallet/src/main.rs | 14 +- .../tari_console_wallet/src/recovery.rs | 2 +- .../src/ui/components/assets_tab.rs | 2 +- .../src/ui/components/network_tab.rs | 2 +- .../src/ui/components/tokens_component.rs | 2 +- .../src/ui/state/app_state.rs | 3 +- .../tari_console_wallet/src/ui/ui_error.rs | 2 +- .../tari_console_wallet/src/utils/db.rs | 2 +- applications/tari_explorer/routes/index.js | 1 + applications/tari_explorer/views/index.hbs | 2 + .../tari_merge_mining_proxy/Cargo.toml | 6 +- .../tari_merge_mining_proxy/src/cli.rs | 2 +- applications/tari_miner/Cargo.toml | 4 +- applications/tari_miner/src/difficulty.rs | 4 +- applications/tari_miner/src/main.rs | 3 +- applications/tari_validator_node/Cargo.toml | 4 +- applications/tari_validator_node/src/cli.rs | 24 +- .../tari_web_extension/package-lock.json | 50 +-- applications/test_faucet/Cargo.toml | 4 +- base_layer/common_types/Cargo.toml | 4 +- base_layer/common_types/src/array.rs | 11 + base_layer/common_types/src/chain_metadata.rs | 2 +- .../src/types/bullet_rangeproofs.rs | 2 +- base_layer/core/Cargo.toml | 6 +- .../comms_interface/comms_request.rs | 2 +- .../comms_interface/inbound_handlers.rs | 3 +- base_layer/core/src/base_node/rpc/service.rs | 2 +- .../base_node/rpc/sync_utxos_by_block_task.rs | 2 +- .../state_machine_service/states/listening.rs | 2 +- .../core/src/base_node/sync/rpc/service.rs | 3 +- .../src/base_node/sync/rpc/sync_utxos_task.rs | 2 +- .../core/src/blocks/accumulated_data.rs | 3 +- base_layer/core/src/blocks/block.rs | 4 +- base_layer/core/src/blocks/block_header.rs | 109 ++--- .../core/src/blocks/historical_block.rs | 2 +- .../src/blocks/new_blockheader_template.rs | 2 +- .../src/chain_storage/block_add_result.rs | 2 +- .../src/chain_storage/blockchain_database.rs | 3 +- .../core/src/chain_storage/db_transaction.rs | 2 +- .../core/src/chain_storage/lmdb_db/lmdb.rs | 2 +- .../core/src/chain_storage/lmdb_db/lmdb_db.rs | 7 +- .../core/src/chain_storage/pruned_output.rs | 2 +- .../core/src/consensus/consensus_constants.rs | 2 +- base_layer/core/src/covenants/arguments.rs | 25 +- base_layer/core/src/covenants/byte_codes.rs | 71 +++- base_layer/core/src/covenants/context.rs | 2 +- base_layer/core/src/covenants/decoder.rs | 50 ++- base_layer/core/src/covenants/encoder.rs | 60 +++ base_layer/core/src/covenants/error.rs | 2 - base_layer/core/src/covenants/fields.rs | 251 +++++++++-- .../core/src/covenants/filters/field_eq.rs | 41 +- 
.../src/covenants/filters/fields_hashed_eq.rs | 22 - .../core/src/covenants/filters/filter.rs | 14 + .../core/src/covenants/filters/identity.rs | 20 + base_layer/core/src/covenants/macros.rs | 12 +- base_layer/core/src/covenants/token.rs | 58 ++- .../src/mempool/service/inbound_handlers.rs | 2 +- .../core/src/mempool/service/request.rs | 2 +- .../core/src/mempool/service/service.rs | 2 +- .../core/src/mempool/sync_protocol/mod.rs | 3 +- .../core/src/mempool/sync_protocol/test.rs | 2 +- .../unconfirmed_pool/unconfirmed_pool.rs | 3 +- .../core/src/proof_of_work/difficulty.rs | 2 +- .../core/src/proof_of_work/lwma_diff.rs | 2 +- .../proof_of_work/monero_rx/fixed_array.rs | 3 +- .../src/proof_of_work/monero_rx/helpers.rs | 15 +- .../core/src/proof_of_work/proof_of_work.rs | 2 +- base_layer/core/src/proof_of_work/sha3_pow.rs | 7 +- .../proof_of_work/target_difficulty_window.rs | 2 +- base_layer/core/src/proto/block.rs | 2 +- base_layer/core/src/proto/block_header.rs | 2 +- base_layer/core/src/proto/types_impls.rs | 2 +- base_layer/core/src/proto/utils.rs | 2 +- base_layer/core/src/transactions/fee.rs | 2 +- .../transaction_components/transaction.rs | 2 +- .../transaction_kernel.rs | 2 +- .../transaction_output.rs | 8 +- .../unblinded_output.rs | 8 +- .../proto/recipient_signed_message.rs | 2 +- .../proto/transaction_sender.rs | 2 +- .../src/validation/block_validators/orphan.rs | 2 +- .../src/validation/block_validators/test.rs | 2 +- .../core/src/validation/header_validator.rs | 2 +- base_layer/core/tests/async_db.rs | 3 +- base_layer/core/tests/base_node_rpc.rs | 3 +- .../chain_storage_tests/chain_backend.rs | 2 +- .../chain_storage_tests/chain_storage.rs | 8 +- .../core/tests/helpers/test_blockchain.rs | 2 +- base_layer/core/tests/node_service.rs | 2 +- base_layer/key_manager/Cargo.toml | 3 +- base_layer/key_manager/src/cipher_seed.rs | 2 +- base_layer/key_manager/src/error.rs | 2 +- base_layer/key_manager/src/mnemonic.rs | 31 +- base_layer/key_manager/src/wasm.rs | 2 +- base_layer/mmr/Cargo.toml | 4 +- base_layer/mmr/tests/merkle_proof.rs | 2 +- base_layer/mmr/tests/mutable_mmr.rs | 2 +- base_layer/mmr/tests/with_blake512_hash.rs | 2 +- base_layer/p2p/Cargo.toml | 4 +- base_layer/p2p/examples/gen_node_identity.rs | 2 +- base_layer/p2p/examples/gen_tor_identity.rs | 2 +- base_layer/p2p/src/initialization.rs | 13 +- base_layer/p2p/src/services/liveness/mock.rs | 2 +- base_layer/p2p/src/transport.rs | 23 +- base_layer/tari_mining_helper_ffi/Cargo.toml | 4 +- .../tari_mining_helper_ffi/src/error.rs | 2 +- base_layer/wallet/Cargo.toml | 4 +- .../src/contacts_service/storage/sqlite_db.rs | 2 +- base_layer/wallet/src/error.rs | 4 +- .../wallet/src/key_manager_service/error.rs | 4 +- .../wallet/src/key_manager_service/handle.rs | 9 +- .../src/key_manager_service/initializer.rs | 2 + .../src/key_manager_service/interface.rs | 15 + .../wallet/src/key_manager_service/mock.rs | 6 + .../storage/database/backend.rs | 9 +- .../storage/database/mod.rs | 11 + .../storage/sqlite_db/key_manager_state.rs | 9 + .../sqlite_db/key_manager_state_old.rs | 6 +- .../storage/sqlite_db/mod.rs | 3 + .../src/output_manager_service/error.rs | 3 +- .../storage/database/mod.rs | 2 +- .../storage/sqlite_db/new_output_sql.rs | 2 +- .../tasks/txo_validation_task.rs | 2 +- .../wallet/src/storage/sqlite_db/wallet.rs | 6 +- .../wallet/src/transaction_service/error.rs | 4 +- .../transaction_broadcast_protocol.rs | 2 +- .../protocols/transaction_receive_protocol.rs | 2 +- .../transaction_validation_protocol.rs | 2 +- 
.../transaction_service/storage/sqlite_db.rs | 2 +- .../wallet/src/utxo_scanner_service/error.rs | 2 +- .../utxo_scanner_service/utxo_scanner_task.rs | 27 +- .../transaction_service_tests/service.rs | 9 +- base_layer/wallet_ffi/Cargo.toml | 4 +- base_layer/wallet_ffi/src/tasks.rs | 2 +- common/config/presets/base_node.toml | 3 + common/src/exit_codes.rs | 2 +- comms/core/Cargo.toml | 3 +- comms/core/examples/stress/error.rs | 2 +- comms/core/examples/stress/prompt.rs | 2 +- comms/core/examples/stress/service.rs | 2 +- comms/core/examples/stress_test.rs | 2 +- comms/core/examples/tor.rs | 2 +- comms/core/examples/vanity_id.rs | 3 +- comms/core/src/backoff.rs | 3 + comms/core/src/bounded_executor.rs | 9 + comms/core/src/builder/comms_node.rs | 9 +- comms/core/src/builder/mod.rs | 86 +++- comms/core/src/compat.rs | 94 ---- comms/core/src/connection_manager/common.rs | 9 +- .../core/src/connection_manager/dial_state.rs | 3 + comms/core/src/connection_manager/dialer.rs | 3 +- .../{types.rs => direction.rs} | 0 comms/core/src/connection_manager/error.rs | 2 + comms/core/src/connection_manager/listener.rs | 3 +- comms/core/src/connection_manager/liveness.rs | 1 + comms/core/src/connection_manager/manager.rs | 10 +- comms/core/src/connection_manager/mod.rs | 14 +- .../src/connection_manager/peer_connection.rs | 5 +- comms/core/src/connectivity/config.rs | 1 + .../core/src/connectivity/connection_pool.rs | 2 + .../core/src/connectivity/connection_stats.rs | 2 + comms/core/src/connectivity/error.rs | 1 + comms/core/src/connectivity/manager.rs | 230 +++++----- comms/core/src/connectivity/mod.rs | 7 + comms/core/src/connectivity/requester.rs | 19 +- comms/core/src/connectivity/selection.rs | 25 +- comms/core/src/framing.rs | 1 + comms/core/src/lib.rs | 2 +- comms/core/src/macros.rs | 9 + comms/core/src/message/envelope.rs | 6 + comms/core/src/message/error.rs | 1 + comms/core/src/message/mod.rs | 37 +- comms/core/src/multiplexing/mod.rs | 2 + comms/core/src/multiplexing/yamux.rs | 1 + comms/core/src/net_address/mod.rs | 2 + comms/core/src/noise/config.rs | 2 +- comms/core/src/noise/mod.rs | 13 +- comms/core/src/noise/socket.rs | 3 +- .../core/src/peer_manager/connection_stats.rs | 4 +- comms/core/src/peer_manager/error.rs | 1 + .../src/peer_manager/identity_signature.rs | 4 +- comms/core/src/peer_manager/manager.rs | 2 +- comms/core/src/peer_manager/migrations/v5.rs | 2 +- comms/core/src/peer_manager/migrations/v6.rs | 2 +- comms/core/src/peer_manager/node_distance.rs | 4 + comms/core/src/peer_manager/node_id.rs | 3 +- comms/core/src/peer_manager/or_not_found.rs | 1 + comms/core/src/peer_manager/peer.rs | 19 +- comms/core/src/peer_manager/peer_features.rs | 9 + comms/core/src/peer_manager/peer_storage.rs | 6 +- comms/core/src/peer_manager/wrapper.rs | 2 +- comms/core/src/pipeline/builder.rs | 9 + comms/core/src/pipeline/inbound.rs | 3 + comms/core/src/pipeline/outbound.rs | 4 + comms/core/src/pipeline/sink.rs | 1 + comms/core/src/protocol/error.rs | 1 + comms/core/src/protocol/extensions.rs | 12 + comms/core/src/protocol/identity.rs | 11 + comms/core/src/protocol/messaging/error.rs | 2 + .../core/src/protocol/messaging/extension.rs | 1 + comms/core/src/protocol/messaging/inbound.rs | 1 + comms/core/src/protocol/messaging/mod.rs | 8 + comms/core/src/protocol/messaging/outbound.rs | 1 + comms/core/src/protocol/messaging/protocol.rs | 9 +- comms/core/src/protocol/negotiation.rs | 25 +- comms/core/src/protocol/protocols.rs | 12 + comms/core/src/protocol/rpc/client/pool.rs | 1 + 
comms/core/src/protocol/rpc/context.rs | 10 +- comms/core/src/protocol/rpc/message.rs | 16 +- comms/core/src/protocol/rpc/mod.rs | 6 +- comms/core/src/protocol/rpc/server/mock.rs | 2 + comms/core/src/protocol/rpc/server/router.rs | 1 + .../src/protocol/rpc/test/greeting_service.rs | 2 +- comms/core/src/protocol/rpc/test/mock.rs | 1 + comms/core/src/protocol/rpc/test/smoke.rs | 2 +- comms/core/src/rate_limit.rs | 3 +- comms/core/src/runtime.rs | 2 + comms/core/src/socks/client.rs | 1 + comms/core/src/socks/mod.rs | 6 +- comms/core/src/stream_id.rs | 4 + comms/core/src/tor/mod.rs | 6 +- comms/core/src/transports/mod.rs | 8 + comms/core/src/transports/socks.rs | 2 + comms/core/src/types.rs | 2 + comms/core/src/utils/datetime.rs | 19 +- comms/core/src/utils/mod.rs | 2 + comms/dht/Cargo.toml | 5 +- comms/dht/examples/propagation/prompt.rs | 2 +- comms/dht/src/actor.rs | 22 +- comms/dht/src/broadcast_strategy.rs | 13 + comms/dht/src/builder.rs | 43 +- comms/dht/src/config.rs | 7 +- comms/dht/src/connectivity/mod.rs | 29 +- comms/dht/src/crypt.rs | 47 +- comms/dht/src/dedup/dedup_cache.rs | 3 +- comms/dht/src/dedup/mod.rs | 4 + comms/dht/src/dht.rs | 6 +- comms/dht/src/discovery/mod.rs | 14 + comms/dht/src/domain_message.rs | 14 +- comms/dht/src/envelope.rs | 23 +- comms/dht/src/event.rs | 1 + comms/dht/src/inbound/decryption.rs | 5 +- .../src/{store_forward => inbound}/forward.rs | 37 +- comms/dht/src/inbound/mod.rs | 6 + comms/dht/src/lib.rs | 78 +--- comms/dht/src/logging_middleware.rs | 2 + comms/dht/src/outbound/broadcast.rs | 11 +- comms/dht/src/outbound/error.rs | 3 +- comms/dht/src/outbound/mod.rs | 4 +- comms/dht/src/peer_validator.rs | 3 + comms/dht/src/rpc/mod.rs | 2 + comms/dht/src/storage/connection.rs | 10 + comms/dht/src/storage/database.rs | 6 + comms/dht/src/storage/dht_setting_entry.rs | 3 + comms/dht/src/storage/error.rs | 1 + comms/dht/src/storage/mod.rs | 2 + comms/dht/src/store_forward/config.rs | 1 + comms/dht/src/store_forward/error.rs | 1 + comms/dht/src/store_forward/local_state.rs | 3 +- comms/dht/src/store_forward/mod.rs | 5 +- .../src/store_forward/saf_handler/layer.rs | 1 + .../dht/src/store_forward/saf_handler/task.rs | 5 +- comms/dht/src/store_forward/service.rs | 26 +- comms/dht/src/store_forward/store.rs | 1 + comms/dht/src/test_utils/makers.rs | 4 +- comms/dht/src/test_utils/mod.rs | 2 + comms/dht/src/version.rs | 7 + comms/dht/tests/dht.rs | 2 +- comms/rpc_macros/src/expand.rs | 83 ++-- dan_layer/core/Cargo.toml | 4 +- dan_layer/core/src/models/asset_definition.rs | 2 +- dan_layer/core/src/services/asset_proxy.rs | 2 +- .../core/src/templates/tip002_template.rs | 2 +- .../core/src/templates/tip004_template.rs | 3 +- .../core/src/templates/tip721_template.rs | 2 +- .../core/src/workers/states/decide_state.rs | 2 +- dan_layer/storage_sqlite/Cargo.toml | 2 +- infrastructure/storage/Cargo.toml | 2 +- infrastructure/storage/tests/lmdb.rs | 12 - infrastructure/tari_script/Cargo.toml | 4 +- .../features/WalletRoutingMechanism.feature | 81 ++-- .../features/support/wallet_cli_steps.js | 2 +- integration_tests/features/support/world.js | 2 +- integration_tests/helpers/baseNodeProcess.js | 10 +- integration_tests/helpers/ffi/ffiInterface.js | 5 +- integration_tests/helpers/walletProcess.js | 8 +- integration_tests/package-lock.json | 6 +- 334 files changed, 2575 insertions(+), 1611 deletions(-) create mode 100644 .github/workflows/development-ci.yml delete mode 100644 comms/core/src/compat.rs rename comms/core/src/connection_manager/{types.rs => direction.rs} (100%) 
rename comms/dht/src/{store_forward => inbound}/forward.rs (85%) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 2bdf05e6eb..b273ed27c1 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -21,35 +21,6 @@ env: PROTOC: protoc jobs: - checks: - name: npm checks - runs-on: ubuntu-latest - steps: - - name: checkout - uses: actions/checkout@v2 - - name: npm audit launchpad_v2 gui - run: | - cd applications/launchpad_v2 - npm audit - - name: npm audit collectibles - run: | - cd applications/tari_collectibles/web-app - # We have to ignore this for now because audit error is in react-scripts - npm audit || true - - name: npm audit explorer - run: | - cd applications/tari_explorer - npm audit - - name: npm audit web extensions - run: | - cd applications/tari_web_extension - # We have to ignore this for now because audit error is in react-scripts - npm audit || true - - name: npm audit web extensions example - run: | - cd applications/tari_web_extension_example - npm audit - clippy: name: clippy runs-on: ubuntu-18.04 diff --git a/.github/workflows/coverage.yml b/.github/workflows/coverage.yml index bb9248864c..c47ed3aabe 100644 --- a/.github/workflows/coverage.yml +++ b/.github/workflows/coverage.yml @@ -18,7 +18,8 @@ jobs: uses: actions-rs/toolchain@v1 with: toolchain: ${{ env.toolchain }} - - uses: Swatinem/rust-cache@v1 + override: true + components: llvm-tools-preview - name: ubuntu dependencies run: | sudo apt-get update && \ @@ -28,7 +29,6 @@ jobs: pkg-config \ libsqlite3-dev \ clang-10 \ - clang \ git \ cmake \ libc++-dev \ @@ -47,12 +47,9 @@ jobs: libappindicator3-dev \ patchelf \ librsvg2-dev \ - - name: test key manager wasm + - name: install grcov run: | - npm install -g wasm-pack - cd base_layer/key_manager - rustup target add wasm32-unknown-unknown - make test + cargo install grcov - name: cargo test compile uses: actions-rs/cargo@v1 with: diff --git a/.github/workflows/development-ci.yml b/.github/workflows/development-ci.yml new file mode 100644 index 0000000000..eca7f2a2d0 --- /dev/null +++ b/.github/workflows/development-ci.yml @@ -0,0 +1,44 @@ +on: + push: + branches: + - development + - main + +name: Development CI + +env: + toolchain: nightly-2021-11-20 + CARGO_HTTP_MULTIPLEXING: false + CARGO_TERM_COLOR: always + PROTOC: protoc + +jobs: + checks: + name: npm checks + runs-on: ubuntu-latest + steps: + - name: checkout + uses: actions/checkout@v2 + - name: npm audit launchpad gui + run: | + cd applications/launchpad/gui-vue + npm audit + - name: npm audit collectibles + run: | + cd applications/tari_collectibles/web-app + # We have to ignore this for now because audit error is in react-scripts + npm audit || true + - name: npm audit explorer + run: | + cd applications/tari_explorer + npm audit + - name: npm audit web extensions + run: | + cd applications/tari_web_extension + # We have to ignore this for now because audit error is in react-scripts + npm audit || true + - name: npm audit web extensions example + run: | + cd applications/tari_web_extension_example + npm audit + diff --git a/Cargo.lock b/Cargo.lock index 06f02f1950..2b72b7fd14 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -397,15 +397,6 @@ dependencies = [ "cc", ] -[[package]] -name = "base58-monero" -version = "0.2.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b4b40d07a9459c8d0d60cf7e7935748fae3f401263c38a8a120be6c0a2be566d" -dependencies = [ - "thiserror", -] - [[package]] name = "base58-monero" version = "0.3.2" @@ -3876,7 +3867,7 @@ 
version = "0.13.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5a7038b6ba92588189248fbb4f8b2744d4918a9732f826e414814a50c168dca3" dependencies = [ - "base58-monero 0.3.2", + "base58-monero", "curve25519-dalek", "fixed-hash", "hex", @@ -6774,6 +6765,7 @@ dependencies = [ "tari_shutdown", "tari_storage", "tari_test_utils 0.31.1", + "tari_utilities", "tempfile", "thiserror", "tokio 1.17.0", @@ -6827,6 +6819,7 @@ dependencies = [ "tokio 1.17.0", "tokio-stream", "tower", + "zeroize", ] [[package]] @@ -6952,8 +6945,8 @@ dependencies = [ [[package]] name = "tari_crypto" -version = "0.12.5" -source = "git+https://github.com/tari-project/tari-crypto.git?tag=v0.12.5#740abb56e4a5240190fde8777e83a92611521a2d" +version = "0.13.0" +source = "git+https://github.com/tari-project/tari-crypto.git?tag=v0.13.0#c4dd4c0e53528720642b54f42083c6d9e392ee29" dependencies = [ "base64 0.10.1", "blake2", @@ -7066,6 +7059,7 @@ dependencies = [ "strum_macros 0.22.0", "tari_common_types", "tari_crypto", + "tari_utilities", "thiserror", "wasm-bindgen", "wasm-bindgen-test", @@ -7333,21 +7327,16 @@ dependencies = [ [[package]] name = "tari_utilities" -version = "0.3.1" -source = "git+https://github.com/tari-project/tari_utilities.git?tag=v0.3.1#056603a99ac1d09723fadd955baf5d5dc53a909a" +version = "0.4.3" +source = "git+https://github.com/tari-project/tari_utilities.git?tag=v0.4.3#bd328a01ed8f2fec0661e0bd39ea904e3be961cf" dependencies = [ - "base58-monero 0.2.1", - "base64 0.10.1", + "base58-monero", + "base64 0.13.0", "bincode", - "bitflags 1.3.2", - "chrono", - "clear_on_drop", "newtype-ops", - "rand 0.7.3", "serde", "serde_json", "thiserror", - "time 0.3.9", ] [[package]] diff --git a/applications/daily_tests/automatic_recovery_test.js b/applications/daily_tests/automatic_recovery_test.js index c305f92e89..de5da6e4c8 100644 --- a/applications/daily_tests/automatic_recovery_test.js +++ b/applications/daily_tests/automatic_recovery_test.js @@ -91,7 +91,9 @@ async function run(options = {}) { let recoveredAmount = 0; if (recoveredAmountMatch[2] === "T") { // convert to micro tari - recoveredAmount = round(parseFloat(recoveredAmountMatch[1]) * 1000000); + recoveredAmount = Math.round( + parseFloat(recoveredAmountMatch[1]) * 1000000 + ); } else { recoveredAmount = parseInt(recoveredAmountMatch[1]); } diff --git a/applications/daily_tests/automatic_sync_test.js b/applications/daily_tests/automatic_sync_test.js index 12b0af183a..93f46ae4c5 100644 --- a/applications/daily_tests/automatic_sync_test.js +++ b/applications/daily_tests/automatic_sync_test.js @@ -7,7 +7,7 @@ const path = require("path"); const helpers = require("./helpers"); const BaseNodeProcess = require("integration_tests/helpers/baseNodeProcess"); -const NETWORK = "DIBBLER"; +const NETWORK = "dibbler"; const SyncType = { Archival: "Archival", @@ -59,12 +59,14 @@ async function run(options) { const baseNode = new BaseNodeProcess("compile", true); await baseNode.init(); - // Bypass tor for outbound TCP (faster but less private) - process.env[`TARI_BASE_NODE__${NETWORK}__TOR_PROXY_BYPASS_FOR_OUTBOUND_TCP`] = - "true"; + let config = { + // Bypass tor for outbound TCP (faster but less private) + [`${NETWORK}.base_node.p2p.transport.tor.proxy_bypass_for_outbound_tcp`]: true, + }; + // Set pruning horizon in config file if `pruned` command line arg is present if (options.syncType === SyncType.Pruned) { - process.env[`TARI_BASE_NODE__${NETWORK}__PRUNING_HORIZON`] = 20; + config[`${NETWORK}.base_node.storage.pruning_horizon`] = 20; } if 
(options.forceSyncPeer) { @@ -73,11 +75,10 @@ async function run(options) { forcedSyncPeersStr = options.forceSyncPeer.join(","); } console.log(`Force sync peer set to ${forcedSyncPeersStr}`); - process.env[`TARI_BASE_NODE__${NETWORK}__FORCE_SYNC_PEERS`] = - forcedSyncPeersStr; + config[`${NETWORK}.base_node.force_sync_peers`] = forcedSyncPeersStr; } - await baseNode.start(); + await baseNode.start({ network: NETWORK, config }); await fs.mkdir(path.dirname(options.log), { recursive: true }); let logfile = await fs.open(options.log, "w+"); diff --git a/applications/launchpad/backend/assets/config.toml b/applications/launchpad/backend/assets/config.toml index dd2d21eaeb..6dc8acd230 100644 --- a/applications/launchpad/backend/assets/config.toml +++ b/applications/launchpad/backend/assets/config.toml @@ -1,5 +1,32 @@ -# Config for launchpad v0.0.2 -[common] +# Config for launchpad v1.0.0 +[base_node] +network = "dibbler" +grpc_address = "/ip4/0.0.0.0/tcp/18142" +override_from = "dibbler" + +[base_node.storage] +track_reorgs = true + +[dibbler.base_node] +identity_file = "/var/tari/base_node/config/dibbler/tari_base_node_id.json" + +[igor.base_node] +network = "igor" +base_node_identity_file = "/var/tari/base_node/config/igor/base_node_id.json" + +[base_node.p2p] +auxiliary_tcp_listener_address = "/dns4/base_node/tcp/18189" + +[base_node.p2p.transport] +type = "tor" + +[base_node.p2p.transport.tor] +control_auth = "password=tari" +socks_address_override = "/dns4/tor/tcp/9050" +control_address = "/dns4/tor/tcp/9051" + +[base_node.p2p.transport.tcp] +listener_address = "/dns4/base_node/tcp/18189" [dibbler.p2p.seeds] dns_seeds = ["seeds.dibbler.tari.com"] @@ -24,58 +51,39 @@ peer_seeds = [ "544ed2baed414307e119d12894e27f9ddbdfa2fd5b6528dc843f27903e951c30::/ip4/13.40.189.176/tcp/18189" ] -[base_node] -network = "dibbler" -grpc_address = "/ip4/0.0.0.0/tcp/18142" - -[base_node.storage] -track_reorgs = true - -[dibbler.base_node] -identity_file = "/var/tari/base_node/config/dibbler/tari_base_node_id.json" - -[igor.base_node] -network = "igor" -base_node_identity_file = "/var/tari/base_node/config/igor/base_node_id.json" - -[base_node.p2p.transport] -tor.control_auth = "password=tari" -#tcp.listener_address = "/dns4/base_node/tcp/18189" -tor.socks_address_override = "/dns4/tor/tcp/9050" -tor.control_address = "/dns4/tor/tcp/9051" - - [wallet] override_from = "dibbler" db_file = "wallet/wallet.dat" grpc_address = "/ip4/0.0.0.0/tcp/18143" password = "tari" +use_libtor = false [wallet.p2p] [wallet.p2p.transport] type = "tor" -tor.control_auth = "password=tari" -tor.control_address = "/dns4/tor/tcp/9051" -tor.socks_address_override = "/dns4/tor/tcp/9050" + +[wallet.p2p.transport.tor] +control_auth = "password=tari" +socks_address_override = "/dns4/tor/tcp/9050" +control_address = "/dns4/tor/tcp/9051" + +[wallet.p2p.transport.tcp] +listener_address = "/dns4/wallet/tcp/18188" [dibbler.wallet] network = "dibbler" -#use_libtor = false -#tor_onion_port = 18141 [igor.wallet] network = "igor" -#use_libtor = false -#tor_onion_port = 18141 [miner] -base_node_grpc_address = "/dns4/base_node/tcp/18142" -wallet_grpc_address = "/dns4/wallet/tcp/18143" +base_node_addr = "/dns4/base_node/tcp/18142" +wallet_addr = "/dns4/wallet/tcp/18143" mine_on_tip_only = true +num_mining_threads = 1 [merge_mining_proxy] -#config = "dibbler" monerod_url = [ # stagenet "http://stagenet.xmr-tw.org:38081", "http://stagenet.community.xmr.to:38081", @@ -90,5 +98,3 @@ submit_to_origin = true monerod_username = "" monerod_password = "" 
monerod_use_auth = false - - diff --git a/applications/launchpad/backend/src/docker/settings.rs b/applications/launchpad/backend/src/docker/settings.rs index 8d8461cf4d..f74b1e3521 100644 --- a/applications/launchpad/backend/src/docker/settings.rs +++ b/applications/launchpad/backend/src/docker/settings.rs @@ -36,7 +36,11 @@ use crate::docker::{models::ImageType, TariNetwork}; pub const DEFAULT_MINING_ADDRESS: &str = "5AJ8FwQge4UjT9Gbj4zn7yYcnpVQzzkqr636pKto59jQcu85CFsuYVeFgbhUdRpiPjUCkA4sQtWApUzCyTMmSigFG2hDo48"; -pub const DEFAULT_MONEROD_URL: &str = "http://monero-stagenet.exan.tech:38081"; +pub const DEFAULT_MONEROD_URL: &str = "http://stagenet.xmr-tw.org:38081,\ +http://stagenet.community.xmr.to:38081,\ +http://monero-stagenet.exan.tech:38081,\ +http://xmr-lux.boldsuck.org:38081,\ +http://singapore.node.xmr.pm:38081"; #[derive(Default, Debug, Serialize, Deserialize)] pub struct BaseNodeConfig { @@ -275,11 +279,11 @@ impl LaunchpadConfig { /// Return the command line arguments we want for the given container execution. pub fn command(&self, image_type: ImageType) -> Vec { match image_type { - ImageType::BaseNode => vec!["--non-interactive-mode".to_string()], - ImageType::Wallet => vec!["--non-interactive-mode".to_string()], + ImageType::BaseNode => self.base_node_cmd(), + ImageType::Wallet => self.wallet_cmd(), ImageType::XmRig => self.xmrig_cmd(), - ImageType::Sha3Miner => vec![], - ImageType::MmProxy => vec![], + ImageType::Sha3Miner => self.miner_cmd(), + ImageType::MmProxy => self.mm_proxy_cmd(), ImageType::Tor => self.tor_cmd(), ImageType::Monerod => self.monerod_cmd(), ImageType::Frontail => self.frontail_cmd(), @@ -313,6 +317,26 @@ impl LaunchpadConfig { args.into_iter().map(String::from).collect() } + fn base_node_cmd(&self) -> Vec { + let args = vec!["--non-interactive-mode", "--log-config=/var/tari/config/log4rs.yml"]; + args.into_iter().map(String::from).collect() + } + + fn wallet_cmd(&self) -> Vec { + let args = vec!["--non-interactive-mode", "--log-config=/var/tari/config/log4rs.yml"]; + args.into_iter().map(String::from).collect() + } + + fn miner_cmd(&self) -> Vec { + let args = vec!["--log-config=/var/tari/config/log4rs.yml"]; + args.into_iter().map(String::from).collect() + } + + fn mm_proxy_cmd(&self) -> Vec { + let args = vec!["--log-config=/var/tari/config/log4rs.yml"]; + args.into_iter().map(String::from).collect() + } + fn xmrig_cmd(&self) -> Vec { let args = vec![ "--url=mm_proxy:18081", @@ -387,26 +411,10 @@ impl LaunchpadConfig { } fn base_node_tor_config(&self, env: &mut Vec) { - env.append(&mut vec![ - format!("TARI_BASE_NODE__{}__TRANSPORT=tor", self.tari_network.upper_case()), - format!( - "TARI_BASE_NODE__{}__TOR_CONTROL_AUTH=password={}", - self.tari_network.upper_case(), - self.tor_control_password - ), - format!( - "TARI_BASE_NODE__{}__TOR_FORWARD_ADDRESS=/dns4/base_node/tcp/18189", - self.tari_network.upper_case() - ), - format!( - "TARI_BASE_NODE__{}__TOR_SOCKS_ADDRESS_OVERRIDE=/dns4/tor/tcp/9050", - self.tari_network.upper_case() - ), - format!( - "TARI_BASE_NODE__{}__TOR_CONTROL_ADDRESS=/dns4/tor/tcp/9051", - self.tari_network.upper_case() - ), - ]); + env.append(&mut vec![format!( + "TARI_BASE_NODE__P2P__TRANSPORT__TOR__CONTROL_AUTH=password={}", + self.tor_control_password + )]); } /// Generate the vector of ENVAR strings for the docker environment @@ -417,19 +425,9 @@ impl LaunchpadConfig { env.append(&mut vec![ format!("WAIT_FOR_TOR={}", base_node.delay.as_secs()), format!( - "TARI_COMMON__{}__DATA_DIR=/blockchain/{}", - 
self.tari_network.upper_case(), + "TARI_BASE_NODE__DATA_DIR=/blockchain/{}", self.tari_network.lower_case() ), - format!( - "TARI_BASE_NODE__{}__TCP_LISTENER_ADDRESS=/dns4/base_node/tcp/18189", - self.tari_network.upper_case() - ), - format!("TARI_BASE_NODE__{}__GRPC_ENABLED=1", self.tari_network.upper_case()), - format!( - "TARI_BASE_NODE__{}__GRPC_BASE_NODE_ADDRESS=0.0.0.0:18142", - self.tari_network.upper_case() - ), "APP_NAME=base_node".to_string(), ]); } @@ -446,28 +444,10 @@ impl LaunchpadConfig { "SHELL=/bin/bash".to_string(), "TERM=linux".to_string(), format!("TARI_WALLET_PASSWORD={}", config.password), - format!("TARI_WALLET__{}__TRANSPORT=tor", self.tari_network.upper_case()), format!( - "TARI_WALLET__{}__TOR_CONTROL_AUTH=password={}", - self.tari_network.upper_case(), + "TARI_WALLET__P2P__TRANSPORT__TOR__CONTROL_AUTH=password={}", self.tor_control_password ), - format!( - "TARI_WALLET__{}__TOR_CONTROL_ADDRESS=/dns4/tor/tcp/9051", - self.tari_network.upper_case() - ), - format!( - "TARI_WALLET__{}__TOR_SOCKS_ADDRESS_OVERRIDE=/dns4/tor/tcp/9050", - self.tari_network.upper_case() - ), - format!( - "TARI_WALLET__{}__TOR_FORWARD_ADDRESS=/dns4/wallet/tcp/18188", - self.tari_network.upper_case() - ), - format!( - "TARI_WALLET__{}__TCP_LISTENER_ADDRESS=/dns4/wallet/tcp/18188", - self.tari_network.upper_case() - ), ]); } env @@ -521,35 +501,10 @@ impl LaunchpadConfig { format!("WAIT_FOR_TOR={}", config.delay.as_secs() + 6), "APP_NAME=mm_proxy".to_string(), "APP_EXEC=tari_merge_mining_proxy".to_string(), - format!( - "TARI_BASE_NODE__{}__GRPC_BASE_NODE_ADDRESS=/dns4/base_node/tcp/18142", - self.tari_network.upper_case() - ), - "TARI_WALLET__GRPC_ADDRESS=/dns4/wallet/tcp/18143".to_string(), - format!( - "TARI_MERGE_MINING_PROXY__{}__MONEROD_URL={}", - self.tari_network.upper_case(), - config.monerod_url - ), - format!( - "TARI_MERGE_MINING_PROXY__{}__MONEROD_USERNAME={}", - self.tari_network.upper_case(), - config.monero_username - ), - format!( - "TARI_MERGE_MINING_PROXY__{}__MONEROD_PASSWORD={}", - self.tari_network.upper_case(), - config.monero_password - ), - format!( - "TARI_MERGE_MINING_PROXY__{}__MONEROD_USE_AUTH={}", - self.tari_network.upper_case(), - config.monero_use_auth() - ), - format!( - "TARI_MERGE_MINING_PROXY__{}__PROXY_HOST_ADDRESS=0.0.0.0:18081", - self.tari_network.upper_case() - ), + format!("TARI_MERGE_MINING_PROXY__MONEROD_URL={}", config.monerod_url), + format!("TARI_MERGE_MINING_PROXY__MONEROD_USERNAME={}", config.monero_username), + format!("TARI_MERGE_MINING_PROXY__MONEROD_PASSWORD={}", config.monero_password), + format!("TARI_MERGE_MINING_PROXY__MONEROD_USE_AUTH={}", config.monero_use_auth()), ]); } env diff --git a/applications/launchpad/backend/src/docker/workspace.rs b/applications/launchpad/backend/src/docker/workspace.rs index 0aaafa709e..1c0419fcd1 100644 --- a/applications/launchpad/backend/src/docker/workspace.rs +++ b/applications/launchpad/backend/src/docker/workspace.rs @@ -212,10 +212,22 @@ impl TariWorkspace { /// It also lets power users customise which version of docker images they want to run in the workspace. pub fn fully_qualified_image(image: ImageType, registry: Option<&str>, tag: Option<&str>) -> String { let reg = registry.unwrap_or(DEFAULT_REGISTRY); - let tag = tag.unwrap_or(DEFAULT_TAG); + let tag = Self::arch_specific_tag(tag); format!("{}/{}:{}", reg, image.image_name(), tag) } + /// Returns an architecture-specific tag based on the current CPU and the given label. e.g. 
+ /// `arch_specific_tag(Some("v1.0"))` returns `"v1.0-arm64"` on M1 chips, and `v1.0-amd64` on Intel and AMD chips. + pub fn arch_specific_tag(label: Option<&str>) -> String { + let label = label.unwrap_or(DEFAULT_TAG); + let platform = match std::env::consts::ARCH { + "x86_64" => "amd64", + "aarch64" => "arm64", + _ => "unsupported", + }; + format!("{}-{}", label, platform) + } + /// Starts the Tari workspace recipe. /// /// This is an MVP / PoC version that starts everything in one go, but TODO, should really take some sort of recipe diff --git a/applications/launchpad/build_images.sh b/applications/launchpad/build_images.sh index d8b09f7b94..2950662092 100755 --- a/applications/launchpad/build_images.sh +++ b/applications/launchpad/build_images.sh @@ -1,11 +1,12 @@ #!/bin/bash source versions.txt +platform=${BUILD_PLATFORM:-amd64} build_image() { echo "Building $1 image v$VERSION.." - docker build -f docker_rig/$1 --build-arg ARCH=native --build-arg FEATURES=avx2 --build-arg VERSION=$VERSION $3 $4 -t quay.io/tarilabs/$2:latest ./../.. - docker tag quay.io/tarilabs/$2:latest quay.io/tarilabs/$2:$VERSION - docker push quay.io/tarilabs/$2:latest + docker build -f docker_rig/$1 --build-arg ARCH=native --build-arg FEATURES=avx2 --build-arg VERSION=$VERSION $3 $4 -t quay.io/tarilabs/$2:latest-$platform ./../.. + docker tag quay.io/tarilabs/$2:latest-$platform quay.io/tarilabs/$2:$VERSION-$platform + docker push quay.io/tarilabs/$2:latest-$platform docker push quay.io/tarilabs/$2:$VERSION } @@ -17,12 +18,12 @@ build_image tor.Dockerfile tor build_image monerod.Dockerfile monerod echo "Building XMRig image v$VERSION (XMRig v$XMRIG_VERSION)" -docker build -f docker_rig/xmrig.Dockerfile --build-arg VERSION=$VERSION --build-arg XMRIG_VERSION=$XMRIG_VERSION -t quay.io/tarilabs/xmrig:latest ./../.. -docker tag quay.io/tarilabs/xmrig:latest quay.io/tarilabs/xmrig:$VERSION -docker push quay.io/tarilabs/xmrig:latest +docker build -f docker_rig/xmrig.Dockerfile --build-arg VERSION=$VERSION --build-arg XMRIG_VERSION=$XMRIG_VERSION -t quay.io/tarilabs/xmrig:latest-$platform ./../.. 
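
The workspace.rs and build_images.sh hunks above move the launchpad to architecture-specific image tags: the build script appends `-$platform` (from `BUILD_PLATFORM`, defaulting to amd64) to the tags it pushes, and `TariWorkspace::fully_qualified_image` now derives a matching suffix from the host CPU via `arch_specific_tag`. A standalone sketch of that tag logic, for illustration only; "latest" stands in for `DEFAULT_TAG`, and the `quay.io/tarilabs/tari_base_node` image reference is an assumption rather than something taken from the patch:

    // Re-statement of the arch_specific_tag logic from workspace.rs, as a runnable example.
    fn arch_specific_tag(label: Option<&str>) -> String {
        let label = label.unwrap_or("latest"); // assumed stand-in for DEFAULT_TAG
        let platform = match std::env::consts::ARCH {
            "x86_64" => "amd64",
            "aarch64" => "arm64",
            _ => "unsupported",
        };
        format!("{}-{}", label, platform)
    }

    fn main() {
        // On an Intel/AMD host this prints "quay.io/tarilabs/tari_base_node:latest-amd64",
        // matching the `latest-$platform` tags pushed by build_images.sh above.
        println!("quay.io/tarilabs/tari_base_node:{}", arch_specific_tag(None));
    }

To produce the arm64 variants, the script would be invoked with the new environment variable, e.g. `BUILD_PLATFORM=arm64 ./build_images.sh`.
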
+docker tag quay.io/tarilabs/xmrig:latest-$platform quay.io/tarilabs/xmrig:$VERSION-$platform +docker push quay.io/tarilabs/xmrig:latest-$platform docker push quay.io/tarilabs/xmrig:$VERSION -docker build -f docker_rig/frontail.Dockerfile -t quay.io/tarilabs/frontail ./docker_rig -docker tag quay.io/tarilabs/frontail:latest quay.io/tarilabs/frontail:$VERSION -docker push quay.io/tarilabs/frontail:latest +docker build -f docker_rig/frontail.Dockerfile -t quay.io/tarilabs/frontail:latest-$platform ./docker_rig +docker tag quay.io/tarilabs/frontail:latest-$platform quay.io/tarilabs/frontail:$VERSION-$platform +docker push quay.io/tarilabs/frontail:latest-$platform docker push quay.io/tarilabs/frontail:$VERSION diff --git a/applications/launchpad/docker_rig/base_node.Dockerfile b/applications/launchpad/docker_rig/base_node.Dockerfile index a89768bbc7..42605a48fa 100644 --- a/applications/launchpad/docker_rig/base_node.Dockerfile +++ b/applications/launchpad/docker_rig/base_node.Dockerfile @@ -55,11 +55,11 @@ ADD meta meta RUN cargo build --bin tari_base_node --release --features $FEATURES --locked # Create a base minimal image for the executables -FROM quay.io/bitnami/minideb:bullseye as base +FROM bitnami/minideb:bullseye as base ARG VERSION=1.0.1 # Disable Prompt During Packages Installation ARG DEBIAN_FRONTEND=noninteractive -RUN apt update && apt -y install \ +RUN install_packages \ apt-transport-https \ bash \ ca-certificates \ diff --git a/applications/launchpad/docker_rig/console_wallet.Dockerfile b/applications/launchpad/docker_rig/console_wallet.Dockerfile index 33895da944..9fb34e28ed 100644 --- a/applications/launchpad/docker_rig/console_wallet.Dockerfile +++ b/applications/launchpad/docker_rig/console_wallet.Dockerfile @@ -55,7 +55,7 @@ ADD meta meta RUN cargo build --bin tari_console_wallet --release --features $FEATURES --locked # Create a base minimal image for the executables -FROM quay.io/bitnami/minideb:bullseye as base +FROM bitnami/minideb:bullseye as base ARG VERSION=1.0.1 # Disable Prompt During Packages Installation ARG DEBIAN_FRONTEND=noninteractive diff --git a/applications/launchpad/docker_rig/mm_proxy.Dockerfile b/applications/launchpad/docker_rig/mm_proxy.Dockerfile index 5f353eb478..671a5f79b9 100644 --- a/applications/launchpad/docker_rig/mm_proxy.Dockerfile +++ b/applications/launchpad/docker_rig/mm_proxy.Dockerfile @@ -55,7 +55,7 @@ ADD meta meta RUN cargo build --bin tari_merge_mining_proxy --release --features $FEATURES --locked # Create a base minimal image for the executables -FROM quay.io/bitnami/minideb:bullseye as base +FROM bitnami/minideb:bullseye as base ARG VERSION=1.0.1 # Disable Prompt During Packages Installation ARG DEBIAN_FRONTEND=noninteractive diff --git a/applications/launchpad/docker_rig/monerod.Dockerfile b/applications/launchpad/docker_rig/monerod.Dockerfile index 34419dbe54..f9d5aec9b6 100644 --- a/applications/launchpad/docker_rig/monerod.Dockerfile +++ b/applications/launchpad/docker_rig/monerod.Dockerfile @@ -1,5 +1,5 @@ # Usage: docker run --restart=always -v /var/data/blockchain-xmr:/root/.bitmonero -p 18080:18080 -p 18081:18081 --name=monerod -td kannix/monero-full-node -FROM quay.io/bitnami/minideb:bullseye AS build +FROM bitnami/minideb:bullseye AS build ENV MONERO_VERSION=0.17.2.3 MONERO_SHA256=8069012ad5e7b35f79e35e6ca71c2424efc54b61f6f93238b182981ba83f2311 @@ -14,7 +14,7 @@ RUN curl https://dlsrc.getmonero.org/cli/monero-linux-x64-v$MONERO_VERSION.tar.b cp ./monero-x86_64-linux-gnu-v$MONERO_VERSION/monerod . 
&&\ rm -r monero-* -FROM quay.io/bitnami/minideb:bullseye +FROM bitnami/minideb:bullseye ARG VERSION=1.0.1 RUN groupadd -g 1000 tari && useradd -ms /bin/bash -u 1000 -g 1000 tari \ diff --git a/applications/launchpad/docker_rig/sha3_miner.Dockerfile b/applications/launchpad/docker_rig/sha3_miner.Dockerfile index 650e1b9d92..0e9c378438 100644 --- a/applications/launchpad/docker_rig/sha3_miner.Dockerfile +++ b/applications/launchpad/docker_rig/sha3_miner.Dockerfile @@ -55,7 +55,7 @@ ADD meta meta RUN cargo build --bin tari_miner --release --features $FEATURES --locked # Create a base minimal image for the executables -FROM quay.io/bitnami/minideb:bullseye as base +FROM bitnami/minideb:bullseye as base ARG VERSION=1.0.1 # Disable Prompt During Packages Installation ARG DEBIAN_FRONTEND=noninteractive diff --git a/applications/launchpad/gui-react/src/components/Text/index.tsx b/applications/launchpad/gui-react/src/components/Text/index.tsx index 0deb0a589c..0fa77a642a 100644 --- a/applications/launchpad/gui-react/src/components/Text/index.tsx +++ b/applications/launchpad/gui-react/src/components/Text/index.tsx @@ -18,10 +18,10 @@ import { TextProps } from './types' const Text = ({ type = 'defaultMedium', - style, as = 'p', color, children, + style, testId, className, }: TextProps) => { diff --git a/applications/launchpad/gui-react/src/styles/themes/dark.ts b/applications/launchpad/gui-react/src/styles/themes/dark.ts index 117b30e296..94ab46d81a 100644 --- a/applications/launchpad/gui-react/src/styles/themes/dark.ts +++ b/applications/launchpad/gui-react/src/styles/themes/dark.ts @@ -12,6 +12,8 @@ const darkTheme = { disabledText: styles.colors.dark.placeholder, tariGradient: styles.gradients.tari, borderColor: styles.colors.light.backgroundImage, + + titleBar: styles.colors.dark.primary, borderColorLight: styles.colors.secondary.borderLight, controlBackground: 'rgba(255,255,255,.2)', info: styles.colors.secondary.info, diff --git a/applications/launchpad/gui-react/src/styles/themes/light.ts b/applications/launchpad/gui-react/src/styles/themes/light.ts index c612a1c190..19ac3a2266 100644 --- a/applications/launchpad/gui-react/src/styles/themes/light.ts +++ b/applications/launchpad/gui-react/src/styles/themes/light.ts @@ -18,7 +18,6 @@ const lightTheme = { shadow: '0 0 40px #00000011', titleBar: styles.colors.light.background, - controlBackground: 'transparent', info: styles.colors.secondary.info, infoText: styles.colors.secondary.infoText, diff --git a/applications/launchpad/gui-vue/package-lock.json b/applications/launchpad/gui-vue/package-lock.json index a9d4387198..248037e7c4 100644 --- a/applications/launchpad/gui-vue/package-lock.json +++ b/applications/launchpad/gui-vue/package-lock.json @@ -13575,6 +13575,7 @@ "dev": true, "peer": true }, +>>>>>>> c7468a8f9 (added `dev-vue` package.json script to launch vue version of the application in development mode) "array-flatten": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-2.1.2.tgz", @@ -13588,9 +13589,9 @@ "dev": true }, "async": { - "version": "2.6.3", - "resolved": "https://registry.npmjs.org/async/-/async-2.6.3.tgz", - "integrity": "sha512-zflvls11DCy+dQWzTW2dzuilv8Z5X/pjfmZOWba6TNIVDm+2UDaJmXSOXlasHKfNBs8oo3M0aT50fDEWfKZjXg==", + "version": "2.6.4", + "resolved": "https://registry.npmjs.org/async/-/async-2.6.4.tgz", + "integrity": "sha512-mzo5dfJYwAn29PeiJ0zvwTo04zj8HDJj0Mn8TD7sno7q12prdbnasKJHhkm2c1LgrhlJ0teaea8860oxi51mGA==", "dev": true, "requires": { "lodash": "^4.17.14" diff --git 
a/applications/launchpad/gui-vue/src/store.js b/applications/launchpad/gui-vue/src/store.js index 64db3cdb06..5e4d972b8f 100644 --- a/applications/launchpad/gui-vue/src/store.js +++ b/applications/launchpad/gui-vue/src/store.js @@ -16,7 +16,7 @@ async function createDefaultSettings() { rootFolder: await cacheDir() + "tari" + sep + "tmp" + sep + "dibbler", dockerRegistry: "quay.io/tarilabs", dockerTag: "latest", - monerodUrl: "http://monero-stagenet.exan.tech:38081", + monerodUrl: "http://stagenet.community.xmr.to:38081,http://monero-stagenet.exan.tech:3808", moneroUseAuth: false, moneroUsername: "", moneroPassword: "" diff --git a/applications/launchpad/versions.txt b/applications/launchpad/versions.txt index bf4b78dda1..6d18c165c2 100644 --- a/applications/launchpad/versions.txt +++ b/applications/launchpad/versions.txt @@ -1,3 +1,3 @@ # Version refers to the base_node, wallet, etc. version -VERSION=0.27.3-lp2 +VERSION=0.31.1-lp2 XMRIG_VERSION=v6.16.3 \ No newline at end of file diff --git a/applications/tari_app_grpc/Cargo.toml b/applications/tari_app_grpc/Cargo.toml index f95add502e..edb8c03b01 100644 --- a/applications/tari_app_grpc/Cargo.toml +++ b/applications/tari_app_grpc/Cargo.toml @@ -11,9 +11,9 @@ edition = "2018" tari_common_types = { version = "^0.31", path = "../../base_layer/common_types"} tari_comms = { path = "../../comms/core"} tari_core = { path = "../../base_layer/core"} -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_script = { path = "../../infrastructure/tari_script" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } chrono = { version = "0.4.19", default-features = false } prost = "0.9" diff --git a/applications/tari_app_grpc/src/conversions/block_header.rs b/applications/tari_app_grpc/src/conversions/block_header.rs index 6faf42048e..c3a6765c3a 100644 --- a/applications/tari_app_grpc/src/conversions/block_header.rs +++ b/applications/tari_app_grpc/src/conversions/block_header.rs @@ -24,7 +24,7 @@ use std::convert::TryFrom; use tari_common_types::types::BlindingFactor; use tari_core::{blocks::BlockHeader, proof_of_work::ProofOfWork}; -use tari_crypto::tari_utilities::{ByteArray, Hashable}; +use tari_utilities::{ByteArray, Hashable}; use crate::{ conversions::{datetime_to_timestamp, timestamp_to_datetime}, diff --git a/applications/tari_app_grpc/src/conversions/output_features.rs b/applications/tari_app_grpc/src/conversions/output_features.rs index dce5ed6473..2276ced12a 100644 --- a/applications/tari_app_grpc/src/conversions/output_features.rs +++ b/applications/tari_app_grpc/src/conversions/output_features.rs @@ -36,7 +36,7 @@ use tari_core::transactions::transaction_components::{ SideChainCheckpointFeatures, TemplateParameter, }; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use crate::tari_rpc as grpc; diff --git a/applications/tari_app_grpc/src/conversions/transaction_kernel.rs b/applications/tari_app_grpc/src/conversions/transaction_kernel.rs index 884ee65693..0f2075ef75 100644 --- a/applications/tari_app_grpc/src/conversions/transaction_kernel.rs +++ b/applications/tari_app_grpc/src/conversions/transaction_kernel.rs @@ -27,7 +27,7 @@ use tari_core::transactions::{ tari_amount::MicroTari, transaction_components::{KernelFeatures, TransactionKernel, 
TransactionKernelVersion}, }; -use tari_crypto::tari_utilities::{ByteArray, Hashable}; +use tari_utilities::{ByteArray, Hashable}; use crate::tari_rpc as grpc; diff --git a/applications/tari_app_utilities/Cargo.toml b/applications/tari_app_utilities/Cargo.toml index 1344d8d641..342982d766 100644 --- a/applications/tari_app_utilities/Cargo.toml +++ b/applications/tari_app_utilities/Cargo.toml @@ -7,13 +7,13 @@ license = "BSD-3-Clause" [dependencies] tari_comms = { path = "../../comms/core" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_common = { path = "../../common" } tari_common_types = { path = "../../base_layer/common_types" } tari_p2p = { path = "../../base_layer/p2p", features = ["auto-update"] } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } -clap = { version = "3.1.1", features = ["derive"] } +clap = { version = "3.1.1", features = ["derive", "env"] } config = { version = "0.13.0" } futures = { version = "^0.3.16", default-features = false, features = ["alloc"] } dirs-next = "1.0.2" diff --git a/applications/tari_app_utilities/src/common_cli_args.rs b/applications/tari_app_utilities/src/common_cli_args.rs index 46e137e555..d42135a3e9 100644 --- a/applications/tari_app_utilities/src/common_cli_args.rs +++ b/applications/tari_app_utilities/src/common_cli_args.rs @@ -31,7 +31,8 @@ pub struct CommonCliArgs { short, long, aliases = &["base_path", "base_dir", "base-dir"], - default_value_t= defaults::base_path() + default_value_t= defaults::base_path(), + env = "TARI_BASE_DIR" )] base_path: String, /// A path to the configuration file to use (config.toml) diff --git a/applications/tari_app_utilities/src/identity_management.rs b/applications/tari_app_utilities/src/identity_management.rs index da597ee5f7..db2afa0ff1 100644 --- a/applications/tari_app_utilities/src/identity_management.rs +++ b/applications/tari_app_utilities/src/identity_management.rs @@ -30,7 +30,7 @@ use tari_common::{ exit_codes::{ExitCode, ExitError}, }; use tari_comms::{multiaddr::Multiaddr, peer_manager::PeerFeatures, tor::TorIdentity, NodeIdentity}; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; pub const LOG_TARGET: &str = "tari_application"; @@ -187,7 +187,7 @@ pub fn load_from_json, T: DeserializeOwned>(path: P) -> Result>(path: P) -> Result, IdentityError> { check_identity_file(&path)?; let identity = load_from_json(path)?; diff --git a/applications/tari_base_node/Cargo.toml b/applications/tari_base_node/Cargo.toml index 85f7df58fe..baafeb1fde 100644 --- a/applications/tari_base_node/Cargo.toml +++ b/applications/tari_base_node/Cargo.toml @@ -15,20 +15,20 @@ tari_comms = { path = "../../comms/core", features = ["rpc"] } tari_common_types = { path = "../../base_layer/common_types" } tari_comms_dht = { path = "../../comms/dht" } tari_core = { path = "../../base_layer/core", default-features = false, features = ["transactions"] } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_libtor = { path = "../../infrastructure/libtor" } tari_mmr = { path = "../../base_layer/mmr", features = ["native_bitmap"] } tari_p2p = { path = "../../base_layer/p2p", features = 
["auto-update"] } tari_storage = {path="../../infrastructure/storage"} tari_service_framework = { path = "../../base_layer/service_framework" } tari_shutdown = { path = "../../infrastructure/shutdown" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } anyhow = "1.0.53" async-trait = "0.1.52" bincode = "1.3.1" chrono = { version = "0.4.19", default-features = false } -clap = { version = "3.1.1", features = ["derive"] } +clap = { version = "3.1.1", features = ["derive", "env"] } config = { version = "0.13.0" } crossterm = { version = "0.23.1", features = ["event-stream"] } derive_more = "0.99.17" diff --git a/applications/tari_base_node/src/builder.rs b/applications/tari_base_node/src/builder.rs index ed83c953c5..bf1e6c8632 100644 --- a/applications/tari_base_node/src/builder.rs +++ b/applications/tari_base_node/src/builder.rs @@ -154,6 +154,10 @@ impl BaseNodeContext { .expect_handle::() .get_status_info_watch() } + + pub fn get_report_grpc_error(&self) -> bool { + self.config.base_node.report_grpc_error + } } /// Sets up and initializes the base node, creating the context and database diff --git a/applications/tari_base_node/src/cli.rs b/applications/tari_base_node/src/cli.rs index 34be9b74e4..3e634b300a 100644 --- a/applications/tari_base_node/src/cli.rs +++ b/applications/tari_base_node/src/cli.rs @@ -42,13 +42,13 @@ pub(crate) struct Cli { #[clap(long, alias = "rebuild_db")] pub rebuild_db: bool, /// Run in non-interactive mode, with no UI. - #[clap(short, long, alias = "non-interactive")] + #[clap(short, long, alias = "non-interactive", env = "TARI_NON_INTERACTIVE")] pub non_interactive_mode: bool, /// Watch a command in the non-interactive mode. 
#[clap(long)] pub watch: Option, /// Supply a network (overrides existing configuration) - #[clap(long, alias = "network", default_value = DEFAULT_NETWORK)] + #[clap(long, default_value = DEFAULT_NETWORK, env = "TARI_NETWORK")] pub network: String, } diff --git a/applications/tari_base_node/src/commands/command/header_stats.rs b/applications/tari_base_node/src/commands/command/header_stats.rs index fdd86a4f29..1602f32a1c 100644 --- a/applications/tari_base_node/src/commands/command/header_stats.rs +++ b/applications/tari_base_node/src/commands/command/header_stats.rs @@ -24,6 +24,7 @@ use std::{cmp, convert::TryFrom, io::Write}; use anyhow::Error; use async_trait::async_trait; +use chrono::{NaiveDateTime, Utc}; use clap::Parser; use tari_core::proof_of_work::PowAlgorithm; use tari_utilities::{hex::Hex, Hashable}; @@ -135,7 +136,10 @@ impl CommandContext { solve_time, normalized_solve_time, pow_algo, - chrono::DateTime::from(header.header().timestamp), + chrono::DateTime::::from_utc( + NaiveDateTime::from_timestamp(header.header().timestamp.as_u64() as i64, 0), + Utc + ), target_diff.get(pow_algo).len(), acc_monero.as_u64(), acc_sha3.as_u64(), diff --git a/applications/tari_base_node/src/commands/command/list_connections.rs b/applications/tari_base_node/src/commands/command/list_connections.rs index 4da7565a6b..42d7402340 100644 --- a/applications/tari_base_node/src/commands/command/list_connections.rs +++ b/applications/tari_base_node/src/commands/command/list_connections.rs @@ -23,7 +23,6 @@ use anyhow::Error; use async_trait::async_trait; use clap::Parser; -use tari_comms::peer_manager::PeerFeatures; use tari_core::base_node::state_machine_service::states::PeerMetadata; use super::{CommandContext, HandleCommand}; @@ -81,7 +80,7 @@ impl CommandContext { conn.direction(), format_duration_basic(conn.age()), { - if peer.features == PeerFeatures::COMMUNICATION_CLIENT { + if peer.features.is_client() { "Wallet" } else { "Base node" diff --git a/applications/tari_base_node/src/commands/command/list_peers.rs b/applications/tari_base_node/src/commands/command/list_peers.rs index 40e995db93..1a0685a777 100644 --- a/applications/tari_base_node/src/commands/command/list_peers.rs +++ b/applications/tari_base_node/src/commands/command/list_peers.rs @@ -24,7 +24,7 @@ use anyhow::Error; use async_trait::async_trait; use chrono::Utc; use clap::Parser; -use tari_comms::peer_manager::{PeerFeatures, PeerQuery}; +use tari_comms::peer_manager::PeerQuery; use tari_core::base_node::state_machine_service::states::PeerMetadata; use super::{CommandContext, HandleCommand}; @@ -49,10 +49,8 @@ impl CommandContext { if let Some(f) = filter { let filter = f.to_lowercase(); query = query.select_where(move |p| match filter.as_str() { - "basenode" | "basenodes" | "base_node" | "base-node" | "bn" => { - p.features == PeerFeatures::COMMUNICATION_NODE - }, - "wallet" | "wallets" | "w" => p.features == PeerFeatures::COMMUNICATION_CLIENT, + "basenode" | "basenodes" | "base_node" | "base-node" | "bn" => p.features.is_node(), + "wallet" | "wallets" | "w" => p.features.is_client(), _ => false, }) } @@ -116,7 +114,7 @@ impl CommandContext { peer.node_id, peer.public_key, { - if peer.features == PeerFeatures::COMMUNICATION_CLIENT { + if peer.features.is_client() { "Wallet" } else { "Base node" diff --git a/applications/tari_base_node/src/commands/command/status.rs b/applications/tari_base_node/src/commands/command/status.rs index e0b6bca3fc..eaad1e9d36 100644 --- a/applications/tari_base_node/src/commands/command/status.rs +++ 
b/applications/tari_base_node/src/commands/command/status.rs @@ -24,9 +24,10 @@ use std::time::{Duration, Instant}; use anyhow::{anyhow, Error}; use async_trait::async_trait; -use chrono::{DateTime, Utc}; +use chrono::{DateTime, NaiveDateTime, Utc}; use clap::Parser; use tari_app_utilities::consts; +use tari_comms::connectivity::ConnectivitySelection; use super::{CommandContext, HandleCommand}; use crate::commands::status_line::{StatusLine, StatusLineOutput}; @@ -65,7 +66,10 @@ impl CommandContext { .get_header(height) .await? .ok_or_else(|| anyhow!("No last header"))?; - let last_block_time = DateTime::::from(last_header.header().timestamp); + let last_block_time = DateTime::::from_utc( + NaiveDateTime::from_timestamp(last_header.header().timestamp.as_u64() as i64, 0), + Utc, + ); status_line.add_field( "Tip", format!( @@ -93,7 +97,10 @@ impl CommandContext { ), ); - let conns = self.connectivity.get_active_connections().await?; + let conns = self + .connectivity + .select_connections(ConnectivitySelection::all_nodes(vec![])) + .await?; status_line.add_field("Connections", conns.len()); let banned_peers = self.fetch_banned_peers().await?; status_line.add_field("Banned", banned_peers.len()); diff --git a/applications/tari_base_node/src/config.rs b/applications/tari_base_node/src/config.rs index 0a491a8613..9aa8aad27a 100644 --- a/applications/tari_base_node/src/config.rs +++ b/applications/tari_base_node/src/config.rs @@ -106,6 +106,7 @@ pub struct BaseNodeConfig { pub metadata_auto_ping_interval: Duration, pub state_machine: BaseNodeStateMachineConfig, pub resize_terminal_on_startup: bool, + pub report_grpc_error: bool, } impl Default for BaseNodeConfig { @@ -136,6 +137,7 @@ impl Default for BaseNodeConfig { metadata_auto_ping_interval: Duration::from_secs(30), state_machine: Default::default(), resize_terminal_on_startup: true, + report_grpc_error: false, } } } diff --git a/applications/tari_base_node/src/grpc/base_node_grpc_server.rs b/applications/tari_base_node/src/grpc/base_node_grpc_server.rs index f7865ede48..991a85eb22 100644 --- a/applications/tari_base_node/src/grpc/base_node_grpc_server.rs +++ b/applications/tari_base_node/src/grpc/base_node_grpc_server.rs @@ -92,6 +92,7 @@ pub struct BaseNodeGrpcServer { software_updater: SoftwareUpdaterHandle, comms: CommsNode, liveness: LivenessHandle, + report_grpc_error: bool, } impl BaseNodeGrpcServer { @@ -105,8 +106,21 @@ impl BaseNodeGrpcServer { software_updater: ctx.software_updater(), comms: ctx.base_node_comms().clone(), liveness: ctx.liveness(), + report_grpc_error: ctx.get_report_grpc_error(), } } + + pub fn report_error_flag(&self) -> bool { + self.report_grpc_error + } +} + +pub fn report_error(report: bool, status: Status) -> Status { + if report { + status + } else { + Status::new(status.code(), "Error has occurred. Details are obscured.") + } } pub async fn get_heights( @@ -134,6 +148,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); debug!( target: LOG_TARGET, @@ -147,10 +162,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { // Overflow safety: checked in get_heights let num_requested = end_height - start_height; if num_requested > GET_DIFFICULTY_MAX_HEIGHTS { - return Err(Status::invalid_argument(format!( - "Number of headers requested exceeds maximum. 
Expected less than {} but got {}", - GET_DIFFICULTY_MAX_HEIGHTS, num_requested - ))); + return Err(report_error( + report_error_flag, + Status::invalid_argument(format!( + "Number of headers requested exceeds maximum. Expected less than {} but got {}", + GET_DIFFICULTY_MAX_HEIGHTS, num_requested + )), + )); } let (mut tx, rx) = mpsc::channel(cmp::min(num_requested as usize, GET_DIFFICULTY_PAGE_SIZE)); @@ -168,17 +186,20 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Err(err) => { warn!(target: LOG_TARGET, "Base node service error: {:?}", err,); let _ = tx - .send(Err(Status::internal("Internal error when fetching blocks"))) + .send(Err(report_error( + report_error_flag, + Status::internal("Internal error when fetching blocks"), + ))) .await; return; }, }; if headers.is_empty() { - let _network_difficulty_response = tx.send(Err(Status::invalid_argument(format!( - "No blocks found within range {} - {}", - start, end - )))); + let _network_difficulty_response = tx.send(Err(report_error( + report_error_flag, + Status::invalid_argument(format!("No blocks found within range {} - {}", start, end)), + ))); return; } @@ -228,6 +249,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let _request = request.into_inner(); debug!(target: LOG_TARGET, "Incoming GRPC request for GetMempoolTransactions",); @@ -250,7 +272,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { target: LOG_TARGET, "Error sending converting transaction for GRPC: {}", e ); - match tx.send(Err(Status::internal("Error converting transaction"))).await { + match tx + .send(Err(report_error( + report_error_flag, + Status::internal("Error converting transaction"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error to GRPC client: {}", send_err) @@ -272,7 +300,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { target: LOG_TARGET, "Error sending mempool transaction via GRPC: {}", err ); - match tx.send(Err(Status::unknown("Error sending data"))).await { + match tx + .send(Err(report_error( + report_error_flag, + Status::unknown("Error sending data"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error to GRPC client: {}", send_err) @@ -291,6 +325,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); debug!( target: LOG_TARGET, @@ -304,7 +339,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { let tip = match handler.get_metadata().await { Err(err) => { warn!(target: LOG_TARGET, "Error communicating with base node: {}", err,); - return Err(Status::internal(err.to_string())); + return Err(report_error(report_error_flag, Status::internal(err.to_string()))); }, Ok(data) => data.height_of_longest_chain(), }; @@ -389,7 +424,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Ok(_) => (), Err(err) => { warn!(target: LOG_TARGET, "Error sending block header via GRPC: {}", err); - match tx.send(Err(Status::unknown("Error sending data"))).await { + match tx + .send(Err(report_error( + report_error_flag, + Status::unknown("Error sending data"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error to GRPC client: {}", send_err) @@ -410,6 
+451,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); debug!( target: LOG_TARGET, @@ -423,8 +465,12 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { .join(",") ); - let pub_key = PublicKey::from_bytes(&request.asset_public_key) - .map_err(|err| Status::invalid_argument(format!("Asset public Key is not a valid public key:{}", err)))?; + let pub_key = PublicKey::from_bytes(&request.asset_public_key).map_err(|err| { + report_error( + report_error_flag, + Status::invalid_argument(format!("Asset public Key is not a valid public key:{}", err)), + ) + })?; let mut handler = self.node_service.clone(); let (mut tx, rx) = mpsc::channel(50); @@ -438,7 +484,8 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Ok(tokens) => tokens, Err(err) => { warn!(target: LOG_TARGET, "Error communicating with base node: {:?}", err,); - let _get_token_response = tx.send(Err(Status::internal("Internal error"))); + let _get_token_response = + tx.send(Err(report_error(report_error_flag, Status::internal("Internal error")))); return; }, }; @@ -455,8 +502,10 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Ok(f) => f, Err(err) => { warn!(target: LOG_TARGET, "Could not convert features: {}", err,); - let _get_token_response = - tx.send(Err(Status::internal(format!("Could not convert features:{}", err)))); + let _get_token_response = tx.send(Err(report_error( + report_error_flag, + Status::internal(format!("Could not convert features:{}", err)), + ))); break; }, }; @@ -479,7 +528,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Ok(_) => (), Err(err) => { warn!(target: LOG_TARGET, "Error sending token via GRPC: {}", err); - match tx.send(Err(Status::unknown("Error sending data"))).await { + match tx + .send(Err(report_error( + report_error_flag, + Status::unknown("Error sending data"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error to GRPC client: {}", send_err) @@ -497,16 +552,19 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); let mut handler = self.node_service.clone(); let metadata = handler - .get_asset_metadata( - PublicKey::from_bytes(&request.asset_public_key) - .map_err(|_e| Status::invalid_argument("Not a valid asset public key"))?, - ) + .get_asset_metadata(PublicKey::from_bytes(&request.asset_public_key).map_err(|_e| { + report_error( + report_error_flag, + Status::invalid_argument("Not a valid asset public key"), + ) + })?) 
.await - .map_err(|e| Status::internal(e.to_string()))?; + .map_err(|e| report_error(report_error_flag, Status::internal(e.to_string())))?; if let Some(m) = metadata { let mined_height = m.mined_height; @@ -515,7 +573,12 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { PrunedOutput::Pruned { output_hash: _, witness_hash: _, - } => return Err(Status::not_found("Output has been pruned")), + } => { + return Err(report_error( + report_error_flag, + Status::not_found("Output has been pruned"), + )) + }, PrunedOutput::NotPruned { output } => { if let Some(ref asset) = output.features.asset { const ASSET_METADATA_TEMPLATE_ID: u32 = 1; @@ -556,9 +619,12 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { })); }, }; - // Err(Status::unknown("Could not find a matching arm")) + // Err(report_error(report_error_flag, Status::unknown("Could not find a matching arm"))) } else { - Err(Status::not_found("Could not find any utxo")) + Err(report_error( + report_error_flag, + Status::not_found("Could not find any utxo"), + )) } } @@ -566,6 +632,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); let mut handler = self.node_service.clone(); @@ -582,7 +649,8 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Ok(outputs) => outputs, Err(err) => { warn!(target: LOG_TARGET, "Error communicating with base node: {:?}", err,); - let _list_assest_registrations_response = tx.send(Err(Status::internal("Internal error"))); + let _list_assest_registrations_response = + tx.send(Err(report_error(report_error_flag, Status::internal("Internal error")))); return; }, }; @@ -602,8 +670,10 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Ok(f) => f, Err(err) => { warn!(target: LOG_TARGET, "Could not convert features: {}", err,); - let _list_assest_registrations_response = - tx.send(Err(Status::internal(format!("Could not convert features:{}", err)))); + let _list_assest_registrations_response = tx.send(Err(report_error( + report_error_flag, + Status::internal(format!("Could not convert features:{}", err)), + ))); break; }, }; @@ -635,17 +705,28 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); debug!(target: LOG_TARGET, "Incoming GRPC request for get new block template"); trace!(target: LOG_TARGET, "Request {:?}", request); let algo: PowAlgorithm = (u64::try_from( (request.algo) - .ok_or_else(|| Status::invalid_argument("No valid pow algo selected".to_string()))? + .ok_or_else(|| { + report_error( + report_error_flag, + Status::invalid_argument("No valid pow algo selected".to_string()), + ) + })? 
.pow_algo, ) .unwrap()) .try_into() - .map_err(|_| Status::invalid_argument("No valid pow algo selected".to_string()))?; + .map_err(|_| { + report_error( + report_error_flag, + Status::invalid_argument("No valid pow algo selected".to_string()), + ) + })?; let mut handler = self.node_service.clone(); let new_template = handler @@ -657,7 +738,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { "Could not get new block template: {}", e.to_string() ); - Status::internal(e.to_string()) + report_error(report_error_flag, Status::internal(e.to_string())) })?; let status_watch = self.state_machine_handle.get_status_info_watch(); @@ -669,7 +750,11 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { total_fees: new_template.total_fees.into(), algo: Some(tari_rpc::PowAlgo { pow_algo: pow }), }), - new_block_template: Some(new_template.try_into().map_err(Status::internal)?), + new_block_template: Some( + new_template + .try_into() + .map_err(|e| report_error(report_error_flag, Status::internal(e)))?, + ), initial_sync_achieved: (*status_watch.borrow()).bootstrapped, }; @@ -682,18 +767,22 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); debug!(target: LOG_TARGET, "Incoming GRPC request for get new block"); - let block_template: NewBlockTemplate = request - .try_into() - .map_err(|s| Status::invalid_argument(format!("Invalid block template: {}", s)))?; + let block_template: NewBlockTemplate = request.try_into().map_err(|s| { + report_error( + report_error_flag, + Status::invalid_argument(format!("Invalid block template: {}", s)), + ) + })?; let mut handler = self.node_service.clone(); let new_block = match handler.get_new_block(block_template).await { Ok(b) => b, Err(CommsInterfaceError::ChainStorageError(ChainStorageError::InvalidArguments { message, .. })) => { - return Err(Status::invalid_argument(message)); + return Err(report_error(report_error_flag, Status::invalid_argument(message))); }, Err(CommsInterfaceError::ChainStorageError(ChainStorageError::CannotCalculateNonTipMmr(msg))) => { let status = Status::with_details( @@ -701,14 +790,18 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { msg, Bytes::from_static(b"CannotCalculateNonTipMmr"), ); - return Err(status); + return Err(report_error(report_error_flag, status)); }, - Err(e) => return Err(Status::internal(e.to_string())), + Err(e) => return Err(report_error(report_error_flag, Status::internal(e.to_string()))), }; // construct response let block_hash = new_block.hash(); let mining_hash = new_block.header.merged_mining_hash(); - let block: Option = Some(new_block.try_into().map_err(Status::internal)?); + let block: Option = Some( + new_block + .try_into() + .map_err(|e| report_error(report_error_flag, Status::internal(e)))?, + ); let response = tari_rpc::GetNewBlockResult { block_hash, @@ -770,9 +863,14 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); - let block = Block::try_from(request) - .map_err(|e| Status::invalid_argument(format!("Failed to convert arguments. Invalid block: {:?}", e)))?; + let block = Block::try_from(request).map_err(|e| { + report_error( + report_error_flag, + Status::invalid_argument(format!("Failed to convert arguments. 
Invalid block: {:?}", e)), + ) + })?; let block_height = block.header.height; debug!(target: LOG_TARGET, "Miner submitted block: {}", block); info!( @@ -784,7 +882,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { let block_hash = handler .submit_block(block) .await - .map_err(|e| Status::internal(e.to_string()))?; + .map_err(|e| report_error(report_error_flag, Status::internal(e.to_string())))?; debug!( target: LOG_TARGET, @@ -833,12 +931,18 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); let txn: Transaction = request .transaction - .ok_or_else(|| Status::invalid_argument("Transaction is empty"))? + .ok_or_else(|| report_error(report_error_flag, Status::invalid_argument("Transaction is empty")))? .try_into() - .map_err(|e| Status::invalid_argument(format!("Failed to convert arguments. Invalid transaction.{}", e)))?; + .map_err(|e| { + report_error( + report_error_flag, + Status::invalid_argument(format!("Failed to convert arguments. Invalid transaction.{}", e)), + ) + })?; debug!( target: LOG_TARGET, "Received SubmitTransaction request from client ({} kernels, {} outputs, {} inputs)", @@ -850,7 +954,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { let mut handler = self.mempool_service.clone(); let res = handler.submit_transaction(txn).await.map_err(|e| { error!(target: LOG_TARGET, "Error submitting:{}", e); - Status::internal(e.to_string()) + report_error(report_error_flag, Status::internal(e.to_string())) })?; let response = match res { TxStorageResponse::UnconfirmedPool => tari_rpc::SubmitTransactionResponse { @@ -877,12 +981,23 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); let excess_sig: Signature = request .excess_sig - .ok_or_else(|| Status::invalid_argument("excess_sig not provided".to_string()))? + .ok_or_else(|| { + report_error( + report_error_flag, + Status::invalid_argument("excess_sig not provided".to_string()), + ) + })? 
.try_into() - .map_err(|_| Status::invalid_argument("excess_sig could not be converted".to_string()))?; + .map_err(|_| { + report_error( + report_error_flag, + Status::invalid_argument("excess_sig could not be converted".to_string()), + ) + })?; debug!( target: LOG_TARGET, "Received TransactionState request from client ({} excess_sig)", @@ -898,7 +1013,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { .await .map_err(|e| { error!(target: LOG_TARGET, "Error submitting query:{}", e); - Status::internal(e.to_string()) + report_error(report_error_flag, Status::internal(e.to_string())) })?; if !base_node_response.is_empty() { @@ -918,7 +1033,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { .await .map_err(|e| { error!(target: LOG_TARGET, "Error submitting query:{}", e); - Status::internal(e.to_string()) + report_error(report_error_flag, Status::internal(e.to_string())) })?; let response = match res { TxStorageResponse::UnconfirmedPool => tari_rpc::TransactionStateResponse { @@ -950,6 +1065,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, _request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); debug!(target: LOG_TARGET, "Incoming GRPC request for get all peers"); let peers = self @@ -957,7 +1073,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { .peer_manager() .all() .await - .map_err(|e| Status::unknown(e.to_string()))?; + .map_err(|e| report_error(report_error_flag, Status::unknown(e.to_string())))?; let peers: Vec = peers.into_iter().map(|p| p.into()).collect(); let (mut tx, rx) = mpsc::channel(peers.len()); task::spawn(async move { @@ -967,7 +1083,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Ok(_) => (), Err(err) => { warn!(target: LOG_TARGET, "Error sending peer via GRPC: {}", err); - match tx.send(Err(Status::unknown("Error sending data"))).await { + match tx + .send(Err(report_error( + report_error_flag, + Status::unknown("Error sending data"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error to GRPC client: {}", send_err) @@ -987,6 +1109,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); debug!( target: LOG_TARGET, @@ -995,7 +1118,10 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { let mut heights = request.heights; if heights.is_empty() { - return Err(Status::invalid_argument("heights cannot be empty")); + return Err(report_error( + report_error_flag, + Status::invalid_argument("heights cannot be empty"), + )); } heights.truncate(GET_BLOCKS_MAX_HEIGHTS); @@ -1031,17 +1157,24 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { block.header().height ); match tx - .send( - block - .try_into() - .map_err(|err| Status::internal(format!("Could not provide block: {}", err))), - ) + .send(block.try_into().map_err(|err| { + report_error( + report_error_flag, + Status::internal(format!("Could not provide block: {}", err)), + ) + })) .await { Ok(_) => (), Err(err) => { warn!(target: LOG_TARGET, "Error sending header via GRPC: {}", err); - match tx.send(Err(Status::unknown("Error sending data"))).await { + match tx + .send(Err(report_error( + report_error_flag, + Status::unknown("Error sending data"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error 
to GRPC client: {}", send_err) @@ -1062,6 +1195,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, _request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); debug!(target: LOG_TARGET, "Incoming GRPC request for BN tip data"); let mut handler = self.node_service.clone(); @@ -1069,7 +1203,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { let meta = handler .get_metadata() .await - .map_err(|e| Status::internal(e.to_string()))?; + .map_err(|e| report_error(report_error_flag, Status::internal(e.to_string())))?; // Determine if we are bootstrapped let status_watch = self.state_machine_handle.get_status_info_watch(); @@ -1088,11 +1222,17 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); debug!(target: LOG_TARGET, "Incoming GRPC request for SearchKernels"); let request = request.into_inner(); let converted: Result, _> = request.signatures.into_iter().map(|s| s.try_into()).collect(); - let kernels = converted.map_err(|_| Status::internal("Failed to convert one or more arguments."))?; + let kernels = converted.map_err(|_| { + report_error( + report_error_flag, + Status::internal("Failed to convert one or more arguments."), + ) + })?; let mut handler = self.node_service.clone(); @@ -1110,17 +1250,24 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { }; for block in blocks { match tx - .send( - block - .try_into() - .map_err(|err| Status::internal(format!("Could not provide block:{}", err))), - ) + .send(block.try_into().map_err(|err| { + report_error( + report_error_flag, + Status::internal(format!("Could not provide block:{}", err)), + ) + })) .await { Ok(_) => (), Err(err) => { warn!(target: LOG_TARGET, "Error sending header via GRPC: {}", err); - match tx.send(Err(Status::unknown("Error sending data"))).await { + match tx + .send(Err(report_error( + report_error_flag, + Status::unknown("Error sending data"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error to GRPC client: {}", send_err) @@ -1140,6 +1287,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); debug!(target: LOG_TARGET, "Incoming GRPC request for SearchUtxos"); let request = request.into_inner(); @@ -1148,7 +1296,12 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { .into_iter() .map(|s| Commitment::from_bytes(&s)) .collect(); - let outputs = converted.map_err(|_| Status::internal("Failed to convert one or more arguments."))?; + let outputs = converted.map_err(|_| { + report_error( + report_error_flag, + Status::internal("Failed to convert one or more arguments."), + ) + })?; let mut handler = self.node_service.clone(); @@ -1166,17 +1319,24 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { }; for block in blocks { match tx - .send( - block - .try_into() - .map_err(|err| Status::internal(format!("Could not provide block:{}", err))), - ) + .send(block.try_into().map_err(|err| { + report_error( + report_error_flag, + Status::internal(format!("Could not provide block:{}", err)), + ) + })) .await { Ok(_) => (), Err(err) => { warn!(target: LOG_TARGET, "Error sending header via GRPC: {}", err); - match tx.send(Err(Status::unknown("Error sending data"))).await { + match tx + .send(Err(report_error( + report_error_flag, 
+ Status::unknown("Error sending data"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error to GRPC client: {}", send_err) @@ -1197,11 +1357,17 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); debug!(target: LOG_TARGET, "Incoming GRPC request for FetchMatchingUtxos"); let request = request.into_inner(); let converted: Result, _> = request.hashes.into_iter().map(|s| s.try_into()).collect(); - let hashes = converted.map_err(|_| Status::internal("Failed to convert one or more arguments."))?; + let hashes = converted.map_err(|_| { + report_error( + report_error_flag, + Status::internal("Failed to convert one or more arguments."), + ) + })?; let mut handler = self.node_service.clone(); @@ -1228,7 +1394,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Err(err) => { warn!(target: LOG_TARGET, "Error sending output via GRPC: {}", err); - match tx.send(Err(Status::unknown("Error sending data"))).await { + match tx + .send(Err(report_error( + report_error_flag, + Status::unknown("Error sending data"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error to GRPC client: {}", send_err) @@ -1251,6 +1423,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let request = request.into_inner(); debug!( target: LOG_TARGET, @@ -1271,7 +1444,10 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { num_requested, BLOCK_TIMING_MAX_BLOCKS ); - return Err(Status::invalid_argument("Max request size exceeded.")); + return Err(report_error( + report_error_flag, + Status::invalid_argument("Max request size exceeded."), + )); } let headers = match handler.get_headers(start..=end).await { @@ -1304,14 +1480,28 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { - get_block_group(self.node_service.clone(), request, BlockGroupType::BlockSize).await + let report_error_flag = self.report_error_flag(); + get_block_group( + self.node_service.clone(), + request, + BlockGroupType::BlockSize, + report_error_flag, + ) + .await } async fn get_block_fees( &self, request: Request, ) -> Result, Status> { - get_block_group(self.node_service.clone(), request, BlockGroupType::BlockFees).await + let report_error_flag = self.report_error_flag(); + get_block_group( + self.node_service.clone(), + request, + BlockGroupType::BlockFees, + report_error_flag, + ) + .await } async fn get_version(&self, _request: Request) -> Result, Status> { @@ -1338,6 +1528,7 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); debug!(target: LOG_TARGET, "Incoming GRPC request for GetTokensInCirculation",); let request = request.into_inner(); let mut heights = request.heights; @@ -1371,7 +1562,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Ok(_) => (), Err(err) => { warn!(target: LOG_TARGET, "Error sending value via GRPC: {}", err); - match tx.send(Err(Status::unknown("Error sending data"))).await { + match tx + .send(Err(report_error( + report_error_flag, + Status::unknown("Error sending data"), + ))) + .await + { Ok(_) => (), Err(send_err) => { warn!(target: LOG_TARGET, "Error sending error 
to GRPC client: {}", send_err) @@ -1467,13 +1664,14 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, request: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let tari_rpc::GetHeaderByHashRequest { hash } = request.into_inner(); let mut node_service = self.node_service.clone(); let hash_hex = hash.to_hex(); let block = node_service .get_block_by_hash(hash) .await - .map_err(|err| Status::internal(err.to_string()))?; + .map_err(|err| report_error(report_error_flag, Status::internal(err.to_string())))?; match block { Some(block) => { @@ -1492,7 +1690,10 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { Ok(Response::new(resp)) }, - None => Err(Status::not_found(format!("Header not found with hash `{}`", hash_hex))), + None => Err(report_error( + report_error_flag, + Status::not_found(format!("Header not found with hash `{}`", hash_hex)), + )), } } @@ -1541,19 +1742,20 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, _: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let status = self .comms .connectivity() .get_connectivity_status() .await - .map_err(|err| Status::internal(err.to_string()))?; + .map_err(|err| report_error(report_error_flag, Status::internal(err.to_string())))?; let latency = self .liveness .clone() .get_network_avg_latency() .await - .map_err(|err| Status::internal(err.to_string()))?; + .map_err(|err| report_error(report_error_flag, Status::internal(err.to_string())))?; let resp = tari_rpc::NetworkStatusResponse { status: tari_rpc::ConnectivityStatus::from(status) as i32, @@ -1570,12 +1772,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, _: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let mut connectivity = self.comms.connectivity(); let peer_manager = self.comms.peer_manager(); let connected_peers = connectivity .get_active_connections() .await - .map_err(|err| Status::internal(err.to_string()))?; + .map_err(|err| report_error(report_error_flag, Status::internal(err.to_string())))?; let mut peers = Vec::with_capacity(connected_peers.len()); for peer in connected_peers { @@ -1583,8 +1786,13 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { peer_manager .find_by_node_id(peer.peer_node_id()) .await - .map_err(|err| Status::internal(err.to_string()))? - .ok_or_else(|| Status::not_found(format!("Peer {} not found", peer.peer_node_id())))?, + .map_err(|err| report_error(report_error_flag, Status::internal(err.to_string())))? 
+ .ok_or_else(|| { + report_error( + report_error_flag, + Status::not_found(format!("Peer {} not found", peer.peer_node_id())), + ) + })?, ); } @@ -1599,11 +1807,12 @@ impl tari_rpc::base_node_server::BaseNode for BaseNodeGrpcServer { &self, _: Request, ) -> Result, Status> { + let report_error_flag = self.report_error_flag(); let mut mempool_handle = self.mempool_service.clone(); let mempool_stats = mempool_handle.get_mempool_stats().await.map_err(|e| { error!(target: LOG_TARGET, "Error submitting query:{}", e); - Status::internal(e.to_string()) + report_error(report_error_flag, Status::internal(e.to_string())) })?; let response = tari_rpc::MempoolStatsResponse { @@ -1625,6 +1834,7 @@ async fn get_block_group( mut handler: LocalNodeCommsInterface, request: Request, block_group_type: BlockGroupType, + report_error_flag: bool, ) -> Result, Status> { let request = request.into_inner(); let calc_type_response = request.calc_type; @@ -1659,8 +1869,18 @@ async fn get_block_group( let value = match calc_type { CalcType::Median => median(values).map(|v| vec![v]), CalcType::Mean => mean(values).map(|v| vec![v]), - CalcType::Quantile => return Err(Status::unimplemented("Quantile has not been implemented")), - CalcType::Quartile => return Err(Status::unimplemented("Quartile has not been implemented")), + CalcType::Quantile => { + return Err(report_error( + report_error_flag, + Status::unimplemented("Quantile has not been implemented"), + )) + }, + CalcType::Quartile => { + return Err(report_error( + report_error_flag, + Status::unimplemented("Quartile has not been implemented"), + )) + }, } .unwrap_or_default(); debug!( diff --git a/applications/tari_collectibles/src-tauri/Cargo.toml b/applications/tari_collectibles/src-tauri/Cargo.toml index 3575f52c8a..66cb968ab5 100644 --- a/applications/tari_collectibles/src-tauri/Cargo.toml +++ b/applications/tari_collectibles/src-tauri/Cargo.toml @@ -19,14 +19,14 @@ tari_app_grpc = { path = "../../tari_app_grpc" } tari_app_utilities = { path = "../../tari_app_utilities" } tari_common = { path = "../../../common" } tari_common_types = { path = "../../../base_layer/common_types" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_key_manager = { path = "../../../base_layer/key_manager" } -tari_mmr = { path = "../../../base_layer/mmr"} -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } -tari_dan_common_types = { path = "../../../dan_layer/common_types"} +tari_mmr = { path = "../../../base_layer/mmr" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } +tari_dan_common_types = { path = "../../../dan_layer/common_types" } blake2 = "^0.9.0" -clap = "3.1.8" +clap = { version = "3.1.8", features = ["env"] } derivative = "2.2.0" diesel = { version = "1.4.8", features = ["sqlite"] } diesel_migrations = "1.4.0" @@ -43,8 +43,8 @@ tauri = { version = "1.0.0-rc.6", features = ["api-all"] } thiserror = "1.0.30" tokio = { version = "1.11", features = ["signal"] } tonic = "0.6.2" -uuid = { version = "0.8.2", features = ["serde"] } +uuid = { version = "0.8.2", features = ["serde"] } [features] -default = [ "custom-protocol" ] -custom-protocol = [ "tauri/custom-protocol" ] +default = ["custom-protocol"] +custom-protocol = ["tauri/custom-protocol"] diff --git a/applications/tari_collectibles/src-tauri/src/cli.rs 
b/applications/tari_collectibles/src-tauri/src/cli.rs index bf2d21a881..2443461606 100644 --- a/applications/tari_collectibles/src-tauri/src/cli.rs +++ b/applications/tari_collectibles/src-tauri/src/cli.rs @@ -20,8 +20,13 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -use clap::Parser; +use crate::{ + app_state::ConcurrentAppState, commands, commands::assets::inner_assets_list_registered_assets, +}; +use clap::{Parser, Subcommand}; use tari_app_utilities::common_cli_args::CommonCliArgs; +use tari_common::exit_codes::{ExitCode, ExitError}; +use uuid::Uuid; const DEFAULT_NETWORK: &str = "dibbler"; @@ -32,10 +37,10 @@ pub(crate) struct Cli { #[clap(flatten)] pub common: CommonCliArgs, /// Command to run - #[clap(long)] - pub command: Option, + #[clap(subcommand)] + pub command: Option, /// Supply a network (overrides existing configuration) - #[clap(long, default_value = DEFAULT_NETWORK)] + #[clap(long, default_value = DEFAULT_NETWORK, env = "TARI_NETWORK")] pub network: String, } @@ -49,3 +54,92 @@ impl Cli { overrides } } + +#[derive(Subcommand, Debug)] +pub enum Commands { + ListAssets { + #[clap(default_value = "0")] + offset: u64, + #[clap(default_value = "20")] + count: u64, + }, + MakeItRain { + asset_public_key: String, + amount_per_transaction: u64, + number_transactions: u32, + destination_address: String, + source_address: Option, + }, +} + +pub fn list_assets(offset: u64, count: u64, state: &ConcurrentAppState) -> Result<(), ExitError> { + let runtime = tokio::runtime::Builder::new_multi_thread() + .enable_all() + .build() + .expect("Failed to build a runtime!"); + match runtime.block_on(inner_assets_list_registered_assets(offset, count, state)) { + Ok(rows) => { + println!("{}", serde_json::to_string_pretty(&rows).unwrap()); + Ok(()) + } + Err(e) => Err(ExitError::new(ExitCode::CommandError, &e)), + } +} + +// make-it-rain +pub(crate) fn make_it_rain( + asset_public_key: String, + amount: u64, + number_transactions: u32, + to_address: String, + source_address: Option, + state: &ConcurrentAppState, +) -> Result<(), ExitError> { + let runtime = tokio::runtime::Builder::new_multi_thread() + .enable_all() + .build() + .expect("Failed to build a runtime!"); + let id = match runtime.block_on(commands::wallets::inner_wallets_list(state)) { + Ok(rows) => { + if rows.is_empty() { + return Err(ExitError::new( + ExitCode::CommandError, + "There is no wallet!", + )); + } + match source_address { + Some(source_address) => { + let source_uuid = Uuid::parse_str(&source_address) + .map_err(|e| ExitError::new(ExitCode::CommandError, &e))?; + if !rows.iter().any(|wallet| wallet.id == source_uuid) { + return Err(ExitError::new(ExitCode::CommandError, "Wallet not found!")); + } + source_uuid + } + None => rows[0].id, + } + } + Err(e) => { + return Err(ExitError::new(ExitCode::CommandError, e.to_string())); + } + }; + + runtime + .block_on(commands::wallets::inner_wallets_unlock(id, state)) + .map_err(|e| ExitError::new(ExitCode::CommandError, e.to_string()))?; + println!( + "Sending {} of {} to {} {} times.", + asset_public_key, amount, to_address, number_transactions + ); + for _ in 0..number_transactions { + runtime + .block_on(commands::asset_wallets::inner_asset_wallets_send_to( + asset_public_key.clone(), + amount, + to_address.clone(), + state, + )) + .map_err(|e| ExitError::new(ExitCode::CommandError, e.to_string()))?; + } + Ok(()) +} diff --git 
a/applications/tari_collectibles/src-tauri/src/config.rs b/applications/tari_collectibles/src-tauri/src/config.rs index 4e245484bd..8e436c350e 100644 --- a/applications/tari_collectibles/src-tauri/src/config.rs +++ b/applications/tari_collectibles/src-tauri/src/config.rs @@ -38,8 +38,8 @@ impl Default for CollectiblesConfig { Self { override_from: None, validator_node_grpc_address: "/ip4/127.0.0.1/tcp/18144".parse().unwrap(), - base_node_grpc_address: "/ip4/127.0.0.1/18142".parse().unwrap(), - wallet_grpc_address: "/ip4/127.0.0.1/tpc/18143".parse().unwrap(), + base_node_grpc_address: "/ip4/127.0.0.1/tcp/18142".parse().unwrap(), + wallet_grpc_address: "/ip4/127.0.0.1/tcp/18143".parse().unwrap(), } } } diff --git a/applications/tari_collectibles/src-tauri/src/main.rs b/applications/tari_collectibles/src-tauri/src/main.rs index ddc327305a..db5bd45972 100644 --- a/applications/tari_collectibles/src-tauri/src/main.rs +++ b/applications/tari_collectibles/src-tauri/src/main.rs @@ -10,13 +10,14 @@ use std::error::Error; use tauri::{Menu, MenuItem, Submenu}; use clap::Parser; -use tari_common::{ - exit_codes::{ExitCode, ExitError}, - load_configuration, DefaultConfigLoader, -}; -use uuid::Uuid; +use std::path::PathBuf; +use tari_common::{exit_codes::ExitError, load_configuration, DefaultConfigLoader}; -use crate::{app_state::ConcurrentAppState, cli::Cli, config::CollectiblesConfig}; +use crate::{ + app_state::ConcurrentAppState, + cli::{Cli, Commands}, + config::CollectiblesConfig, +}; #[macro_use] extern crate diesel; @@ -36,133 +37,23 @@ mod schema; mod status; mod storage; -#[derive(Debug)] -pub enum Command { - MakeItRain { - asset_public_key: String, - amount_per_transaction: u64, - number_transactions: u32, - destination_address: String, - source_address: Option, - }, -} - -fn parse_make_it_rain(src: &[&str]) -> Result { - if src.len() < 4 && 5 < src.len() { - return Err(ExitError::new( - ExitCode::CommandError, - &"Invalid arguments for make-it-rain", - )); - } - let asset_public_key = src[0].to_string(); - let amount_per_transaction = src[1] - .to_string() - .parse::() - .map_err(|e| ExitError::new(ExitCode::CommandError, &e.to_string()))?; - let number_transactions = src[2] - .to_string() - .parse::() - .map_err(|e| ExitError::new(ExitCode::CommandError, &e.to_string()))?; - let destination_address = src[3].to_string(); - let source_address = match src.len() { - 5 => Some(src[4].to_string()), - _ => None, - }; - Ok(Command::MakeItRain { - asset_public_key, - amount_per_transaction, - number_transactions, - destination_address, - source_address, - }) -} - -fn parse_command(src: &str) -> Result { - let args: Vec<_> = src.split(' ').collect(); - if args.is_empty() { - return Err(ExitError::new(ExitCode::CommandError, &"Empty command")); - } - match args.get(0) { - Some(&"make-it-rain") => parse_make_it_rain(&args[1..]), - _ => Err(ExitError::new(ExitCode::CommandError, &"Invalid command")), - } -} - -// make-it-rain -fn make_it_rain( - asset_public_key: String, - amount: u64, - number_transactions: u32, - to_address: String, - source_address: Option, - state: &ConcurrentAppState, -) -> Result<(), ExitError> { - let runtime = tokio::runtime::Builder::new_multi_thread() - .enable_all() - .build() - .expect("Failed to build a runtime!"); - let id = match runtime.block_on(commands::wallets::inner_wallets_list(state)) { - Ok(rows) => { - if rows.is_empty() { - return Err(ExitError::new( - ExitCode::CommandError, - &"There is no wallet!", - )); - } - match source_address { - Some(source_address) => 
{ - let source_uuid = Uuid::parse_str(&source_address) - .map_err(|e| ExitError::new(ExitCode::CommandError, &e.to_string()))?; - if !rows.iter().any(|wallet| wallet.id == source_uuid) { - return Err(ExitError::new(ExitCode::CommandError, &"Wallet not found!")); - } - source_uuid - } - None => rows[0].id, - } - } - Err(e) => { - return Err(ExitError::new(ExitCode::CommandError, &e.to_string())); - } - }; - - runtime - .block_on(commands::wallets::inner_wallets_unlock(id, state)) - .map_err(|e| ExitError::new(ExitCode::CommandError, &e.to_string()))?; - println!( - "Sending {} of {} to {} {} times.", - asset_public_key, amount, to_address, number_transactions - ); - for _ in 0..number_transactions { - runtime - .block_on(commands::asset_wallets::inner_asset_wallets_send_to( - asset_public_key.clone(), - amount, - to_address.clone(), - state, - )) - .map_err(|e| ExitError::new(ExitCode::CommandError, &e.to_string()))?; - } - Ok(()) -} - -pub fn process_command(command: Command, state: &ConcurrentAppState) -> Result<(), ExitError> { - println!("command {:?}", command); +pub fn process_command(command: Commands, state: &ConcurrentAppState) -> Result<(), ExitError> { match command { - Command::MakeItRain { + Commands::MakeItRain { asset_public_key, amount_per_transaction, - number_transactions: number_transaction, + number_transactions, destination_address, source_address, - } => make_it_rain( + } => cli::make_it_rain( asset_public_key, amount_per_transaction, - number_transaction, + number_transactions, destination_address, source_address, state, ), + Commands::ListAssets { offset, count } => cli::list_assets(offset, count, state), } } @@ -177,11 +68,12 @@ fn main() -> Result<(), Box> { let config = CollectiblesConfig::load_from(&cfg)?; let state = ConcurrentAppState::new(cli.common.get_base_path(), config); - if let Some(ref command) = cli.command { - let command = parse_command(command)?; + if let Some(command) = cli.command { process_command(command, &state)?; return Ok(()); } + //let (bootstrap, config, _) = init_configuration(ApplicationType::Collectibles)?; + let state = ConcurrentAppState::new(PathBuf::from("."), CollectiblesConfig::default()); tauri::Builder::default() .menu(build_menu()) diff --git a/applications/tari_collectibles/web-app/package-lock.json b/applications/tari_collectibles/web-app/package-lock.json index 5c6161f179..d0fd9762e3 100644 --- a/applications/tari_collectibles/web-app/package-lock.json +++ b/applications/tari_collectibles/web-app/package-lock.json @@ -4798,11 +4798,72 @@ "integrity": "sha1-WQxhFWsK4vTwJVcyoViyZrxWsh0=" }, "ejs": { - "version": "3.1.6", - "resolved": "https://registry.npmjs.org/ejs/-/ejs-3.1.6.tgz", - "integrity": "sha512-9lt9Zse4hPucPkoP7FHDF0LQAlGyF9JVpnClFLFH3aSSbxmyoqINRpp/9wePWJTUl4KOQwRL72Iw3InHPDkoGw==", + "version": "3.1.7", + "resolved": "https://registry.npmjs.org/ejs/-/ejs-3.1.7.tgz", + "integrity": "sha512-BIar7R6abbUxDA3bfXrO4DSgwo8I+fB5/1zgujl3HLLjwd6+9iOnrT+t3grn2qbk9vOgBubXOFwX2m9axoFaGw==", "requires": { - "jake": "^10.6.1" + "jake": "^10.8.5" + }, + "dependencies": { + "ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "requires": { + "color-convert": "^2.0.1" + } + }, + "async": { + "version": "3.2.3", + "resolved": "https://registry.npmjs.org/async/-/async-3.2.3.tgz", + "integrity": 
"sha512-spZRyzKL5l5BZQrr/6m/SqFdBN0q3OCI0f9rjfBzCMBIP4p75P620rR3gTmaksNOhmzgdxcaxdNfMy6anrbM0g==" + }, + "chalk": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz", + "integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==", + "requires": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==" + }, + "has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==" + }, + "jake": { + "version": "10.8.5", + "resolved": "https://registry.npmjs.org/jake/-/jake-10.8.5.tgz", + "integrity": "sha512-sVpxYeuAhWt0OTWITwT98oyV0GsXyMlXCF+3L1SuafBVUIr/uILGRB+NqwkzhgXKvoJpDIpQvqkUALgdmQsQxw==", + "requires": { + "async": "^3.2.3", + "chalk": "^4.0.2", + "filelist": "^1.0.1", + "minimatch": "^3.0.4" + } + }, + "supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "requires": { + "has-flag": "^4.0.0" + } + } } }, "electron-to-chromium": { @@ -6782,67 +6843,6 @@ "istanbul-lib-report": "^3.0.0" } }, - "jake": { - "version": "10.8.4", - "resolved": "https://registry.npmjs.org/jake/-/jake-10.8.4.tgz", - "integrity": "sha512-MtWeTkl1qGsWUtbl/Jsca/8xSoK3x0UmS82sNbjqxxG/de/M/3b1DntdjHgPMC50enlTNwXOCRqPXLLt5cCfZA==", - "requires": { - "async": "0.9.x", - "chalk": "^4.0.2", - "filelist": "^1.0.1", - "minimatch": "^3.0.4" - }, - "dependencies": { - "ansi-styles": { - "version": "4.3.0", - "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", - "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", - "requires": { - "color-convert": "^2.0.1" - } - }, - "async": { - "version": "0.9.2", - "resolved": "https://registry.npmjs.org/async/-/async-0.9.2.tgz", - "integrity": "sha1-rqdNXmHB+JlhO/ZL2mbUx48v0X0=" - }, - "chalk": { - "version": "4.1.2", - "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz", - "integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==", - "requires": { - "ansi-styles": "^4.1.0", - "supports-color": "^7.1.0" - } - }, - "color-convert": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", - "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", - "requires": { - "color-name": "~1.1.4" - } - }, - "color-name": { - "version": "1.1.4", - "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", - "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==" - }, - "has-flag": { - "version": "4.0.0", - "resolved": 
"https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", - "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==" - }, - "supports-color": { - "version": "7.2.0", - "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", - "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", - "requires": { - "has-flag": "^4.0.0" - } - } - } - }, "jest": { "version": "27.5.1", "resolved": "https://registry.npmjs.org/jest/-/jest-27.5.1.tgz", diff --git a/applications/tari_console_wallet/Cargo.toml b/applications/tari_console_wallet/Cargo.toml index bd68d76e62..7061ba7981 100644 --- a/applications/tari_console_wallet/Cargo.toml +++ b/applications/tari_console_wallet/Cargo.toml @@ -7,7 +7,7 @@ license = "BSD-3-Clause" [dependencies] tari_wallet = { path = "../../base_layer/wallet", features = ["bundled_sqlite"] } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_common = { path = "../../common" } tari_app_utilities = { path = "../tari_app_utilities" } tari_comms = { path = "../../comms/core" } @@ -18,7 +18,7 @@ tari_p2p = { path = "../../base_layer/p2p", features = ["auto-update"] } tari_app_grpc = { path = "../tari_app_grpc" } tari_shutdown = { path = "../../infrastructure/shutdown" } tari_key_manager = { path = "../../base_layer/key_manager" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } # Uncomment for tokio tracing via tokio-console (needs "tracing" featurs) #console-subscriber = "0.1.3" @@ -28,7 +28,7 @@ tokio = { version = "1.14", default-features = false, features = ["signal", "syn sha2 = "0.9.5" digest = "0.9.0" -clap = { version = "3.1.1", features = ["derive"] } +clap = { version = "3.1.1", features = ["derive", "env"] } config = "0.13.0" chrono = { version = "0.4.19", default-features = false } bitflags = "1.2.1" diff --git a/applications/tari_console_wallet/src/automation/command_parser.rs b/applications/tari_console_wallet/src/automation/command_parser.rs index a39ab606f1..507a83d89b 100644 --- a/applications/tari_console_wallet/src/automation/command_parser.rs +++ b/applications/tari_console_wallet/src/automation/command_parser.rs @@ -32,7 +32,7 @@ use tari_app_utilities::utilities::{parse_emoji_id_or_public_key, parse_hash}; use tari_common_types::types::PublicKey; use tari_comms::multiaddr::Multiaddr; use tari_core::transactions::tari_amount::MicroTari; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use crate::automation::{commands::WalletCommand, error::ParseError}; diff --git a/applications/tari_console_wallet/src/cli.rs b/applications/tari_console_wallet/src/cli.rs index a9725e7f2d..1c47bfc0c1 100644 --- a/applications/tari_console_wallet/src/cli.rs +++ b/applications/tari_console_wallet/src/cli.rs @@ -39,7 +39,7 @@ pub(crate) struct Cli { /// Supply the password for the console wallet. It's very bad security practice to provide the password on the /// command line, since it's visible using `ps ax` from anywhere on the system, so always use the env var where /// possible. 
- #[clap(long)] // , env = "TARI_WALLET_PASSWORD")] + #[clap(long, env = "TARI_WALLET_PASSWORD", hide_env_values = true)] pub password: Option, /// Change the password for the console wallet #[clap(long, alias = "update-password")] @@ -69,7 +69,7 @@ pub(crate) struct Cli { #[clap(long, alias = "auto-exit")] pub command_mode_auto_exit: bool, /// Supply a network (overrides existing configuration) - #[clap(long, alias = "network", default_value = DEFAULT_NETWORK)] + #[clap(long, default_value = DEFAULT_NETWORK, env = "TARI_NETWORK")] pub network: String, } diff --git a/applications/tari_console_wallet/src/grpc/wallet_grpc_server.rs b/applications/tari_console_wallet/src/grpc/wallet_grpc_server.rs index f698314774..3dbe03c1e1 100644 --- a/applications/tari_console_wallet/src/grpc/wallet_grpc_server.rs +++ b/applications/tari_console_wallet/src/grpc/wallet_grpc_server.rs @@ -87,8 +87,8 @@ use tari_core::transactions::{ tari_amount::MicroTari, transaction_components::{OutputFeatures, UnblindedOutput}, }; -use tari_crypto::{ristretto::RistrettoPublicKey, tari_utilities::Hashable}; -use tari_utilities::{hex::Hex, ByteArray}; +use tari_crypto::ristretto::RistrettoPublicKey; +use tari_utilities::{hex::Hex, ByteArray, Hashable}; use tari_wallet::{ connectivity_service::{OnlineStatus, WalletConnectivityInterface}, output_manager_service::handle::OutputManagerHandle, diff --git a/applications/tari_console_wallet/src/init/mod.rs b/applications/tari_console_wallet/src/init/mod.rs index 3ae8aee7ab..dc8c82802d 100644 --- a/applications/tari_console_wallet/src/init/mod.rs +++ b/applications/tari_console_wallet/src/init/mod.rs @@ -555,10 +555,18 @@ pub(crate) fn boot(cli: &Cli, wallet_config: &WalletConfig) -> Result::new(); diff --git a/applications/tari_console_wallet/src/main.rs b/applications/tari_console_wallet/src/main.rs index 857da1522c..8641cee042 100644 --- a/applications/tari_console_wallet/src/main.rs +++ b/applications/tari_console_wallet/src/main.rs @@ -119,10 +119,13 @@ fn main_inner() -> Result<(), ExitError> { consts::APP_VERSION ); - // get command line password if provided - let arg_password = cli.password.clone(); + let password = cli + .password + .as_ref() + .or_else(|| config.wallet.password.as_ref()) + .map(|s| s.to_owned()); - if arg_password.is_none() { + if password.is_none() { tari_splash_screen("Console Wallet"); } @@ -132,7 +135,6 @@ fn main_inner() -> Result<(), ExitError> { let recovery_seed = get_recovery_seed(boot_mode, &cli)?; // get command line password if provided - let arg_password = cli.password.clone(); let seed_words_file_name = cli.seed_words_file_name.clone(); let mut shutdown = Shutdown::new(); @@ -140,7 +142,7 @@ fn main_inner() -> Result<(), ExitError> { if cli.change_password { info!(target: LOG_TARGET, "Change password requested."); - return runtime.block_on(change_password(&config, arg_password, shutdown_signal)); + return runtime.block_on(change_password(&config, password, shutdown_signal)); } // Run our own Tor instance, if configured @@ -159,7 +161,7 @@ fn main_inner() -> Result<(), ExitError> { // initialize wallet let mut wallet = runtime.block_on(init_wallet( &config, - arg_password, + password, seed_words_file_name, recovery_seed, shutdown_signal, diff --git a/applications/tari_console_wallet/src/recovery.rs b/applications/tari_console_wallet/src/recovery.rs index 16b795e642..f9aa745eb9 100644 --- a/applications/tari_console_wallet/src/recovery.rs +++ b/applications/tari_console_wallet/src/recovery.rs @@ -25,9 +25,9 @@ use futures::FutureExt; use 
log::*; use rustyline::Editor; use tari_common::exit_codes::{ExitCode, ExitError}; -use tari_crypto::tari_utilities::hex::Hex; use tari_key_manager::{cipher_seed::CipherSeed, mnemonic::Mnemonic}; use tari_shutdown::Shutdown; +use tari_utilities::hex::Hex; use tari_wallet::{ storage::sqlite_db::wallet::WalletSqliteDatabase, utxo_scanner_service::{handle::UtxoScannerEvent, service::UtxoScannerService}, diff --git a/applications/tari_console_wallet/src/ui/components/assets_tab.rs b/applications/tari_console_wallet/src/ui/components/assets_tab.rs index 1f04ac3490..5732e0b7e5 100644 --- a/applications/tari_console_wallet/src/ui/components/assets_tab.rs +++ b/applications/tari_console_wallet/src/ui/components/assets_tab.rs @@ -20,7 +20,7 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tui::{ backend::Backend, layout::{Constraint, Rect}, diff --git a/applications/tari_console_wallet/src/ui/components/network_tab.rs b/applications/tari_console_wallet/src/ui/components/network_tab.rs index eeb58fbbff..62b8200da1 100644 --- a/applications/tari_console_wallet/src/ui/components/network_tab.rs +++ b/applications/tari_console_wallet/src/ui/components/network_tab.rs @@ -5,7 +5,7 @@ use std::collections::HashMap; use log::*; use tari_comms::peer_manager::Peer; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tokio::runtime::Handle; use tui::{ backend::Backend, diff --git a/applications/tari_console_wallet/src/ui/components/tokens_component.rs b/applications/tari_console_wallet/src/ui/components/tokens_component.rs index 2ca8da044a..a90ac18fc0 100644 --- a/applications/tari_console_wallet/src/ui/components/tokens_component.rs +++ b/applications/tari_console_wallet/src/ui/components/tokens_component.rs @@ -20,7 +20,7 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tui::{ backend::Backend, layout::{Constraint, Rect}, diff --git a/applications/tari_console_wallet/src/ui/state/app_state.rs b/applications/tari_console_wallet/src/ui/state/app_state.rs index 98163c2fe4..e11c551d7e 100644 --- a/applications/tari_console_wallet/src/ui/state/app_state.rs +++ b/applications/tari_console_wallet/src/ui/state/app_state.rs @@ -47,8 +47,9 @@ use tari_core::transactions::{ tari_amount::{uT, MicroTari}, weight::TransactionWeight, }; -use tari_crypto::{ristretto::RistrettoPublicKey, tari_utilities::hex::Hex}; +use tari_crypto::ristretto::RistrettoPublicKey; use tari_shutdown::ShutdownSignal; +use tari_utilities::hex::Hex; use tari_wallet::{ assets::Asset, base_node_service::{handle::BaseNodeEventReceiver, service::BaseNodeState}, diff --git a/applications/tari_console_wallet/src/ui/ui_error.rs b/applications/tari_console_wallet/src/ui/ui_error.rs index ecc4a8dfc8..d8c759858b 100644 --- a/applications/tari_console_wallet/src/ui/ui_error.rs +++ b/applications/tari_console_wallet/src/ui/ui_error.rs @@ -2,7 +2,7 @@ // SPDX-License-Identifier: BSD-3-Clause use tari_comms::connectivity::ConnectivityError; -use tari_crypto::tari_utilities::hex::HexError; +use tari_utilities::hex::HexError; use tari_wallet::{ contacts_service::error::ContactsServiceError, error::{WalletError, WalletStorageError}, diff --git a/applications/tari_console_wallet/src/utils/db.rs b/applications/tari_console_wallet/src/utils/db.rs index 342353f576..41d4759bb4 100644 --- a/applications/tari_console_wallet/src/utils/db.rs +++ b/applications/tari_console_wallet/src/utils/db.rs @@ -26,7 +26,7 @@ use tari_comms::{ multiaddr::Multiaddr, peer_manager::{NodeId, Peer, PeerFeatures, PeerFlags}, }; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tari_wallet::WalletSqlite; pub const LOG_TARGET: &str = "wallet::utils::db"; diff --git a/applications/tari_explorer/routes/index.js b/applications/tari_explorer/routes/index.js index 4088af8207..9818e1eb0e 100644 --- a/applications/tari_explorer/routes/index.js +++ b/applications/tari_explorer/routes/index.js @@ -116,6 +116,7 @@ router.get("/", async function (req, res) { moneroTimes: getBlockTimes(last100Headers, "0"), shaTimes: getBlockTimes(last100Headers, "1"), currentHashRate: totalHashRates[totalHashRates.length - 1], + totalHashRates, currentShaHashRate: shaHashRates[shaHashRates.length - 1], shaHashRates, currentMoneroHashRate: moneroHashRates[moneroHashRates.length - 1], diff --git a/applications/tari_explorer/views/index.hbs b/applications/tari_explorer/views/index.hbs index f7a60b3a04..e897c82c1d 100644 --- a/applications/tari_explorer/views/index.hbs +++ b/applications/tari_explorer/views/index.hbs @@ -97,6 +97,8 @@ Current total estimated Hash Rate: {{this.currentHashRate}} H/s +
{{chart this.totalHashRates 15}}
+      
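Note on the `report_error(report_error_flag, ...)` wrapper that the base node gRPC hunks above thread through every handler: the diff shows only the call sites, not the helper itself. The sketch below illustrates the kind of gate those call sites imply. The signature is inferred from the calls in the diff; the body (pass the status through when the flag is set, otherwise strip the detail) is an assumption about intent, not the actual Tari implementation.

    use tonic::Status;

    /// Hypothetical sketch of the error gate implied by the call sites above.
    /// When `report_error_flag` is true, the detailed status goes back to the
    /// client unchanged; otherwise the gRPC code is kept but the message is
    /// replaced with a generic one so internal details are not leaked.
    fn report_error(report_error_flag: bool, status: Status) -> Status {
        if report_error_flag {
            status
        } else {
            Status::new(status.code(), "Error has occurred. Details are obscured.")
        }
    }

    fn main() {
        // Mirrors a call site from the diff:
        //   .map_err(|e| report_error(report_error_flag, Status::internal(e.to_string())))?
        let gated = report_error(false, Status::internal("db lookup failed: row 42 missing"));
        println!("{}", gated); // prints the generic message, not the internal detail
    }

Whatever the exact body, the handlers stay uniform: every `Status` they surface is routed through the same gate, so exposing or obscuring detailed gRPC errors becomes a single configuration decision rather than a per-method choice.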
diff --git a/applications/tari_merge_mining_proxy/Cargo.toml b/applications/tari_merge_mining_proxy/Cargo.toml index 930b55b979..471f8fd654 100644 --- a/applications/tari_merge_mining_proxy/Cargo.toml +++ b/applications/tari_merge_mining_proxy/Cargo.toml @@ -17,15 +17,15 @@ tari_common = { path = "../../common" } tari_comms = { path = "../../comms/core" } tari_core = { path = "../../base_layer/core", default-features = false, features = ["transactions"] } tari_app_utilities = { path = "../tari_app_utilities" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } anyhow = "1.0.53" crossterm = { version = "0.17" } bincode = "1.3.1" bytes = "1.1" chrono = { version = "0.4.6", default-features = false } -clap = { version = "3.1.1", features = ["derive"] } +clap = { version = "3.1.1", features = ["derive", "env"] } config = { version = "0.13.0" } derivative = "2.2.0" env_logger = { version = "0.7.1", optional = true } diff --git a/applications/tari_merge_mining_proxy/src/cli.rs b/applications/tari_merge_mining_proxy/src/cli.rs index aebc520995..fa90b1abb4 100644 --- a/applications/tari_merge_mining_proxy/src/cli.rs +++ b/applications/tari_merge_mining_proxy/src/cli.rs @@ -32,7 +32,7 @@ pub(crate) struct Cli { #[clap(flatten)] pub common: CommonCliArgs, /// Supply a network (overrides existing configuration) - #[clap(long, alias = "network", default_value = DEFAULT_NETWORK)] + #[clap(long, default_value = DEFAULT_NETWORK, env = "TARI_NETWORK")] pub network: String, } diff --git a/applications/tari_miner/Cargo.toml b/applications/tari_miner/Cargo.toml index 67ad9ed845..0302f0e9ee 100644 --- a/applications/tari_miner/Cargo.toml +++ b/applications/tari_miner/Cargo.toml @@ -13,8 +13,8 @@ tari_common = { path = "../../common" } tari_comms = { path = "../../comms/core" } tari_app_utilities = { path = "../tari_app_utilities"} tari_app_grpc = { path = "../tari_app_grpc" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } crossterm = { version = "0.17" } clap = { version = "3.1.1", features = ["derive"] } diff --git a/applications/tari_miner/src/difficulty.rs b/applications/tari_miner/src/difficulty.rs index ca601f30ff..cca3a3dcbe 100644 --- a/applications/tari_miner/src/difficulty.rs +++ b/applications/tari_miner/src/difficulty.rs @@ -143,7 +143,9 @@ pub mod test { pub fn get_header() -> (BlockHeader, CoreBlockHeader) { let mut header = CoreBlockHeader::new(0); - header.timestamp = DateTime::::from_utc(NaiveDate::from_ymd(2000, 1, 1).and_hms(1, 1, 1), Utc).into(); + header.timestamp = + (DateTime::::from_utc(NaiveDate::from_ymd(2000, 1, 1).and_hms(1, 1, 1), Utc).timestamp() as u64) + .into(); header.pow.pow_algo = tari_core::proof_of_work::PowAlgorithm::Sha3; (header.clone().into(), header) } diff --git a/applications/tari_miner/src/main.rs b/applications/tari_miner/src/main.rs index d973457a25..25dd267c43 100644 --- a/applications/tari_miner/src/main.rs +++ 
b/applications/tari_miner/src/main.rs @@ -43,7 +43,8 @@ use tari_common::{ }; use tari_comms::utils::multiaddr::multiaddr_to_socketaddr; use tari_core::blocks::BlockHeader; -use tari_crypto::{ristretto::RistrettoPublicKey, tari_utilities::hex::Hex}; +use tari_crypto::ristretto::RistrettoPublicKey; +use tari_utilities::hex::Hex; use tokio::{runtime::Runtime, time::sleep}; use tonic::transport::Channel; use utils::{coinbase_request, extract_outputs_and_kernels}; diff --git a/applications/tari_validator_node/Cargo.toml b/applications/tari_validator_node/Cargo.toml index 7ebcd2e8d5..104a58bd36 100644 --- a/applications/tari_validator_node/Cargo.toml +++ b/applications/tari_validator_node/Cargo.toml @@ -14,7 +14,7 @@ tari_common = { path = "../../common" } tari_comms = { path = "../../comms/core" } tari_comms_dht = { path = "../../comms/dht" } tari_comms_rpc_macros = { path = "../../comms/rpc_macros" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_mmr = { path = "../../base_layer/mmr" } tari_p2p = { path = "../../base_layer/p2p" } tari_service_framework = { path = "../../base_layer/service_framework" } @@ -29,7 +29,7 @@ tari_common_types = { path = "../../base_layer/common_types" } anyhow = "1.0.53" async-trait = "0.1.50" blake2 = "0.9.2" -clap = "3.1.8" +clap = { version = "3.1.8", features = ["env"] } config = "0.13.0" digest = "0.9.0" futures = { version = "^0.3.1" } diff --git a/applications/tari_validator_node/src/cli.rs b/applications/tari_validator_node/src/cli.rs index bd6922e017..6306a405d0 100644 --- a/applications/tari_validator_node/src/cli.rs +++ b/applications/tari_validator_node/src/cli.rs @@ -20,28 +20,6 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Copyright 2022. The Tari Project -// -// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the -// following conditions are met: -// -// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following -// disclaimer. -// -// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the -// following disclaimer in the documentation and/or other materials provided with the distribution. -// -// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote -// products derived from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, -// INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, -// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE -// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- use clap::Parser; use tari_app_utilities::common_cli_args::CommonCliArgs; @@ -57,7 +35,7 @@ pub(crate) struct Cli { #[clap(long, aliases = &["tracing", "enable-tracing"])] pub tracing_enabled: bool, /// Supply a network (overrides existing configuration) - #[clap(long, alias = "network", default_value = DEFAULT_NETWORK)] + #[clap(long, default_value = DEFAULT_NETWORK, env = "TARI_NETWORK")] pub network: String, } diff --git a/applications/tari_web_extension/package-lock.json b/applications/tari_web_extension/package-lock.json index 781f9133db..0023e77a77 100644 --- a/applications/tari_web_extension/package-lock.json +++ b/applications/tari_web_extension/package-lock.json @@ -2738,9 +2738,9 @@ "integrity": "sha1-9wtzXGvKGlycItmCw+Oef+ujva0=" }, "async": { - "version": "2.6.3", - "resolved": "https://registry.npmjs.org/async/-/async-2.6.3.tgz", - "integrity": "sha512-zflvls11DCy+dQWzTW2dzuilv8Z5X/pjfmZOWba6TNIVDm+2UDaJmXSOXlasHKfNBs8oo3M0aT50fDEWfKZjXg==", + "version": "2.6.4", + "resolved": "https://registry.npmjs.org/async/-/async-2.6.4.tgz", + "integrity": "sha512-mzo5dfJYwAn29PeiJ0zvwTo04zj8HDJj0Mn8TD7sno7q12prdbnasKJHhkm2c1LgrhlJ0teaea8860oxi51mGA==", "requires": { "lodash": "^4.17.14" } @@ -4078,11 +4078,29 @@ "integrity": "sha1-WQxhFWsK4vTwJVcyoViyZrxWsh0=" }, "ejs": { - "version": "3.1.6", - "resolved": "https://registry.npmjs.org/ejs/-/ejs-3.1.6.tgz", - "integrity": "sha512-9lt9Zse4hPucPkoP7FHDF0LQAlGyF9JVpnClFLFH3aSSbxmyoqINRpp/9wePWJTUl4KOQwRL72Iw3InHPDkoGw==", + "version": "3.1.7", + "resolved": "https://registry.npmjs.org/ejs/-/ejs-3.1.7.tgz", + "integrity": "sha512-BIar7R6abbUxDA3bfXrO4DSgwo8I+fB5/1zgujl3HLLjwd6+9iOnrT+t3grn2qbk9vOgBubXOFwX2m9axoFaGw==", "requires": { - "jake": "^10.6.1" + "jake": "^10.8.5" + }, + "dependencies": { + "async": { + "version": "3.2.3", + "resolved": "https://registry.npmjs.org/async/-/async-3.2.3.tgz", + "integrity": "sha512-spZRyzKL5l5BZQrr/6m/SqFdBN0q3OCI0f9rjfBzCMBIP4p75P620rR3gTmaksNOhmzgdxcaxdNfMy6anrbM0g==" + }, + "jake": { + "version": "10.8.5", + "resolved": "https://registry.npmjs.org/jake/-/jake-10.8.5.tgz", + "integrity": "sha512-sVpxYeuAhWt0OTWITwT98oyV0GsXyMlXCF+3L1SuafBVUIr/uILGRB+NqwkzhgXKvoJpDIpQvqkUALgdmQsQxw==", + "requires": { + "async": "^3.2.3", + "chalk": "^4.0.2", + "filelist": "^1.0.1", + "minimatch": "^3.0.4" + } + } } }, "electron-to-chromium": { @@ -5755,24 +5773,6 @@ "istanbul-lib-report": "^3.0.0" } }, - "jake": { - "version": "10.8.4", - "resolved": "https://registry.npmjs.org/jake/-/jake-10.8.4.tgz", - "integrity": "sha512-MtWeTkl1qGsWUtbl/Jsca/8xSoK3x0UmS82sNbjqxxG/de/M/3b1DntdjHgPMC50enlTNwXOCRqPXLLt5cCfZA==", - "requires": { - "async": "0.9.x", - "chalk": "^4.0.2", - "filelist": "^1.0.1", - "minimatch": "^3.0.4" - }, - "dependencies": { - "async": { - "version": "0.9.2", - "resolved": "https://registry.npmjs.org/async/-/async-0.9.2.tgz", - "integrity": "sha1-rqdNXmHB+JlhO/ZL2mbUx48v0X0=" - } - } - }, "jest": { "version": "27.5.1", "resolved": "https://registry.npmjs.org/jest/-/jest-27.5.1.tgz", diff --git a/applications/test_faucet/Cargo.toml b/applications/test_faucet/Cargo.toml index e0fdae643f..df93fe35d0 100644 --- a/applications/test_faucet/Cargo.toml +++ b/applications/test_faucet/Cargo.toml @@ -10,8 +10,8 @@ simd = ["tari_crypto/simd"] avx2 = ["simd"] [dependencies] -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_crypto = { git = 
"https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } tari_common_types = { path = "../../base_layer/common_types" } tari_script = { path = "../../infrastructure/tari_script" } diff --git a/base_layer/common_types/Cargo.toml b/base_layer/common_types/Cargo.toml index b55278e1b9..3307a624de 100644 --- a/base_layer/common_types/Cargo.toml +++ b/base_layer/common_types/Cargo.toml @@ -7,8 +7,8 @@ version = "0.31.1" edition = "2018" [dependencies] -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } digest = "0.9.0" lazy_static = "1.4.0" diff --git a/base_layer/common_types/src/array.rs b/base_layer/common_types/src/array.rs index ed2fcbd47a..c08ce04d2b 100644 --- a/base_layer/common_types/src/array.rs +++ b/base_layer/common_types/src/array.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +use std::cmp; + use tari_utilities::ByteArrayError; pub fn copy_into_fixed_array(elems: &[T]) -> Result<[T; SZ], ByteArrayError> { @@ -30,3 +32,12 @@ pub fn copy_into_fixed_array(elems: &[T]) -> buf.copy_from_slice(&elems[0..SZ]); Ok(buf) } + +/// Copies `SZ` elements from a slice into a fixed array of size `SZ`. If the length of the slice is less than `SZ` the +/// default value is used for the remaining elements. 
+pub fn copy_into_fixed_array_lossy(elems: &[T]) -> [T; SZ] { + let len = cmp::min(elems.len(), SZ); + let mut buf = [T::default(); SZ]; + buf[..len].copy_from_slice(&elems[..len]); + buf +} diff --git a/base_layer/common_types/src/chain_metadata.rs b/base_layer/common_types/src/chain_metadata.rs index 66118abf7a..817f7a9862 100644 --- a/base_layer/common_types/src/chain_metadata.rs +++ b/base_layer/common_types/src/chain_metadata.rs @@ -23,7 +23,7 @@ use std::fmt::{Display, Error, Formatter}; use serde::{Deserialize, Serialize}; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use crate::types::BlockHash; diff --git a/base_layer/common_types/src/types/bullet_rangeproofs.rs b/base_layer/common_types/src/types/bullet_rangeproofs.rs index 5129ecbd7b..441036d707 100644 --- a/base_layer/common_types/src/types/bullet_rangeproofs.rs +++ b/base_layer/common_types/src/types/bullet_rangeproofs.rs @@ -30,7 +30,7 @@ use serde::{ Serialize, Serializer, }; -use tari_crypto::tari_utilities::{hex::*, ByteArray, ByteArrayError, Hashable}; +use tari_utilities::{hex::*, ByteArray, ByteArrayError, Hashable}; use crate::types::HashDigest; diff --git a/base_layer/core/Cargo.toml b/base_layer/core/Cargo.toml index 75640ce004..9e8581c642 100644 --- a/base_layer/core/Cargo.toml +++ b/base_layer/core/Cargo.toml @@ -24,7 +24,7 @@ tari_common_types = { version = "^0.31", path = "../../base_layer/common_types" tari_comms = { version = "^0.31", path = "../../comms/core" } tari_comms_dht = { version = "^0.31", path = "../../comms/dht" } tari_comms_rpc_macros = { version = "^0.31", path = "../../comms/rpc_macros" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_metrics = { path = "../../infrastructure/metrics" } tari_mmr = { version = "^0.31", path = "../../base_layer/mmr", optional = true, features = ["native_bitmap"] } tari_p2p = { version = "^0.31", path = "../../base_layer/p2p" } @@ -33,7 +33,7 @@ tari_service_framework = { version = "^0.31", path = "../service_framework" } tari_shutdown = { version = "^0.31", path = "../../infrastructure/shutdown" } tari_storage = { version = "^0.31", path = "../../infrastructure/storage" } tari_test_utils = { version = "^0.31", path = "../../infrastructure/test_utils" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } async-trait = "0.1.50" bincode = "1.1.4" @@ -84,4 +84,4 @@ tari_common = { version = "^0.31", path = "../../common", features = ["build"] } [[bench]] name = "mempool" -harness = false \ No newline at end of file +harness = false diff --git a/base_layer/core/src/base_node/comms_interface/comms_request.rs b/base_layer/core/src/base_node/comms_interface/comms_request.rs index 9ba3e1fa56..0ef0a3df9f 100644 --- a/base_layer/core/src/base_node/comms_interface/comms_request.rs +++ b/base_layer/core/src/base_node/comms_interface/comms_request.rs @@ -27,7 +27,7 @@ use std::{ use serde::{Deserialize, Serialize}; use tari_common_types::types::{Commitment, HashOutput, PrivateKey, PublicKey, Signature}; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use crate::{blocks::NewBlockTemplate, chain_storage::MmrTree, proof_of_work::PowAlgorithm}; diff --git a/base_layer/core/src/base_node/comms_interface/inbound_handlers.rs 
b/base_layer/core/src/base_node/comms_interface/inbound_handlers.rs index f6496b2ef6..d83294a8bc 100644 --- a/base_layer/core/src/base_node/comms_interface/inbound_handlers.rs +++ b/base_layer/core/src/base_node/comms_interface/inbound_handlers.rs @@ -30,8 +30,7 @@ use log::*; use strum_macros::Display; use tari_common_types::types::{BlockHash, HashOutput, PublicKey}; use tari_comms::{connectivity::ConnectivityRequester, peer_manager::NodeId}; -use tari_crypto::tari_utilities::{hash::Hashable, hex::Hex}; -use tari_utilities::ByteArray; +use tari_utilities::{hash::Hashable, hex::Hex, ByteArray}; use tokio::sync::Semaphore; use crate::{ diff --git a/base_layer/core/src/base_node/rpc/service.rs b/base_layer/core/src/base_node/rpc/service.rs index db138cd9a0..586f8ab9b0 100644 --- a/base_layer/core/src/base_node/rpc/service.rs +++ b/base_layer/core/src/base_node/rpc/service.rs @@ -26,7 +26,7 @@ use std::convert::TryFrom; use log::*; use tari_common_types::types::Signature; use tari_comms::protocol::rpc::{Request, Response, RpcStatus, RpcStatusResultExt, Streaming}; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tokio::sync::mpsc; use crate::{ diff --git a/base_layer/core/src/base_node/rpc/sync_utxos_by_block_task.rs b/base_layer/core/src/base_node/rpc/sync_utxos_by_block_task.rs index 971434559e..b9efcb44a9 100644 --- a/base_layer/core/src/base_node/rpc/sync_utxos_by_block_task.rs +++ b/base_layer/core/src/base_node/rpc/sync_utxos_by_block_task.rs @@ -24,7 +24,7 @@ use std::{sync::Arc, time::Instant}; use log::*; use tari_comms::protocol::rpc::{RpcStatus, RpcStatusResultExt}; -use tari_crypto::tari_utilities::{hex::Hex, Hashable}; +use tari_utilities::{hex::Hex, Hashable}; use tokio::{sync::mpsc, task}; use crate::{ diff --git a/base_layer/core/src/base_node/state_machine_service/states/listening.rs b/base_layer/core/src/base_node/state_machine_service/states/listening.rs index 52286733d8..3bc20b30d0 100644 --- a/base_layer/core/src/base_node/state_machine_service/states/listening.rs +++ b/base_layer/core/src/base_node/state_machine_service/states/listening.rs @@ -30,7 +30,7 @@ use log::*; use num_format::{Locale, ToFormattedString}; use serde::{Deserialize, Serialize}; use tari_common_types::chain_metadata::ChainMetadata; -use tari_crypto::tari_utilities::epoch_time::EpochTime; +use tari_utilities::epoch_time::EpochTime; use tokio::sync::broadcast; use crate::{ diff --git a/base_layer/core/src/base_node/sync/rpc/service.rs b/base_layer/core/src/base_node/sync/rpc/service.rs index 61174572e2..b4f501f2a0 100644 --- a/base_layer/core/src/base_node/sync/rpc/service.rs +++ b/base_layer/core/src/base_node/sync/rpc/service.rs @@ -32,8 +32,7 @@ use tari_comms::{ protocol::rpc::{Request, Response, RpcStatus, RpcStatusResultExt, Streaming}, utils, }; -use tari_crypto::tari_utilities::hex::Hex; -use tari_utilities::Hashable; +use tari_utilities::{hex::Hex, Hashable}; use tokio::{ sync::{mpsc, RwLock}, task, diff --git a/base_layer/core/src/base_node/sync/rpc/sync_utxos_task.rs b/base_layer/core/src/base_node/sync/rpc/sync_utxos_task.rs index ac40a5f364..d971b92788 100644 --- a/base_layer/core/src/base_node/sync/rpc/sync_utxos_task.rs +++ b/base_layer/core/src/base_node/sync/rpc/sync_utxos_task.rs @@ -28,7 +28,7 @@ use tari_comms::{ protocol::rpc::{Request, RpcStatus, RpcStatusResultExt}, utils, }; -use tari_crypto::tari_utilities::{hex::Hex, Hashable}; +use tari_utilities::{hex::Hex, Hashable}; use tokio::{sync::mpsc, task}; use crate::{ diff --git 
a/base_layer/core/src/blocks/accumulated_data.rs b/base_layer/core/src/blocks/accumulated_data.rs index d2c936dd10..6510f0a209 100644 --- a/base_layer/core/src/blocks/accumulated_data.rs +++ b/base_layer/core/src/blocks/accumulated_data.rs @@ -38,9 +38,8 @@ use serde::{ Serializer, }; use tari_common_types::types::{BlindingFactor, Commitment, HashOutput}; -use tari_crypto::tari_utilities::hex::Hex; use tari_mmr::{pruned_hashset::PrunedHashSet, ArrayLike}; -use tari_utilities::Hashable; +use tari_utilities::{hex::Hex, Hashable}; use crate::{ blocks::{error::BlockError, Block, BlockHeader}, diff --git a/base_layer/core/src/blocks/block.rs b/base_layer/core/src/blocks/block.rs index 522c06c095..353fc31dc1 100644 --- a/base_layer/core/src/blocks/block.rs +++ b/base_layer/core/src/blocks/block.rs @@ -33,8 +33,7 @@ use std::{ use log::*; use serde::{Deserialize, Serialize}; use tari_common_types::types::PrivateKey; -use tari_crypto::tari_utilities::Hashable; -use tari_utilities::hex::Hex; +use tari_utilities::{hex::Hex, Hashable}; use thiserror::Error; use crate::{ @@ -170,7 +169,6 @@ impl Display for Block { } } -#[derive(Default)] pub struct BlockBuilder { header: BlockHeader, inputs: Vec, diff --git a/base_layer/core/src/blocks/block_header.rs b/base_layer/core/src/blocks/block_header.rs index f63c6ede7a..513c616193 100644 --- a/base_layer/core/src/blocks/block_header.rs +++ b/base_layer/core/src/blocks/block_header.rs @@ -39,13 +39,14 @@ use std::{ cmp::Ordering, + convert::TryFrom, fmt, fmt::{Display, Error, Formatter}, io, io::{Read, Write}, }; -use chrono::{DateTime, Utc}; +use chrono::{DateTime, NaiveDateTime, Utc}; use digest::Digest; use serde::{ de::{self, Visitor}, @@ -55,10 +56,10 @@ use serde::{ Serializer, }; use tari_common_types::{ - array::copy_into_fixed_array, + array::{copy_into_fixed_array, copy_into_fixed_array_lossy}, types::{BlindingFactor, BlockHash, HashDigest, BLOCK_HASH_LENGTH}, }; -use tari_crypto::tari_utilities::{epoch_time::EpochTime, hex::Hex, ByteArray, Hashable}; +use tari_utilities::{epoch_time::EpochTime, hex::Hex, ByteArray, Hashable}; use thiserror::Error; #[cfg(feature = "base_node")] @@ -88,7 +89,7 @@ pub enum BlockHeaderValidationError { /// The BlockHeader contains all the metadata for the block, including proof of work, a link to the previous block /// and the transaction kernels. 
-#[derive(Serialize, Deserialize, Clone, Debug, Default)] +#[derive(Serialize, Deserialize, Clone, Debug)] pub struct BlockHeader { /// Version of the block pub version: u16, @@ -241,10 +242,10 @@ impl BlockHeader { .chain(&self.timestamp) .chain(&self.input_mr) // TODO: Cleanup if/when we migrate to fixed 32-byte array type for hashes - .chain(©_into_fixed_array::<_, 32>(&self.output_mr).unwrap()) + .chain(©_into_fixed_array_lossy::<_, 32>(&self.output_mr)) .chain(&self.output_mmr_size) - .chain(& copy_into_fixed_array::<_, 32>(&self.witness_mr).unwrap()) - .chain(©_into_fixed_array::<_, 32>(&self.kernel_mr).unwrap()) + .chain(©_into_fixed_array_lossy::<_, 32>(&self.witness_mr)) + .chain(©_into_fixed_array_lossy::<_, 32>(&self.kernel_mr)) .chain(&self.kernel_mmr_size) .chain(&self.total_kernel_offset) .chain(&self.total_script_offset) @@ -257,6 +258,11 @@ impl BlockHeader { self.timestamp } + pub fn to_chrono_datetime(&self) -> DateTime { + let dt = NaiveDateTime::from_timestamp(i64::try_from(self.timestamp.as_u64()).unwrap_or(i64::MAX), 0); + DateTime::from_utc(dt, Utc) + } + #[inline] pub fn pow_algo(&self) -> PowAlgorithm { self.pow.pow_algo @@ -301,7 +307,7 @@ impl Hashable for BlockHeader { // up if we decide to migrate to a fixed 32-byte type .chain(©_into_fixed_array::<_, 32>(&self.merged_mining_hash()).unwrap()) .chain(&self.pow) - .chain(& self.nonce) + .chain(&self.nonce) .finalize().to_vec() } } @@ -317,14 +323,13 @@ impl Eq for BlockHeader {} impl Display for BlockHeader { fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), Error> { - let datetime: DateTime = self.timestamp.into(); writeln!( fmt, "Version: {}\nBlock height: {}\nPrevious block hash: {}\nTimestamp: {}", self.version, self.height, self.prev_hash.to_hex(), - datetime.to_rfc2822() + self.to_chrono_datetime().to_rfc2822() )?; writeln!( fmt, @@ -348,7 +353,7 @@ impl Display for BlockHeader { } pub(crate) mod hash_serializer { - use tari_crypto::tari_utilities::hex::Hex; + use tari_utilities::hex::Hex; use super::*; @@ -392,49 +397,14 @@ impl ConsensusEncoding for BlockHeader { fn consensus_encode(&self, writer: &mut W) -> Result { let mut written = self.version.consensus_encode(writer)?; written += self.height.consensus_encode(writer)?; - written += copy_into_fixed_array::<_, 32>(&self.prev_hash) - .map_err(|e| { - io::Error::new( - io::ErrorKind::InvalidData, - format!("Could not copy vec to 32 byte array: {}", e), - ) - })? - .consensus_encode(writer)?; + written += copy_into_fixed_array_lossy::<_, 32>(&self.prev_hash).consensus_encode(writer)?; written += self.timestamp.as_u64().consensus_encode(writer)?; - written += copy_into_fixed_array::<_, 32>(&self.output_mr) - .map_err(|e| { - io::Error::new( - io::ErrorKind::InvalidData, - format!("Could not copy vec to 32 byte array: {}", e), - ) - })? - .consensus_encode(writer)?; - written += copy_into_fixed_array::<_, 32>(&self.witness_mr) - .map_err(|e| { - io::Error::new( - io::ErrorKind::InvalidData, - format!("Could not copy vec to 32 byte array: {}", e), - ) - })? - .consensus_encode(writer)?; + written += copy_into_fixed_array_lossy::<_, 32>(&self.output_mr).consensus_encode(writer)?; + written += copy_into_fixed_array_lossy::<_, 32>(&self.witness_mr).consensus_encode(writer)?; written += self.output_mmr_size.consensus_encode(writer)?; - written += copy_into_fixed_array::<_, 32>(&self.kernel_mr) - .map_err(|e| { - io::Error::new( - io::ErrorKind::InvalidData, - format!("Could not copy vec to 32 byte array: {}", e), - ) - })? 
- .consensus_encode(writer)?; + written += copy_into_fixed_array_lossy::<_, 32>(&self.kernel_mr).consensus_encode(writer)?; written += self.kernel_mmr_size.consensus_encode(writer)?; - written += copy_into_fixed_array::<_, 32>(&self.input_mr) - .map_err(|e| { - io::Error::new( - io::ErrorKind::InvalidData, - format!("Could not copy vec to 32 byte array: {}", e), - ) - })? - .consensus_encode(writer)?; + written += copy_into_fixed_array_lossy::<_, 32>(&self.input_mr).consensus_encode(writer)?; written += self.total_kernel_offset.consensus_encode(writer)?; written += self.total_script_offset.consensus_encode(writer)?; written += self.nonce.consensus_encode(writer)?; @@ -466,9 +436,8 @@ impl ConsensusDecoding for BlockHeader { #[cfg(test)] mod test { - use std::cmp::Ordering; - use tari_crypto::tari_utilities::Hashable; + use tari_utilities::Hashable; use crate::blocks::BlockHeader; #[test] @@ -489,7 +458,7 @@ mod test { .into_iter() .map(|t| BlockHeader { timestamp: t.into(), - ..BlockHeader::default() + ..BlockHeader::new(0) }) .collect::>(); let (max, min, avg) = BlockHeader::timing_stats(&headers); @@ -505,7 +474,7 @@ mod test { .into_iter() .map(|t| BlockHeader { timestamp: t.into(), - ..BlockHeader::default() + ..BlockHeader::new(0) }) .collect::>(); let (max, min, avg) = BlockHeader::timing_stats(&headers); @@ -528,7 +497,7 @@ mod test { fn timing_one_block() { let header = BlockHeader { timestamp: 0.into(), - ..BlockHeader::default() + ..BlockHeader::new(0) }; let (max, min, avg) = BlockHeader::timing_stats(&[header]); @@ -542,9 +511,9 @@ mod test { .into_iter() .map(|t| BlockHeader { timestamp: t.into(), - ..BlockHeader::default() + ..BlockHeader::new(0) }) - .collect::>(); + .collect::>(); let (max, min, avg) = BlockHeader::timing_stats(&headers); assert_eq!(max, 60); assert_eq!(min, 60); @@ -558,33 +527,13 @@ mod test { .into_iter() .map(|t| BlockHeader { timestamp: t.into(), - ..BlockHeader::default() + ..BlockHeader::new(0) }) - .collect::>(); + .collect::>(); let (max, min, avg) = BlockHeader::timing_stats(&headers); assert_eq!(max, 60); assert_eq!(min, 60); let error_margin = f64::EPSILON; // Use machine epsilon for comparison of floats assert!((avg - 60f64).abs() < error_margin); } - - #[test] - fn compare_timestamps() { - let headers = vec![90, 90, 150] - .into_iter() - .map(|t| BlockHeader { - timestamp: t.into(), - ..BlockHeader::default() - }) - .collect::>(); - - let ordering = headers[0].timestamp.cmp(&headers[1].timestamp); - assert_eq!(ordering, Ordering::Equal); - - let ordering = headers[1].timestamp.cmp(&headers[2].timestamp); - assert_eq!(ordering, Ordering::Less); - - let ordering = headers[2].timestamp.cmp(&headers[0].timestamp); - assert_eq!(ordering, Ordering::Greater); - } } diff --git a/base_layer/core/src/blocks/historical_block.rs b/base_layer/core/src/blocks/historical_block.rs index 645b0baa58..d7adfd7a32 100644 --- a/base_layer/core/src/blocks/historical_block.rs +++ b/base_layer/core/src/blocks/historical_block.rs @@ -24,7 +24,7 @@ use std::{fmt, fmt::Display, sync::Arc}; use serde::{Deserialize, Serialize}; use tari_common_types::types::HashOutput; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use crate::blocks::{error::BlockError, Block, BlockHeader, BlockHeaderAccumulatedData, ChainBlock}; diff --git a/base_layer/core/src/blocks/new_blockheader_template.rs b/base_layer/core/src/blocks/new_blockheader_template.rs index dec066d5b3..6b6aa15a64 100644 --- a/base_layer/core/src/blocks/new_blockheader_template.rs +++ 
b/base_layer/core/src/blocks/new_blockheader_template.rs
@@ -24,7 +24,7 @@ use std::fmt::{Display, Error, Formatter};

 use serde::{Deserialize, Serialize};
 use tari_common_types::types::{BlindingFactor, BlockHash};
-use tari_crypto::tari_utilities::hex::Hex;
+use tari_utilities::hex::Hex;

 use crate::{
     blocks::block_header::{hash_serializer, BlockHeader},
diff --git a/base_layer/core/src/chain_storage/block_add_result.rs b/base_layer/core/src/chain_storage/block_add_result.rs
index 0f258ba98b..04df034701 100644
--- a/base_layer/core/src/chain_storage/block_add_result.rs
+++ b/base_layer/core/src/chain_storage/block_add_result.rs
@@ -22,7 +22,7 @@ use std::{fmt, sync::Arc};

-use tari_crypto::tari_utilities::hex::Hex;
+use tari_utilities::hex::Hex;

 use crate::blocks::ChainBlock;
diff --git a/base_layer/core/src/chain_storage/blockchain_database.rs b/base_layer/core/src/chain_storage/blockchain_database.rs
index 141083d594..a975570b32 100644
--- a/base_layer/core/src/chain_storage/blockchain_database.rs
+++ b/base_layer/core/src/chain_storage/blockchain_database.rs
@@ -38,9 +38,8 @@ use tari_common_types::{
     chain_metadata::ChainMetadata,
     types::{BlockHash, Commitment, HashDigest, HashOutput, PublicKey, Signature},
 };
-use tari_crypto::tari_utilities::{hex::Hex, ByteArray, Hashable};
 use tari_mmr::{pruned_hashset::PrunedHashSet, MerkleMountainRange, MutableMmr};
-use tari_utilities::epoch_time::EpochTime;
+use tari_utilities::{epoch_time::EpochTime, hex::Hex, ByteArray, Hashable};

 use crate::{
     blocks::{
diff --git a/base_layer/core/src/chain_storage/db_transaction.rs b/base_layer/core/src/chain_storage/db_transaction.rs
index 800b817bef..78f9bf133a 100644
--- a/base_layer/core/src/chain_storage/db_transaction.rs
+++ b/base_layer/core/src/chain_storage/db_transaction.rs
@@ -28,7 +28,7 @@ use std::{

 use croaring::Bitmap;
 use tari_common_types::types::{BlockHash, Commitment, HashOutput};
-use tari_crypto::tari_utilities::{
+use tari_utilities::{
     hex::{to_hex, Hex},
     Hashable,
 };
diff --git a/base_layer/core/src/chain_storage/lmdb_db/lmdb.rs b/base_layer/core/src/chain_storage/lmdb_db/lmdb.rs
index 63e4a52458..75e238b088 100644
--- a/base_layer/core/src/chain_storage/lmdb_db/lmdb.rs
+++ b/base_layer/core/src/chain_storage/lmdb_db/lmdb.rs
@@ -37,7 +37,7 @@ use lmdb_zero::{
 };
 use log::*;
 use serde::{de::DeserializeOwned, Serialize};
-use tari_crypto::tari_utilities::hex::to_hex;
+use tari_utilities::hex::to_hex;

 use crate::chain_storage::{
     error::ChainStorageError,
diff --git a/base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs b/base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs
index f70992c65b..636578fe1e 100644
--- a/base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs
+++ b/base_layer/core/src/chain_storage/lmdb_db/lmdb_db.rs
@@ -47,10 +47,13 @@ use tari_common_types::{
     chain_metadata::ChainMetadata,
     types::{BlockHash, Commitment, HashDigest, HashOutput, PublicKey, Signature, BLOCK_HASH_LENGTH},
 };
-use tari_crypto::tari_utilities::{hash::Hashable, hex::Hex, ByteArray};
 use tari_mmr::{Hash, MerkleMountainRange, MutableMmr};
 use tari_storage::lmdb_store::{db, LMDBBuilder, LMDBConfig, LMDBStore};
-use tari_utilities::hex::to_hex;
+use tari_utilities::{
+    hash::Hashable,
+    hex::{to_hex, Hex},
+    ByteArray,
+};

 use crate::{
     blocks::{
diff --git a/base_layer/core/src/chain_storage/pruned_output.rs b/base_layer/core/src/chain_storage/pruned_output.rs
index e505963e77..2c328f7d5e 100644
--- a/base_layer/core/src/chain_storage/pruned_output.rs
+++ b/base_layer/core/src/chain_storage/pruned_output.rs
@@ -22,7 +22,7 @@ use serde::{Deserialize, Serialize}; use tari_common_types::types::HashOutput; -use tari_crypto::tari_utilities::Hashable; +use tari_utilities::Hashable; use crate::transactions::transaction_components::TransactionOutput; diff --git a/base_layer/core/src/consensus/consensus_constants.rs b/base_layer/core/src/consensus/consensus_constants.rs index 55b56c8dc0..3f4a833fa6 100644 --- a/base_layer/core/src/consensus/consensus_constants.rs +++ b/base_layer/core/src/consensus/consensus_constants.rs @@ -27,8 +27,8 @@ use std::{ use chrono::{DateTime, Duration, Utc}; use tari_common::configuration::Network; -use tari_crypto::tari_utilities::epoch_time::EpochTime; use tari_script::script; +use tari_utilities::epoch_time::EpochTime; use crate::{ consensus::{network::NetworkConsensus, ConsensusEncodingSized}, diff --git a/base_layer/core/src/covenants/arguments.rs b/base_layer/core/src/covenants/arguments.rs index 760f8c07a0..a071b4fbbf 100644 --- a/base_layer/core/src/covenants/arguments.rs +++ b/base_layer/core/src/covenants/arguments.rs @@ -35,7 +35,7 @@ use crate::{ covenants::{ byte_codes, covenant::Covenant, - decoder::{CovenantDecodeError, CovenentReadExt}, + decoder::{CovenantDecodeError, CovenantReadExt}, encoder::CovenentWriteExt, error::CovenantError, fields::{OutputField, OutputFields}, @@ -238,7 +238,12 @@ impl Display for CovenantArg { #[cfg(test)] mod test { + use tari_common_types::types::Commitment; + use tari_script::script; + use tari_utilities::hex::from_hex; + use super::*; + use crate::{covenant, covenants::byte_codes::*}; mod require_x_impl { use super::*; @@ -260,18 +265,18 @@ mod test { } } - mod write_to { - use tari_common_types::types::Commitment; - use tari_script::script; - use tari_utilities::hex::from_hex; - + mod write_to_and_read_from { use super::*; - use crate::{covenant, covenants::byte_codes::*}; - fn test_case(arg: CovenantArg, expected: &[u8]) { + fn test_case(argument: CovenantArg, mut data: &[u8]) { let mut buf = Vec::new(); - arg.write_to(&mut buf).unwrap(); - assert_eq!(buf, expected); + argument.write_to(&mut buf).unwrap(); + assert_eq!(buf, data); + + let reader = &mut data; + let code = reader.read_next_byte_code().unwrap().unwrap(); + let arg = CovenantArg::read_from(&mut data, code).unwrap(); + assert_eq!(arg, argument); } #[test] diff --git a/base_layer/core/src/covenants/byte_codes.rs b/base_layer/core/src/covenants/byte_codes.rs index d8e1ee910a..176fd53943 100644 --- a/base_layer/core/src/covenants/byte_codes.rs +++ b/base_layer/core/src/covenants/byte_codes.rs @@ -22,8 +22,21 @@ //---------------------------------- ARG byte codes --------------------------------------------// pub(super) fn is_valid_arg_code(code: u8) -> bool { - (0x01..=0x09).contains(&code) + ALL_ARGS.contains(&code) } + +pub(super) const ALL_ARGS: [u8; 9] = [ + ARG_HASH, + ARG_PUBLIC_KEY, + ARG_COMMITMENT, + ARG_TARI_SCRIPT, + ARG_COVENANT, + ARG_UINT, + ARG_OUTPUT_FIELD, + ARG_OUTPUT_FIELDS, + ARG_BYTES, +]; + pub const ARG_HASH: u8 = 0x01; pub const ARG_PUBLIC_KEY: u8 = 0x02; pub const ARG_COMMITMENT: u8 = 0x03; @@ -37,9 +50,22 @@ pub const ARG_BYTES: u8 = 0x09; //---------------------------------- FILTER byte codes --------------------------------------------// pub(super) fn is_valid_filter_code(code: u8) -> bool { - (0x20..=0x24).contains(&code) || (0x30..=0x34).contains(&code) + ALL_FILTERS.contains(&code) } +pub(super) const ALL_FILTERS: [u8; 10] = [ + FILTER_IDENTITY, + FILTER_AND, + FILTER_OR, + FILTER_XOR, + FILTER_NOT, + FILTER_OUTPUT_HASH_EQ, + 
FILTER_FIELDS_PRESERVED, + FILTER_FIELDS_HASHED_EQ, + FILTER_FIELD_EQ, + FILTER_ABSOLUTE_HEIGHT, +]; + pub const FILTER_IDENTITY: u8 = 0x20; pub const FILTER_AND: u8 = 0x21; pub const FILTER_OR: u8 = 0x22; @@ -63,3 +89,44 @@ pub const FIELD_FEATURES_MATURITY: u8 = 0x06; pub const FIELD_FEATURES_UNIQUE_ID: u8 = 0x07; pub const FIELD_FEATURES_PARENT_PUBLIC_KEY: u8 = 0x08; pub const FIELD_FEATURES_METADATA: u8 = 0x09; + +#[cfg(test)] +mod tests { + use super::*; + + mod is_valid_filter_code { + use super::*; + + #[test] + fn it_returns_true_for_all_filter_codes() { + ALL_FILTERS.iter().for_each(|code| { + assert!(is_valid_filter_code(*code)); + }); + } + + #[test] + fn it_returns_false_for_all_arg_codes() { + ALL_ARGS.iter().for_each(|code| { + assert!(!is_valid_filter_code(*code)); + }); + } + } + + mod is_valid_arg_code { + use super::*; + + #[test] + fn it_returns_false_for_all_filter_codes() { + ALL_FILTERS.iter().for_each(|code| { + assert!(!is_valid_arg_code(*code)); + }); + } + + #[test] + fn it_returns_true_for_all_arg_codes() { + ALL_ARGS.iter().for_each(|code| { + assert!(is_valid_arg_code(*code)); + }); + } + } +} diff --git a/base_layer/core/src/covenants/context.rs b/base_layer/core/src/covenants/context.rs index c9df25e984..44b558ec93 100644 --- a/base_layer/core/src/covenants/context.rs +++ b/base_layer/core/src/covenants/context.rs @@ -51,7 +51,7 @@ impl<'a> CovenantContext<'a> { pub fn next_arg(&mut self) -> Result { match self.tokens.next().ok_or(CovenantError::UnexpectedEndOfTokens)? { - CovenantToken::Arg(arg) => Ok(arg), + CovenantToken::Arg(arg) => Ok(*arg), CovenantToken::Filter(_) => Err(CovenantError::ExpectedArgButGotFilter), } } diff --git a/base_layer/core/src/covenants/decoder.rs b/base_layer/core/src/covenants/decoder.rs index 5d72dd25f1..becf41cfb2 100644 --- a/base_layer/core/src/covenants/decoder.rs +++ b/base_layer/core/src/covenants/decoder.rs @@ -81,12 +81,12 @@ pub enum CovenantDecodeError { Io(#[from] io::Error), } -pub(super) trait CovenentReadExt: io::Read { +pub(super) trait CovenantReadExt: io::Read { fn read_next_byte_code(&mut self) -> Result, io::Error>; fn read_variable_length_bytes(&mut self, size: usize) -> Result, io::Error>; } -impl CovenentReadExt for R { +impl CovenantReadExt for R { fn read_next_byte_code(&mut self) -> Result, io::Error> { let mut buf = [0u8; 1]; loop { @@ -127,7 +127,12 @@ mod test { use super::*; use crate::{ covenant, - covenants::{arguments::CovenantArg, fields::OutputField, filters::CovenantFilter}, + covenants::{ + arguments::CovenantArg, + byte_codes::ARG_OUTPUT_FIELD, + fields::OutputField, + filters::CovenantFilter, + }, }; #[test] @@ -136,6 +141,28 @@ mod test { assert!(CovenantTokenDecoder::new(&mut &buf[..]).next().is_none()); } + #[test] + fn it_ends_after_an_error() { + let buf = &[0xffu8]; + let mut reader = &buf[..]; + let mut decoder = CovenantTokenDecoder::new(&mut reader); + assert!(matches!(decoder.next(), Some(Err(_)))); + assert!(decoder.next().is_none()); + } + + #[test] + fn it_returns_an_error_if_arg_expected() { + let buf = &[ARG_OUTPUT_FIELD]; + let mut reader = &buf[..]; + let mut decoder = CovenantTokenDecoder::new(&mut reader); + + assert!(matches!( + decoder.next(), + Some(Err(CovenantDecodeError::UnexpectedEof { .. 
})) + )); + assert!(decoder.next().is_none()); + } + #[test] fn it_decodes_from_well_formed_bytes() { let hash = from_hex("53563b674ba8e5166adb57afa8355bcf2ee759941eef8f8959b802367c2558bd").unwrap(); @@ -171,4 +198,21 @@ mod test { assert!(decoder.next().is_none()); } + + mod covenant_read_ext { + use super::*; + + #[test] + fn it_reads_bytes_with_length_prefix() { + let data = vec![0x03u8, 0x01, 0x02, 0x03]; + let bytes = CovenantReadExt::read_variable_length_bytes(&mut data.as_slice(), 3).unwrap(); + assert_eq!(bytes, [1u8, 2, 3]); + } + + #[test] + fn it_errors_if_len_byte_exceeds_maximum() { + let data = vec![0x02, 0x01]; + CovenantReadExt::read_variable_length_bytes(&mut data.as_slice(), 1).unwrap_err(); + } + } } diff --git a/base_layer/core/src/covenants/encoder.rs b/base_layer/core/src/covenants/encoder.rs index f18c2e372c..48bcd93b1c 100644 --- a/base_layer/core/src/covenants/encoder.rs +++ b/base_layer/core/src/covenants/encoder.rs @@ -52,3 +52,63 @@ impl CovenentWriteExt for W { Ok(1) } } + +#[cfg(test)] +mod tests { + + use super::*; + use crate::{ + covenant, + covenants::{ + byte_codes::{ARG_HASH, ARG_OUTPUT_FIELD, FILTER_AND, FILTER_FIELD_EQ, FILTER_IDENTITY, FILTER_OR}, + OutputField, + }, + }; + + #[test] + fn it_encodes_empty_tokens() { + let encoder = CovenantTokenEncoder::new(&[]); + let mut buf = Vec::::new(); + let written = encoder.write_to(&mut buf).unwrap(); + assert_eq!(buf, [] as [u8; 0]); + assert_eq!(written, 0); + } + + #[test] + fn it_encodes_tokens_correctly() { + let covenant = covenant!(and(identity(), or(identity()))); + let encoder = CovenantTokenEncoder::new(covenant.tokens()); + let mut buf = Vec::::new(); + let written = encoder.write_to(&mut buf).unwrap(); + assert_eq!(buf, [FILTER_AND, FILTER_IDENTITY, FILTER_OR, FILTER_IDENTITY]); + assert_eq!(written, 4); + } + + #[test] + fn it_encodes_args_correctly() { + let dummy = [0u8; 32]; + let covenant = covenant!(field_eq(@field::features, @hash(dummy))); + let encoder = CovenantTokenEncoder::new(covenant.tokens()); + let mut buf = Vec::::new(); + let written = encoder.write_to(&mut buf).unwrap(); + assert_eq!(buf[..4], [ + FILTER_FIELD_EQ, + ARG_OUTPUT_FIELD, + OutputField::Features.as_byte(), + ARG_HASH + ]); + assert_eq!(buf[4..], [0u8; 32]); + assert_eq!(written, 36); + } + + mod covenant_write_ext { + use super::*; + + #[test] + fn it_writes_a_single_byte() { + let mut buf = Vec::new(); + buf.write_u8_fixed(123u8).unwrap(); + assert_eq!(buf, vec![123u8]); + } + } +} diff --git a/base_layer/core/src/covenants/error.rs b/base_layer/core/src/covenants/error.rs index 201d71fb3d..4eacf2775d 100644 --- a/base_layer/core/src/covenants/error.rs +++ b/base_layer/core/src/covenants/error.rs @@ -40,6 +40,4 @@ pub enum CovenantError { RemainingTokens, #[error("Invalid argument for filter {filter}: {details}")] InvalidArgument { filter: &'static str, details: String }, - #[error("Unsupported argument {arg}: {details}")] - UnsupportedArgument { arg: &'static str, details: String }, } diff --git a/base_layer/core/src/covenants/fields.rs b/base_layer/core/src/covenants/fields.rs index 027b957177..9b99bbf72d 100644 --- a/base_layer/core/src/covenants/fields.rs +++ b/base_layer/core/src/covenants/fields.rs @@ -29,13 +29,13 @@ use std::{ use digest::Digest; use integer_encoding::VarIntWriter; -use tari_common_types::types::Challenge; +use tari_common_types::types::HashDigest; use crate::{ consensus::ToConsensusBytes, covenants::{ byte_codes, - decoder::{CovenantDecodeError, CovenentReadExt}, + 
decoder::{CovenantDecodeError, CovenantReadExt}, encoder::CovenentWriteExt, error::CovenantError, }, @@ -162,17 +162,17 @@ impl OutputField { } pub fn is_eq(self, output: &TransactionOutput, val: &T) -> Result { - use OutputField::{Features, FeaturesParentPublicKey, FeaturesUniqueId}; + use OutputField::{FeaturesParentPublicKey, FeaturesUniqueId}; match self { // Handle edge cases FeaturesParentPublicKey | FeaturesUniqueId => match self.get_field_value_ref::>(output) { Some(Some(field_val)) => Ok(field_val == val), - _ => Ok(false), + Some(None) => Ok(false), + None => Err(CovenantError::InvalidArgument { + filter: "is_eq", + details: format!("Invalid type for field {}", self), + }), }, - Features => Err(CovenantError::UnsupportedArgument { - arg: "features", - details: "OutputFeatures is not supported for operation is_eq".to_string(), - }), _ => match self.get_field_value_ref::(output) { Some(field_val) => Ok(field_val == val), None => Err(CovenantError::InvalidArgument { @@ -304,10 +304,10 @@ impl OutputFields { self.fields.is_empty() } - pub fn construct_challenge_from(&self, output: &TransactionOutput) -> Challenge { - let mut challenge = Challenge::new(); + pub fn construct_challenge_from(&self, output: &TransactionOutput) -> HashDigest { + let mut challenge = HashDigest::new(); for field in &self.fields { - challenge = challenge.chain(field.get_field_value_bytes(output)); + challenge.update(field.get_field_value_bytes(output)); } challenge } @@ -332,26 +332,221 @@ impl FromIterator for OutputFields { #[cfg(test)] mod test { + use rand::rngs::OsRng; + use tari_common_types::types::{Commitment, PublicKey}; + use tari_crypto::keys::PublicKey as PublicKeyTrait; + use tari_script::script; + use super::*; use crate::{ - covenants::test::create_outputs, - transactions::{test_helpers::UtxoTestParams, transaction_components::OutputFeatures}, + consensus::ConsensusEncoding, + covenant, + covenants::test::{create_input, create_outputs}, + transactions::{ + test_helpers::UtxoTestParams, + transaction_components::{OutputFeatures, OutputFlags, SpentOutput}, + }, }; - #[test] - fn get_field_value_ref() { - let mut features = OutputFeatures { - maturity: 42, - ..Default::default() - }; - let output = create_outputs(1, UtxoTestParams { - features: features.clone(), - ..Default::default() - }) - .pop() - .unwrap(); - features.set_recovery_byte(output.features.recovery_byte); - let r = OutputField::Features.get_field_value_ref::(&output); - assert_eq!(*r.unwrap(), features); + mod output_field { + use super::*; + + mod is_eq { + + use super::*; + + #[test] + fn it_returns_true_if_eq() { + let output = create_outputs(1, UtxoTestParams { + features: OutputFeatures { + parent_public_key: Some(Default::default()), + unique_id: Some(b"1234".to_vec()), + ..Default::default() + }, + script: script![Drop Nop], + ..Default::default() + }) + .remove(0); + + assert!(OutputField::Commitment.is_eq(&output, &output.commitment).unwrap()); + assert!(OutputField::Features.is_eq(&output, &output.features).unwrap()); + assert!(OutputField::Script.is_eq(&output, &output.script).unwrap()); + assert!(OutputField::Covenant.is_eq(&output, &output.covenant).unwrap()); + assert!(OutputField::FeaturesMaturity + .is_eq(&output, &output.features.maturity) + .unwrap()); + assert!(OutputField::FeaturesFlags + .is_eq(&output, &output.features.flags) + .unwrap()); + assert!(OutputField::FeaturesParentPublicKey + .is_eq(&output, output.features.parent_public_key.as_ref().unwrap()) + .unwrap()); + assert!(OutputField::FeaturesMetadata + 
.is_eq(&output, &output.features.metadata) + .unwrap()); + assert!(OutputField::FeaturesUniqueId + .is_eq(&output, output.features.unique_id.as_ref().unwrap()) + .unwrap()); + assert!(OutputField::SenderOffsetPublicKey + .is_eq(&output, &output.sender_offset_public_key) + .unwrap()); + } + + #[test] + fn it_returns_false_if_not_eq() { + let (_, parent_pk) = PublicKey::random_keypair(&mut OsRng); + let output = create_outputs(1, UtxoTestParams { + features: OutputFeatures { + parent_public_key: Some(parent_pk), + unique_id: Some(b"1234".to_vec()), + ..Default::default() + }, + script: script![Drop Nop], + ..Default::default() + }) + .remove(0); + + assert!(!OutputField::Commitment.is_eq(&output, &Commitment::default()).unwrap()); + assert!(!OutputField::Features + .is_eq(&output, &OutputFeatures::default()) + .unwrap()); + assert!(!OutputField::Script.is_eq(&output, &script![Nop Drop]).unwrap()); + assert!(!OutputField::Covenant + .is_eq(&output, &covenant!(and(identity(), identity()))) + .unwrap()); + assert!(!OutputField::FeaturesMaturity.is_eq(&output, &123u64).unwrap()); + assert!(!OutputField::FeaturesFlags + .is_eq(&output, &OutputFlags::COINBASE_OUTPUT) + .unwrap()); + assert!(!OutputField::FeaturesParentPublicKey + .is_eq(&output, &PublicKey::default()) + .unwrap()); + assert!(!OutputField::FeaturesMetadata.is_eq(&output, &vec![123u8]).unwrap()); + assert!(!OutputField::FeaturesUniqueId.is_eq(&output, &vec![123u8]).unwrap()); + assert!(!OutputField::SenderOffsetPublicKey + .is_eq(&output, &PublicKey::default()) + .unwrap()); + } + } + + mod is_eq_input { + use super::*; + + #[test] + fn it_returns_true_if_eq_input() { + let output = create_outputs(1, UtxoTestParams { + features: OutputFeatures { + maturity: 42, + ..Default::default() + }, + script: script![Drop Nop], + ..Default::default() + }) + .remove(0); + let mut input = create_input(); + if let SpentOutput::OutputData { + features, + commitment, + script, + sender_offset_public_key, + covenant, + .. 
+ } = &mut input.spent_output + { + *features = output.features.clone(); + *commitment = output.commitment.clone(); + *script = output.script.clone(); + *sender_offset_public_key = output.sender_offset_public_key.clone(); + *covenant = output.covenant.clone(); + } + + assert!(OutputField::Commitment.is_eq_input(&input, &output)); + assert!(OutputField::Features.is_eq_input(&input, &output)); + assert!(OutputField::Script.is_eq_input(&input, &output)); + assert!(OutputField::Covenant.is_eq_input(&input, &output)); + assert!(OutputField::FeaturesMaturity.is_eq_input(&input, &output)); + assert!(OutputField::FeaturesFlags.is_eq_input(&input, &output)); + assert!(OutputField::FeaturesParentPublicKey.is_eq_input(&input, &output)); + assert!(OutputField::FeaturesMetadata.is_eq_input(&input, &output)); + assert!(OutputField::FeaturesUniqueId.is_eq_input(&input, &output)); + assert!(OutputField::SenderOffsetPublicKey.is_eq_input(&input, &output)); + } + } + + #[test] + fn display() { + let output_fields = [ + OutputField::Commitment, + OutputField::Features, + OutputField::FeaturesFlags, + OutputField::FeaturesUniqueId, + OutputField::FeaturesMetadata, + OutputField::FeaturesMaturity, + OutputField::FeaturesParentPublicKey, + OutputField::SenderOffsetPublicKey, + OutputField::Script, + OutputField::Covenant, + ]; + output_fields.iter().for_each(|f| { + assert!(f.to_string().starts_with("field::")); + }) + } + } + + mod output_fields { + use super::*; + + mod construct_challenge_from { + use super::*; + + #[test] + fn it_constructs_challenge_using_consensus_encoding() { + let features = OutputFeatures { + maturity: 42, + flags: OutputFlags::COINBASE_OUTPUT, + ..Default::default() + }; + let output = create_outputs(1, UtxoTestParams { + features, + script: script![Drop Nop], + ..Default::default() + }) + .remove(0); + + let mut fields = OutputFields::new(); + fields.push(OutputField::Features); + fields.push(OutputField::Commitment); + fields.push(OutputField::Script); + let hash = fields.construct_challenge_from(&output).finalize(); + + let mut challenge = Vec::new(); + output.features.consensus_encode(&mut challenge).unwrap(); + output.commitment.consensus_encode(&mut challenge).unwrap(); + output.script.consensus_encode(&mut challenge).unwrap(); + let expected_hash = HashDigest::new().chain(&challenge).finalize(); + assert_eq!(hash, expected_hash); + } + } + + mod get_field_value_ref { + use super::*; + + #[test] + fn it_retrieves_the_value_as_ref() { + let mut features = OutputFeatures { + maturity: 42, + ..Default::default() + }; + let output = create_outputs(1, UtxoTestParams { + features: features.clone(), + ..Default::default() + }) + .pop() + .unwrap(); + features.set_recovery_byte(output.features.recovery_byte); + let r = OutputField::Features.get_field_value_ref::(&output); + assert_eq!(*r.unwrap(), features); + } + } } } diff --git a/base_layer/core/src/covenants/filters/field_eq.rs b/base_layer/core/src/covenants/filters/field_eq.rs index d1f6ae28a3..b002f8c0d4 100644 --- a/base_layer/core/src/covenants/filters/field_eq.rs +++ b/base_layer/core/src/covenants/filters/field_eq.rs @@ -165,43 +165,22 @@ mod test { assert_eq!(output_set.get_selected_indexes(), vec![5, 7]); } - // #[test] - // fn it_filters_covenant() { - // // TODO: Covenant field is not in output yet - // let covenant = covenant!(identity()); - // let covenant = covenant!(field_eq( - // @field::covenant, - // @covenant(covenant.clone()) - // )); - // let input = create_input(); - // let mut context = 
create_context(&covenant, &input, 0); - // // Remove `field_eq` - // context.next_filter().unwrap(); - // let mut outputs = create_outputs(10, Default::default()); - // outputs[5].covenant = covenant.clone(); - // outputs[7].covenant = covenant.clone(); - // let mut output_set = OutputSet::new(&outputs); - // FieldEqFilter.filter(&mut context, &mut output_set).unwrap(); - // - // assert_eq!(output_set.len(), 2); - // assert_eq!(output_set.get_selected_indexes(), vec![5, 7]); - // } - #[test] - fn it_errors_for_unsupported_features_field() { - let covenant = covenant!(field_eq( - @field::features, - @bytes(vec![]) - )); + fn it_filters_covenant() { + let next_cov = covenant!(and(identity(), or(field_eq(@field::features_maturity, @uint(42))))); + let covenant = covenant!(field_eq(@field::covenant, @covenant(next_cov.clone()))); let input = create_input(); let mut context = create_context(&covenant, &input, 0); // Remove `field_eq` context.next_filter().unwrap(); - let outputs = create_outputs(10, Default::default()); + let mut outputs = create_outputs(10, Default::default()); + outputs[5].covenant = next_cov.clone(); + outputs[7].covenant = next_cov; let mut output_set = OutputSet::new(&outputs); - let err = FieldEqFilter.filter(&mut context, &mut output_set).unwrap_err(); - unpack_enum!(CovenantError::UnsupportedArgument { arg, .. } = err); - assert_eq!(arg, "features"); + FieldEqFilter.filter(&mut context, &mut output_set).unwrap(); + + assert_eq!(output_set.len(), 2); + assert_eq!(output_set.get_selected_indexes(), vec![5, 7]); } #[test] diff --git a/base_layer/core/src/covenants/filters/fields_hashed_eq.rs b/base_layer/core/src/covenants/filters/fields_hashed_eq.rs index 3fd6816b44..510c69bed7 100644 --- a/base_layer/core/src/covenants/filters/fields_hashed_eq.rs +++ b/base_layer/core/src/covenants/filters/fields_hashed_eq.rs @@ -20,28 +20,6 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// Copyright 2021, The Tari Project -// -// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the -// following conditions are met: -// -// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following -// disclaimer. -// -// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the -// following disclaimer in the documentation and/or other materials provided with the distribution. -// -// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote -// products derived from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, -// INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -// DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, -// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE -// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - use digest::Digest; use crate::covenants::{context::CovenantContext, error::CovenantError, filters::Filter, output_set::OutputSet}; diff --git a/base_layer/core/src/covenants/filters/filter.rs b/base_layer/core/src/covenants/filters/filter.rs index 8310f7e5ec..32539fc0c0 100644 --- a/base_layer/core/src/covenants/filters/filter.rs +++ b/base_layer/core/src/covenants/filters/filter.rs @@ -165,3 +165,17 @@ impl Filter for CovenantFilter { } } } + +#[cfg(test)] +mod tests { + use super::*; + use crate::covenants::byte_codes::ALL_FILTERS; + + #[test] + fn it_returns_filter_from_byte_code() { + ALL_FILTERS.iter().for_each(|code| { + let filter = CovenantFilter::try_from_byte_code(*code).unwrap(); + assert_eq!(filter.as_byte_code(), *code); + }) + } +} diff --git a/base_layer/core/src/covenants/filters/identity.rs b/base_layer/core/src/covenants/filters/identity.rs index 1748c97810..91e8d3c1fb 100644 --- a/base_layer/core/src/covenants/filters/identity.rs +++ b/base_layer/core/src/covenants/filters/identity.rs @@ -30,3 +30,23 @@ impl Filter for IdentityFilter { Ok(()) } } + +#[cfg(test)] +mod tests { + use super::*; + use crate::{ + covenant, + covenants::{filters::test::setup_filter_test, test::create_input}, + }; + + #[test] + fn it_returns_the_outputset_unchanged() { + let covenant = covenant!(identity()); + let input = create_input(); + let (mut context, outputs) = setup_filter_test(&covenant, &input, 0, |_| {}); + let mut output_set = OutputSet::new(&outputs); + let previous_len = output_set.len(); + IdentityFilter.filter(&mut context, &mut output_set).unwrap(); + assert_eq!(output_set.len(), previous_len); + } +} diff --git a/base_layer/core/src/covenants/macros.rs b/base_layer/core/src/covenants/macros.rs index 2690d5773f..247a9020db 100644 --- a/base_layer/core/src/covenants/macros.rs +++ b/base_layer/core/src/covenants/macros.rs @@ -77,16 +77,16 @@ macro_rules! __covenant_inner { $crate::__covenant_inner!(@ { $covenant } $($tail)*) }; - // @covenant(...), ... - (@ { $covenant:ident } @covenant($($inner:tt)*), $($tail:tt)*) => { + // @covenant_lit(...), ... + (@ { $covenant:ident } @covenant_lit($($inner:tt)*), $($tail:tt)*) => { let inner = $crate::covenant!($($inner)*); $covenant.push_token($crate::covenants::CovenantToken::covenant(inner)); $crate::__covenant_inner!(@ { $covenant } $($tail)*) }; - // @covenant(...) - (@ { $covenant:ident } @covenant($($inner:tt)*) $(,)?) => { - $crate::__covenant_inner!(@ { $covenant } @covenant($($inner)*),) + // @covenant_lit(...) + (@ { $covenant:ident } @covenant_lit($($inner:tt)*) $(,)?) => { + $crate::__covenant_inner!(@ { $covenant } @covenant_lit($($inner)*),) }; // @arg(expr1, expr2, ...), ... 
@@ -200,7 +200,7 @@ mod test { #[test] fn covenant() { let bytes = vec![0xba, 0xda, 0x55]; - let covenant = covenant!(field_eq(@field::covenant, @covenant(and(field_eq(@field::features_unique_id, @bytes(bytes), identity()))))); + let covenant = covenant!(field_eq(@field::covenant, @covenant_lit(and(field_eq(@field::features_unique_id, @bytes(bytes), identity()))))); assert_eq!(covenant.to_bytes().to_hex(), "330703050a213307070903bada5520"); } diff --git a/base_layer/core/src/covenants/token.rs b/base_layer/core/src/covenants/token.rs index f24f63415c..dc82115805 100644 --- a/base_layer/core/src/covenants/token.rs +++ b/base_layer/core/src/covenants/token.rs @@ -27,7 +27,7 @@ use tari_script::TariScript; use crate::covenants::{ arguments::{CovenantArg, Hash}, - decoder::{CovenantDecodeError, CovenentReadExt}, + decoder::{CovenantDecodeError, CovenantReadExt}, fields::OutputField, filters::{ AbsoluteHeightFilter, @@ -48,7 +48,7 @@ use crate::covenants::{ #[derive(Debug, Clone, PartialEq, Eq)] pub enum CovenantToken { Filter(CovenantFilter), - Arg(CovenantArg), + Arg(Box), } impl CovenantToken { @@ -65,7 +65,7 @@ impl CovenantToken { }, code if CovenantArg::is_valid_code(code) => { let arg = CovenantArg::read_from(reader, code)?; - Ok(Some(CovenantToken::Arg(arg))) + Ok(Some(CovenantToken::Arg(Box::new(arg)))) }, code => Err(CovenantDecodeError::UnknownByteCode { code }), } @@ -88,7 +88,7 @@ impl CovenantToken { pub fn as_arg(&self) -> Option<&CovenantArg> { match self { CovenantToken::Filter(_) => None, - CovenantToken::Arg(arg) => Some(arg), + CovenantToken::Arg(arg) => Some(&**arg), } } @@ -96,97 +96,109 @@ impl CovenantToken { #[allow(dead_code)] pub fn identity() -> Self { - CovenantToken::Filter(CovenantFilter::Identity(IdentityFilter)) + CovenantFilter::Identity(IdentityFilter).into() } #[allow(dead_code)] pub fn and() -> Self { - CovenantToken::Filter(CovenantFilter::And(AndFilter)) + CovenantFilter::And(AndFilter).into() } #[allow(dead_code)] pub fn or() -> Self { - CovenantToken::Filter(CovenantFilter::Or(OrFilter)) + CovenantFilter::Or(OrFilter).into() } #[allow(dead_code)] pub fn xor() -> Self { - CovenantToken::Filter(CovenantFilter::Xor(XorFilter)) + CovenantFilter::Xor(XorFilter).into() } #[allow(dead_code)] pub fn not() -> Self { - CovenantToken::Filter(CovenantFilter::Not(NotFilter)) + CovenantFilter::Not(NotFilter).into() } #[allow(dead_code)] pub fn output_hash_eq() -> Self { - CovenantToken::Filter(CovenantFilter::OutputHashEq(OutputHashEqFilter)) + CovenantFilter::OutputHashEq(OutputHashEqFilter).into() } #[allow(dead_code)] pub fn fields_preserved() -> Self { - CovenantToken::Filter(CovenantFilter::FieldsPreserved(FieldsPreservedFilter)) + CovenantFilter::FieldsPreserved(FieldsPreservedFilter).into() } #[allow(dead_code)] pub fn field_eq() -> Self { - CovenantToken::Filter(CovenantFilter::FieldEq(FieldEqFilter)) + CovenantFilter::FieldEq(FieldEqFilter).into() } #[allow(dead_code)] pub fn fields_hashed_eq() -> Self { - CovenantToken::Filter(CovenantFilter::FieldsHashedEq(FieldsHashedEqFilter)) + CovenantFilter::FieldsHashedEq(FieldsHashedEqFilter).into() } #[allow(dead_code)] pub fn absolute_height() -> Self { - CovenantToken::Filter(CovenantFilter::AbsoluteHeight(AbsoluteHeightFilter)) + CovenantFilter::AbsoluteHeight(AbsoluteHeightFilter).into() } #[allow(dead_code)] pub fn hash(hash: Hash) -> Self { - CovenantToken::Arg(CovenantArg::Hash(hash)) + CovenantArg::Hash(hash).into() } #[allow(dead_code)] pub fn public_key(public_key: PublicKey) -> Self { - 
CovenantToken::Arg(CovenantArg::PublicKey(public_key)) + CovenantArg::PublicKey(public_key).into() } #[allow(dead_code)] pub fn commitment(commitment: Commitment) -> Self { - CovenantToken::Arg(CovenantArg::Commitment(commitment)) + CovenantArg::Commitment(commitment).into() } #[allow(dead_code)] pub fn script(script: TariScript) -> Self { - CovenantToken::Arg(CovenantArg::TariScript(script)) + CovenantArg::TariScript(script).into() } #[allow(dead_code)] pub fn covenant(covenant: Covenant) -> Self { - CovenantToken::Arg(CovenantArg::Covenant(covenant)) + CovenantArg::Covenant(covenant).into() } #[allow(dead_code)] pub fn uint(val: u64) -> Self { - CovenantToken::Arg(CovenantArg::Uint(val)) + CovenantArg::Uint(val).into() } #[allow(dead_code)] pub fn field(field: OutputField) -> Self { - CovenantToken::Arg(CovenantArg::OutputField(field)) + CovenantArg::OutputField(field).into() } #[allow(dead_code)] pub fn fields(fields: Vec) -> Self { - CovenantToken::Arg(CovenantArg::OutputFields(fields.into())) + CovenantArg::OutputFields(fields.into()).into() } #[allow(dead_code)] pub fn bytes(bytes: Vec) -> Self { - CovenantToken::Arg(CovenantArg::Bytes(bytes)) + CovenantArg::Bytes(bytes).into() + } +} + +impl From for CovenantToken { + fn from(arg: CovenantArg) -> Self { + CovenantToken::Arg(Box::new(arg)) + } +} + +impl From for CovenantToken { + fn from(filter: CovenantFilter) -> Self { + CovenantToken::Filter(filter) } } diff --git a/base_layer/core/src/mempool/service/inbound_handlers.rs b/base_layer/core/src/mempool/service/inbound_handlers.rs index 6f915df4f9..a04111fcd2 100644 --- a/base_layer/core/src/mempool/service/inbound_handlers.rs +++ b/base_layer/core/src/mempool/service/inbound_handlers.rs @@ -24,7 +24,7 @@ use std::sync::Arc; use log::*; use tari_comms::peer_manager::NodeId; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use crate::{ base_node::comms_interface::BlockEvent, diff --git a/base_layer/core/src/mempool/service/request.rs b/base_layer/core/src/mempool/service/request.rs index c6a3bf783d..07165478f2 100644 --- a/base_layer/core/src/mempool/service/request.rs +++ b/base_layer/core/src/mempool/service/request.rs @@ -24,7 +24,7 @@ use core::fmt::{Display, Error, Formatter}; use serde::{Deserialize, Serialize}; use tari_common_types::{types::Signature, waiting_requests::RequestKey}; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use crate::transactions::transaction_components::Transaction; diff --git a/base_layer/core/src/mempool/service/service.rs b/base_layer/core/src/mempool/service/service.rs index ab36f3041a..2eddbd3975 100644 --- a/base_layer/core/src/mempool/service/service.rs +++ b/base_layer/core/src/mempool/service/service.rs @@ -30,9 +30,9 @@ use tari_comms_dht::{ envelope::NodeDestination, outbound::{DhtOutboundError, OutboundEncryption, OutboundMessageRequester}, }; -use tari_crypto::tari_utilities::hex::Hex; use tari_p2p::{domain_message::DomainMessage, tari_message::TariMessageType}; use tari_service_framework::{reply_channel, reply_channel::RequestContext}; +use tari_utilities::hex::Hex; use tokio::{sync::mpsc, task}; use crate::{ diff --git a/base_layer/core/src/mempool/sync_protocol/mod.rs b/base_layer/core/src/mempool/sync_protocol/mod.rs index f5eaae31a3..32f81b65c5 100644 --- a/base_layer/core/src/mempool/sync_protocol/mod.rs +++ b/base_layer/core/src/mempool/sync_protocol/mod.rs @@ -88,8 +88,7 @@ use tari_comms::{ Bytes, PeerConnection, }; -use tari_crypto::tari_utilities::hex::Hex; -use 
tari_utilities::ByteArray; +use tari_utilities::{hex::Hex, ByteArray}; use tokio::{ io::{AsyncRead, AsyncWrite}, sync::Semaphore, diff --git a/base_layer/core/src/mempool/sync_protocol/test.rs b/base_layer/core/src/mempool/sync_protocol/test.rs index 623e63a5d7..1189772ba8 100644 --- a/base_layer/core/src/mempool/sync_protocol/test.rs +++ b/base_layer/core/src/mempool/sync_protocol/test.rs @@ -35,7 +35,7 @@ use tari_comms::{ Bytes, BytesMut, }; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use tokio::{ sync::{broadcast, mpsc}, task, diff --git a/base_layer/core/src/mempool/unconfirmed_pool/unconfirmed_pool.rs b/base_layer/core/src/mempool/unconfirmed_pool/unconfirmed_pool.rs index 9711d75cd4..eea18ba914 100644 --- a/base_layer/core/src/mempool/unconfirmed_pool/unconfirmed_pool.rs +++ b/base_layer/core/src/mempool/unconfirmed_pool/unconfirmed_pool.rs @@ -30,8 +30,7 @@ use digest::Digest; use log::*; use serde::{Deserialize, Serialize}; use tari_common_types::types::{HashDigest, HashOutput, PrivateKey, PublicKey, Signature}; -use tari_crypto::tari_utilities::{hex::Hex, Hashable}; -use tari_utilities::ByteArray; +use tari_utilities::{hex::Hex, ByteArray, Hashable}; use crate::{ blocks::Block, diff --git a/base_layer/core/src/proof_of_work/difficulty.rs b/base_layer/core/src/proof_of_work/difficulty.rs index 034307434f..b54d22d2c7 100644 --- a/base_layer/core/src/proof_of_work/difficulty.rs +++ b/base_layer/core/src/proof_of_work/difficulty.rs @@ -25,7 +25,7 @@ use std::{fmt, ops::Div}; use newtype_ops::newtype_ops; use num_format::{Locale, ToFormattedString}; use serde::{Deserialize, Serialize}; -use tari_crypto::tari_utilities::epoch_time::EpochTime; +use tari_utilities::epoch_time::EpochTime; use crate::proof_of_work::error::DifficultyAdjustmentError; diff --git a/base_layer/core/src/proof_of_work/lwma_diff.rs b/base_layer/core/src/proof_of_work/lwma_diff.rs index 5212954f5b..cb964661c2 100644 --- a/base_layer/core/src/proof_of_work/lwma_diff.rs +++ b/base_layer/core/src/proof_of_work/lwma_diff.rs @@ -9,7 +9,7 @@ use std::{cmp, collections::VecDeque}; use log::*; -use tari_crypto::tari_utilities::epoch_time::EpochTime; +use tari_utilities::epoch_time::EpochTime; use crate::proof_of_work::{ difficulty::{Difficulty, DifficultyAdjustment}, diff --git a/base_layer/core/src/proof_of_work/monero_rx/fixed_array.rs b/base_layer/core/src/proof_of_work/monero_rx/fixed_array.rs index 9a27561c55..7979073fe9 100644 --- a/base_layer/core/src/proof_of_work/monero_rx/fixed_array.rs +++ b/base_layer/core/src/proof_of_work/monero_rx/fixed_array.rs @@ -26,8 +26,7 @@ use monero::{ consensus::{encode, Decodable, Encodable}, VarInt, }; -use tari_crypto::tari_utilities::ByteArray; -use tari_utilities::ByteArrayError; +use tari_utilities::{ByteArray, ByteArrayError}; const MAX_ARR_SIZE: usize = 63; diff --git a/base_layer/core/src/proof_of_work/monero_rx/helpers.rs b/base_layer/core/src/proof_of_work/monero_rx/helpers.rs index 0e2891f710..96e1a1fae3 100644 --- a/base_layer/core/src/proof_of_work/monero_rx/helpers.rs +++ b/base_layer/core/src/proof_of_work/monero_rx/helpers.rs @@ -192,6 +192,7 @@ mod test { }; use tari_test_utils::unpack_enum; use tari_utilities::{ + epoch_time::EpochTime, hex::{from_hex, Hex}, ByteArray, }; @@ -291,7 +292,7 @@ mod test { version: 0, height: 0, prev_hash: vec![0], - timestamp: Default::default(), + timestamp: EpochTime::now(), output_mr: vec![0], witness_mr: vec![0], output_mmr_size: 0, @@ -346,7 +347,7 @@ mod test { version: 0, height: 0, prev_hash: 
vec![0], - timestamp: Default::default(), + timestamp: EpochTime::now(), output_mr: vec![0], witness_mr: vec![0], output_mmr_size: 0, @@ -397,7 +398,7 @@ mod test { version: 0, height: 0, prev_hash: vec![0], - timestamp: Default::default(), + timestamp: EpochTime::now(), output_mr: vec![0], witness_mr: vec![0], output_mmr_size: 0, @@ -446,7 +447,7 @@ mod test { version: 0, height: 0, prev_hash: vec![0], - timestamp: Default::default(), + timestamp: EpochTime::now(), output_mr: vec![0], witness_mr: vec![0], output_mmr_size: 0, @@ -500,7 +501,7 @@ mod test { version: 0, height: 0, prev_hash: vec![0], - timestamp: Default::default(), + timestamp: EpochTime::now(), output_mr: vec![0], witness_mr: vec![0], output_mmr_size: 0, @@ -550,7 +551,7 @@ mod test { version: 0, height: 0, prev_hash: vec![0], - timestamp: Default::default(), + timestamp: EpochTime::now(), output_mr: vec![0], witness_mr: vec![0], output_mmr_size: 0, @@ -591,7 +592,7 @@ mod test { version: 0, height: 0, prev_hash: vec![0], - timestamp: Default::default(), + timestamp: EpochTime::now(), output_mr: vec![0], witness_mr: vec![0], output_mmr_size: 0, diff --git a/base_layer/core/src/proof_of_work/proof_of_work.rs b/base_layer/core/src/proof_of_work/proof_of_work.rs index a9d1bac8f4..b2217cf0e3 100644 --- a/base_layer/core/src/proof_of_work/proof_of_work.rs +++ b/base_layer/core/src/proof_of_work/proof_of_work.rs @@ -29,7 +29,7 @@ use std::{ use bytes::BufMut; use serde::{Deserialize, Serialize}; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use crate::{ consensus::{ConsensusDecoding, ConsensusEncoding, MaxSizeBytes}, diff --git a/base_layer/core/src/proof_of_work/sha3_pow.rs b/base_layer/core/src/proof_of_work/sha3_pow.rs index 1de840b81f..768202b5f1 100644 --- a/base_layer/core/src/proof_of_work/sha3_pow.rs +++ b/base_layer/core/src/proof_of_work/sha3_pow.rs @@ -21,7 +21,7 @@ // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
use sha3::{Digest, Sha3_256}; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use crate::{ blocks::BlockHeader, @@ -67,6 +67,7 @@ fn sha3_difficulty_with_hash(header: &BlockHeader) -> (Difficulty, Vec) { #[cfg(test)] pub mod test { use chrono::{DateTime, NaiveDate, Utc}; + use tari_utilities::epoch_time::EpochTime; use crate::{ blocks::BlockHeader, @@ -87,7 +88,9 @@ pub mod test { pub fn get_header() -> BlockHeader { let mut header = BlockHeader::new(0); - header.timestamp = DateTime::::from_utc(NaiveDate::from_ymd(2000, 1, 1).and_hms(1, 1, 1), Utc).into(); + header.timestamp = EpochTime::from_secs_since_epoch( + DateTime::::from_utc(NaiveDate::from_ymd(2000, 1, 1).and_hms(1, 1, 1), Utc).timestamp() as u64, + ); header.pow.pow_algo = PowAlgorithm::Sha3; header } diff --git a/base_layer/core/src/proof_of_work/target_difficulty_window.rs b/base_layer/core/src/proof_of_work/target_difficulty_window.rs index 53fdb619ba..ed4421c6a3 100644 --- a/base_layer/core/src/proof_of_work/target_difficulty_window.rs +++ b/base_layer/core/src/proof_of_work/target_difficulty_window.rs @@ -22,7 +22,7 @@ use std::cmp; -use tari_crypto::tari_utilities::epoch_time::EpochTime; +use tari_utilities::epoch_time::EpochTime; use crate::proof_of_work::{difficulty::DifficultyAdjustment, lwma_diff::LinearWeightedMovingAverage, Difficulty}; diff --git a/base_layer/core/src/proto/block.rs b/base_layer/core/src/proto/block.rs index 53a5172330..d7ad5983ee 100644 --- a/base_layer/core/src/proto/block.rs +++ b/base_layer/core/src/proto/block.rs @@ -23,7 +23,7 @@ use std::convert::{TryFrom, TryInto}; use tari_common_types::types::{BlindingFactor, PrivateKey}; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use super::core as proto; use crate::{ diff --git a/base_layer/core/src/proto/block_header.rs b/base_layer/core/src/proto/block_header.rs index 5d55b3e2fe..fc41802c78 100644 --- a/base_layer/core/src/proto/block_header.rs +++ b/base_layer/core/src/proto/block_header.rs @@ -23,7 +23,7 @@ use std::convert::TryFrom; use tari_common_types::types::BlindingFactor; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use super::core as proto; use crate::{ diff --git a/base_layer/core/src/proto/types_impls.rs b/base_layer/core/src/proto/types_impls.rs index 0e07255b7c..149ec5869e 100644 --- a/base_layer/core/src/proto/types_impls.rs +++ b/base_layer/core/src/proto/types_impls.rs @@ -31,7 +31,7 @@ use tari_common_types::types::{ PublicKey, Signature, }; -use tari_crypto::tari_utilities::{ByteArray, ByteArrayError}; +use tari_utilities::{ByteArray, ByteArrayError}; use super::types as proto; diff --git a/base_layer/core/src/proto/utils.rs b/base_layer/core/src/proto/utils.rs index 6e8738995a..729e31ddb1 100644 --- a/base_layer/core/src/proto/utils.rs +++ b/base_layer/core/src/proto/utils.rs @@ -21,7 +21,7 @@ // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
 use prost_types::Timestamp;
-use tari_crypto::tari_utilities::epoch_time::EpochTime;
+use tari_utilities::epoch_time::EpochTime;

 /// Utility function that converts a `prost::Timestamp` to a `chrono::DateTime`
 pub(crate) fn timestamp_to_datetime(timestamp: Timestamp) -> EpochTime {
diff --git a/base_layer/core/src/transactions/fee.rs b/base_layer/core/src/transactions/fee.rs
index e891dcd1c3..44315c6947 100644
--- a/base_layer/core/src/transactions/fee.rs
+++ b/base_layer/core/src/transactions/fee.rs
@@ -84,7 +84,7 @@ mod test {
     #[test]
     pub fn test_derive_clone() {
         let f0 = Fee::new(TransactionWeight::latest());
-        let f1 = f0.clone();
+        let f1 = f0;
         assert_eq!(
             f0.weighting().params().kernel_weight,
             f1.weighting().params().kernel_weight
diff --git a/base_layer/core/src/transactions/transaction_components/transaction.rs b/base_layer/core/src/transactions/transaction_components/transaction.rs
index 6402aab3c5..a006fc7a12 100644
--- a/base_layer/core/src/transactions/transaction_components/transaction.rs
+++ b/base_layer/core/src/transactions/transaction_components/transaction.rs
@@ -31,7 +31,7 @@ use std::{

 use serde::{Deserialize, Serialize};
 use tari_common_types::types::{BlindingFactor, HashOutput, Signature};
-use tari_crypto::tari_utilities::hex::Hex;
+use tari_utilities::hex::Hex;

 use crate::transactions::{
     aggregated_body::AggregateBody,
diff --git a/base_layer/core/src/transactions/transaction_components/transaction_kernel.rs b/base_layer/core/src/transactions/transaction_components/transaction_kernel.rs
index 4cfc8b188d..4f0c452af7 100644
--- a/base_layer/core/src/transactions/transaction_components/transaction_kernel.rs
+++ b/base_layer/core/src/transactions/transaction_components/transaction_kernel.rs
@@ -32,7 +32,7 @@ use std::{

 use serde::{Deserialize, Serialize};
 use tari_common_types::types::{Commitment, Signature};
-use tari_crypto::tari_utilities::{hex::Hex, message_format::MessageFormat, Hashable};
+use tari_utilities::{hex::Hex, message_format::MessageFormat, Hashable};

 use super::TransactionKernelVersion;
 use crate::{
diff --git a/base_layer/core/src/transactions/transaction_components/transaction_output.rs b/base_layer/core/src/transactions/transaction_components/transaction_output.rs
index 8f94e7a5eb..3346484a1d 100644
--- a/base_layer/core/src/transactions/transaction_components/transaction_output.rs
+++ b/base_layer/core/src/transactions/transaction_components/transaction_output.rs
@@ -301,10 +301,10 @@ impl TransactionOutput {
             Some(key) => spending_key + key,
         };
         Ok(ComSignature::sign(
-            value,
-            secret_x,
-            nonce_a,
-            nonce_b,
+            &value,
+            &secret_x,
+            &nonce_a,
+            &nonce_b,
             &e.finalize_fixed(),
             &PedersenCommitmentFactory::default(),
         )?)
diff --git a/base_layer/core/src/transactions/transaction_components/unblinded_output.rs b/base_layer/core/src/transactions/transaction_components/unblinded_output.rs index c7b62411f5..121761754f 100644 --- a/base_layer/core/src/transactions/transaction_components/unblinded_output.rs +++ b/base_layer/core/src/transactions/transaction_components/unblinded_output.rs @@ -163,10 +163,10 @@ impl UnblindedOutput { &commitment, ); let script_signature = ComSignature::sign( - self.value.into(), - &self.script_private_key + &self.spending_key, - script_nonce_a, - script_nonce_b, + &self.value.into(), + &(&self.script_private_key + &self.spending_key), + &script_nonce_a, + &script_nonce_b, &challenge, factory, ) diff --git a/base_layer/core/src/transactions/transaction_protocol/proto/recipient_signed_message.rs b/base_layer/core/src/transactions/transaction_protocol/proto/recipient_signed_message.rs index 2dc1e7e51c..d931d4c2d2 100644 --- a/base_layer/core/src/transactions/transaction_protocol/proto/recipient_signed_message.rs +++ b/base_layer/core/src/transactions/transaction_protocol/proto/recipient_signed_message.rs @@ -23,7 +23,7 @@ use std::convert::{TryFrom, TryInto}; use tari_common_types::types::PublicKey; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use super::protocol as proto; use crate::transactions::transaction_protocol::recipient::RecipientSignedMessage; diff --git a/base_layer/core/src/transactions/transaction_protocol/proto/transaction_sender.rs b/base_layer/core/src/transactions/transaction_protocol/proto/transaction_sender.rs index 11987f2bbb..08fcb8d81f 100644 --- a/base_layer/core/src/transactions/transaction_protocol/proto/transaction_sender.rs +++ b/base_layer/core/src/transactions/transaction_protocol/proto/transaction_sender.rs @@ -24,8 +24,8 @@ use std::convert::{TryFrom, TryInto}; use proto::transaction_sender_message::Message as ProtoTxnSenderMessage; use tari_common_types::types::PublicKey; -use tari_crypto::tari_utilities::ByteArray; use tari_script::TariScript; +use tari_utilities::ByteArray; use super::{protocol as proto, protocol::transaction_sender_message::Message as ProtoTransactionSenderMessage}; use crate::{ diff --git a/base_layer/core/src/validation/block_validators/orphan.rs b/base_layer/core/src/validation/block_validators/orphan.rs index 34621e3069..bf8c30a33d 100644 --- a/base_layer/core/src/validation/block_validators/orphan.rs +++ b/base_layer/core/src/validation/block_validators/orphan.rs @@ -20,7 +20,7 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
use log::*; -use tari_crypto::tari_utilities::{hash::Hashable, hex::Hex}; +use tari_utilities::{hash::Hashable, hex::Hex}; use super::LOG_TARGET; use crate::{ diff --git a/base_layer/core/src/validation/block_validators/test.rs b/base_layer/core/src/validation/block_validators/test.rs index 77e340d049..33b21a3818 100644 --- a/base_layer/core/src/validation/block_validators/test.rs +++ b/base_layer/core/src/validation/block_validators/test.rs @@ -460,7 +460,7 @@ mod orphan_validator { let validator = OrphanBlockValidator::new(rules, false, CryptoFactories::default()); let (_, coinbase) = blockchain.append(block_spec!("1", parent: "GB")); - let schema = txn_schema!(from: vec![coinbase.clone()], to: vec![201 * T]); + let schema = txn_schema!(from: vec![coinbase], to: vec![201 * T]); let (initial_tx, outputs) = schema_to_transaction(&[schema]); let schema = txn_schema!(from: vec![outputs[0].clone()], to: vec![200 * T]); diff --git a/base_layer/core/src/validation/header_validator.rs b/base_layer/core/src/validation/header_validator.rs index 27c99560b2..5a280ed34a 100644 --- a/base_layer/core/src/validation/header_validator.rs +++ b/base_layer/core/src/validation/header_validator.rs @@ -2,7 +2,7 @@ // SPDX-License-Identifier: BSD-3-Clause use log::*; -use tari_crypto::tari_utilities::{hash::Hashable, hex::Hex}; +use tari_utilities::{hash::Hashable, hex::Hex}; use crate::{ blocks::BlockHeader, diff --git a/base_layer/core/tests/async_db.rs b/base_layer/core/tests/async_db.rs index 78ed930eee..98f32db791 100644 --- a/base_layer/core/tests/async_db.rs +++ b/base_layer/core/tests/async_db.rs @@ -41,8 +41,9 @@ use tari_core::{ }, txn_schema, }; -use tari_crypto::{commitment::HomomorphicCommitmentFactory, tari_utilities::Hashable}; +use tari_crypto::commitment::HomomorphicCommitmentFactory; use tari_test_utils::runtime::test_async; +use tari_utilities::Hashable; #[allow(dead_code)] mod helpers; diff --git a/base_layer/core/tests/base_node_rpc.rs b/base_layer/core/tests/base_node_rpc.rs index 9ea7782339..cfacf3028e 100644 --- a/base_layer/core/tests/base_node_rpc.rs +++ b/base_layer/core/tests/base_node_rpc.rs @@ -55,10 +55,9 @@ use tari_core::{ }, txn_schema, }; -use tari_crypto::tari_utilities::epoch_time::EpochTime; use tari_service_framework::reply_channel; use tari_test_utils::streams::convert_mpsc_to_stream; -use tari_utilities::Hashable; +use tari_utilities::{epoch_time::EpochTime, Hashable}; use tempfile::{tempdir, TempDir}; use tokio::sync::broadcast; diff --git a/base_layer/core/tests/chain_storage_tests/chain_backend.rs b/base_layer/core/tests/chain_storage_tests/chain_backend.rs index aaa849e79c..71434420b3 100644 --- a/base_layer/core/tests/chain_storage_tests/chain_backend.rs +++ b/base_layer/core/tests/chain_storage_tests/chain_backend.rs @@ -27,9 +27,9 @@ use tari_core::{ test_helpers::blockchain::create_test_db, tx, }; -use tari_crypto::tari_utilities::Hashable; use tari_storage::lmdb_store::LMDBConfig; use tari_test_utils::paths::create_temporary_data_path; +use tari_utilities::Hashable; use crate::helpers::database::create_orphan_block; diff --git a/base_layer/core/tests/chain_storage_tests/chain_storage.rs b/base_layer/core/tests/chain_storage_tests/chain_storage.rs index c130096805..1132e0e398 100644 --- a/base_layer/core/tests/chain_storage_tests/chain_storage.rs +++ b/base_layer/core/tests/chain_storage_tests/chain_storage.rs @@ -54,10 +54,11 @@ use tari_core::{ txn_schema, validation::{mocks::MockValidator, DifficultyCalculator, ValidationError}, }; -use 
tari_crypto::{keys::PublicKey as PublicKeyTrait, tari_utilities::Hashable}; +use tari_crypto::keys::PublicKey as PublicKeyTrait; use tari_script::StackItem; use tari_storage::lmdb_store::LMDBConfig; use tari_test_utils::{paths::create_temporary_data_path, unpack_enum}; +use tari_utilities::Hashable; // use crate::helpers::database::create_test_db; // use crate::helpers::database::create_store; @@ -1160,7 +1161,6 @@ fn asset_unique_id() { } #[test] -#[ignore = "To be completed with pruned mode"] #[allow(clippy::identity_op)] fn store_and_retrieve_blocks_from_contents() { let network = Network::LocalNet; @@ -1179,7 +1179,7 @@ fn store_and_retrieve_blocks_from_contents() { generate_new_block(&mut db, &mut blocks, &mut outputs, schema, &consensus_manager).unwrap() ); let kernel_sig = blocks[1].block().body.kernels()[0].clone().excess_sig; - let utxo_commit = blocks[1].block().body.outputs()[0].clone().commitment; + let utxo_commit = blocks.last().unwrap().block().body.outputs()[0].clone().commitment; assert_eq!( db.fetch_block_with_kernel(kernel_sig) .unwrap() @@ -1195,7 +1195,7 @@ fn store_and_retrieve_blocks_from_contents() { .unwrap() .try_into_chain_block() .unwrap(), - blocks[1] + blocks[2] ); } diff --git a/base_layer/core/tests/helpers/test_blockchain.rs b/base_layer/core/tests/helpers/test_blockchain.rs index 3cae2087bf..930b2ada6e 100644 --- a/base_layer/core/tests/helpers/test_blockchain.rs +++ b/base_layer/core/tests/helpers/test_blockchain.rs @@ -33,7 +33,7 @@ use tari_core::{ test_helpers::blockchain::TempDatabase, transactions::{transaction_components::UnblindedOutput, CryptoFactories}, }; -use tari_crypto::tari_utilities::Hashable; +use tari_utilities::Hashable; use crate::helpers::{ block_builders::{chain_block_with_new_coinbase, find_header_with_achieved_difficulty}, diff --git a/base_layer/core/tests/node_service.rs b/base_layer/core/tests/node_service.rs index 7fad81c574..6fd9475e2c 100644 --- a/base_layer/core/tests/node_service.rs +++ b/base_layer/core/tests/node_service.rs @@ -52,8 +52,8 @@ use tari_core::{ mocks::MockValidator, }, }; -use tari_crypto::tari_utilities::Hashable; use tari_test_utils::unpack_enum; +use tari_utilities::Hashable; use tempfile::tempdir; use crate::helpers::block_builders::{construct_chained_blocks, create_coinbase}; diff --git a/base_layer/key_manager/Cargo.toml b/base_layer/key_manager/Cargo.toml index 26de419c26..094d756d0e 100644 --- a/base_layer/key_manager/Cargo.toml +++ b/base_layer/key_manager/Cargo.toml @@ -12,7 +12,8 @@ crate-type = ["lib", "cdylib"] [dependencies] tari_common_types = { version = "^0.31", path = "../../base_layer/common_types" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } arrayvec = "0.7.1" argon2 = { version = "0.2", features = ["std"] } diff --git a/base_layer/key_manager/src/cipher_seed.rs b/base_layer/key_manager/src/cipher_seed.rs index d98cc8fc29..b866947725 100644 --- a/base_layer/key_manager/src/cipher_seed.rs +++ b/base_layer/key_manager/src/cipher_seed.rs @@ -34,7 +34,7 @@ use chacha20::{ use crc32fast::Hasher as CrcHasher; use digest::Update; use rand::{rngs::OsRng, RngCore}; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use crate::{ error::KeyManagerError, diff --git a/base_layer/key_manager/src/error.rs b/base_layer/key_manager/src/error.rs 
index 4b6bb6bcf4..b69a0cd96c 100644 --- a/base_layer/key_manager/src/error.rs +++ b/base_layer/key_manager/src/error.rs @@ -21,7 +21,7 @@ // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. use argon2::password_hash::Error as PasswordHashError; -use tari_crypto::tari_utilities::ByteArrayError; +use tari_utilities::ByteArrayError; use thiserror::Error; #[derive(Debug, Error, PartialEq)] diff --git a/base_layer/key_manager/src/mnemonic.rs b/base_layer/key_manager/src/mnemonic.rs index d467f86bc1..6c92360f4d 100644 --- a/base_layer/key_manager/src/mnemonic.rs +++ b/base_layer/key_manager/src/mnemonic.rs @@ -24,7 +24,7 @@ use std::{cmp::Ordering, slice::Iter}; use serde::{Deserialize, Serialize}; use strum_macros::{Display, EnumString}; -use tari_crypto::tari_utilities::bit::{bytes_to_bits, checked_bits_to_uint}; +use tari_utilities::bit::{bytes_to_bits, checked_bits_to_uint}; use crate::{ diacritics::*, @@ -185,7 +185,11 @@ pub fn from_bytes(bytes: &[u8], language: MnemonicLanguage) -> Result 0 { + padded_size += 1; + } + padded_size *= group_bit_count; bits.resize(padded_size, false); // Group each set of 11 bits to form one mnemonic word @@ -193,12 +197,10 @@ pub fn from_bytes(bytes: &[u8], language: MnemonicLanguage) -> Result mnemonic_sequence.push(mnemonic_word), - Err(err) => return Err(err), - } + let mnemonic_word = find_mnemonic_word_from_index(word_index, language)?; + mnemonic_sequence.push(mnemonic_word); } Ok(mnemonic_sequence) @@ -216,10 +218,10 @@ pub fn to_bytes(mnemonic_seq: &[String]) -> Result, MnemonicError> { /// look something like this: /// .....CCCCCCCCCCCBBBBBBBBBBBAAAAAAAAAAA, the input represented as one very large number would look like /// A+B*2^11+C*2^22+... And we want to cut it (from the right) to 8 bit long numbers like this: -/// .....eddddddddccccccccbbbbbbbbaaaaaaaa, the output represented as one very large number would look liek +/// .....eddddddddccccccccbbbbbbbbaaaaaaaa, the output represented as one very large number would look like /// a+b*2^8+c*2^16+... Where 'A' is the first mnemonic word in the seq and 'a' is the first byte output. /// So the algo works like this: -/// We add 11bits number to what we have 'rest' shited by the number of bit representation of rest ('rest_bits'). +/// We add 11bits number to what we have 'rest' shifted by the number of bit representation of rest ('rest_bits'). /// We now have enough bits to get some output, we take 8 bits and produce output byte. We do this as long as we have at /// least 8 bits in the 'rest'. /// Sample of couple first steps: @@ -227,25 +229,26 @@ pub fn to_bytes(mnemonic_seq: &[String]) -> Result, MnemonicError> { /// 2) We add 5 bits from 'B' to generate 'b', the leftover is 6 bits from 'B' /// 3) We add 2 bits from 'C to generate 'c', now we have 8 bits needed to generate 'd' and we have 1 bit leftover. pub fn to_bytes_with_language(mnemonic_seq: &[String], language: &MnemonicLanguage) -> Result, MnemonicError> { - let mut bytes: Vec = Vec::new(); - let mut rest = 0; + const MASK: u64 = (1u64 << 8) - 1; + let mut bytes = Vec::new(); + let mut rest = 0u64; let mut rest_bits: u8 = 0; for curr_word in mnemonic_seq { - let index = find_mnemonic_index_from_word(curr_word, *language)?; + let index = find_mnemonic_index_from_word(curr_word, *language)? 
as u64; // Add 11 bits to the front rest += index << rest_bits; rest_bits += 11; while rest_bits >= 8 { // Get last 8 bits and shift it - bytes.push(rest as u8); + bytes.push((rest & MASK) as u8); rest >>= 8; rest_bits -= 8; } } // If we have any leftover, we write it. if rest > 0 { - bytes.push(rest as u8); + bytes.push((rest & MASK) as u8); } Ok(bytes) } diff --git a/base_layer/key_manager/src/wasm.rs b/base_layer/key_manager/src/wasm.rs index f124b6bcaf..51239ff64f 100644 --- a/base_layer/key_manager/src/wasm.rs +++ b/base_layer/key_manager/src/wasm.rs @@ -162,7 +162,7 @@ where T: for<'a> Deserialize<'a> { } mod test { - use tari_crypto::tari_utilities::hex::Hex; + use tari_utilities::hex::Hex; use wasm_bindgen_test::*; use super::*; diff --git a/base_layer/mmr/Cargo.toml b/base_layer/mmr/Cargo.toml index f1d55ca447..d71e81365e 100644 --- a/base_layer/mmr/Cargo.toml +++ b/base_layer/mmr/Cargo.toml @@ -13,7 +13,7 @@ native_bitmap = ["croaring"] benches = ["criterion"] [dependencies] -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } thiserror = "1.0.26" digest = "0.9.0" log = "0.4" @@ -24,7 +24,7 @@ criterion = { version="0.2", optional = true } [dev-dependencies] rand="0.8.0" blake2 = "0.9.0" -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } serde_json = "1.0" bincode = "1.1" [lib] diff --git a/base_layer/mmr/tests/merkle_proof.rs b/base_layer/mmr/tests/merkle_proof.rs index 897d74681d..aea09b7766 100644 --- a/base_layer/mmr/tests/merkle_proof.rs +++ b/base_layer/mmr/tests/merkle_proof.rs @@ -25,12 +25,12 @@ mod support; use support::{create_mmr, int_to_hash, Hasher}; -use tari_crypto::tari_utilities::hex::{self, Hex}; use tari_mmr::{ common::{is_leaf, node_index}, MerkleProof, MerkleProofError, }; +use tari_utilities::hex::{self, Hex}; #[test] fn zero_size_mmr() { diff --git a/base_layer/mmr/tests/mutable_mmr.rs b/base_layer/mmr/tests/mutable_mmr.rs index 918d29105e..56e091140f 100644 --- a/base_layer/mmr/tests/mutable_mmr.rs +++ b/base_layer/mmr/tests/mutable_mmr.rs @@ -26,8 +26,8 @@ mod support; use croaring::Bitmap; use digest::Digest; use support::{create_mmr, int_to_hash, Hasher}; -use tari_crypto::tari_utilities::hex::Hex; use tari_mmr::{Hash, HashSlice, MutableMmr}; +use tari_utilities::hex::Hex; fn hash_with_bitmap(hash: &HashSlice, bitmap: &mut Bitmap) -> Hash { bitmap.run_optimize(); diff --git a/base_layer/mmr/tests/with_blake512_hash.rs b/base_layer/mmr/tests/with_blake512_hash.rs index ae617c0298..dc4cb89adb 100644 --- a/base_layer/mmr/tests/with_blake512_hash.rs +++ b/base_layer/mmr/tests/with_blake512_hash.rs @@ -24,8 +24,8 @@ use std::string::ToString; use blake2::Blake2b; use digest::Digest; -use tari_crypto::tari_utilities::hex::Hex; use tari_mmr::MerkleMountainRange; +use tari_utilities::hex::Hex; #[allow(clippy::vec_init_then_push)] pub fn hash_values() -> Vec { let mut hashvalues = Vec::new(); diff --git a/base_layer/p2p/Cargo.toml b/base_layer/p2p/Cargo.toml index 3eee452f98..e95e4ead6b 100644 --- a/base_layer/p2p/Cargo.toml +++ b/base_layer/p2p/Cargo.toml @@ -13,11 +13,11 @@ edition = "2018" tari_comms = { version = "^0.31", path = "../../comms/core" } tari_comms_dht = { version = "^0.31", path = "../../comms/dht" } tari_common = { version = "^0.31", path = "../../common" } -tari_crypto = { 
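The rewritten `to_bytes_with_language` above packs 11-bit mnemonic word indices into bytes through the `rest`/`rest_bits` accumulator described in its doc comment. The following standalone sketch mirrors that repacking so the worked example in the comment can be checked directly; `indices_to_bytes` is a hypothetical helper and the indices are illustrative rather than taken from a real word list.

```rust
/// Pack 11-bit word indices into bytes, least significant bits first,
/// exactly as described in the doc comment above.
fn indices_to_bytes(indices: &[u64]) -> Vec<u8> {
    const MASK: u64 = (1u64 << 8) - 1;
    let mut bytes = Vec::new();
    let mut rest = 0u64;
    let mut rest_bits = 0u8;
    for &index in indices {
        // Append the 11-bit index above the bits still held in `rest`
        rest += index << rest_bits;
        rest_bits += 11;
        while rest_bits >= 8 {
            // Emit the lowest 8 bits and shift them out
            bytes.push((rest & MASK) as u8);
            rest >>= 8;
            rest_bits -= 8;
        }
    }
    // Write any leftover bits as a final, partially filled byte
    if rest > 0 {
        bytes.push((rest & MASK) as u8);
    }
    bytes
}

fn main() {
    // Two 11-bit indices (0b101_0101_0101 = 1365) yield 22 bits:
    // two full bytes plus 6 leftover bits in a third byte.
    assert_eq!(
        indices_to_bytes(&[1365, 1365]),
        vec![0b0101_0101, 0b1010_1010, 0b0010_1010]
    );
}
```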
git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_service_framework = { version = "^0.31", path = "../service_framework" } tari_shutdown = { version = "^0.31", path = "../../infrastructure/shutdown" } tari_storage = { version = "^0.31", path = "../../infrastructure/storage" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } anyhow = "1.0.53" bytes = "0.5" diff --git a/base_layer/p2p/examples/gen_node_identity.rs b/base_layer/p2p/examples/gen_node_identity.rs index 951da5b886..89ba2912fc 100644 --- a/base_layer/p2p/examples/gen_node_identity.rs +++ b/base_layer/p2p/examples/gen_node_identity.rs @@ -37,7 +37,7 @@ use tari_comms::{ peer_manager::{NodeIdentity, PeerFeatures}, utils::multiaddr::socketaddr_to_multiaddr, }; -use tari_crypto::tari_utilities::message_format::MessageFormat; +use tari_utilities::message_format::MessageFormat; fn random_address() -> Multiaddr { let port = OsRng.gen_range(9000..std::u16::MAX); diff --git a/base_layer/p2p/examples/gen_tor_identity.rs b/base_layer/p2p/examples/gen_tor_identity.rs index 1fc45062ab..195f270354 100644 --- a/base_layer/p2p/examples/gen_tor_identity.rs +++ b/base_layer/p2p/examples/gen_tor_identity.rs @@ -27,7 +27,7 @@ use std::{env::current_dir, fs, path::Path}; /// populate the peer manager in other examples. use clap::{App, Arg}; use tari_comms::{multiaddr::Multiaddr, tor}; -use tari_crypto::tari_utilities::message_format::MessageFormat; +use tari_utilities::message_format::MessageFormat; fn to_abs_path(path: &str) -> String { let path = Path::new(path); diff --git a/base_layer/p2p/src/initialization.rs b/base_layer/p2p/src/initialization.rs index 28dddd0a1d..f17e4afde3 100644 --- a/base_layer/p2p/src/initialization.rs +++ b/base_layer/p2p/src/initialization.rs @@ -94,6 +94,8 @@ pub enum CommsInitializationError { FailedToAddSeedPeer(#[from] PeerManagerError), #[error("Cannot acquire exclusive file lock, another instance of the application is already running")] CannotAcquireFileLock, + #[error("Invalid tor forward address: `{0}`")] + InvalidTorForwardAddress(std::io::Error), #[error("IO Error: `{0}`")] IoError(#[from] std::io::Error), } @@ -249,10 +251,10 @@ pub async fn spawn_comms_using_transport( async fn initialize_hidden_service( mut config: TorTransportConfig, -) -> Result { +) -> Result { let mut builder = tor::HiddenServiceBuilder::new() .with_hs_flags(tor::HsFlags::DETACH) - .with_port_mapping(config.to_port_mapping()) + .with_port_mapping(config.to_port_mapping()?) 
.with_socks_authentication(config.to_socks_auth()) .with_control_server_auth(config.to_control_auth()) .with_socks_address_override(config.socks_address_override) @@ -267,7 +269,8 @@ async fn initialize_hidden_service( builder = builder.with_tor_identity(identity); } - builder.build().await + let hidden_svc_ctl = builder.build().await?; + Ok(hidden_svc_ctl) } async fn configure_comms_and_dht( @@ -323,8 +326,6 @@ async fn configure_comms_and_dht( } // Hook up DHT messaging middlewares - // TODO: messaging events should be optional - let (messaging_events_sender, _) = broadcast::channel(1); let messaging_pipeline = pipeline::Builder::new() .outbound_buffer_size(config.outbound_buffer_size) .with_outbound_pipeline(outbound_rx, |sink| { @@ -339,6 +340,8 @@ async fn configure_comms_and_dht( ) .build(); + // TODO: messaging events should be optional + let (messaging_events_sender, _) = broadcast::channel(1); comms = comms.add_protocol_extension(MessagingProtocolExtension::new( messaging_events_sender, messaging_pipeline, diff --git a/base_layer/p2p/src/services/liveness/mock.rs b/base_layer/p2p/src/services/liveness/mock.rs index 5f4bad373f..9323c64480 100644 --- a/base_layer/p2p/src/services/liveness/mock.rs +++ b/base_layer/p2p/src/services/liveness/mock.rs @@ -28,8 +28,8 @@ use std::sync::{ use futures::StreamExt; use log::*; -use tari_crypto::tari_utilities::{acquire_read_lock, acquire_write_lock}; use tari_service_framework::{reply_channel, reply_channel::RequestContext}; +use tari_utilities::{acquire_read_lock, acquire_write_lock}; use tokio::sync::{broadcast, broadcast::error::SendError}; use crate::services::liveness::{ diff --git a/base_layer/p2p/src/transport.rs b/base_layer/p2p/src/transport.rs index 30781583b6..b4110ddc4b 100644 --- a/base_layer/p2p/src/transport.rs +++ b/base_layer/p2p/src/transport.rs @@ -28,9 +28,10 @@ use tari_comms::{ tor, tor::TorIdentity, transports::{predicate::FalsePredicate, SocksConfig}, + utils::multiaddr::multiaddr_to_socketaddr, }; -use crate::{SocksAuthentication, TorControlAuthentication}; +use crate::{initialization::CommsInitializationError, SocksAuthentication, TorControlAuthentication}; #[derive(Debug, Clone, Serialize, Deserialize, Default)] #[serde(deny_unknown_fields)] @@ -146,14 +147,29 @@ pub struct TorTransportConfig { /// When set to true, outbound TCP connections bypass the tor proxy. Defaults to false for better privacy, setting /// to true may improve network performance for TCP nodes. pub proxy_bypass_for_outbound_tcp: bool, + /// If set, instructs tor to forward traffic the the provided address. + pub forward_address: Option, /// The tor identity to use to create the hidden service. If None, a new one will be generated. #[serde(skip)] pub identity: Option, } impl TorTransportConfig { - pub fn to_port_mapping(&self) -> tor::PortMapping { - tor::PortMapping::new(self.onion_port.get(), ([127, 0, 0, 1], 0).into()) + /// Returns a [self::tor::PortMapping] struct that maps the [onion_port] to an address that is listening for + /// traffic. If [forward_address] is set, that address is used, otherwise 127.0.0.1:[onion_port] is used. + /// + /// [onion_port]: TorTransportConfig::onion_port + /// [forward_address]: TorTransportConfig::forward_address + pub fn to_port_mapping(&self) -> Result { + let forward_addr = self + .forward_address + .as_ref() + .map(|addr| multiaddr_to_socketaddr(addr)) + .transpose() + .map_err(CommsInitializationError::InvalidTorForwardAddress)? 
+ .unwrap_or_else(|| ([127, 0, 0, 1], 0).into()); + + Ok(tor::PortMapping::new(self.onion_port.get(), forward_addr)) } pub fn to_control_auth(&self) -> tor::Authentication { @@ -175,6 +191,7 @@ impl Default for TorTransportConfig { onion_port: NonZeroU16::new(18141).unwrap(), proxy_bypass_addresses: vec![], proxy_bypass_for_outbound_tcp: false, + forward_address: None, identity: None, } } diff --git a/base_layer/tari_mining_helper_ffi/Cargo.toml b/base_layer/tari_mining_helper_ffi/Cargo.toml index a792971299..9469a480f0 100644 --- a/base_layer/tari_mining_helper_ffi/Cargo.toml +++ b/base_layer/tari_mining_helper_ffi/Cargo.toml @@ -8,10 +8,10 @@ edition = "2018" [dependencies] tari_comms = { version = "^0.31", path = "../../comms/core" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_common = { path = "../../common" } tari_core = { path = "../core", default-features = false, features = ["transactions"]} -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } libc = "0.2.65" thiserror = "1.0.26" hex = "0.4.2" diff --git a/base_layer/tari_mining_helper_ffi/src/error.rs b/base_layer/tari_mining_helper_ffi/src/error.rs index de6e45baea..4a51983129 100644 --- a/base_layer/tari_mining_helper_ffi/src/error.rs +++ b/base_layer/tari_mining_helper_ffi/src/error.rs @@ -19,7 +19,7 @@ // SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
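The `forward_address` handling above resolves an optional configured multiaddr and otherwise falls back to a loopback address, reporting bad input as `InvalidTorForwardAddress`. A minimal sketch of the same fallback shape, assuming plain `std::net::SocketAddr` parsing in place of the real `multiaddr_to_socketaddr` conversion (the `resolve_forward_addr` helper is hypothetical):

```rust
use std::net::SocketAddr;

/// Resolve an optional forward address, falling back to 127.0.0.1:0.
/// Parse failures are surfaced to the caller, analogous to
/// CommsInitializationError::InvalidTorForwardAddress in the patch above.
fn resolve_forward_addr(forward_address: Option<&str>) -> Result<SocketAddr, std::net::AddrParseError> {
    let addr = forward_address
        .map(str::parse::<SocketAddr>) // parse only if an address was configured
        .transpose()?                  // propagate a parse error
        .unwrap_or_else(|| ([127, 0, 0, 1], 0).into());
    Ok(addr)
}

fn main() -> Result<(), std::net::AddrParseError> {
    assert_eq!(resolve_forward_addr(None)?, "127.0.0.1:0".parse::<SocketAddr>()?);
    assert_eq!(
        resolve_forward_addr(Some("10.0.0.5:9051"))?,
        "10.0.0.5:9051".parse::<SocketAddr>()?
    );
    Ok(())
}
```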
-use tari_crypto::tari_utilities::hex::HexError; +use tari_utilities::hex::HexError; use thiserror::Error; #[derive(Debug, Error, PartialEq)] diff --git a/base_layer/wallet/Cargo.toml b/base_layer/wallet/Cargo.toml index 678e6e9a9f..40d76ed7a5 100644 --- a/base_layer/wallet/Cargo.toml +++ b/base_layer/wallet/Cargo.toml @@ -11,7 +11,7 @@ tari_common = { path = "../../common" } tari_common_types = { version = "^0.31", path = "../../base_layer/common_types" } tari_comms = { version = "^0.31", path = "../../comms/core" } tari_comms_dht = { version = "^0.31", path = "../../comms/dht" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_key_manager = { version = "^0.31", path = "../key_manager" } tari_p2p = { version = "^0.31", path = "../p2p", features = ["auto-update"] } tari_script = { path = "../../infrastructure/tari_script" } @@ -19,7 +19,7 @@ tari_service_framework = { version = "^0.31", path = "../service_framework" } tari_shutdown = { version = "^0.31", path = "../../infrastructure/shutdown" } tari_storage = { version = "^0.31", path = "../../infrastructure/storage" } tari_common_sqlite = { path = "../../common_sqlite" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } # Uncomment for tokio tracing via tokio-console (needs "tracing" featurs) #console-subscriber = "0.1.3" diff --git a/base_layer/wallet/src/contacts_service/storage/sqlite_db.rs b/base_layer/wallet/src/contacts_service/storage/sqlite_db.rs index 1d35fe1c53..e28d125c45 100644 --- a/base_layer/wallet/src/contacts_service/storage/sqlite_db.rs +++ b/base_layer/wallet/src/contacts_service/storage/sqlite_db.rs @@ -26,7 +26,7 @@ use chrono::NaiveDateTime; use diesel::{prelude::*, result::Error as DieselError, SqliteConnection}; use tari_common_types::types::PublicKey; use tari_comms::peer_manager::NodeId; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use crate::{ contacts_service::{ diff --git a/base_layer/wallet/src/error.rs b/base_layer/wallet/src/error.rs index 0a952978a8..8b3c6aa95d 100644 --- a/base_layer/wallet/src/error.rs +++ b/base_layer/wallet/src/error.rs @@ -32,10 +32,10 @@ use tari_comms::{ }; use tari_comms_dht::store_forward::StoreAndForwardError; use tari_core::transactions::transaction_components::TransactionError; -use tari_crypto::tari_utilities::{hex::HexError, ByteArrayError}; use tari_key_manager::error::KeyManagerError; use tari_p2p::{initialization::CommsInitializationError, services::liveness::error::LivenessError}; use tari_service_framework::{reply_channel::TransportChannelError, ServiceInitializationError}; +use tari_utilities::{hex::HexError, ByteArrayError}; use thiserror::Error; use crate::{ @@ -91,7 +91,7 @@ pub enum WalletError { #[error("Transaction Error: {0}")] TransactionError(#[from] TransactionError), #[error("Byte array error")] - ByteArrayError(#[from] tari_crypto::tari_utilities::ByteArrayError), + ByteArrayError(#[from] tari_utilities::ByteArrayError), #[error("Utxo Scanner Error: {0}")] UtxoScannerError(#[from] UtxoScannerError), #[error("Key manager error: `{0}`")] diff --git a/base_layer/wallet/src/key_manager_service/error.rs b/base_layer/wallet/src/key_manager_service/error.rs index 85c7b9cdc3..1492c22868 100644 --- a/base_layer/wallet/src/key_manager_service/error.rs +++ 
b/base_layer/wallet/src/key_manager_service/error.rs @@ -26,7 +26,7 @@ use tari_script::ScriptError; use tari_utilities::{hex::HexError, ByteArrayError}; use crate::error::WalletStorageError; - +/// Error enum for the [KeyManagerService] #[derive(Debug, thiserror::Error)] pub enum KeyManagerServiceError { #[error("Branch does not exist")] @@ -42,7 +42,7 @@ pub enum KeyManagerServiceError { #[error("Tari Key Manager error: `{0}`")] TariKeyManagerError(#[from] KMError), } - +/// Error enum for the [KeyManagerStorage] #[derive(Debug, thiserror::Error)] pub enum KeyManagerStorageError { #[error("Value not found")] diff --git a/base_layer/wallet/src/key_manager_service/handle.rs b/base_layer/wallet/src/key_manager_service/handle.rs index efe273d36f..7d5407489b 100644 --- a/base_layer/wallet/src/key_manager_service/handle.rs +++ b/base_layer/wallet/src/key_manager_service/handle.rs @@ -35,7 +35,11 @@ use crate::key_manager_service::{ KeyManagerInner, KeyManagerInterface, }; - +/// The key manager provides a hierarchical key derivation function (KDF) that derives uniformly random secret keys from +/// a single seed key for arbitrary branches, using an implementation of `KeyManagerBackend` to store the current index +/// for each branch. +/// +/// This handle can be cloned cheaply and safely shared across multiple threads. #[derive(Clone)] pub struct KeyManagerHandle { key_manager_inner: Arc>>, @@ -44,6 +48,9 @@ pub struct KeyManagerHandle { impl KeyManagerHandle where TBackend: KeyManagerBackend + 'static { + /// Creates a new key manager. + /// * `master_seed` is the primary seed that will be used to derive all unique branch keys with their indexes + /// * `db` implements `KeyManagerBackend` and is used for persistent storage of branches and indices. pub fn new(master_seed: CipherSeed, db: KeyManagerDatabase) -> Self { KeyManagerHandle { key_manager_inner: Arc::new(RwLock::new(KeyManagerInner::new(master_seed, db))), diff --git a/base_layer/wallet/src/key_manager_service/initializer.rs b/base_layer/wallet/src/key_manager_service/initializer.rs index b3a401161d..2166176d92 100644 --- a/base_layer/wallet/src/key_manager_service/initializer.rs +++ b/base_layer/wallet/src/key_manager_service/initializer.rs @@ -36,6 +36,7 @@ use crate::key_manager_service::{ KeyManagerHandle, }; +/// Initializes the key manager service by implementing the [ServiceInitializer] trait. pub struct KeyManagerInitializer where T: KeyManagerBackend { @@ -46,6 +47,7 @@ where T: KeyManagerBackend impl KeyManagerInitializer where T: KeyManagerBackend + 'static { + /// Creates a new [KeyManagerInitializer] from the provided [KeyManagerBackend] and [CipherSeed] pub fn new(backend: T, master_seed: CipherSeed) -> Self { Self { backend: Some(backend), diff --git a/base_layer/wallet/src/key_manager_service/interface.rs b/base_layer/wallet/src/key_manager_service/interface.rs index 1d54a4bfe4..7f9e2d215f 100644 --- a/base_layer/wallet/src/key_manager_service/interface.rs +++ b/base_layer/wallet/src/key_manager_service/interface.rs @@ -25,6 +25,8 @@ use tari_common_types::types::PrivateKey; use crate::key_manager_service::error::KeyManagerServiceError; +/// The value returned from [add_new_branch]. `AlreadyExists` is returned if the branch was previously created, +/// otherwise `NewEntry` is returned. 
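The `KeyManagerHandle` documentation added above describes deriving keys for arbitrary branches from a single master seed plus a per-branch index. As a toy illustration only (the wallet's actual derivation scheme differs; `derive_key_bytes` is a hypothetical helper built on the `blake2` and `digest` 0.9 crates already used elsewhere in this workspace), one seed, a branch name and an index are enough to deterministically reproduce a whole family of keys:

```rust
use blake2::Blake2b;
use digest::Digest;

/// Deterministically derive key material from (master seed, branch, index).
/// Illustrative only; not the wallet's real derivation function.
fn derive_key_bytes(master_seed: &[u8], branch: &str, index: u64) -> Vec<u8> {
    Blake2b::new()
        .chain(master_seed)
        .chain(branch.as_bytes())
        .chain(index.to_le_bytes())
        .finalize()
        .to_vec()
}

fn main() {
    let seed = b"example master seed";
    // The same (seed, branch, index) always reproduces the same key material...
    assert_eq!(derive_key_bytes(seed, "comms", 0), derive_key_bytes(seed, "comms", 0));
    // ...while a different branch or index yields unrelated key material.
    assert_ne!(derive_key_bytes(seed, "comms", 0), derive_key_bytes(seed, "wallet", 0));
    assert_ne!(derive_key_bytes(seed, "comms", 0), derive_key_bytes(seed, "comms", 1));
}
```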
#[derive(Debug, PartialEq)] pub enum AddResult { NewEntry, @@ -36,28 +38,41 @@ pub struct NextKeyResult { pub index: u64, } +/// Behaviour required for the Key manager service #[async_trait::async_trait] pub trait KeyManagerInterface: Clone + Send + Sync + 'static { + /// Creates a new branch for the key manager service to track + /// If this is an existing branch, that is not yet tracked in memory, the key manager service will load the key + /// manager from the backend to track in memory and will return `Ok(AddResult::NewEntry)`. If the branch is already + /// tracked in memory the result will be `Ok(AddResult::AlreadyExists)`. If the branch does not exist in memory + /// or in the backend, a new branch will be created and tracked in the backend, `Ok(AddResult::NewEntry)`. async fn add_new_branch + Send>(&self, branch: T) -> Result; + /// Encrypts the key manager state using the provided cipher. An error is returned if the state is already + /// encrypted. async fn apply_encryption(&self, cipher: Aes256Gcm) -> Result<(), KeyManagerServiceError>; + /// Decrypts the key manager state using the provided cipher. An error is returned if the state is not encrypted. async fn remove_encryption(&self) -> Result<(), KeyManagerServiceError>; + /// Gets the next key from the branch. This will auto-increment the branch key index by 1 async fn get_next_key + Send>(&self, branch: T) -> Result; + /// Gets the key at the specified index async fn get_key_at_index + Send>( &self, branch: T, index: u64, ) -> Result; + /// Searches the branch to find the index used to generate the key, O(N) where N = index used. async fn find_key_index + Send>( &self, branch: T, key: &PrivateKey, ) -> Result; + /// Will update the index of the branch if the index given is higher than the current saved index async fn update_current_key_index_if_higher + Send>( &self, branch: T, diff --git a/base_layer/wallet/src/key_manager_service/mock.rs b/base_layer/wallet/src/key_manager_service/mock.rs index 9ce0426a47..abc8903aff 100644 --- a/base_layer/wallet/src/key_manager_service/mock.rs +++ b/base_layer/wallet/src/key_manager_service/mock.rs @@ -37,6 +37,8 @@ use std::{collections::HashMap, sync::Arc}; use crate::key_manager_service::{error::KeyManagerServiceError, storage::database::KeyManagerState}; +/// Testing Mock for the key manager service +/// Contains all functionality of the normal key manager service except persistent storage #[derive(Clone)] pub struct KeyManagerMock { key_managers: Arc>>>, @@ -44,6 +46,7 @@ pub struct KeyManagerMock { } impl KeyManagerMock { + /// Creates a new testing mock key manager service pub fn new(master_seed: CipherSeed) -> Self { KeyManagerMock { key_managers: Arc::new(RwLock::new(HashMap::new())), @@ -53,6 +56,7 @@ impl KeyManagerMock { } impl KeyManagerMock { + /// Adds a new branch for the key manager mock to track pub async fn add_key_manager_mock(&self, branch: String) -> Result { let result = if self.key_managers.read().await.contains_key(&branch) { AddResult::AlreadyExists @@ -75,6 +79,7 @@ impl KeyManagerMock { Ok(result) } + /// Gets the next key in the branch and increments the index pub async fn get_next_key_mock(&self, branch: String) -> Result { let mut lock = self.key_managers.write().await; let km = lock.get_mut(&branch).ok_or(KeyManagerServiceError::UnknownKeyBranch)?; @@ -85,6 +90,7 @@ impl KeyManagerMock { }) } + /// Gets the key at the requested index for the branch pub async fn get_key_at_index_mock( &self, branch: String, diff --git 
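The `KeyManagerInterface` doc comments above spell out the `AddResult` semantics and the auto-incrementing per-branch index. A self-contained model of just that bookkeeping (a hypothetical `BranchIndexTracker`, with the actual key material omitted) may make the contract easier to follow:

```rust
use std::collections::HashMap;

// Minimal, illustrative model of the branch/index bookkeeping described above.
// It is not the wallet's key manager: only AddResult semantics and the
// per-branch index counter are shown.
#[derive(Debug, PartialEq)]
enum AddResult {
    NewEntry,
    AlreadyExists,
}

#[derive(Default)]
struct BranchIndexTracker {
    branches: HashMap<String, u64>,
}

impl BranchIndexTracker {
    // Start tracking a branch; report whether it already existed.
    fn add_new_branch(&mut self, branch: &str) -> AddResult {
        if self.branches.contains_key(branch) {
            AddResult::AlreadyExists
        } else {
            self.branches.insert(branch.to_string(), 0);
            AddResult::NewEntry
        }
    }

    // "get_next_key" analogue: auto-increment the branch index and return it.
    fn next_index(&mut self, branch: &str) -> Option<u64> {
        let index = self.branches.get_mut(branch)?;
        *index += 1;
        Some(*index)
    }
}

fn main() {
    let mut tracker = BranchIndexTracker::default();
    assert_eq!(tracker.add_new_branch("comms"), AddResult::NewEntry);
    assert_eq!(tracker.add_new_branch("comms"), AddResult::AlreadyExists);
    assert_eq!(tracker.next_index("comms"), Some(1));
    assert_eq!(tracker.next_index("comms"), Some(2));
    assert_eq!(tracker.next_index("unknown"), None);
}
```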
a/base_layer/wallet/src/key_manager_service/storage/database/backend.rs b/base_layer/wallet/src/key_manager_service/storage/database/backend.rs index 2803c392d3..5fa8af54f0 100644 --- a/base_layer/wallet/src/key_manager_service/storage/database/backend.rs +++ b/base_layer/wallet/src/key_manager_service/storage/database/backend.rs @@ -24,14 +24,15 @@ use aes_gcm::Aes256Gcm; use crate::key_manager_service::{error::KeyManagerStorageError, storage::database::KeyManagerState}; /// This trait defines the required behaviour that a storage backend must provide for the Key Manager service. -/// Data is passed to and from the backend via the [DbKey], [DbValue], and [DbValueKey] enums. If new data types are -/// required to be supported by the backends then these enums can be updated to reflect this requirement and the trait -/// will remain the same pub trait KeyManagerBackend: Send + Sync + Clone { + /// This will retrieve the key manager specified by the branch string, None is returned if the key manager is not + /// found for the branch. fn get_key_manager(&self, branch: String) -> Result, KeyManagerStorageError>; + /// This will add an additional branch for the key manager to track. fn add_key_manager(&self, key_manager: KeyManagerState) -> Result<(), KeyManagerStorageError>; + /// This will increase the key index of the specified branch, and returns an error if the branch does not exist. fn increment_key_index(&self, branch: String) -> Result<(), KeyManagerStorageError>; - /// This method will set the currently stored key index for the key manager + /// This method will set the currently stored key index for the key manager. fn set_key_index(&self, branch: String, index: u64) -> Result<(), KeyManagerStorageError>; /// Apply encryption to the backend. fn apply_encryption(&self, cipher: Aes256Gcm) -> Result<(), KeyManagerStorageError>; diff --git a/base_layer/wallet/src/key_manager_service/storage/database/mod.rs b/base_layer/wallet/src/key_manager_service/storage/database/mod.rs index 4151906a86..d294d06727 100644 --- a/base_layer/wallet/src/key_manager_service/storage/database/mod.rs +++ b/base_layer/wallet/src/key_manager_service/storage/database/mod.rs @@ -45,10 +45,13 @@ pub struct KeyManagerDatabase { impl KeyManagerDatabase where T: KeyManagerBackend + 'static { + /// Creates a new [KeyManagerDatabase] linked to the provided KeyManagerBackend pub fn new(db: T) -> Self { Self { db: Arc::new(db) } } + /// Retrieves the key manager state of the provided branch + /// Returns None if the request branch does not exist. pub async fn get_key_manager_state( &self, branch: String, @@ -60,6 +63,7 @@ where T: KeyManagerBackend + 'static .and_then(|inner_result| inner_result) } + /// Saves the specified key manager state to the backend database. pub async fn set_key_manager_state(&self, state: KeyManagerState) -> Result<(), KeyManagerStorageError> { let db_clone = self.db.clone(); tokio::task::spawn_blocking(move || db_clone.add_key_manager(state)) @@ -69,6 +73,8 @@ where T: KeyManagerBackend + 'static Ok(()) } + /// Increment the key index of the provided branch of the key manager. + /// Will error if the branch does not exist. pub async fn increment_key_index(&self, branch: String) -> Result<(), KeyManagerStorageError> { let db_clone = self.db.clone(); tokio::task::spawn_blocking(move || db_clone.increment_key_index(branch)) @@ -77,6 +83,8 @@ where T: KeyManagerBackend + 'static Ok(()) } + /// Sets the key index of the provided branch of the key manager. + /// Will error if the branch does not exist. 
pub async fn set_key_index(&self, branch: String, index: u64) -> Result<(), KeyManagerStorageError> { let db_clone = self.db.clone(); tokio::task::spawn_blocking(move || db_clone.set_key_index(branch, index)) @@ -85,6 +93,8 @@ where T: KeyManagerBackend + 'static Ok(()) } + /// Encrypts the entire key manager with all branches. + /// This will only encrypt the index used, as the master seed phrase is not directly stored with the key manager. pub async fn apply_encryption(&self, cipher: Aes256Gcm) -> Result<(), KeyManagerStorageError> { let db_clone = self.db.clone(); tokio::task::spawn_blocking(move || db_clone.apply_encryption(cipher)) @@ -93,6 +103,7 @@ where T: KeyManagerBackend + 'static .and_then(|inner_result| inner_result) } + /// Decrypts the entire key manager. pub async fn remove_encryption(&self) -> Result<(), KeyManagerStorageError> { let db_clone = self.db.clone(); tokio::task::spawn_blocking(move || db_clone.remove_encryption()) diff --git a/base_layer/wallet/src/key_manager_service/storage/sqlite_db/key_manager_state.rs b/base_layer/wallet/src/key_manager_service/storage/sqlite_db/key_manager_state.rs index 3e614e1712..05b5df969c 100644 --- a/base_layer/wallet/src/key_manager_service/storage/sqlite_db/key_manager_state.rs +++ b/base_layer/wallet/src/key_manager_service/storage/sqlite_db/key_manager_state.rs @@ -35,6 +35,7 @@ use crate::{ }, }; +/// Represents a row in the key_manager_states table. #[derive(Clone, Debug, Queryable, Identifiable)] #[table_name = "key_manager_states"] #[primary_key(id)] @@ -45,6 +46,7 @@ pub struct KeyManagerStateSql { pub timestamp: NaiveDateTime, } +/// Struct used to create a new Key manager in the database #[derive(Clone, Debug, Insertable)] #[table_name = "key_manager_states"] pub struct NewKeyManagerStateSql { @@ -76,6 +78,7 @@ impl TryFrom for KeyManagerState { } impl NewKeyManagerStateSql { + /// Commits a new key manager into the database pub fn commit(&self, conn: &SqliteConnection) -> Result<(), KeyManagerStorageError> { diesel::insert_into(key_manager_states::table) .values(self.clone()) @@ -85,10 +88,14 @@ impl NewKeyManagerStateSql { } impl KeyManagerStateSql { + /// Retrieve every key manager branch currently in the database. + /// Returns a `Vec` of [KeyManagerStateSql]; if none are found, it will return an empty `Vec`. pub fn index(conn: &SqliteConnection) -> Result, KeyManagerStorageError> { Ok(key_manager_states::table.load::(conn)?) } + /// Retrieve the key manager for the provided branch + /// Will return Err if the branch does not exist in the database pub fn get_state(branch: &str, conn: &SqliteConnection) -> Result { key_manager_states::table .filter(key_manager_states::branch_seed.eq(branch.to_string())) @@ -96,6 +103,7 @@ impl KeyManagerStateSql { .map_err(|_| KeyManagerStorageError::KeyManagerNotInitialized) } + /// Creates or updates the database with the key manager state in this instance. pub fn set_state(&self, conn: &SqliteConnection) -> Result<(), KeyManagerStorageError> { match KeyManagerStateSql::get_state(&self.branch_seed, conn) { Ok(km) => { @@ -121,6 +129,7 @@ impl KeyManagerStateSql { Ok(()) } + /// Updates the key index of the provided key manager indicated by the id.
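The `KeyManagerDatabase` methods documented above all follow the same shape: clone the backend handle, then run the blocking storage call on tokio's blocking thread pool via `spawn_blocking`. A minimal sketch of that pattern, assuming a `tokio` dependency and using a hypothetical in-memory backend and `String` errors in place of the diesel backend and `KeyManagerStorageError`:

```rust
use std::{
    collections::HashMap,
    sync::{Arc, Mutex},
};

// Stand-in for a real blocking KeyManagerBackend implementation.
#[derive(Default)]
struct MemoryBackend {
    indices: Mutex<HashMap<String, u64>>,
}

impl MemoryBackend {
    // Blocking call, as a diesel-backed backend would be.
    fn increment_key_index(&self, branch: String) -> Result<u64, String> {
        let mut guard = self.indices.lock().map_err(|_| "poisoned lock".to_string())?;
        let index = guard.entry(branch).or_insert(0);
        *index += 1;
        Ok(*index)
    }
}

// Cheaply cloneable async façade over the blocking backend.
#[derive(Clone)]
struct AsyncKeyIndexDb {
    backend: Arc<MemoryBackend>,
}

impl AsyncKeyIndexDb {
    async fn increment_key_index(&self, branch: String) -> Result<u64, String> {
        let backend = self.backend.clone();
        // Move the blocking work off the async executor, as the wrapper above does.
        tokio::task::spawn_blocking(move || backend.increment_key_index(branch))
            .await
            .map_err(|e| e.to_string())?
    }
}

#[tokio::main]
async fn main() -> Result<(), String> {
    let db = AsyncKeyIndexDb { backend: Arc::new(MemoryBackend::default()) };
    assert_eq!(db.increment_key_index("comms".to_string()).await?, 1);
    assert_eq!(db.increment_key_index("comms".to_string()).await?, 2);
    Ok(())
}
```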
pub fn set_index(id: i32, index: Vec, conn: &SqliteConnection) -> Result<(), KeyManagerStorageError> { let update = KeyManagerStateUpdateSql { branch_seed: None, diff --git a/base_layer/wallet/src/key_manager_service/storage/sqlite_db/key_manager_state_old.rs b/base_layer/wallet/src/key_manager_service/storage/sqlite_db/key_manager_state_old.rs index 504c04dd5f..e08bb72c6d 100644 --- a/base_layer/wallet/src/key_manager_service/storage/sqlite_db/key_manager_state_old.rs +++ b/base_layer/wallet/src/key_manager_service/storage/sqlite_db/key_manager_state_old.rs @@ -25,8 +25,8 @@ use diesel::{prelude::*, SqliteConnection}; use crate::{key_manager_service::error::KeyManagerStorageError, schema::key_manager_states_old}; -// This is a temporary migration file to convert existing indexes to new ones. -// Todo remove at next testnet reset (currently on Dibbler) #testnet_reset +/// This is a temporary migration file to convert existing indexes to new ones. +/// Todo remove at next testnet reset (currently on Dibbler) #testnet_reset #[derive(Clone, Debug, Queryable, Identifiable)] #[table_name = "key_manager_states_old"] #[primary_key(id)] @@ -39,10 +39,12 @@ pub struct KeyManagerStateSqlOld { } impl KeyManagerStateSqlOld { + /// Retrieve all key manager states stored in the database. pub fn index(conn: &SqliteConnection) -> Result, KeyManagerStorageError> { Ok(key_manager_states_old::table.load::(conn)?) } + /// Deletes all the stored key manager states in the database. pub fn delete(conn: &SqliteConnection) -> Result<(), KeyManagerStorageError> { diesel::delete(key_manager_states_old::dsl::key_manager_states_old).execute(conn)?; Ok(()) diff --git a/base_layer/wallet/src/key_manager_service/storage/sqlite_db/mod.rs b/base_layer/wallet/src/key_manager_service/storage/sqlite_db/mod.rs index 8e5dae19c8..c93efef5a6 100644 --- a/base_layer/wallet/src/key_manager_service/storage/sqlite_db/mod.rs +++ b/base_layer/wallet/src/key_manager_service/storage/sqlite_db/mod.rs @@ -56,6 +56,9 @@ pub struct KeyManagerSqliteDatabase { } impl KeyManagerSqliteDatabase { + /// Creates a new sql backend from provided wallet db connection + /// * `cipher` is used to encrypt the sensitive fields in the database, if no cipher is provided, the database will + /// not encrypt sensitive fields pub fn new( database_connection: WalletDbConnection, cipher: Option, diff --git a/base_layer/wallet/src/output_manager_service/error.rs b/base_layer/wallet/src/output_manager_service/error.rs index 597cf11204..2442baffae 100644 --- a/base_layer/wallet/src/output_manager_service/error.rs +++ b/base_layer/wallet/src/output_manager_service/error.rs @@ -29,11 +29,10 @@ use tari_core::transactions::{ transaction_protocol::TransactionProtocolError, CoinbaseBuildError, }; -use tari_crypto::tari_utilities::ByteArrayError; use tari_key_manager::error::{KeyManagerError, MnemonicError}; use tari_script::ScriptError; use tari_service_framework::reply_channel::TransportChannelError; -use tari_utilities::hex::HexError; +use tari_utilities::{hex::HexError, ByteArrayError}; use thiserror::Error; use crate::{ diff --git a/base_layer/wallet/src/output_manager_service/storage/database/mod.rs b/base_layer/wallet/src/output_manager_service/storage/database/mod.rs index eec159d606..fd5c5e568d 100644 --- a/base_layer/wallet/src/output_manager_service/storage/database/mod.rs +++ b/base_layer/wallet/src/output_manager_service/storage/database/mod.rs @@ -37,7 +37,7 @@ use tari_core::transactions::{ tari_amount::MicroTari, transaction_components::{OutputFlags, 
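`KeyManagerSqliteDatabase::new` above accepts an optional cipher and only encrypts sensitive fields when one is supplied. A small sketch of that optional-encryption idea, following the published `aes-gcm` 0.9 API; the `SensitiveStore` type, the all-zero key and the fixed nonce are illustrative only, and real code must use unique nonces and a properly derived key:

```rust
use aes_gcm::{
    aead::{Aead, NewAead},
    Aes256Gcm, Key, Nonce,
};

// Sketch of the "optional cipher" idea: sensitive fields are only
// encrypted when a cipher was supplied, otherwise stored as-is.
struct SensitiveStore {
    cipher: Option<Aes256Gcm>,
}

impl SensitiveStore {
    fn prepare_field(&self, field: &[u8]) -> Vec<u8> {
        match &self.cipher {
            Some(cipher) => {
                // Fixed nonce for illustration only; never reuse nonces in real code.
                let nonce_bytes = [0u8; 12];
                cipher
                    .encrypt(Nonce::from_slice(&nonce_bytes), field)
                    .expect("encryption failed")
            },
            None => field.to_vec(),
        }
    }
}

fn main() {
    // Demo key; never hard-code real key material.
    let key_bytes = [0u8; 32];
    let key = Key::from_slice(&key_bytes);

    let encrypted = SensitiveStore { cipher: Some(Aes256Gcm::new(key)) };
    let plain = SensitiveStore { cipher: None };

    let secret = b"branch seed";
    assert_ne!(encrypted.prepare_field(secret), secret.to_vec());
    assert_eq!(plain.prepare_field(secret), secret.to_vec());
}
```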
TransactionOutput}, }; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use crate::output_manager_service::{ error::OutputManagerStorageError, diff --git a/base_layer/wallet/src/output_manager_service/storage/sqlite_db/new_output_sql.rs b/base_layer/wallet/src/output_manager_service/storage/sqlite_db/new_output_sql.rs index d1daad9f6a..6ff6877b54 100644 --- a/base_layer/wallet/src/output_manager_service/storage/sqlite_db/new_output_sql.rs +++ b/base_layer/wallet/src/output_manager_service/storage/sqlite_db/new_output_sql.rs @@ -23,7 +23,7 @@ use aes_gcm::Aes256Gcm; use derivative::Derivative; use diesel::{prelude::*, SqliteConnection}; use tari_common_types::transaction::TxId; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use crate::{ output_manager_service::{ diff --git a/base_layer/wallet/src/output_manager_service/tasks/txo_validation_task.rs b/base_layer/wallet/src/output_manager_service/tasks/txo_validation_task.rs index 131e933bad..7ae2648843 100644 --- a/base_layer/wallet/src/output_manager_service/tasks/txo_validation_task.rs +++ b/base_layer/wallet/src/output_manager_service/tasks/txo_validation_task.rs @@ -29,8 +29,8 @@ use tari_core::{ blocks::BlockHeader, proto::base_node::{QueryDeletedRequest, UtxoQueryRequest}, }; -use tari_crypto::tari_utilities::{hex::Hex, Hashable}; use tari_shutdown::ShutdownSignal; +use tari_utilities::{hex::Hex, Hashable}; use crate::{ connectivity_service::WalletConnectivityInterface, diff --git a/base_layer/wallet/src/storage/sqlite_db/wallet.rs b/base_layer/wallet/src/storage/sqlite_db/wallet.rs index dc9fd93ba0..f97972c438 100644 --- a/base_layer/wallet/src/storage/sqlite_db/wallet.rs +++ b/base_layer/wallet/src/storage/sqlite_db/wallet.rs @@ -42,11 +42,11 @@ use tari_comms::{ peer_manager::{IdentitySignature, PeerFeatures}, tor::TorIdentity, }; -use tari_crypto::tari_utilities::{ +use tari_key_manager::cipher_seed::CipherSeed; +use tari_utilities::{ hex::{from_hex, Hex}, message_format::MessageFormat, }; -use tari_key_manager::cipher_seed::CipherSeed; use tokio::time::Instant; use crate::{ @@ -768,9 +768,9 @@ impl Encryptable for ClientKeyValueSql { #[cfg(test)] mod test { - use tari_crypto::tari_utilities::hex::Hex; use tari_key_manager::cipher_seed::CipherSeed; use tari_test_utils::random::string; + use tari_utilities::hex::Hex; use tempfile::tempdir; use crate::storage::{ diff --git a/base_layer/wallet/src/transaction_service/error.rs b/base_layer/wallet/src/transaction_service/error.rs index f22276dcff..09e521281f 100644 --- a/base_layer/wallet/src/transaction_service/error.rs +++ b/base_layer/wallet/src/transaction_service/error.rs @@ -30,9 +30,9 @@ use tari_core::transactions::{ transaction_components::TransactionError, transaction_protocol::TransactionProtocolError, }; -use tari_crypto::tari_utilities::ByteArrayError; use tari_p2p::services::liveness::error::LivenessError; use tari_service_framework::reply_channel::TransportChannelError; +use tari_utilities::ByteArrayError; use thiserror::Error; use tokio::sync::broadcast::error::RecvError; @@ -152,7 +152,7 @@ pub enum TransactionServiceError { #[error("Maximum Attempts Exceeded")] MaximumAttemptsExceeded, #[error("Byte array error")] - ByteArrayError(#[from] tari_crypto::tari_utilities::ByteArrayError), + ByteArrayError(#[from] tari_utilities::ByteArrayError), #[error("Transaction Service Error: `{0}`")] ServiceError(String), #[error("Wallet Recovery in progress so Transaction Service Messaging Requests ignored")] diff --git 
a/base_layer/wallet/src/transaction_service/protocols/transaction_broadcast_protocol.rs b/base_layer/wallet/src/transaction_service/protocols/transaction_broadcast_protocol.rs index 22049a51fd..4a4130daf6 100644 --- a/base_layer/wallet/src/transaction_service/protocols/transaction_broadcast_protocol.rs +++ b/base_layer/wallet/src/transaction_service/protocols/transaction_broadcast_protocol.rs @@ -39,7 +39,7 @@ use tari_core::{ }, transactions::transaction_components::Transaction, }; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tokio::{sync::watch, time::sleep}; use crate::{ diff --git a/base_layer/wallet/src/transaction_service/protocols/transaction_receive_protocol.rs b/base_layer/wallet/src/transaction_service/protocols/transaction_receive_protocol.rs index a724fbc628..eb2565d92f 100644 --- a/base_layer/wallet/src/transaction_service/protocols/transaction_receive_protocol.rs +++ b/base_layer/wallet/src/transaction_service/protocols/transaction_receive_protocol.rs @@ -34,7 +34,7 @@ use tari_core::transactions::{ transaction_components::Transaction, transaction_protocol::{recipient::RecipientState, sender::TransactionSenderMessage}, }; -use tari_crypto::tari_utilities::Hashable; +use tari_utilities::Hashable; use tokio::{ sync::{mpsc, oneshot}, time::sleep, diff --git a/base_layer/wallet/src/transaction_service/protocols/transaction_validation_protocol.rs b/base_layer/wallet/src/transaction_service/protocols/transaction_validation_protocol.rs index 3b09a99310..adff7803e5 100644 --- a/base_layer/wallet/src/transaction_service/protocols/transaction_validation_protocol.rs +++ b/base_layer/wallet/src/transaction_service/protocols/transaction_validation_protocol.rs @@ -40,7 +40,7 @@ use tari_core::{ blocks::BlockHeader, proto::{base_node::Signatures as SignaturesProto, types::Signature as SignatureProto}, }; -use tari_crypto::tari_utilities::{hex::Hex, Hashable}; +use tari_utilities::{hex::Hex, Hashable}; use crate::{ connectivity_service::WalletConnectivityInterface, diff --git a/base_layer/wallet/src/transaction_service/storage/sqlite_db.rs b/base_layer/wallet/src/transaction_service/storage/sqlite_db.rs index 845b06007d..b7d0ee39a0 100644 --- a/base_layer/wallet/src/transaction_service/storage/sqlite_db.rs +++ b/base_layer/wallet/src/transaction_service/storage/sqlite_db.rs @@ -43,7 +43,7 @@ use tari_common_types::{ }; use tari_comms::types::CommsPublicKey; use tari_core::transactions::tari_amount::MicroTari; -use tari_crypto::tari_utilities::{ +use tari_utilities::{ hex::{from_hex, Hex}, ByteArray, }; diff --git a/base_layer/wallet/src/utxo_scanner_service/error.rs b/base_layer/wallet/src/utxo_scanner_service/error.rs index 64e5e34e54..1f1feab775 100644 --- a/base_layer/wallet/src/utxo_scanner_service/error.rs +++ b/base_layer/wallet/src/utxo_scanner_service/error.rs @@ -22,8 +22,8 @@ use serde_json::Error as SerdeJsonError; use tari_comms::{connectivity::ConnectivityError, protocol::rpc::RpcError}; -use tari_crypto::tari_utilities::hex::HexError; use tari_service_framework::reply_channel::TransportChannelError; +use tari_utilities::hex::HexError; use thiserror::Error; use crate::{error::WalletStorageError, output_manager_service::error::OutputManagerError}; diff --git a/base_layer/wallet/src/utxo_scanner_service/utxo_scanner_task.rs b/base_layer/wallet/src/utxo_scanner_service/utxo_scanner_task.rs index d4fb6c4aa3..5db3667ebf 100644 --- a/base_layer/wallet/src/utxo_scanner_service/utxo_scanner_task.rs +++ 
b/base_layer/wallet/src/utxo_scanner_service/utxo_scanner_task.rs @@ -79,13 +79,15 @@ where TBackend: WalletBackend + 'static pub async fn run(mut self) -> Result<(), UtxoScannerError> { if self.mode == UtxoScannerMode::Recovery { self.set_recovery_mode().await?; - } else if self.check_recovery_mode().await? { - warn!( - target: LOG_TARGET, - "Scanning round aborted as a Recovery is in progress" - ); - return Ok(()); } else { + let in_progress = self.check_recovery_mode().await?; + if in_progress { + warn!( + target: LOG_TARGET, + "Scanning round aborted as a Recovery is in progress" + ); + return Ok(()); + } } loop { @@ -571,15 +573,12 @@ where TBackend: WalletBackend + 'static } async fn check_recovery_mode(&self) -> Result { - let value: Option = self - .resources + self.resources .db - .get_client_key_from_str(RECOVERY_KEY.to_owned()) - .await?; - match value { - None => Ok(false), - Some(_v) => Ok(true), - } + .get_client_key_from_str::(RECOVERY_KEY.to_owned()) + .await + .map(|x| x.is_some()) + .map_err(UtxoScannerError::from) // in case if `get_client_key_from_str` returns not exactly that type } async fn clear_recovery_mode(&self) -> Result<(), UtxoScannerError> { diff --git a/base_layer/wallet/tests/transaction_service_tests/service.rs b/base_layer/wallet/tests/transaction_service_tests/service.rs index 4678aee4d2..b7b48746fc 100644 --- a/base_layer/wallet/tests/transaction_service_tests/service.rs +++ b/base_layer/wallet/tests/transaction_service_tests/service.rs @@ -546,7 +546,7 @@ fn manage_single_transaction() { .block_on(alice_ts.send_transaction( bob_node_identity.public_key().clone(), value, - MicroTari::from(20), + MicroTari::from(4), "".to_string() )) .is_err()); @@ -557,7 +557,7 @@ fn manage_single_transaction() { .block_on(alice_ts.send_transaction( bob_node_identity.public_key().clone(), value, - MicroTari::from(20), + MicroTari::from(4), message, )) .expect("Alice sending tx"); @@ -2110,7 +2110,6 @@ fn test_set_num_confirmations() { } #[test] -#[ignore = "test is flaky"] fn test_transaction_cancellation() { let factories = CryptoFactories::default(); let mut runtime = Runtime::new().unwrap(); @@ -2253,7 +2252,7 @@ fn test_transaction_cancellation() { let amount = MicroTari::from(10_000); builder .with_lock_height(0) - .with_fee_per_gram(MicroTari::from(177)) + .with_fee_per_gram(MicroTari::from(5)) .with_offset(PrivateKey::random(&mut OsRng)) .with_private_nonce(PrivateKey::random(&mut OsRng)) .with_amount(0, amount) @@ -2340,7 +2339,7 @@ fn test_transaction_cancellation() { let amount = MicroTari::from(10_000); builder .with_lock_height(0) - .with_fee_per_gram(MicroTari::from(177)) + .with_fee_per_gram(MicroTari::from(5)) .with_offset(PrivateKey::random(&mut OsRng)) .with_private_nonce(PrivateKey::random(&mut OsRng)) .with_amount(0, amount) diff --git a/base_layer/wallet_ffi/Cargo.toml b/base_layer/wallet_ffi/Cargo.toml index 127d6d8dbd..8354e08b70 100644 --- a/base_layer/wallet_ffi/Cargo.toml +++ b/base_layer/wallet_ffi/Cargo.toml @@ -11,12 +11,12 @@ tari_common = {path="../../common"} tari_common_types = {path="../common_types"} tari_comms = { version = "^0.31", path = "../../comms/core", features = ["c_integration"]} tari_comms_dht = { version = "^0.31", path = "../../comms/dht", default-features = false } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_key_manager = { version = "^0.31", path = "../key_manager" } tari_p2p = { 
version = "^0.31", path = "../p2p" } tari_script = { path = "../../infrastructure/tari_script" } tari_shutdown = { version = "^0.31", path = "../../infrastructure/shutdown" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } tari_wallet = { version = "^0.31", path = "../wallet", features = ["c_integration"]} chrono = { version = "0.4.19", default-features = false, features = ["serde"] } diff --git a/base_layer/wallet_ffi/src/tasks.rs b/base_layer/wallet_ffi/src/tasks.rs index 096e04b17b..0013e220ba 100644 --- a/base_layer/wallet_ffi/src/tasks.rs +++ b/base_layer/wallet_ffi/src/tasks.rs @@ -21,7 +21,7 @@ // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. use log::*; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tari_wallet::{error::WalletError, utxo_scanner_service::handle::UtxoScannerEvent}; use tokio::{sync::broadcast, task::JoinHandle}; diff --git a/common/config/presets/base_node.toml b/common/config/presets/base_node.toml index 51e5de076f..c092fb97b3 100644 --- a/common/config/presets/base_node.toml +++ b/common/config/presets/base_node.toml @@ -16,6 +16,7 @@ grpc_address = "/ip4/127.0.0.1/tcp/18142" # Spin up and use a built-in Tor instance. This only works on macos/linux and you must comment out tor_control_address below. # This requires that the base node was built with the optional "libtor" feature flag. +# This requires that the base node was built with the optional "libtor" feature flag. #use_libtor = true [dibbler.base_node] @@ -64,6 +65,8 @@ tor.proxy_bypass_addresses = [] #tor.proxy_bypass_addresses = ["/dns4/my-foo-base-node/tcp/9998"] # When using the tor transport and set to true, outbound TCP connections bypass the tor proxy. Defaults to false for better privacy tor.proxy_bypass_for_outbound_tcp = false +# Custom address to forward tor traffic. +#tor.forward_address = "/ip4/127.0.0.1/tcp/0" # Use a SOCKS5 proxy transport. This transport recognises any addresses supported by the proxy. 
#type = "socks5" diff --git a/common/src/exit_codes.rs b/common/src/exit_codes.rs index 063e1d496e..452f9638d4 100644 --- a/common/src/exit_codes.rs +++ b/common/src/exit_codes.rs @@ -12,7 +12,7 @@ pub struct ExitError { } impl ExitError { - pub fn new(exit_code: ExitCode, details: &impl ToString) -> Self { + pub fn new(exit_code: ExitCode, details: impl ToString) -> Self { let details = Some(details.to_string()); Self { exit_code, details } } diff --git a/comms/core/Cargo.toml b/comms/core/Cargo.toml index 10133db72e..675a70c380 100644 --- a/comms/core/Cargo.toml +++ b/comms/core/Cargo.toml @@ -10,10 +10,11 @@ version = "0.31.1" edition = "2018" [dependencies] -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_metrics = { path = "../../infrastructure/metrics" } tari_storage = { version = "^0.31", path = "../../infrastructure/storage" } tari_shutdown = { version = "^0.31", path = "../../infrastructure/shutdown" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } anyhow = "1.0.53" async-trait = "0.1.36" diff --git a/comms/core/examples/stress/error.rs b/comms/core/examples/stress/error.rs index c7e0a3d1d7..f181e5bf74 100644 --- a/comms/core/examples/stress/error.rs +++ b/comms/core/examples/stress/error.rs @@ -29,7 +29,7 @@ use tari_comms::{ CommsBuilderError, PeerConnectionError, }; -use tari_crypto::tari_utilities::message_format::MessageFormatError; +use tari_utilities::message_format::MessageFormatError; use thiserror::Error; use tokio::{ sync::{mpsc::error::SendError, oneshot}, diff --git a/comms/core/examples/stress/prompt.rs b/comms/core/examples/stress/prompt.rs index aa8e360865..9259dd5b21 100644 --- a/comms/core/examples/stress/prompt.rs +++ b/comms/core/examples/stress/prompt.rs @@ -27,7 +27,7 @@ use tari_comms::{ peer_manager::{NodeId, Peer}, types::CommsPublicKey, }; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use super::error::Error; use crate::stress::service::{StressProtocol, StressProtocolKind}; diff --git a/comms/core/examples/stress/service.rs b/comms/core/examples/stress/service.rs index 95ed1a13f8..4c2af5dbb6 100644 --- a/comms/core/examples/stress/service.rs +++ b/comms/core/examples/stress/service.rs @@ -39,8 +39,8 @@ use tari_comms::{ PeerConnection, Substream, }; -use tari_crypto::tari_utilities::hex::Hex; use tari_shutdown::Shutdown; +use tari_utilities::hex::Hex; use tokio::{ io::{AsyncReadExt, AsyncWriteExt}, sync::{mpsc, oneshot, RwLock}, diff --git a/comms/core/examples/stress_test.rs b/comms/core/examples/stress_test.rs index 12022d5f96..a101198b9e 100644 --- a/comms/core/examples/stress_test.rs +++ b/comms/core/examples/stress_test.rs @@ -25,8 +25,8 @@ use std::{env, net::Ipv4Addr, path::Path, process, sync::Arc, time::Duration}; use futures::{future, future::Either}; use stress::{error::Error, prompt::user_prompt}; -use tari_crypto::tari_utilities::message_format::MessageFormat; use tari_shutdown::Shutdown; +use tari_utilities::message_format::MessageFormat; use tempfile::Builder; use tokio::{sync::oneshot, time}; diff --git a/comms/core/examples/tor.rs b/comms/core/examples/tor.rs index 1c579dbaef..babf99acc2 100644 --- a/comms/core/examples/tor.rs +++ b/comms/core/examples/tor.rs @@ -18,11 +18,11 @@ use tari_comms::{ CommsBuilder, CommsNode, }; -use tari_crypto::tari_utilities::message_format::MessageFormat; use tari_storage::{ 
lmdb_store::{LMDBBuilder, LMDBConfig}, LMDBWrapper, }; +use tari_utilities::message_format::MessageFormat; use tempfile::Builder; use tokio::{ runtime, diff --git a/comms/core/examples/vanity_id.rs b/comms/core/examples/vanity_id.rs index d796117028..69f0c14212 100644 --- a/comms/core/examples/vanity_id.rs +++ b/comms/core/examples/vanity_id.rs @@ -21,7 +21,8 @@ // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. use tari_comms::peer_manager::NodeId; -use tari_crypto::{keys::PublicKey, ristretto::RistrettoPublicKey, tari_utilities::hex::Hex}; +use tari_crypto::{keys::PublicKey, ristretto::RistrettoPublicKey}; +use tari_utilities::hex::Hex; #[tokio::main] async fn main() { diff --git a/comms/core/src/backoff.rs b/comms/core/src/backoff.rs index 35e5c02964..39f912a344 100644 --- a/comms/core/src/backoff.rs +++ b/comms/core/src/backoff.rs @@ -22,6 +22,7 @@ use std::{cmp::min, time::Duration}; +/// Boxed backoff pub type BoxedBackoff = Box; pub trait Backoff { @@ -34,6 +35,7 @@ impl Backoff for BoxedBackoff { } } +/// Returns a backoff Duration that increases exponentially with the number of attempts. #[derive(Debug, Clone)] pub struct ExponentialBackoff { factor: f32, @@ -61,6 +63,7 @@ impl Backoff for ExponentialBackoff { } } +/// Returns a constant backoff Duration regardless of the number of attempts. #[derive(Clone)] pub struct ConstantBackoff(Duration); diff --git a/comms/core/src/bounded_executor.rs b/comms/core/src/bounded_executor.rs index 7558bfd610..2754e51e06 100644 --- a/comms/core/src/bounded_executor.rs +++ b/comms/core/src/bounded_executor.rs @@ -174,11 +174,13 @@ impl BoundedExecutor { } } +/// A task executor that can be configured to be bounded or unbounded. pub struct OptionallyBoundedExecutor { inner: Either, } impl OptionallyBoundedExecutor { + /// Create a new OptionallyBoundedExecutor. If `num_permits` is `None` the executor will be unbounded. pub fn new(executor: runtime::Handle, num_permits: Option) -> Self { Self { inner: num_permits @@ -187,10 +189,13 @@ impl OptionallyBoundedExecutor { } } + /// Create a new OptionallyBoundedExecutor from the current tokio context. If `num_permits` is `None` the executor + /// will be unbounded. pub fn from_current(num_permits: Option) -> Self { Self::new(current(), num_permits) } + /// Returns true if this executor can spawn, otherwise false. pub fn can_spawn(&self) -> bool { match &self.inner { Either::Left(_) => true, @@ -198,6 +203,8 @@ impl OptionallyBoundedExecutor { } } + /// Tries to spawn a new task, returning its `JoinHandle`. An error is returned if the executor is bounded and currently + /// full. pub fn try_spawn(&self, future: F) -> Result, TrySpawnError> where F: Future + Send + 'static, @@ -209,6 +216,8 @@ impl OptionallyBoundedExecutor { } } + /// Spawns a new task returning its `JoinHandle`. If the executor is running `num_permits` tasks, this waits until a + /// task is available. pub async fn spawn(&self, future: F) -> JoinHandle where F: Future + Send + 'static, diff --git a/comms/core/src/builder/comms_node.rs b/comms/core/src/builder/comms_node.rs index b6b45dde49..48e2da083f 100644 --- a/comms/core/src/builder/comms_node.rs +++ b/comms/core/src/builder/comms_node.rs @@ -89,17 +89,21 @@ impl UnspawnedCommsNode { self } + /// Adds [ProtocolExtensions](crate::protocol::ProtocolExtensions) to this node. 
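The `OptionallyBoundedExecutor` doc comments a few hunks up spell out the bounded/unbounded split; a short sketch of how those semantics are expected to behave (the `from_current`/`try_spawn`/`spawn`/`can_spawn` signatures come from the hunk, the task bodies and assertions are illustrative):

```rust
// Sketch only; assumes tari_comms::bounded_executor is exported as in the
// current crate layout.
use tari_comms::bounded_executor::OptionallyBoundedExecutor;

#[tokio::main]
async fn main() {
    // Bounded: at most one task may hold a permit at a time.
    let executor = OptionallyBoundedExecutor::from_current(Some(1));

    // try_spawn returns immediately and errors only when bounded and full.
    let first = executor.try_spawn(async { 1 + 1 });
    assert!(first.is_ok());

    // spawn waits for a permit instead of failing once the bound is reached.
    let handle = executor.spawn(async { 2 + 2 }).await;
    assert_eq!(handle.await.unwrap(), 4);

    // Unbounded: permits are never enforced, so can_spawn() is always true.
    let unbounded = OptionallyBoundedExecutor::from_current(None);
    assert!(unbounded.can_spawn());
}
```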
pub fn add_protocol_extensions(mut self, extensions: ProtocolExtensions) -> Self { self.protocol_extensions.extend(extensions); self } - /// Add a protocol extension + /// Adds an implementation of [ProtocolExtension](crate::protocol::ProtocolExtension) to this node. + /// This is used to add custom protocols to Tari comms. pub fn add_protocol_extension(mut self, extension: T) -> Self { self.protocol_extensions.add(extension); self } + /// Registers custom ProtocolIds and mpsc notifier. A [ProtocolNotification](crate::protocol::ProtocolNotification) + /// will be sent on that channel whenever a remote peer requests to speak the given protocols. pub fn add_protocol>( mut self, protocol: I, @@ -109,7 +113,7 @@ impl UnspawnedCommsNode { self } - /// Set the listener address + /// Set the listener address. This is an alias to `CommsBuilder::with_listener_address`. pub fn with_listener_address(mut self, listener_address: Multiaddr) -> Self { self.builder = self.builder.with_listener_address(listener_address); self @@ -121,6 +125,7 @@ impl UnspawnedCommsNode { self } + /// Spawn a new node using the specified [Transport](crate::transports::Transport). pub async fn spawn_with_transport(self, transport: TTransport) -> Result where TTransport: Transport + Unpin + Send + Sync + Clone + 'static, diff --git a/comms/core/src/builder/mod.rs b/comms/core/src/builder/mod.rs index 22e4b82d0f..6219fd0039 100644 --- a/comms/core/src/builder/mod.rs +++ b/comms/core/src/builder/mod.rs @@ -20,12 +20,6 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -//! # CommsBuilder -//! -//! The [CommsBuilder] provides a simple builder API for getting Tari comms p2p messaging up and running. -//! -//! [CommsBuilder]: ./builder/struct.CommsBuilder.html - mod comms_node; pub use comms_node::{CommsNode, UnspawnedCommsNode}; @@ -41,13 +35,13 @@ mod placeholder; #[cfg(test)] mod tests; -use std::{fs::File, sync::Arc}; +use std::{fs::File, sync::Arc, time::Duration}; use tari_shutdown::ShutdownSignal; use tokio::sync::{broadcast, mpsc}; use crate::{ - backoff::{Backoff, BoxedBackoff, ExponentialBackoff}, + backoff::{Backoff, BoxedBackoff, ConstantBackoff}, connection_manager::{ConnectionManagerConfig, ConnectionManagerRequester}, connectivity::{ConnectivityConfig, ConnectivityRequester}, multiaddr::Multiaddr, @@ -57,7 +51,70 @@ use crate::{ types::CommsDatabase, }; -/// The `CommsBuilder` provides a simple builder API for getting Tari comms p2p messaging up and running. +/// # CommsBuilder +/// +/// [CommsBuilder] is used to customize and spawn Tari comms core. +/// +/// The following example will get a node customized for your own network up and running. 
+/// +/// ```rust +/// # use std::{sync::Arc, time::Duration}; +/// # use rand::rngs::OsRng; +/// # use tari_shutdown::Shutdown; +/// # use tari_comms::{ +/// # {CommsBuilder, NodeIdentity}, +/// # peer_manager::{PeerStorage, PeerFeatures}, +/// # transports::TcpTransport, +/// # }; +/// use tari_storage::{ +/// lmdb_store::{LMDBBuilder, LMDBConfig}, +/// LMDBWrapper, +/// }; +/// +/// # #[tokio::main] +/// # async fn main() { +/// let node_identity = Arc::new(NodeIdentity::random( +/// &mut OsRng, +/// "/dns4/basenodezforhire.com/tcp/18000".parse().unwrap(), +/// PeerFeatures::COMMUNICATION_NODE, +/// )); +/// node_identity.sign(); +/// let mut shutdown = Shutdown::new(); +/// let datastore = LMDBBuilder::new() +/// .set_path("/tmp") +/// .set_env_config(LMDBConfig::default()) +/// .set_max_number_of_databases(1) +/// .add_database("peers", lmdb_zero::db::CREATE) +/// .build() +/// .unwrap(); +/// +/// let peer_database = datastore.get_handle("peers").unwrap(); +/// let peer_database = LMDBWrapper::new(Arc::new(peer_database)); +/// +/// let unspawned_node = CommsBuilder::new() +/// // .with_listener_address("/ip4/0.0.0.0/tcp/18000".parse().unwrap()) +/// .with_node_identity(node_identity) +/// .with_peer_storage(peer_database, None) +/// .with_shutdown_signal(shutdown.to_signal()) +/// .build() +/// .unwrap(); +/// // This is your chance to add customizations that may require comms components for e.g. PeerManager. +/// // let my_peer = Peer::new(...); +/// // unspawned_node.peer_manager().add_peer(my_peer.clone()); +/// // Add custom extensions implementing `ProtocolExtension` +/// // unspawned_node = unspawned_node.add_protocol_extension(MyCustomProtocol::new(unspawned_node.peer_manager())); +/// +/// let transport = TcpTransport::new(); +/// let node = unspawned_node.spawn_with_transport(transport).await.unwrap(); +/// // Node is alive for 2 seconds +/// tokio::time::sleep(Duration::from_secs(2)).await; +/// shutdown.trigger(); +/// node.wait_until_shutdown().await; +/// // let peer_conn = node.connectivity().dial_peer(my_peer.node_id).await.unwrap(); +/// # } +/// ``` +/// +/// [CommsBuilder]: crate::CommsBuilder pub struct CommsBuilder { peer_storage: Option, peer_storage_file_lock: Option, @@ -76,7 +133,7 @@ impl Default for CommsBuilder { peer_storage: None, peer_storage_file_lock: None, node_identity: None, - dial_backoff: Box::new(ExponentialBackoff::default()), + dial_backoff: Box::new(ConstantBackoff::new(Duration::from_millis(500))), hidden_service_ctl: None, connection_manager_config: ConnectionManagerConfig::default(), connectivity_config: ConnectivityConfig::default(), @@ -141,21 +198,26 @@ impl CommsBuilder { self } + /// Sets the address that the transport will listen on. The address must be compatible with the transport. pub fn with_listener_address(mut self, listener_address: Multiaddr) -> Self { self.connection_manager_config.listener_address = listener_address; self } + /// Sets an auxiliary TCP listener address that can accept peer connections. This is optional. pub fn with_auxiliary_tcp_listener_address(mut self, listener_address: Multiaddr) -> Self { self.connection_manager_config.auxiliary_tcp_listener_address = Some(listener_address); self } + /// Sets the maximum allowed liveness sessions. Liveness is typically used by tools like docker or kubernetes to + /// detect that the node is live. 
Defaults to 0 (disabled) pub fn with_listener_liveness_max_sessions(mut self, max_sessions: usize) -> Self { self.connection_manager_config.liveness_max_sessions = max_sessions; self } + /// Restrict liveness sessions to certain address ranges (CIDR format). pub fn with_listener_liveness_allowlist_cidrs(mut self, cidrs: Vec) -> Self { self.connection_manager_config.liveness_cidr_allowlist = cidrs; self @@ -194,8 +256,8 @@ impl CommsBuilder { self } - /// Set the backoff that [ConnectionManager] uses when dialing peers. This is optional. If omitted the default - /// ExponentialBackoff is used. [ConnectionManager]: crate::connection_manager::next::ConnectionManager + /// Set the backoff to use when a dial to a remote peer fails. This is optional. If omitted the default + /// [ConstantBackoff](crate::backoff::ConstantBackoff) of 500ms is used. pub fn with_dial_backoff(mut self, backoff: T) -> Self where T: Backoff + Send + Sync + 'static { self.dial_backoff = Box::new(backoff); diff --git a/comms/core/src/compat.rs b/comms/core/src/compat.rs deleted file mode 100644 index 67b53c7c91..0000000000 --- a/comms/core/src/compat.rs +++ /dev/null @@ -1,94 +0,0 @@ -// Copyright 2020, The Tari Project -// -// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the -// following conditions are met: -// -// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following -// disclaimer. -// -// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the -// following disclaimer in the documentation and/or other materials provided with the distribution. -// -// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote -// products derived from this software without specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, -// INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, -// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE -// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -// Copyright (c) The Libra Core Contributors -// SPDX-License-Identifier: Apache-2.0 - -//! This module provides a compatibility shim between traits in the `futures` and `tokio` crate. -use std::{ - io, - pin::Pin, - task::{self, Context, Poll}, -}; -use tokio::io::ReadBuf; - -/// `IoCompat` provides a compatibility shim between the `AsyncRead`/`AsyncWrite` traits provided by -/// the `futures` library and those provided by the `tokio` library since they are different and -/// incompatible with one another. 
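Note the behaviour change tucked into the builder default above: dials now back off with a fixed 500 ms `ConstantBackoff` rather than `ExponentialBackoff`. Callers that want the old behaviour can opt back in through `with_dial_backoff`; a minimal sketch, assuming `ExponentialBackoff::default()` keeps its current parameters:

```rust
// Sketch: restoring exponential dial backoff on top of the new constant default.
use tari_comms::{backoff::ExponentialBackoff, CommsBuilder};

fn builder_with_exponential_dial_backoff() -> CommsBuilder {
    CommsBuilder::new().with_dial_backoff(ExponentialBackoff::default())
}
```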
-#[derive(Copy, Clone, Debug)] -pub struct IoCompat { - inner: T, -} - -impl IoCompat { - pub fn new(inner: T) -> Self { - IoCompat { inner } - } -} - -impl tokio::io::AsyncRead for IoCompat -where T: futures::io::AsyncRead + Unpin -{ - fn poll_read(self: Pin<&mut Self>, cx: &mut Context<'_>, buf: &mut ReadBuf<'_>) -> Poll> { - futures::io::AsyncRead::poll_read(Pin::new(&mut self.inner), cx, buf.filled_mut()) - } -} - -impl futures::io::AsyncRead for IoCompat -where T: tokio::io::AsyncRead + Unpin -{ - fn poll_read(mut self: Pin<&mut Self>, cx: &mut task::Context, buf: &mut [u8]) -> Poll> { - tokio::io::AsyncRead::poll_read(Pin::new(&mut self.inner), cx, &mut ReadBuf::new(buf)) - } -} - -impl tokio::io::AsyncWrite for IoCompat -where T: futures::io::AsyncWrite + Unpin -{ - fn poll_write(mut self: Pin<&mut Self>, cx: &mut task::Context, buf: &[u8]) -> Poll> { - futures::io::AsyncWrite::poll_write(Pin::new(&mut self.inner), cx, buf) - } - - fn poll_flush(mut self: Pin<&mut Self>, cx: &mut task::Context) -> Poll> { - futures::io::AsyncWrite::poll_flush(Pin::new(&mut self.inner), cx) - } - - fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut task::Context) -> Poll> { - futures::io::AsyncWrite::poll_close(Pin::new(&mut self.inner), cx) - } -} - -impl futures::io::AsyncWrite for IoCompat -where T: tokio::io::AsyncWrite + Unpin -{ - fn poll_write(mut self: Pin<&mut Self>, cx: &mut task::Context, buf: &[u8]) -> Poll> { - tokio::io::AsyncWrite::poll_write(Pin::new(&mut self.inner), cx, buf) - } - - fn poll_flush(mut self: Pin<&mut Self>, cx: &mut task::Context) -> Poll> { - tokio::io::AsyncWrite::poll_flush(Pin::new(&mut self.inner), cx) - } - - fn poll_close(mut self: Pin<&mut Self>, cx: &mut task::Context) -> Poll> { - tokio::io::AsyncWrite::poll_shutdown(Pin::new(&mut self.inner), cx) - } -} diff --git a/comms/core/src/connection_manager/common.rs b/comms/core/src/connection_manager/common.rs index 2aa6baa68e..3e86c900b1 100644 --- a/comms/core/src/connection_manager/common.rs +++ b/comms/core/src/connection_manager/common.rs @@ -42,7 +42,8 @@ const LOG_TARGET: &str = "comms::connection_manager::common"; /// The maximum size of the peer's user agent string. If the peer sends a longer string it is truncated. const MAX_USER_AGENT_LEN: usize = 100; -pub async fn perform_identity_exchange< +/// Performs the identity exchange protocol on the given socket. +pub(super) async fn perform_identity_exchange< 'p, P: IntoIterator, TSocket: AsyncRead + AsyncWrite + Unpin, @@ -68,7 +69,7 @@ pub async fn perform_identity_exchange< /// /// If the `allow_test_addrs` parameter is true, loopback, local link and other addresses normally not considered valid /// for p2p comms will be accepted. -pub async fn validate_and_add_peer_from_peer_identity( +pub(super) async fn validate_and_add_peer_from_peer_identity( peer_manager: &PeerManager, known_peer: Option, authenticated_public_key: CommsPublicKey, @@ -171,7 +172,7 @@ fn add_valid_identity_signature_to_peer( Ok(()) } -pub async fn find_unbanned_peer( +pub(super) async fn find_unbanned_peer( peer_manager: &PeerManager, authenticated_public_key: &CommsPublicKey, ) -> Result, ConnectionManagerError> { @@ -182,6 +183,8 @@ pub async fn find_unbanned_peer( } } +/// Checks that the given peer addresses are well-formed and valid. If allow_test_addrs is false, all localhost and +/// memory addresses will be rejected. 
pub fn validate_peer_addresses<'a, A: IntoIterator>( addresses: A, allow_test_addrs: bool, diff --git a/comms/core/src/connection_manager/dial_state.rs b/comms/core/src/connection_manager/dial_state.rs index a31b4881c5..c7ab4e339d 100644 --- a/comms/core/src/connection_manager/dial_state.rs +++ b/comms/core/src/connection_manager/dial_state.rs @@ -66,10 +66,12 @@ impl DialState { self } + /// The number of attempts pub fn num_attempts(&self) -> usize { self.attempts } + /// Sends the connection result on the reply channel. If a reply has previously been sent, this is a no-op. pub fn send_reply( &mut self, result: Result, @@ -81,6 +83,7 @@ impl DialState { Ok(()) } + /// Returns a reference to the Peer that is currently being dialed. pub fn peer(&self) -> &Peer { &self.peer } diff --git a/comms/core/src/connection_manager/dialer.rs b/comms/core/src/connection_manager/dialer.rs index f6171e1dfa..2455e5f9b4 100644 --- a/comms/core/src/connection_manager/dialer.rs +++ b/comms/core/src/connection_manager/dialer.rs @@ -40,7 +40,7 @@ use tokio::{ use tokio_stream::StreamExt; use tracing::{self, span, Instrument, Level}; -use super::{error::ConnectionManagerError, peer_connection::PeerConnection, types::ConnectionDirection}; +use super::{direction::ConnectionDirection, error::ConnectionManagerError, peer_connection::PeerConnection}; use crate::{ backoff::Backoff, connection_manager::{ @@ -76,6 +76,7 @@ pub(crate) enum DialerRequest { NotifyNewInboundConnection(PeerConnection), } +/// Responsible for dialing peers on the given transport. pub struct Dialer { config: ConnectionManagerConfig, peer_manager: Arc, diff --git a/comms/core/src/connection_manager/types.rs b/comms/core/src/connection_manager/direction.rs similarity index 100% rename from comms/core/src/connection_manager/types.rs rename to comms/core/src/connection_manager/direction.rs diff --git a/comms/core/src/connection_manager/error.rs b/comms/core/src/connection_manager/error.rs index 316a7991d3..7611f3ed78 100644 --- a/comms/core/src/connection_manager/error.rs +++ b/comms/core/src/connection_manager/error.rs @@ -30,6 +30,7 @@ use crate::{ protocol::{IdentityProtocolError, ProtocolError}, }; +/// Error for ConnectionManager #[derive(Debug, Error, Clone)] pub enum ConnectionManagerError { #[error("Peer manager error: {0}")] @@ -100,6 +101,7 @@ impl From for ConnectionManagerError { } } +/// Error type for PeerConnection #[derive(Debug, Error)] pub enum PeerConnectionError { #[error("Yamux connection error: {0}")] diff --git a/comms/core/src/connection_manager/listener.rs b/comms/core/src/connection_manager/listener.rs index 7c68ccc28b..bf58dddbf8 100644 --- a/comms/core/src/connection_manager/listener.rs +++ b/comms/core/src/connection_manager/listener.rs @@ -44,9 +44,9 @@ use tracing::{span, Instrument, Level}; use super::{ common, + direction::ConnectionDirection, error::ConnectionManagerError, peer_connection::{self, PeerConnection}, - types::ConnectionDirection, ConnectionManagerConfig, ConnectionManagerEvent, }; @@ -70,6 +70,7 @@ use crate::{ const LOG_TARGET: &str = "comms::connection_manager::listener"; +/// Listens on the given transport for peer connections and notifies when a new inbound peer connection is established. 
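`validate_peer_addresses` remains publicly exported (see the `connection_manager/mod.rs` hunk further down), so the `allow_test_addrs` behaviour described above is easy to exercise. A hedged sketch; the `&Multiaddr` iterator item type is inferred from call sites rather than shown in this hunk:

```rust
// Sketch only: whether a given address class is rejected follows the doc
// comment above (loopback is normally not valid for p2p comms).
use tari_comms::{connection_manager::validate_peer_addresses, multiaddr::Multiaddr};

fn main() {
    let loopback: Multiaddr = "/ip4/127.0.0.1/tcp/18141".parse().unwrap();

    // Strict validation is expected to reject loopback addresses...
    assert!(validate_peer_addresses(std::iter::once(&loopback), false).is_err());

    // ...while allow_test_addrs = true accepts them.
    assert!(validate_peer_addresses(std::iter::once(&loopback), true).is_ok());
}
```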
pub struct PeerListener { config: ConnectionManagerConfig, bind_address: Multiaddr, diff --git a/comms/core/src/connection_manager/liveness.rs b/comms/core/src/connection_manager/liveness.rs index 94360379c5..39870dcf60 100644 --- a/comms/core/src/connection_manager/liveness.rs +++ b/comms/core/src/connection_manager/liveness.rs @@ -29,6 +29,7 @@ use tokio_util::codec::{Framed, LinesCodec, LinesCodecError}; /// Max line length accepted by the liveness session. const MAX_LINE_LENGTH: usize = 50; +/// Echo server for liveness checks pub struct LivenessSession { framed: Framed, } diff --git a/comms/core/src/connection_manager/manager.rs b/comms/core/src/connection_manager/manager.rs index a03dcc6eaf..ecadf7109c 100644 --- a/comms/core/src/connection_manager/manager.rs +++ b/comms/core/src/connection_manager/manager.rs @@ -57,6 +57,7 @@ const LOG_TARGET: &str = "comms::connection_manager::manager"; const EVENT_CHANNEL_SIZE: usize = 32; const DIALER_REQUEST_CHANNEL_SIZE: usize = 32; +/// Connection events #[derive(Debug)] pub enum ConnectionManagerEvent { // Peer connection @@ -88,6 +89,7 @@ impl fmt::Display for ConnectionManagerEvent { } } +/// Configuration for ConnectionManager #[derive(Debug, Clone)] pub struct ConnectionManagerConfig { /// The address to listen on for incoming connections. This address must be supported by the transport. @@ -147,16 +149,20 @@ pub struct ListenerInfo { } impl ListenerInfo { + /// The address that was bound on. In the case of TCP, if the OS has decided which port to bind on (0.0.0.0:0), this + /// address contains the actual port that was used. pub fn bind_address(&self) -> &Multiaddr { &self.bind_address } + /// The auxiliary TCP address that was bound on if enabled. pub fn auxiliary_bind_address(&self) -> Option<&Multiaddr> { self.aux_bind_address.as_ref() } } -pub struct ConnectionManager { +/// The actor responsible for connection management. +pub(crate) struct ConnectionManager { request_rx: mpsc::Receiver, internal_event_rx: mpsc::Receiver, dialer_tx: mpsc::Sender, @@ -178,7 +184,7 @@ where TTransport::Output: AsyncRead + AsyncWrite + Send + Sync + Unpin + 'static, TBackoff: Backoff + Send + Sync + 'static, { - pub fn new( + pub(crate) fn new( mut config: ConnectionManagerConfig, transport: TTransport, noise_config: NoiseConfig, diff --git a/comms/core/src/connection_manager/mod.rs b/comms/core/src/connection_manager/mod.rs index fd0f7a3312..98c40dad64 100644 --- a/comms/core/src/connection_manager/mod.rs +++ b/comms/core/src/connection_manager/mod.rs @@ -19,6 +19,13 @@ // SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! # ConnectionManager +//! +//! This component is responsible for orchestrating PeerConnections, specifically: +//! - dialing peers, +//! - listening for peer connections on the configured transport, +//! - performing connection upgrades (noise protocol, identity and multiplexing), +//! 
- and, notifying the connectivity manager of changes in connection state (new connections, disconnects, etc) mod dial_state; mod dialer; @@ -28,14 +35,15 @@ mod metrics; mod common; pub use common::validate_peer_addresses; -mod types; -pub use types::ConnectionDirection; +mod direction; +pub use direction::ConnectionDirection; mod requester; pub use requester::{ConnectionManagerRequest, ConnectionManagerRequester}; mod manager; -pub use manager::{ConnectionManager, ConnectionManagerConfig, ConnectionManagerEvent, ListenerInfo}; +pub(crate) use manager::ConnectionManager; +pub use manager::{ConnectionManagerConfig, ConnectionManagerEvent, ListenerInfo}; mod error; pub use error::{ConnectionManagerError, PeerConnectionError}; diff --git a/comms/core/src/connection_manager/peer_connection.rs b/comms/core/src/connection_manager/peer_connection.rs index b052871aad..b3165e9baa 100644 --- a/comms/core/src/connection_manager/peer_connection.rs +++ b/comms/core/src/connection_manager/peer_connection.rs @@ -40,9 +40,9 @@ use tokio_stream::StreamExt; use tracing::{self, span, Instrument, Level}; use super::{ + direction::ConnectionDirection, error::{ConnectionManagerError, PeerConnectionError}, manager::ConnectionManagerEvent, - types::ConnectionDirection, }; #[cfg(feature = "rpc")] use crate::protocol::rpc::{ @@ -111,6 +111,7 @@ pub fn create( Ok(peer_conn) } +/// Request types for the PeerConnection actor. #[derive(Debug)] pub enum PeerConnectionRequest { /// Open a new substream and negotiate the given protocol @@ -122,6 +123,7 @@ pub enum PeerConnectionRequest { Disconnect(bool, oneshot::Sender>), } +/// ID type for peer connections pub type ConnectionId = usize; /// Request handle for an active peer connection @@ -526,6 +528,7 @@ impl fmt::Display for PeerConnectionActor { } } +/// Contains the substream and the ProtocolId that was successfully negotiated. pub struct NegotiatedSubstream { pub protocol: ProtocolId, pub stream: TSubstream, diff --git a/comms/core/src/connectivity/config.rs b/comms/core/src/connectivity/config.rs index 33a42177d7..8d995ceeed 100644 --- a/comms/core/src/connectivity/config.rs +++ b/comms/core/src/connectivity/config.rs @@ -22,6 +22,7 @@ use std::time::Duration; +/// Connectivity actor configuration #[derive(Debug, Clone, Copy)] pub struct ConnectivityConfig { /// The minimum number of connected nodes before connectivity is transitioned to ONLINE diff --git a/comms/core/src/connectivity/connection_pool.rs b/comms/core/src/connectivity/connection_pool.rs index 03e077a6c4..68ac1f6ddc 100644 --- a/comms/core/src/connectivity/connection_pool.rs +++ b/comms/core/src/connectivity/connection_pool.rs @@ -26,6 +26,7 @@ use nom::lib::std::collections::hash_map::Entry; use crate::{peer_manager::NodeId, PeerConnection}; +/// Status type for connections #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum ConnectionStatus { NotConnected, @@ -42,6 +43,7 @@ impl fmt::Display for ConnectionStatus { } } +/// Connection state. This struct holds the PeerConnection handle if currently or previously connected. #[derive(Debug, Clone)] pub struct PeerConnectionState { node_id: NodeId, diff --git a/comms/core/src/connectivity/connection_stats.rs b/comms/core/src/connectivity/connection_stats.rs index c55fef0911..e2ca6fae79 100644 --- a/comms/core/src/connectivity/connection_stats.rs +++ b/comms/core/src/connectivity/connection_stats.rs @@ -28,6 +28,8 @@ use std::{ use crate::utils::datetime::format_duration; +/// Basic stats for peer connection attempts. 
Allows the connectivity manager to keep track of successful/failed +/// connection attempts to allow it to mark peers as offline if necessary. #[derive(Debug, Clone, Default, PartialEq)] pub struct PeerConnectionStats { /// The last time a connection was successfully made or, None if a successful diff --git a/comms/core/src/connectivity/error.rs b/comms/core/src/connectivity/error.rs index 200b8c9a07..acb554ac79 100644 --- a/comms/core/src/connectivity/error.rs +++ b/comms/core/src/connectivity/error.rs @@ -24,6 +24,7 @@ use thiserror::Error; use crate::{connection_manager::ConnectionManagerError, peer_manager::PeerManagerError, PeerConnectionError}; +/// Errors for the Connectivity actor. #[derive(Debug, Error)] pub enum ConnectivityError { #[error("Cannot send request because ConnectivityActor disconnected")] diff --git a/comms/core/src/connectivity/manager.rs b/comms/core/src/connectivity/manager.rs index 126117354e..acd45a5521 100644 --- a/comms/core/src/connectivity/manager.rs +++ b/comms/core/src/connectivity/manager.rs @@ -29,7 +29,12 @@ use std::{ use log::*; use nom::lib::std::collections::hash_map::Entry; use tari_shutdown::ShutdownSignal; -use tokio::{sync::mpsc, task::JoinHandle, time, time::MissedTickBehavior}; +use tokio::{ + sync::{mpsc, oneshot}, + task::JoinHandle, + time, + time::MissedTickBehavior, +}; use tracing::{span, Instrument, Level}; use super::{ @@ -61,13 +66,8 @@ const LOG_TARGET: &str = "comms::connectivity::manager"; /// # Connectivity Manager /// /// The ConnectivityManager actor is responsible for tracking the state of all peer -/// connections in the system and maintaining a _managed pool_ of peer connections. -/// It provides a simple interface to fetch active peer connections. -/// Selection includes selecting a single peer, random selection and selecting connections -/// closer to a `NodeId`. +/// connections in the system and maintaining a _pool_ of peer connections. /// -/// Additionally, set of managed peers can be provided. ConnectivityManager actor will -/// attempt to ensure that all provided peers have active peer connections. /// It emits [ConnectivityEvent](crate::connectivity::ConnectivityEvent)s that can keep client components /// in the loop with the state of the node's connectivity. pub struct ConnectivityManager { @@ -101,11 +101,16 @@ impl ConnectivityManager { } } +/// Node connectivity status. #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum ConnectivityStatus { + /// Initial connectivity status before the Connectivity actor has initialized. Initializing, + /// Connectivity is online. Online(usize), + /// Connectivity is less than the required minimum, but some connections are still active. Degraded(usize), + /// There are no active connections. 
Offline, } @@ -226,52 +231,9 @@ impl ConnectivityManagerActor { }, DialPeer { node_id, reply_tx } => { let tracing_id = tracing::Span::current().id(); - let span = span!(Level::TRACE, "handle_request"); + let span = span!(Level::TRACE, "handle_dial_peer"); span.follows_from(tracing_id); - async move { - match self.peer_manager.is_peer_banned(&node_id).await { - Ok(true) => { - if let Some(reply) = reply_tx { - let _result = reply.send(Err(ConnectionManagerError::PeerBanned)); - } - return; - }, - Ok(false) => {}, - Err(err) => { - if let Some(reply) = reply_tx { - let _result = reply.send(Err(err.into())); - } - return; - }, - } - match self.pool.get(&node_id) { - Some(state) if state.is_connected() => { - debug!( - target: LOG_TARGET, - "Found existing connection for peer `{}`", - node_id.short_str() - ); - if let Some(reply_tx) = reply_tx { - let _result = reply_tx.send(Ok(state.connection().cloned().expect("Already checked"))); - } - }, - _ => { - debug!( - target: LOG_TARGET, - "No existing connection found for peer `{}`. Dialing...", - node_id.short_str() - ); - if let Err(err) = self.connection_manager.send_dial_peer(node_id, reply_tx).await { - error!( - target: LOG_TARGET, - "Failed to send dial request to connection manager: {:?}", err - ); - } - }, - } - } - .instrument(span) - .await + self.handle_dial_peer(node_id, reply_tx).instrument(span).await; }, SelectConnections(selection, reply) => { let _result = reply.send(self.select_connections(selection).await); @@ -323,6 +285,53 @@ impl ConnectivityManagerActor { } } + async fn handle_dial_peer( + &mut self, + node_id: NodeId, + reply_tx: Option>>, + ) { + match self.peer_manager.is_peer_banned(&node_id).await { + Ok(true) => { + if let Some(reply) = reply_tx { + let _result = reply.send(Err(ConnectionManagerError::PeerBanned)); + } + return; + }, + Ok(false) => {}, + Err(err) => { + if let Some(reply) = reply_tx { + let _result = reply.send(Err(err.into())); + } + return; + }, + } + match self.pool.get(&node_id) { + Some(state) if state.is_connected() => { + debug!( + target: LOG_TARGET, + "Found existing connection for peer `{}`", + node_id.short_str() + ); + if let Some(reply_tx) = reply_tx { + let _result = reply_tx.send(Ok(state.connection().cloned().expect("Already checked"))); + } + }, + _ => { + debug!( + target: LOG_TARGET, + "No existing connection found for peer `{}`. Dialing...", + node_id.short_str() + ); + if let Err(err) = self.connection_manager.send_dial_peer(node_id, reply_tx).await { + error!( + target: LOG_TARGET, + "Failed to send dial request to connection manager: {:?}", err + ); + } + }, + } + } + async fn disconnect_all(&mut self) { let mut node_ids = Vec::with_capacity(self.pool.count_connected()); for mut state in self.pool.filter_drain(|_| true) { @@ -508,57 +517,12 @@ impl ConnectivityManagerActor { debug!(target: LOG_TARGET, "Received event: {}", event); match event { PeerConnected(new_conn) => { - match self.pool.get_connection(new_conn.peer_node_id()).cloned() { - Some(existing_conn) if !existing_conn.is_connected() => { - debug!( - target: LOG_TARGET, - "Tie break: Existing connection (id: {}, peer: {}, direction: {}) was not connected, \ - resolving tie break by using the new connection. 
(New: id: {}, peer: {}, direction: {})", - existing_conn.id(), - existing_conn.peer_node_id(), - existing_conn.direction(), - new_conn.id(), - new_conn.peer_node_id(), - new_conn.direction(), - ); - self.pool.remove(existing_conn.peer_node_id()); - }, - Some(mut existing_conn) => { - if self.tie_break_existing_connection(&existing_conn, new_conn) { - debug!( - target: LOG_TARGET, - "Tie break: Keep new connection (id: {}, peer: {}, direction: {}). Disconnect \ - existing connection (id: {}, peer: {}, direction: {})", - new_conn.id(), - new_conn.peer_node_id(), - new_conn.direction(), - existing_conn.id(), - existing_conn.peer_node_id(), - existing_conn.direction(), - ); - - let _result = existing_conn.disconnect_silent().await; - self.pool.remove(existing_conn.peer_node_id()); - } else { - debug!( - target: LOG_TARGET, - "Tie break: Keeping existing connection (id: {}, peer: {}, direction: {}). \ - Disconnecting new connection (id: {}, peer: {}, direction: {})", - new_conn.id(), - new_conn.peer_node_id(), - new_conn.direction(), - existing_conn.id(), - existing_conn.peer_node_id(), - existing_conn.direction(), - ); - - let _result = new_conn.clone().disconnect_silent().await; - // Ignore this event - state can stay as is - return Ok(()); - } + match self.handle_new_connection_tie_break(new_conn).await { + TieBreak::KeepExisting => { + // Ignore event, we discarded the new connection and keeping the current one + return Ok(()); }, - - _ => {}, + TieBreak::UseNew | TieBreak::None => {}, } }, PeerDisconnected(id, node_id) => { @@ -647,6 +611,62 @@ impl ConnectivityManagerActor { Ok(()) } + async fn handle_new_connection_tie_break(&mut self, new_conn: &PeerConnection) -> TieBreak { + match self.pool.get_connection(new_conn.peer_node_id()).cloned() { + Some(existing_conn) if !existing_conn.is_connected() => { + debug!( + target: LOG_TARGET, + "Tie break: Existing connection (id: {}, peer: {}, direction: {}) was not connected, resolving \ + tie break by using the new connection. (New: id: {}, peer: {}, direction: {})", + existing_conn.id(), + existing_conn.peer_node_id(), + existing_conn.direction(), + new_conn.id(), + new_conn.peer_node_id(), + new_conn.direction(), + ); + self.pool.remove(existing_conn.peer_node_id()); + TieBreak::UseNew + }, + Some(mut existing_conn) => { + if self.tie_break_existing_connection(&existing_conn, new_conn) { + debug!( + target: LOG_TARGET, + "Tie break: Keep new connection (id: {}, peer: {}, direction: {}). Disconnect existing \ + connection (id: {}, peer: {}, direction: {})", + new_conn.id(), + new_conn.peer_node_id(), + new_conn.direction(), + existing_conn.id(), + existing_conn.peer_node_id(), + existing_conn.direction(), + ); + + let _result = existing_conn.disconnect_silent().await; + self.pool.remove(existing_conn.peer_node_id()); + TieBreak::UseNew + } else { + debug!( + target: LOG_TARGET, + "Tie break: Keeping existing connection (id: {}, peer: {}, direction: {}). Disconnecting new \ + connection (id: {}, peer: {}, direction: {})", + new_conn.id(), + new_conn.peer_node_id(), + new_conn.direction(), + existing_conn.id(), + existing_conn.peer_node_id(), + existing_conn.direction(), + ); + + let _result = new_conn.clone().disconnect_silent().await; + TieBreak::KeepExisting + } + }, + + None => TieBreak::None, + } + } + /// Two connections to the same peer have been created. This function deterministically determines which peer /// connection to close. It does this by comparing our NodeId to that of the peer. 
This rule enables both sides to /// agree which connection to disconnect @@ -840,3 +860,9 @@ impl ConnectivityManagerActor { } } } + +enum TieBreak { + None, + UseNew, + KeepExisting, +} diff --git a/comms/core/src/connectivity/mod.rs b/comms/core/src/connectivity/mod.rs index 046fb3ff22..716c02d150 100644 --- a/comms/core/src/connectivity/mod.rs +++ b/comms/core/src/connectivity/mod.rs @@ -20,6 +20,13 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! # Connectivity +//! The ConnectivityManager actor is responsible for tracking the state of all peer +//! connections in the system and maintaining a _pool_ of active peer connections. +//! +//! It emits [ConnectivityEvent](crate::connectivity::ConnectivityEvent)s that can keep client components +//! in the loop with the state of the node's connectivity. + mod connection_stats; mod config; diff --git a/comms/core/src/connectivity/requester.rs b/comms/core/src/connectivity/requester.rs index 2428621eeb..44a550bca4 100644 --- a/comms/core/src/connectivity/requester.rs +++ b/comms/core/src/connectivity/requester.rs @@ -43,9 +43,12 @@ use crate::{connection_manager::ConnectionManagerError, peer_manager::NodeId, Pe const LOG_TARGET: &str = "comms::connectivity::requester"; +/// Connectivity event broadcast receiver. pub type ConnectivityEventRx = broadcast::Receiver; +/// Connectivity event broadcast sender. pub type ConnectivityEventTx = broadcast::Sender; +/// Node connectivity events emitted by the ConnectivityManager. #[derive(Debug, Clone)] pub enum ConnectivityEvent { PeerDisconnected(NodeId), @@ -78,6 +81,7 @@ impl fmt::Display for ConnectivityEvent { } } +/// Request types for the ConnectivityManager actor. #[derive(Debug)] pub enum ConnectivityRequest { WaitStarted(oneshot::Sender<()>), @@ -98,6 +102,7 @@ pub enum ConnectivityRequest { RemovePeerFromAllowList(NodeId), } +/// Handle to make requests and read events from the ConnectivityManager actor. #[derive(Debug, Clone)] pub struct ConnectivityRequester { sender: mpsc::Sender, @@ -105,10 +110,13 @@ pub struct ConnectivityRequester { } impl ConnectivityRequester { - pub fn new(sender: mpsc::Sender, event_tx: ConnectivityEventTx) -> Self { + pub(crate) fn new(sender: mpsc::Sender, event_tx: ConnectivityEventTx) -> Self { Self { sender, event_tx } } + /// Returns a subscription to [ConnectivityEvent]s. + /// + /// [ConnectivityEvent](self::ConnectivityEvent) pub fn get_event_subscription(&self) -> ConnectivityEventRx { self.event_tx.subscribe() } @@ -173,6 +181,7 @@ impl ConnectivityRequester { .try_for_each(|result| result.map_err(|_| ConnectivityError::ActorDisconnected)) } + /// Queries the ConnectivityManager and returns the matching [PeerConnection](crate::PeerConnection)s. pub async fn select_connections( &mut self, selection: ConnectivitySelection, @@ -195,6 +204,7 @@ impl ConnectivityRequester { reply_rx.await.map_err(|_| ConnectivityError::ActorResponseCancelled) } + /// Get the current [ConnectivityStatus](self::ConnectivityStatus). pub async fn get_connectivity_status(&mut self) -> Result { let (reply_tx, reply_rx) = oneshot::channel(); self.sender @@ -204,6 +214,7 @@ impl ConnectivityRequester { reply_rx.await.map_err(|_| ConnectivityError::ActorResponseCancelled) } + /// Get the full connection state that the connectivity actor. 
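Taken together, the requester methods documented in this hunk are the main client surface of the connectivity actor. A sketch of how a component might wire them up; only the method signatures come from the diff, the event-handling policy is invented for illustration:

```rust
// Sketch only: `connectivity` would typically come from CommsNode::connectivity().
use std::time::Duration;
use tari_comms::connectivity::{ConnectivityError, ConnectivityEvent, ConnectivityRequester};

async fn monitor(mut connectivity: ConnectivityRequester) -> Result<(), ConnectivityError> {
    // Block until the connectivity actor is up before querying it.
    connectivity.wait_started().await?;
    println!("status: {:?}", connectivity.get_connectivity_status().await?);

    let mut events = connectivity.get_event_subscription();
    while let Ok(event) = events.recv().await {
        if let ConnectivityEvent::PeerDisconnected(node_id) = event {
            // Purely illustrative policy: ban a flapping peer for an hour.
            connectivity
                .ban_peer_until(node_id, Duration::from_secs(3600), "flapping connection".to_string())
                .await?;
        }
    }
    Ok(())
}
```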
pub async fn get_all_connection_states(&mut self) -> Result, ConnectivityError> { let (reply_tx, reply_rx) = oneshot::channel(); self.sender @@ -213,6 +224,7 @@ impl ConnectivityRequester { reply_rx.await.map_err(|_| ConnectivityError::ActorResponseCancelled) } + /// Get all currently connected [PeerConnection](crate::PeerConnection)s. pub async fn get_active_connections(&mut self) -> Result, ConnectivityError> { let (reply_tx, reply_rx) = oneshot::channel(); self.sender @@ -222,6 +234,7 @@ impl ConnectivityRequester { reply_rx.await.map_err(|_| ConnectivityError::ActorResponseCancelled) } + /// Ban peer for the given Duration. The ban `reason` is persisted in the peer database for reference. pub async fn ban_peer_until( &mut self, node_id: NodeId, @@ -235,11 +248,13 @@ impl ConnectivityRequester { Ok(()) } + /// Ban the peer indefinitely. pub async fn ban_peer(&mut self, node_id: NodeId, reason: String) -> Result<(), ConnectivityError> { self.ban_peer_until(node_id, Duration::from_secs(u64::MAX), reason) .await } + /// Adds a peer to an allow list, preventing it from being banned. pub async fn add_peer_to_allow_list(&mut self, node_id: NodeId) -> Result<(), ConnectivityError> { self.sender .send(ConnectivityRequest::AddPeerToAllowList(node_id)) @@ -248,6 +263,7 @@ impl ConnectivityRequester { Ok(()) } + /// Removes a peer from an allow list that prevents it from being banned. pub async fn remove_peer_from_allow_list(&mut self, node_id: NodeId) -> Result<(), ConnectivityError> { self.sender .send(ConnectivityRequest::RemovePeerFromAllowList(node_id)) @@ -256,6 +272,7 @@ impl ConnectivityRequester { Ok(()) } + /// Returns a Future that resolves when the connectivity actor has started. pub async fn wait_started(&mut self) -> Result<(), ConnectivityError> { let (reply_tx, reply_rx) = oneshot::channel(); self.sender diff --git a/comms/core/src/connectivity/selection.rs b/comms/core/src/connectivity/selection.rs index c95126cbee..4234de23e1 100644 --- a/comms/core/src/connectivity/selection.rs +++ b/comms/core/src/connectivity/selection.rs @@ -27,6 +27,13 @@ use rand::{rngs::OsRng, seq::SliceRandom}; use super::connection_pool::ConnectionPool; use crate::{connectivity::connection_pool::ConnectionStatus, peer_manager::NodeId, PeerConnection}; +/// Selection query for PeerConnections. +/// +/// ```ignore +/// // This query selects the 10 closest connections to the given node id. +/// let query = ConnectivitySelection::closest_to(my_node_id, 10, vec![]); +/// let conns = connectivity.select_connections(query).await?; +/// ``` #[derive(Debug, Clone)] pub struct ConnectivitySelection { selection_mode: SelectionMode, @@ -41,6 +48,10 @@ enum SelectionMode { } impl ConnectivitySelection { + /// Returns a query that will return all connections for peers with `PeerFeatures::COMMUNICATION_NODES` excluding + /// the given [NodeId]s. + /// + /// [NodeId](crate::peer_manager::NodeId) pub fn all_nodes(exclude: Vec) -> Self { Self { selection_mode: SelectionMode::AllNodes, } } + /// Returns a query that will return `n` connections for peers with `PeerFeatures::COMMUNICATION_NODES` excluding + /// the given [NodeId]s. 
+ /// + /// [NodeId](crate::peer_manager::NodeId) pub fn random_nodes(n: usize, exclude: Vec) -> Self { Self { selection_mode: SelectionMode::RandomNodes(n), @@ -55,7 +70,9 @@ } } - /// Select `n` peer connections ordered by closeness to `node_id` + /// Select `n` peer connections ordered by closeness to `node_id`, excluding the given `exclude` [NodeId]s. + /// + /// [NodeId](crate::peer_manager::NodeId) pub fn closest_to(node_id: NodeId, n: usize, exclude: Vec) -> Self { Self { selection_mode: SelectionMode::ClosestTo(Box::new(node_id), n), @@ -78,7 +95,7 @@ } } -pub fn select_connected_nodes<'a>(pool: &'a ConnectionPool, exclude: &[NodeId]) -> Vec<&'a PeerConnection> { +fn select_connected_nodes<'a>(pool: &'a ConnectionPool, exclude: &[NodeId]) -> Vec<&'a PeerConnection> { pool.filter_connection_states(|state| { if state.status() != ConnectionStatus::Connected { return false; @@ -90,7 +107,7 @@ pub fn select_connected_nodes<'a>(pool: &'a ConnectionPool, exclude: &[NodeId]) }) } -pub fn select_closest<'a>(pool: &'a ConnectionPool, node_id: &NodeId, exclude: &[NodeId]) -> Vec<&'a PeerConnection> { +fn select_closest<'a>(pool: &'a ConnectionPool, node_id: &NodeId, exclude: &[NodeId]) -> Vec<&'a PeerConnection> { let mut nodes = select_connected_nodes(pool, exclude); nodes.sort_by(|a, b| { @@ -102,7 +119,7 @@ pub fn select_closest<'a>(pool: &'a ConnectionPool, node_id: &NodeId, exclude: & nodes } -pub fn select_random_nodes<'a>(pool: &'a ConnectionPool, n: usize, exclude: &[NodeId]) -> Vec<&'a PeerConnection> { +fn select_random_nodes<'a>(pool: &'a ConnectionPool, n: usize, exclude: &[NodeId]) -> Vec<&'a PeerConnection> { let nodes = select_connected_nodes(pool, exclude); nodes.choose_multiple(&mut OsRng, n).copied().collect() } diff --git a/comms/core/src/framing.rs b/comms/core/src/framing.rs index 19a89f84e1..544b6fb1dd 100644 --- a/comms/core/src/framing.rs +++ b/comms/core/src/framing.rs @@ -28,6 +28,7 @@ use crate::stream_id::{Id, StreamId}; /// Tari comms canonical framing pub type CanonicalFraming = Framed; +/// Create a length-delimited frame around the given stream reader/writer with the given maximum frame length. pub fn canonical(stream: T, max_frame_len: usize) -> CanonicalFraming where T: AsyncRead + AsyncWrite + Unpin { Framed::new( diff --git a/comms/core/src/lib.rs b/comms/core/src/lib.rs index 8aeb7b8c55..5622d9afb8 100644 --- a/comms/core/src/lib.rs +++ b/comms/core/src/lib.rs @@ -7,7 +7,7 @@ //! //! See [CommsBuilder] for more information on using this library. //! -//! [CommsBuilder]: ./builder/index.html +//! [CommsBuilder]: crate::CommsBuilder #[macro_use] extern crate lazy_static; diff --git a/comms/core/src/macros.rs b/comms/core/src/macros.rs index b81f77b411..06509c65f6 100644 --- a/comms/core/src/macros.rs +++ b/comms/core/src/macros.rs @@ -66,6 +66,7 @@ macro_rules! setter_mut { }; } +/// Internal macro used to recover locks macro_rules! recover_lock { ($e:expr) => { match $e { @@ -78,6 +79,8 @@ macro_rules! recover_lock { }; } +/// Used to acquire a lock from a sync resource. If that resource is poisoned, the lock +/// returned contains the state prior to it being poisoned. macro_rules! acquire_lock { ($e:expr, $m:ident) => { recover_lock!($e.$m()) @@ -87,12 +90,16 @@ }; } +/// Acquires a read lock from a RwLock. If the lock is poisoned, the returned lock contains +/// the state prior to it being poisoned. This provides semantics similar to a DB transaction. macro_rules! 
acquire_read_lock { ($e:expr) => { acquire_lock!($e, read) }; } +/// Acquires an exclusive write lock from a RwLock. If the lock is poisoned, the returned lock contains +/// the state prior to it being poisoned. This provides semantics similar to a DB transaction. macro_rules! acquire_write_lock { ($e:expr) => { acquire_lock!($e, write) @@ -164,6 +171,7 @@ macro_rules! cfg_test { } } +/// Generates an `is_xx` function for an enum variant. macro_rules! is_fn { ( $(#[$outer:meta])* @@ -184,6 +192,7 @@ macro_rules! is_fn { }; } +/// Includes code from the OUT_DIR #[macro_export] macro_rules! outdir_include { ($name: expr) => { diff --git a/comms/core/src/message/envelope.rs b/comms/core/src/message/envelope.rs index ce3902cff0..d64e2b11ad 100644 --- a/comms/core/src/message/envelope.rs +++ b/comms/core/src/message/envelope.rs @@ -43,20 +43,24 @@ macro_rules! wrap_in_envelope_body { } impl EnvelopeBody { + /// New empty envelope body. pub fn new() -> Self { Self { parts: Default::default(), } } + /// Number of parts contained within this envelope. pub fn len(&self) -> usize { self.parts.len() } + /// Total size of all parts contained within this envelope. pub fn total_size(&self) -> usize { self.parts.iter().fold(0, |acc, b| acc + b.len()) } + /// Returns true if the envelope is empty, otherwise false. pub fn is_empty(&self) -> bool { self.parts.is_empty() } @@ -69,10 +73,12 @@ impl EnvelopeBody { .map(|i| self.parts.remove(i)) } + /// Push a new part to the end of the envelope. pub fn push_part(&mut self, part: Vec) { self.parts.push(part) } + /// Returns a Vec of message blobs. pub fn into_inner(self) -> Vec> { self.parts } diff --git a/comms/core/src/message/error.rs b/comms/core/src/message/error.rs index da76740f5f..749dec00c7 100644 --- a/comms/core/src/message/error.rs +++ b/comms/core/src/message/error.rs @@ -23,6 +23,7 @@ use prost::DecodeError; use thiserror::Error; +/// Message error type. #[derive(Error, Debug)] pub enum MessageError { #[error("Failed to decode protobuf message: {0}")] diff --git a/comms/core/src/message/mod.rs b/comms/core/src/message/mod.rs index 0590a36298..d5a2718797 100644 --- a/comms/core/src/message/mod.rs +++ b/comms/core/src/message/mod.rs @@ -23,42 +23,6 @@ //! # Message //! //! The message module contains the message types which wrap domain-level messages. -//! -//! Described further in [RFC-0172](https://rfc.tari.com/RFC-0172_PeerToPeerMessagingProtocol.html#messaging-structure) -//! -//! - [Frame] and [FrameSet] -//! -//! A [FrameSet] consists of multiple [Frame]s. A [Frame] is the raw byte representation of a message. -//! -//! - [MessageEnvelope] -//! -//! Represents data that is about to go on the wire or has just come off. -//! -//! - [MessageEnvelopeHeader] -//! -//! The header that every message contains. -//! -//! - [Message] -//! -//! This message is deserialized from the body [Frame] of the [MessageEnvelope]. -//! It consists of a [MessageHeader] and a domain-level body [Frame]. -//! This part of the [MessageEnvelope] can optionally be encrypted for a particular peer. -//! -//! - [MessageHeader] -//! -//! Information about the contained message. Currently, this only contains the -//! domain-level message type. -//! -//! - [MessageData] -//! -//! [Frame]: ./tyoe.Frame.html -//! [FrameSet]: ./tyoe.FrameSet.html -//! [MessageEnvelope]: ./envelope/struct.MessageEnvelope.html -//! [MessageEnvelopeHeader]: ./envelope/struct.MessageEnvelopeHeader.html -//! [Message]: ./message/struct.Message.html -//! 
[MessageHeader]: ./message/struct.MessageHeader.html -//! [MessageData]: ./message/struct.MessageData.html -//! [DomainConnector]: ../domain_connector/struct.DomainConnector.html #[macro_use] mod envelope; @@ -76,6 +40,7 @@ pub use outbound::{MessagingReplyRx, MessagingReplyTx, OutboundMessage}; mod tag; pub use tag::MessageTag; +/// Provides extensions to the prost Message trait. pub trait MessageExt: prost::Message { /// Encodes a message, allocating the buffer on the heap as necessary fn to_encoded_bytes(&self) -> Vec diff --git a/comms/core/src/multiplexing/mod.rs b/comms/core/src/multiplexing/mod.rs index 1732ea142b..378fdaab9e 100644 --- a/comms/core/src/multiplexing/mod.rs +++ b/comms/core/src/multiplexing/mod.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! Stream multiplexers typically used to allow multiplexed substreams over an ordered reliable byte stream. + #[cfg(feature = "metrics")] mod metrics; diff --git a/comms/core/src/multiplexing/yamux.rs b/comms/core/src/multiplexing/yamux.rs index 6fb066e515..70b3d25d08 100644 --- a/comms/core/src/multiplexing/yamux.rs +++ b/comms/core/src/multiplexing/yamux.rs @@ -207,6 +207,7 @@ impl Drop for IncomingSubstreams { } } +/// A yamux stream wrapper that can be read from and written to. #[derive(Debug)] pub struct Substream { stream: Compat, diff --git a/comms/core/src/net_address/mod.rs b/comms/core/src/net_address/mod.rs index 64407840ac..a71d3e60ad 100644 --- a/comms/core/src/net_address/mod.rs +++ b/comms/core/src/net_address/mod.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! Extension types used by the [PeerManager](crate::PeerManager) to keep track of address reliability. + mod multiaddr_with_stats; pub use multiaddr_with_stats::MutliaddrWithStats; diff --git a/comms/core/src/noise/config.rs b/comms/core/src/noise/config.rs index 6e39bca78d..63e893725e 100644 --- a/comms/core/src/noise/config.rs +++ b/comms/core/src/noise/config.rs @@ -26,7 +26,7 @@ use std::sync::Arc; use log::*; use snow::{self, params::NoiseParams}; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use tokio::io::{AsyncRead, AsyncWrite}; use crate::{ diff --git a/comms/core/src/noise/mod.rs b/comms/core/src/noise/mod.rs index ba18e1234d..88b82b47dd 100644 --- a/comms/core/src/noise/mod.rs +++ b/comms/core/src/noise/mod.rs @@ -20,13 +20,16 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// TODO: Remove #[allow(dead_code)] when used -#[allow(dead_code)] +//! An implementation of the [Noise protocol](https://noiseprotocol.org/) using the [snow crate](https://crates.io/crates/snow) +//! using the Ristretto curve from dalek via [tari_crypto](https://github.com/tari-project/tari-crypto). 
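`MessageExt`, documented a few hunks above, gives any prost message a `to_encoded_bytes()` helper (assuming the blanket impl for `prost::Message` types that existing call sites rely on). A sketch with a hypothetical `Greeting` message, which is not a type from this patch:

```rust
// Sketch: Greeting is an invented prost message used only for illustration.
use prost::Message;
use tari_comms::message::MessageExt;

#[derive(Clone, PartialEq, Message)]
struct Greeting {
    #[prost(string, tag = "1")]
    text: String,
}

fn main() {
    let msg = Greeting { text: "hello".to_string() };
    // to_encoded_bytes() allocates and fills a Vec<u8> in one step.
    let bytes = msg.to_encoded_bytes();
    let decoded = Greeting::decode(bytes.as_slice()).unwrap();
    assert_eq!(decoded.text, "hello");
}
```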
+ mod config; +pub use config::NoiseConfig; + mod crypto_resolver; -mod error; -mod socket; -pub use config::NoiseConfig; +mod error; pub use error::NoiseError; + +mod socket; pub use socket::NoiseSocket; diff --git a/comms/core/src/noise/socket.rs b/comms/core/src/noise/socket.rs index 232989eda7..62155f5c2d 100644 --- a/comms/core/src/noise/socket.rs +++ b/comms/core/src/noise/socket.rs @@ -37,7 +37,7 @@ use std::{ use futures::ready; use log::*; use snow::{error::StateProblem, HandshakeState, TransportState}; -use tari_crypto::tari_utilities::ByteArray; +use tari_utilities::ByteArray; use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, ReadBuf}; use crate::types::CommsPublicKey; @@ -364,6 +364,7 @@ where TSocket: AsyncRead + Unpin impl NoiseSocket where TSocket: AsyncWrite + Unpin { + #[allow(clippy::too_many_lines)] fn poll_write_or_flush(&mut self, context: &mut Context, buf: Option<&[u8]>) -> Poll>> { loop { trace!( diff --git a/comms/core/src/peer_manager/connection_stats.rs b/comms/core/src/peer_manager/connection_stats.rs index 55f76e46ee..4d2c4ce54a 100644 --- a/comms/core/src/peer_manager/connection_stats.rs +++ b/comms/core/src/peer_manager/connection_stats.rs @@ -30,6 +30,7 @@ use std::{ use chrono::{NaiveDateTime, Utc}; use serde::{Deserialize, Serialize}; +/// Basic connection stats for a [Peer](super::Peer). #[derive(Debug, Clone, Default, Deserialize, Serialize, PartialEq, Eq)] pub struct PeerConnectionStats { /// The last time a connection was successfully made or, None if a successful @@ -40,6 +41,7 @@ pub struct PeerConnectionStats { } impl PeerConnectionStats { + /// New connection stats pub fn new() -> Self { Default::default() } @@ -109,7 +111,7 @@ impl fmt::Display for PeerConnectionStats { } } -/// Peer connection statistics +/// Details on the last connection attempt #[derive(Debug, Clone, Deserialize, Serialize, PartialOrd, PartialEq, Eq)] pub enum LastConnectionAttempt { /// This node has never attempted to connect to this peer diff --git a/comms/core/src/peer_manager/error.rs b/comms/core/src/peer_manager/error.rs index f307c6df19..9f588acc09 100644 --- a/comms/core/src/peer_manager/error.rs +++ b/comms/core/src/peer_manager/error.rs @@ -25,6 +25,7 @@ use std::sync::PoisonError; use tari_storage::KeyValStoreError; use thiserror::Error; +/// Error type for [PeerManager](super::PeerManager). #[derive(Debug, Error, Clone)] pub enum PeerManagerError { #[error("The requested peer does not exist")] diff --git a/comms/core/src/peer_manager/identity_signature.rs b/comms/core/src/peer_manager/identity_signature.rs index 1f58d69566..085731c80e 100644 --- a/comms/core/src/peer_manager/identity_signature.rs +++ b/comms/core/src/peer_manager/identity_signature.rs @@ -27,7 +27,8 @@ use digest::Digest; use prost::Message; use rand::rngs::OsRng; use serde::{Deserialize, Serialize}; -use tari_crypto::{keys::SecretKey, tari_utilities::ByteArray}; +use tari_crypto::keys::SecretKey; +use tari_utilities::ByteArray; use crate::{ message::MessageExt, @@ -46,6 +47,7 @@ pub struct IdentitySignature { } impl IdentitySignature { + /// The latest version of the Identity Signature. 
pub const LATEST_VERSION: u8 = 0; pub fn new(version: u8, signature: Signature, updated_at: DateTime) -> Self { diff --git a/comms/core/src/peer_manager/manager.rs b/comms/core/src/peer_manager/manager.rs index 27c716bb5e..dcc2b7add2 100644 --- a/comms/core/src/peer_manager/manager.rs +++ b/comms/core/src/peer_manager/manager.rs @@ -97,7 +97,7 @@ impl PeerManager { /// Performs the given [PeerQuery]. /// - /// [PeerQuery]: crate::peer_manager::peer_query::PeerQuery + /// [PeerQuery]: crate::peer_manager::PeerQuery pub async fn perform_query(&self, peer_query: PeerQuery<'_>) -> Result, PeerManagerError> { self.peer_storage.read().await.perform_query(peer_query) } diff --git a/comms/core/src/peer_manager/migrations/v5.rs b/comms/core/src/peer_manager/migrations/v5.rs index e32ef3c01d..782b9f3a08 100644 --- a/comms/core/src/peer_manager/migrations/v5.rs +++ b/comms/core/src/peer_manager/migrations/v5.rs @@ -25,11 +25,11 @@ use std::collections::HashMap; use chrono::NaiveDateTime; use log::*; use serde::{Deserialize, Serialize}; -use tari_crypto::tari_utilities::hex::serialize_to_hex; use tari_storage::{ lmdb_store::{LMDBDatabase, LMDBError}, IterationResult, }; +use tari_utilities::hex::serialize_to_hex; use crate::{ net_address::MultiaddressesWithStats, diff --git a/comms/core/src/peer_manager/migrations/v6.rs b/comms/core/src/peer_manager/migrations/v6.rs index cf874fdbce..0537a57662 100644 --- a/comms/core/src/peer_manager/migrations/v6.rs +++ b/comms/core/src/peer_manager/migrations/v6.rs @@ -25,11 +25,11 @@ use std::collections::HashMap; use chrono::NaiveDateTime; use log::*; use serde::{Deserialize, Serialize}; -use tari_crypto::tari_utilities::hex::serialize_to_hex; use tari_storage::{ lmdb_store::{LMDBDatabase, LMDBError}, IterationResult, }; +use tari_utilities::hex::serialize_to_hex; use crate::{ net_address::MultiaddressesWithStats, diff --git a/comms/core/src/peer_manager/node_distance.rs b/comms/core/src/peer_manager/node_distance.rs index daab117436..f2c98a47ba 100644 --- a/comms/core/src/peer_manager/node_distance.rs +++ b/comms/core/src/peer_manager/node_distance.rs @@ -28,8 +28,10 @@ use std::{ use super::{node_id::NodeIdError, NodeId}; +/// The distance metric used by the [PeerManager](super::PeerManager). pub type NodeDistance = XorDistance; +/// The XOR distance metric. #[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Default)] pub struct XorDistance(u128); @@ -69,10 +71,12 @@ impl XorDistance { .saturating_sub(1) } + /// Byte representation of the distance value. pub fn to_bytes(&self) -> [u8; Self::byte_size()] { self.0.to_be_bytes() } + /// Distance represented as a 128-bit unsigned integer. 
pub fn as_u128(&self) -> u128 { self.0 } diff --git a/comms/core/src/peer_manager/node_id.rs b/comms/core/src/peer_manager/node_id.rs index 6ccb838c6f..55d1217ec2 100644 --- a/comms/core/src/peer_manager/node_id.rs +++ b/comms/core/src/peer_manager/node_id.rs @@ -35,7 +35,7 @@ use blake2::{ VarBlake2b, }; use serde::{de, Deserialize, Deserializer, Serialize}; -use tari_crypto::tari_utilities::{ +use tari_utilities::{ hex::{to_hex, Hex}, ByteArray, ByteArrayError, @@ -46,6 +46,7 @@ use crate::{peer_manager::node_distance::NodeDistance, types::CommsPublicKey}; pub(super) type NodeIdArray = [u8; NodeId::byte_size()]; +/// Error type for NodeId #[derive(Debug, Error, Clone)] pub enum NodeIdError { #[error("Incorrect byte count (expected {} bytes)", NodeId::byte_size())] diff --git a/comms/core/src/peer_manager/or_not_found.rs b/comms/core/src/peer_manager/or_not_found.rs index 8323206b3f..b56dcea565 100644 --- a/comms/core/src/peer_manager/or_not_found.rs +++ b/comms/core/src/peer_manager/or_not_found.rs @@ -22,6 +22,7 @@ use crate::peer_manager::PeerManagerError; +/// Extension trait for Result, PeerManagerError>. pub trait OrNotFound { type Value; type Error; diff --git a/comms/core/src/peer_manager/peer.rs b/comms/core/src/peer_manager/peer.rs index b09f314f5b..6dabf87e17 100644 --- a/comms/core/src/peer_manager/peer.rs +++ b/comms/core/src/peer_manager/peer.rs @@ -32,7 +32,7 @@ use bitflags::bitflags; use chrono::{NaiveDateTime, Utc}; use multiaddr::Multiaddr; use serde::{Deserialize, Serialize}; -use tari_crypto::tari_utilities::hex::serialize_to_hex; +use tari_utilities::hex::serialize_to_hex; use super::{ connection_stats::PeerConnectionStats, @@ -45,22 +45,17 @@ use crate::{ peer_manager::identity_signature::IdentitySignature, protocol::ProtocolId, types::CommsPublicKey, - utils::datetime::safe_future_datetime_from_duration, + utils::datetime::{format_local_datetime, is_max_datetime, safe_future_datetime_from_duration}, }; bitflags! { + /// Miscellaneous Peer flags #[derive(Default, Deserialize, Serialize)] pub struct PeerFlags: u8 { const NONE = 0x00; } } -#[derive(Debug, Clone, PartialEq, Eq)] -pub struct PeerIdentity { - pub node_id: NodeId, - pub public_key: CommsPublicKey, -} - /// A Peer represents a communication peer that is identified by a Public Key and NodeId. The Peer struct maintains a /// collection of the NetAddressesWithStats that this Peer can be reached by. The struct also maintains a set of flags /// describing the status of the Peer. @@ -336,11 +331,15 @@ impl Display for Peer { let status_str = { let mut s = Vec::new(); if let Some(offline_at) = self.offline_at.as_ref() { - s.push(format!("Offline since: {}", offline_at)); + s.push(format!("Offline since: {}", format_local_datetime(offline_at))); } if let Some(dt) = self.banned_until() { - s.push(format!("Banned until: {}", dt)); + if is_max_datetime(dt) { + s.push("Banned permanently".to_string()); + } else { + s.push(format!("Banned until: {}", format_local_datetime(dt))); + } s.push(format!("Reason: {}", self.banned_reason)) } s.join(". ") diff --git a/comms/core/src/peer_manager/peer_features.rs b/comms/core/src/peer_manager/peer_features.rs index 285a9c9916..e41d30177c 100644 --- a/comms/core/src/peer_manager/peer_features.rs +++ b/comms/core/src/peer_manager/peer_features.rs @@ -26,28 +26,37 @@ use bitflags::bitflags; use serde::{Deserialize, Serialize}; bitflags! { + /// Peer feature flags. These advertise the capabilities of peer nodes.
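+    ///
+    /// An illustrative sketch (not compiled here; `contains` is generated by the `bitflags` macro and
+    /// `is_client` is defined in the impl below):
+    /// ```ignore
+    /// let features = PeerFeatures::COMMUNICATION_NODE;
+    /// assert!(features.contains(PeerFeatures::MESSAGE_PROPAGATION));
+    /// assert!(!features.is_client());
+    /// ```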
#[derive(Serialize, Deserialize)] pub struct PeerFeatures: u64 { + /// No capabilities const NONE = 0b0000_0000; + /// Node is able to propagate messages const MESSAGE_PROPAGATION = 0b0000_0001; + /// Node offers store and forward functionality const DHT_STORE_FORWARD = 0b0000_0010; + /// Node is a communication node (typically a base layer node) const COMMUNICATION_NODE = Self::MESSAGE_PROPAGATION.bits | Self::DHT_STORE_FORWARD.bits; + /// Node is a network client const COMMUNICATION_CLIENT = Self::NONE.bits; } } impl PeerFeatures { + /// Returns true if these flags represent a COMMUNICATION_CLIENT. #[inline] pub fn is_client(self) -> bool { self == PeerFeatures::COMMUNICATION_CLIENT } + /// Returns true if these flags represent a COMMUNICATION_NODE. #[inline] pub fn is_node(self) -> bool { self == PeerFeatures::COMMUNICATION_NODE } + /// Returns a human-readable string that represents these flags. pub fn as_role_str(self) -> &'static str { match self { PeerFeatures::COMMUNICATION_NODE => "node", diff --git a/comms/core/src/peer_manager/peer_storage.rs b/comms/core/src/peer_manager/peer_storage.rs index badb2fdda7..4101df8b22 100644 --- a/comms/core/src/peer_manager/peer_storage.rs +++ b/comms/core/src/peer_manager/peer_storage.rs @@ -26,8 +26,8 @@ use chrono::Utc; use log::*; use multiaddr::Multiaddr; use rand::{rngs::OsRng, seq::SliceRandom}; -use tari_crypto::tari_utilities::ByteArray; use tari_storage::{IterationResult, KeyValueStore}; +use tari_utilities::ByteArray; use crate::{ peer_manager::{ @@ -61,11 +61,13 @@ where DS: KeyValueStore { /// Constructs a new PeerStorage, with indexes populated from the given datastore pub fn new_indexed(database: DS) -> Result, PeerManagerError> { - // Restore peers and hashmap links from database + // mutable_key_type: CommsPublicKey uses interior mutability to lazily compress the key, but is otherwise + // immutable so the Hashmap order can never change. #[allow(clippy::mutable_key_type)] let mut public_key_index = HashMap::new(); let mut node_id_index = HashMap::new(); let mut total_entries = 0; + // Restore peers and hashmap links from database database .for_each_ok(|(peer_key, peer)| { total_entries += 1; diff --git a/comms/core/src/peer_manager/wrapper.rs b/comms/core/src/peer_manager/wrapper.rs index de6d12dfaf..72e044583b 100644 --- a/comms/core/src/peer_manager/wrapper.rs +++ b/comms/core/src/peer_manager/wrapper.rs @@ -27,7 +27,7 @@ use crate::peer_manager::{migrations::MIGRATION_VERSION_KEY, Peer, PeerId}; // TODO: Hack to get around current peer database design. Once PeerManager uses a PeerDatabase abstraction and the LMDB // implementation has access to multiple databases we can remove this wrapper. -pub struct KeyValueWrapper { +pub(super) struct KeyValueWrapper { inner: T, } diff --git a/comms/core/src/pipeline/builder.rs b/comms/core/src/pipeline/builder.rs index c4963a5c71..2aa88da405 100644 --- a/comms/core/src/pipeline/builder.rs +++ b/comms/core/src/pipeline/builder.rs @@ -34,6 +34,7 @@ const DEFAULT_OUTBOUND_BUFFER_SIZE: usize = 50; type OutboundMessageSinkService = SinkService>; +/// Message pipeline builder #[derive(Default)] pub struct Builder { max_concurrent_inbound_tasks: usize, @@ -129,6 +130,7 @@ where }) } + /// Try build the Pipeline pub fn try_finish(mut self) -> Result, PipelineBuilderError> { let inbound = self.inbound.take().ok_or(PipelineBuilderError::InboundNotProvided)?; let outbound = self.build_outbound()?; @@ -141,11 +143,16 @@ where }) } + /// Builds the pipeline. 
+ /// + /// ## Panics + /// This panics if the pipeline has not been configured correctly. pub fn build(self) -> Config { self.try_finish().unwrap() } } +/// Configuration for the outbound pipeline. pub struct OutboundPipelineConfig { /// Messages read from this stream are passed to the pipeline pub in_receiver: mpsc::Receiver, @@ -155,6 +162,7 @@ pub struct OutboundPipelineConfig { pub pipeline: TPipeline, } +/// Configuration for the pipeline. pub struct Config { pub max_concurrent_inbound_tasks: usize, pub max_concurrent_outbound_tasks: Option, @@ -162,6 +170,7 @@ pub struct Config { pub outbound: OutboundPipelineConfig, } +/// Error type for the pipeline. #[derive(Debug, Error)] pub enum PipelineBuilderError { #[error("Inbound pipeline was not provided")] diff --git a/comms/core/src/pipeline/inbound.rs b/comms/core/src/pipeline/inbound.rs index 35d910c8a1..f77d5f66bb 100644 --- a/comms/core/src/pipeline/inbound.rs +++ b/comms/core/src/pipeline/inbound.rs @@ -50,6 +50,7 @@ where TSvc::Error: Display + Send, TSvc::Future: Send, { + /// New inbound pipeline. pub fn new( executor: BoundedExecutor, stream: mpsc::Receiver, @@ -65,6 +66,8 @@ where } } + /// Run the inbound pipeline. This returns a future that resolves once the stream has ended. Typically, you would + /// spawn this in a new task. pub async fn run(mut self) { let mut current_id = 0; while let Some(item) = self.stream.recv().await { diff --git a/comms/core/src/pipeline/outbound.rs b/comms/core/src/pipeline/outbound.rs index 37fe074ec2..6f2dc115b3 100644 --- a/comms/core/src/pipeline/outbound.rs +++ b/comms/core/src/pipeline/outbound.rs @@ -36,6 +36,8 @@ use crate::{ const LOG_TARGET: &str = "comms::pipeline::outbound"; +/// Calls a service in a new task whenever a message is received by the configured channel and forwards the resulting +/// message as a [MessageRequest](crate::protocol::messaging::MessageRequest). pub struct Outbound { /// Executor used to spawn a pipeline for each received item on the stream executor: OptionallyBoundedExecutor, @@ -52,6 +54,7 @@ where TPipeline::Error: Display + Send, TPipeline::Future: Send, { + /// New outbound pipeline. pub fn new( executor: OptionallyBoundedExecutor, config: OutboundPipelineConfig, @@ -64,6 +67,7 @@ where } } + /// Run the outbound pipeline. pub async fn run(mut self) { let mut current_id = 0; loop { diff --git a/comms/core/src/pipeline/sink.rs b/comms/core/src/pipeline/sink.rs index 24608f5ceb..df7fe3cdb5 100644 --- a/comms/core/src/pipeline/sink.rs +++ b/comms/core/src/pipeline/sink.rs @@ -32,6 +32,7 @@ use super::PipelineError; pub struct SinkService(TSink); impl SinkService { + /// Creates a new service that forwards to the given sink. pub fn new(sink: TSink) -> Self { SinkService(sink) } diff --git a/comms/core/src/protocol/error.rs b/comms/core/src/protocol/error.rs index 9d27cf8583..4273714b64 100644 --- a/comms/core/src/protocol/error.rs +++ b/comms/core/src/protocol/error.rs @@ -24,6 +24,7 @@ use std::io; use thiserror::Error; +/// Error type for protocol module. #[derive(Debug, Error)] pub enum ProtocolError { #[error("IO error: {0}")] diff --git a/comms/core/src/protocol/extensions.rs b/comms/core/src/protocol/extensions.rs index 6f1c8efb28..e3cf4ad8b3 100644 --- a/comms/core/src/protocol/extensions.rs +++ b/comms/core/src/protocol/extensions.rs @@ -31,8 +31,10 @@ use crate::{ Substream, }; +/// Error type for ProtocolExtension pub type ProtocolExtensionError = anyhow::Error; +/// Implement this trait to install custom protocols.
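+///
+/// An illustrative sketch of an extension (the `MyExtension` type is hypothetical and not part of this crate):
+/// ```ignore
+/// struct MyExtension;
+///
+/// impl ProtocolExtension for MyExtension {
+///     fn install(self: Box<Self>, context: &mut ProtocolExtensionContext) -> Result<(), ProtocolExtensionError> {
+///         // Register protocol notifiers and/or spawn background tasks using `context` here.
+///         Ok(())
+///     }
+/// }
+/// ```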
pub trait ProtocolExtension: Send { // TODO: The Box is easier to do for now at the cost of ProtocolExtension being less generic. fn install(self: Box, context: &mut ProtocolExtensionContext) -> Result<(), ProtocolExtensionError>; @@ -46,24 +48,29 @@ where F: FnOnce(&mut ProtocolExtensionContext) -> Result<(), ProtocolExtensionEr } } +/// Collection of implementations of ProtocolExtension #[derive(Default)] pub struct ProtocolExtensions { inner: Vec>, } impl ProtocolExtensions { + /// New empty ProtocolExtensions pub fn new() -> Self { Self { inner: Vec::new() } } + /// Returns the number of extensions contained in this instance pub fn len(&self) -> usize { self.inner.len() } + /// Returns true if this contains at least one extension, otherwise false pub fn is_empty(&self) -> bool { self.inner.is_empty() } + /// Adds an extension pub fn add(&mut self, ext: T) -> &mut Self { self.inner.push(Box::new(ext)); self @@ -100,6 +107,7 @@ impl IntoIterator for ProtocolExtensions { } } +/// Context that is passed to `ProtocolExtension::install`. pub struct ProtocolExtensionContext { connectivity: ConnectivityRequester, peer_manager: Arc, @@ -123,6 +131,7 @@ impl ProtocolExtensionContext { } } + /// Adds a protocol and notifier pub fn add_protocol>( &mut self, protocols: I, @@ -141,14 +150,17 @@ impl ProtocolExtensionContext { self } + /// See [ConnectivityRequester](crate::connectivity::ConnectivityRequester). pub fn connectivity(&self) -> ConnectivityRequester { self.connectivity.clone() } + /// See [PeerManager](crate::peer_manager::PeerManager). pub fn peer_manager(&self) -> Arc { self.peer_manager.clone() } + /// Returns the shutdown signal that will trigger on node shutdown. pub fn shutdown_signal(&self) -> ShutdownSignal { self.shutdown_signal.clone() } diff --git a/comms/core/src/protocol/identity.rs b/comms/core/src/protocol/identity.rs index 7cb1797ec9..60103490b3 100644 --- a/comms/core/src/protocol/identity.rs +++ b/comms/core/src/protocol/identity.rs @@ -41,6 +41,16 @@ const LOG_TARGET: &str = "comms::protocol::identity"; const MAX_IDENTITY_PROTOCOL_MSG_SIZE: u16 = 1024; +/// Perform the identity exchange protocol. +/// +/// This occurs on each new connection. Identity data is sent immediately by both the initiator and responder, therefore +/// this exchange completes in half a round-trip time (0.5 RTT). +/// +/// ```text +/// [initiator] (simultaneous) [responder] +/// | ---------[identity]--------> | +/// | <---------[identity]-------- | +/// ``` pub async fn identity_exchange<'p, TSocket, P>( node_identity: &NodeIdentity, our_supported_protocols: P, @@ -133,6 +143,7 @@ async fn write_protocol_frame( Ok(()) } +/// Error type for the identity protocol #[derive(Debug, Error, Clone)] pub enum IdentityProtocolError { #[error("IoError: {0}")] diff --git a/comms/core/src/protocol/messaging/error.rs b/comms/core/src/protocol/messaging/error.rs index e7fd9f5fb5..82635c349e 100644 --- a/comms/core/src/protocol/messaging/error.rs +++ b/comms/core/src/protocol/messaging/error.rs @@ -33,6 +33,7 @@ use crate::{ protocol::ProtocolError, }; +/// Error type for inbound messages. #[derive(Debug, Error)] pub enum InboundMessagingError { #[error("PeerManagerError: {0}")] @@ -41,6 +42,7 @@ pub enum InboundMessagingError { MessageDecodeError(#[from] prost::DecodeError), } +/// Error type for the messaging protocol.
#[derive(Debug, Error)] pub enum MessagingProtocolError { #[error("Failed to send message")] diff --git a/comms/core/src/protocol/messaging/extension.rs b/comms/core/src/protocol/messaging/extension.rs index 6ced04b85e..eabbc99800 100644 --- a/comms/core/src/protocol/messaging/extension.rs +++ b/comms/core/src/protocol/messaging/extension.rs @@ -51,6 +51,7 @@ pub const MESSAGING_PROTOCOL_EVENTS_BUFFER_SIZE: usize = 30; /// buffering may be required if the node needs to send many messages out at the same time. pub const MESSAGING_REQUEST_BUFFER_SIZE: usize = 50; +/// Installs the messaging protocol pub struct MessagingProtocolExtension { event_tx: MessagingEventSender, pipeline: pipeline::Config, diff --git a/comms/core/src/protocol/messaging/inbound.rs b/comms/core/src/protocol/messaging/inbound.rs index 1d0479d5cb..f939a43e8d 100644 --- a/comms/core/src/protocol/messaging/inbound.rs +++ b/comms/core/src/protocol/messaging/inbound.rs @@ -34,6 +34,7 @@ use crate::{message::InboundMessage, peer_manager::NodeId, rate_limit::RateLimit const LOG_TARGET: &str = "comms::protocol::messaging::inbound"; +/// Inbound messaging actor. This is lazily spawned per peer when a peer requests a messaging session. pub struct InboundMessaging { peer: NodeId, inbound_message_tx: mpsc::Sender, diff --git a/comms/core/src/protocol/messaging/mod.rs b/comms/core/src/protocol/messaging/mod.rs index bdbf4eb0db..a55ec3628c 100644 --- a/comms/core/src/protocol/messaging/mod.rs +++ b/comms/core/src/protocol/messaging/mod.rs @@ -20,6 +20,14 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! # Messaging Protocol +//! +//! A comms protocol extension that provides fire-and-forget messaging between peers. +//! +//! This protocol sends BLOBs to the given peer and imposes no opinions on the actual message. +//! This protocol will attempt to dial the peer if an active peer connection is not already present, +//! if this does not succeed, the message is discarded. + mod extension; pub use extension::MessagingProtocolExtension; diff --git a/comms/core/src/protocol/messaging/outbound.rs b/comms/core/src/protocol/messaging/outbound.rs index 9865b1fd05..0a869f7b16 100644 --- a/comms/core/src/protocol/messaging/outbound.rs +++ b/comms/core/src/protocol/messaging/outbound.rs @@ -43,6 +43,7 @@ const LOG_TARGET: &str = "comms::protocol::messaging::outbound"; /// and because the connection manager already retries dialing a number of times for each requested dial. const MAX_SEND_RETRIES: usize = 1; +/// Actor for outbound messaging for a peer. This is spawned lazily when an outbound message must be sent. pub struct OutboundMessaging { connectivity: ConnectivityRequester, messages_rx: mpsc::UnboundedReceiver, diff --git a/comms/core/src/protocol/messaging/protocol.rs b/comms/core/src/protocol/messaging/protocol.rs index aae62f6f64..3d02b055ff 100644 --- a/comms/core/src/protocol/messaging/protocol.rs +++ b/comms/core/src/protocol/messaging/protocol.rs @@ -86,6 +86,7 @@ pub enum SendFailReason { MaxRetriesReached(usize), } +/// Events emitted by the messaging protocol. #[derive(Debug)] pub enum MessagingEvent { MessageReceived(NodeId, MessageTag), @@ -104,6 +105,7 @@ impl fmt::Display for MessagingEvent { } } +/// Actor responsible for lazily spawning inbound (protocol notifications) and outbound (mpsc channel) messaging actors. 
pub struct MessagingProtocol { connectivity: ConnectivityRequester, proto_notification: mpsc::Receiver>, @@ -120,7 +122,8 @@ pub struct MessagingProtocol { } impl MessagingProtocol { - pub fn new( + /// Create a new messaging protocol actor. + pub(super) fn new( connectivity: ConnectivityRequester, proto_notification: mpsc::Receiver>, request_rx: mpsc::Receiver, @@ -148,10 +151,12 @@ impl MessagingProtocol { } } + /// Returns a signal that resolves when this actor exits. pub fn complete_signal(&self) -> ShutdownSignal { self.complete_trigger.to_signal() } + /// Runs the messaging protocol actor. pub async fn run(mut self) { let mut shutdown_signal = self.shutdown_signal.clone(); @@ -194,7 +199,7 @@ impl MessagingProtocol { } #[inline] - pub fn framed(socket: TSubstream) -> Framed + pub(super) fn framed(socket: TSubstream) -> Framed where TSubstream: AsyncRead + AsyncWrite + Unpin { framing::canonical(socket, MAX_FRAME_LENGTH) } diff --git a/comms/core/src/protocol/negotiation.rs b/comms/core/src/protocol/negotiation.rs index 77e0d2d188..213f93ca1f 100644 --- a/comms/core/src/protocol/negotiation.rs +++ b/comms/core/src/protocol/negotiation.rs @@ -20,6 +20,28 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! # Protocol negotiation protocol. +//! +//! ## Frame format +//! +//! | len (1 byte) | flags (1 byte) | protocol id (variable, max=255) | +//! +//! The initiator sends the desired protocol frame. Any party MAY close the negotiation at any time +//! at which point both parties consider the negotiation as failed and terminate. +//! +//! If the OPTIMISTIC flag is set: +//! - the responder MUST NOT send a response, +//! - the responder MAY reject an unsupported protocol by closing the stream, +//! - if the protocol is supported, the responder SHOULD immediately begin the requested protocol, +//! - the initiator SHOULD immediately begin the requested protocol. +//! +//! If the OPTIMISTIC flag is not set: +//! - If the protocol is unsupported, the responder SHOULD send a NOT_SUPPORTED message response to the initiator, +//! - The responder SHOULD await further messages from the initiator, +//! - If the protocol is supported, the responder SHOULD respond with no flags and an acceptable protocol ID and +//! immediately begin the requested protocol, +//! - The initiator or responder SHOULD send a TERMINATE message if it does not wish to negotiate further. + use std::convert::TryInto; use bitflags::bitflags; @@ -31,9 +53,10 @@ use super::{ProtocolError, ProtocolId}; const LOG_TARGET: &str = "comms::connection_manager::protocol"; -const BUF_CAPACITY: usize = std::u8::MAX as usize; +const BUF_CAPACITY: usize = u8::MAX as usize; const MAX_ROUNDS_ALLOWED: u8 = 5; +/// Encapsulates a protocol negotiation. pub struct ProtocolNegotiation<'a, TSocket> { buf: BytesMut, socket: &'a mut TSocket, diff --git a/comms/core/src/protocol/protocols.rs b/comms/core/src/protocol/protocols.rs index 1d931e6be7..e6271a4d31 100644 --- a/comms/core/src/protocol/protocols.rs +++ b/comms/core/src/protocol/protocols.rs @@ -30,14 +30,18 @@ use crate::{ Substream, }; +/// Protocol notification sender pub type ProtocolNotificationTx = mpsc::Sender>; +/// Protocol notification receiver pub type ProtocolNotificationRx = mpsc::Receiver>; +/// Event emitted when a new inbound substream is requested by a remote node. 
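+///
+/// An illustrative sketch of handling a notification (receiver setup elided):
+/// ```ignore
+/// match notification.event {
+///     ProtocolEvent::NewInboundSubstream(node_id, substream) => {
+///         // Protocol negotiation has already succeeded; drive the protocol over `substream` for `node_id`.
+///     },
+/// }
+/// ```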
#[derive(Debug, Clone)] pub enum ProtocolEvent { NewInboundSubstream(NodeId, TSubstream), } +/// Notification of a new protocol #[derive(Debug, Clone)] pub struct ProtocolNotification { pub event: ProtocolEvent, @@ -50,6 +54,7 @@ impl ProtocolNotification { } } +/// Keeps a map of supported protocols and the sender that should be notified. pub struct Protocols { protocols: HashMap>, } @@ -71,14 +76,17 @@ impl Default for Protocols { } impl Protocols { + /// New empty protocol map pub fn new() -> Self { Default::default() } + /// New empty protocol map pub fn empty() -> Self { Default::default() } + /// Add a new protocol ID and notifier pub fn add>( &mut self, protocols: I, @@ -89,15 +97,18 @@ impl Protocols { self } + /// Extend this instance with all the protocols from another instance pub fn extend(&mut self, protocols: Self) -> &mut Self { self.protocols.extend(protocols.protocols); self } + /// Returns all registered protocol IDs pub fn get_supported_protocols(&self) -> Vec { self.protocols.keys().cloned().collect() } + /// Send a notification to the registered notifier for the protocol ID. pub async fn notify( &mut self, protocol: &ProtocolId, @@ -115,6 +126,7 @@ impl Protocols { } } + /// Returns an iterator of currently registered [ProtocolId](self::ProtocolId) pub fn iter(&self) -> impl Iterator { self.protocols.iter().map(|(protocol_id, _)| protocol_id) } diff --git a/comms/core/src/protocol/rpc/client/pool.rs b/comms/core/src/protocol/rpc/client/pool.rs index 70c77bff6b..702f66c71e 100644 --- a/comms/core/src/protocol/rpc/client/pool.rs +++ b/comms/core/src/protocol/rpc/client/pool.rs @@ -122,6 +122,7 @@ where T: RpcPoolClient + From + NamedProtocolService + Clone self.connection.is_connected() } + #[allow(dead_code)] pub(super) fn refresh_num_active_connections(&mut self) -> usize { self.prune(); self.clients.len() diff --git a/comms/core/src/protocol/rpc/context.rs b/comms/core/src/protocol/rpc/context.rs index 142e028162..47ef988fb6 100644 --- a/comms/core/src/protocol/rpc/context.rs +++ b/comms/core/src/protocol/rpc/context.rs @@ -58,10 +58,6 @@ impl RpcCommsBackend { pub fn peer_manager(&self) -> &PeerManager { &self.peer_manager } - - pub fn peer_manager_owned(&self) -> Arc { - self.peer_manager.clone() - } } #[async_trait] @@ -88,6 +84,7 @@ impl RpcCommsProvider for RpcCommsBackend { pub struct RequestContext { request_id: u32, + #[allow(dead_code)] backend: Box, node_id: NodeId, } @@ -109,14 +106,17 @@ impl RequestContext { self.request_id } - pub(crate) async fn fetch_peer(&self) -> Result { + #[allow(dead_code)] + pub async fn fetch_peer(&self) -> Result { self.backend.fetch_peer(&self.node_id).await } + #[allow(dead_code)] async fn dial_peer(&mut self, node_id: &NodeId) -> Result { self.backend.dial_peer(node_id).await } + #[allow(dead_code)] async fn select_connections(&mut self, selection: ConnectivitySelection) -> Result, RpcError> { self.backend.select_connections(selection).await } diff --git a/comms/core/src/protocol/rpc/message.rs b/comms/core/src/protocol/rpc/message.rs index bb68e64a68..a66caacb97 100644 --- a/comms/core/src/protocol/rpc/message.rs +++ b/comms/core/src/protocol/rpc/message.rs @@ -119,6 +119,7 @@ impl BaseRequest { Self { method, message } } + #[allow(dead_code)] pub fn method(&self) -> RpcMethod { self.method } @@ -127,13 +128,14 @@ impl BaseRequest { self.message } - pub fn map(self, mut f: F) -> BaseRequest - where F: FnMut(T) -> U { - BaseRequest { - method: self.method, - message: f(self.message), - } - } + // #[allow(dead_code)] + // pub 
fn map(self, mut f: F) -> BaseRequest + // where F: FnMut(T) -> U { + // BaseRequest { + // method: self.method, + // message: f(self.message), + // } + // } pub fn get_ref(&self) -> &T { &self.message diff --git a/comms/core/src/protocol/rpc/mod.rs b/comms/core/src/protocol/rpc/mod.rs index 056c11a302..3d76b333db 100644 --- a/comms/core/src/protocol/rpc/mod.rs +++ b/comms/core/src/protocol/rpc/mod.rs @@ -20,8 +20,10 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// TODO: Remove once in use -#![allow(dead_code)] +//! # RPC protocol +//! +//! Provides a request/response protocol that supports streaming. +//! Available with the `rpc` crate feature. #[cfg(test)] mod test; diff --git a/comms/core/src/protocol/rpc/server/mock.rs b/comms/core/src/protocol/rpc/server/mock.rs index 814bb9ea30..94862b7081 100644 --- a/comms/core/src/protocol/rpc/server/mock.rs +++ b/comms/core/src/protocol/rpc/server/mock.rs @@ -67,6 +67,7 @@ use crate::{ pub struct RpcRequestMock { comms_provider: RpcCommsBackend, + #[allow(dead_code)] connectivity_mock_state: ConnectivityManagerMockState, } @@ -204,6 +205,7 @@ pub struct MockRpcServer { inner: Option>, protocol_tx: ProtocolNotificationTx, our_node: Arc, + #[allow(dead_code)] request_tx: mpsc::Sender, } diff --git a/comms/core/src/protocol/rpc/server/router.rs b/comms/core/src/protocol/rpc/server/router.rs index 1ac3d63ff3..280457a09b 100644 --- a/comms/core/src/protocol/rpc/server/router.rs +++ b/comms/core/src/protocol/rpc/server/router.rs @@ -101,6 +101,7 @@ impl Router { Box::new(self) } + #[allow(dead_code)] pub(crate) fn all_protocols(&mut self) -> &[ProtocolId] { &self.protocol_names } diff --git a/comms/core/src/protocol/rpc/test/greeting_service.rs b/comms/core/src/protocol/rpc/test/greeting_service.rs index 3869fd8f9f..3648c67ff2 100644 --- a/comms/core/src/protocol/rpc/test/greeting_service.rs +++ b/comms/core/src/protocol/rpc/test/greeting_service.rs @@ -31,7 +31,7 @@ use std::{ time::Duration, }; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tokio::{ sync::{mpsc, RwLock}, task, diff --git a/comms/core/src/protocol/rpc/test/mock.rs b/comms/core/src/protocol/rpc/test/mock.rs index f98bdcae38..14725712b8 100644 --- a/comms/core/src/protocol/rpc/test/mock.rs +++ b/comms/core/src/protocol/rpc/test/mock.rs @@ -157,6 +157,7 @@ impl MockRpcClient { self.inner.request_response(request, method).await } + #[allow(dead_code)] pub async fn server_streaming( &mut self, request: T, diff --git a/comms/core/src/protocol/rpc/test/smoke.rs b/comms/core/src/protocol/rpc/test/smoke.rs index ef15458e90..699b4360a3 100644 --- a/comms/core/src/protocol/rpc/test/smoke.rs +++ b/comms/core/src/protocol/rpc/test/smoke.rs @@ -23,9 +23,9 @@ use std::{sync::Arc, time::Duration}; use futures::StreamExt; -use tari_crypto::tari_utilities::hex::Hex; use tari_shutdown::Shutdown; use tari_test_utils::unpack_enum; +use tari_utilities::hex::Hex; use tokio::{ sync::{mpsc, RwLock}, task, diff --git a/comms/core/src/rate_limit.rs b/comms/core/src/rate_limit.rs index 5091358ddb..2f4f5360d5 100644 --- a/comms/core/src/rate_limit.rs +++ b/comms/core/src/rate_limit.rs @@ -54,6 +54,7 @@ pub trait RateLimit: Stream { impl RateLimit for T {} +/// Rate limiter for a Stream #[pin_project] #[must_use = "streams do nothing unless polled"] pub struct RateLimiter { @@ -73,7 +74,7 @@ pub struct RateLimiter { } impl RateLimiter 
{ - pub fn new(stream: T, capacity: usize, restock_interval: Duration) -> Self { + pub(self) fn new(stream: T, capacity: usize, restock_interval: Duration) -> Self { let mut interval = time::interval(restock_interval); interval.set_missed_tick_behavior(MissedTickBehavior::Burst); RateLimiter { diff --git a/comms/core/src/runtime.rs b/comms/core/src/runtime.rs index 6c6858c315..78c3b8e11e 100644 --- a/comms/core/src/runtime.rs +++ b/comms/core/src/runtime.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! Runtime used by Tari comms (tokio) + use tokio::runtime; // Re-export pub use tokio::{runtime::Handle, task, test}; diff --git a/comms/core/src/socks/client.rs b/comms/core/src/socks/client.rs index 8a23d700df..776db37f40 100644 --- a/comms/core/src/socks/client.rs +++ b/comms/core/src/socks/client.rs @@ -74,6 +74,7 @@ impl fmt::Debug for Authentication { #[repr(u8)] #[derive(Clone, Debug, Copy)] +#[allow(dead_code)] enum Command { Connect = 0x01, Bind = 0x02, diff --git a/comms/core/src/socks/mod.rs b/comms/core/src/socks/mod.rs index 87d5805689..7f7db993a2 100644 --- a/comms/core/src/socks/mod.rs +++ b/comms/core/src/socks/mod.rs @@ -20,8 +20,10 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -// TODO: Remove #[allow(dead_code)] once tests are in place -#[allow(dead_code)] +//! # SOCKS5 client +//! +//! A SOCKS5 client that supports Tor onion addresses. + mod client; pub use client::{Authentication, Socks5Client}; diff --git a/comms/core/src/stream_id.rs b/comms/core/src/stream_id.rs index 4f314340a6..d5403aae52 100644 --- a/comms/core/src/stream_id.rs +++ b/comms/core/src/stream_id.rs @@ -22,18 +22,22 @@ use std::fmt; +/// Implement this trait on any `Stream` that can identify itself by [Id](self::Id). pub trait StreamId { fn stream_id(&self) -> Id; } +/// An integer stream ID #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub struct Id(u32); impl Id { + /// New Id pub fn new(val: u32) -> Self { Self(val) } + /// Returns the stream ID as a u32 pub fn as_u32(self) -> u32 { self.0 } diff --git a/comms/core/src/tor/mod.rs b/comms/core/src/tor/mod.rs index bc08b6285b..01e82aaf59 100644 --- a/comms/core/src/tor/mod.rs +++ b/comms/core/src/tor/mod.rs @@ -24,10 +24,10 @@ //! //! These modules interact with the Tor Control Port to create hidden services on-the-fly. //! -//! The [client](crate::tor::client) module contains the client library for the Tor Control Port. You can find the spec -//! [here](https://gitweb.torproject.org/torspec.git/tree/control-spec.txt). +//! The [client](crate::tor::TorControlPortClient) module contains the client library for the Tor Control Port. You can +//! find the spec here: <https://gitweb.torproject.org/torspec.git/tree/control-spec.txt>. //! -//! The [hidden_service](crate::tor::hidden_service) module contains code which sets up hidden services required for +//! The [hidden_service](crate::tor::HiddenService) module contains code which sets up hidden services required for //! `tari_comms` to function over Tor.
mod control_client; diff --git a/comms/core/src/transports/mod.rs b/comms/core/src/transports/mod.rs index 38a0abc0df..90e3de56de 100644 --- a/comms/core/src/transports/mod.rs +++ b/comms/core/src/transports/mod.rs @@ -24,6 +24,13 @@ // Copyright (c) The Libra Core Contributors // SPDX-License-Identifier: Apache-2.0 +//! # Transports +//! +//! Provides an abstraction for [Transport](self::Transport)s and several implementations: +//! - [TCP](self::TcpTransport) - communication over TCP using IPv4/IPv6 or DNS addresses. +//! - [SOCKS](self::SocksTransport) - communication over a SOCKS5 proxy. +//! - [Memory](self::MemoryTransport) - in-process communication (mpsc channel), typically for testing. + use multiaddr::Multiaddr; use tokio_stream::Stream; @@ -43,6 +50,7 @@ pub use tcp::TcpTransport; mod tcp_with_tor; pub use tcp_with_tor::TcpWithTorTransport; +/// Defines an abstraction for implementations that can dial and listen for connections over a provided address. #[crate::async_trait] pub trait Transport { /// The output of the transport after a connection is established diff --git a/comms/core/src/transports/socks.rs b/comms/core/src/transports/socks.rs index 7ae8b8f788..754eddb0ae 100644 --- a/comms/core/src/transports/socks.rs +++ b/comms/core/src/transports/socks.rs @@ -38,6 +38,7 @@ use crate::{ const LOG_TARGET: &str = "comms::transports::socks"; +/// SOCKS proxy client config #[derive(Clone)] pub struct SocksConfig { pub proxy_address: Multiaddr, @@ -55,6 +56,7 @@ impl Debug for SocksConfig { } } +/// Transport over the SOCKS5 protocol #[derive(Clone)] pub struct SocksTransport { socks_config: SocksConfig, diff --git a/comms/core/src/types.rs b/comms/core/src/types.rs index 1e9bc1d4ba..79a450e9cb 100644 --- a/comms/core/src/types.rs +++ b/comms/core/src/types.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//!
Common Tari comms types + use tari_crypto::{common::Blake256, keys::PublicKey, ristretto::RistrettoPublicKey, signatures::SchnorrSignature}; use tari_storage::lmdb_store::LMDBStore; #[cfg(test)] diff --git a/comms/core/src/utils/datetime.rs b/comms/core/src/utils/datetime.rs index 53bfc93637..e61e2581ff 100644 --- a/comms/core/src/utils/datetime.rs +++ b/comms/core/src/utils/datetime.rs @@ -22,15 +22,13 @@ use std::time::Duration; -use chrono::{DateTime, NaiveTime, Utc}; +use chrono::{DateTime, Local, NaiveDateTime, Utc}; pub fn safe_future_datetime_from_duration(duration: Duration) -> DateTime { let old_duration = chrono::Duration::from_std(duration).unwrap_or_else(|_| chrono::Duration::max_value()); - Utc::now().checked_add_signed(old_duration).unwrap_or_else(|| { - chrono::MAX_DATE - .and_time(NaiveTime::from_hms(0, 0, 0)) - .expect("cannot fail") - }) + Utc::now() + .checked_add_signed(old_duration) + .unwrap_or(chrono::MAX_DATETIME) } pub fn format_duration(duration: Duration) -> String { @@ -48,6 +46,15 @@ pub fn format_duration(duration: Duration) -> String { } } +pub fn format_local_datetime(datetime: &NaiveDateTime) -> String { + let local_datetime = DateTime::::from_utc(*datetime, Local::now().offset().to_owned()); + local_datetime.format("%Y-%m-%d %H:%M:%S").to_string() +} + +pub fn is_max_datetime(datetime: &NaiveDateTime) -> bool { + chrono::MAX_DATETIME.naive_utc() == *datetime +} + #[cfg(test)] mod test { use super::*; diff --git a/comms/core/src/utils/mod.rs b/comms/core/src/utils/mod.rs index 1601cedcd5..59823a19d7 100644 --- a/comms/core/src/utils/mod.rs +++ b/comms/core/src/utils/mod.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! Utilities used within Tari comms. + pub mod atomic_ref_counter; pub mod cidr; pub mod datetime; diff --git a/comms/dht/Cargo.toml b/comms/dht/Cargo.toml index f3406f0a06..60fea1d055 100644 --- a/comms/dht/Cargo.toml +++ b/comms/dht/Cargo.toml @@ -12,8 +12,8 @@ edition = "2018" [dependencies] tari_comms = { version = "^0.31", path = "../core", features = ["rpc"] } tari_comms_rpc_macros = { version = "^0.31", path = "../rpc_macros" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } tari_shutdown = { version = "^0.31", path = "../../infrastructure/shutdown" } tari_storage = { version = "^0.31", path = "../../infrastructure/storage" } tari_common_sqlite = { path = "../../common_sqlite" } @@ -37,6 +37,7 @@ serde = "1.0.90" serde_derive = "1.0.90" thiserror = "1.0.26" tower = { version = "0.4", features = ["full"] } +zeroize = "1.4.0" # Uncomment for tokio tracing via tokio-console (needs "tracing" features) #console-subscriber = "0.1.3" diff --git a/comms/dht/examples/propagation/prompt.rs b/comms/dht/examples/propagation/prompt.rs index 87a3f6b5bf..5dcfaa9de2 100644 --- a/comms/dht/examples/propagation/prompt.rs +++ b/comms/dht/examples/propagation/prompt.rs @@ -29,7 +29,7 @@ use tari_comms::{ types::CommsPublicKey, NodeIdentity, }; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; macro_rules! 
or_continue { ($expr:expr, $($arg:tt)*) => { diff --git a/comms/dht/src/actor.rs b/comms/dht/src/actor.rs index 4bb412c942..94ab91036a 100644 --- a/comms/dht/src/actor.rs +++ b/comms/dht/src/actor.rs @@ -39,9 +39,11 @@ use tari_comms::{ types::CommsPublicKey, PeerConnection, }; -use tari_crypto::tari_utilities::hex::Hex; use tari_shutdown::ShutdownSignal; -use tari_utilities::message_format::{MessageFormat, MessageFormatError}; +use tari_utilities::{ + hex::Hex, + message_format::{MessageFormat, MessageFormatError}, +}; use thiserror::Error; use tokio::{ sync::{mpsc, oneshot}, @@ -63,6 +65,7 @@ use crate::{ const LOG_TARGET: &str = "comms::dht::actor"; +/// Error type for the DHT actor #[derive(Debug, Error)] pub enum DhtActorError { #[error("MPSC channel is disconnected")] @@ -93,6 +96,7 @@ impl From> for DhtActorError { } } +/// Request type for the DHT actor #[derive(Debug)] #[allow(clippy::large_enum_variant)] pub enum DhtRequest { @@ -143,20 +147,23 @@ impl Display for DhtRequest { } } +/// DHT actor requester #[derive(Clone)] pub struct DhtRequester { sender: mpsc::Sender, } impl DhtRequester { - pub fn new(sender: mpsc::Sender) -> Self { + pub(crate) fn new(sender: mpsc::Sender) -> Self { Self { sender } } + /// Send a Join message to the network pub async fn send_join(&mut self) -> Result<(), DhtActorError> { self.sender.send(DhtRequest::SendJoin).await.map_err(Into::into) } + /// Select peers by [BroadcastStrategy](crate::broadcast_strategy::BroadcastStrategy) pub async fn select_peers(&mut self, broadcast_strategy: BroadcastStrategy) -> Result, DhtActorError> { let (reply_tx, reply_rx) = oneshot::channel(); self.sender @@ -165,6 +172,7 @@ impl DhtRequester { reply_rx.await.map_err(|_| DhtActorError::ReplyCanceled) } + /// Adds a message hash to the dedup cache. pub async fn add_message_to_dedup_cache( &mut self, message_hash: Vec, @@ -182,6 +190,7 @@ impl DhtRequester { reply_rx.await.map_err(|_| DhtActorError::ReplyCanceled) } + /// Gets the number of hits for a given message hash. pub async fn get_message_cache_hit_count(&mut self, message_hash: Vec) -> Result { let (reply_tx, reply_rx) = oneshot::channel(); self.sender @@ -191,6 +200,7 @@ impl DhtRequester { reply_rx.await.map_err(|_| DhtActorError::ReplyCanceled) } + /// Returns the deserialized metadata value for the given key pub async fn get_metadata(&mut self, key: DhtMetadataKey) -> Result, DhtActorError> { let (reply_tx, reply_rx) = oneshot::channel(); self.sender.send(DhtRequest::GetMetadata(key, reply_tx)).await?; @@ -202,6 +212,7 @@ impl DhtRequester { } } + /// Sets the metadata value for the given key pub async fn set_metadata(&mut self, key: DhtMetadataKey, value: T) -> Result<(), DhtActorError> { let (reply_tx, reply_rx) = oneshot::channel(); let bytes = value.to_binary().map_err(DhtActorError::FailedToSerializeValue)?; @@ -223,6 +234,7 @@ impl DhtRequester { } } +/// DHT actor. Responsible for executing DHT-related tasks. pub struct DhtActor { node_identity: Arc, peer_manager: Arc, @@ -237,7 +249,8 @@ pub struct DhtActor { } impl DhtActor { - pub fn new( + /// Create a new DhtActor + pub(crate) fn new( config: Arc, conn: DbConnection, node_identity: Arc, @@ -268,6 +281,7 @@ impl DhtActor { } } + /// Spawns the DHT actor on a new task.
pub fn spawn(self) { task::spawn(async move { if let Err(err) = self.run().await { diff --git a/comms/dht/src/broadcast_strategy.rs b/comms/dht/src/broadcast_strategy.rs index 29a58805ef..a4dc21f361 100644 --- a/comms/dht/src/broadcast_strategy.rs +++ b/comms/dht/src/broadcast_strategy.rs @@ -20,6 +20,10 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! # Broadcast strategy +//! +//! Describes a strategy for selecting peers and active connections when sending messages. + use std::{ fmt, fmt::{Display, Formatter}, @@ -29,6 +33,7 @@ use tari_comms::{peer_manager::node_id::NodeId, types::CommsPublicKey}; use crate::envelope::NodeDestination; +/// Parameters for the [ClosestNodes](self::BroadcastStrategy::ClosestNodes) broadcast strategy. #[derive(Debug, Clone)] pub struct BroadcastClosestRequest { pub node_id: NodeId, @@ -48,6 +53,7 @@ impl Display for BroadcastClosestRequest { } } +/// Describes a strategy for selecting peers and active connections when sending messages. #[derive(Debug, Clone)] pub enum BroadcastStrategy { /// Send to a particular peer matching the given node ID @@ -101,11 +107,14 @@ impl BroadcastStrategy { } } + /// Returns true if the strategy is to send directly to the peer, otherwise false pub fn is_direct(&self) -> bool { use BroadcastStrategy::{DirectNodeId, DirectPublicKey}; matches!(self, DirectNodeId(_) | DirectPublicKey(_)) } + /// Returns a reference to the NodeId used in the `DirectNodeId` strategy, otherwise None if the strategy is not + /// `DirectNodeId`. pub fn direct_node_id(&self) -> Option<&NodeId> { use BroadcastStrategy::DirectNodeId; match self { @@ -114,6 +123,8 @@ impl BroadcastStrategy { } } + /// Returns a reference to the `CommsPublicKey` used in the `DirectPublicKey` strategy, otherwise None if the + /// strategy is not `DirectPublicKey`. pub fn direct_public_key(&self) -> Option<&CommsPublicKey> { use BroadcastStrategy::DirectPublicKey; match self { @@ -122,6 +133,8 @@ impl BroadcastStrategy { } } + /// Returns the `CommsPublicKey` used in the `DirectPublicKey` strategy, otherwise None if the strategy is not + /// `DirectPublicKey`. pub fn into_direct_public_key(self) -> Option> { use BroadcastStrategy::DirectPublicKey; match self { diff --git a/comms/dht/src/builder.rs b/comms/dht/src/builder.rs index 88ae3c655a..d9cb9aaa23 100644 --- a/comms/dht/src/builder.rs +++ b/comms/dht/src/builder.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! A builder for customizing and constructing the DHT + use std::{sync::Arc, time::Duration}; use tari_comms::{connectivity::ConnectivityRequester, NodeIdentity, PeerManager}; @@ -35,6 +37,13 @@ use crate::{ DhtConfig, }; +/// Builder for the DHT. 
+/// +/// ```rust +/// # use tari_comms_dht::{DbConnectionUrl, Dht}; +/// let builder = Dht::builder().mainnet().with_database_url(DbConnectionUrl::Memory); +/// // let dht = builder.build(...).unwrap(); +/// ``` #[derive(Debug, Clone, Default)] pub struct DhtBuilder { config: DhtConfig, @@ -42,7 +51,7 @@ pub struct DhtBuilder { } impl DhtBuilder { - pub fn new() -> Self { + pub(crate) fn new() -> Self { Self { #[cfg(test)] config: DhtConfig::default_local_test(), @@ -52,87 +61,89 @@ impl DhtBuilder { } } + /// Specify a complete [DhtConfig](crate::DhtConfig). pub fn with_config(&mut self, config: DhtConfig) -> &mut Self { self.config = config; self } + /// Default configuration for local test environments. pub fn local_test(&mut self) -> &mut Self { self.config = DhtConfig::default_local_test(); self } + /// Sets the DHT protocol version. pub fn with_protocol_version(&mut self, protocol_version: DhtProtocolVersion) -> &mut Self { self.config.protocol_version = protocol_version; self } + /// Sets whether SAF messages are automatically requested on every new connection to a SAF node. pub fn set_auto_store_and_forward_requests(&mut self, enabled: bool) -> &mut Self { self.config.saf.auto_request = enabled; self } + /// Sets the mpsc sender that is hooked up to the outbound messaging pipeline. pub fn with_outbound_sender(&mut self, outbound_tx: mpsc::Sender) -> &mut Self { self.outbound_tx = Some(outbound_tx); self } + /// Use the default testnet configuration. pub fn testnet(&mut self) -> &mut Self { self.config = DhtConfig::default_testnet(); self } + /// Use the default mainnet configuration. pub fn mainnet(&mut self) -> &mut Self { self.config = DhtConfig::default_mainnet(); self } + /// Sets the [DbConnectionUrl](crate::DbConnectionUrl). pub fn with_database_url(&mut self, database_url: DbConnectionUrl) -> &mut Self { self.config.database_url = database_url; self } - pub fn with_dedup_cache_trim_interval(&mut self, trim_interval: Duration) -> &mut Self { - self.config.dedup_cache_trim_interval = trim_interval; - self - } - - pub fn with_dedup_cache_capacity(&mut self, capacity: usize) -> &mut Self { - self.config.dedup_cache_capacity = capacity; - self - } - - pub fn with_dedup_discard_hit_count(&mut self, max_hit_count: usize) -> &mut Self { - self.config.dedup_allowed_message_occurrences = max_hit_count; - self - } - + /// The number of connections to random peers that should be maintained. + /// Connections to random peers are reshuffled every `DhtConfig::connectivity::random_pool_refresh_interval`. pub fn with_num_random_nodes(&mut self, n: usize) -> &mut Self { self.config.num_random_nodes = n; self } + /// The number of neighbouring peers that the DHT should try maintain connections to. pub fn with_num_neighbouring_nodes(&mut self, n: usize) -> &mut Self { self.config.num_neighbouring_nodes = n; self.config.saf.num_neighbouring_nodes = n; self } + /// The number of peers to send a message using the + /// [Broadcast](crate::broadcast_strategy::BroadcastStrategy::Propagate) strategy. pub fn with_propagation_factor(&mut self, propagation_factor: usize) -> &mut Self { self.config.propagation_factor = propagation_factor; self } + /// The number of peers to send a message broadcast using the + /// [Broadcast](crate::broadcast_strategy::BroadcastStrategy::Broadcast) strategy. 
pub fn with_broadcast_factor(&mut self, broadcast_factor: usize) -> &mut Self { self.config.broadcast_factor = broadcast_factor; self } + /// The length of time to wait for a discovery reply after a discovery message has been sent. pub fn with_discovery_timeout(&mut self, timeout: Duration) -> &mut Self { self.config.discovery_request_timeout = timeout; self } + /// Enables automatically sending a join/announce message when connected to enough peers on the network. pub fn enable_auto_join(&mut self) -> &mut Self { self.config.auto_join = true; self diff --git a/comms/dht/src/config.rs b/comms/dht/src/config.rs index 21115f66e5..8406eb9556 100644 --- a/comms/dht/src/config.rs +++ b/comms/dht/src/config.rs @@ -101,14 +101,17 @@ pub struct DhtConfig { } impl DhtConfig { + /// Default testnet configuration pub fn default_testnet() -> Self { Default::default() } + /// Default mainnet configuration pub fn default_mainnet() -> Self { Default::default() } + /// Default local test configuration pub fn default_local_test() -> Self { Self { database_url: DbConnectionUrl::Memory, @@ -171,7 +174,7 @@ pub struct DhtConnectivityConfig { pub update_interval: Duration, /// The interval to change the random pool peers. /// Default: 2 hours - pub random_pool_refresh: Duration, + pub random_pool_refresh_interval: Duration, /// Length of cooldown when high connection failure rates are encountered. Default: 45s pub high_failure_rate_cooldown: Duration, /// The minimum desired ratio of TCPv4 to Tor connections. TCPv4 addresses have some significant cost to create, @@ -185,7 +188,7 @@ impl Default for DhtConnectivityConfig { fn default() -> Self { Self { update_interval: Duration::from_secs(2 * 60), - random_pool_refresh: Duration::from_secs(2 * 60 * 60), + random_pool_refresh_interval: Duration::from_secs(2 * 60 * 60), high_failure_rate_cooldown: Duration::from_secs(45), minimum_desired_tcpv4_node_ratio: 0.1, } diff --git a/comms/dht/src/connectivity/mod.rs b/comms/dht/src/connectivity/mod.rs index c5c24bd0ca..982b15a443 100644 --- a/comms/dht/src/connectivity/mod.rs +++ b/comms/dht/src/connectivity/mod.rs @@ -20,6 +20,16 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! # DHT Connectivity Actor +//! +//! Responsible for ensuring DHT network connectivity to a neighbouring and random peer set. This includes joining the +//! network when the node has established some peer connections (e.g to seed peers). It maintains neighbouring and +//! random peer pools and instructs the comms `ConnectivityManager` to establish those connections. Once a configured +//! percentage of these peers is online, the node is established on the DHT network. +//! +//! The DHT connectivity actor monitors the connectivity state (using `ConnectivityEvent`s) and attempts +//! to maintain connectivity to the network as peers come and go. + #[cfg(test)] mod test; @@ -50,6 +60,7 @@ use crate::{connectivity::metrics::MetricsError, event::DhtEvent, DhtActorError, const LOG_TARGET: &str = "comms::dht::connectivity"; +/// Error type for the DHT connectivity actor. #[derive(Debug, Error)] pub enum DhtConnectivityError { #[error("ConnectivityError: {0}")] @@ -62,16 +73,8 @@ pub enum DhtConnectivityError { MetricError(#[from] MetricsError), } -/// # DHT Connectivity Actor -/// -/// Responsible for ensuring DHT network connectivity to a neighbouring and random peer set. 
This includes joining the -/// network when the node has established some peer connections (e.g to seed peers). It maintains neighbouring and -/// random peer pools and instructs the comms `ConnectivityManager` to establish those connections. Once a configured -/// percentage of these peers is online, the node is established on the DHT network. -/// -/// The DHT connectivity actor monitors the connectivity state (using `ConnectivityEvent`s) and attempts -/// to maintain connectivity to the network as peers come and go. -pub struct DhtConnectivity { +/// DHT connectivity actor. +pub(crate) struct DhtConnectivity { config: Arc, peer_manager: Arc, node_identity: Arc, @@ -133,7 +136,9 @@ impl DhtConnectivity { task::spawn(async move { log_mdc::extend(mdc.clone()); debug!(target: LOG_TARGET, "Waiting for connectivity manager to start"); - let _result = self.connectivity.wait_started().await; + if let Err(err) = self.connectivity.wait_started().await { + error!(target: LOG_TARGET, "Comms connectivity failed to start: {}", err); + } log_mdc::extend(mdc.clone()); match self.run(connectivity_events).await { Ok(_) => Ok(()), @@ -442,7 +447,7 @@ impl DhtConnectivity { async fn refresh_random_pool_if_required(&mut self) -> Result<(), DhtConnectivityError> { let should_refresh = self.config.num_random_nodes > 0 && self.random_pool_last_refresh - .map(|instant| instant.elapsed() >= self.config.connectivity.random_pool_refresh) + .map(|instant| instant.elapsed() >= self.config.connectivity.random_pool_refresh_interval) .unwrap_or(true); if should_refresh { self.refresh_random_pool().await?; diff --git a/comms/dht/src/crypt.rs b/comms/dht/src/crypt.rs index 1df29abf2c..964d4d6a92 100644 --- a/comms/dht/src/crypt.rs +++ b/comms/dht/src/crypt.rs @@ -35,6 +35,7 @@ use tari_crypto::{ keys::{DiffieHellmanSharedSecret, PublicKey}, tari_utilities::{epoch_time::EpochTime, ByteArray}, }; +use zeroize::{Zeroize, ZeroizeOnDrop}; use crate::{ envelope::{DhtMessageFlags, DhtMessageHeader, DhtMessageType, NodeDestination}, @@ -42,37 +43,42 @@ use crate::{ version::DhtProtocolVersion, }; -pub fn generate_ecdh_secret(secret_key: &PK::K, public_key: &PK) -> PK +#[derive(Debug, Clone, Zeroize, ZeroizeOnDrop)] +pub struct CipherKey(chacha20::Key); + +/// Generates a Diffie-Hellman secret `kx.G` as a `chacha20::Key` given secret scalar `k` and public key `P = x.G`. +pub fn generate_ecdh_secret(secret_key: &PK::K, public_key: &PK) -> CipherKey where PK: PublicKey + DiffieHellmanSharedSecret { - PK::shared_secret(secret_key, public_key) + // TODO: PK will still leave the secret in released memory. Implementing Zeroize on RistrettoPublicKey is not + // currently possible because (Compressed)RistrettoPoint does not implement it. + let k = PK::shared_secret(secret_key, public_key); + CipherKey(*Key::from_slice(k.as_bytes())) } -pub fn decrypt(cipher_key: &CommsPublicKey, cipher_text: &[u8]) -> Result, DhtOutboundError> { +/// Decrypts cipher text using ChaCha20 stream cipher given the cipher key and cipher text with integral nonce.
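+///
+/// An illustrative round-trip sketch (mirrors the `encrypt_decrypt` test below; `node_identity` is assumed):
+/// ```ignore
+/// let key = generate_ecdh_secret(node_identity.secret_key(), node_identity.public_key());
+/// let plain_text = b"hello".to_vec();
+/// let cipher_text = encrypt(&key, &plain_text);
+/// let decrypted = decrypt(&key, &cipher_text).unwrap();
+/// assert_eq!(decrypted, plain_text);
+/// ```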
+pub fn decrypt(cipher_key: &CipherKey, cipher_text: &[u8]) -> Result, DhtOutboundError> { if cipher_text.len() < size_of::() { return Err(DhtOutboundError::CipherError( "Cipher text is not long enough to include nonce".to_string(), )); } + let (nonce, cipher_text) = cipher_text.split_at(size_of::()); let nonce = Nonce::from_slice(nonce); let mut cipher_text = cipher_text.to_vec(); - let key = Key::from_slice(cipher_key.as_bytes()); // 32-bytes - let mut cipher = ChaCha20::new(key, nonce); - + let mut cipher = ChaCha20::new(&cipher_key.0, nonce); cipher.apply_keystream(cipher_text.as_mut_slice()); - Ok(cipher_text) } -pub fn encrypt(cipher_key: &CommsPublicKey, plain_text: &[u8]) -> Result, DhtOutboundError> { +/// Encrypt the plain text using the ChaCha20 stream cipher +pub fn encrypt(cipher_key: &CipherKey, plain_text: &[u8]) -> Vec { let mut nonce = [0u8; size_of::()]; - OsRng.fill_bytes(&mut nonce); - let nonce_ga = Nonce::from_slice(&nonce); - let key = Key::from_slice(cipher_key.as_bytes()); // 32-bytes - let mut cipher = ChaCha20::new(key, nonce_ga); + let nonce_ga = Nonce::from_slice(&nonce); + let mut cipher = ChaCha20::new(&cipher_key.0, nonce_ga); // Cloning the plain text to avoid a caller thinking we have encrypted in place and losing the integral nonce added // below @@ -80,11 +86,13 @@ pub fn encrypt(cipher_key: &CommsPublicKey, plain_text: &[u8]) -> Result cipher.apply_keystream(plain_text_clone.as_mut_slice()); - let mut ciphertext_integral_nonce = nonce.to_vec(); + let mut ciphertext_integral_nonce = Vec::with_capacity(nonce.len() + plain_text_clone.len()); + ciphertext_integral_nonce.extend(&nonce); ciphertext_integral_nonce.append(&mut plain_text_clone); - Ok(ciphertext_integral_nonce) + ciphertext_integral_nonce } +/// Generates a challenge for the origin MAC. pub fn create_origin_mac_challenge(header: &DhtMessageHeader, body: &[u8]) -> Challenge { create_origin_mac_challenge_parts( header.version, @@ -97,6 +105,7 @@ pub fn create_origin_mac_challenge(header: &DhtMessageHeader, body: &[u8]) -> Ch ) } +/// Generates a challenge for the origin MAC. 
pub fn create_origin_mac_challenge_parts( protocol_version: DhtProtocolVersion, destination: &NodeDestination, @@ -108,7 +117,7 @@ pub fn create_origin_mac_challenge_parts( ) -> Challenge { let mut mac_challenge = Challenge::new(); mac_challenge.update(&protocol_version.to_bytes()); - mac_challenge.update(destination.to_inner_bytes().as_slice()); + mac_challenge.update(destination.as_inner_bytes()); mac_challenge.update(&(message_type as i32).to_le_bytes()); mac_challenge.update(&flags.bits().to_le_bytes()); if let Some(t) = expires { @@ -129,16 +138,18 @@ mod test { #[test] fn encrypt_decrypt() { - let key = CommsPublicKey::default(); + let pk = CommsPublicKey::default(); + let key = CipherKey(*chacha20::Key::from_slice(pk.as_bytes())); let plain_text = "Last enemy position 0830h AJ 9863".as_bytes().to_vec(); - let encrypted = encrypt(&key, &plain_text).unwrap(); + let encrypted = encrypt(&key, &plain_text); let decrypted = decrypt(&key, &encrypted).unwrap(); assert_eq!(decrypted, plain_text); } #[test] fn decrypt_fn() { - let key = CommsPublicKey::default(); + let pk = CommsPublicKey::default(); + let key = CipherKey(*chacha20::Key::from_slice(pk.as_bytes())); let cipher_text = from_hex("24bf9e698e14938e93c09e432274af7c143f8fb831f344f244ef02ca78a07ddc28b46fec536a0ca5c04737a604") .unwrap(); diff --git a/comms/dht/src/dedup/dedup_cache.rs b/comms/dht/src/dedup/dedup_cache.rs index 72782ca9b0..2c7786295d 100644 --- a/comms/dht/src/dedup/dedup_cache.rs +++ b/comms/dht/src/dedup/dedup_cache.rs @@ -24,8 +24,7 @@ use chrono::{NaiveDateTime, Utc}; use diesel::{dsl, result::DatabaseErrorKind, sql_types, ExpressionMethods, OptionalExtension, QueryDsl, RunQueryDsl}; use log::*; use tari_comms::types::CommsPublicKey; -use tari_crypto::tari_utilities::hex::to_hex; -use tari_utilities::hex::Hex; +use tari_utilities::hex::{to_hex, Hex}; use crate::{ schema::dedup_cache, diff --git a/comms/dht/src/dedup/mod.rs b/comms/dht/src/dedup/mod.rs index aa13d72117..2832b0504a 100644 --- a/comms/dht/src/dedup/mod.rs +++ b/comms/dht/src/dedup/mod.rs @@ -20,6 +20,10 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! # Dedup Cache +//! +//! Keeps track of messages seen before by this node and discards duplicates. 
+ mod dedup_cache; use std::task::Poll; diff --git a/comms/dht/src/dht.rs b/comms/dht/src/dht.rs index eaa8f72846..58a3297434 100644 --- a/comms/dht/src/dht.rs +++ b/comms/dht/src/dht.rs @@ -43,7 +43,7 @@ use crate::{ event::{DhtEventReceiver, DhtEventSender}, filter, inbound, - inbound::{DecryptedDhtMessage, DhtInboundMessage, MetricsLayer}, + inbound::{DecryptedDhtMessage, DhtInboundMessage, ForwardLayer, MetricsLayer}, logging_middleware::MessageLoggingLayer, network_discovery::DhtNetworkDiscovery, outbound, @@ -318,7 +318,7 @@ impl Dht { Arc::clone(&self.node_identity), self.store_and_forward_requester(), )) - .layer(store_forward::ForwardLayer::new( + .layer(ForwardLayer::new( self.outbound_requester(), self.node_identity.features().contains(PeerFeatures::DHT_STORE_FORWARD), )) @@ -596,7 +596,7 @@ mod test { // Encrypt for someone else let node_identity2 = make_node_identity(); let ecdh_key = crypt::generate_ecdh_secret(node_identity2.secret_key(), node_identity2.public_key()); - let encrypted_bytes = crypt::encrypt(&ecdh_key, &msg.to_encoded_bytes()).unwrap(); + let encrypted_bytes = crypt::encrypt(&ecdh_key, &msg.to_encoded_bytes()); let dht_envelope = make_dht_envelope( &node_identity2, encrypted_bytes, diff --git a/comms/dht/src/discovery/mod.rs b/comms/dht/src/discovery/mod.rs index 31b92622af..f412c66d3b 100644 --- a/comms/dht/src/discovery/mod.rs +++ b/comms/dht/src/discovery/mod.rs @@ -20,6 +20,20 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! # DHT discovery protocol +//! +//! This protocol broadcasts an encrypted discovery message to the destination peer. +//! The source of this message is unknown to other network peers without using heuristic-based network analysis. +//! This method of discovery requires both peers to be online. +//! +//! The protocol functions as follows: +//! 1. Broadcast an encrypted [Discovery](crate::envelope::DhtMessageType) message destined for the peer containing the +//! necessary details to connect to this peer. +//! 1. If the peer is online, it may decrypt the message and view the peer +//! connection details. +//! 1. The peer may then add the peer and attempt to connect to it. +//! 1. Once a direct connection is established, the discovery is complete. + mod error; pub use error::DhtDiscoveryError; diff --git a/comms/dht/src/domain_message.rs b/comms/dht/src/domain_message.rs index b7fa8e91a7..3a08eb30ee 100644 --- a/comms/dht/src/domain_message.rs +++ b/comms/dht/src/domain_message.rs @@ -24,6 +24,7 @@ use std::cmp; use rand::{rngs::OsRng, RngCore}; +/// Trait that exposes conversion to a protobuf i32 enum type. pub trait ToProtoEnum { fn as_i32(&self) -> i32; } @@ -34,6 +35,7 @@ impl ToProtoEnum for i32 { } } +/// Domain message to be sent to another peer. #[derive(Debug, Clone)] pub struct OutboundDomainMessage { inner: T, @@ -41,6 +43,7 @@ pub struct OutboundDomainMessage { } impl OutboundDomainMessage { + /// Create a new outbound domain message pub fn new(message_type: &M, message: T) -> Self { Self { inner: message, @@ -48,15 +51,18 @@ impl OutboundDomainMessage { } } + /// Consumes this instance returning the inner message. 
pub fn into_inner(self) -> T { self.inner } - pub fn to_propagation_header(&self) -> MessageHeader { + /// Returns a propagation message header + pub(crate) fn to_propagation_header(&self) -> MessageHeader { MessageHeader::for_propagation(self.message_type) } - pub fn to_header(&self) -> MessageHeader { + /// Creates a MessageHeader for this outbound message + pub(crate) fn to_header(&self) -> MessageHeader { MessageHeader::new(self.message_type) } } @@ -64,6 +70,7 @@ impl OutboundDomainMessage { pub use crate::proto::message_header::MessageHeader; impl MessageHeader { + /// Creates a new message header with the given message type and random nonce. pub fn new(message_type: i32) -> Self { Self { message_type, @@ -73,7 +80,8 @@ impl MessageHeader { } } - pub fn for_propagation(message_type: i32) -> Self { + /// Creates a new message header with the given message type and a fixed nonce. + pub(crate) fn for_propagation(message_type: i32) -> Self { const PROPAGATION_NONCE: u64 = 0; Self { message_type, diff --git a/comms/dht/src/envelope.rs b/comms/dht/src/envelope.rs index 1181bdd9a4..dba33c957c 100644 --- a/comms/dht/src/envelope.rs +++ b/comms/dht/src/envelope.rs @@ -40,7 +40,7 @@ use thiserror::Error; pub use crate::proto::envelope::{dht_header::Destination, DhtEnvelope, DhtHeader, DhtMessageType}; use crate::version::DhtProtocolVersion; -/// Utility function that converts a `chrono::DateTime` to a `prost::Timestamp` +/// Utility function that converts a `chrono::DateTime` to a `prost_type::Timestamp` pub(crate) fn datetime_to_timestamp(datetime: DateTime) -> Timestamp { Timestamp { seconds: datetime.timestamp(), @@ -58,12 +58,13 @@ pub(crate) fn timestamp_to_datetime(timestamp: Timestamp) -> Option) -> EpochTime { - EpochTime::from(datetime) + EpochTime::from_secs_since_epoch(datetime.timestamp() as u64) } /// Utility function that converts a `EpochTime` to a `chrono::DateTime` pub(crate) fn epochtime_to_datetime(datetime: EpochTime) -> DateTime { - DateTime::from(datetime) + let dt = NaiveDateTime::from_timestamp(i64::try_from(datetime.as_u64()).unwrap_or(i64::MAX), 0); + DateTime::from_utc(dt, Utc) } #[derive(Debug, Error)] @@ -271,15 +272,18 @@ pub enum NodeDestination { } impl NodeDestination { - pub fn to_inner_bytes(&self) -> Vec { + /// Returns the slice of bytes of the `CommsPublicKey` or `NodeId`. Returns an empty slice if the destination is + /// `Unknown`. + pub fn as_inner_bytes(&self) -> &[u8] { use NodeDestination::{NodeId, PublicKey, Unknown}; match self { - Unknown => Vec::default(), - PublicKey(pk) => pk.to_vec(), - NodeId(node_id) => node_id.to_vec(), + Unknown => &[], + PublicKey(pk) => pk.as_bytes(), + NodeId(node_id) => node_id.as_bytes(), } } + /// Returns a reference to the `CommsPublicKey` if the destination is `CommsPublicKey`. pub fn public_key(&self) -> Option<&CommsPublicKey> { use NodeDestination::{NodeId, PublicKey, Unknown}; match self { @@ -289,6 +293,7 @@ impl NodeDestination { } } + /// Returns a reference to the `NodeId` if the destination is `NodeId`. pub fn node_id(&self) -> Option<&NodeId> { use NodeDestination::{NodeId, PublicKey, Unknown}; match self { @@ -298,16 +303,20 @@ impl NodeDestination { } } + /// Returns the NodeId for this destination, deriving it from the PublicKey if necessary or returning None if the + /// destination is `Unknown`. pub fn to_derived_node_id(&self) -> Option { self.node_id() .cloned() .or_else(|| self.public_key().map(NodeId::from_public_key)) } + /// Returns true if the destination is `Unknown`, otherwise false. 
pub fn is_unknown(&self) -> bool { matches!(self, NodeDestination::Unknown) } + /// Returns true if the NodeIdentity NodeId or PublicKey is equal to this destination. #[inline] pub fn equals_node_identity(&self, other: &NodeIdentity) -> bool { self == other.node_id() || self == other.public_key() diff --git a/comms/dht/src/event.rs b/comms/dht/src/event.rs index b01339f6c0..f7a4725e3d 100644 --- a/comms/dht/src/event.rs +++ b/comms/dht/src/event.rs @@ -29,6 +29,7 @@ use crate::network_discovery::DhtNetworkDiscoveryRoundInfo; pub type DhtEventSender = broadcast::Sender>; pub type DhtEventReceiver = broadcast::Receiver>; +/// Events emitted by the DHT actor. #[derive(Debug)] #[non_exhaustive] pub enum DhtEvent { diff --git a/comms/dht/src/inbound/decryption.rs b/comms/dht/src/inbound/decryption.rs index 527c2405e3..941aa2290b 100644 --- a/comms/dht/src/inbound/decryption.rs +++ b/comms/dht/src/inbound/decryption.rs @@ -39,6 +39,7 @@ use tower::{layer::Layer, Service, ServiceExt}; use crate::{ crypt, + crypt::CipherKey, envelope::DhtMessageHeader, inbound::message::{DecryptedDhtMessage, DhtInboundMessage}, proto::envelope::OriginMac, @@ -301,7 +302,7 @@ where S: Service } fn attempt_decrypt_origin_mac( - shared_secret: &CommsPublicKey, + shared_secret: &CipherKey, dht_header: &DhtMessageHeader, ) -> Result<(CommsPublicKey, Vec), DecryptionError> { let encrypted_origin_mac = Some(&dht_header.origin_mac) @@ -333,7 +334,7 @@ where S: Service } fn attempt_decrypt_message_body( - shared_secret: &CommsPublicKey, + shared_secret: &CipherKey, message_body: &[u8], ) -> Result { let decrypted = diff --git a/comms/dht/src/store_forward/forward.rs b/comms/dht/src/inbound/forward.rs similarity index 85% rename from comms/dht/src/store_forward/forward.rs rename to comms/dht/src/inbound/forward.rs index 7a2fa3b1eb..dca5b4c6ee 100644 --- a/comms/dht/src/store_forward/forward.rs +++ b/comms/dht/src/inbound/forward.rs @@ -1,24 +1,24 @@ -// Copyright 2019, The Tari Project +// Copyright 2022. The Tari Project // -// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the -// following conditions are met: +// Redistribution and use in source and binary forms, with or without modification, are permitted provided that the +// following conditions are met: // -// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following -// disclaimer. +// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following +// disclaimer. // -// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the -// following disclaimer in the documentation and/or other materials provided with the distribution. +// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the +// following disclaimer in the documentation and/or other materials provided with the distribution. // -// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote -// products derived from this software without specific prior written permission. +// 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote +// products derived from this software without specific prior written permission. 
// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, -// INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, -// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE -// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, +// INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, +// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE +// USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. use std::task::Poll; @@ -30,9 +30,8 @@ use tower::{layer::Layer, Service, ServiceExt}; use crate::{ envelope::NodeDestination, - inbound::DecryptedDhtMessage, + inbound::{error::DhtInboundError, DecryptedDhtMessage}, outbound::{OutboundMessageRequester, SendMessageParams}, - store_forward::error::StoreAndForwardError, }; const LOG_TARGET: &str = "comms::dht::storeforward::forward"; @@ -166,7 +165,7 @@ where S: Service Ok(()) } - async fn forward(&mut self, message: &DecryptedDhtMessage) -> Result<(), StoreAndForwardError> { + async fn forward(&mut self, message: &DecryptedDhtMessage) -> Result<(), DhtInboundError> { let DecryptedDhtMessage { source_peer, decryption_result, diff --git a/comms/dht/src/inbound/mod.rs b/comms/dht/src/inbound/mod.rs index ec6f22acbf..460efaeab5 100644 --- a/comms/dht/src/inbound/mod.rs +++ b/comms/dht/src/inbound/mod.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! DHT middleware layers for inbound messages. + mod decryption; pub use decryption::DecryptionLayer; @@ -29,10 +31,14 @@ pub use deserialize::DeserializeLayer; mod dht_handler; pub use dht_handler::DhtHandlerLayer; +mod forward; +pub use forward::ForwardLayer; + mod metrics; pub use metrics::MetricsLayer; mod error; mod message; + pub use message::{DecryptedDhtMessage, DhtInboundMessage}; diff --git a/comms/dht/src/lib.rs b/comms/dht/src/lib.rs index de70c2589d..aa3a81e9d7 100644 --- a/comms/dht/src/lib.rs +++ b/comms/dht/src/lib.rs @@ -29,14 +29,15 @@ //! `InboundMessage`(comms) -> _DHT Inbound Middleware_ -> `DhtInboundMessage`(domain) //! //! The DHT inbound middleware consist of: -//! * `DeserializeMiddleware` deserializes the body of an `InboundMessage` into a `DhtEnvelope`. -//! * `DecryptionMiddleware` attempts to decrypt the body of a `DhtEnvelope` if required. 
The result of that decryption -//! (success or failure) is passed to the next service. -//! * `ForwardMiddleware` uses the result of the decryption to determine if the message is destined for this node or -//! not. If not, the message will be forwarded to the applicable peers using the OutboundRequester (i.e. the outbound -//! DHT middleware). -//! * `DhtHandlerMiddleware` handles DHT messages, such as `Join` and `Discover`. If the messages are _not_ DHT messages -//! the `next_service` is called. +//! * metrics: monitors the number of inbound messages +//! * decryption: deserializes and decrypts the `InboundMessage` and produces a +//! [DecryptedDhtMessage](crate::inbound::DecryptedDhtMessage). +//! * dedup: discards the message if previously received. +//! * logging: message logging +//! * SAF storage: stores certain messages for other peers in the SAF store. +//! * forwarding: forwards messages for other peers. +//! * SAF message handler: handles SAF protocol messages (requests for SAF messages, SAF message responses). +//! * DHT message handler: handles DHT protocol messages (discovery, join etc.) //! //! #### Outbound Message Flow //! @@ -50,63 +51,12 @@ //! `DhtOutboundRequest` (domain) -> _DHT Outbound Middleware_ -> `OutboundMessage` (comms) //! //! The DHT outbound middleware consist of: -//! * `BroadcastMiddleware` produces multiple outbound messages according on the `BroadcastStrategy` from the received +//! * broadcast layer: produces multiple outbound messages according to the `BroadcastStrategy` from the received //! `DhtOutboundRequest` message. The `next_service` is called for each resulting message. -//! * `EncryptionMiddleware` encrypts the body of a message if `DhtMessagheFlags::ENCRYPTED` is given. The result is -//! passed onto the `next_service`. -//! * `SerializeMiddleware` wraps the body in a `DhtEnvelope`, serializes the result, constructs an `OutboundMessage` -//! and calls `next_service`. Typically, `next_service` will be a `SinkMiddleware` which send the message to the comms -//! OMS. -// -//! ## Usage -//! -//! ```edition2018,compile_fail -//! #use tari_comms::middleware::ServicePipeline; -//! #use tari_comms_dht::DhtBuilder; -//! #use tari_comms::middleware::sink::SinkMiddleware; -//! #use tari_comms::peer_manager::NodeIdentity; -//! #use rand::rngs::OsRng; -//! #use std::sync::Arc; -//! #use tari_comms::CommsBuilder; -//! #use tokio::runtime::Runtime; -//! #use tokio::sync::mpsc; -//! -//! let runtime = Runtime::new().unwrap(); -//! // Channel from comms to inbound dht -//! let (comms_in_tx, comms_in_rx)= mpsc::channel(100); -//! let (comms_out_tx, comms_out_rx)= mpsc::channel(100); -//! let node_identity = NodeIdentity::random(&mut OsRng::new().unwrap(), "127.0.0.1:9000".parse().unwrap()) -//! .map(Arc::new).unwrap(); -//! let comms = CommsBuilder::new(runtime.executor()) -//! // Messages coming from comms -//! .with_inbound_sink(comms_in_tx) -//! // Messages going to comms -//! .with_outbound_stream(comms_out_rx) -//! .with_node_identity(node_identity) -//! .build() -//! .unwrap(); -//! let peer_manager = comms.start().unwrap().peer_manager(); -//! let dht = Dht::builder().build(node_identity, peer_manager)?; -//! -//! let inbound_pipeline = ServicePipeline::new( -//! comms_in_rx, -//! // In Tari's case, the service would be a InboundMessageConnector in `tari_p2p` -//! dht.inbound_middleware_layer(/* some service which uses DhtInboundMessage */ ) -//! ); -//! // Use the given executor to spawn calls to the middleware -//! 
inbound_pipeline.spawn_with(rt.executor()); -//! -//! let outbound_pipeline = ServicePipeline::new( -//! dht.take_outbound_receiver(), -//! // SinkMiddleware sends the resulting OutboundMessages to the comms OMS -//! dht.outbound_middleware_layer(SinkMiddleware::new(comms_out_tx)) -//! ); -//! // Use the given executor to spawn calls to the middleware -//! outbound_pipeline.spawn_with(rt.executor()); -//! -//! let oms = dht.outbound_requester(); -//! oms.send_message(...).await; -//! ``` +//! * message logger layer. +//! * serialization: wraps the body in a [DhtOutboundMessage](crate::outbound::DhtOutboundMessage), serializes the +//! result, constructs an `OutboundMessage` and calls `next_service`. Typically, `next_service` will be a +//! `SinkMiddleware` which send the message to comms messaging. #![recursion_limit = "256"] #[macro_use] diff --git a/comms/dht/src/logging_middleware.rs b/comms/dht/src/logging_middleware.rs index e1aa03cfee..5789457095 100644 --- a/comms/dht/src/logging_middleware.rs +++ b/comms/dht/src/logging_middleware.rs @@ -35,6 +35,7 @@ pub struct MessageLoggingLayer<'a, R> { } impl<'a, R> MessageLoggingLayer<'a, R> { + /// Creates a new logging middleware layer pub fn new>>(prefix_msg: T) -> Self { Self { prefix_msg: prefix_msg.into(), @@ -55,6 +56,7 @@ where } } +/// [Service](https://tower-rs.github.io/tower/tower_service/) for DHT message logging. #[derive(Clone)] pub struct MessageLoggingService<'a, S> { prefix_msg: Cow<'a, str>, diff --git a/comms/dht/src/outbound/broadcast.rs b/comms/dht/src/outbound/broadcast.rs index 0e11eba5f4..5fb8f9cea7 100644 --- a/comms/dht/src/outbound/broadcast.rs +++ b/comms/dht/src/outbound/broadcast.rs @@ -39,11 +39,8 @@ use tari_comms::{ types::{Challenge, CommsPublicKey}, utils::signature, }; -use tari_crypto::{ - keys::PublicKey, - tari_utilities::{epoch_time::EpochTime, message_format::MessageFormat, ByteArray}, -}; -use tari_utilities::hex::Hex; +use tari_crypto::keys::PublicKey; +use tari_utilities::{epoch_time::EpochTime, hex::Hex, message_format::MessageFormat, ByteArray}; use tokio::sync::oneshot; use tower::{layer::Layer, Service, ServiceExt}; @@ -500,7 +497,7 @@ where S: Service let (e_secret_key, e_public_key) = CommsPublicKey::random_keypair(&mut OsRng); let shared_ephemeral_secret = crypt::generate_ecdh_secret(&e_secret_key, &**public_key); // Encrypt the message with the body - let encrypted_body = crypt::encrypt(&shared_ephemeral_secret, &body)?; + let encrypted_body = crypt::encrypt(&shared_ephemeral_secret, &body); let mac_challenge = crypt::create_origin_mac_challenge_parts( self.protocol_version, @@ -514,7 +511,7 @@ where S: Service // Sign the encrypted message let origin_mac = create_origin_mac(&self.node_identity, mac_challenge)?; // Encrypt and set the origin field - let encrypted_origin_mac = crypt::encrypt(&shared_ephemeral_secret, &origin_mac)?; + let encrypted_origin_mac = crypt::encrypt(&shared_ephemeral_secret, &origin_mac); Ok(( Some(Arc::new(e_public_key)), Some(encrypted_origin_mac.into()), diff --git a/comms/dht/src/outbound/error.rs b/comms/dht/src/outbound/error.rs index 970df07224..4b702e778b 100644 --- a/comms/dht/src/outbound/error.rs +++ b/comms/dht/src/outbound/error.rs @@ -21,7 +21,8 @@ // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
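The layer lists documented above (`MetricsLayer`, `DecryptionLayer`, `ForwardLayer`, and friends) all follow the tower `Layer`/`Service` pattern. A rough, standalone sketch of that shape, assuming `tower` 0.4 with the `util` feature and `tokio`; `TagLayer`/`TagService` are hypothetical stand-ins, not actual DHT layers:

```rust
use std::task::{Context, Poll};

use tower::{layer::Layer, Service, ServiceBuilder, ServiceExt};

/// A layer produces a service that wraps the next service in the pipeline.
struct TagLayer;

impl<S> Layer<S> for TagLayer {
    type Service = TagService<S>;

    fn layer(&self, inner: S) -> Self::Service {
        TagService { inner }
    }
}

struct TagService<S> {
    inner: S,
}

impl<S> Service<String> for TagService<S>
where S: Service<String>
{
    type Error = S::Error;
    type Future = S::Future;
    type Response = S::Response;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.inner.poll_ready(cx)
    }

    fn call(&mut self, msg: String) -> Self::Future {
        // A real DHT layer would decrypt, dedup, forward, etc. before calling the inner service.
        self.inner.call(format!("[dht] {}", msg))
    }
}

#[tokio::main]
async fn main() {
    let mut pipeline = ServiceBuilder::new()
        .layer(TagLayer)
        .service_fn(|msg: String| async move { Ok::<_, std::convert::Infallible>(msg.len()) });

    let len = pipeline.ready().await.unwrap().call("hello".to_string()).await.unwrap();
    assert_eq!(len, "[dht] hello".len());
}
```

Each DHT layer wraps its `next_service` in the same way: it does its own work and then either calls the inner service or drops the message.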
use tari_comms::message::MessageError; -use tari_crypto::{signatures::SchnorrSignatureError, tari_utilities::message_format::MessageFormatError}; +use tari_crypto::signatures::SchnorrSignatureError; +use tari_utilities::message_format::MessageFormatError; use thiserror::Error; use tokio::sync::mpsc::error::SendError; diff --git a/comms/dht/src/outbound/mod.rs b/comms/dht/src/outbound/mod.rs index 4ae8076eab..4a675f2853 100644 --- a/comms/dht/src/outbound/mod.rs +++ b/comms/dht/src/outbound/mod.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! DHT middleware layers for outbound messages. + mod broadcast; pub use broadcast::BroadcastLayer; @@ -27,7 +29,7 @@ mod error; pub use error::DhtOutboundError; pub(crate) mod message; -pub use message::{DhtOutboundRequest, OutboundEncryption, SendMessageResponse}; +pub use message::{DhtOutboundMessage, DhtOutboundRequest, OutboundEncryption, SendMessageResponse}; mod message_params; pub use message_params::SendMessageParams; diff --git a/comms/dht/src/peer_validator.rs b/comms/dht/src/peer_validator.rs index db17086443..d9e1d71c42 100644 --- a/comms/dht/src/peer_validator.rs +++ b/comms/dht/src/peer_validator.rs @@ -32,6 +32,7 @@ use crate::DhtConfig; const LOG_TARGET: &str = "dht::network_discovery::peer_validator"; +/// Validation errors for peers shared on the network #[derive(Debug, thiserror::Error)] pub enum PeerValidatorError { #[error("Node ID was invalid for peer '{peer}'")] @@ -44,12 +45,14 @@ pub enum PeerValidatorError { PeerManagerError(#[from] PeerManagerError), } +/// Validator for Peers pub struct PeerValidator<'a> { peer_manager: &'a PeerManager, config: &'a DhtConfig, } impl<'a> PeerValidator<'a> { + /// Creates a new peer validator pub fn new(peer_manager: &'a PeerManager, config: &'a DhtConfig) -> Self { Self { peer_manager, config } } diff --git a/comms/dht/src/rpc/mod.rs b/comms/dht/src/rpc/mod.rs index 533d28fcdc..826641495c 100644 --- a/comms/dht/src/rpc/mod.rs +++ b/comms/dht/src/rpc/mod.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! DHT RPC interface defining RPC methods for peer sharing. + #[cfg(test)] mod mock; #[cfg(test)] diff --git a/comms/dht/src/storage/connection.rs b/comms/dht/src/storage/connection.rs index 1e8e38f53e..f812f25954 100644 --- a/comms/dht/src/storage/connection.rs +++ b/comms/dht/src/storage/connection.rs @@ -40,6 +40,7 @@ use crate::storage::error::StorageError; const LOG_TARGET: &str = "comms::dht::storage::connection"; const SQLITE_POOL_SIZE: usize = 16; +/// Describes how to connect to the database (currently, SQLite). 
#[derive(Clone, Debug, Serialize, Deserialize)] #[serde(into = "String", try_from = "String")] pub enum DbConnectionUrl { @@ -52,10 +53,12 @@ pub enum DbConnectionUrl { } impl DbConnectionUrl { + /// Use a file to store the database pub fn file>(path: P) -> Self { DbConnectionUrl::File(path.as_ref().to_path_buf()) } + /// Returns a database connection string pub fn to_url_string(&self) -> String { use DbConnectionUrl::{File, Memory, MemoryShared}; match self { @@ -96,17 +99,20 @@ impl TryFrom for DbConnectionUrl { } } +/// A SQLite database connection #[derive(Clone)] pub struct DbConnection { pool: SqliteConnectionPool, } impl DbConnection { + /// Connect to an ephemeral database in memory #[cfg(test)] pub fn connect_memory(name: String) -> Result { Self::connect_url(&DbConnectionUrl::MemoryShared(name)) } + /// Connect using the given [DbConnectionUrl](self::DbConnectionUrl). pub fn connect_url(db_url: &DbConnectionUrl) -> Result { debug!(target: LOG_TARGET, "Connecting to database using '{:?}'", db_url); @@ -122,6 +128,7 @@ impl DbConnection { Ok(Self::new(pool)) } + /// Connect and migrate the database, once complete, a handle to the migrated database is returned. pub fn connect_and_migrate(db_url: &DbConnectionUrl) -> Result { let conn = Self::connect_url(db_url)?; let output = conn.migrate()?; @@ -133,10 +140,13 @@ impl DbConnection { Self { pool } } + /// Fetch a connection from the pool. This function synchronously blocks the current thread for up to 60 seconds or + /// until a connection is available. pub fn get_pooled_connection(&self) -> Result>, StorageError> { self.pool.get_pooled_connection().map_err(StorageError::DieselR2d2Error) } + /// Run database migrations pub fn migrate(&self) -> Result { embed_migrations!("./migrations"); diff --git a/comms/dht/src/storage/database.rs b/comms/dht/src/storage/database.rs index 96c89a153b..0894c2b79d 100644 --- a/comms/dht/src/storage/database.rs +++ b/comms/dht/src/storage/database.rs @@ -29,16 +29,19 @@ use crate::{ storage::{dht_setting_entry::NewDhtMetadataEntry, DhtMetadataKey}, }; +/// DHT database containing DHT key/value metadata #[derive(Clone)] pub struct DhtDatabase { connection: DbConnection, } impl DhtDatabase { + /// Create a new DHT database using the provided connection pub fn new(connection: DbConnection) -> Self { Self { connection } } + /// Get a value for the given key, or None if that value has not been set. pub fn get_metadata_value(&self, key: DhtMetadataKey) -> Result, StorageError> { match self.get_metadata_value_bytes(key)? { Some(bytes) => T::from_binary(&bytes).map(Some).map_err(Into::into), @@ -46,6 +49,7 @@ impl DhtDatabase { } } + /// Get the raw bytes for the given key, or None if that value has not been set. 
pub fn get_metadata_value_bytes(&self, key: DhtMetadataKey) -> Result>, StorageError> { let conn = self.connection.get_pooled_connection()?; dht_metadata::table @@ -58,11 +62,13 @@ impl DhtDatabase { }) } + /// Set the value for the given key pub fn set_metadata_value(&self, key: DhtMetadataKey, value: &T) -> Result<(), StorageError> { let bytes = value.to_binary()?; self.set_metadata_value_bytes(key, bytes) } + /// Set the raw bytes for the given key pub fn set_metadata_value_bytes(&self, key: DhtMetadataKey, value: Vec) -> Result<(), StorageError> { let conn = self.connection.get_pooled_connection()?; diesel::replace_into(dht_metadata::table) diff --git a/comms/dht/src/storage/dht_setting_entry.rs b/comms/dht/src/storage/dht_setting_entry.rs index c2f8f78553..df2f77d054 100644 --- a/comms/dht/src/storage/dht_setting_entry.rs +++ b/comms/dht/src/storage/dht_setting_entry.rs @@ -24,6 +24,7 @@ use std::fmt; use crate::schema::dht_metadata; +/// Supported metadata keys for the DHT database #[derive(Debug, Clone, Copy)] pub enum DhtMetadataKey { /// Timestamp each time the DHT is shut down @@ -38,6 +39,7 @@ impl fmt::Display for DhtMetadataKey { } } +/// Struct used to create a new metadata entry #[derive(Clone, Debug, Insertable)] #[table_name = "dht_metadata"] pub struct NewDhtMetadataEntry { @@ -45,6 +47,7 @@ pub struct NewDhtMetadataEntry { pub value: Vec, } +/// Struct used that contains a metadata entry #[derive(Clone, Debug, Queryable, Identifiable)] #[table_name = "dht_metadata"] pub struct DhtMetadataEntry { diff --git a/comms/dht/src/storage/error.rs b/comms/dht/src/storage/error.rs index ebad439dd5..0469846f5b 100644 --- a/comms/dht/src/storage/error.rs +++ b/comms/dht/src/storage/error.rs @@ -25,6 +25,7 @@ use tari_utilities::message_format::MessageFormatError; use thiserror::Error; use tokio::task; +/// Error type for DHT storage #[derive(Debug, Error)] pub enum StorageError { #[error("ConnectionError: {0}")] diff --git a/comms/dht/src/storage/mod.rs b/comms/dht/src/storage/mod.rs index 3b65b199f5..fc793f7ecd 100644 --- a/comms/dht/src/storage/mod.rs +++ b/comms/dht/src/storage/mod.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! DHT storage maintains persistent DHT state including SAF messages and other DHT metadata. + mod connection; pub use connection::{DbConnection, DbConnectionUrl}; diff --git a/comms/dht/src/store_forward/config.rs b/comms/dht/src/store_forward/config.rs index 30d11a335a..d4e6f53e05 100644 --- a/comms/dht/src/store_forward/config.rs +++ b/comms/dht/src/store_forward/config.rs @@ -24,6 +24,7 @@ use std::time::Duration; use serde::{Deserialize, Serialize}; +/// Store and forward configuration. 
#[derive(Debug, Clone, Serialize, Deserialize)] #[serde(deny_unknown_fields)] pub struct SafConfig { diff --git a/comms/dht/src/store_forward/error.rs b/comms/dht/src/store_forward/error.rs index 6344cdf94b..f44200946a 100644 --- a/comms/dht/src/store_forward/error.rs +++ b/comms/dht/src/store_forward/error.rs @@ -32,6 +32,7 @@ use thiserror::Error; use crate::{actor::DhtActorError, envelope::DhtMessageError, outbound::DhtOutboundError, storage::StorageError}; +/// Error type for SAF #[derive(Debug, Error)] pub enum StoreAndForwardError { #[error("DhtMessageError: {0}")] diff --git a/comms/dht/src/store_forward/local_state.rs b/comms/dht/src/store_forward/local_state.rs index 3dc064b80c..ce756f2a5e 100644 --- a/comms/dht/src/store_forward/local_state.rs +++ b/comms/dht/src/store_forward/local_state.rs @@ -27,8 +27,9 @@ use std::{ use tari_comms::peer_manager::NodeId; +/// Keeps track of the current pending SAF requests. #[derive(Debug, Clone, Default)] -pub struct SafLocalState { +pub(crate) struct SafLocalState { inflight_saf_requests: HashMap, } diff --git a/comms/dht/src/store_forward/mod.rs b/comms/dht/src/store_forward/mod.rs index f35a5ffdc6..36373be5d8 100644 --- a/comms/dht/src/store_forward/mod.rs +++ b/comms/dht/src/store_forward/mod.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! Stores messages for a limited time for other offline peers to request later. + type SafResult = Result; mod service; @@ -34,9 +36,6 @@ pub use error::StoreAndForwardError; mod config; pub use config::SafConfig; -mod forward; -pub use forward::ForwardLayer; - mod message; mod saf_handler; diff --git a/comms/dht/src/store_forward/saf_handler/layer.rs b/comms/dht/src/store_forward/saf_handler/layer.rs index cd8a8d8a13..618c9fe3d2 100644 --- a/comms/dht/src/store_forward/saf_handler/layer.rs +++ b/comms/dht/src/store_forward/saf_handler/layer.rs @@ -33,6 +33,7 @@ use crate::{ store_forward::{SafConfig, StoreAndForwardRequester}, }; +/// Layer responsible for handling SAF protocol messages. pub struct MessageHandlerLayer { config: SafConfig, saf_requester: StoreAndForwardRequester, diff --git a/comms/dht/src/store_forward/saf_handler/task.rs b/comms/dht/src/store_forward/saf_handler/task.rs index 7bf18fa8f9..64e5645793 100644 --- a/comms/dht/src/store_forward/saf_handler/task.rs +++ b/comms/dht/src/store_forward/saf_handler/task.rs @@ -276,7 +276,7 @@ where S: Service let message_tag = message.dht_header.message_tag; if let Err(err) = self.check_saf_messages_were_requested(&source_node_id).await { - // TODO: Peer send SAF messages we didn't request?? #banheuristics + // TODO: Peer sent SAF messages we didn't request?? 
#banheuristics warn!(target: LOG_TARGET, "SAF response check failed: {}", err); return Ok(()); } @@ -625,9 +625,8 @@ mod test { use chrono::Utc; use tari_comms::{message::MessageExt, runtime, wrap_in_envelope_body}; - use tari_crypto::tari_utilities::hex; use tari_test_utils::collect_recv; - use tari_utilities::hex::Hex; + use tari_utilities::{hex, hex::Hex}; use tokio::{sync::mpsc, task, time::sleep}; use super::*; diff --git a/comms/dht/src/store_forward/service.rs b/comms/dht/src/store_forward/service.rs index efb792567f..2f295b5fea 100644 --- a/comms/dht/src/store_forward/service.rs +++ b/comms/dht/src/store_forward/service.rs @@ -60,6 +60,7 @@ const LOG_TARGET: &str = "comms::dht::storeforward::actor"; /// This involves cleaning up messages which have been stored too long according to their priority const CLEANUP_INTERVAL: Duration = Duration::from_secs(10 * 60); // 10 mins +/// Query object for fetching stored messages #[derive(Debug, Clone)] pub struct FetchStoredMessageQuery { public_key: Box, @@ -69,6 +70,7 @@ pub struct FetchStoredMessageQuery { } impl FetchStoredMessageQuery { + /// Creates a new stored message request for pub fn new(public_key: Box, node_id: Box) -> Self { Self { public_key, @@ -78,21 +80,25 @@ impl FetchStoredMessageQuery { } } + /// Modify query to only include messages since the given date. pub fn with_messages_since(&mut self, since: DateTime) -> &mut Self { self.since = Some(since); self } + /// Modify query to request a certain category of messages. pub fn with_response_type(&mut self, response_type: SafResponseType) -> &mut Self { self.response_type = response_type; self } - pub fn since(&self) -> Option> { + #[cfg(test)] + pub(crate) fn since(&self) -> Option> { self.since } } +/// Request types for the SAF actor. #[derive(Debug)] pub enum StoreAndForwardRequest { FetchMessages(FetchStoredMessageQuery, oneshot::Sender>>), @@ -104,16 +110,18 @@ pub enum StoreAndForwardRequest { MarkSafResponseReceived(NodeId, oneshot::Sender>), } +/// Store and forward actor handle. #[derive(Clone)] pub struct StoreAndForwardRequester { sender: mpsc::Sender, } impl StoreAndForwardRequester { - pub fn new(sender: mpsc::Sender) -> Self { + pub(crate) fn new(sender: mpsc::Sender) -> Self { Self { sender } } + /// Fetch messages according to the given query from this node's local DB and return them. pub async fn fetch_messages(&mut self, request: FetchStoredMessageQuery) -> SafResult> { let (reply_tx, reply_rx) = oneshot::channel(); self.sender @@ -123,6 +131,7 @@ impl StoreAndForwardRequester { reply_rx.await.map_err(|_| StoreAndForwardError::RequestCancelled)? } + /// Insert a message into the local storage DB. pub async fn insert_message(&mut self, message: NewStoredMessage) -> SafResult { let (reply_tx, reply_rx) = oneshot::channel(); self.sender @@ -132,6 +141,7 @@ impl StoreAndForwardRequester { reply_rx.await.map_err(|_| StoreAndForwardError::RequestCancelled)? } + /// Remove messages from the local storage DB. pub async fn remove_messages(&mut self, message_ids: Vec) -> SafResult<()> { self.sender .send(StoreAndForwardRequest::RemoveMessages(message_ids)) @@ -140,6 +150,7 @@ impl StoreAndForwardRequester { Ok(()) } + /// Remove all messages older than the given `DateTime`. pub async fn remove_messages_older_than(&mut self, threshold: DateTime) -> SafResult<()> { self.sender .send(StoreAndForwardRequest::RemoveMessagesOlderThan(threshold)) @@ -148,6 +159,7 @@ impl StoreAndForwardRequester { Ok(()) } + /// Send a request for SAF messages from the given peer. 
pub async fn request_saf_messages_from_peer(&mut self, node_id: NodeId) -> SafResult<()> { self.sender .send(StoreAndForwardRequest::SendStoreForwardRequestToPeer(node_id)) @@ -156,6 +168,7 @@ impl StoreAndForwardRequester { Ok(()) } + /// Send a request for SAF messages from neighbouring peers. pub async fn request_saf_messages_from_neighbours(&mut self) -> SafResult<()> { self.sender .send(StoreAndForwardRequest::SendStoreForwardRequestNeighbours) @@ -164,7 +177,8 @@ impl StoreAndForwardRequester { Ok(()) } - pub async fn mark_saf_response_received(&mut self, peer: NodeId) -> SafResult> { + /// Updates internal SAF state that a SAF response has been received, removing it from the pending list. + pub(crate) async fn mark_saf_response_received(&mut self, peer: NodeId) -> SafResult> { let (reply_tx, reply_rx) = oneshot::channel(); self.sender .send(StoreAndForwardRequest::MarkSafResponseReceived(peer, reply_tx)) @@ -174,6 +188,7 @@ impl StoreAndForwardRequester { } } +/// Store and forward actor. pub struct StoreAndForwardService { config: SafConfig, dht_requester: DhtRequester, @@ -191,7 +206,8 @@ pub struct StoreAndForwardService { } impl StoreAndForwardService { - pub fn new( + /// Creates a new store and forward actor + pub(crate) fn new( config: SafConfig, conn: DbConnection, peer_manager: Arc, @@ -220,7 +236,7 @@ impl StoreAndForwardService { } } - pub fn spawn(self) { + pub(crate) fn spawn(self) { debug!(target: LOG_TARGET, "Store and forward service started"); task::spawn(self.run()); } diff --git a/comms/dht/src/store_forward/store.rs b/comms/dht/src/store_forward/store.rs index d06e3bd976..a26cee9bb2 100644 --- a/comms/dht/src/store_forward/store.rs +++ b/comms/dht/src/store_forward/store.rs @@ -54,6 +54,7 @@ pub struct StoreLayer { } impl StoreLayer { + /// New store layer. pub fn new( config: SafConfig, peer_manager: Arc, diff --git a/comms/dht/src/test_utils/makers.rs b/comms/dht/src/test_utils/makers.rs index 5b01668555..7354d1e72a 100644 --- a/comms/dht/src/test_utils/makers.rs +++ b/comms/dht/src/test_utils/makers.rs @@ -100,7 +100,7 @@ pub fn make_dht_header( origin_mac = make_valid_origin_mac(node_identity, challenge); if flags.is_encrypted() { let shared_secret = crypt::generate_ecdh_secret(e_secret_key, node_identity.public_key()); - origin_mac = crypt::encrypt(&shared_secret, &origin_mac).unwrap() + origin_mac = crypt::encrypt(&shared_secret, &origin_mac); } } DhtMessageHeader { @@ -170,7 +170,7 @@ pub fn make_dht_envelope( let (e_secret_key, e_public_key) = make_keypair(); if flags.is_encrypted() { let shared_secret = crypt::generate_ecdh_secret(&e_secret_key, node_identity.public_key()); - message = crypt::encrypt(&shared_secret, &message).unwrap(); + message = crypt::encrypt(&shared_secret, &message); } let header = make_dht_header( node_identity, diff --git a/comms/dht/src/test_utils/mod.rs b/comms/dht/src/test_utils/mod.rs index 5e453d1f47..03f531c075 100644 --- a/comms/dht/src/test_utils/mod.rs +++ b/comms/dht/src/test_utils/mod.rs @@ -20,6 +20,8 @@ // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE // USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +//! Provides a number of DHT mocks and other testing-related utilities. + macro_rules! 
unwrap_oms_send_msg { ($var:expr, reply_value=$reply_value:expr) => { match $var { diff --git a/comms/dht/src/version.rs b/comms/dht/src/version.rs index 24b783d77b..26fc01b923 100644 --- a/comms/dht/src/version.rs +++ b/comms/dht/src/version.rs @@ -31,6 +31,7 @@ use serde::{Deserialize, Serialize}; use crate::envelope::DhtMessageError; +/// Versions for the DHT protocol #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)] #[serde(try_from = "u32", into = "u32")] pub enum DhtProtocolVersion { @@ -39,18 +40,22 @@ pub enum DhtProtocolVersion { } impl DhtProtocolVersion { + /// Returns the latest version pub fn latest() -> Self { DhtProtocolVersion::v2() } + /// Returns v1 version pub fn v1() -> Self { DhtProtocolVersion::V1 { minor: 0 } } + /// Returns v2 version pub fn v2() -> Self { DhtProtocolVersion::V2 { minor: 0 } } + /// Returns the byte representation for the version pub fn to_bytes(self) -> Vec { let mut buf = Vec::with_capacity(4 * 2); buf.write_all(&self.as_major().to_le_bytes()).unwrap(); @@ -58,6 +63,7 @@ impl DhtProtocolVersion { buf } + /// Returns the major version number pub fn as_major(&self) -> u32 { use DhtProtocolVersion::{V1, V2}; match self { @@ -66,6 +72,7 @@ impl DhtProtocolVersion { } } + /// Returns the minor version number pub fn as_minor(&self) -> u32 { use DhtProtocolVersion::{V1, V2}; match self { diff --git a/comms/dht/tests/dht.rs b/comms/dht/tests/dht.rs index 5610566aac..52528b9f2c 100644 --- a/comms/dht/tests/dht.rs +++ b/comms/dht/tests/dht.rs @@ -892,7 +892,7 @@ async fn dht_repropagate() { .unwrap(); } - // This relies on the DHT being set with .with_dedup_discard_hit_count(3) + // This relies on the DHT being set with dedup_allowed_message_occurrences = 3 receive_and_repropagate(&mut node_B, &out_msg).await; receive_and_repropagate(&mut node_C, &out_msg).await; receive_and_repropagate(&mut node_A, &out_msg).await; diff --git a/comms/rpc_macros/src/expand.rs b/comms/rpc_macros/src/expand.rs index 8c6d0d8b1e..9097a01f03 100644 --- a/comms/rpc_macros/src/expand.rs +++ b/comms/rpc_macros/src/expand.rs @@ -183,36 +183,13 @@ impl TraitInfoCollector { )); } - let request_arg = &node.sig.inputs[1]; - match request_arg { - FnArg::Typed(syn::PatType { ty, .. }) => match &**ty { - Type::Path(syn::TypePath { path, .. 
}) => { - let path = path - .segments - .first() - .ok_or_else(|| syn_error!(request_arg, "unexpected type in trait definition"))?; + self.parse_request_type(node, info)?; + self.parse_method_return_type(node, info)?; - match &path.arguments { - PathArguments::AngleBracketed(args) => { - let arg = args - .args - .first() - .ok_or_else(|| syn_error!(request_arg, "expected Request"))?; - match arg { - GenericArgument::Type(ty) => { - info.request_type = Some((*ty).clone()); - }, - _ => return Err(syn_error!(request_arg, "expected request type")), - } - }, - _ => return Err(syn_error!(request_arg, "expected request type")), - } - }, - _ => return Err(syn_error!(request_arg, "expected request type")), - }, - _ => return Err(syn_error!(request_arg, "expected request argument, got a receiver")), - } + Ok(()) + } + fn parse_method_return_type(&self, node: &syn::TraitItemMethod, info: &mut RpcMethodInfo) -> syn::Result<()> { let ident = info.method_ident.clone(); let invalid_return_type = || { syn_error!( @@ -223,9 +200,7 @@ impl TraitInfoCollector { }; match &node.sig.output { - ReturnType::Default => { - return Err(invalid_return_type()); - }, + ReturnType::Default => Err(invalid_return_type()), ReturnType::Type(_, ty) => match &**ty { Type::Path(path) => match path.path.segments.first() { Some(syn::PathSegment { @@ -253,26 +228,56 @@ impl TraitInfoCollector { match arg { GenericArgument::Type(ty) => { info.return_type = Some((*ty).clone()); + Ok(()) }, - _ => return Err(invalid_return_type()), + _ => Err(invalid_return_type()), } }, - _ => return Err(invalid_return_type()), + _ => Err(invalid_return_type()), } }, - _ => return Err(invalid_return_type()), + _ => Err(invalid_return_type()), } }, - _ => return Err(invalid_return_type()), - }, - _ => { - return Err(invalid_return_type()); + _ => Err(invalid_return_type()), }, + _ => Err(invalid_return_type()), }, } + } - Ok(()) + fn parse_request_type(&self, node: &syn::TraitItemMethod, info: &mut RpcMethodInfo) -> syn::Result<()> { + let request_arg = &node.sig.inputs[1]; + match request_arg { + FnArg::Typed(syn::PatType { ty, .. }) => match &**ty { + Type::Path(syn::TypePath { path, .. 
}) => { + let path = path + .segments + .first() + .ok_or_else(|| syn_error!(request_arg, "unexpected type in trait definition"))?; + + match &path.arguments { + PathArguments::AngleBracketed(args) => { + let arg = args + .args + .first() + .ok_or_else(|| syn_error!(request_arg, "expected Request"))?; + match arg { + GenericArgument::Type(ty) => { + info.request_type = Some((*ty).clone()); + Ok(()) + }, + _ => Err(syn_error!(request_arg, "expected request type")), + } + }, + _ => Err(syn_error!(request_arg, "expected request type")), + } + }, + _ => Err(syn_error!(request_arg, "expected request type")), + }, + _ => Err(syn_error!(request_arg, "expected request argument, got a receiver")), + } } } diff --git a/dan_layer/core/Cargo.toml b/dan_layer/core/Cargo.toml index 8873a8c3f9..ff4bb7e4db 100644 --- a/dan_layer/core/Cargo.toml +++ b/dan_layer/core/Cargo.toml @@ -11,7 +11,7 @@ tari_common = { path = "../../common" } tari_comms = { path = "../../comms/core" } tari_comms_dht = { path = "../../comms/dht" } tari_comms_rpc_macros = { path = "../../comms/rpc_macros" } -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } tari_mmr = { path = "../../base_layer/mmr" } tari_p2p = { path = "../../base_layer/p2p" } tari_service_framework = { path = "../../base_layer/service_framework" } @@ -20,7 +20,7 @@ tari_storage = { path = "../../infrastructure/storage" } tari_core = {path = "../../base_layer/core"} tari_dan_common_types = {path = "../common_types"} tari_common_types = {path = "../../base_layer/common_types"} -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } anyhow = "1.0.53" async-trait = "0.1.50" diff --git a/dan_layer/core/src/models/asset_definition.rs b/dan_layer/core/src/models/asset_definition.rs index 81a7210297..3aeb3d7233 100644 --- a/dan_layer/core/src/models/asset_definition.rs +++ b/dan_layer/core/src/models/asset_definition.rs @@ -25,7 +25,7 @@ use std::{fmt, marker::PhantomData}; use serde::{self, de, Deserialize, Deserializer, Serialize}; use tari_common_types::types::{PublicKey, ASSET_CHECKPOINT_ID}; use tari_core::transactions::transaction_components::TemplateParameter; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; #[derive(Deserialize, Clone, Debug)] #[serde(default)] diff --git a/dan_layer/core/src/services/asset_proxy.rs b/dan_layer/core/src/services/asset_proxy.rs index 6dc34917e5..2999436abe 100644 --- a/dan_layer/core/src/services/asset_proxy.rs +++ b/dan_layer/core/src/services/asset_proxy.rs @@ -24,7 +24,7 @@ use async_trait::async_trait; use futures::stream::FuturesUnordered; use log::*; use tari_common_types::types::{PublicKey, ASSET_CHECKPOINT_ID}; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tokio_stream::StreamExt; use crate::{ diff --git a/dan_layer/core/src/templates/tip002_template.rs b/dan_layer/core/src/templates/tip002_template.rs index 94b32d0636..0d861d7cb0 100644 --- a/dan_layer/core/src/templates/tip002_template.rs +++ b/dan_layer/core/src/templates/tip002_template.rs @@ -22,8 +22,8 @@ use prost::Message; use tari_core::transactions::transaction_components::TemplateParameter; -use tari_crypto::tari_utilities::{hex::Hex, ByteArray}; use tari_dan_common_types::proto::tips::tip002; +use tari_utilities::{hex::Hex, ByteArray}; 
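On the `rpc_macros` refactor above: the nested `match` with early `return Err(...)` arms is split into `parse_request_type`/`parse_method_return_type` helpers whose arms are plain expressions evaluating to a `Result`. A simplified, self-contained sketch of that shape, using stand-in types rather than the real `syn` AST:

```rust
// Simplified stand-ins for the syn nodes the macro inspects.
#[derive(Debug)]
enum RequestArg {
    Receiver,                    // e.g. `&self`
    Typed(Option<&'static str>), // Some(T) if the argument looks like `Request<T>`
}

struct RpcMethodInfo {
    request_type: Option<String>,
}

// Every arm is an expression that evaluates to a `Result`, instead of an early `return Err(...)`.
fn parse_request_type(arg: &RequestArg, info: &mut RpcMethodInfo) -> Result<(), String> {
    match arg {
        RequestArg::Typed(Some(ty)) => {
            info.request_type = Some((*ty).to_string());
            Ok(())
        },
        RequestArg::Typed(None) => Err("expected Request<T>".to_string()),
        RequestArg::Receiver => Err("expected request argument, got a receiver".to_string()),
    }
}

fn main() {
    let mut info = RpcMethodInfo { request_type: None };
    parse_request_type(&RequestArg::Typed(Some("GetPeersRequest")), &mut info).unwrap();
    assert_eq!(info.request_type.as_deref(), Some("GetPeersRequest"));
    assert!(parse_request_type(&RequestArg::Receiver, &mut info).is_err());
}
```

Keeping every arm an expression lets the compiler check that all paths produce a `Result`, which is what makes the helper extraction in the patch largely mechanical.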
use crate::{ models::{Instruction, InstructionSet, TemplateId}, diff --git a/dan_layer/core/src/templates/tip004_template.rs b/dan_layer/core/src/templates/tip004_template.rs index 88b75325cb..28b8aa7329 100644 --- a/dan_layer/core/src/templates/tip004_template.rs +++ b/dan_layer/core/src/templates/tip004_template.rs @@ -24,8 +24,9 @@ use digest::Digest; use log::*; use prost::Message; use tari_core::transactions::transaction_components::TemplateParameter; -use tari_crypto::{common::Blake256, tari_utilities::hex::Hex}; +use tari_crypto::common::Blake256; use tari_dan_common_types::proto::tips::tip004; +use tari_utilities::hex::Hex; use crate::{ models::InstructionSet, diff --git a/dan_layer/core/src/templates/tip721_template.rs b/dan_layer/core/src/templates/tip721_template.rs index f954b352cb..8ae2287d2f 100644 --- a/dan_layer/core/src/templates/tip721_template.rs +++ b/dan_layer/core/src/templates/tip721_template.rs @@ -23,8 +23,8 @@ use log::*; use prost::Message; use tari_core::transactions::transaction_components::TemplateParameter; -use tari_crypto::tari_utilities::{hex::Hex, ByteArray}; use tari_dan_common_types::proto::tips::tip721; +use tari_utilities::{hex::Hex, ByteArray}; use crate::{ models::InstructionSet, diff --git a/dan_layer/core/src/workers/states/decide_state.rs b/dan_layer/core/src/workers/states/decide_state.rs index 379d9b2698..d88e235b9f 100644 --- a/dan_layer/core/src/workers/states/decide_state.rs +++ b/dan_layer/core/src/workers/states/decide_state.rs @@ -24,7 +24,7 @@ use std::collections::HashMap; use log::*; use tari_common_types::types::PublicKey; -use tari_crypto::tari_utilities::hex::Hex; +use tari_utilities::hex::Hex; use tokio::time::{sleep, Duration}; use crate::{ diff --git a/dan_layer/storage_sqlite/Cargo.toml b/dan_layer/storage_sqlite/Cargo.toml index a3c1d3c269..19a873c888 100644 --- a/dan_layer/storage_sqlite/Cargo.toml +++ b/dan_layer/storage_sqlite/Cargo.toml @@ -8,7 +8,7 @@ license = "BSD-3-Clause" tari_dan_core = {path="../core"} tari_common = { path = "../../common"} tari_common_types = {path = "../../base_layer/common_types"} -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } diesel = { version = "1.4.8", features = ["sqlite"] } diff --git a/infrastructure/storage/Cargo.toml b/infrastructure/storage/Cargo.toml index b0a7ee24cc..922a7477c2 100644 --- a/infrastructure/storage/Cargo.toml +++ b/infrastructure/storage/Cargo.toml @@ -19,4 +19,4 @@ serde_derive = "1.0.80" [dev-dependencies] rand = "0.8" -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } diff --git a/infrastructure/storage/tests/lmdb.rs b/infrastructure/storage/tests/lmdb.rs index 27c57e8aef..38441e39e0 100644 --- a/infrastructure/storage/tests/lmdb.rs +++ b/infrastructure/storage/tests/lmdb.rs @@ -36,7 +36,6 @@ use tari_storage::{ lmdb_store::{db, LMDBBuilder, LMDBConfig, LMDBDatabase, LMDBError, LMDBStore}, IterationResult, }; -use tari_utilities::ExtendBytes; #[derive(Debug, PartialEq, Eq, Serialize, Deserialize)] struct User { @@ -71,17 +70,6 @@ impl User { } } -impl ExtendBytes for User { - fn append_raw_bytes(&self, buf: &mut Vec) { - self.id.append_raw_bytes(buf); - self.first.append_raw_bytes(buf); - self.last.append_raw_bytes(buf); - self.email.append_raw_bytes(buf); - 
self.male.append_raw_bytes(buf); - buf.extend_from_slice(self.ip.to_string().as_bytes()); - } -} - fn get_path(name: &str) -> String { let mut path = PathBuf::from(env!("CARGO_MANIFEST_DIR")); path.push("tests/data"); diff --git a/infrastructure/tari_script/Cargo.toml b/infrastructure/tari_script/Cargo.toml index 58bf503bd2..61b70885b7 100644 --- a/infrastructure/tari_script/Cargo.toml +++ b/infrastructure/tari_script/Cargo.toml @@ -11,8 +11,8 @@ readme = "README.md" license = "BSD-3-Clause" [dependencies] -tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.12.5" } -tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.3.1" } +tari_crypto = { git = "https://github.com/tari-project/tari-crypto.git", tag = "v0.13.0" } +tari_utilities = { git = "https://github.com/tari-project/tari_utilities.git", tag = "v0.4.3" } blake2 = "0.9" digest = "0.9.0" diff --git a/integration_tests/features/WalletRoutingMechanism.feature b/integration_tests/features/WalletRoutingMechanism.feature index 36828d014a..96472eba69 100644 --- a/integration_tests/features/WalletRoutingMechanism.feature +++ b/integration_tests/features/WalletRoutingMechanism.feature @@ -4,40 +4,59 @@ @wallet-routing_mechanism @wallet Feature: Wallet Routing Mechanism -@flaky -Scenario Outline: Wallets transacting via specified routing mechanism only - Given I have a seed node NODE - And I have base nodes connected to all seed nodes - And I have non-default wallet WALLET_A connected to all seed nodes using - And I have mining node MINER connected to base node NODE and wallet WALLET_A - And I have non-default wallets connected to all seed nodes using + @flaky + Scenario Outline: Wallets transacting via specified routing mechanism only + Given I have a seed node NODE + And I have base nodes connected to all seed nodes + And I have non-default wallet WALLET_A connected to all seed nodes using + And I have mining node MINER connected to base node NODE and wallet WALLET_A + And I have non-default wallets connected to all seed nodes using # We need to ensure the coinbase lock heights are gone and we have enough individual UTXOs; mine enough blocks - And mining node MINER mines 20 blocks - Then all nodes are at height 20 + And mining node MINER mines 20 blocks + Then all nodes are at height 20 # TODO: This wait is needed to stop base nodes from shutting down - When I wait 1 seconds - When I wait for wallet WALLET_A to have at least 100000000 uT - #When I print the world - And I multi-send 1000000 uT from wallet WALLET_A to all wallets at fee 100 + When I wait 1 seconds + When I wait for wallet WALLET_A to have at least 100000000 uT + #When I print the world + And I multi-send 1000000 uT from wallet WALLET_A to all wallets at fee 100 # TODO: This wait is needed to stop next merge mining task from continuing - When I wait 1 seconds - And mining node MINER mines 1 blocks - Then all nodes are at height 21 - Then all wallets detect all transactions as Mined_Unconfirmed + When I wait 1 seconds + And mining node MINER mines 1 blocks + Then all nodes are at height 21 + Then all wallets detect all transactions as Mined_Unconfirmed # TODO: This wait is needed to stop next merge mining task from continuing - When I wait 1 seconds - And mining node MINER mines 11 blocks - Then all nodes are at height 32 - Then all wallets detect all transactions as Mined_Confirmed + When I wait 1 seconds + And mining node MINER mines 11 blocks + Then all nodes are at height 32 + Then all wallets detect all 
transactions as Mined_Confirmed # TODO: This wait is needed to stop base nodes from shutting down - When I wait 1 seconds - @long-running - Examples: - | NumBaseNodes | NumWallets | Mechanism | - | 5 | 5 | DirectAndStoreAndForward | - | 5 | 5 | DirectOnly | + When I wait 1 seconds + @long-running + Examples: + | NumBaseNodes | NumWallets | Mechanism | + | 5 | 5 | DirectAndStoreAndForward | + | 5 | 5 | DirectOnly | - @long-running - Examples: - | NumBaseNodes | NumWallets | Mechanism | - | 5 | 5 | StoreAndForwardOnly | + @long-running + Examples: + | NumBaseNodes | NumWallets | Mechanism | + | 5 | 5 | StoreAndForwardOnly | + + Scenario: Store and forward TX + Given I have a seed node SEED + And I have a base node BASE connected to seed SEED + And I have wallet SENDER connected to base node BASE + And I have wallet RECEIVER connected to base node BASE + And I stop wallet RECEIVER + And I have mining node MINE connected to base node BASE and wallet SENDER + And mining node MINE mines 5 blocks + Then I wait for wallet SENDER to have at least 1000000 uT + And I send 1000000 uT from wallet SENDER to wallet RECEIVER at fee 100 + And I wait 121 seconds + And I stop wallet SENDER + And I wait 360 seconds + And I restart wallet RECEIVER + And I wait 121 seconds + And I stop wallet RECEIVER + And I restart wallet SENDER + And wallet SENDER detects all transactions are at least Broadcast \ No newline at end of file diff --git a/integration_tests/features/support/wallet_cli_steps.js b/integration_tests/features/support/wallet_cli_steps.js index 3a93d91240..c7dff3cf10 100644 --- a/integration_tests/features/support/wallet_cli_steps.js +++ b/integration_tests/features/support/wallet_cli_steps.js @@ -36,7 +36,7 @@ Then( async function (name, is_not, password) { let wallet = this.getWallet(name); try { - await wallet.start(password); + await wallet.start({ password }); } catch (error) { expect(error).to.equal( is_not === "not" ? 
"Incorrect password" : undefined diff --git a/integration_tests/features/support/world.js b/integration_tests/features/support/world.js index 89b962b992..ed985a8262 100644 --- a/integration_tests/features/support/world.js +++ b/integration_tests/features/support/world.js @@ -426,7 +426,7 @@ class CustomWorld { async startNode(name, args) { const node = this.seeds[name] || this.nodes[name]; - await node.start(args); + await node.start({ args }); console.log("\n", name, "started\n"); } diff --git a/integration_tests/helpers/baseNodeProcess.js b/integration_tests/helpers/baseNodeProcess.js index c99844d9f1..273622412c 100644 --- a/integration_tests/helpers/baseNodeProcess.js +++ b/integration_tests/helpers/baseNodeProcess.js @@ -252,7 +252,7 @@ class BaseNodeProcess { return await this.createGrpcClient(); } - async start(opts = []) { + async start(opts = {}) { const args = [ "--non-interactive-mode", "--watch", @@ -260,13 +260,15 @@ class BaseNodeProcess { "--base-path", ".", "--network", - "localnet", + opts.network || "localnet", ]; if (this.logFilePath) { args.push("--log-config", this.logFilePath); } - args.concat(opts); - const overrides = this.getOverrides(); + if (opts.args) { + args.concat(opts.args); + } + const overrides = Object.assign(this.getOverrides(), opts.config); Object.keys(overrides).forEach((k) => { args.push("-p"); args.push(`${k}=${overrides[k]}`); diff --git a/integration_tests/helpers/ffi/ffiInterface.js b/integration_tests/helpers/ffi/ffiInterface.js index 83cfe81938..e4d5c1598b 100644 --- a/integration_tests/helpers/ffi/ffiInterface.js +++ b/integration_tests/helpers/ffi/ffiInterface.js @@ -51,7 +51,10 @@ class InterfaceFFI { } const ps = spawn(cmd, args, { cwd: baseDir, - env: { ...process.env }, + env: { + ...process.env, + CARGO_TARGET_DIR: process.cwd() + "/temp/ffi-target", + }, }); ps.on("close", (_code) => { resolve(ps); diff --git a/integration_tests/helpers/walletProcess.js b/integration_tests/helpers/walletProcess.js index 868db80cf7..9022120250 100644 --- a/integration_tests/helpers/walletProcess.js +++ b/integration_tests/helpers/walletProcess.js @@ -213,17 +213,17 @@ class WalletProcess { }); } - async start(password) { + async start(opts = {}) { const args = [ "--base-path", ".", "--password", - `${password ? password : "kensentme"}`, + opts.password || "kensentme", "--seed-words-file-name", this.seedWordsFile, "--non-interactive", "--network", - "localnet", + opts.network || (this.options || {}).network || "localnet", ]; if (this.recoverWallet) { args.push("--recover", "--seed-words", this.seedWords); @@ -231,7 +231,7 @@ class WalletProcess { if (this.logFilePath) { args.push("--log-config", this.logFilePath); } - const overrides = this.getOverrides(); + const overrides = Object.assign(this.getOverrides(), opts.config); Object.keys(overrides).forEach((k) => { args.push("-p"); args.push(`${k}=${overrides[k]}`); diff --git a/integration_tests/package-lock.json b/integration_tests/package-lock.json index a5bff4bcbc..743e2cc174 100644 --- a/integration_tests/package-lock.json +++ b/integration_tests/package-lock.json @@ -898,9 +898,9 @@ "dev": true }, "async": { - "version": "3.2.1", - "resolved": false, - "integrity": "sha512-XdD5lRO/87udXCMC9meWdYiR+Nq6ZjUfXidViUZGu2F1MO4T3XwZ1et0hb2++BgLfhyJwy44BGB/yx80ABx8hg==" + "version": "3.2.3", + "resolved": "https://registry.npmjs.org/async/-/async-3.2.3.tgz", + "integrity": "sha512-spZRyzKL5l5BZQrr/6m/SqFdBN0q3OCI0f9rjfBzCMBIP4p75P620rR3gTmaksNOhmzgdxcaxdNfMy6anrbM0g==" }, "axios": { "version": "0.21.4",