From 90956168bcc3de5c7d9fb6109da5d1c9aeb6b653 Mon Sep 17 00:00:00 2001
From: Andrew Gunnerson
Date: Sun, 27 Aug 2023 19:43:03 -0400
Subject: [PATCH] avbroot 2.0: Rewrite in Rust

Why?
----

It was always my intention to write avbroot in a compiled language. Python was a stop-gap solution since it was possible to use the various tools and parsers from AOSP to make the initial prototyping and implementation easier. However, doing so required a whole lot of hacks, since nearly all of the Python modules we use were intended to be used as executables, not libraries, and they were definitely not meant to be used outside of AOSP's code base.

Although the dependencies on AOSP code have been reduced over time, working on the Python code is still frustrating. The majority of the modules we use, from both the standard library and external dependencies, lack type annotations, and all of the Python language servers and type checkers I've tried choked on them. There have been several avbroot bugs in the past that wouldn't have happened with any statically typed language.

The catalyst for working on this recently was dealing with some python-protobuf versions that wouldn't work with AOSP's pregenerated protobuf bindings: parsing protobuf messages would fail with obscure runtime type errors.

I need my projects to not feel frustrating or else I'll just get burnt out. Hence, the Rust rewrite. With fewer hacks this time! avbroot no longer has any dependencies on external tools like openssl. I'll be providing precompiled binaries for the three major desktop OSes, built by GitHub Actions. avbroot will also be versioned now, starting at 2.0.0.

What's new?
-----------

* A new `avbroot ota verify` subcommand has been added to check that all OTA and AVB related components have been properly hashed and signed. This works for all OTA images, including stock ones.

* A couple of new `avbroot avb` subcommands have been added for dumping vbmeta header/footer information and verifying AVB signatures. These are roughly equivalent to avbtool's `info_image` and `verify_image` subcommands, though avbroot is about an order of magnitude faster than the latter.

* A new set of `avbroot boot` subcommands has been added for packing and unpacking boot images. It supports Android boot images v0-v4 and vendor boot images v3-v4. Repacking is lossless, even when using deprecated fields like the boot image v4 VTS signature.

* A new `avbroot ramdisk` subcommand has been added for inspecting the CPIO structure of ramdisks.

* A new set of `avbroot key` subcommands has been added for generating signing keys, so it's no longer necessary to install openssl and avbtool (though of course, keys generated by other tools remain fully compatible).

* Since avbroot has a ton of CLI options, a new `avbroot completion` subcommand has been added for generating tab-completion configs for various shells (e.g. bash, zsh, fish, powershell).

What was removed?
-----------------

Nothing :)

The `patch` and `extract` subcommands have been moved under `avbroot ota` and the `magisk-info` subcommand has been moved under `avbroot boot`, but there are compatibility shims in place to keep all the old commands working (see the sketch below). The command-line interface will remain backwards compatible for as long as possible, even across new major releases.

The Rust API, however, has no backwards compatibility guarantees. I currently don't intend for avbroot's "library" components to be used anywhere outside of Custota and avbroot itself.
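To tie the new and relocated subcommands together, here's a sketch of a typical patch-and-verify session. Only the subcommand groups above are confirmed by this change; the specific flag names and file paths are illustrative assumptions (`avbroot <subcommand> --help` shows the real interface):

```sh
# Patch a stock OTA (formerly the top-level `avbroot patch` subcommand).
# All flag names and paths here are assumptions for illustration.
avbroot ota patch \
    --input ota.zip \
    --privkey-avb avb.key \
    --privkey-ota ota.key \
    --cert-ota ota.crt \
    --magisk Magisk.apk

# Check that all OTA and AVB related components are properly hashed and
# signed. This also works on unmodified stock images.
avbroot ota verify --input ota.zip.patched

# Generate a tab-completion config, e.g. for bash.
avbroot completion --shell bash
```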
Performance
-----------

Due to having better access to low-level APIs (especially `pread` and `pwrite`), nearly everything that can be multithreaded in avbroot is now multithreaded. In addition, the patching operation is done entirely in memory without temp files, and the maximum memory usage is still about 100 MB lower than with the Python implementation.

The new implementation is bottlenecked by how fast a single CPU core can calculate 3 SHA256 hashes of overlapping regions spanning the majority of the OTA file. About 90% of the CPU time is spent calculating SHA256 hashes and another 5% or so performing XZ compression.

Some numbers:

* Patching should take roughly 40%-70% of the time it took before.
* Extracting with `--all` should take roughly 10%-30% of the time it took before.

Folks with x86_64 CPUs supporting the SHA-NI extensions (e.g. Intel 11th gen and newer) should see even bigger improvements.

Reproducibility
---------------

The new implementation's output files are bit-for-bit identical when the inputs are the same. However, they do not exactly match what the Python implementation produced:

* The zip entries, aside from `metadata` and `metadata.pb`, are written in sorted order.
* All zip entries are stored without compression.
* All zip entries are stored without additional metadata (e.g. modification timestamp).
* The OTA certificate, both in the OTA zip and in the recovery ramdisk's `otacerts.zip`, goes through deserialization + serialization before being written. Text in the certificate file before the header and after the footer will be stripped out.
* The protobuf structures (payload header and OTA metadata) are serialized differently. Protobuf has more than one way to encode the same message "on the wire". The Rust quick_protobuf library serializes messages a bit differently than python-protobuf, but the outputs are mutually compatible.
* XZ compression of modified partition images in the payload is now done at compression level 0 instead of 6. This reduces the patching time by several seconds at the cost of a couple MiB increase in file size.
* Ramdisks are now compressed with standard LZ4 instead of LZ4HC (high compression mode). For our use case, the difference is <100 KiB, and using standard LZ4 allows us to use a pure-Rust LZ4 library and makes the compression step much faster.
* Older ramdisks compressed with gzip are slightly different due to a different gzip implementation being used (flate2 vs. zlib). The two implementations structure the gzip frames slightly differently, but the output is identical when decompressed.
* Magisk's config file in the ramdisk (`.backup/.magisk`) will have the `SHA1` field set to all zeros. This allows avbroot to keep track of less information during patching for better performance. The field is only used for Magisk's uninstall feature, which can't ever be used in a locked bootloader setup anyway.

Misc
----

While working on the new `avbroot ota verify` subcommand, I found that the `ossi` stock image (OnePlus 10 Pro) used in avbroot's tests has an invalid vbmeta hash for the `odm` partition. I thought it was an avbroot bug, but AOSP's avbtool reports the same invalid hash too. If that image actually boots, then I'm not sure AVB can be trusted on those devices...
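For anyone who wants to reproduce that finding, the checks look roughly like the following. `avbtool verify_image` is AOSP's existing interface; the `avbroot avb` invocation is a sketch, since the exact subcommand and flag names aren't spelled out above:

```sh
# avbroot's verifier (subcommand and flag names assumed), which checks
# the AVB hashes and signatures.
avbroot avb verify --input vbmeta.img

# AOSP's reference implementation reports the same invalid hash for the
# odm partition on the ossi image.
avbtool verify_image --image vbmeta.img
```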
Signed-off-by: Andrew Gunnerson --- .github/actions/preload-img-cache/action.yml | 33 +- .../actions/preload-magisk-cache/action.yml | 25 +- .github/actions/preload-tox-cache/action.yml | 30 - .github/workflows/ci.yml | 180 +- .github/workflows/deny.yml | 16 + .github/workflows/modules.yml | 9 +- .gitignore | 8 +- .gitmodules | 9 - Cargo.lock | 2246 +++++++++++++++++ Cargo.toml | 69 + README.extra.md | 83 + README.md | 215 +- avbroot.py | 6 - avbroot/__init__.py | 13 - avbroot/boot.py | 480 ---- avbroot/formats/bootimage.py | 771 ------ avbroot/formats/compression.py | 187 -- avbroot/formats/cpio.py | 282 --- avbroot/formats/padding.py | 47 - avbroot/main.py | 731 ------ avbroot/openssl.py | 222 -- avbroot/ota.py | 817 ------ avbroot/util.py | 221 -- avbroot/vbmeta.py | 195 -- build.rs | 43 + deny.toml | 39 + {tests => e2e}/.gitignore | 0 e2e/Cargo.toml | 26 + e2e/README.md | 64 + e2e/e2e.toml | 116 + .../keys/TEST_KEY_DO_NOT_USE_avb.key | 0 e2e/keys/TEST_KEY_DO_NOT_USE_avb.passphrase | 1 + e2e/keys/TEST_KEY_DO_NOT_USE_avb_pkmd.bin | Bin 0 -> 1032 bytes .../keys/TEST_KEY_DO_NOT_USE_ota.crt | 0 .../keys/TEST_KEY_DO_NOT_USE_ota.key | 0 e2e/keys/TEST_KEY_DO_NOT_USE_ota.passphrase | 1 + e2e/src/cli.rs | 181 ++ e2e/src/config.rs | 140 + e2e/src/download.rs | 456 ++++ e2e/src/main.rs | 794 ++++++ external/avb | 1 - external/build | 1 - external/update_engine | 1 - extra/README.md | 62 - extra/bootimagetool.py | 168 -- extra/cpiotool.py | 120 - protobuf/ota_metadata.proto | 115 + protobuf/update_metadata.proto | 437 ++++ requirements.txt | 3 - src/boot.rs | 755 ++++++ src/cli/args.rs | 51 + src/cli/avb.rs | 237 ++ src/cli/boot.rs | 493 ++++ src/cli/completion.rs | 26 + src/cli/key.rs | 168 ++ src/cli/mod.rs | 27 + src/cli/ota.rs | 1347 ++++++++++ src/cli/ramdisk.rs | 153 ++ src/crypto.rs | 406 +++ src/format/avb.rs | 1691 +++++++++++++ src/format/bootimage.rs | 1248 +++++++++ src/format/compression.rs | 206 ++ src/format/cpio.rs | 362 +++ src/format/mod.rs | 12 + src/format/ota.rs | 656 +++++ src/format/padding.rs | 49 + src/format/payload.rs | 941 +++++++ src/lib.rs | 22 + src/main.rs | 26 + src/protobuf.rs | 1 + src/stream.rs | 932 +++++++ src/util.rs | 118 + tests/README.md | 84 - tests/avb.rs | 82 + tests/bootimage.rs | 125 + tests/compression.rs | 39 + tests/config.py | 44 - tests/data/boot_v0.img | Bin 0 -> 16384 bytes tests/data/boot_v1.img | Bin 0 -> 20480 bytes tests/data/boot_v2.img | Bin 0 -> 24576 bytes tests/data/boot_v3.img | Bin 0 -> 12288 bytes tests/data/boot_v4.img | Bin 0 -> 12288 bytes tests/data/boot_v4_vts.img | Bin 0 -> 16384 bytes tests/data/vbmeta_appended.img | Bin 0 -> 12288 bytes tests/data/vbmeta_root.img | Bin 0 -> 3712 bytes tests/data/vendor_v3.img | Bin 0 -> 12288 bytes tests/data/vendor_v4.img | Bin 0 -> 12288 bytes tests/distros/Containerfile.alpine | 5 - tests/distros/Containerfile.arch | 6 - tests/distros/Containerfile.arch-wine | 55 - tests/distros/Containerfile.debian | 5 - tests/distros/Containerfile.fedora | 4 - tests/distros/Containerfile.opensuse | 4 - tests/distros/Containerfile.ubuntu | 5 - tests/distros/Containerfile.ubuntu-lts | 5 - tests/distros/wine/pacman.additional.conf | 6 - tests/distros/wine/python3.sh | 3 - tests/downloader.py | 386 --- tests/tests.py | 626 ----- tests/tests.yaml | 153 -- tests/tests_containerized.py | 260 -- tox.ini | 14 - 102 files changed, 15257 insertions(+), 6245 deletions(-) delete mode 100644 .github/actions/preload-tox-cache/action.yml create mode 100644 .github/workflows/deny.yml delete mode 100644 .gitmodules create 
mode 100644 Cargo.lock create mode 100644 Cargo.toml create mode 100644 README.extra.md delete mode 100755 avbroot.py delete mode 100644 avbroot/__init__.py delete mode 100644 avbroot/boot.py delete mode 100644 avbroot/formats/bootimage.py delete mode 100644 avbroot/formats/compression.py delete mode 100644 avbroot/formats/cpio.py delete mode 100644 avbroot/formats/padding.py delete mode 100644 avbroot/main.py delete mode 100644 avbroot/openssl.py delete mode 100644 avbroot/ota.py delete mode 100644 avbroot/util.py delete mode 100644 avbroot/vbmeta.py create mode 100644 build.rs create mode 100644 deny.toml rename {tests => e2e}/.gitignore (100%) create mode 100644 e2e/Cargo.toml create mode 100644 e2e/README.md create mode 100644 e2e/e2e.toml rename {tests => e2e}/keys/TEST_KEY_DO_NOT_USE_avb.key (100%) create mode 100644 e2e/keys/TEST_KEY_DO_NOT_USE_avb.passphrase create mode 100644 e2e/keys/TEST_KEY_DO_NOT_USE_avb_pkmd.bin rename {tests => e2e}/keys/TEST_KEY_DO_NOT_USE_ota.crt (100%) rename {tests => e2e}/keys/TEST_KEY_DO_NOT_USE_ota.key (100%) create mode 100644 e2e/keys/TEST_KEY_DO_NOT_USE_ota.passphrase create mode 100644 e2e/src/cli.rs create mode 100644 e2e/src/config.rs create mode 100644 e2e/src/download.rs create mode 100644 e2e/src/main.rs delete mode 160000 external/avb delete mode 160000 external/build delete mode 160000 external/update_engine delete mode 100644 extra/README.md delete mode 100755 extra/bootimagetool.py delete mode 100755 extra/cpiotool.py create mode 100644 protobuf/ota_metadata.proto create mode 100644 protobuf/update_metadata.proto delete mode 100644 requirements.txt create mode 100644 src/boot.rs create mode 100644 src/cli/args.rs create mode 100644 src/cli/avb.rs create mode 100644 src/cli/boot.rs create mode 100644 src/cli/completion.rs create mode 100644 src/cli/key.rs create mode 100644 src/cli/mod.rs create mode 100644 src/cli/ota.rs create mode 100644 src/cli/ramdisk.rs create mode 100644 src/crypto.rs create mode 100644 src/format/avb.rs create mode 100644 src/format/bootimage.rs create mode 100644 src/format/compression.rs create mode 100644 src/format/cpio.rs create mode 100644 src/format/mod.rs create mode 100644 src/format/ota.rs create mode 100644 src/format/padding.rs create mode 100644 src/format/payload.rs create mode 100644 src/lib.rs create mode 100644 src/main.rs create mode 100644 src/protobuf.rs create mode 100644 src/stream.rs create mode 100644 src/util.rs delete mode 100644 tests/README.md create mode 100644 tests/avb.rs create mode 100644 tests/bootimage.rs create mode 100644 tests/compression.rs delete mode 100644 tests/config.py create mode 100644 tests/data/boot_v0.img create mode 100644 tests/data/boot_v1.img create mode 100644 tests/data/boot_v2.img create mode 100644 tests/data/boot_v3.img create mode 100644 tests/data/boot_v4.img create mode 100644 tests/data/boot_v4_vts.img create mode 100644 tests/data/vbmeta_appended.img create mode 100644 tests/data/vbmeta_root.img create mode 100644 tests/data/vendor_v3.img create mode 100644 tests/data/vendor_v4.img delete mode 100644 tests/distros/Containerfile.alpine delete mode 100644 tests/distros/Containerfile.arch delete mode 100644 tests/distros/Containerfile.arch-wine delete mode 100644 tests/distros/Containerfile.debian delete mode 100644 tests/distros/Containerfile.fedora delete mode 100644 tests/distros/Containerfile.opensuse delete mode 100644 tests/distros/Containerfile.ubuntu delete mode 100644 tests/distros/Containerfile.ubuntu-lts delete mode 100644 
tests/distros/wine/pacman.additional.conf delete mode 100644 tests/distros/wine/python3.sh delete mode 100644 tests/downloader.py delete mode 100755 tests/tests.py delete mode 100644 tests/tests.yaml delete mode 100755 tests/tests_containerized.py delete mode 100644 tox.ini diff --git a/.github/actions/preload-img-cache/action.yml b/.github/actions/preload-img-cache/action.yml index aa21e7f..0205e5d 100644 --- a/.github/actions/preload-img-cache/action.yml +++ b/.github/actions/preload-img-cache/action.yml @@ -15,39 +15,34 @@ runs: with: key: ${{ inputs.cache-key-prefix }}${{ inputs.device }} # Make sure any changes to path are also reflected in ci.yml setup - path: tests/files/${{ inputs.device }}-sparse.tar + path: e2e/files/${{ inputs.device }}-sparse.tar - if: ${{ steps.cache-img.outputs.cache-hit }} name: Extracting image from sparse archive shell: sh - run: | - tar -C tests/files -xf tests/files/${{ inputs.device }}-sparse.tar - - - if: ${{ ! steps.cache-img.outputs.cache-hit }} - uses: awalsh128/cache-apt-pkgs-action@v1 - with: - packages: python3-lz4 python3-protobuf + working-directory: e2e/files + run: tar -xf ${{ inputs.device }}-sparse.tar - - if: ${{ ! steps.cache-img.outputs.cache-hit }} - uses: awalsh128/cache-apt-pkgs-action@v1 + - name: Restore e2e executable + if: ${{ ! steps.cache-img.outputs.cache-hit }} + uses: actions/cache/restore@v3 with: - packages: python3-strictyaml + key: e2e-${{ github.sha }}-${{ runner.os }} + fail-on-cache-miss: true + path: | + target/release/e2e + target/release/e2e.exe - name: Downloading device image for ${{ inputs.device }} if: ${{ ! steps.cache-img.outputs.cache-hit }} shell: sh - run: | - ./tests/tests.py \ - download \ - --stripped \ - --no-magisk \ - --device \ - ${{ inputs.device }} + working-directory: e2e + run: ../target/release/e2e download --stripped -d ${{ inputs.device }} - if: ${{ ! steps.cache-img.outputs.cache-hit }} name: Creating sparse archive from image shell: sh + working-directory: e2e/files run: | - cd tests/files tar --sparse -cf ${{ inputs.device }}-sparse.tar \ ${{ inputs.device }}/*.stripped diff --git a/.github/actions/preload-magisk-cache/action.yml b/.github/actions/preload-magisk-cache/action.yml index fc6ac93..57ec336 100644 --- a/.github/actions/preload-magisk-cache/action.yml +++ b/.github/actions/preload-magisk-cache/action.yml @@ -12,23 +12,20 @@ runs: with: key: ${{ inputs.cache-key }} # Make sure any changes to path are also reflected in ci.yml setup - path: tests/files/magisk + path: e2e/files/magisk - - if: ${{ ! steps.cache-magisk.outputs.cache-hit }} - uses: awalsh128/cache-apt-pkgs-action@v1 - with: - packages: python3-lz4 python3-protobuf - - - if: ${{ ! steps.cache-magisk.outputs.cache-hit }} - uses: awalsh128/cache-apt-pkgs-action@v1 + - name: Restore e2e executable + if: ${{ ! steps.cache-magisk.outputs.cache-hit }} + uses: actions/cache/restore@v3 with: - packages: python3-strictyaml + key: e2e-${{ github.sha }}-${{ runner.os }} + fail-on-cache-miss: true + path: | + target/release/e2e + target/release/e2e.exe - name: Downloading Magisk if: ${{ ! 
steps.cache-magisk.outputs.cache-hit }} shell: sh - run: | - ./tests/tests.py \ - download \ - --magisk \ - --no-devices + working-directory: e2e + run: ../target/release/e2e download --magisk diff --git a/.github/actions/preload-tox-cache/action.yml b/.github/actions/preload-tox-cache/action.yml deleted file mode 100644 index 93ef175..0000000 --- a/.github/actions/preload-tox-cache/action.yml +++ /dev/null @@ -1,30 +0,0 @@ -name: Preload tox cache -inputs: - cache-key-prefix: - description: 'Tox cache-key prefix' - required: true - python-version: - description: 'Python version' - required: true - -runs: - using: "composite" - steps: - - uses: actions/cache@v3 - with: - key: ${{ inputs.cache-key-prefix }}${{ inputs.python-version }} - restore-keys: | - tox- - # Make sure any changes to path are also reflected in ci.yml setup - path: | - .tox/ - ~/.cache/pip - - - uses: awalsh128/cache-apt-pkgs-action@v1 - with: - packages: tox - - - uses: actions/setup-python@v4 - with: - python-version: | - ${{ fromJson('{ "py39": "3.9", "py310": "3.10", "py311": "3.11" }')[inputs.python-version] }} diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index a2a2fb9..ff9784f 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -11,9 +11,86 @@ concurrency: cancel-in-progress: true jobs: + build: + runs-on: ${{ matrix.os }} + env: + CARGO_TERM_COLOR: always + RUSTFLAGS: -C strip=symbols + strategy: + fail-fast: false + matrix: + os: + - ubuntu-latest + - windows-latest + - macos-latest + steps: + - name: Check out repository + uses: actions/checkout@v3 + with: + # For git describe + fetch-depth: 0 + + - name: Get version + id: get_version + shell: bash + run: | + echo -n 'version=' >> "${GITHUB_OUTPUT}" + git describe --always \ + | sed -E "s/^v//g;s/([^-]*-g)/r\1/;s/-/./g" \ + >> "${GITHUB_OUTPUT}" + + - name: Get Rust LLVM target triple + id: get_target + shell: bash + env: + RUSTC_BOOTSTRAP: '1' + run: | + echo -n 'name=' >> "${GITHUB_OUTPUT}" + rustc -Z unstable-options --print target-spec-json \ + | jq -r '."llvm-target"' \ + >> "${GITHUB_OUTPUT}" + + - name: Cache Rust dependencies + uses: Swatinem/rust-cache@v2 + + - name: Clippy + run: cargo clippy --release --workspace --features static + + - name: Build + run: cargo build --release --workspace --features static + + - name: Tests + run: cargo test --release --workspace --features static + + - name: Archive documentation + uses: actions/upload-artifact@v3 + with: + name: avbroot-${{ steps.get_version.outputs.version }}-${{ steps.get_target.outputs.name }} + path: | + LICENSE + README.md + + # This is separate so we can have a flat directory structure. 
+ - name: Archive executable + uses: actions/upload-artifact@v3 + with: + name: avbroot-${{ steps.get_version.outputs.version }}-${{ steps.get_target.outputs.name }} + path: | + target/release/avbroot + target/release/avbroot.exe + + - name: Cache e2e executable + uses: actions/cache@v3 + with: + key: e2e-${{ github.sha }}-${{ runner.os }} + path: | + target/release/e2e + target/release/e2e.exe + setup: name: Prepare workflow data runs-on: ubuntu-latest + needs: build timeout-minutes: 2 outputs: config-path: ${{ steps.load-config.outputs.config-path }} @@ -21,54 +98,36 @@ jobs: magisk-key: ${{ steps.cache-keys.outputs.magisk-key }} img-key-prefix: ${{ steps.cache-keys.outputs.img-key-prefix }} img-hit: ${{ steps.get-img-cache.outputs.cache-matched-key }} - tox-key-prefix: ${{ steps.cache-keys.outputs.tox-key-prefix }} - tox-hit: ${{ steps.get-tox-cache.outputs.cache-matched-key }} steps: - uses: actions/checkout@v3 - with: - submodules: true - - uses: awalsh128/cache-apt-pkgs-action@v1 + - name: Restore e2e executable + uses: actions/cache/restore@v3 with: - packages: python3-strictyaml + key: e2e-${{ github.sha }}-${{ runner.os }} + fail-on-cache-miss: true + path: | + target/release/e2e + target/release/e2e.exe - name: Loading test config id: load-config - shell: python + working-directory: e2e run: | - import json - import os - import sys - - sys.path.append(os.environ['GITHUB_WORKSPACE']) - import tests.config - - config_data = tests.config.load_config() - devices = [d.data for d in config_data['device']] - - with open(os.environ['GITHUB_OUTPUT'], 'a') as f: - f.write(f'config-path={tests.config.CONFIG_PATH}\n') - f.write(f"device-list={json.dumps(devices)}\n") + echo 'config-path=e2e/e2e.toml' >> "${GITHUB_OUTPUT}" + echo -n 'device-list=' >> "${GITHUB_OUTPUT}" + ../target/release/e2e list \ + | jq -cnR '[inputs | select(length > 0)]' \ + >> "${GITHUB_OUTPUT}" - name: Generating cache keys id: cache-keys run: | { - echo "tox-key-prefix=tox-${{ hashFiles('tox.ini') }}-"; \ echo "img-key-prefix=img-${{ hashFiles(steps.load-config.outputs.config-path) }}-"; \ echo "magisk-key=magisk-${{ hashFiles(steps.load-config.outputs.config-path) }}"; } >> $GITHUB_OUTPUT - - name: Checking for cached tox environments - id: get-tox-cache - uses: actions/cache/restore@v3 - with: - key: ${{ steps.cache-keys.outputs.tox-key-prefix }} - lookup-only: true - path: | - .tox/ - ~/.cache/pip - - name: Checking for cached device images id: get-img-cache uses: actions/cache/restore@v3 @@ -76,7 +135,7 @@ jobs: key: ${{ steps.cache-keys.outputs.img-key-prefix }} lookup-only: true path: | - tests/files/${{ fromJSON(steps.load-config.outputs.device-list)[0] }}-sparse.tar + e2e/files/${{ fromJSON(steps.load-config.outputs.device-list)[0] }}-sparse.tar - name: Checking for cached magisk apk id: get-magisk-cache @@ -84,7 +143,7 @@ jobs: with: key: ${{ steps.cache-keys.outputs.magisk-key }} lookup-only: true - path: tests/files/magisk + path: e2e/files/magisk - name: Preloading Magisk cache if: ${{ ! 
steps.get-magisk-cache.outputs.cache-hit }} @@ -106,8 +165,6 @@ jobs: device: ${{ fromJSON(needs.setup.outputs.device-list) }} steps: - uses: actions/checkout@v3 - with: - submodules: true - name: Preloading image cache uses: ./.github/actions/preload-img-cache @@ -115,45 +172,24 @@ jobs: cache-key-prefix: ${{ needs.setup.outputs.img-key-prefix }} device: ${{ matrix.device }} - preload-tox: - name: Preload tox environments - runs-on: ubuntu-latest - needs: setup - timeout-minutes: 5 - # Assume that preloading always succesfully cached all tox environments before. - # If for some reason only some got cached, on the first run, the cache will not be preloaded - # which will result in some being downloaded multiple times when running the tests. - if: ${{ ! needs.setup.outputs.tox-hit }} - strategy: - matrix: - python: [py39, py310, py311] - steps: - - uses: actions/checkout@v3 - - - name: Preloading tox cache - uses: ./.github/actions/preload-tox-cache - with: - cache-key-prefix: ${{ needs.setup.outputs.tox-key-prefix }} - python-version: ${{ matrix.python }} - - - name: Generating tox environment - run: tox -e ${{ matrix.python }} --notest - tests: - name: Run test for ${{ matrix.device }} with ${{ matrix.python }} + name: Run test for ${{ matrix.device }} on ${{ matrix.os }} runs-on: ubuntu-latest - needs: [setup, preload-img, preload-tox] + needs: + - setup + - preload-img timeout-minutes: 10 # Continue on skipped but not on failures or cancels if: ${{ always() && ! failure() && ! cancelled() }} strategy: matrix: device: ${{ fromJSON(needs.setup.outputs.device-list) }} - python: [py39, py310, py311] + os: + - ubuntu-latest + - windows-latest + - macos-latest steps: - uses: actions/checkout@v3 - with: - submodules: true - name: Restoring Magisk cache uses: ./.github/actions/preload-magisk-cache @@ -166,12 +202,16 @@ jobs: cache-key-prefix: ${{ needs.setup.outputs.img-key-prefix }} device: ${{ matrix.device }} - - name: Restoring tox cache - uses: ./.github/actions/preload-tox-cache + - name: Restore e2e executable + uses: actions/cache/restore@v3 with: - cache-key-prefix: ${{ needs.setup.outputs.tox-key-prefix }} - python-version: ${{ matrix.python }} + key: e2e-${{ github.sha }}-${{ runner.os }} + fail-on-cache-miss: true + path: | + target/release/e2e + target/release/e2e.exe # Finally run tests - - name: Run test for ${{ matrix.device }} with ${{ matrix.python }} - run: tox -e ${{ matrix.python }} -- --stripped -d ${{ matrix.device }} + - name: Run test for ${{ matrix.device }} + working-directory: e2e + run: ../target/release/e2e test --stripped -d ${{ matrix.device }} diff --git a/.github/workflows/deny.yml b/.github/workflows/deny.yml new file mode 100644 index 0000000..4f7787a --- /dev/null +++ b/.github/workflows/deny.yml @@ -0,0 +1,16 @@ +--- +on: + push: + branches: + - master + pull_request: +jobs: + check: + name: cargo-deny + runs-on: ubuntu-latest + steps: + - name: Check out repository + uses: actions/checkout@v3 + + - name: Run cargo-deny + uses: EmbarkStudios/cargo-deny-action@v1 diff --git a/.github/workflows/modules.yml b/.github/workflows/modules.yml index 7be238c..1bdedd8 100644 --- a/.github/workflows/modules.yml +++ b/.github/workflows/modules.yml @@ -5,19 +5,24 @@ on: - master pull_request: jobs: - build-app: + build: name: Build modules runs-on: ubuntu-latest steps: - name: Check out repository uses: actions/checkout@v3 with: + # For git describe fetch-depth: 0 - name: Get version id: get_version shell: bash - run: echo "version=r$(git rev-list --count HEAD).$(git 
rev-parse --short HEAD)" >> "${GITHUB_OUTPUT}" + run: | + echo -n 'version=' >> "${GITHUB_OUTPUT}" + git describe --always \ + | sed -E "s/^v//g;s/([^-]*-g)/r\1/;s/-/./g" \ + >> "${GITHUB_OUTPUT}" - name: Build and test run: ./modules/build.py diff --git a/.gitignore b/.gitignore index e416b31..c199a49 100644 --- a/.gitignore +++ b/.gitignore @@ -1,5 +1,5 @@ -# Caches -__pycache__/ +# Build directories +/target/ # Secrets *.pem @@ -11,4 +11,6 @@ __pycache__/ *.img *.zip *.patched -.tox + +# We do want the test images +!/tests/data/*.img diff --git a/.gitmodules b/.gitmodules deleted file mode 100644 index a964ff1..0000000 --- a/.gitmodules +++ /dev/null @@ -1,9 +0,0 @@ -[submodule "external/avb"] - path = external/avb - url = https://android.googlesource.com/platform/external/avb/ -[submodule "external/update_engine"] - path = external/update_engine - url = https://android.googlesource.com/platform/system/update_engine -[submodule "external/build"] - path = external/build - url = https://android.googlesource.com/platform/build diff --git a/Cargo.lock b/Cargo.lock new file mode 100644 index 0000000..dcb41bc --- /dev/null +++ b/Cargo.lock @@ -0,0 +1,2246 @@ +# This file is automatically @generated by Cargo. +# It is not intended for manual editing. +version = 3 + +[[package]] +name = "addr2line" +version = "0.21.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8a30b2e23b9e17a9f90641c7ab1549cd9b44f296d3ccbf309d2863cfe398a0cb" +dependencies = [ + "gimli", +] + +[[package]] +name = "adler" +version = "1.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe" + +[[package]] +name = "aes" +version = "0.8.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ac1f845298e95f983ff1944b728ae08b8cebab80d684f0a832ed0fc74dfa27e2" +dependencies = [ + "cfg-if", + "cipher", + "cpufeatures", +] + +[[package]] +name = "aho-corasick" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6748e8def348ed4d14996fa801f4122cd763fff530258cdc03f64b25f89d3a5a" +dependencies = [ + "memchr", +] + +[[package]] +name = "anstream" +version = "0.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b1f58811cfac344940f1a400b6e6231ce35171f614f26439e80f8c1465c5cc0c" +dependencies = [ + "anstyle", + "anstyle-parse", + "anstyle-query", + "anstyle-wincon", + "colorchoice", + "utf8parse", +] + +[[package]] +name = "anstyle" +version = "1.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "15c4c2c83f81532e5845a733998b6971faca23490340a418e9b72a3ec9de12ea" + +[[package]] +name = "anstyle-parse" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "938874ff5980b03a87c5524b3ae5b59cf99b1d6bc836848df7bc5ada9643c333" +dependencies = [ + "utf8parse", +] + +[[package]] +name = "anstyle-query" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5ca11d4be1bab0c8bc8734a9aa7bf4ee8316d462a08c6ac5052f888fef5b494b" +dependencies = [ + "windows-sys", +] + +[[package]] +name = "anstyle-wincon" +version = "2.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "58f54d10c6dfa51283a066ceab3ec1ab78d13fae00aa49243a45e4571fb79dfd" +dependencies = [ + "anstyle", + "windows-sys", +] + +[[package]] +name = "anyhow" +version = "1.0.75" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "a4668cab20f66d8d020e1fbc0ebe47217433c1b6c8f2040faf858554e394ace6" + +[[package]] +name = "assert_matches" +version = "1.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9b34d609dfbaf33d6889b2b7106d3ca345eacad44200913df5ba02bfd31d2ba9" + +[[package]] +name = "autocfg" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d468802bab17cbc0cc575e9b053f41e72aa36bfa6b7f55e3529ffa43161b97fa" + +[[package]] +name = "avbroot" +version = "0.1.0" +dependencies = [ + "anyhow", + "assert_matches", + "base64", + "byteorder", + "bzip2", + "clap", + "clap_complete", + "cms", + "const-oid", + "ctrlc", + "flate2", + "hex", + "lz4_flex", + "memchr", + "num-bigint-dig", + "num-traits", + "pb-rs", + "phf", + "pkcs8", + "quick-protobuf", + "rand", + "rayon", + "regex", + "ring", + "rpassword", + "rsa", + "rustix", + "serde", + "sha1", + "sha2", + "tempfile", + "thiserror", + "toml_edit", + "topological-sort", + "x509-cert", + "xz2", + "zip", +] + +[[package]] +name = "backtrace" +version = "0.3.69" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2089b7e3f35b9dd2d0ed921ead4f6d318c27680d4a5bd167b3ee120edb105837" +dependencies = [ + "addr2line", + "cc", + "cfg-if", + "libc", + "miniz_oxide", + "object", + "rustc-demangle", +] + +[[package]] +name = "base64" +version = "0.21.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "414dcefbc63d77c526a76b3afcf6fbb9b5e2791c19c3aa2297733208750c6e53" + +[[package]] +name = "base64ct" +version = "1.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8c3c1a368f70d6cf7302d78f8f7093da241fb8e8807c05cc9e51a125895a6d5b" + +[[package]] +name = "bitflags" +version = "1.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a" + +[[package]] +name = "bitflags" +version = "2.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b4682ae6287fcf752ecaabbfcc7b6f9b72aa33933dc23a554d853aea8eea8635" + +[[package]] +name = "block-buffer" +version = "0.10.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71" +dependencies = [ + "generic-array", +] + +[[package]] +name = "block-padding" +version = "0.3.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a8894febbff9f758034a5b8e12d87918f56dfc64a8e1fe757d65e29041538d93" +dependencies = [ + "generic-array", +] + +[[package]] +name = "bumpalo" +version = "3.13.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a3e2c3daef883ecc1b5d58c15adae93470a91d425f3532ba1695849656af3fc1" + +[[package]] +name = "byteorder" +version = "1.4.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610" + +[[package]] +name = "bytes" +version = "1.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "89b2fd2a0dcf38d7971e2194b6b6eebab45ae01067456a7fd93d5547a61b70be" + +[[package]] +name = "bzip2" +version = "0.4.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bdb116a6ef3f6c3698828873ad02c3014b3c85cadb88496095628e3ef1e347f8" +dependencies = [ + "bzip2-sys", + "libc", +] + +[[package]] +name = "bzip2-sys" +version 
= "0.1.11+1.0.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "736a955f3fa7875102d57c82b8cac37ec45224a07fd32d58f9f7a186b6cd4cdc" +dependencies = [ + "cc", + "libc", + "pkg-config", +] + +[[package]] +name = "cbc" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "26b52a9543ae338f279b96b0b9fed9c8093744685043739079ce85cd58f289a6" +dependencies = [ + "cipher", +] + +[[package]] +name = "cc" +version = "1.0.83" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f1174fb0b6ec23863f8b971027804a42614e347eafb0a95bf0b12cdae21fc4d0" +dependencies = [ + "libc", +] + +[[package]] +name = "cfg-if" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" + +[[package]] +name = "cipher" +version = "0.4.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "773f3b9af64447d2ce9850330c473515014aa235e6a783b02db81ff39e4a3dad" +dependencies = [ + "crypto-common", + "inout", +] + +[[package]] +name = "clap" +version = "4.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7c8d502cbaec4595d2e7d5f61e318f05417bd2b66fdc3809498f0d3fdf0bea27" +dependencies = [ + "clap_builder", + "clap_derive", + "once_cell", +] + +[[package]] +name = "clap_builder" +version = "4.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5891c7bc0edb3e1c2204fc5e94009affabeb1821c9e5fdc3959536c5c0bb984d" +dependencies = [ + "anstream", + "anstyle", + "clap_lex", + "strsim", +] + +[[package]] +name = "clap_complete" +version = "4.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "586a385f7ef2f8b4d86bddaa0c094794e7ccbfe5ffef1f434fe928143fc783a5" +dependencies = [ + "clap", +] + +[[package]] +name = "clap_derive" +version = "4.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c9fd1a5729c4548118d7d70ff234a44868d00489a4b6597b0b020918a0e91a1a" +dependencies = [ + "heck", + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "clap_lex" +version = "0.5.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cd7cc57abe963c6d3b9d8be5b06ba7c8957a930305ca90304f24ef040aa6f961" + +[[package]] +name = "cms" +version = "0.2.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "01b1b34bce0eaafd63b374fa6b58178d72c0b6670e92db786bdd3cde9e37a1f1" +dependencies = [ + "const-oid", + "der", + "spki", + "x509-cert", +] + +[[package]] +name = "colorchoice" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "acbf1af155f9b9ef647e42cdc158db4b64a1b61f743629225fde6f3e0be2a7c7" + +[[package]] +name = "const-oid" +version = "0.9.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "28c122c3980598d243d63d9a704629a2d748d101f278052ff068be5a4423ab6f" + +[[package]] +name = "core-foundation" +version = "0.9.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "194a7a9e6de53fa55116934067c844d9d749312f75c6f6d0980e8c252f8c2146" +dependencies = [ + "core-foundation-sys", + "libc", +] + +[[package]] +name = "core-foundation-sys" +version = "0.8.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e496a50fda8aacccc86d7529e2c1e0892dbd0f898a6b5645b5561b89c3210efa" + +[[package]] +name = "cpufeatures" +version = "0.2.9" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "a17b76ff3a4162b0b27f354a0c87015ddad39d35f9c0c36607a3bdd175dde1f1" +dependencies = [ + "libc", +] + +[[package]] +name = "crc32fast" +version = "1.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b540bd8bc810d3885c6ea91e2018302f68baba2129ab3e88f32389ee9370880d" +dependencies = [ + "cfg-if", +] + +[[package]] +name = "crossbeam-channel" +version = "0.5.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a33c2bf77f2df06183c3aa30d1e96c0695a313d4f9c453cc3762a6db39f99200" +dependencies = [ + "cfg-if", + "crossbeam-utils", +] + +[[package]] +name = "crossbeam-deque" +version = "0.8.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ce6fd6f855243022dcecf8702fef0c297d4338e226845fe067f6341ad9fa0cef" +dependencies = [ + "cfg-if", + "crossbeam-epoch", + "crossbeam-utils", +] + +[[package]] +name = "crossbeam-epoch" +version = "0.9.15" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ae211234986c545741a7dc064309f67ee1e5ad243d0e48335adc0484d960bcc7" +dependencies = [ + "autocfg", + "cfg-if", + "crossbeam-utils", + "memoffset", + "scopeguard", +] + +[[package]] +name = "crossbeam-utils" +version = "0.8.16" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5a22b2d63d4d1dc0b7f1b6b2747dd0088008a9be28b6ddf0b1e7d335e3037294" +dependencies = [ + "cfg-if", +] + +[[package]] +name = "crypto-common" +version = "0.1.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1bfb12502f3fc46cca1bb51ac28df9d618d813cdc3d2f25b9fe775a34af26bb3" +dependencies = [ + "generic-array", + "typenum", +] + +[[package]] +name = "ctrlc" +version = "3.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2a011bbe2c35ce9c1f143b7af6f94f29a167beb4cd1d29e6740ce836f723120e" +dependencies = [ + "nix", + "windows-sys", +] + +[[package]] +name = "der" +version = "0.7.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fffa369a668c8af7dbf8b5e56c9f744fbd399949ed171606040001947de40b1c" +dependencies = [ + "const-oid", + "der_derive", + "flagset", + "pem-rfc7468", + "zeroize", +] + +[[package]] +name = "der_derive" +version = "0.7.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5fe87ce4529967e0ba1dcf8450bab64d97dfd5010a6256187ffe2e43e6f0e049" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "digest" +version = "0.10.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" +dependencies = [ + "block-buffer", + "const-oid", + "crypto-common", + "subtle", +] + +[[package]] +name = "e2e" +version = "0.1.0" +dependencies = [ + "anyhow", + "avbroot", + "clap", + "ctrlc", + "hex", + "reqwest", + "ring", + "serde", + "tempfile", + "tokio", + "tokio-stream", + "toml_edit", + "zip", +] + +[[package]] +name = "either" +version = "1.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a26ae43d7bcc3b814de94796a5e736d4029efb0ee900c12e2d54c993ad1a1e07" + +[[package]] +name = "encoding_rs" +version = "0.8.33" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7268b386296a025e474d5140678f75d6de9493ae55a5d709eeb9dd08149945e1" +dependencies = [ + "cfg-if", +] + +[[package]] +name = "equivalent" +version = "1.0.1" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "5443807d6dff69373d433ab9ef5378ad8df50ca6298caf15de6e52e24aaf54d5" + +[[package]] +name = "errno" +version = "0.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6b30f669a7961ef1631673d2766cc92f52d64f7ef354d4fe0ddfd30ed52f0f4f" +dependencies = [ + "errno-dragonfly", + "libc", + "windows-sys", +] + +[[package]] +name = "errno-dragonfly" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "aa68f1b12764fab894d2755d2518754e71b4fd80ecfb822714a1206c2aab39bf" +dependencies = [ + "cc", + "libc", +] + +[[package]] +name = "fastrand" +version = "2.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6999dc1837253364c2ebb0704ba97994bd874e8f195d665c50b7548f6ea92764" + +[[package]] +name = "flagset" +version = "0.4.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cda653ca797810c02f7ca4b804b40b8b95ae046eb989d356bce17919a8c25499" + +[[package]] +name = "flate2" +version = "1.0.27" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c6c98ee8095e9d1dcbf2fcc6d95acccb90d1c81db1e44725c6a984b1dbdfb010" +dependencies = [ + "crc32fast", + "miniz_oxide", +] + +[[package]] +name = "fnv" +version = "1.0.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1" + +[[package]] +name = "foreign-types" +version = "0.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f6f339eb8adc052cd2ca78910fda869aefa38d22d5cb648e6485e4d3fc06f3b1" +dependencies = [ + "foreign-types-shared", +] + +[[package]] +name = "foreign-types-shared" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "00b0228411908ca8685dba7fc2cdd70ec9990a6e753e89b6ac91a84c40fbaf4b" + +[[package]] +name = "form_urlencoded" +version = "1.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a62bc1cf6f830c2ec14a513a9fb124d0a213a629668a4186f329db21fe045652" +dependencies = [ + "percent-encoding", +] + +[[package]] +name = "futures-channel" +version = "0.3.28" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "955518d47e09b25bbebc7a18df10b81f0c766eaf4c4f1cccef2fca5f2a4fb5f2" +dependencies = [ + "futures-core", +] + +[[package]] +name = "futures-core" +version = "0.3.28" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4bca583b7e26f571124fe5b7561d49cb2868d79116cfa0eefce955557c6fee8c" + +[[package]] +name = "futures-io" +version = "0.3.28" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4fff74096e71ed47f8e023204cfd0aa1289cd54ae5430a9523be060cdb849964" + +[[package]] +name = "futures-macro" +version = "0.3.28" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "89ca545a94061b6365f2c7355b4b32bd20df3ff95f02da9329b34ccc3bd6ee72" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "futures-sink" +version = "0.3.28" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f43be4fe21a13b9781a69afa4985b0f6ee0e1afab2c6f454a8cf30e2b2237b6e" + +[[package]] +name = "futures-task" +version = "0.3.28" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "76d3d132be6c0e6aa1534069c705a74a5997a356c0dc2f86a47765e5617c5b65" + +[[package]] +name = "futures-util" +version = 
"0.3.28" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "26b01e40b772d54cf6c6d721c1d1abd0647a0106a12ecaa1c186273392a69533" +dependencies = [ + "futures-core", + "futures-io", + "futures-macro", + "futures-sink", + "futures-task", + "memchr", + "pin-project-lite", + "pin-utils", + "slab", +] + +[[package]] +name = "generic-array" +version = "0.14.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a" +dependencies = [ + "typenum", + "version_check", +] + +[[package]] +name = "getrandom" +version = "0.2.10" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "be4136b2a15dd319360be1c07d9933517ccf0be8f16bf62a3bee4f0d618df427" +dependencies = [ + "cfg-if", + "libc", + "wasi", +] + +[[package]] +name = "gimli" +version = "0.28.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6fb8d784f27acf97159b40fc4db5ecd8aa23b9ad5ef69cdd136d3bc80665f0c0" + +[[package]] +name = "h2" +version = "0.3.21" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "91fc23aa11be92976ef4729127f1a74adf36d8436f7816b185d18df956790833" +dependencies = [ + "bytes", + "fnv", + "futures-core", + "futures-sink", + "futures-util", + "http", + "indexmap 1.9.3", + "slab", + "tokio", + "tokio-util", + "tracing", +] + +[[package]] +name = "hashbrown" +version = "0.12.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888" + +[[package]] +name = "hashbrown" +version = "0.14.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2c6201b9ff9fd90a5a3bac2e56a830d0caa509576f0e503818ee82c181b3437a" + +[[package]] +name = "heck" +version = "0.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8" + +[[package]] +name = "hermit-abi" +version = "0.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "443144c8cdadd93ebf52ddb4056d257f5b52c04d3c804e657d19eb73fc33668b" + +[[package]] +name = "hex" +version = "0.4.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70" +dependencies = [ + "serde", +] + +[[package]] +name = "hmac" +version = "0.12.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6c49c37c09c17a53d937dfbb742eb3a961d65a994e6bcdcf37e7399d0cc8ab5e" +dependencies = [ + "digest", +] + +[[package]] +name = "http" +version = "0.2.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bd6effc99afb63425aff9b05836f029929e345a6148a14b7ecd5ab67af944482" +dependencies = [ + "bytes", + "fnv", + "itoa", +] + +[[package]] +name = "http-body" +version = "0.4.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d5f38f16d184e36f2408a55281cd658ecbd3ca05cce6d6510a176eca393e26d1" +dependencies = [ + "bytes", + "http", + "pin-project-lite", +] + +[[package]] +name = "httparse" +version = "1.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d897f394bad6a705d5f4104762e116a75639e470d80901eed05a860a95cb1904" + +[[package]] +name = "httpdate" +version = "1.0.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9" + 
+[[package]] +name = "hyper" +version = "0.14.27" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ffb1cfd654a8219eaef89881fdb3bb3b1cdc5fa75ded05d6933b2b382e395468" +dependencies = [ + "bytes", + "futures-channel", + "futures-core", + "futures-util", + "h2", + "http", + "http-body", + "httparse", + "httpdate", + "itoa", + "pin-project-lite", + "socket2 0.4.9", + "tokio", + "tower-service", + "tracing", + "want", +] + +[[package]] +name = "hyper-tls" +version = "0.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d6183ddfa99b85da61a140bea0efc93fdf56ceaa041b37d553518030827f9905" +dependencies = [ + "bytes", + "hyper", + "native-tls", + "tokio", + "tokio-native-tls", +] + +[[package]] +name = "idna" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7d20d6b07bfbc108882d88ed8e37d39636dcc260e15e30c45e6ba089610b917c" +dependencies = [ + "unicode-bidi", + "unicode-normalization", +] + +[[package]] +name = "indexmap" +version = "1.9.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bd070e393353796e801d209ad339e89596eb4c8d430d18ede6a1cced8fafbd99" +dependencies = [ + "autocfg", + "hashbrown 0.12.3", +] + +[[package]] +name = "indexmap" +version = "2.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d5477fe2230a79769d8dc68e0eabf5437907c0457a5614a9e8dddb67f65eb65d" +dependencies = [ + "equivalent", + "hashbrown 0.14.0", +] + +[[package]] +name = "inout" +version = "0.1.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a0c10553d664a4d0bcff9f4215d0aac67a639cc68ef660840afe309b807bc9f5" +dependencies = [ + "block-padding", + "generic-array", +] + +[[package]] +name = "ipnet" +version = "2.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "28b29a3cd74f0f4598934efe3aeba42bae0eb4680554128851ebbecb02af14e6" + +[[package]] +name = "itoa" +version = "1.0.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "af150ab688ff2122fcef229be89cb50dd66af9e01a4ff320cc137eecc9bacc38" + +[[package]] +name = "js-sys" +version = "0.3.64" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c5f195fe497f702db0f318b07fdd68edb16955aed830df8363d837542f8f935a" +dependencies = [ + "wasm-bindgen", +] + +[[package]] +name = "lazy_static" +version = "1.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646" +dependencies = [ + "spin", +] + +[[package]] +name = "libc" +version = "0.2.147" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b4668fb0ea861c1df094127ac5f1da3409a82116a4ba74fca2e58ef927159bb3" + +[[package]] +name = "libm" +version = "0.2.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f7012b1bbb0719e1097c47611d3898568c546d597c2e74d66f6087edd5233ff4" + +[[package]] +name = "linux-raw-sys" +version = "0.4.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "57bcfdad1b858c2db7c38303a6d2ad4dfaf5eb53dfeb0910128b2c26d6158503" + +[[package]] +name = "log" +version = "0.4.20" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b5e6163cb8c49088c2c36f57875e58ccd8c87c7427f7fbd50ea6710b2f3f2e8f" + +[[package]] +name = "lz4_flex" +version = "0.11.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"3ea9b256699eda7b0387ffbc776dd625e28bde3918446381781245b7a50349d8" +dependencies = [ + "twox-hash", +] + +[[package]] +name = "lzma-sys" +version = "0.1.20" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5fda04ab3764e6cde78b9974eec4f779acaba7c4e84b36eca3cf77c581b85d27" +dependencies = [ + "cc", + "libc", + "pkg-config", +] + +[[package]] +name = "memchr" +version = "2.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "76fc44e2588d5b436dbc3c6cf62aef290f90dab6235744a93dfe1cc18f451e2c" + +[[package]] +name = "memoffset" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5a634b1c61a95585bd15607c6ab0c4e5b226e695ff2800ba0cdccddf208c406c" +dependencies = [ + "autocfg", +] + +[[package]] +name = "mime" +version = "0.3.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a" + +[[package]] +name = "minimal-lexical" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a" + +[[package]] +name = "miniz_oxide" +version = "0.7.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e7810e0be55b428ada41041c41f32c9f1a42817901b4ccf45fa3d4b6561e74c7" +dependencies = [ + "adler", +] + +[[package]] +name = "mio" +version = "0.8.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "927a765cd3fc26206e66b296465fa9d3e5ab003e651c1b3c060e7956d96b19d2" +dependencies = [ + "libc", + "wasi", + "windows-sys", +] + +[[package]] +name = "native-tls" +version = "0.2.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "07226173c32f2926027b63cce4bcd8076c3552846cbe7925f3aaffeac0a3b92e" +dependencies = [ + "lazy_static", + "libc", + "log", + "openssl", + "openssl-probe", + "openssl-sys", + "schannel", + "security-framework", + "security-framework-sys", + "tempfile", +] + +[[package]] +name = "nix" +version = "0.26.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "598beaf3cc6fdd9a5dfb1630c2800c7acd31df7aaf0f565796fba2b53ca1af1b" +dependencies = [ + "bitflags 1.3.2", + "cfg-if", + "libc", +] + +[[package]] +name = "nom" +version = "7.1.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d273983c5a657a70a3e8f2a01329822f3b8c8172b73826411a55751e404a0a4a" +dependencies = [ + "memchr", + "minimal-lexical", +] + +[[package]] +name = "num-bigint-dig" +version = "0.8.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dc84195820f291c7697304f3cbdadd1cb7199c0efc917ff5eafd71225c136151" +dependencies = [ + "byteorder", + "lazy_static", + "libm", + "num-integer", + "num-iter", + "num-traits", + "rand", + "serde", + "smallvec", + "zeroize", +] + +[[package]] +name = "num-integer" +version = "0.1.45" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "225d3389fb3509a24c93f5c29eb6bde2586b98d9f016636dff58d7c6f7569cd9" +dependencies = [ + "autocfg", + "num-traits", +] + +[[package]] +name = "num-iter" +version = "0.1.43" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7d03e6c028c5dc5cac6e2dec0efda81fc887605bb3d884578bb6d6bf7514e252" +dependencies = [ + "autocfg", + "num-integer", + "num-traits", +] + +[[package]] +name = "num-traits" +version = "0.2.16" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "f30b0abd723be7e2ffca1272140fac1a2f084c77ec3e123c192b66af1ee9e6c2" +dependencies = [ + "autocfg", + "libm", +] + +[[package]] +name = "num_cpus" +version = "1.16.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4161fcb6d602d4d2081af7c3a45852d875a03dd337a6bfdd6e06407b61342a43" +dependencies = [ + "hermit-abi", + "libc", +] + +[[package]] +name = "object" +version = "0.32.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "77ac5bbd07aea88c60a577a1ce218075ffd59208b2d7ca97adf9bfc5aeb21ebe" +dependencies = [ + "memchr", +] + +[[package]] +name = "once_cell" +version = "1.18.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dd8b5dd2ae5ed71462c540258bedcb51965123ad7e7ccf4b9a8cafaa4a63576d" + +[[package]] +name = "openssl" +version = "0.10.57" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bac25ee399abb46215765b1cb35bc0212377e58a061560d8b29b024fd0430e7c" +dependencies = [ + "bitflags 2.4.0", + "cfg-if", + "foreign-types", + "libc", + "once_cell", + "openssl-macros", + "openssl-sys", +] + +[[package]] +name = "openssl-macros" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a948666b637a0f465e8564c73e89d4dde00d72d4d473cc972f390fc3dcee7d9c" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "openssl-probe" +version = "0.1.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ff011a302c396a5197692431fc1948019154afc178baf7d8e37367442a4601cf" + +[[package]] +name = "openssl-sys" +version = "0.9.92" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "db7e971c2c2bba161b2d2fdf37080177eff520b3bc044787c7f1f5f9e78d869b" +dependencies = [ + "cc", + "libc", + "pkg-config", + "vcpkg", +] + +[[package]] +name = "pb-rs" +version = "0.10.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "354a34df9c65b596152598001c0fe3393379ec2db03ae30b9985659422e2607e" +dependencies = [ + "log", + "nom", +] + +[[package]] +name = "pbkdf2" +version = "0.12.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f8ed6a7761f76e3b9f92dfb0a60a6a6477c61024b775147ff0973a02653abaf2" +dependencies = [ + "digest", + "hmac", +] + +[[package]] +name = "pem-rfc7468" +version = "0.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "88b39c9bfcfc231068454382784bb460aae594343fb030d46e9f50a645418412" +dependencies = [ + "base64ct", +] + +[[package]] +name = "percent-encoding" +version = "2.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9b2a4787296e9989611394c33f193f676704af1686e70b8f8033ab5ba9a35a94" + +[[package]] +name = "phf" +version = "0.11.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ade2d8b8f33c7333b51bcf0428d37e217e9f32192ae4772156f65063b8ce03dc" +dependencies = [ + "phf_macros", + "phf_shared", +] + +[[package]] +name = "phf_generator" +version = "0.11.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "48e4cc64c2ad9ebe670cb8fd69dd50ae301650392e81c05f9bfcb2d5bdbc24b0" +dependencies = [ + "phf_shared", + "rand", +] + +[[package]] +name = "phf_macros" +version = "0.11.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3444646e286606587e49f3bcf1679b8cef1dc2c5ecc29ddacaffc305180d464b" +dependencies = [ 
+ "phf_generator", + "phf_shared", + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "phf_shared" +version = "0.11.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "90fcb95eef784c2ac79119d1dd819e162b5da872ce6f3c3abe1e8ca1c082f72b" +dependencies = [ + "siphasher", +] + +[[package]] +name = "pin-project-lite" +version = "0.2.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8afb450f006bf6385ca15ef45d71d2288452bc3683ce2e2cacc0d18e4be60b58" + +[[package]] +name = "pin-utils" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184" + +[[package]] +name = "pkcs1" +version = "0.7.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c8ffb9f10fa047879315e6625af03c164b16962a5368d724ed16323b68ace47f" +dependencies = [ + "der", + "pkcs8", + "spki", +] + +[[package]] +name = "pkcs5" +version = "0.7.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e847e2c91a18bfa887dd028ec33f2fe6f25db77db3619024764914affe8b69a6" +dependencies = [ + "aes", + "cbc", + "der", + "pbkdf2", + "scrypt", + "sha2", + "spki", +] + +[[package]] +name = "pkcs8" +version = "0.10.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f950b2377845cebe5cf8b5165cb3cc1a5e0fa5cfa3e1f7f55707d8fd82e0a7b7" +dependencies = [ + "der", + "pkcs5", + "rand_core", + "spki", +] + +[[package]] +name = "pkg-config" +version = "0.3.27" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "26072860ba924cbfa98ea39c8c19b4dd6a4a25423dbdf219c1eca91aa0cf6964" + +[[package]] +name = "ppv-lite86" +version = "0.2.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de" + +[[package]] +name = "proc-macro2" +version = "1.0.66" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "18fb31db3f9bddb2ea821cde30a9f70117e3f119938b5ee630b7403aa6e2ead9" +dependencies = [ + "unicode-ident", +] + +[[package]] +name = "quick-protobuf" +version = "0.8.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9d6da84cc204722a989e01ba2f6e1e276e190f22263d0cb6ce8526fcdb0d2e1f" +dependencies = [ + "byteorder", +] + +[[package]] +name = "quote" +version = "1.0.33" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5267fca4496028628a95160fc423a33e8b2e6af8a5302579e322e4b520293cae" +dependencies = [ + "proc-macro2", +] + +[[package]] +name = "rand" +version = "0.8.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404" +dependencies = [ + "libc", + "rand_chacha", + "rand_core", +] + +[[package]] +name = "rand_chacha" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88" +dependencies = [ + "ppv-lite86", + "rand_core", +] + +[[package]] +name = "rand_core" +version = "0.6.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c" +dependencies = [ + "getrandom", +] + +[[package]] +name = "rayon" +version = "1.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"1d2df5196e37bcc87abebc0053e20787d73847bb33134a69841207dd0a47f03b" +dependencies = [ + "either", + "rayon-core", +] + +[[package]] +name = "rayon-core" +version = "1.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4b8f95bd6966f5c87776639160a66bd8ab9895d9d4ab01ddba9fc60661aebe8d" +dependencies = [ + "crossbeam-channel", + "crossbeam-deque", + "crossbeam-utils", + "num_cpus", +] + +[[package]] +name = "redox_syscall" +version = "0.3.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "567664f262709473930a4bf9e51bf2ebf3348f2e748ccc50dea20646858f8f29" +dependencies = [ + "bitflags 1.3.2", +] + +[[package]] +name = "regex" +version = "1.9.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "12de2eff854e5fa4b1295edd650e227e9d8fb0c9e90b12e7f36d6a6811791a29" +dependencies = [ + "aho-corasick", + "memchr", + "regex-automata", + "regex-syntax", +] + +[[package]] +name = "regex-automata" +version = "0.3.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "49530408a136e16e5b486e883fbb6ba058e8e4e8ae6621a77b048b314336e629" +dependencies = [ + "aho-corasick", + "memchr", + "regex-syntax", +] + +[[package]] +name = "regex-syntax" +version = "0.7.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dbb5fb1acd8a1a18b3dd5be62d25485eb770e05afb408a9627d14d451bae12da" + +[[package]] +name = "reqwest" +version = "0.11.20" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3e9ad3fe7488d7e34558a2033d45a0c90b72d97b4f80705666fea71472e2e6a1" +dependencies = [ + "base64", + "bytes", + "encoding_rs", + "futures-core", + "futures-util", + "h2", + "http", + "http-body", + "hyper", + "hyper-tls", + "ipnet", + "js-sys", + "log", + "mime", + "native-tls", + "once_cell", + "percent-encoding", + "pin-project-lite", + "serde", + "serde_json", + "serde_urlencoded", + "tokio", + "tokio-native-tls", + "tokio-util", + "tower-service", + "url", + "wasm-bindgen", + "wasm-bindgen-futures", + "wasm-streams", + "web-sys", + "winreg", +] + +[[package]] +name = "ring" +version = "0.16.20" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3053cf52e236a3ed746dfc745aa9cacf1b791d846bdaf412f60a8d7d6e17c8fc" +dependencies = [ + "cc", + "libc", + "once_cell", + "spin", + "untrusted", + "web-sys", + "winapi", +] + +[[package]] +name = "rpassword" +version = "7.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6678cf63ab3491898c0d021b493c94c9b221d91295294a2a5746eacbe5928322" +dependencies = [ + "libc", + "rtoolbox", + "winapi", +] + +[[package]] +name = "rsa" +version = "0.9.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6ab43bb47d23c1a631b4b680199a45255dce26fa9ab2fa902581f624ff13e6a8" +dependencies = [ + "byteorder", + "const-oid", + "digest", + "num-bigint-dig", + "num-integer", + "num-iter", + "num-traits", + "pkcs1", + "pkcs8", + "rand_core", + "sha1", + "sha2", + "signature", + "spki", + "subtle", + "zeroize", +] + +[[package]] +name = "rtoolbox" +version = "0.0.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "034e22c514f5c0cb8a10ff341b9b048b5ceb21591f31c8f44c43b960f9b3524a" +dependencies = [ + "libc", + "winapi", +] + +[[package]] +name = "rustc-demangle" +version = "0.1.23" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d626bb9dae77e28219937af045c257c28bfd3f69333c512553507f5f9798cb76" + +[[package]] 
+name = "rustix" +version = "0.38.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9bfe0f2582b4931a45d1fa608f8a8722e8b3c7ac54dd6d5f3b3212791fedef49" +dependencies = [ + "bitflags 2.4.0", + "errno", + "libc", + "linux-raw-sys", + "windows-sys", +] + +[[package]] +name = "ryu" +version = "1.0.15" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1ad4cc8da4ef723ed60bced201181d83791ad433213d8c24efffda1eec85d741" + +[[package]] +name = "salsa20" +version = "0.10.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "97a22f5af31f73a954c10289c93e8a50cc23d971e80ee446f1f6f7137a088213" +dependencies = [ + "cipher", +] + +[[package]] +name = "schannel" +version = "0.1.22" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0c3733bf4cf7ea0880754e19cb5a462007c4a8c1914bff372ccc95b464f1df88" +dependencies = [ + "windows-sys", +] + +[[package]] +name = "scopeguard" +version = "1.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49" + +[[package]] +name = "scrypt" +version = "0.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0516a385866c09368f0b5bcd1caff3366aace790fcd46e2bb032697bb172fd1f" +dependencies = [ + "pbkdf2", + "salsa20", + "sha2", +] + +[[package]] +name = "security-framework" +version = "2.9.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "05b64fb303737d99b81884b2c63433e9ae28abebe5eb5045dcdd175dc2ecf4de" +dependencies = [ + "bitflags 1.3.2", + "core-foundation", + "core-foundation-sys", + "libc", + "security-framework-sys", +] + +[[package]] +name = "security-framework-sys" +version = "2.9.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e932934257d3b408ed8f30db49d85ea163bfe74961f017f405b025af298f0c7a" +dependencies = [ + "core-foundation-sys", + "libc", +] + +[[package]] +name = "serde" +version = "1.0.188" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cf9e0fcba69a370eed61bcf2b728575f726b50b55cba78064753d708ddc7549e" +dependencies = [ + "serde_derive", +] + +[[package]] +name = "serde_derive" +version = "1.0.188" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4eca7ac642d82aa35b60049a6eccb4be6be75e599bd2e9adb5f875a737654af2" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "serde_json" +version = "1.0.105" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "693151e1ac27563d6dbcec9dee9fbd5da8539b20fa14ad3752b2e6d363ace360" +dependencies = [ + "itoa", + "ryu", + "serde", +] + +[[package]] +name = "serde_spanned" +version = "0.6.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "96426c9936fd7a0124915f9185ea1d20aa9445cc9821142f0a73bc9207a2e186" +dependencies = [ + "serde", +] + +[[package]] +name = "serde_urlencoded" +version = "0.7.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d3491c14715ca2294c4d6a88f15e84739788c1d030eed8c110436aafdaa2f3fd" +dependencies = [ + "form_urlencoded", + "itoa", + "ryu", + "serde", +] + +[[package]] +name = "sha1" +version = "0.10.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f04293dc80c3993519f2d7f6f511707ee7094fe0c6d3406feb330cdb3540eba3" +dependencies = [ + "cfg-if", + "cpufeatures", + "digest", +] + +[[package]] +name = "sha2" 
+version = "0.10.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "479fb9d862239e610720565ca91403019f2f00410f1864c5aa7479b950a76ed8" +dependencies = [ + "cfg-if", + "cpufeatures", + "digest", +] + +[[package]] +name = "signal-hook-registry" +version = "1.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d8229b473baa5980ac72ef434c4415e70c4b5e71b423043adb4ba059f89c99a1" +dependencies = [ + "libc", +] + +[[package]] +name = "signature" +version = "2.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5e1788eed21689f9cf370582dfc467ef36ed9c707f073528ddafa8d83e3b8500" +dependencies = [ + "digest", + "rand_core", +] + +[[package]] +name = "siphasher" +version = "0.3.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "38b58827f4464d87d377d175e90bf58eb00fd8716ff0a62f80356b5e61555d0d" + +[[package]] +name = "slab" +version = "0.4.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8f92a496fb766b417c996b9c5e57daf2f7ad3b0bebe1ccfca4856390e3d3bb67" +dependencies = [ + "autocfg", +] + +[[package]] +name = "smallvec" +version = "1.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "62bb4feee49fdd9f707ef802e22365a35de4b7b299de4763d44bfea899442ff9" + +[[package]] +name = "socket2" +version = "0.4.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "64a4a911eed85daf18834cfaa86a79b7d266ff93ff5ba14005426219480ed662" +dependencies = [ + "libc", + "winapi", +] + +[[package]] +name = "socket2" +version = "0.5.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2538b18701741680e0322a2302176d3253a35388e2e62f172f64f4f16605f877" +dependencies = [ + "libc", + "windows-sys", +] + +[[package]] +name = "spin" +version = "0.5.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6e63cff320ae2c57904679ba7cb63280a3dc4613885beafb148ee7bf9aa9042d" + +[[package]] +name = "spki" +version = "0.7.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9d1e996ef02c474957d681f1b05213dfb0abab947b446a62d37770b23500184a" +dependencies = [ + "base64ct", + "der", +] + +[[package]] +name = "static_assertions" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f" + +[[package]] +name = "strsim" +version = "0.10.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "73473c0e59e6d5812c5dfe2a064a6444949f089e20eec9a2e5506596494e4623" + +[[package]] +name = "subtle" +version = "2.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc" + +[[package]] +name = "syn" +version = "2.0.29" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c324c494eba9d92503e6f1ef2e6df781e78f6a7705a0202d9801b198807d518a" +dependencies = [ + "proc-macro2", + "quote", + "unicode-ident", +] + +[[package]] +name = "tempfile" +version = "3.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cb94d2f3cc536af71caac6b6fcebf65860b347e7ce0cc9ebe8f70d3e521054ef" +dependencies = [ + "cfg-if", + "fastrand", + "redox_syscall", + "rustix", + "windows-sys", +] + +[[package]] +name = "thiserror" +version = "1.0.47" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"97a802ec30afc17eee47b2855fc72e0c4cd62be9b4efe6591edde0ec5bd68d8f" +dependencies = [ + "thiserror-impl", +] + +[[package]] +name = "thiserror-impl" +version = "1.0.47" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6bb623b56e39ab7dcd4b1b98bb6c8f8d907ed255b18de254088016b27a8ee19b" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "tinyvec" +version = "1.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "87cc5ceb3875bb20c2890005a4e226a4651264a5c75edb2421b52861a0a0cb50" +dependencies = [ + "tinyvec_macros", +] + +[[package]] +name = "tinyvec_macros" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20" + +[[package]] +name = "tokio" +version = "1.32.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "17ed6077ed6cd6c74735e21f37eb16dc3935f96878b1fe961074089cc80893f9" +dependencies = [ + "backtrace", + "bytes", + "libc", + "mio", + "num_cpus", + "pin-project-lite", + "signal-hook-registry", + "socket2 0.5.3", + "tokio-macros", + "windows-sys", +] + +[[package]] +name = "tokio-macros" +version = "2.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "630bdcf245f78637c13ec01ffae6187cca34625e8c63150d424b59e55af2675e" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "tokio-native-tls" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bbae76ab933c85776efabc971569dd6119c580d8f5d448769dec1764bf796ef2" +dependencies = [ + "native-tls", + "tokio", +] + +[[package]] +name = "tokio-stream" +version = "0.1.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "397c988d37662c7dda6d2208364a706264bf3d6138b11d436cbac0ad38832842" +dependencies = [ + "futures-core", + "pin-project-lite", + "tokio", +] + +[[package]] +name = "tokio-util" +version = "0.7.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "806fe8c2c87eccc8b3267cbae29ed3ab2d0bd37fca70ab622e46aaa9375ddb7d" +dependencies = [ + "bytes", + "futures-core", + "futures-sink", + "pin-project-lite", + "tokio", + "tracing", +] + +[[package]] +name = "toml_datetime" +version = "0.6.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7cda73e2f1397b1262d6dfdcef8aafae14d1de7748d66822d3bfeeb6d03e5e4b" +dependencies = [ + "serde", +] + +[[package]] +name = "toml_edit" +version = "0.19.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f8123f27e969974a3dfba720fdb560be359f57b44302d280ba72e76a74480e8a" +dependencies = [ + "indexmap 2.0.0", + "serde", + "serde_spanned", + "toml_datetime", + "winnow", +] + +[[package]] +name = "topological-sort" +version = "0.2.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ea68304e134ecd095ac6c3574494fc62b909f416c4fca77e440530221e549d3d" + +[[package]] +name = "tower-service" +version = "0.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b6bc1c9ce2b5135ac7f93c72918fc37feb872bdc6a5533a8b85eb4b86bfdae52" + +[[package]] +name = "tracing" +version = "0.1.37" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8ce8c33a8d48bd45d624a6e523445fd21ec13d3653cd51f681abf67418f54eb8" +dependencies = [ + "cfg-if", + "pin-project-lite", + "tracing-core", +] + +[[package]] +name = "tracing-core" 
+version = "0.1.31" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0955b8137a1df6f1a2e9a37d8a6656291ff0297c1a97c24e0d8425fe2312f79a" +dependencies = [ + "once_cell", +] + +[[package]] +name = "try-lock" +version = "0.2.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3528ecfd12c466c6f163363caf2d02a71161dd5e1cc6ae7b34207ea2d42d81ed" + +[[package]] +name = "twox-hash" +version = "1.6.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "97fee6b57c6a41524a810daee9286c02d7752c4253064d0b05472833a438f675" +dependencies = [ + "cfg-if", + "static_assertions", +] + +[[package]] +name = "typenum" +version = "1.16.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "497961ef93d974e23eb6f433eb5fe1b7930b659f06d12dec6fc44a8f554c0bba" + +[[package]] +name = "unicode-bidi" +version = "0.3.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "92888ba5573ff080736b3648696b70cafad7d250551175acbaa4e0385b3e1460" + +[[package]] +name = "unicode-ident" +version = "1.0.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "301abaae475aa91687eb82514b328ab47a211a533026cb25fc3e519b86adfc3c" + +[[package]] +name = "unicode-normalization" +version = "0.1.22" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5c5713f0fc4b5db668a2ac63cdb7bb4469d8c9fed047b1d0292cc7b0ce2ba921" +dependencies = [ + "tinyvec", +] + +[[package]] +name = "untrusted" +version = "0.7.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a156c684c91ea7d62626509bce3cb4e1d9ed5c4d978f7b4352658f96a4c26b4a" + +[[package]] +name = "url" +version = "2.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "143b538f18257fac9cad154828a57c6bf5157e1aa604d4816b5995bf6de87ae5" +dependencies = [ + "form_urlencoded", + "idna", + "percent-encoding", +] + +[[package]] +name = "utf8parse" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "711b9620af191e0cdc7468a8d14e709c3dcdb115b36f838e601583af800a370a" + +[[package]] +name = "vcpkg" +version = "0.2.15" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426" + +[[package]] +name = "version_check" +version = "0.9.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f" + +[[package]] +name = "want" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bfa7760aed19e106de2c7c0b581b509f2f25d3dacaf737cb82ac61bc6d760b0e" +dependencies = [ + "try-lock", +] + +[[package]] +name = "wasi" +version = "0.11.0+wasi-snapshot-preview1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423" + +[[package]] +name = "wasm-bindgen" +version = "0.2.87" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7706a72ab36d8cb1f80ffbf0e071533974a60d0a308d01a5d0375bf60499a342" +dependencies = [ + "cfg-if", + "wasm-bindgen-macro", +] + +[[package]] +name = "wasm-bindgen-backend" +version = "0.2.87" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5ef2b6d3c510e9625e5fe6f509ab07d66a760f0885d858736483c32ed7809abd" +dependencies = [ + "bumpalo", + "log", + "once_cell", + 
"proc-macro2", + "quote", + "syn", + "wasm-bindgen-shared", +] + +[[package]] +name = "wasm-bindgen-futures" +version = "0.4.37" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c02dbc21516f9f1f04f187958890d7e6026df8d16540b7ad9492bc34a67cea03" +dependencies = [ + "cfg-if", + "js-sys", + "wasm-bindgen", + "web-sys", +] + +[[package]] +name = "wasm-bindgen-macro" +version = "0.2.87" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dee495e55982a3bd48105a7b947fd2a9b4a8ae3010041b9e0faab3f9cd028f1d" +dependencies = [ + "quote", + "wasm-bindgen-macro-support", +] + +[[package]] +name = "wasm-bindgen-macro-support" +version = "0.2.87" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "54681b18a46765f095758388f2d0cf16eb8d4169b639ab575a8f5693af210c7b" +dependencies = [ + "proc-macro2", + "quote", + "syn", + "wasm-bindgen-backend", + "wasm-bindgen-shared", +] + +[[package]] +name = "wasm-bindgen-shared" +version = "0.2.87" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ca6ad05a4870b2bf5fe995117d3728437bd27d7cd5f06f13c17443ef369775a1" + +[[package]] +name = "wasm-streams" +version = "0.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b4609d447824375f43e1ffbc051b50ad8f4b3ae8219680c94452ea05eb240ac7" +dependencies = [ + "futures-util", + "js-sys", + "wasm-bindgen", + "wasm-bindgen-futures", + "web-sys", +] + +[[package]] +name = "web-sys" +version = "0.3.64" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9b85cbef8c220a6abc02aefd892dfc0fc23afb1c6a426316ec33253a3877249b" +dependencies = [ + "js-sys", + "wasm-bindgen", +] + +[[package]] +name = "winapi" +version = "0.3.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419" +dependencies = [ + "winapi-i686-pc-windows-gnu", + "winapi-x86_64-pc-windows-gnu", +] + +[[package]] +name = "winapi-i686-pc-windows-gnu" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" + +[[package]] +name = "winapi-x86_64-pc-windows-gnu" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" + +[[package]] +name = "windows-sys" +version = "0.48.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "677d2418bec65e3338edb076e806bc1ec15693c5d0104683f2efe857f61056a9" +dependencies = [ + "windows-targets", +] + +[[package]] +name = "windows-targets" +version = "0.48.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9a2fa6e2155d7247be68c096456083145c183cbbbc2764150dda45a87197940c" +dependencies = [ + "windows_aarch64_gnullvm", + "windows_aarch64_msvc", + "windows_i686_gnu", + "windows_i686_msvc", + "windows_x86_64_gnu", + "windows_x86_64_gnullvm", + "windows_x86_64_msvc", +] + +[[package]] +name = "windows_aarch64_gnullvm" +version = "0.48.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2b38e32f0abccf9987a4e3079dfb67dcd799fb61361e53e2882c3cbaf0d905d8" + +[[package]] +name = "windows_aarch64_msvc" +version = "0.48.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dc35310971f3b2dbbf3f0690a219f40e2d9afcf64f9ab7cc1be722937c26b4bc" + +[[package]] +name = "windows_i686_gnu" 
+version = "0.48.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a75915e7def60c94dcef72200b9a8e58e5091744960da64ec734a6c6e9b3743e" + +[[package]] +name = "windows_i686_msvc" +version = "0.48.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8f55c233f70c4b27f66c523580f78f1004e8b5a8b659e05a4eb49d4166cca406" + +[[package]] +name = "windows_x86_64_gnu" +version = "0.48.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "53d40abd2583d23e4718fddf1ebec84dbff8381c07cae67ff7768bbf19c6718e" + +[[package]] +name = "windows_x86_64_gnullvm" +version = "0.48.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0b7b52767868a23d5bab768e390dc5f5c55825b6d30b86c844ff2dc7414044cc" + +[[package]] +name = "windows_x86_64_msvc" +version = "0.48.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ed94fce61571a4006852b7389a063ab983c02eb1bb37b47f8272ce92d06d9538" + +[[package]] +name = "winnow" +version = "0.5.15" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7c2e3184b9c4e92ad5167ca73039d0c42476302ab603e2fec4487511f38ccefc" +dependencies = [ + "memchr", +] + +[[package]] +name = "winreg" +version = "0.50.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "524e57b2c537c0f9b1e69f1965311ec12182b4122e45035b1508cd24d2adadb1" +dependencies = [ + "cfg-if", + "windows-sys", +] + +[[package]] +name = "x509-cert" +version = "0.2.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "25eefca1d99701da3a57feb07e5079fc62abba059fc139e98c13bbb250f3ef29" +dependencies = [ + "const-oid", + "der", + "sha1", + "signature", + "spki", +] + +[[package]] +name = "xz2" +version = "0.1.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "388c44dc09d76f1536602ead6d325eb532f5c122f17782bd57fb47baeeb767e2" +dependencies = [ + "lzma-sys", +] + +[[package]] +name = "zeroize" +version = "1.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2a0956f1ba7c7909bfb66c2e9e4124ab6f6482560f6628b5aaeba39207c9aad9" + +[[package]] +name = "zip" +version = "0.6.6" +source = "git+https://github.com/chenxiaolong/zip?rev=989101f9384b9e94e36e6e9e0f51908fdf98bde6#989101f9384b9e94e36e6e9e0f51908fdf98bde6" +dependencies = [ + "byteorder", + "crc32fast", + "crossbeam-utils", + "flate2", +] diff --git a/Cargo.toml b/Cargo.toml new file mode 100644 index 0000000..c8d0db8 --- /dev/null +++ b/Cargo.toml @@ -0,0 +1,69 @@ +[package] +name = "avbroot" +version = "0.1.0" +license = "GPL-3.0-only" +edition = "2021" + +# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html + +[dependencies] +anyhow = "1.0.75" +base64 = "0.21.3" +byteorder = "1.4.3" +bzip2 = "0.4.4" +clap = { version = "4.4.1", features = ["derive"] } +clap_complete = "4.4.0" +cms = { version = "0.2.2", features = ["std"] } +const-oid = "0.9.5" +ctrlc = "3.4.0" +flate2 = "1.0.27" +hex = "0.4.3" +lz4_flex = "0.11.1" +memchr = "2.6.0" +num-bigint-dig = "0.8.4" +num-traits = "0.2.16" +phf = { version = "0.11.2", features = ["macros"] } +pkcs8 = { version = "0.10.2", features = ["encryption", "pem"] } +quick-protobuf = "0.8.1" +rand = "0.8.5" +rayon = "1.7.0" +regex = { version = "1.9.4", default-features = false, features = ["perf", "std"] } +# We use ring instead of sha2 for sha256 digest computation of large files +# because sha2 is significantly slower on older x86_64 
CPUs without the SHA-NI +# instructions. sha2 is still used for signing purposes. +# https://github.com/RustCrypto/hashes/issues/327 +ring = "0.16.20" +rpassword = "7.2.0" +rsa = { version = "0.9.2", features = ["sha1", "sha2"] } +serde = { version = "1.0.188", features = ["derive"] } +sha1 = "0.10.5" +sha2 = "0.10.7" +tempfile = "3.8.0" +thiserror = "1.0.47" +toml_edit = { version = "0.19.14", features = ["serde"] } +topological-sort = "0.2.2" +x509-cert = { version = "0.2.4", features = ["builder"] } +xz2 = "0.1.7" + +# https://github.com/zip-rs/zip/pull/383 +[dependencies.zip] +git = "https://github.com/chenxiaolong/zip" +rev = "989101f9384b9e94e36e6e9e0f51908fdf98bde6" +default-features = false +features = ["deflate"] + +[target.'cfg(unix)'.dependencies] +rustix = { version = "0.38.9", default-features = false, features = ["process"] } + +[build-dependencies] +# Disable the clap feature since it pulls in an ancient version of clap. +pb-rs = { version = "0.10.0", default-features = false } + +[dev-dependencies] +assert_matches = "1.5.0" + +[features] +static = ["bzip2/static", "xz2/static"] + +[workspace] +members = ["e2e"] diff --git a/README.extra.md b/README.extra.md new file mode 100644 index 0000000..46d0a76 --- /dev/null +++ b/README.extra.md @@ -0,0 +1,83 @@ +# avbroot extra + +avbroot includes several feature-complete parsers for various things, like boot images. Some of these are exposed as extra subcommands. They aren't needed for normal OTA patching, but may be useful in other scenarios. + +Note that while avbroot maintains a stable command line interface for the patching-related subcommands, these extra subcommands do not have backwards compatibility guarantees. + +## `avbroot avb` + +### Showing vbmeta header and footer information + +```bash +avbroot avb dump -i <input> +``` + +This subcommand shows all of the vbmeta header and footer fields. `vbmeta` partition images will only have a header, while partitions with actual data (eg. boot images) will have both a header and a footer. + +### Verifying AVB hashes and signatures + +```bash +avbroot avb verify -i <input> -p <public key> +``` + +This subcommand verifies the vbmeta header signature and the hashes for all vbmeta descriptors (including hashtree descriptors). If the vbmeta image has a chain descriptor for another partition, that partition image will be verified as well (recursively). All partitions are expected to be in the same directory as the vbmeta image being verified. + +If `-p` is omitted, the signatures and hashes are checked only for validity, not that they are trusted. + +## `avbroot boot` + +### Unpacking a boot image + +```bash +avbroot boot unpack -i <input> +``` + +This subcommand unpacks all of the components of the boot image into the current directory by default (see `--help`). The header fields are saved to `header.toml` and each blob section is saved to a separate file. Each blob is written to disk as-is, without decompression. + +### Packing a boot image + +```bash +avbroot boot pack -o <output> +``` + +This subcommand packs a new boot image from the individual components in the current directory by default (see `--help`). The default input filenames are the same as the output filenames for the `unpack` subcommand. + +### Repacking a boot image + +```bash +avbroot boot repack -i <input> -o <output> +``` + +This subcommand repacks a boot image without writing the individual components to disk first. This is useful for roundtrip testing of avbroot's boot image parser. The output should be identical to the input, minus any footers, like the AVB footer.
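+ +For example, a quick roundtrip check might look like this (hypothetical filenames; an exact byte-for-byte match is only expected when the input image has no AVB footer): + +```bash +avbroot boot repack -i boot.img -o boot.repacked.img +cmp boot.img boot.repacked.img +```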
+ +### Showing information about a boot image + +```bash +avbroot boot info -i <input> +``` + +All of the `boot` subcommands show the boot image information. This specific subcommand just does it without performing any other operation. To show avbroot's internal representation of the information, pass in `-d`. + +## `avbroot ramdisk` + +### Dumping a cpio archive + +```bash +avbroot ramdisk dump -i <input> +``` + +This subcommand dumps all information about a cpio archive to stdout. This includes the compression format, all header fields (including the trailer entry), and all data. If an entry's data can be decoded as UTF-8, then it is printed out as text. Otherwise, the binary data is printed out `\x##`-encoded for non-ASCII bytes. The escape-encoded data is truncated to 512 bytes by default to avoid outputting too much data, but this behavior can be disabled with `--no-truncate`. + +### Repacking a cpio archive + +```bash +avbroot ramdisk repack -i <input> -o <output> +``` + +This subcommand repacks a cpio archive, including recompression if needed. This is useful for roundtrip testing of avbroot's cpio parser and compression handling. The uncompressed output should be identical to the uncompressed input, except: + +* files are sorted by name +* inodes are reassigned, starting from 300000 +* there is no excess padding at the end of the file + +The compressed output may differ from what other tools produce due to differences in compression levels and header metadata. avbroot avoids specifying header information where possible (eg. gzip timestamp) for reproducibility. diff --git a/README.md b/README.md index c232f29..5c1f8f4 100644 --- a/README.md +++ b/README.md @@ -4,6 +4,8 @@ avbroot is a program for patching Android A/B-style OTA images for root access w Having a good understanding of how AVB and A/B OTAs work is recommended prior to using avbroot. At the very least, please make sure the [warnings and caveats](#warnings-and-caveats) are well-understood to avoid the risk of hard bricking. +**NOTE:** avbroot 2.0 has been rewritten in Rust and no longer relies on any AOSP code. The CLI is fully backwards compatible, but the old Python implementation can be found in the `python` branch if needed. + ## Patches avbroot applies two patches to the boot images: @@ -44,94 +46,36 @@ The boot-related components are signed with an AVB key and OTA-related component 1. Generate the AVB and OTA signing keys: ```bash - openssl genrsa 4096 | openssl pkcs8 -topk8 -scrypt -out avb.key - openssl genrsa 4096 | openssl pkcs8 -topk8 -scrypt -out ota.key + avbroot key generate-key -o avb.key + avbroot key generate-key -o ota.key ``` 2. Convert the public key portion of the AVB signing key to the AVB public key metadata format. This is the format that the bootloader requires when setting the custom root of trust. ```bash - python /path/to/avbroot/external/avb/avbtool.py extract_public_key --key avb.key --output avb_pkmd.bin + avbroot key extract-avb -k avb.key -o avb_pkmd.bin ``` 3. Generate a self-signed certificate for the OTA signing key. This is used by recovery for verifying OTA updates. ```bash - openssl req -new -x509 -sha256 -key ota.key -out ota.crt -days 10000 -subj '/CN=OTA/' - ``` - -## Installing dependencies - -avbroot depends on the `openssl` command line tool and the `lz4` and `protobuf` Python libraries. Also, Python 3.9 or newer is required.
- -### Linux - -On Linux, the dependencies can be installed from the distro's package manager: - -| Distro | Command | -|------------|------------------------------------------------------------| -| Alpine | `sudo apk add openssl py3-lz4 py3-protobuf` | -| Arch Linux | `sudo pacman -S openssl python-lz4 python-protobuf` | -| Debian | `sudo apt install openssl python3-lz4 python3-protobuf` | -| Fedora | `sudo dnf install openssl python3-lz4 python3-protobuf` | -| OpenSUSE | `sudo zypper install openssl python3-lz4 python3-protobuf` | -| Ubuntu | (Same as Debian) | - -### Windows - -Installing openssl and python from the [Scoop package manager](https://scoop.sh/) is suggested. - -```powershell -scoop install openssl python -``` - -Installing from other sources should work as well, but it might be necessary to manually add `openssl`'s installation directory to the `PATH` environment variable. - -To install the Python dependencies: - -1. Create a virtual environment (replacing `` with the path where it should be created): - - ```powershell - python -m venv + avbroot key generate-cert -k ota.key -o ota.crt ``` -2. Activate the virtual environment. This must be done in every new terminal session before running avbroot. - - ```powershell - . \Scripts\Activate.ps1 - ``` - -3. Install the dependencies. - - ```powershell - pip install -r requirements.txt - ``` +The commands above are provided for convenience. avbroot is compatible with any standard PKCS8-encoded 4096-bit RSA private key and X509 certificate (eg. like those generated by openssl). ## Usage 1. Make sure the caveats listed above are understood. It is possible to hard brick by doing the wrong thing! -2. Clone this git repo recursively, as there are several AOSP repositories included as submodules in the `external/` directory. - - ```bash - git clone --recursive https://github.com/chenxiaolong/avbroot.git - ``` - - If the repo is already cloned, run the following command instead to fetch the submodules: +2. Download the latest version from the [releases page](https://github.com/chenxiaolong/avbroot/releases). To verify the digital signature, see the [verifying digital signatures](#verifying-digital-signatures) section. - ```bash - git submodule update --init --recursive - ``` - -3. Follow the steps to [install dependencies](#installing-dependencies). +3. Follow the steps to [generate signing keys](#generating-keys). -4. Follow the steps to [generate signing keys](#generating-keys). - -5. Patch the full OTA ZIP. +4. Patch the full OTA ZIP. ```bash - python avbroot.py \ - patch \ + avbroot ota patch \ --input /path/to/ota.zip \ --privkey-avb /path/to/avb.key \ --privkey-ota /path/to/ota.key \ @@ -145,18 +89,17 @@ To install the Python dependencies: If you prefer to use an existing boot image patched by the Magisk app or you want to use KernelSU, see the [advanced usage section](#advanced-usage). -6. **[Initial setup only]** Unlock the bootloader. This will trigger a data wipe. +5. **[Initial setup only]** Unlock the bootloader. This will trigger a data wipe. -7. **[Initial setup only]** Extract the patched images from the patched OTA. +6. **[Initial setup only]** Extract the patched images from the patched OTA. ```bash - python avbroot.py \ - extract \ + avbroot ota extract \ --input /path/to/ota.zip.patched \ --directory extracted ``` -8. **[Initial setup only]** Flash the patched images and the AVB public key metadata. This sets up the custom root of trust. Future updates are done by simply sideloading patched OTA zips. +7. 
**[Initial setup only]** Flash the patched images and the AVB public key metadata. This sets up the custom root of trust. Future updates are done by simply sideloading patched OTA zips. ```bash # Flash the boot images that were extracted @@ -172,13 +115,13 @@ To install the Python dependencies: fastboot flash avb_custom_key /path/to/avb_pkmd.bin ``` -9. **[Initial setup only]** Run `dmesg | grep libfs_avb` as root to verify that AVB is working properly. A message similar to the following is expected: +8. **[Initial setup only]** Run `dmesg | grep libfs_avb` as root to verify that AVB is working properly. A message similar to the following is expected: ```bash init: [libfs_avb]Returning avb_handle with status: Success ``` -10. **[Initial setup only]** Lock the bootloader. This will trigger a data wipe again. **Do not uncheck `OEM unlocking`!** +9. **[Initial setup only]** Lock the bootloader. This will trigger a data wipe again. **Do not uncheck `OEM unlocking`!** **WARNING**: If you are flashing CalyxOS, the setup wizard will [automatically turn off the `OEM unlocking` switch](https://github.com/CalyxOS/platform_packages_apps_SetupWizard/blob/7d2df25cedcbff83ddb608e628f9d97b38259c26/src/org/lineageos/setupwizard/SetupWizardApp.java#L135-L140). Make sure to manually reenable it again from Android's developer settings. Consider using [avbroot's `oemunlockonboot` Magisk module](#oemunlockonboot-enable-oem-unlocking-on-every-boot) to automatically ensure OEM unlocking is enabled on every boot. @@ -186,7 +129,7 @@ To install the Python dependencies: To update Android or Magisk: -1. Follow step 5 in [the previous section](#usage) to patch the new OTA (or an existing OTA with a newer Magisk APK). +1. Follow step 4 in [the previous section](#usage) to patch the new OTA (or an existing OTA with a newer Magisk APK). 2. Reboot to recovery mode. If stuck at a `No command` screen, press the volume up button once while holding down the power button. @@ -196,7 +139,7 @@ To update Android or Magisk: ## avbroot Magisk modules -avbroot's Magisk modules can be built by running: +avbroot's Magisk modules can be found on the [releases page](https://github.com/chenxiaolong/avbroot/releases) or they can be built locally by running: ```bash python modules/build.py @@ -204,8 +147,6 @@ python modules/build.py This requires Java and the Android SDK to be installed. The `ANDROID_HOME` environment variable should be set to the Android SDK path. -Alternatively, prebuilt modules can be downloaded [from GitHub Actions](https://github.com/chenxiaolong/avbroot/actions/workflows/modules.yml?query=branch%3Amaster). Select the latest workflow run and then download `avbroot-modules-` at the bottom of the page. Note that GitHub only allows downloading the file when logged in. - ### `clearotacerts`: Blocking A/B OTA Updates Unpatched OTA updates are already blocked in recovery because the original OTA certificate has been replaced with the custom certificate. To disable automatic OTAs while booted into Android, turn off `Automatic system updates` in Android's Developer Options. @@ -227,8 +168,7 @@ Magisk versions 25211 and newer require a writable partition for storing custom 1. Extract the boot image from the original/unpatched OTA: ```bash - python avbroot.py \ - extract \ + avbroot ota extract \ --input /path/to/ota.zip \ --directory . 
\ --boot-only ``` @@ -245,8 +185,7 @@ Magisk versions 25211 and newer require a writable partition for storing custom Alternatively, avbroot can print out what Magisk detected by running: ```bash - python avbroot.py \ - magisk-info \ + avbroot ota magisk-info \ --image magisk_patched-*.img ``` @@ -256,6 +195,55 @@ Magisk versions 25211 and newer require a writable partition for storing custom If it's not possible to run the Magisk app on the target device (eg. device is currently unbootable), patch and flash the OTA once using `--ignore-magisk-warnings`, follow these steps, and then repatch and reflash the OTA with `--magisk-preinit-device <device>`. +## Verifying OTAs + +To verify all signatures and hashes related to the OTA installation and AVB boot process, run: + +```bash +avbroot ota verify \ + --input /path/to/ota.zip \ + --cert-ota /path/to/ota.crt \ + --public-key-avb /path/to/avb_pkmd.bin +``` + +If the `--cert-ota` and `--public-key-avb` options are omitted, then the signatures are only checked for validity, not that they are trusted. + +## Tab completion + +Since avbroot has tons of command line options, it may be useful to set up tab completions for the shell. These configs can be generated from avbroot itself. + +#### bash + +Add to `~/.bashrc`: + +```bash +eval "$(avbroot completion -s bash)" +``` + +#### zsh + +Add to `~/.zshrc`: + +```bash +eval "$(avbroot completion -s zsh)" +``` + +#### fish + +Add to `~/.config/fish/config.fish`: + +```bash +avbroot completion -s fish | source +``` + +#### PowerShell + +Add to PowerShell's `profile.ps1` startup script: + +```powershell +Invoke-Expression (& avbroot completion -s powershell) +``` + ## Advanced Usage ### Using a prepatched boot image @@ -295,18 +283,18 @@ avbroot prompts for the private key passphrases interactively by default. To run * Supply the passphrases via files: ```bash - avbroot patch \ - --passphrase-avb-file /path/to/avb.passphrase \ - --passphrase-ota-file /path/to/ota.passphrase \ + avbroot ota patch \ + --pass-avb-file /path/to/avb.passphrase \ + --pass-ota-file /path/to/ota.passphrase \ <...> ``` On Unix-like systems, the "files" can be pipes. With shells that support process substitution (bash, zsh, etc.), the passphrase can be queried from a command (eg. querying a password manager), as shown in the example after this list. ```bash - avbroot patch \ - --passphrase-avb-file <(command to query AVB passphrase) \ - --passphrase-ota-file <(command to query OTA passphrase) \ + avbroot ota patch \ + --pass-avb-file <(command to query AVB passphrase) \ + --pass-ota-file <(command to query OTA passphrase) \ <...> ``` @@ -316,41 +304,58 @@ avbroot prompts for the private key passphrases interactively by default. To run export PASSPHRASE_AVB="the AVB passphrase" export PASSPHRASE_OTA="the OTA passphrase" - avbroot patch \ - --passphrase-avb-env-var PASSPHRASE_AVB \ - --passphrase-ota-env-var PASSPHRASE_OTA \ + avbroot ota patch \ + --pass-avb-env-var PASSPHRASE_AVB \ + --pass-ota-env-var PASSPHRASE_OTA \ <...> ``` -* Use unencrypted private keys. This is not recommended, but can be done by: - - ```bash - openssl pkcs8 -in avb.key -topk8 -nocrypt -out avb.unencrypted.key - openssl pkcs8 -in ota.key -topk8 -nocrypt -out ota.unencrypted.key - ``` +* Use unencrypted private keys. This is strongly discouraged.
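+ +For example, with a password manager CLI (the `pass` command and the entry names below are an assumed setup, not something avbroot provides): + +```bash +avbroot ota patch \ + --pass-avb-file <(pass show avbroot/avb) \ + --pass-ota-file <(pass show avbroot/ota) \ + <...> +```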
### Extracting the entire OTA To extract all images contained within the OTA's `payload.bin`, run: ```bash -python avbroot.py \ - extract \ +avbroot ota extract \ --input /path/to/ota.zip \ --directory extracted \ --all ``` -## Implementation Details +## Building from source + +Make sure the [Rust toolchain](https://www.rust-lang.org/) is installed. Then run: + +```bash +cargo build --release +``` + +The output binary is written to `target/release/avbroot`. -* avbroot relies on AOSP's avbtool and OTA utilities. These are collections of applications that aren't meant to be used as libraries, but avbroot shoehorns them in anyway. These tools are not called via CLI because avbroot requires more control over the operations being performed than what is provided via the CLI interfaces. This "integration" is incredibly hacky and will likely require changes whenever the submodules are updated to point to newer AOSP commits. +Debug builds work too, but they will run significantly slower (in the sha256 computations) due to compiler optimizations being turned off. -* AVB has two methods of handling signature verification: +By default, the build links to the system's bzip2 and liblzma libraries, which are the only external libraries avbroot depends on. To compile and statically link these two libraries, pass in `--features static`. - * An image can have an unsigned vbmeta footer, which causes the image's hash to be embedded in the (signed) root `vbmeta` image via vbmeta hash descriptors. - * An image can have a signed vbmeta footer, which causes a public key for verification to be embedded in the root `vbmeta` image via vbmeta chainload descriptors. This is meant for out-of-band updates where signed images can be updated without also updating the root `vbmeta` image. +## Verifying digital signatures - avbroot preserves whether an image uses a chainload or hash descriptor. If a boot image was previously signed, then it will be signed with the AVB key during patching. This preserves the state of the AVB rollback indices, which makes it possible to flip between the original and patched images without a factory reset while debugging avbroot (with the bootloader unlocked). +First, save the public key to a file listing the keys to be trusted. + +```bash +echo 'avbroot ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDOe6/tBnO7xZhAWXRj3ApUYgn+XZ0wnQiXM8B7tPgv4' > avbroot_trusted_keys +``` + +Then, verify the signature of the zip file using the list of trusted keys. 
+ +```bash +ssh-keygen -Y verify -f avbroot_trusted_keys -I avbroot -n file -s <file>.zip.sig < <file>.zip +``` + +If the file is successfully verified, the output will be: + +``` +Good "file" signature for avbroot with ED25519 key SHA256:Ct0HoRyrFLrnF9W+A/BKEiJmwx7yWkgaW/JvghKrboA +``` ## Contributing diff --git a/avbroot.py b/avbroot.py deleted file mode 100755 index ec4f629..0000000 --- a/avbroot.py +++ /dev/null @@ -1,6 +0,0 @@ -#!/usr/bin/env python3 - -from avbroot import main - -if __name__ == '__main__': - main.main() diff --git a/avbroot/__init__.py b/avbroot/__init__.py deleted file mode 100644 index 7ce622b..0000000 --- a/avbroot/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -import os -import sys - -external_dir = os.path.join(os.path.realpath(os.path.dirname(__file__)), - '..', 'external') - -# OTA utilities (loaded first because there are multiple common.py files and -# this is the one we need to import) -sys.path.append(os.path.join(external_dir, 'build', 'tools', 'releasetools')) -# avbtool -sys.path.append(os.path.join(external_dir, 'avb')) -# Payload protobuf -sys.path.append(os.path.join(external_dir, 'update_engine', 'scripts')) diff --git a/avbroot/boot.py b/avbroot/boot.py deleted file mode 100644 index d4e9caf..0000000 --- a/avbroot/boot.py +++ /dev/null @@ -1,480 +0,0 @@ -import hashlib -import io -import lzma -import re -import shutil -import zipfile - -import avbtool - -from . import openssl -from . import util -from . import vbmeta -from .formats import bootimage -from .formats import compression -from .formats import cpio - - -def _load_ramdisk(ramdisk): - with ( - io.BytesIO(ramdisk) as f_raw, - compression.CompressedFile(f_raw, 'rb') as f, - ): - return cpio.load(f.fp), f.format - - -def _save_ramdisk(entries, format): - with io.BytesIO() as f_raw: - with compression.CompressedFile(f_raw, 'wb', format=format) as f: - cpio.save(f.fp, entries) - - return f_raw.getvalue() - - -class BootImagePatch: - def __call__(self, image_file): - with open(image_file, 'r+b') as f: - boot_image = bootimage.load_autodetect(f) - - boot_image = self.patch(image_file, boot_image) - - f.seek(0) - f.truncate(0) - - boot_image.generate(f) - - def patch(self, image_file, boot_image): - raise NotImplementedError() - - -class MagiskRootPatch(BootImagePatch): - ''' - Root the boot image with Magisk. - ''' - - # - Half-open intervals.
- # - Versions <25102 are not supported because they're missing commit - # 1f8c063dc64806c4f7320ed66c785ff7bc116383, which would leave devices - # that use Android 13 GKIs unable to boot into recovery - # - Versions 25207 through 25210 are not supported because they used the - # RULESDEVICE config option, which stored the writable block device as an - # rdev major/minor pair, which was not consistent across reboots and was - # replaced by PREINITDEVICE - VERS_SUPPORTED = ( - util.Range(25102, 25207), - util.Range(25211, 26200), - ) - VER_PREINIT_DEVICE = util.Range(25211, VERS_SUPPORTED[-1].end) - VER_RANDOM_SEED = util.Range(25211, VERS_SUPPORTED[-1].end) - - def __init__(self, magisk_apk, preinit_device, random_seed): - self.magisk_apk = magisk_apk - self.version = self._get_version() - - self.preinit_device = preinit_device - - if random_seed is None: - # Use a hardcoded random seed by default to ensure byte-for-byte - # reproducibility - self.random_seed = 0xfedcba9876543210 - else: - self.random_seed = random_seed - - def _get_version(self): - with zipfile.ZipFile(self.magisk_apk, 'r') as z: - with z.open('assets/util_functions.sh', 'r') as f: - for line in f: - if line.startswith(b'MAGISK_VER_CODE='): - return int(line[16:].strip()) - - raise Exception('Failed to determine Magisk version from: ' - f'{self.magisk_apk}') - - def validate(self): - if not any(self.version in s for s in self.VERS_SUPPORTED): - supported = '; '.join(str(s) for s in self.VERS_SUPPORTED) - raise ValueError(f'Unsupported Magisk version {self.version} ' - f'(supported: {supported})') - - if self.preinit_device is None and \ - self.version in self.VER_PREINIT_DEVICE: - raise ValueError(f'Magisk version {self.version} ' - f'({self.VER_PREINIT_DEVICE}) requires a preinit ' - f'device to be specified') - - def patch(self, image_file, boot_image): - with zipfile.ZipFile(self.magisk_apk, 'r') as zip: - return self._patch(image_file, boot_image, zip) - - def _patch(self, image_file, boot_image, zip): - if len(boot_image.ramdisks) > 1: - raise Exception('Boot image is not expected to have ' - f'{len(boot_image.ramdisks)} ramdisks') - - # Magisk saves the original SHA1 digest in its config file - with open(image_file, 'rb') as f: - hasher = util.hash_file(f, hashlib.sha1()) - - # Load the existing ramdisk if it exists. 
If it doesn't, we have to - # generate one from scratch - if boot_image.ramdisks: - entries, ramdisk_format = _load_ramdisk(boot_image.ramdisks[0]) - else: - entries, ramdisk_format = [], compression.Format.LZ4_LEGACY - - old_entries = entries.copy() - - # Create magisk directory structure - for path, perms in ( - (b'overlay.d', 0o750), - (b'overlay.d/sbin', 0o750), - ): - entries.append(cpio.CpioEntryNew.new_directory(path, perms=perms)) - - # Delete the original init - if boot_image.ramdisks: - entries = [e for e in entries if e.name != b'init'] - - # Add magiskinit - with zip.open('lib/arm64-v8a/libmagiskinit.so', 'r') as f: - entries.append(cpio.CpioEntryNew.new_file( - b'init', perms=0o750, data=f.read())) - - # Add xz-compressed magisk32 and magisk64 - xz_files = { - 'lib/armeabi-v7a/libmagisk32.so': b'magisk32.xz', - 'lib/arm64-v8a/libmagisk64.so': b'magisk64.xz', - } - - # Add stub apk, which only exists after the Magisk commit: - # ad0e6511e11ebec65aa9b5b916e1397342850319 - if 'assets/stub.apk' in zip.namelist(): - xz_files['assets/stub.apk'] = b'stub.xz' - - for source, target in xz_files.items(): - with ( - zip.open(source, 'r') as f_in, - io.BytesIO() as f_out_raw, - ): - with lzma.open(f_out_raw, 'wb', preset=9, - check=lzma.CHECK_CRC32) as f_out: - shutil.copyfileobj(f_in, f_out) - - entries.append(cpio.CpioEntryNew.new_file( - b'overlay.d/sbin/' + target, perms=0o644, - data=f_out_raw.getvalue())) - - # Create magisk .backup directory structure - self._apply_magisk_backup(old_entries, entries) - - # Create magisk config - magisk_config = \ - b'KEEPVERITY=true\n' \ - b'KEEPFORCEENCRYPT=true\n' \ - b'PATCHVBMETAFLAG=false\n' \ - b'RECOVERYMODE=false\n' - - if self.version in self.VER_PREINIT_DEVICE: - magisk_config += b'PREINITDEVICE=%s\n' % \ - self.preinit_device.encode('ascii') - - magisk_config += b'SHA1=%s\n' % hasher.hexdigest().encode('ascii') - - if self.version in self.VER_RANDOM_SEED: - magisk_config += b'RANDOMSEED=0x%x\n' % self.random_seed - - entries.append(cpio.CpioEntryNew.new_file( - b'.backup/.magisk', perms=0o000, data=magisk_config)) - - # Repack ramdisk - new_ramdisk = _save_ramdisk(entries, ramdisk_format) - if boot_image.ramdisks: - boot_image.ramdisks[0] = new_ramdisk - else: - boot_image.ramdisks.append(new_ramdisk) - - return boot_image - - @staticmethod - def _apply_magisk_backup(old_entries, new_entries): - ''' - Compare old and new ramdisk entry lists, creating the Magisk `.backup/` - directory structure. `.backup/.rmlist` will contain a sorted list of - NULL-terminated strings, listing which files were newly added or - changed. The old entries for changed files will be added to the new - entries as `.backup/`. - - Both lists and entries within the lists may be mutated. 
- ''' - - old_by_name = {e.name: e for e in old_entries} - new_by_name = {e.name: e for e in new_entries} - - added = new_by_name.keys() - old_by_name.keys() - deleted = old_by_name.keys() - new_by_name.keys() - changed = set(n for n in old_by_name.keys() & new_by_name.keys() - if old_by_name[n].content != new_by_name[n].content) - - new_entries.append(cpio.CpioEntryNew.new_directory( - b'.backup', perms=0o000)) - - for name in deleted | changed: - entry = old_by_name[name] - entry.name = b'.backup/' + entry.name - new_entries.append(entry) - - rmlist_data = b''.join(n + b'\0' for n in sorted(added)) - new_entries.append(cpio.CpioEntryNew.new_file( - b'.backup/.rmlist', perms=0o000, data=rmlist_data)) - - -class OtaCertPatch(BootImagePatch): - ''' - Replace the OTA certificates in the vendor_boot image with the custom OTA - signing certificate. - ''' - - OTACERTS_PATH = b'system/etc/security/otacerts.zip' - - def __init__(self, cert_ota): - self.cert_ota = cert_ota - - def patch(self, image_file, boot_image): - found_otacerts = False - - # Check each ramdisk - for i, ramdisk in enumerate(boot_image.ramdisks): - entries, ramdisk_format = _load_ramdisk(ramdisk) - - # Fail hard if otacerts does not exist. We don't want to lock the - # user out of future updates if the OTA certificate mechanism has - # changed. - otacerts = next((e for e in entries if e.name == - self.OTACERTS_PATH), None) - if otacerts: - found_otacerts = True - else: - continue - - # Create new otacerts archive. The old certs are ignored since - # flashing a stock OTA will render the device unbootable. - with io.BytesIO() as f_zip: - with zipfile.ZipFile(f_zip, 'w') as z: - # Use zeroed-out metadata to ensure the archive is bit for - # bit reproducible across runs. - info = zipfile.ZipInfo('ota.x509.pem') - # Mark entry as created on Unix for reproducibility - info.create_system = 3 - with ( - z.open(info, 'w') as f_out, - open(self.cert_ota, 'rb') as f_in, - ): - shutil.copyfileobj(f_in, f_out) - - otacerts.content = f_zip.getvalue() - - # Repack ramdisk - boot_image.ramdisks[i] = _save_ramdisk(entries, ramdisk_format) - - if not found_otacerts: - raise Exception(f'{self.OTACERTS_PATH} not found in ramdisk') - - return boot_image - - -class PrepatchedImage(BootImagePatch): - ''' - Replace the boot image with a prepatched boot image if it is compatible. - - An image is compatible if all the non-size-related header fields are - identical and the set of included sections (eg. kernel, dtb) are the same. - The only exception is the number of ramdisk sections, which is allowed to - be higher than the original image. 
- ''' - - MIN_LEVEL = 0 - MAX_LEVEL = 2 - - VERSION_REGEX = re.compile( - b'Linux version (\d+\.\d+).\d+-(android\d+)-(\d+)-') - - def __init__(self, prepatched, fatal_level, warning_fn): - self.prepatched = prepatched - self.fatal_level = fatal_level - self.warning_fn = warning_fn - - def patch(self, image_file, boot_image): - with open(self.prepatched, 'r+b') as f: - prepatched_image = bootimage.load_autodetect(f) - - old_header = boot_image.to_dict() - new_header = prepatched_image.to_dict() - - # Level 0: Warnings that don't affect booting - # Level 1: Warnings that may affect booting - # Level 2: Warnings that are very likely to affect booting - issues = [[], [], []] - - for k in new_header.keys() - old_header.keys(): - issues[2].append(f'{k} header field was added') - - for k in old_header.keys() - new_header.keys(): - issues[2].append(f'{k} header field was removed') - - for k in old_header.keys() & new_header.keys(): - if old_header[k] != new_header[k]: - if k in ('id', 'os_version'): - level = 0 - elif k in ('cmdline', 'extra_cmdline'): - level = 1 - else: - level = 2 - - issues[level].append(f'{k} header field was changed: ' - f'{old_header[k]} -> {new_header[k]}') - - for attr in 'kernel', 'second', 'recovery_dtbo', 'dtb', 'bootconfig': - original_val = getattr(boot_image, attr) - prepatched_val = getattr(prepatched_image, attr) - - if original_val is None and prepatched_val is not None: - issues[1].append(f'{attr} section was added') - elif original_val is not None and prepatched_val is None: - issues[2].append(f'{attr} section was removed') - - if len(prepatched_image.ramdisks) < len(boot_image.ramdisks): - issues[2].append('Number of ramdisk sections decreased: ' - f'{len(boot_image.ramdisks)} -> ' - f'{len(prepatched_image.ramdisks)}') - - if boot_image.kernel is not None: - old_kmi = self._get_kmi_version(boot_image) - new_kmi = self._get_kmi_version(prepatched_image) - - if old_kmi != new_kmi: - issues[2].append('Kernel module interface version changed: ' - f'{old_kmi} -> {new_kmi}') - - warnings = [e for i in range(self.MIN_LEVEL, - min(self.MAX_LEVEL + 1, self.fatal_level)) - for e in issues[i]] - errors = [e for i in range(max(self.MIN_LEVEL, self.fatal_level), - self.MAX_LEVEL + 1) - for e in issues[i]] - - if warnings: - self.warning_fn('The prepatched boot image may not be compatible ' - 'with the original:\n' + - '\n'.join(f'- {w}' for w in warnings)) - - if errors: - raise ValueError('The prepatched boot image is not compatible ' - 'with the original:\n' + - '\n'.join(f'- {e}' for e in errors)) - - return prepatched_image - - @classmethod - def _get_kmi_version(cls, boot_image): - try: - with ( - io.BytesIO(boot_image.kernel) as f_raw, - compression.CompressedFile(f_raw, 'rb') as f, - ): - decompressed = f.fp.read() - except ValueError: - decompressed = boot_image.kernel - - m = cls.VERSION_REGEX.search(decompressed) - if not m: - return None - - return b'-'.join(m.groups()).decode('ascii') - - -def patch_boot(avb, input_path, output_path, key, passphrase, - only_if_previously_signed, patch_funcs): - ''' - Call each function in patch_funcs against a boot image with vbmeta stripped - out and then resign the image using the provided private key. 
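-
- The hash descriptor's partition name, hash algorithm, and salt, as well
- as the header's rollback index and flags, are carried over from the
- original image, so only the content and the signature change.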
- ''' - - image = avbtool.ImageHandler(input_path, read_only=True) - footer, header, descriptors, image_size = avb._parse_image(image) - - have_key_old = not not header.public_key_size - if not have_key_old and only_if_previously_signed: - key = None - - have_key_new = not not key - - if have_key_old != have_key_new: - raise Exception('Key presence does not match: %s (old) != %s (new)' % - (have_key_old, have_key_new)) - - hash = None - new_descriptors = [] - - for d in descriptors: - if isinstance(d, avbtool.AvbHashDescriptor): - if hash is not None: - raise Exception('Expected only one hash descriptor') - hash = d - else: - new_descriptors.append(d) - - if hash is None: - raise Exception('No hash descriptor found') - - algorithm_name = avbtool.lookup_algorithm_by_type(header.algorithm_type)[0] - - # Pixel 7's init_boot image is originally signed by a 2048-bit RSA key, but - # avbroot expects RSA 4096 keys - if algorithm_name == 'SHA256_RSA2048': - algorithm_name = 'SHA256_RSA4096' - - with util.open_output_file(output_path) as f: - shutil.copyfile(input_path, f.name) - - # Strip the vbmeta footer from the boot image - avb.erase_footer(f.name, False) - - # Invoke the patching functions - for patch_func in patch_funcs: - patch_func(f.name) - - # Sign the new boot image - with ( - vbmeta.smuggle_descriptors(), - openssl.inject_passphrase(passphrase), - ): - avb.add_hash_footer( - image_filename=f.name, - partition_size=image_size, - dynamic_partition_size=False, - partition_name=hash.partition_name, - hash_algorithm=hash.hash_algorithm, - salt=hash.salt.hex(), - chain_partitions=None, - algorithm_name=algorithm_name, - key_path=key, - public_key_metadata_path=None, - rollback_index=header.rollback_index, - flags=header.flags, - rollback_index_location=header.rollback_index_location, - props=None, - props_from_file=None, - kernel_cmdlines=new_descriptors, - setup_rootfs_from_kernel=None, - include_descriptors_from_image=None, - calc_max_image_size=False, - signing_helper=None, - signing_helper_with_files=None, - release_string=header.release_string, - append_to_release_string=None, - output_vbmeta_image=None, - do_not_append_vbmeta_image=False, - print_required_libavb_version=False, - use_persistent_digest=False, - do_not_use_ab=False, - ) diff --git a/avbroot/formats/bootimage.py b/avbroot/formats/bootimage.py deleted file mode 100644 index b3e3282..0000000 --- a/avbroot/formats/bootimage.py +++ /dev/null @@ -1,771 +0,0 @@ -import collections -import os -import struct -import typing - -from . import padding -from .. import util - - -BOOT_MAGIC = b'ANDROID!' 
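-
-# The sizes and struct layouts below mirror AOSP's boot image header
-# definitions (bootimg.h)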
-BOOT_NAME_SIZE = 16 -BOOT_ARGS_SIZE = 512 -BOOT_EXTRA_ARGS_SIZE = 1024 - -VENDOR_BOOT_MAGIC = b'VNDRBOOT' -VENDOR_BOOT_ARGS_SIZE = 2048 -VENDOR_BOOT_NAME_SIZE = 16 - -VENDOR_RAMDISK_TYPE_NONE = 0 -VENDOR_RAMDISK_TYPE_PLATFORM = 1 -VENDOR_RAMDISK_TYPE_RECOVERY = 2 -VENDOR_RAMDISK_TYPE_DLKM = 3 -VENDOR_RAMDISK_NAME_SIZE = 32 -VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE = 16 - -PAGE_SIZE = 4096 - -BOOT_IMG_HDR_V0 = struct.Struct( - '<' - f'{len(BOOT_MAGIC)}s' # magic - 'I' # kernel_size - 'I' # kernel_addr - 'I' # ramdisk_size - 'I' # ramdisk_addr - 'I' # second_size - 'I' # second_addr - 'I' # tags_addr - 'I' # page_size - 'I' # header_version - 'I' # os_version - f'{BOOT_NAME_SIZE}s' # name - f'{BOOT_ARGS_SIZE}s' # cmdline - f'{8 * 4}s' # id (uint32_t[8]) - f'{BOOT_EXTRA_ARGS_SIZE}s' # extra_cmdline -) - -BOOT_IMG_HDR_V1_EXTRA = struct.Struct( - '<' - 'I' # recovery_dtbo_size - 'Q' # recovery_dtbo_offset - 'I' # header_size -) - -BOOT_IMG_HDR_V2_EXTRA = struct.Struct( - '<' - 'I' # dtb_size - 'Q' # dtb_addr -) - -BOOT_IMG_HDR_V3 = struct.Struct( - '<' - f'{len(BOOT_MAGIC)}s' # magic - 'I' # kernel_size - 'I' # ramdisk_size - 'I' # os_version - 'I' # header_size - '16s' # reserved (uint32_t[4]) - 'I' # header_version - f'{BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE}s' # cmdline -) - -VENDOR_BOOT_IMG_HDR_V3 = struct.Struct( - '<' - f'{len(VENDOR_BOOT_MAGIC)}s' # magic - 'I' # header_version - 'I' # page_size - 'I' # kernel_addr - 'I' # ramdisk_addr - 'I' # vendor_ramdisk_size - f'{VENDOR_BOOT_ARGS_SIZE}s' # cmdline - 'I' # tags_addr - f'{VENDOR_BOOT_NAME_SIZE}s' # name - 'I' # header_size - 'I' # dtb_size - 'Q' # dtb_addr -) - -BOOT_IMG_HDR_V4_EXTRA = struct.Struct( - '<' - 'I' # signature_size -) - -VENDOR_BOOT_IMG_HDR_V4_EXTRA = struct.Struct( - '<' - 'I' # vendor_ramdisk_table_size - 'I' # vendor_ramdisk_table_entry_num - 'I' # vendor_ramdisk_table_entry_size - 'I' # bootconfig_size -) - -VENDOR_RAMDISK_TABLE_ENTRY_V4 = struct.Struct( - '<' - 'I' # ramdisk_size - 'I' # ramdisk_offset - 'I' # ramdisk_type - f'{VENDOR_RAMDISK_NAME_SIZE}s' # ramdisk_name - f'{VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE * 4}s' # board_id (uint32_t[]) -) - - -class WrongFormat(ValueError): - pass - - -class BootImage: - def __init__( - self, - f: typing.Optional[typing.BinaryIO] = None, - data: typing.Optional[dict[str, typing.Any]] = None, - ) -> None: - assert (f is None) != (data is None) - - self.kernel: typing.Optional[bytes] = None - self.ramdisks: list[bytes] = [] - self.second: typing.Optional[bytes] = None - self.recovery_dtbo: typing.Optional[bytes] = None - self.dtb: typing.Optional[bytes] = None - self.bootconfig: typing.Optional[bytes] = None - - if f: - self._from_file(f) - else: - self._from_dict(data) - - def _from_file(self, f: typing.BinaryIO) -> None: - raise NotImplementedError() - - def generate(self, f: typing.BinaryIO) -> None: - raise NotImplementedError() - - def _from_dict(self, data: dict[str, typing.Any]) -> None: - raise NotImplementedError() - - def to_dict(self) -> None: - raise NotImplementedError() - - -class _BootImageV0Through2(BootImage): - def _from_file(self, f: typing.BinaryIO) -> None: - # Common fields for v0 through v2 - magic, kernel_size, kernel_addr, ramdisk_size, ramdisk_addr, \ - second_size, second_addr, tags_addr, page_size, header_version, \ - os_version, name, cmdline, id, extra_cmdline = \ - BOOT_IMG_HDR_V0.unpack(util.read_exact(f, BOOT_IMG_HDR_V0.size)) - - if magic != BOOT_MAGIC: - raise WrongFormat(f'Unknown magic: {magic}') - elif header_version not in (0, 1, 2): - 
raise WrongFormat(f'Unknown header version: {header_version}') - - self.kernel_addr = kernel_addr - self.ramdisk_addr = ramdisk_addr - self.second_addr = second_addr - self.tags_addr = tags_addr - self.page_size = page_size - self.header_version = header_version - self.os_version = os_version - self.name = name.rstrip(b'\0') - self.cmdline = cmdline.rstrip(b'\0') - self.id = id - self.extra_cmdline = extra_cmdline.rstrip(b'\0') - - # Parse v1 fields - if header_version >= 1: - recovery_dtbo_size, recovery_dtbo_offset, header_size = \ - BOOT_IMG_HDR_V1_EXTRA.unpack( - util.read_exact(f, BOOT_IMG_HDR_V1_EXTRA.size)) - - self.recovery_dtbo_offset = recovery_dtbo_offset - - # Parse v2 fields - if header_version == 2: - dtb_size, dtb_addr = BOOT_IMG_HDR_V2_EXTRA.unpack( - util.read_exact(f, BOOT_IMG_HDR_V2_EXTRA.size)) - - self.dtb_addr = dtb_addr - - if header_version >= 1 and f.tell() != header_size: - raise ValueError(f'Invalid header size: {header_size}') - - padding.read_skip(f, page_size) - - if kernel_size > 0: - self.kernel = util.read_exact(f, kernel_size) - padding.read_skip(f, page_size) - - if ramdisk_size > 0: - self.ramdisks.append(util.read_exact(f, ramdisk_size)) - padding.read_skip(f, page_size) - - if second_size > 0: - self.second = util.read_exact(f, second_size) - padding.read_skip(f, page_size) - - if header_version >= 1 and recovery_dtbo_size > 0: - self.recovery_dtbo = util.read_exact(f, recovery_dtbo_size) - padding.read_skip(f, page_size) - - if header_version == 2 and dtb_size > 0: - self.dtb = util.read_exact(f, dtb_size) - padding.read_skip(f, page_size) - - def generate(self, f: typing.BinaryIO) -> None: - if len(self.ramdisks) > 1: - raise ValueError('Only one ramdisk is supported') - elif self.bootconfig is not None: - raise ValueError('Boot config is not supported') - elif self.header_version < 1 and self.recovery_dtbo is not None: - raise ValueError('Recovery dtbo/acpio is not supported') - elif self.header_version < 2 and self.dtb is not None: - raise ValueError('Device tree is not supported') - - f.write(BOOT_IMG_HDR_V0.pack( - BOOT_MAGIC, - len(self.kernel) if self.kernel else 0, - self.kernel_addr, - len(self.ramdisks[0]) if self.ramdisks else 0, - self.ramdisk_addr, - len(self.second) if self.second else 0, - self.second_addr, - self.tags_addr, - self.page_size, - self.header_version, - self.os_version, - self.name, - self.cmdline, - self.id, - self.extra_cmdline, - )) - - if self.header_version >= 1: - header_size = BOOT_IMG_HDR_V0.size - if self.header_version >= 1: - header_size += BOOT_IMG_HDR_V1_EXTRA.size - if self.header_version == 2: - header_size += BOOT_IMG_HDR_V2_EXTRA.size - - f.write(BOOT_IMG_HDR_V1_EXTRA.pack( - len(self.recovery_dtbo) if self.recovery_dtbo else 0, - self.recovery_dtbo_offset, - header_size, - )) - - if self.header_version == 2: - f.write(BOOT_IMG_HDR_V2_EXTRA.pack( - len(self.dtb) if self.dtb else 0, - self.dtb_addr, - )) - - padding.write(f, self.page_size) - - if self.kernel: - f.write(self.kernel) - padding.write(f, self.page_size) - - if self.ramdisks: - f.write(self.ramdisks[0]) - padding.write(f, self.page_size) - - if self.second: - f.write(self.second) - padding.write(f, self.page_size) - - if self.header_version >= 1 and self.recovery_dtbo: - f.write(self.recovery_dtbo) - padding.write(f, self.page_size) - - if self.header_version == 2 and self.dtb: - f.write(self.dtb) - padding.write(f, self.page_size) - - def __str__(self) -> str: - kernel_size = len(self.kernel) if self.kernel else 0 - ramdisk_size = 
len(self.ramdisks[0]) if self.ramdisks else 0 - second_size = len(self.second) if self.second else 0 - - result = \ - f'Boot image v{self.header_version} header:\n' \ - f'- Kernel size: {kernel_size}\n' \ - f'- Kernel address: 0x{self.kernel_addr:x}\n' \ - f'- Ramdisk size: {ramdisk_size}\n' \ - f'- Ramdisk address: 0x{self.ramdisk_addr:x}\n' \ - f'- Second stage size: {second_size}\n' \ - f'- Second stage address: 0x{self.second_addr:x}\n' \ - f'- Kernel tags address: 0x{self.tags_addr:x}\n' \ - f'- Page size: {self.page_size}\n' \ - f'- OS version: 0x{self.os_version:x}\n' \ - f'- Name: {self.name!r}\n' \ - f'- Kernel cmdline: {self.cmdline!r}\n' \ - f'- ID: {self.id.hex()}\n' \ - f'- Extra kernel cmdline: {self.extra_cmdline!r}\n' - - if self.header_version >= 1: - recovery_dtbo_size = len(self.recovery_dtbo) \ - if self.recovery_dtbo else 0 - - result += \ - f'- Recovery dtbo size: {recovery_dtbo_size}\n' \ - f'- Recovery dtbo offset: {self.recovery_dtbo_offset}\n' - - if self.header_version == 2: - dtb_size = len(self.dtb) if self.dtb else 0 - - result += \ - f'- Device tree size: {dtb_size}\n' \ - f'- Device tree address: {self.dtb_addr}\n' - - return result - - def _from_dict(self, data: dict[str, typing.Any]) -> None: - type = data.get('type') - header_version = data.get('header_version') - - if type != 'android': - raise WrongFormat(f'Unknown type: {type}') - elif header_version not in (0, 1, 2): - raise WrongFormat(f'Unknown header version: {header_version}') - - self.header_version = header_version - self.kernel_addr = data['kernel_address'] - self.ramdisk_addr = data['ramdisk_address'] - self.second_addr = data['second_address'] - self.tags_addr = data['tags_address'] - self.page_size = data['page_size'] - self.os_version = data['os_version'] - self.name = data['name'] - self.cmdline = data['cmdline'] - self.id = data['id'] - self.extra_cmdline = data['extra_cmdline'] - - if header_version >= 1: - self.recovery_dtbo_offset = data['recovery_dtbo_offset'] - - if self.header_version == 2: - self.dtb_addr = data['dtb_address'] - - def to_dict(self) -> dict[str, typing.Any]: - result = { - 'type': 'android', - 'header_version': self.header_version, - 'kernel_address': self.kernel_addr, - 'ramdisk_address': self.ramdisk_addr, - 'second_address': self.second_addr, - 'tags_address': self.tags_addr, - 'page_size': self.page_size, - 'os_version': self.os_version, - 'name': self.name, - 'cmdline': self.cmdline, - 'id': self.id, - 'extra_cmdline': self.extra_cmdline, - } - - if self.header_version >= 1: - result['recovery_dtbo_offset'] = self.recovery_dtbo_offset - - if self.header_version == 2: - result['dtb_address'] = self.dtb_addr - - return result - - -class _BootImageV3Through4(BootImage): - def _from_file(self, f: typing.BinaryIO) -> None: - # Common fields for both v3 and v4 - magic, kernel_size, ramdisk_size, os_version, header_size, reserved, \ - header_version, cmdline = BOOT_IMG_HDR_V3.unpack( - util.read_exact(f, BOOT_IMG_HDR_V3.size)) - - if magic != BOOT_MAGIC: - raise WrongFormat(f'Unknown magic: {magic}') - elif header_version not in (3, 4): - raise WrongFormat(f'Unknown header version: {header_version}') - - # Parse v4 fields - if header_version == 4: - signature_size, = BOOT_IMG_HDR_V4_EXTRA.unpack( - util.read_exact(f, BOOT_IMG_HDR_V4_EXTRA.size)) - - if f.tell() != header_size: - raise ValueError(f'Invalid header size: {header_size}') - - self.header_version = header_version - self.os_version = os_version - self.reserved = reserved - self.cmdline = 
cmdline.rstrip(b'\0') - - padding.read_skip(f, PAGE_SIZE) - - if kernel_size > 0: - self.kernel = util.read_exact(f, kernel_size) - padding.read_skip(f, PAGE_SIZE) - - if ramdisk_size > 0: - self.ramdisks.append(util.read_exact(f, ramdisk_size)) - padding.read_skip(f, PAGE_SIZE) - - if header_version == 4: - # Don't preserve the signature. It is only used for VTS tests and - # is not relevant for booting - f.seek(signature_size, os.SEEK_CUR) - padding.read_skip(f, PAGE_SIZE) - - def generate(self, f: typing.BinaryIO) -> None: - if len(self.ramdisks) > 1: - raise ValueError('Only one ramdisk is supported') - elif self.second is not None: - raise ValueError('Second stage bootloader is not supported') - elif self.recovery_dtbo is not None: - raise ValueError('Recovery dtbo/acpio is not supported') - elif self.dtb is not None: - raise ValueError('Device tree is not supported') - elif self.bootconfig is not None: - raise ValueError('Boot config is not supported') - - f.write(BOOT_IMG_HDR_V3.pack( - BOOT_MAGIC, - len(self.kernel) if self.kernel else 0, - len(self.ramdisks[0]) if self.ramdisks else 0, - self.os_version, - BOOT_IMG_HDR_V3.size + (BOOT_IMG_HDR_V4_EXTRA.size - if self.header_version == 4 else 0), - self.reserved, - self.header_version, - self.cmdline, - )) - - if self.header_version == 4: - f.write(BOOT_IMG_HDR_V4_EXTRA.pack( - # We don't care about the VTS signature - 0 - )) - - padding.write(f, PAGE_SIZE) - - if self.kernel: - f.write(self.kernel) - padding.write(f, PAGE_SIZE) - - if self.ramdisks: - f.write(self.ramdisks[0]) - padding.write(f, PAGE_SIZE) - - def __str__(self) -> str: - kernel_size = len(self.kernel) if self.kernel else 0 - ramdisk_size = len(self.ramdisks[0]) if self.ramdisks else 0 - - return \ - f'Boot image v{self.header_version} header:\n' \ - f'- Kernel size: {kernel_size}\n' \ - f'- Ramdisk size: {ramdisk_size}\n' \ - f'- OS version: 0x{self.os_version:x}\n' \ - f'- Reserved: {self.reserved.hex()}\n' \ - f'- Kernel cmdline: {self.cmdline!r}\n' - - def _from_dict(self, data: dict[str, typing.Any]) -> None: - type = data.get('type') - header_version = data.get('header_version') - - if type != 'android': - raise WrongFormat(f'Unknown type: {type}') - elif header_version not in (3, 4): - raise WrongFormat(f'Unknown header version: {header_version}') - - self.header_version = header_version - self.os_version = data['os_version'] - self.reserved = data['reserved'] - self.cmdline = data['cmdline'] - - def to_dict(self) -> dict[str, typing.Any]: - return { - 'type': 'android', - 'header_version': self.header_version, - 'os_version': self.os_version, - 'reserved': self.reserved, - 'cmdline': self.cmdline, - } - - -_RamdiskMeta = collections.namedtuple( - '_RamdiskMeta', ['type', 'name', 'board_id']) - - -class _VendorBootImageV3Through4(BootImage): - def _from_file(self, f: typing.BinaryIO) -> None: - # Common fields for both v3 and v4 - magic, header_version, page_size, kernel_addr, ramdisk_addr, \ - vendor_ramdisk_size, cmdline, tags_addr, name, header_size, \ - dtb_size, dtb_addr = VENDOR_BOOT_IMG_HDR_V3.unpack( - util.read_exact(f, VENDOR_BOOT_IMG_HDR_V3.size)) - - if magic != VENDOR_BOOT_MAGIC: - raise WrongFormat(f'Unknown magic: {magic}') - elif header_version not in (3, 4): - raise WrongFormat(f'Unknown header version: {header_version}') - - # Parse v4 fields - if header_version == 4: - vendor_ramdisk_table_size, vendor_ramdisk_table_entry_num, \ - vendor_ramdisk_table_entry_size, bootconfig_size = \ - VENDOR_BOOT_IMG_HDR_V4_EXTRA.unpack( - 
util.read_exact(f, VENDOR_BOOT_IMG_HDR_V4_EXTRA.size)) - - if vendor_ramdisk_table_entry_size != \ - VENDOR_RAMDISK_TABLE_ENTRY_V4.size: - raise ValueError('Invalid ramdisk table entry size: ' - f'{vendor_ramdisk_table_entry_size}') - elif vendor_ramdisk_table_size != vendor_ramdisk_table_entry_num \ - * vendor_ramdisk_table_entry_size: - raise ValueError('Invalid ramdisk table size: ' - f'{vendor_ramdisk_table_size}') - - if f.tell() != header_size: - raise ValueError(f'Invalid header size: {header_size}') - - self.page_size = page_size - self.header_version = header_version - self.kernel_addr = kernel_addr - self.ramdisk_addr = ramdisk_addr - self.cmdline = cmdline.rstrip(b'\0') - self.tags_addr = tags_addr - self.name = name.rstrip(b'\0') - self.dtb_addr = dtb_addr - - padding.read_skip(f, page_size) - - vendor_ramdisk_offset = f.tell() - - if header_version == 3: - # v3 has one big ramdisk - self.ramdisks.append(util.read_exact(f, vendor_ramdisk_size)) - else: - # v4 has multiple ramdisks, processed later - f.seek(vendor_ramdisk_size, os.SEEK_CUR) - - padding.read_skip(f, page_size) - - if dtb_size > 0: - self.dtb = util.read_exact(f, dtb_size) - padding.read_skip(f, page_size) - - if header_version == 4: - self.ramdisks_meta = [] - - total_ramdisk_size = 0 - - for _ in range(0, vendor_ramdisk_table_entry_num): - ramdisk_size, ramdisk_offset, ramdisk_type, ramdisk_name, \ - board_id = VENDOR_RAMDISK_TABLE_ENTRY_V4.unpack( - util.read_exact(f, VENDOR_RAMDISK_TABLE_ENTRY_V4.size)) - - table_offset = f.tell() - f.seek(vendor_ramdisk_offset + ramdisk_offset) - - self.ramdisks.append(util.read_exact(f, ramdisk_size)) - self.ramdisks_meta.append(_RamdiskMeta( - ramdisk_type, - ramdisk_name.rstrip(b'\0'), - board_id, - )) - - f.seek(table_offset) - - total_ramdisk_size += ramdisk_size - - if total_ramdisk_size != vendor_ramdisk_size: - raise ValueError('Invalid vendor ramdisk size: ' - f'{vendor_ramdisk_size}') - - padding.read_skip(f, page_size) - - if bootconfig_size > 0: - self.bootconfig = util.read_exact(f, bootconfig_size) - padding.read_skip(f, page_size) - - def generate(self, f: typing.BinaryIO) -> None: - if self.header_version == 3: - if len(self.ramdisks) > 1: - raise ValueError('Only one ramdisk is supported') - elif self.bootconfig is not None: - raise ValueError('Boot config is not supported') - else: - if len(self.ramdisks) != len(self.ramdisks_meta): - raise ValueError('Mismatched ramdisk and ramdisk_meta') - - if self.second is not None: - raise ValueError('Second stage bootloader is not supported') - elif self.recovery_dtbo is not None: - raise ValueError('Recovery dtbo/acpio is not supported') - - vendor_ramdisk_size = sum(len(r) for r in self.ramdisks) - - f.write(VENDOR_BOOT_IMG_HDR_V3.pack( - VENDOR_BOOT_MAGIC, - self.header_version, - self.page_size, - self.kernel_addr, - self.ramdisk_addr, - vendor_ramdisk_size, - self.cmdline, - self.tags_addr, - self.name, - VENDOR_BOOT_IMG_HDR_V3.size + ( - VENDOR_BOOT_IMG_HDR_V4_EXTRA.size - if self.header_version == 4 else 0), - len(self.dtb) if self.dtb else 0, - self.dtb_addr, - )) - - if self.header_version == 4: - f.write(VENDOR_BOOT_IMG_HDR_V4_EXTRA.pack( - len(self.ramdisks) * VENDOR_RAMDISK_TABLE_ENTRY_V4.size, - len(self.ramdisks), - VENDOR_RAMDISK_TABLE_ENTRY_V4.size, - len(self.bootconfig) if self.bootconfig else 0, - )) - - padding.write(f, self.page_size) - - for ramdisk in self.ramdisks: - f.write(ramdisk) - - padding.write(f, self.page_size) - - if self.dtb: - f.write(self.dtb) - padding.write(f, self.page_size) - - 
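- # v4 images append a ramdisk table describing each ramdisk (size,
- # offset within the ramdisk section, type, name, and board ID),
- # followed by the bootconfig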
if self.header_version == 4: - ramdisk_offset = 0 - - for ramdisk, meta in zip(self.ramdisks, self.ramdisks_meta): - f.write(VENDOR_RAMDISK_TABLE_ENTRY_V4.pack( - len(ramdisk), - ramdisk_offset, - meta.type, - meta.name, - meta.board_id, - )) - - ramdisk_offset += len(ramdisk) - - padding.write(f, self.page_size) - - if self.bootconfig: - f.write(self.bootconfig) - padding.write(f, self.page_size) - - def __str__(self) -> str: - dtb_size = len(self.dtb) if self.dtb else 0 - - result = \ - f'Vendor boot image v{self.header_version} header:\n' \ - f'- Page size: {self.page_size}\n' \ - f'- Kernel address: 0x{self.kernel_addr:x}\n' - - if self.header_version == 3: - ramdisk_size = len(self.ramdisks[0]) if self.ramdisks else 0 - result += f'- Ramdisk size: {ramdisk_size}\n' - - result += \ - f'- Ramdisk address: 0x{self.ramdisk_addr:x}\n' \ - f'- Kernel cmdline: {self.cmdline!r}\n' \ - f'- Kernel tags address: 0x{self.tags_addr:x}\n' \ - f'- Name: {self.name!r}\n' \ - f'- Device tree size: {dtb_size}\n' \ - f'- Device tree address: {self.dtb_addr}\n' - - if self.header_version == 4: - for ramdisk, meta in zip(self.ramdisks, self.ramdisks_meta): - result += \ - '- Ramdisk:\n' \ - f' - Size: {len(ramdisk)}\n' \ - f' - Type: {meta.type}\n' \ - f' - Name: {meta.name}\n' \ - f' - Board ID: {meta.board_id.hex()}\n' - - bootconfig_size = len(self.bootconfig) if self.bootconfig else 0 - - result += f'- Bootconfig size: {bootconfig_size}\n' - - return result - - def _from_dict(self, data: dict[str, typing.Any]) -> None: - type = data.get('type') - header_version = data.get('header_version') - - if type != 'vendor': - raise WrongFormat(f'Unknown type: {type}') - elif header_version not in (3, 4): - raise WrongFormat(f'Unknown header version: {header_version}') - - self.header_version = header_version - self.page_size = data['page_size'] - self.kernel_addr = data['kernel_address'] - self.ramdisk_addr = data['ramdisk_address'] - self.cmdline = data['cmdline'] - self.tags_addr = data['tags_address'] - self.name = data['name'] - self.dtb_addr = data['dtb_address'] - - if header_version == 4: - self.ramdisks_meta = [] - for meta in data['ramdisk_meta']: - self.ramdisks_meta.append(_RamdiskMeta( - meta['type'], - meta['name'], - meta['board_id'], - )) - - def to_dict(self) -> dict[str, typing.Any]: - result = { - 'type': 'vendor', - 'header_version': self.header_version, - 'page_size': self.page_size, - 'kernel_address': self.kernel_addr, - 'ramdisk_address': self.ramdisk_addr, - 'cmdline': self.cmdline, - 'tags_address': self.tags_addr, - 'name': self.name, - 'dtb_address': self.dtb_addr, - } - - if self.header_version == 4: - result['ramdisk_meta'] = [] - for meta in self.ramdisks_meta: - result['ramdisk_meta'].append({ - 'type': meta.type, - 'name': meta.name, - 'board_id': meta.board_id, - }) - - return result - - -def load_autodetect(f: typing.BinaryIO) -> BootImage: - for cls in ( - _BootImageV0Through2, - _BootImageV3Through4, - _VendorBootImageV3Through4, - ): - try: - f.seek(0) - return cls(f=f) - except WrongFormat: - continue - - raise ValueError('Unknown boot image format') - - -def create_from_dict(data: dict) -> BootImage: - for cls in ( - _BootImageV0Through2, - _BootImageV3Through4, - _VendorBootImageV3Through4, - ): - try: - return cls(data=data) - except WrongFormat: - continue - - raise ValueError('Unknown boot image format') diff --git a/avbroot/formats/compression.py b/avbroot/formats/compression.py deleted file mode 100644 index 3a06c82..0000000 --- a/avbroot/formats/compression.py +++ 
/dev/null @@ -1,187 +0,0 @@ -import enum -import gzip -import typing - -import lz4.block - -from .. import util - - -GZIP_MAGIC = b'\x1f\x8b' - - -class Lz4Legacy: - MAGIC = b'\x02\x21\x4c\x18' - MAX_BLOCK_SIZE = 8 * 1024 * 1024 - - def __init__(self, fp: typing.BinaryIO, - mode: typing.Literal['rb', 'wb'] = 'rb'): - if mode not in ('rb', 'wb'): - raise ValueError(f'Invalid mode: {mode}') - - self.fp = fp - self.mode = mode - - if mode == 'rb': - magic = util.read_exact(self.fp, len(self.MAGIC)) - if magic != self.MAGIC: - raise ValueError(f'Invalid magic: {magic!r}') - - self.rblock = b'' - self.rblock_offset = 0 - else: - self.fp.write(self.MAGIC) - - self.wblock = bytearray() - - self.file_offset = 0 - - def __enter__(self) -> 'Lz4Legacy': - return self - - def __exit__(self, *exc_args) -> None: - self.close() - - def _read_block(self) -> None: - if self.rblock_offset < len(self.rblock): - # Haven't finished reading block yet - return - - size_raw = self.fp.read(4) - if not size_raw or size_raw == self.MAGIC: - self.rblock = b'' - self.rblock_offset = 0 - return - elif len(size_raw) != 4: - raise EOFError('Failed to read block size') - - size_compressed = int.from_bytes(size_raw, 'little') - - compressed = util.read_exact(self.fp, size_compressed) - self.rblock = lz4.block.decompress(compressed, self.MAX_BLOCK_SIZE) - self.rblock_offset = 0 - - def _write_block(self, force=False) -> None: - if not force and len(self.wblock) < self.MAX_BLOCK_SIZE: - # Block not fully filled yet - return - - compressed = lz4.block.compress( - self.wblock, - mode='high_compression', - compression=12, - store_size=False, - ) - - self.fp.write(len(compressed).to_bytes(4, 'little')) - self.fp.write(compressed) - - self.wblock.clear() - - def read(self, size=None) -> bytes: - assert self.mode == 'rb' - - result = bytearray() - - while size is None or size > 0: - self._read_block() - - to_read = len(self.rblock) - self.rblock_offset - if to_read == 0: - # EOF - break - elif size is not None: - to_read = min(to_read, size) - - result.extend(self.rblock[self.rblock_offset: - self.rblock_offset + to_read]) - - self.rblock_offset += to_read - self.file_offset += to_read - - if size is not None: - size -= to_read - - return result - - def write(self, data: bytes) -> int: - assert self.mode == 'wb' - - offset = 0 - - while offset < len(data): - self._write_block() - - to_write = min( - self.MAX_BLOCK_SIZE - len(self.wblock), - len(data) - offset, - ) - - self.wblock.extend(data[offset:offset + to_write]) - - self.file_offset += to_write - offset += to_write - - return len(data) - - def flush(self) -> None: - assert self.mode == 'wb' - - self._write_block(force=True) - - def close(self) -> None: - try: - if self.mode == 'wb': - self.flush() - finally: - self.mode = 'closed' - - def tell(self) -> int: - return self.file_offset - - -Format = enum.Enum('Format', ['GZIP', 'LZ4_LEGACY']) - - -_MAGIC_TO_FORMAT = { - GZIP_MAGIC: Format.GZIP, - Lz4Legacy.MAGIC: Format.LZ4_LEGACY, -} -_MAGIC_MAX_SIZE = max(len(m) for m in _MAGIC_TO_FORMAT) - - -class CompressedFile: - def __init__( - self, - fp: typing.BinaryIO, - mode: typing.Literal['rb', 'wb'] = 'rb', - format: typing.Optional[Format] = None, - raw_if_unknown = False, - ): - if mode == 'rb' and not format: - magic = fp.read(_MAGIC_MAX_SIZE) - fp.seek(0) - - for m, f in _MAGIC_TO_FORMAT.items(): - if magic.startswith(m): - format = f - break - - if format == Format.GZIP: - format_fp = gzip.GzipFile(fileobj=fp, mode=mode, mtime=0) - elif format == Format.LZ4_LEGACY: - 
format_fp = Lz4Legacy(fp, mode) - elif raw_if_unknown: - format_fp = fp - else: - raise ValueError('Unknown compression format') - - self.fp = format_fp - self.format = format - - def __enter__(self): - self.fp.__enter__() - return self - - def __exit__(self, *exc_args): - self.fp.__exit__(*exc_args) diff --git a/avbroot/formats/cpio.py b/avbroot/formats/cpio.py deleted file mode 100644 index 738b835..0000000 --- a/avbroot/formats/cpio.py +++ /dev/null @@ -1,282 +0,0 @@ -# This is a miniature implementation of cpio, originally written for -# DualBootPatcher, supporting only enough of the file format for messing with -# boot image ramdisks. Only the "new format" for cpio entries are supported. - -import stat -import typing - -from . import padding -from .. import util - -MAGIC_NEW = b'070701' # new format -MAGIC_NEW_CRC = b'070702' # new format w/crc - -# Constants from cpio.h - -# A header with a filename "TRAILER!!!" indicates the end of the archive. -CPIO_TRAILER = b'TRAILER!!!' - -C_ISCTG = 0o0110000 - -IO_BLOCK_SIZE = 512 - - -def _read_int(f: typing.BinaryIO) -> int: - return int(util.read_exact(f, 8), 16) - - -def _write_int(f: typing.BinaryIO, value: int) -> int: - if value < 0 or value > 0xffffffff: - raise ValueError(f'{value} out of range for 32-bit integer') - - return f.write(b'%08x' % value) - - -class CpioEntryNew: - # c_magic - "070701" for "new" portable format - # "070702" for CRC format - # c_ino - # c_mode - # c_uid - # c_gid - # c_nlink - # c_mtime - # c_filesize - must be 0 for FIFOs and directories - # c_dev_maj - # c_dev_min - # c_rdev_maj - only valid for chr and blk special files - # c_rdev_min - only valid for chr and blk special files - # c_namesize - count includes terminating NUL in pathname - # c_chksum - 0 for "new" portable format; for CRC format - # the sum of all the bytes in the file - - @staticmethod - def new_trailer() -> 'CpioEntryNew': - entry = CpioEntryNew() - entry.nlink = 1 # Must be 1 for crc format - entry.name = CPIO_TRAILER - - return entry - - @staticmethod - def new_symlink(link_target: bytes, name: bytes) -> 'CpioEntryNew': - if not link_target: - raise ValueError('Symlink target is empty') - elif not name: - raise ValueError('Symlink name is empty') - - entry = CpioEntryNew() - entry.mode = stat.S_IFLNK | 0o777 - entry.nlink = 1 - entry.name = name - entry.content = link_target - - return entry - - @staticmethod - def new_directory(name: bytes, perms: int = 0o755) -> 'CpioEntryNew': - if not name: - raise ValueError('Directory name is empty') - - entry = CpioEntryNew() - entry.mode = stat.S_IFDIR | stat.S_IMODE(perms) - entry.nlink = 1 - entry.name = name - - return entry - - @staticmethod - def new_file(name: bytes, perms: int = 0o644, - data: bytes = b'') -> 'CpioEntryNew': - if not name: - raise ValueError('File name is empty') - - entry = CpioEntryNew() - entry.mode = stat.S_IFREG | stat.S_IMODE(perms) - entry.nlink = 1 - entry.name = name - entry.content = data - - return entry - - def __init__(self, f: typing.Optional[typing.BinaryIO] = None) -> None: - super(CpioEntryNew, self).__init__() - - if f is None: - self.magic = MAGIC_NEW - self.ino = 0 - self.mode = 0 - self.uid = 0 - self.gid = 0 - self.nlink = 0 - self.mtime = 0 - self.filesize = 0 - self.dev_maj = 0 - self.dev_min = 0 - self.rdev_maj = 0 - self.rdev_min = 0 - self.namesize = 0 - self.chksum = 0 - - self._name = b'' - self._content = b'' - else: - self.magic = util.read_exact(f, 6) - if self.magic != MAGIC_NEW and self.magic != MAGIC_NEW_CRC: - raise Exception(f'Unknown 
magic: {self.magic!r}') - - self.ino = _read_int(f) - self.mode = _read_int(f) - self.uid = _read_int(f) - self.gid = _read_int(f) - self.nlink = _read_int(f) - self.mtime = _read_int(f) - self.filesize = _read_int(f) - self.dev_maj = _read_int(f) - self.dev_min = _read_int(f) - self.rdev_maj = _read_int(f) - self.rdev_min = _read_int(f) - self.namesize = _read_int(f) - self.chksum = _read_int(f) - - # Filename - self._name = util.read_exact(f, self.namesize - 1) - # Discard NULL terminator - util.read_exact(f, 1) - padding.read_skip(f, 4) - - # File contents - self._content = util.read_exact(f, self.filesize) - padding.read_skip(f, 4) - - def write(self, f: typing.BinaryIO): - if len(self.magic) != 6: - raise ValueError(f'Magic is not 6 bytes: {self.magic!r}') - - f.write(self.magic) - - _write_int(f, self.ino) - _write_int(f, self.mode) - _write_int(f, self.uid) - _write_int(f, self.gid) - _write_int(f, self.nlink) - _write_int(f, self.mtime) - _write_int(f, self.filesize) - _write_int(f, self.dev_maj) - _write_int(f, self.dev_min) - _write_int(f, self.rdev_maj) - _write_int(f, self.rdev_min) - _write_int(f, self.namesize) - _write_int(f, self.chksum) - - # Filename - f.write(self._name) - f.write(b'\x00') - padding.write(f, 4) - - # File contents - f.write(self._content) - padding.write(f, 4) - - @property - def name(self) -> bytes: - return self._name - - @name.setter - def name(self, value: bytes): - self._name = value - self.namesize = len(value) + 1 - - @property - def content(self) -> bytes: - return self._content - - @content.setter - def content(self, value: bytes): - self._content = value - self.filesize = len(value) - - def __str__(self) -> str: - filetype = stat.S_IFMT(self.mode) - - if stat.S_ISDIR(self.mode): - ftypestr = 'directory' - elif stat.S_ISLNK(self.mode): - ftypestr = 'symbolic link' - elif stat.S_ISREG(self.mode): - ftypestr = 'regular file' - elif stat.S_ISFIFO(self.mode): - ftypestr = 'pipe' - elif stat.S_ISCHR(self.mode): - ftypestr = 'character device' - elif stat.S_ISBLK(self.mode): - ftypestr = 'block device' - elif stat.S_ISSOCK(self.mode): - ftypestr = 'socket' - elif filetype == C_ISCTG: - ftypestr = 'reserved' - else: - ftypestr = 'unknown (%o)' % filetype - - return \ - f'Filename: {self.name!r}\n' \ - f'Filetype: {ftypestr}\n' \ - f'Magic: {self.magic!r}\n' \ - f'Inode: {self.ino}\n' \ - f'Mode: {self.mode:o}\n' \ - f'Permissions: {self.mode - filetype:o}\n' \ - f'UID: {self.uid}\n' \ - f'GID: {self.gid}\n' \ - f'Links: {self.nlink}\n' \ - f'Modified: {self.mtime}\n' \ - f'File size: {self.filesize}\n' \ - f'Device: {self.dev_maj:x},{self.dev_min:x}\n' \ - f'Device ID: {self.rdev_maj:x},{self.rdev_min:x}\n' \ - f'Filename length: {self.namesize}\n' \ - f'Checksum: {self.chksum:x}\n' - - -def load(f: typing.BinaryIO, include_trailer: bool = False, - reassign_inodes: bool = True) -> list[CpioEntryNew]: - entries = [] - - while True: - entry = CpioEntryNew(f) - - if stat.S_IFMT(entry.mode) != stat.S_IFDIR and entry.nlink > 1: - raise ValueError(f'Hard links are not supported: {entry.name!r}') - - # Inodes are reassigned on save - if reassign_inodes: - entry.ino = 0 - - if entry.name == CPIO_TRAILER: - if include_trailer: - entries.append(entry) - break - - entries.append(entry) - - return entries - - -def save(f: typing.BinaryIO, entries: list[CpioEntryNew], sort=True, - pad_to_block_size=False): - inode = 300000 - - if sort: - entries = sorted(entries, key=lambda e: e.name) - - for entry in entries: - entry.ino = inode - inode += 1 - - entry.write(f) - - 
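- # The archive must end with a TRAILER!!! record; give it the next
- # inode number so numbering stays sequential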
trailer = CpioEntryNew.new_trailer() - trailer.ino = inode - trailer.write(f) - - # Pad until end of block - if pad_to_block_size: - padding.write(f, IO_BLOCK_SIZE) diff --git a/avbroot/formats/padding.py b/avbroot/formats/padding.py deleted file mode 100644 index 2e9855b..0000000 --- a/avbroot/formats/padding.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -import typing - - -def _is_power_of_2(n: int) -> bool: - if hasattr(n, 'bit_count'): - return n.bit_count() == 1 - else: - return bin(n).count('1') == 1 - - -def calc(offset: int, page_size: int) -> int: - ''' - Calculate the amount of padding that needs to be added to align the - specified offset to a page boundary. The page size must be a power of 2. - ''' - - if not _is_power_of_2(page_size): - raise ValueError(f'{page_size} is not a power of 2') - - return (page_size - (offset & (page_size - 1))) & (page_size - 1) - - -def read_skip(f: typing.BinaryIO, page_size: int) -> int: - ''' - Seek file to the next page boundary if it is not already at a page - boundary. If the file does not support seeking, then data is read and - discarded. - ''' - - padding = calc(f.tell(), page_size) - - if hasattr(f, 'seek'): - f.seek(padding, os.SEEK_CUR) - else: - f.read(padding) - - return padding - - -def write(f: typing.BinaryIO, page_size: int) -> int: - ''' - Write null bytes to pad the file to the next page boundary if it is not - already at a page boundary. - ''' - - return f.write(calc(f.tell(), page_size) * b'\x00') diff --git a/avbroot/main.py b/avbroot/main.py deleted file mode 100644 index c203830..0000000 --- a/avbroot/main.py +++ /dev/null @@ -1,731 +0,0 @@ -import argparse -import concurrent.futures -import contextlib -import copy -import dataclasses -import graphlib -import io -import os -import shutil -import struct -import tempfile -import time -import typing -import unittest.mock -import zipfile - -import avbtool - -from . import boot -from . import openssl -from . import ota -from . import util -from . 
import vbmeta -from .formats import bootimage -from .formats import compression -from .formats import cpio - - -PATH_METADATA = 'META-INF/com/android/metadata' -PATH_METADATA_PB = f'{PATH_METADATA}.pb' -PATH_OTACERT = 'META-INF/com/android/otacert' -PATH_PAYLOAD = 'payload.bin' -PATH_PROPERTIES = 'payload_properties.txt' - -PARTITION_PRIORITIES = { - # The kernel is always in boot - '@gki_kernel': ('boot',), - # Devices launching with Android 13 use a GKI init_boot ramdisk - '@gki_ramdisk': ('init_boot', 'boot'), - # OnePlus devices have a recovery image - '@otacerts': ('recovery', 'vendor_boot', 'boot'), -} - - -@dataclasses.dataclass -class PatchContext: - replace_images: dict[str, os.PathLike[str]] - boot_partition: str - root_patch: typing.Optional[boot.BootImagePatch] - clear_vbmeta_flags: bool - privkey_avb: os.PathLike[str] - passphrase_avb: str - privkey_ota: os.PathLike[str] - passphrase_ota: str - cert_ota: os.PathLike[str] - - -def print_status(*args, **kwargs): - print('\x1b[1m*****', *args, '*****\x1b[0m', **kwargs) - - -def print_warning(*args, **kwargs): - print('\x1b[1;31m*****', '[WARNING]', *args, '*****\x1b[0m', **kwargs) - - -def get_partitions_by_type(manifest): - all_partitions = set(p.partition_name for p in manifest.partitions) - by_type = {} - - for t, candidates in PARTITION_PRIORITIES.items(): - partition = next((p for p in candidates if p in all_partitions), None) - if partition is None: - raise ValueError(f'Cannot find partition of type: {t}') - - by_type[t] = partition - - for partition in all_partitions: - if 'vbmeta' in partition: - by_type[f'@vbmeta:{partition}'] = partition - - return by_type - - -def get_required_images(manifest, boot_partition, with_root): - all_partitions = set(p.partition_name for p in manifest.partitions) - by_type = get_partitions_by_type(manifest) - images = {k: v for k, v in by_type.items() - if k == '@otacerts' or k.startswith('@vbmeta:')} - - if with_root: - if boot_partition in by_type: - images['@rootpatch'] = by_type[boot_partition] - elif boot_partition in all_partitions: - images['@rootpatch'] = boot_partition - else: - raise ValueError(f'Boot partition not found: {boot_partition}') - - return images - - -def get_vbmeta_patch_order(avb, image_paths, vbmeta_images): - dep_graph = vbmeta.get_vbmeta_deps( - avb, {n: image_paths[n] for n in vbmeta_images}) - - # Only keep dependencies among the subset of images we're working with - dep_graph = {n: {d for d in deps if d in image_paths} - for n, deps in dep_graph.items() if n in image_paths} - - # Avoid patching vbmeta images that don't need changes - while True: - unneeded_vbmeta = set(n for n, d in dep_graph.items() - if n in vbmeta_images and not d) - if not unneeded_vbmeta: - break - - dep_graph = {n: {d for d in deps if d not in unneeded_vbmeta} - for n, deps in dep_graph.items() - if n not in unneeded_vbmeta} - - full_order = graphlib.TopologicalSorter(dep_graph).static_order() - order = [n for n in full_order if n in vbmeta_images] - - return dep_graph, order - - -def patch_ota_payload(f_in, open_more_f_in, f_out, file_size, - context: PatchContext): - with tempfile.TemporaryDirectory() as temp_dir: - extract_dir = os.path.join(temp_dir, 'extract') - patch_dir = os.path.join(temp_dir, 'patch') - payload_dir = os.path.join(temp_dir, 'payload') - os.mkdir(extract_dir) - os.mkdir(patch_dir) - os.mkdir(payload_dir) - - version, manifest, blob_offset = ota.parse_payload(f_in) - all_partitions = set(p.partition_name for p in manifest.partitions) - image_paths = {} - - # Use 
user-provided partition images if provided. This may be a larger
- # set than what's needed for our patches.
- for name, path in context.replace_images.items():
- if name not in all_partitions:
- raise ValueError(
- f'Cannot replace non-existent partition: {name}')
-
- image_paths[name] = path
-
- # Extract remaining required partition images from the original payload.
- required_images = get_required_images(manifest, context.boot_partition,
- context.root_patch is not None)
- vbmeta_images = set(p for n, p in required_images.items()
- if n.startswith('@vbmeta:'))
-
- to_extract = required_images.values() - image_paths.keys()
- for name in to_extract:
- image_paths[name] = os.path.join(extract_dir, f'{name}.img')
-
- if to_extract:
- print_status('Extracting', ', '.join(sorted(to_extract)),
- 'from the payload')
- ota.extract_images(open_more_f_in, manifest, blob_offset,
- extract_dir, to_extract)
-
- image_patches = {}
- if context.root_patch is not None:
- image_patches.setdefault(required_images['@rootpatch'], []).append(
- context.root_patch)
- image_patches.setdefault(required_images['@otacerts'], []).append(
- boot.OtaCertPatch(context.cert_ota))
-
- avb = avbtool.Avb()
-
- print_status('Patching', ', '.join(sorted(image_patches)))
- with concurrent.futures.ThreadPoolExecutor(
- max_workers=len(image_patches)) as executor:
- def apply_patches(image, patches):
- patched_path = os.path.join(patch_dir, f'{image}.img')
-
- boot.patch_boot(
- avb,
- image_paths[image],
- patched_path,
- context.privkey_avb,
- context.passphrase_avb,
- True,
- patches,
- )
-
- image_paths[image] = patched_path
-
- futures = [executor.submit(apply_patches, i, p)
- for i, p in image_patches.items()]
-
- for future in concurrent.futures.as_completed(futures):
- future.result()
-
- vbmeta_deps, vbmeta_order = \
- get_vbmeta_patch_order(avb, image_paths, vbmeta_images)
- print_status('Building', ', '.join(vbmeta_order))
-
- for image in vbmeta_order:
- patched_path = os.path.join(patch_dir, f'{image}.img')
-
- vbmeta.patch_vbmeta_image(
- avb,
- {n: p for n, p in image_paths.items()
- if n in vbmeta_deps[image]},
- image_paths[image],
- patched_path,
- context.privkey_avb,
- context.passphrase_avb,
- manifest.block_size,
- context.clear_vbmeta_flags,
- )
-
- image_paths[image] = patched_path
-
- # Don't replace untouched vbmeta images
- for image in vbmeta_images - set(vbmeta_order):
- del image_paths[image]
-
- print_status('Updating OTA payload to reference new',
- ', '.join(sorted(image_paths)))
- return ota.patch_payload(
- f_in,
- f_out,
- version,
- manifest,
- blob_offset,
- payload_dir,
- image_paths,
- file_size,
- context.privkey_ota,
- context.passphrase_ota,
- )
-
-
-def strip_bad_extra_fields(extra):
- offset = 0
- new_extra = bytearray()
-
- while offset < len(extra):
- record_sig, record_len = \
- struct.unpack('<HH', extra[offset:offset + 4])
- record_end = offset + 4 + record_len
-
- # Assumed reconstruction (the original filter was lost in extraction):
- # keep only records whose declared length fits within the extra field
- if record_end <= len(extra):
- new_extra.extend(extra[offset:record_end])
-
- offset = record_end
-
- return bytes(new_extra)
-
-
-@contextlib.contextmanager
-def fix_streaming_local_header_sizes():
- '''
- Patch zipfile so that zip64 entries written in streaming mode (i.e. with
- a data descriptor) store 0xffffffff for the compressed and uncompressed
- sizes in the local file header.
- '''
-
- orig_file_header = zipfile.ZipInfo.FileHeader
-
- def wrapper(*args, **kwargs):
- blob = orig_file_header(*args, **kwargs)
- zip64 = args[0].file_size > zipfile.ZIP64_LIMIT or \
- args[0].compress_size > zipfile.ZIP64_LIMIT
-
- fields = list(struct.unpack_from(zipfile.structFileHeader, blob))
- if fields[3] & (1 << 3) and zip64:
- fields[8] = 0xffffffff
- fields[9] = 0xffffffff
-
- return struct.pack(zipfile.structFileHeader, *fields) + \
- blob[zipfile.sizeFileHeader:]
- else:
- return blob
-
- with unittest.mock.patch('zipfile.ZipInfo.FileHeader', wrapper):
- yield
-
-
-def patch_ota_zip(f_zip_in, f_zip_out, context: PatchContext):
- with (
- zipfile.ZipFile(f_zip_in, 'r') as z_in,
- zipfile.ZipFile(f_zip_out, 'w') as z_out,
- ):
- infolist = z_in.infolist()
- missing = {
- PATH_METADATA,
- PATH_METADATA_PB,
- PATH_OTACERT,
PATH_PAYLOAD, - PATH_PROPERTIES, - } - i_payload = -1 - i_properties = -1 - - for i, info in enumerate(infolist): - if info.filename in missing: - missing.remove(info.filename) - - if info.filename == PATH_PAYLOAD: - i_payload = i - elif info.filename == PATH_PROPERTIES: - i_properties = i - - if not missing and i_payload >= 0 and i_properties >= 0: - break - - if missing: - raise Exception(f'Missing files in zip: {missing}') - - # Ensure payload is processed before properties - if i_payload > i_properties: - infolist[i_payload], infolist[i_properties] = \ - infolist[i_properties], infolist[i_payload] - - properties = None - metadata_info = None - metadata_pb_info = None - metadata_pb_raw = None - - for info in infolist: - out_info = copy.copy(info) - out_info.extra = strip_bad_extra_fields(out_info.extra) - - # Ignore because the plain-text legacy metadata file is regenerated - # from the new metadata - if info.filename == PATH_METADATA: - metadata_info = out_info - continue - - # The existing metadata is needed to generate a new signed zip - elif info.filename == PATH_METADATA_PB: - metadata_pb_info = out_info - - with z_in.open(info, 'r') as f_in: - metadata_pb_raw = f_in.read() - - continue - - # Use the user's OTA certificate - elif info.filename == PATH_OTACERT: - print_status('Replacing', info.filename) - - with ( - open(context.cert_ota, 'rb') as f_cert, - z_out.open(out_info, 'w') as f_out, - ): - shutil.copyfileobj(f_cert, f_out) - - continue - - # Copy other files, patching if needed - with ( - z_in.open(info, 'r') as f_in, - z_out.open(out_info, 'w') as f_out, - ): - if info.filename == PATH_PAYLOAD: - print_status('Patching', info.filename) - - if info.compress_type != zipfile.ZIP_STORED: - raise Exception( - f'{info.filename} is not stored uncompressed') - - properties = patch_ota_payload( - f_in, - lambda: z_in.open(info, 'r'), - f_out, - info.file_size, - context, - ) - - elif info.filename == PATH_PROPERTIES: - print_status('Patching', info.filename) - - if info.compress_type != zipfile.ZIP_STORED: - raise Exception( - f'{info.filename} is not stored uncompressed') - - f_out.write(properties) - - else: - print_status('Copying', info.filename) - - shutil.copyfileobj(f_in, f_out) - - print_status('Generating', PATH_METADATA, 'and', PATH_METADATA_PB) - metadata = ota.add_metadata( - z_out, - metadata_info, - metadata_pb_info, - metadata_pb_raw, - ) - - # Signing process needs to capture the zip central directory - f_zip_out.start_capture() - - return metadata - - -def patch_subcommand(args): - output = args.output - if output is None: - output = args.input + '.patched' - - if args.rootless: - root_patch = None - elif args.magisk is not None: - root_patch = boot.MagiskRootPatch( - args.magisk, args.magisk_preinit_device, args.magisk_random_seed) - - try: - root_patch.validate() - except ValueError as e: - if args.ignore_magisk_warnings: - print_warning(e) - else: - raise e - else: - root_patch = boot.PrepatchedImage( - args.prepatched, - args.ignore_prepatched_compat + 1, - print_warning, - ) - - # Get passphrases for keys - passphrase_avb = openssl.prompt_passphrase( - args.privkey_avb, - args.passphrase_avb_env_var, - args.passphrase_avb_file, - ) - passphrase_ota = openssl.prompt_passphrase( - args.privkey_ota, - args.passphrase_ota_env_var, - args.passphrase_ota_file, - ) - - # Ensure that the certificate matches the private key - if not openssl.cert_matches_key(args.cert_ota, args.privkey_ota, - passphrase_ota): - raise Exception('OTA certificate does not match private 
key')
-
- start = time.perf_counter_ns()
-
- with util.open_output_file(output) as temp_raw:
- with (
- ota.open_signing_wrapper(temp_raw, args.privkey_ota,
- passphrase_ota, args.cert_ota) as temp,
- ota.match_android_zip64_limit(),
- fix_streaming_local_header_sizes(),
- ):
- context = PatchContext(
- replace_images=args.replace or {},
- boot_partition=args.boot_partition,
- root_patch=root_patch,
- clear_vbmeta_flags=args.clear_vbmeta_flags,
- privkey_avb=args.privkey_avb,
- passphrase_avb=passphrase_avb,
- privkey_ota=args.privkey_ota,
- passphrase_ota=passphrase_ota,
- cert_ota=args.cert_ota,
- )
-
- metadata = patch_ota_zip(args.input, temp, context)
-
- # We do a lot of low-level hackery. Reopen and verify offsets
- print_status('Verifying metadata offsets')
- with zipfile.ZipFile(temp_raw, 'r') as z:
- ota.verify_metadata(z, metadata)
-
- # Excluding the time it takes for the user to type in the passwords
- elapsed = time.perf_counter_ns() - start
- print_status(f'Completed after {elapsed / 1_000_000_000:.1f}s')
-
-
-def extract_subcommand(args):
- with zipfile.ZipFile(args.input, 'r') as z:
- info = z.getinfo(PATH_PAYLOAD)
-
- with z.open(info, 'r') as f:
- _, manifest, blob_offset = ota.parse_payload(f)
-
- if args.all:
- unique_images = set(p.partition_name
- for p in manifest.partitions)
- else:
- images = get_required_images(manifest, args.boot_partition, True)
- if args.boot_only:
- unique_images = {images['@rootpatch']}
- else:
- unique_images = set(images.values())
-
- print_status('Extracting', ', '.join(sorted(unique_images)),
- 'from the payload')
- os.makedirs(args.directory, exist_ok=True)
-
- # Extract in parallel. There is no actual I/O parallelism due to
- # zipfile's internal locks, but this is still significantly faster than
- # doing it single-threaded. The extraction process is mostly CPU bound
- # due to decompression.
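- # The opener callback hands each worker its own file object, so
- # threads don't share a read offset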
- ota.extract_images(lambda: z.open(info, 'r'), - manifest, blob_offset, args.directory, - unique_images) - - -def magisk_info_subcommand(args): - with open(args.image, 'rb') as f: - img = bootimage.load_autodetect(f) - - if not img.ramdisks: - raise ValueError('Boot image does not have a ramdisk') - - with ( - io.BytesIO(img.ramdisks[0]) as f_raw, - compression.CompressedFile(f_raw, 'rb', raw_if_unknown=True) as f, - ): - entries = cpio.load(f.fp) - config = next((e for e in entries if e.name == b'.backup/.magisk'), - None) - if config is None: - raise ValueError('Not a Magisk-patched boot image') - - print(config.content.decode('ascii'), end='') - - -def uint64_arg(arg): - value = int(arg) - if value < 0 or value >= 2 ** 64: - raise ValueError('Out of range for unsigned 64-bit integer') - - return value - - -class KeyValuePairAction(argparse.Action): - def __init__(self, option_strings, dest, nargs=None, **kwargs): - if nargs != 2: - raise ValueError('nargs must be 2') - - super().__init__(option_strings, dest, nargs=nargs, **kwargs) - - def __call__(self, parser, namespace, values, option_string=None): - data = getattr(namespace, self.dest, None) - if data is None: - data = {} - - data[values[0]] = values[1] - setattr(namespace, self.dest, data) - - -def parse_args(argv=None): - parser = argparse.ArgumentParser() - subparsers = parser.add_subparsers( - dest='subcommand', - required=True, - help='Subcommands', - ) - - patch = subparsers.add_parser( - 'patch', - help='Patch a full OTA zip', - ) - - patch.add_argument( - '--input', - required=True, - help='Path to original raw payload or OTA zip', - ) - patch.add_argument( - '--output', - help='Path to new raw payload or OTA zip', - ) - patch.add_argument( - '--privkey-avb', - required=True, - help='Private key for signing root vbmeta image', - ) - patch.add_argument( - '--privkey-ota', - required=True, - help='Private key for signing OTA payload', - ) - patch.add_argument( - '--cert-ota', - required=True, - help='Certificate for OTA payload signing key', - ) - - for arg in ('AVB', 'OTA'): - group = patch.add_mutually_exclusive_group() - group.add_argument( - f'--passphrase-{arg.lower()}-env-var', - help=f'Environment variable containing {arg} private key passphrase', - ) - group.add_argument( - f'--passphrase-{arg.lower()}-file', - help=f'File containing {arg} private key passphrase', - ) - - patch.add_argument( - '--replace', - nargs=2, - action=KeyValuePairAction, - help='Use partition image from a file instead of the original payload', - ) - - boot_group = patch.add_mutually_exclusive_group(required=True) - boot_group.add_argument( - '--magisk', - help='Path to Magisk APK', - ) - boot_group.add_argument( - '--prepatched', - help='Path to prepatched boot image', - ) - boot_group.add_argument( - '--rootless', - action='store_true', - help='Skip applying root patch', - ) - - patch.add_argument( - '--magisk-preinit-device', - help='Magisk preinit device', - ) - patch.add_argument( - '--magisk-random-seed', - type=uint64_arg, - help='Magisk random seed', - ) - patch.add_argument( - '--ignore-magisk-warnings', - action='store_true', - help='Ignore Magisk compatibility/version warnings', - ) - patch.add_argument( - '--ignore-prepatched-compat', - default=0, - action='count', - help='Ignore compatibility issues with prepatched boot images', - ) - - patch.add_argument( - '--clear-vbmeta-flags', - action='store_true', - help='Forcibly clear vbmeta flags if they disable AVB', - ) - - extract = subparsers.add_parser( - 'extract', - help='Extract 
patched images from a patched OTA zip',
- )
-
- extract.add_argument(
- '--input',
- required=True,
- help='Path to patched OTA zip',
- )
- extract.add_argument(
- '--directory',
- default='.',
- help='Output directory for extracted images',
- )
- extract_group = extract.add_mutually_exclusive_group()
- extract_group.add_argument(
- '--all',
- action='store_true',
- help='Extract all images from the payload',
- )
- extract_group.add_argument(
- '--boot-only',
- action='store_true',
- help='Extract only the boot image',
- )
-
- for subcmd in (patch, extract):
- subcmd.add_argument(
- '--boot-partition',
- default='@gki_ramdisk',
- help='Boot partition name',
- )
-
- magisk_info = subparsers.add_parser(
- 'magisk-info',
- help='Print Magisk config from a patched boot image',
- )
- magisk_info.add_argument(
- '--image',
- required=True,
- help='Path to Magisk-patched boot image',
- )
-
- args = parser.parse_args(args=argv)
-
- if args.subcommand == 'patch':
- if args.magisk is None:
- if args.magisk_preinit_device:
- parser.error('--magisk-preinit-device requires --magisk')
- elif args.magisk_random_seed:
- parser.error('--magisk-random-seed requires --magisk')
- elif args.ignore_magisk_warnings:
- parser.error('--ignore-magisk-warnings requires --magisk')
- elif args.prepatched is None:
- if args.ignore_prepatched_compat:
- parser.error('--ignore-prepatched-compat requires --prepatched')
-
- return args
-
-
-def main(argv=None):
- args = parse_args(argv=argv)
-
- util.load_umask_unsafe()
-
- if args.subcommand == 'patch':
- patch_subcommand(args)
- elif args.subcommand == 'extract':
- extract_subcommand(args)
- elif args.subcommand == 'magisk-info':
- magisk_info_subcommand(args)
- else:
- raise NotImplementedError()
diff --git a/avbroot/openssl.py b/avbroot/openssl.py
deleted file mode 100644
index 4e4f536..0000000
--- a/avbroot/openssl.py
+++ /dev/null
@@ -1,222 +0,0 @@
-import binascii
-import contextlib
-import getpass
-import os
-import random
-import string
-import subprocess
-import unittest.mock
-
-# This module calls the openssl binary because AOSP's avbtool.py already does
-# that and the operations are simple enough to not require pulling in a
-# library.
-
-
-@contextlib.contextmanager
-def _passphrase_fd(passphrase):
- '''
- If the specified passphrase is not None, yield the readable end of a pipe
- that produces the passphrase encoded as UTF-8, followed by a newline. The
- read end of the pipe is marked as inheritable. Both ends of the pipe are
- closed after leaving the context.
- '''
-
- assert os.name != 'nt'
-
- if passphrase is None:
- yield None
- return
-
- # For simplicity, we don't write to the pipe on a thread, so pick a maximum
- # length that doesn't exceed any OS's pipe buffer size, while still being
- # usable for just about every use case.
- if len(passphrase) >= 4096:
- raise ValueError('Passphrase is too long')
-
- pipe_r, pipe_w = os.pipe()
- write_closed = False
-
- try:
- os.set_inheritable(pipe_r, True)
-
- os.write(pipe_w, passphrase.encode('UTF-8'))
- os.write(pipe_w, b'\n')
- os.close(pipe_w)
- write_closed = True
-
- yield pipe_r
- finally:
- os.close(pipe_r)
- if not write_closed:
- os.close(pipe_w)
-
-
-class _PopenPassphraseWrapper:
- '''
- Wrapper around subprocess.Popen() that adds arguments for passing in the
- private key passphrase via a pipe on non-Windows systems. On Windows,
- openssl does not support reading from pipes, so the passphrase is passed in
- via an environment variable.
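-
- Only commands whose argv[0] basename is `openssl` are modified; all
- other subprocess.Popen() calls pass through unchanged.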
- ''' - - def __init__(self, passphrase): - self.orig_popen = subprocess.Popen - self.passphrase = passphrase - - def __call__(self, cmd, *args, **kwargs): - if self.passphrase is not None and cmd and \ - os.path.basename(cmd[0]) == 'openssl': - if os.name == 'nt': - # On Windows, opensssl does not support reading the passphrase - # from a file descriptor. An environment variable is the next - # best way to handle this. - if 'env' not in kwargs: - kwargs['env'] = dict(os.environ) - - env_var = ''.join(random.choices(string.ascii_letters, k=64)) - kwargs['env'][env_var] = self.passphrase - - new_cmd = [*cmd, '-passin', f'env:{env_var}'] - - return self.orig_popen(new_cmd, *args, **kwargs) - else: - with _passphrase_fd(self.passphrase) as fd: - kwargs['close_fds'] = False - - new_cmd = [*cmd, '-passin', f'fd:{fd}'] - - return self.orig_popen(new_cmd, *args, **kwargs) - - # The pipe is closed at this point in this process, but the - # child already inherited the fd and the passphrase is sitting - # the pipe buffer. - else: - return self.orig_popen(cmd, *args, **kwargs) - - -def inject_passphrase(passphrase): - ''' - While this context is active, patch subprocess calls to openssl so that - the passphrase is specified via an injected -passin argument, if it is not - None. The passphrase is passed to the command via a pipe file descriptor - (non-Windows) or an environment variable (Windows). - ''' - - return unittest.mock.patch( - 'subprocess.Popen', side_effect=_PopenPassphraseWrapper(passphrase)) - - -def _guess_format(path): - ''' - Simple heuristic to determine the encoding of a key. This is needed because - openssl 1.1 doesn't support autodetection. - ''' - - with open(path, 'rb') as f: - for line in f: - if line.startswith(b'-----BEGIN '): - return 'PEM' - - return 'DER' - - -def _get_modulus(path, passphrase, is_x509): - ''' - Get the RSA modulus of the given file, which can be a private key or - certificate. - ''' - - with inject_passphrase(passphrase): - output = subprocess.check_output([ - 'openssl', - 'x509' if is_x509 else 'rsa', - '-in', path, - '-inform', _guess_format(path), - '-noout', - '-modulus', - ]) - - prefix, delim, suffix = output.strip().partition(b'=') - if not delim or prefix != b'Modulus': - raise Exception(f'Unexpected modulus output: {repr(output)}') - - return binascii.unhexlify(suffix) - - -def max_signature_size(pkey, passphrase): - ''' - Get the maximum size of a signature signed by the specified RSA key. This - is equal to the modulus size. - ''' - - return len(_get_modulus(pkey, passphrase, False)) - - -def sign_data(pkey, passphrase, data): - ''' - Sign with . - ''' - - with inject_passphrase(passphrase): - return subprocess.check_output( - [ - 'openssl', 'pkeyutl', - '-sign', - '-inkey', pkey, - '-keyform', _guess_format(pkey), - '-pkeyopt', 'digest:sha256', - ], - input=data, - ) - - -def cert_matches_key(cert, pkey, passphrase): - ''' - Check that the x509 certificate matches the RSA private key. - ''' - - return _get_modulus(cert, None, True) \ - == _get_modulus(pkey, passphrase, False) - - -def _is_encrypted(pkey): - ''' - Check if a private key is encrypted. 
- ''' - - with open(pkey, 'rb') as f: - for line in f: - if b'-----BEGIN ENCRYPTED PRIVATE KEY-----' == line.strip(): - return True - - return False - - -def prompt_passphrase(pkey, passphrase_env_var=None, passphrase_file=None): - ''' - If the private key is encrypted: - - * try to read from the specified passphrase file (first line with trailing - line endings stripped) - * try to read from the passphrase environment variable - * prompt for the passphrase interactively - - There is no fallback behavior. - ''' - - if not _is_encrypted(pkey): - return None - - if passphrase_file is not None: - with open(passphrase_file, 'r') as f: - passphrase = f.readline().rstrip('\r\n') - elif passphrase_env_var is not None: - passphrase = os.environ[passphrase_env_var] - else: - passphrase = getpass.getpass(f'Passphrase for {pkey}: ') - - # Verify that it is correct - with inject_passphrase(passphrase): - subprocess.check_output(['openssl', 'pkey', '-in', pkey, '-noout']) - - return passphrase diff --git a/avbroot/ota.py b/avbroot/ota.py deleted file mode 100644 index 9353494..0000000 --- a/avbroot/ota.py +++ /dev/null @@ -1,817 +0,0 @@ -import base64 -import binascii -import bz2 -import collections -import concurrent.futures -import contextlib -import hashlib -import io -import lzma -import os -import struct -import sys -import subprocess -import threading -import unittest.mock -import zipfile - -# Silence undesired warning -orig_argv0 = sys.argv[0] -sys.argv[0] = os.path.basename(sys.argv[0]).removesuffix('.py') -import ota_utils -sys.argv[0] = orig_argv0 - -import ota_metadata_pb2 -import update_metadata_pb2 - -from . import openssl -from . import util - - -OTA_MAGIC = b'CrAU' - - -def parse_payload(f): - ''' - Parse payload header from a file-like object. After this function returns, - the file position is set to the beginning of the blob section. - ''' - - f.seek(0) - - # Validate header - magic = f.read(4) - if magic != OTA_MAGIC: - raise Exception(f'Invalid magic: {magic}') - - version, = struct.unpack('!Q', f.read(8)) - if version != 2: - raise Exception(f'Unsupported version: {version}') - - manifest_size, = struct.unpack('!Q', f.read(8)) - metadata_signature_size, = struct.unpack('!I', f.read(4)) - - # Read manifest - manifest_raw = f.read(manifest_size) - manifest = update_metadata_pb2.DeltaArchiveManifest() - manifest.ParseFromString(manifest_raw) - - if any(p.HasField('old_partition_info') for p in manifest.partitions): - raise Exception('File is a delta OTA, not a full OTA') - - # Skip manifest signatures - f.seek(metadata_signature_size, os.SEEK_CUR) - - return (version, manifest, f.tell()) - - -def _extract_image(f_payload, f_out, block_size, blob_offset, partition, - cancel_signal): - ''' - Extract the partition image from to by processing the - manifests list of install operations. 
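    Concretely, for a REPLACE operation, one iteration of the loop below
    amounts to:

        f_payload.seek(blob_offset + op.data_offset)
        f_out.seek(extent.start_block * block_size)
        util.copyfileobj_n(f_payload, f_out, op.data_length, hasher=h_data)

    followed by comparing h_data.digest() against op.data_sha256_hash.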
-    '''
-
-    Type = update_metadata_pb2.InstallOperation.Type
-
-    for op in partition.operations:
-        for extent in op.dst_extents:
-            if cancel_signal.is_set():
-                raise Exception('Interrupted')
-
-            f_payload.seek(blob_offset + op.data_offset)
-            f_out.seek(extent.start_block * block_size)
-            h_data = hashlib.sha256()
-
-            if op.type == Type.REPLACE:
-                util.copyfileobj_n(f_payload, f_out, op.data_length,
-                                   hasher=h_data)
-            elif op.type == Type.REPLACE_BZ:
-                decompressor = bz2.BZ2Decompressor()
-                util.decompress_n(decompressor, f_payload, f_out,
-                                  op.data_length, hasher=h_data)
-            elif op.type == Type.REPLACE_XZ:
-                decompressor = lzma.LZMADecompressor()
-                util.decompress_n(decompressor, f_payload, f_out,
-                                  op.data_length, hasher=h_data)
-            elif op.type == Type.ZERO or op.type == Type.DISCARD:
-                util.zero_n(f_out, extent.num_blocks * block_size)
-            else:
-                raise Exception(f'Unsupported operation: {op.type}')
-
-            if h_data.digest() != op.data_sha256_hash and op.type != Type.ZERO:
-                raise Exception('Expected hash %s, but got %s' %
-                                (h_data.hexdigest(),
-                                 binascii.hexlify(op.data_sha256_hash)))
-
-
-def extract_images(f, manifest, blob_offset, output_dir, partition_names):
-    '''
-    Extract the specified partition images from the payload into
-    <output_dir>.
-
-    If <f> is callable, then it should produce a new file object each time it
-    is called. This allows extracting images in parallel.
-    '''
-
-    remaining = set(partition_names)
-    max_workers = len(remaining)
-    cancel_signal = threading.Event()
-    futures = []
-
-    if not callable(f):
-        f_orig = f
-
-        @contextlib.contextmanager
-        def dummy():
-            yield f_orig
-
-        f = dummy
-        max_workers = 1
-
-    def extract(p):
-        output_path = os.path.join(output_dir, p.partition_name + '.img')
-
-        with (
-            f() as f_in,
-            open(output_path, 'wb') as f_out,
-        ):
-            _extract_image(f_in, f_out, manifest.block_size, blob_offset, p,
-                           cancel_signal)
-
-    with concurrent.futures.ThreadPoolExecutor(
-            max_workers=max_workers) as executor:
-        try:
-            for p in manifest.partitions:
-                if p.partition_name not in remaining:
-                    continue
-
-                remaining.remove(p.partition_name)
-
-                futures.append(executor.submit(extract, p))
-
-            for future in concurrent.futures.as_completed(futures):
-                future.result()
-        except BaseException:
-            cancel_signal.set()
-            raise
-
-    if remaining:
-        raise Exception(f'Images not found: {remaining}')
-
-
-def _compress_image(partition, block_size, input_path, output_path):
-    '''
-    XZ-compress the image at <input_path> to <output_path> and update the
-    partition metadata with the appropriate checksums and install operations
-    metadata.
-
-    The size in the (sole) install operation is set correctly, but the offset
-    must be manually updated. It is initially set to the maximum uint64 value.
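    As a sketch, for a hypothetical 64 MiB image with 4096-byte blocks, the
    partition is left with a single install operation along these lines:

        operation.type = REPLACE_XZ
        operation.data_offset = 2 ** 64 - 1    # placeholder for the caller
        operation.data_length = <compressed size>
        operation.dst_extents = [Extent(start_block=0, num_blocks=16384)]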
-    '''
-
-    h_uncompressed = hashlib.sha256()
-    h_compressed = hashlib.sha256()
-    size_uncompressed = 0
-    size_compressed = 0
-    # AOSP's payload_consumer does not support CRC during decompression
-    compressor = lzma.LZMACompressor(check=lzma.CHECK_NONE)
-    buf = bytearray(16384)
-    buf_view = memoryview(buf)
-
-    with (
-        open(input_path, 'rb', buffering=0) as f_in,
-        open(output_path, 'wb') as f_out,
-    ):
-        while n := f_in.readinto(buf_view):
-            h_uncompressed.update(buf_view[:n])
-            size_uncompressed += n
-
-            xz_data = compressor.compress(buf_view[:n])
-            h_compressed.update(xz_data)
-            size_compressed += len(xz_data)
-            f_out.write(xz_data)
-
-        xz_data = compressor.flush()
-        h_compressed.update(xz_data)
-        size_compressed += len(xz_data)
-        f_out.write(xz_data)
-
-    if size_uncompressed % block_size:
-        raise Exception('Size of %s (%d) is not aligned to the block size (%d)'
-                        % (partition.partition_name, size_uncompressed,
-                           block_size))
-
-    partition.new_partition_info.size = size_uncompressed
-    partition.new_partition_info.hash = h_uncompressed.digest()
-
-    extent = update_metadata_pb2.Extent()
-    extent.start_block = 0
-    extent.num_blocks = size_uncompressed // block_size
-
-    operation = update_metadata_pb2.InstallOperation()
-    operation.type = update_metadata_pb2.InstallOperation.Type.REPLACE_XZ
-    # Must be manually updated by the caller
-    operation.data_offset = 2 ** 64 - 1
-    operation.data_length = size_compressed
-    operation.dst_extents.append(extent)
-    operation.data_sha256_hash = h_compressed.digest()
-
-    partition.ClearField('operations')
-    partition.operations.append(operation)
-
-
-def _recompute_offsets(manifest, new_images):
-    '''
-    Recompute the blob offsets to account for the new images.
-
-    Returns ([(<image file>, <data offset>, <data length>)], <blob size>). If
-    the image file is None, then the data offset is relative to the blob
-    offset of the original payload. Otherwise, the data offset is an absolute
-    offset into the image file.
-    '''
-
-    # (<image file>, <data offset>, <data length>)
-    data_list = []
-    offset = 0
-
-    for p in manifest.partitions:
-        is_patched = p.partition_name in new_images
-        p_offset = 0
-
-        for op in p.operations:
-            if is_patched:
-                data_list.append((
-                    new_images[p.partition_name],
-                    p_offset,
-                    op.data_length,
-                ))
-            else:
-                data_list.append((
-                    None,
-                    op.data_offset,
-                    op.data_length,
-                ))
-
-            op.data_offset = offset
-            p_offset += op.data_length
-            offset += op.data_length
-
-    return (data_list, offset)
-
-
-def _sign_hash(hash, key, passphrase, max_sig_size):
-    '''
-    Sign <hash> with <key> and return a Signatures protobuf struct with the
-    signature padded to <max_sig_size>.
-    '''
-
-    hash_signed = openssl.sign_data(key, passphrase, hash)
-    assert len(hash_signed) <= max_sig_size
-
-    signature = update_metadata_pb2.Signatures.Signature()
-    signature.unpadded_signature_size = len(hash_signed)
-    signature.data = hash_signed + b'\0' * (max_sig_size - len(hash_signed))
-
-    signatures = update_metadata_pb2.Signatures()
-    signatures.signatures.append(signature)
-
-    return signatures
-
-
-def _serialize_protobuf(p):
-    return p.SerializeToString(deterministic=True)
-
-
-def patch_payload(f_in, f_out, version, manifest, blob_offset, temp_dir,
-                  patched, file_size, key, passphrase):
-    '''
-    Copy the payload from <f_in> to <f_out>, updating references to <patched>
-    images as they are encountered. <f_out> will be signed with <key>.
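    The output written below has this layout (all integers big-endian):

        'CrAU' | u64 version | u64 manifest size | u32 metadata sig size |
        manifest | metadata signature | blob | payload signature

    The metadata signature covers the header and manifest; the payload
    signature additionally covers the blob, but not the metadata signature
    itself.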
- ''' - - max_sig_size = openssl.max_signature_size(key, passphrase) - - # Strip out old payload signature - if manifest.HasField('signatures_size'): - trunc_file_size = blob_offset + manifest.signatures_offset - if trunc_file_size > file_size: - raise Exception('Payload signature offset is beyond EOF') - file_size = trunc_file_size - - # Partition name -> compressed image path - compressed = {} - - # Update the partition manifests to refer to the patched images - for name, path in patched.items(): - # Find the partition in the manifest - partition = next((p for p in manifest.partitions - if p.partition_name == name), None) - if partition is None: - raise Exception(f'Partition {name} not found in manifest') - - # Compress the image and update the partition manifest accordingly - compressed_path = os.path.join(temp_dir, f'{name}.img') - _compress_image( - partition, - manifest.block_size, - path, - compressed_path, - ) - compressed[name] = compressed_path - - # Fill out blob offsets and compute final size - blob_data_list, blob_size = _recompute_offsets(manifest, compressed) - - # Get the length of an dummy signature struct since the length fields are - # part of the data to be signed - dummy_sig = _sign_hash(hashlib.sha256().digest(), key, passphrase, - max_sig_size) - dummy_sig_size = len(_serialize_protobuf(dummy_sig)) - - # Fill out new payload signature information - manifest.signatures_offset = blob_size - manifest.signatures_size = dummy_sig_size - - # Build new manifest - manifest_raw_new = _serialize_protobuf(manifest) - - class MultipleHasher: - def __init__(self, hashers): - self.hashers = hashers - - def update(self, data): - for hasher in self.hashers: - hasher.update(data) - - # Excludes signatures (hashes are for signing) - h_partial = hashlib.sha256() - # Includes signatures (hashes are for properties file) - h_full = hashlib.sha256() - # Updates both of the above - h_both = MultipleHasher((h_partial, h_full)) - - def write(hasher, data): - hasher.update(data) - f_out.write(data) - - # Write header to output file - write(h_both, OTA_MAGIC) - write(h_both, struct.pack('!Q', version)) - write(h_both, struct.pack('!Q', len(manifest_raw_new))) - write(h_both, struct.pack('!I', dummy_sig_size)) - - # Write new manifest - write(h_both, manifest_raw_new) - - # Sign metadata (header + manifest) hash. The signature is not included in - # the payload hash. 
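    # For example, for a version-2 payload, the signed metadata region is the
    # first 24 + len(manifest_raw_new) bytes of the output: 4 (magic) +
    # 8 (version) + 8 (manifest size) + 4 (signature size) + the manifest.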
- metadata_hash = h_partial.digest() - metadata_sig = _sign_hash(metadata_hash, key, passphrase, max_sig_size) - write(h_full, _serialize_protobuf(metadata_sig)) - - # Write new blob - for image_file, data_offset, data_length in blob_data_list: - if image_file is None: - f_in.seek(blob_offset + data_offset) - util.copyfileobj_n(f_in, f_out, data_length, hasher=h_both) - else: - with open(image_file, 'rb') as f_image: - f_image.seek(data_offset) - util.copyfileobj_n(f_image, f_out, data_length, hasher=h_both) - - # Append payload signature - payload_sig = _sign_hash(h_partial.digest(), key, passphrase, max_sig_size) - write(h_full, _serialize_protobuf(payload_sig)) - - # Generate properties file - metadata_offset = len(OTA_MAGIC) + struct.calcsize('!QQI') - metadata_size = metadata_offset + len(manifest_raw_new) - blob_size = manifest.signatures_offset + manifest.signatures_size - new_file_size = metadata_size + dummy_sig_size + blob_size - - def b64(d): return base64.b64encode(d) - props = [ - b'FILE_HASH=%s\n' % b64(h_full.digest()), - b'FILE_SIZE=%d\n' % new_file_size, - b'METADATA_HASH=%s\n' % b64(metadata_hash), - b'METADATA_SIZE=%d\n' % metadata_size, - ] - - return b''.join(props) - - -def _get_property_files(): - ''' - Return the set of property files to add to the OTA metadata files. - ''' - - return ( - ota_utils.AbOtaPropertyFiles(), - ota_utils.StreamingPropertyFiles(), - ) - - -def _serialize_metadata(metadata): - ''' - Generate the legacy plain-text and protobuf serializations of the given - metadata instance. - ''' - - legacy_metadata = ota_utils.BuildLegacyOtaMetadata(metadata) - legacy_metadata_str = "".join([f'{k}={v}\n' for k, v in - sorted(legacy_metadata.items())]) - metadata_bytes = _serialize_protobuf(metadata) - - return legacy_metadata_str.encode('UTF-8'), metadata_bytes - - -_FileRange = collections.namedtuple( - '_FileRange', ('start', 'end', 'data_or_fp')) - - -class _ConcatenatedFileDescriptor: - ''' - A read-only seekable file descriptor that presents several file descriptors - or byte arrays as a single concatenated file. 
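    Illustrative usage:

        f = _ConcatenatedFileDescriptor()
        f.add_file(f_zip)      # f_zip must be positioned at its logical end
        f.add_bytes(b'appended entry')
        f.seek(0)
        header = f.read(16)

    Note that add_file() takes the file's current position as its size.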
- ''' - - def __init__(self): - # List of (start, end, data_or_fp) - self.ranges = [] - self.offset = 0 - - def _get_range(self): - for range in self.ranges: - if self.offset >= range.start and self.offset < range.end: - return range - - return None - - def _eof_offset(self): - return self.ranges[-1].end if self.ranges else 0 - - def add_file(self, fp): - start = self._eof_offset() - self.ranges.append(_FileRange(start, start + fp.tell(), fp)) - - def add_bytes(self, data): - start = self._eof_offset() - self.ranges.append(_FileRange(start, start + len(data), data)) - - def read(self, size=None): - buf = b'' - - while size is None or size > 0: - range = self._get_range() - if not range: - break - - to_read = range.end - self.offset - if size is not None: - to_read = min(to_read, size) - data_offset = self.offset - range.start - - if isinstance(range.data_or_fp, bytes): - data = range.data_or_fp[data_offset:data_offset + to_read] - else: - range.data_or_fp.seek(data_offset) - data = range.data_or_fp.read(to_read) - - if not buf: - buf = data - else: - buf += data - - if len(data) < to_read: - if range is not self.ranges[-1]: - raise Exception('Unexpected EOF') - else: - break - - if size is not None: - size -= to_read - - return buf - - def seek(self, offset, whence=os.SEEK_SET): - if whence == os.SEEK_SET: - self.offset = offset - elif whence == os.SEEK_CUR: - self.offset += offset - elif whence == os.SEEK_END: - self.offset = self._eof_offset() + offset - else: - raise ValueError(f'Invalid whence: {whence}') - - def tell(self): - return self.offset - - -class _MemoryFile(io.BytesIO): - ''' - Subclass of io.BytesIO where seeking can be conditionally disabled. - ''' - - def __init__(self, *args, allow_seek=True, **kwargs): - super().__init__(*args, **kwargs) - self.allow_seek = allow_seek - - def seek(self, *args, **kwargs): - if not self.allow_seek: - raise AttributeError('seek is not supported') - - return super().seek(*args, **kwargs) - - -class _FakeZipFile: - ''' - A wrapper around a ZipFile instance that allows appending new entries in - memory without modifying the backing file. - - NOTE: The underlying ZipFile's file descriptor's position may be changed. - ''' - - def __init__(self, z): - self.zip = z - - self.fp = _ConcatenatedFileDescriptor() - - # We have a seekable underlying file descriptor to the zip, but we - # intentionally don't allow _TeeFileDescriptor to be seekable to - # guarantee that ZipFile writes sequentially. 
- self.orig_fp = self.zip.fp - if isinstance(self.orig_fp, _TeeFileDescriptor): - self.orig_fp = self.orig_fp.backing - - self.fp.add_file(self.orig_fp) - - self.next_offset = self.zip.start_dir - - self.extra_infos = {} - - def getinfo(self, name): - if name in self.extra_infos: - return self.extra_infos[name] - else: - return self.zip.getinfo(name) - - def namelist(self): - return self.zip.namelist() + list(self.extra_infos.keys()) - - def add_file(self, info, data): - # Disable seeking to ensure that data descriptors are written, like the - # backing ZipFile - with _MemoryFile(allow_seek=False) as mem: - with zipfile.ZipFile(mem, 'w') as z: - with z.open(info, 'w') as f: - f.write(data) - - # Capture local file header, data, and data descriptor - buf_without_footer = mem.getvalue() - self.fp.add_bytes(buf_without_footer) - - # Fix offset and add to fake entries - new_info = z.infolist()[-1] - new_info.header_offset = self.next_offset - self.extra_infos[new_info.filename] = new_info - - self.next_offset += len(buf_without_footer) - - -def add_metadata(z_out, metadata_info, metadata_pb_info, metadata_pb_raw): - ''' - Add metadata files to the output OTA zip. and - should be the ZipInfo instances associated with the - files from the original OTA zip. should be the serialized - OTA metadata protobuf struct from the original OTA. - - The zip file's backing file position MUST BE set to where the central - directory would start. - ''' - - metadata = ota_metadata_pb2.OtaMetadata() - metadata.ParseFromString(metadata_pb_raw) - - metadata.property_files.clear() - - props = _get_property_files() - - # Create a fake zip instance that allows appending new entries in memory so - # that ota_utils can compute offsets for the property files - fake_zip = _FakeZipFile(z_out) - - # Compute initial property files with reserved space as placeholders to - # store the self-referential metadata entries later - for p in props: - metadata.property_files[p.name] = p.Compute(fake_zip) - - # Add the placeholders to the fake zip to compute final property files - new_metadata_raw, new_metadata_pb_raw = _serialize_metadata(metadata) - fake_zip.add_file(metadata_info, new_metadata_raw) - fake_zip.add_file(metadata_pb_info, new_metadata_pb_raw) - - # Compute the final property files using the offsets of the fake entries - for p in props: - metadata.property_files[p.name] = \ - p.Finalize(fake_zip, len(metadata.property_files[p.name])) - - # Offset computation changes the file offset of the actual file. Seek back - # to where the next entry or central directory would go - fake_zip.orig_fp.seek(z_out.start_dir) - - # Add the final metadata files to the real zip - new_metadata_raw, new_metadata_pb_raw = _serialize_metadata(metadata) - with z_out.open(metadata_info, 'w') as f: - f.write(new_metadata_raw) - with z_out.open(metadata_pb_info, 'w') as f: - f.write(new_metadata_pb_raw) - - return metadata - - -def verify_metadata(z, metadata): - ''' - Verify that the offsets and file sizes within the metadata file properties - of a fully written OTA zip are correct. - ''' - - for p in _get_property_files(): - p.Verify(z, metadata.property_files[p.name].strip()) - - -class _TeeFileDescriptor: - ''' - A file-like instance that propagates writes to multiple streams. - - start_capture() is used to pause output and divert writes to a memory - buffer until _finish_capture(), which can modify the buffer. 
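    Illustrative usage, mirroring how open_signing_wrapper() uses it below:

        tee = _TeeFileDescriptor((f_out, process.stdin), file_index=0)
        tee.write(b'zip entries')        # written to both streams
        tee.start_capture()
        tee.write(b'central directory')  # diverted to the memory buffer
        with tee._finish_capture() as buf:
            buf.truncate(buf.tell() - 2) # e.g. drop a trailing field

    After the context exits, the (possibly modified) buffer is written to all
    streams.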
- ''' - - def __init__(self, streams, file_index=None): - self.streams = streams - self.capture = None - self.backing = None if file_index is None else streams[file_index] - - def write(self, data): - if self.capture: - self.capture.write(data) - else: - for stream in self.streams: - # Naive hole punching to create sparse files - if stream is self.backing and util.is_zero(data): - stream.seek(len(data), os.SEEK_CUR) - else: - stream.write(data) - - return len(data) - - def flush(self): - for stream in self.streams: - stream.flush() - - def tell(self): - if self.backing is None: - # Fake non-existance - raise AttributeError('tell is not supported') - - capture_len = self.capture.tell() if self.capture else 0 - return self.backing.tell() + capture_len - - def start_capture(self): - if self.capture is not None: - raise RuntimeError('Capture already started') - - self.capture = _MemoryFile() - - @contextlib.contextmanager - def _finish_capture(self): - if not self.capture: - raise RuntimeError('No capture started') - - yield self.capture - - for stream in self.streams: - stream.write(self.capture.getbuffer()) - - self.capture.close() - self.capture = None - - -@contextlib.contextmanager -def open_signing_wrapper(f, privkey, passphrase, cert): - ''' - Create a file-like wrapper around an existing file object that performs CMS - signing as data is being written. - ''' - - with openssl.inject_passphrase(passphrase): - session_kwargs = {} - if os.name != 'nt': - # We don't want the controlling terminal to interrupt openssl on - # ^C or ^\. That'll cause _TeeFileDescriptor's writes to the stdin - # pipe to fail, and certain classes, like ZipFile, will write to - # the fd in their __exit__ methods. This causes a BrokenPipeError - # to be raised while the existing KeyboardInterrupt is being - # propagated up. We'll handling killing openssl ourselves. - session_kwargs['start_new_session'] = True - - process = subprocess.Popen( - [ - 'openssl', - 'cms', - '-sign', - '-binary', - '-outform', 'DER', - '-inkey', privkey, - '-signer', cert, - # Mimic signapk behavior by excluding signed attributes - '-noattr', - '-nosmimecap', - ], - stdin=subprocess.PIPE, - stdout=subprocess.PIPE, - **session_kwargs, - ) - - try: - wrapper = _TeeFileDescriptor((f, process.stdin), file_index=0) - yield wrapper - - with wrapper._finish_capture() as f_buffer: - # Save a copy of the zip central directory - f_buffer.seek(0) - footer = f_buffer.read() - - # Delete the archive comment size field - if len(footer) < 2: - raise Exception('zip central directory is too small') - elif footer[-2:] != b'\x00\x00': - raise Exception('zip has unexpected archive comment') - - f_buffer.seek(-2, os.SEEK_CUR) - f_buffer.truncate(f_buffer.tell()) - - process.stdin.close() - signature = process.stdout.read() - except BaseException: - process.kill() - raise - finally: - process.wait() - - if process.returncode != 0: - raise Exception(f'openssl exited with status: {process.returncode}') - - # Double check that the EOCD magic is where it should be when there is no - # archive comment - if footer[-22:-18] != zipfile.stringEndArchive: - raise Exception('EOCD magic not found') - - # Build a new archive comment that contains the signature - with io.BytesIO() as comment: - message = b'signed by avbroot\0' - comment.write(message) - comment.write(signature) - - comment_size = comment.tell() + 6 - - if comment_size > 0xffff: - raise Exception('Archive comment with signature is too large') - - comment.write(struct.pack( - ' - 0x7fffffff. 
However, Android's libarchive behavior is incorrect [1] and - treats the data descriptor size fields as 32-bit unless the compressed or - uncompressed size in the central directory is >= 0xffffffff. This causes - files containing entries with sizes in [2 GiB, 4 GiB - 2] to fail to flash - in Android's recovery environment. Work around this by changing ZipFile's - threshold to match Android's. - - [1] https://cs.android.com/android/platform/superproject/+/android-13.0.0_r18:system/libziparchive/zip_archive.cc;l=692 - ''' - - # Because Python uses > and Android uses >= 0xffffffff - with unittest.mock.patch('zipfile.ZIP64_LIMIT', 0xfffffffe): - yield diff --git a/avbroot/util.py b/avbroot/util.py deleted file mode 100644 index fe40465..0000000 --- a/avbroot/util.py +++ /dev/null @@ -1,221 +0,0 @@ -import contextlib -import dataclasses -import functools -import os -import tempfile - - -_ZERO_BLOCK = memoryview(b'\0' * 16384) - -umask = None - - -def load_umask_unsafe(): - # POSIX provides no way to query the umask without changing it. Parsing - # /proc/self/status can work, but it's Linux only. Instead, we'll just do it - # once when the program is initially started. - global umask - - if os.name != 'nt' and umask is None: - current_umask = os.umask(0o777) - os.umask(current_umask) - - umask = current_umask - - -@dataclasses.dataclass -@functools.total_ordering -class Range: - ''' - Simple class to represent a half-open interval. - ''' - - start: int - end: int - - def __repr__(self) -> str: - return f'[{self.start}, {self.end})' - - def __str__(self) -> str: - return f'>={self.start}, <{self.end}' - - def __lt__(self, other) -> bool: - return (self.start, self.end) < (other.start, other.end) - - def __eq__(self, other) -> bool: - return (self.start, self.end) == (other.start, other.end) - - def __contains__(self, item) -> bool: - return item >= self.start and item < self.end - - def __bool__(self) -> bool: - return self.start < self.end - - def size(self) -> int: - return self.end - self.start - - -@contextlib.contextmanager -def open_output_file(path): - ''' - Create a temporary file in the same directory as the specified path and - replace it if the function succeeds. On non-Windows, the file replacement - is atomic. On Windows, it is not. - ''' - - directory = os.path.dirname(path) - - with tempfile.NamedTemporaryFile(dir=directory, delete=False) as f: - try: - yield f - - if os.name == 'nt': - # Windows does not allow renaming a file with handles open - f.close() - - # Windows only supports atomic renames by calling - # SetFileInformationByHandle() with the FileRenameInfoEx - # operation and the FILE_RENAME_FLAG_REPLACE_IF_EXISTS and - # FILE_RENAME_FLAG_POSIX_SEMANTICS flags. This is not exposed - # in Python and it's not worth adding a new dependency for - # doing low-level win32 API calls. - try: - os.unlink(path) - except FileNotFoundError: - pass - else: - # NamedTemporaryFile always uses 600 permissions with no way to - # override it. We'll do our own umask-respecting chmod. - os.fchmod(f.fileno(), 0o666 & ~umask) - - os.rename(f.name, path) - except BaseException: - if os.name == 'nt': - # Windows does not allow deleting a file with handles open - f.close() - - os.unlink(f.name) - raise - - -def hash_file(f, hasher, buf_size=16384): - ''' - Update when the data from until EOF. 
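    That is, update <hasher> with the data read from <f> until EOF. For
    illustration:

        import hashlib

        with open('payload.bin', 'rb') as f:
            digest = hash_file(f, hashlib.sha256()).digest()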
-    '''
-
-    buf = bytearray(buf_size)
-    buf_view = memoryview(buf)
-
-    while True:
-        n = f.readinto(buf_view)
-        if not n:
-            break
-
-        hasher.update(buf_view[:n])
-
-    return hasher
-
-
-def copyfileobj_n(f_in, f_out, size, buf_size=16384, hasher=None):
-    '''
-    Copy <size> bytes from <f_in> to <f_out>.
-
-    Raises IOError if EOF is reached in <f_in> before <size> bytes are read.
-    '''
-
-    buf = bytearray(buf_size)
-    buf_view = memoryview(buf)
-
-    while size:
-        to_read = min(len(buf_view), size)
-        n = f_in.readinto(buf_view[:to_read])
-        if not n:
-            break
-
-        if hasher:
-            hasher.update(buf_view[:n])
-
-        f_out.write(buf_view[:n])
-        size -= n
-
-    if size:
-        raise IOError(f'Unexpected EOF; expected {size} more bytes')
-
-
-def decompress_n(decompressor, f_in, f_out, size, buf_size=16384, hasher=None):
-    '''
-    Read <size> bytes from <f_in> and decompress them to <f_out>.
-
-    Raises IOError if EOF is reached in <f_in> before <size> bytes are read.
-    '''
-
-    buf = bytearray(buf_size)
-    buf_view = memoryview(buf)
-
-    while size:
-        to_read = min(len(buf_view), size)
-        n = f_in.readinto(buf_view[:to_read])
-        if not n:
-            break
-
-        if hasher:
-            hasher.update(buf_view[:n])
-
-        data = decompressor.decompress(buf_view[:n])
-
-        f_out.write(data)
-        size -= n
-
-    if size:
-        raise IOError(f'Unexpected EOF; expected {size} more bytes')
-    elif not decompressor.eof:
-        raise IOError('Did not reach end of compressed input')
-
-
-def zero_n(f_out, size, buf_size=16384):
-    '''
-    Write <size> zeroes to <f_out>.
-    '''
-
-    buf = bytearray(buf_size)
-    buf_view = memoryview(buf)
-
-    while size:
-        to_write = min(len(buf_view), size)
-        f_out.write(buf_view[:to_write])
-        size -= to_write
-
-
-def read_exact(f, size: int) -> bytes:
-    '''
-    Read exactly <size> bytes from <f> or raise an EOFError.
-    '''
-
-    data = f.read(size)
-    if len(data) != size:
-        raise EOFError(f'Unexpected EOF: expected {size} bytes, '
-                       f'but only read {len(data)} bytes')
-
-    if not isinstance(data, bytes):
-        # io.BytesIO returns a bytearray
-        return bytes(data)
-    else:
-        return data
-
-
-def is_zero(data):
-    '''
-    Check if all bytes in the bytes-like object are null bytes.
-    '''
-
-    view = memoryview(data)
-
-    while view:
-        n = min(len(view), len(_ZERO_BLOCK))
-
-        if view[:n] != _ZERO_BLOCK[:n]:
-            return False
-
-        view = view[n:]
-
-    return True
diff --git a/avbroot/vbmeta.py b/avbroot/vbmeta.py
deleted file mode 100644
index 6427f64..0000000
--- a/avbroot/vbmeta.py
+++ /dev/null
@@ -1,195 +0,0 @@
-import contextlib
-import os
-import typing
-import unittest.mock
-
-import avbtool
-
-from . import openssl
-from . import util
-
-
-class SmuggledViaKernelCmdlineDescriptor:
-    def __init__(self):
-        self.kernel_cmdline = None
-
-    def encode(self):
-        return self.kernel_cmdline.encode()
-
-
-@contextlib.contextmanager
-def smuggle_descriptors():
-    '''
-    Smuggle predefined vbmeta descriptors into Avb.make_vbmeta_image via the
-    kernel_cmdlines parameter. The make_vbmeta_image function will:
-
-    * loop through kernel_cmdlines
-    * create an AvbKernelCmdlineDescriptor instance for each item
-    * assign kernel_cmdline to each descriptor instance
-    * call encode on each descriptor
-    '''
-
-    with unittest.mock.patch('avbtool.AvbKernelCmdlineDescriptor',
-                             SmuggledViaKernelCmdlineDescriptor):
-        yield
-
-
-def _get_descriptor_overrides(
-    avb: avbtool.Avb,
-    images: dict[str, os.PathLike[str]],
-) -> typing.Tuple[dict[str, bytes], dict[str, avbtool.AvbDescriptor]]:
-    '''
-    Build a set of public key (chain) and hash/hashtree descriptor overrides
-    that should be inserted in the parent vbmeta image for the given partition
-    images.
- - If a partition image itself is signed, then a chain descriptor will be used. - Otherwise, the existing hash or hashtree descriptor is used. - ''' - - # Partition name -> raw public key - out_public_keys = {} - # Partition name -> descriptor - out_descriptors = {} - - # Construct descriptor overrides - for name, path in images.items(): - image = avbtool.ImageHandler(path, read_only=True) - footer, header, descriptors, image_size = avb._parse_image(image) - - if name in out_public_keys or name in out_descriptors: - raise ValueError(f'Duplicate partition name: {name}') - - if header.public_key_size: - # vbmeta is signed; use a chain descriptor - blob = avb._load_vbmeta_blob(image) - offset = header.SIZE + \ - header.authentication_data_block_size + \ - header.public_key_offset - out_public_keys[name] = \ - blob[offset:offset + header.public_key_size] - else: - # vbmeta is unsigned; use the existing descriptor in the footer - partition_descriptor = next( - (d for d in descriptors - if (isinstance(d, avbtool.AvbHashDescriptor) - or isinstance(d, avbtool.AvbHashtreeDescriptor)) - and d.partition_name == name), - None, - ) - if partition_descriptor is None: - raise ValueError(f'{path} has no descriptor for itself') - - out_descriptors[name] = partition_descriptor - - return (out_public_keys, out_descriptors) - - -def get_vbmeta_deps( - avb: avbtool.Avb, - vbmeta_images: dict[str, os.PathLike[str]], -) -> dict[str, set[str]]: - ''' - Return the forward and reverse dependency tree for the specified vbmeta - images. - ''' - - deps = {} - - for name, path in vbmeta_images.items(): - image = avbtool.ImageHandler(path, read_only=True) - _, _, descriptors, _ = avb._parse_image(image) - - deps.setdefault(name, set()) - - for d in descriptors: - if isinstance(d, avbtool.AvbChainPartitionDescriptor) \ - or isinstance(d, avbtool.AvbHashDescriptor) \ - or isinstance(d, avbtool.AvbHashtreeDescriptor): - deps[name].add(d.partition_name) - deps.setdefault(d.partition_name, set()) - - return deps - - -def patch_vbmeta_image( - avb: avbtool.Avb, - images: dict[str, os.PathLike[str]], - input_path: os.PathLike[str], - output_path: os.PathLike[str], - key: os.PathLike[str], - passphrase: str, - padding_size: int, - clear_flags: bool, -): - ''' - Patch the vbmeta image to reference the provided images. 
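    A hypothetical call, re-signing a root vbmeta so that it references a
    modified boot image (all paths are illustrative):

        patch_vbmeta_image(
            avb=avbtool.Avb(),
            images={'boot': 'boot.patched.img'},
            input_path='vbmeta.img',
            output_path='vbmeta.patched.img',
            key='avb.key',
            passphrase=None,  # unencrypted key
            padding_size=4096,
            clear_flags=False,
        )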
- ''' - - # Load the original root vbmeta image - image = avbtool.ImageHandler(input_path, read_only=True) - footer, header, descriptors, image_size = avb._parse_image(image) - - if header.flags != 0: - if clear_flags: - header.flags = 0 - else: - raise ValueError(f'vbmeta flags disable AVB: 0x{header.flags:x}') - - # Build a set of new descriptors in the same order as the original - # descriptors, except with the descriptors patched to reference the given - # images - override_public_keys, override_descriptors = \ - _get_descriptor_overrides(avb, images) - new_descriptors = [] - - for d in descriptors: - if isinstance(d, avbtool.AvbChainPartitionDescriptor) and \ - d.partition_name in override_public_keys: - d.public_key = override_public_keys.pop(d.partition_name) - elif (isinstance(d, avbtool.AvbHashDescriptor) or \ - isinstance(d, avbtool.AvbHashtreeDescriptor)) and \ - d.partition_name in override_descriptors: - d = override_descriptors.pop(d.partition_name) - - new_descriptors.append(d) - - if override_public_keys: - raise Exception(f'Unused public key overrides: {override_public_keys}') - if override_descriptors: - raise Exception(f'Unused descriptor overrides: {override_descriptors}') - - algorithm_name = avbtool.lookup_algorithm_by_type(header.algorithm_type)[0] - - # Some older Pixel devices' vbmeta images are originally signed by a - # 2048-bit RSA key, but avbroot expects RSA 4096 keys - if algorithm_name == 'SHA256_RSA2048': - algorithm_name = 'SHA256_RSA4096' - - with util.open_output_file(output_path) as f: - # Smuggle in the prebuilt descriptors via kernel_cmdlines - with ( - smuggle_descriptors(), - openssl.inject_passphrase(passphrase), - ): - avb.make_vbmeta_image( - output=f, - chain_partitions=None, - algorithm_name=algorithm_name, - key_path=key, - public_key_metadata_path=None, - rollback_index=header.rollback_index, - flags=header.flags, - rollback_index_location=header.rollback_index_location, - props=None, - props_from_file=None, - kernel_cmdlines=new_descriptors, - setup_rootfs_from_kernel=None, - include_descriptors_from_image=None, - signing_helper=None, - signing_helper_with_files=None, - release_string=header.release_string, - append_to_release_string=False, - print_required_libavb_version=False, - padding_size=padding_size, - ) diff --git a/build.rs b/build.rs new file mode 100644 index 0000000..9d9671c --- /dev/null +++ b/build.rs @@ -0,0 +1,43 @@ +/* + * SPDX-FileCopyrightText: 2023 Andrew Gunnerson + * SPDX-License-Identifier: GPL-3.0-only + */ + +use std::{env, ffi::OsStr, fs, io, path::Path}; + +use pb_rs::{types::FileDescriptor, ConfigBuilder}; + +fn main() { + let out_dir = Path::new(&env::var("OUT_DIR").unwrap()).join("protobuf"); + let in_dir = Path::new(&env::var("CARGO_MANIFEST_DIR").unwrap()).join("protobuf"); + + println!("cargo:rerun-if-changed={}", in_dir.to_str().unwrap()); + + let mut protos = Vec::new(); + + for entry in fs::read_dir(&in_dir).unwrap() { + let path = entry.unwrap().path(); + if path.extension() == Some(OsStr::new("proto")) { + println!("cargo:rerun-if-changed={}", path.to_str().unwrap()); + protos.push(path); + } + } + + match fs::remove_dir_all(&out_dir) { + Err(e) if e.kind() == io::ErrorKind::NotFound => {} + r => r.unwrap(), + } + + fs::create_dir_all(&out_dir).unwrap(); + + let config = ConfigBuilder::new(&protos, None, Some(&out_dir), &[in_dir]) + .unwrap() + .dont_use_cow(true) + // We're using this as a means to force quick-protobuf to use BTreeMap + // instead of HashMap so that the serialized messages are 
reproducible. + // https://github.com/tafia/quick-protobuf/issues/251 + .nostd(true) + .build(); + + FileDescriptor::run(&config).unwrap(); +} diff --git a/deny.toml b/deny.toml new file mode 100644 index 0000000..fb4e11e --- /dev/null +++ b/deny.toml @@ -0,0 +1,39 @@ +[advisories] +vulnerability = "deny" +unmaintained = "deny" +yanked = "deny" +notice = "deny" + +[licenses] +unlicensed = "deny" +allow = [ + "Apache-2.0", + "BSD-3-Clause", + "ISC", + "MIT", + "OpenSSL", + "Unicode-DFS-2016", +] +copyleft = "allow" +default = "deny" + +[[licenses.clarify]] +name = "ring" +expression = "MIT AND ISC AND OpenSSL" +license-files = [ + { path = "LICENSE", hash = 0xbd0eed23 }, +] + +[bans] +multiple-versions = "warn" +deny = [ + # https://github.com/serde-rs/serde/issues/2538 + { name = "serde_derive", version = ">=1.0.172,<1.0.184" }, +] + +[sources] +unknown-registry = "deny" +unknown-git = "deny" +allow-git = [ + "https://github.com/chenxiaolong/zip", +] diff --git a/tests/.gitignore b/e2e/.gitignore similarity index 100% rename from tests/.gitignore rename to e2e/.gitignore diff --git a/e2e/Cargo.toml b/e2e/Cargo.toml new file mode 100644 index 0000000..7ab6ff3 --- /dev/null +++ b/e2e/Cargo.toml @@ -0,0 +1,26 @@ +[package] +name = "e2e" +version = "0.1.0" +edition = "2021" + +# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html + +[dependencies] +anyhow = "1.0.75" +avbroot = { path = ".." } +clap = { version = "4.4.1", features = ["derive"] } +ctrlc = "3.4.0" +hex = { version = "0.4.3", features = ["serde"] } +reqwest = { version = "0.11.20", features = ["stream"] } +ring = "0.16.20" +serde = { version = "1.0.188", features = ["derive"] } +tempfile = "3.8.0" +tokio = { version = "1.32.0", features = ["signal", "rt-multi-thread", "macros"] } +tokio-stream = "0.1.14" +toml_edit = { version = "0.19.14", features = ["serde"] } + +# https://github.com/zip-rs/zip/pull/383 +[dependencies.zip] +git = "https://github.com/chenxiaolong/zip" +rev = "989101f9384b9e94e36e6e9e0f51908fdf98bde6" +default-features = false diff --git a/e2e/README.md b/e2e/README.md new file mode 100644 index 0000000..61c7883 --- /dev/null +++ b/e2e/README.md @@ -0,0 +1,64 @@ +# End-to-end tests + +avbroot's output file is reproducible for a given input file. [`e2e.toml`](./e2e.toml) lists some OTA images with unique properties and the expected checksums before and after patching. These tests use pregenerated, hardcoded test keys for signing. **These keys should NEVER be used for any other purpose.** + +For each image listed in the config, the test process will: + +1. Download the OTA if it doesn't already exist in `./files//` (or the workdir specified by `-w`) +2. Verify the OTA checksum +3. Run avbroot against the OTA using `--magisk` +4. Extract the AVB-related partitions from the patched OTA and verify their checksums +5. Verify the patched OTA checksum +6. Run avbroot against the OTA again using `--prepatched` +7. Verify the patched OTA checksum again + +For more efficient CI testing, the tests can operate on "stripped" OTAs. A stripped OTA is identical to the full OTA, except that partitions in `payload.bin` unrelated to AVB are zeroed out. This reduces the download size and disk space requirements by a couple orders of magnitude. 
**A stripped OTA is NOT bootable and should never be flashed on a real device.** + +## Running the tests + +To test against the device OTA images listed in [`e2e.toml`](./e2e.toml), run: + +```bash +# To test all device OTAs +cargo run --release -- test -a +# Or to test against specific device OTAs +cargo run --release -- test -d cheetah -d bluejay +``` + +To test against stripped OTAs (smaller download, but not bootable), pass in `--stripped`. + +## Downloading a device image + +To download a full OTA image, run: + +```bash +cargo run --release -- download -d +``` + +This normally happens automatically when running the `test` subcommand. To download the stripped OTA image instead, pass in `--stripped`. + +If the image file does not already exist, then it will be downloaded and the checksums will be validated. If the download is interrupted, it will automatically resume when the command is rerun. If the file is already downloaded, the command is effectively a no-op unless `--revalidate` is passed in to revalidate the image checksums. + +## Adding a new device image + +To add a new device image to the testing configuration, run: + +```bash +cargo run --release -- add -d -u -H +``` + +If the OS vendor does not provide a SHA-256 checksum, omit `-H` and the program will compute the checksum from the downloaded data. + +This process will download the full OTA, strip it, patch the full OTA, patch the stripped OTA, extract the AVB partitions, and write all of the checksums to [`e2e.toml`](./e2e.toml). + +The process for updating an existing device config is exactly the same as adding a new one. + +## Stripping a full OTA + +To convert a full OTA to the stripped form, run: + +```bash +cargo run --release -- strip -i -o +``` + +This normally happens automatically as a part of adding a new device image. 
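Conceptually, stripping keeps only the byte ranges listed in a device's `sections` (the zip metadata and the AVB-related parts of `payload.bin`) and zeroes everything else. A minimal in-memory Python sketch of the idea (the real `strip` subcommand is Rust and operates on files), assuming `sections` holds the preserved byte ranges:

```python
def strip_ota(data: bytearray, sections: list[tuple[int, int]]) -> None:
    """Zero every byte that is not covered by a preserved section."""
    pos = 0
    for start, end in sorted(sections):
        data[pos:start] = bytes(start - pos)  # zero the gap before a section
        pos = end
    data[pos:] = bytes(len(data) - pos)  # zero the tail
```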
diff --git a/e2e/e2e.toml b/e2e/e2e.toml new file mode 100644 index 0000000..9225df1 --- /dev/null +++ b/e2e/e2e.toml @@ -0,0 +1,116 @@ +[magisk] +"url" = "https://github.com/topjohnwu/Magisk/releases/download/v26.0/Magisk-v26.0.apk" +"hash" = "9e14d3d3ca1f1a2765f8ca215ebbf35ea5fd2896fb147eea581fcaa3b4e77d25" + +# Google Pixel 7 Pro +# What's unique: init_boot (boot v4) + vendor_boot (vendor v4) +[device.cheetah] +url = "https://dl.google.com/dl/android/aosp/cheetah-ota-tq2a.230305.008.c1-6ac5ff2e.zip" +sections = [ + { start = 0, end = 151715 }, + { start = 21683150, end = 23179365 }, + { start = 2043499985, end = 2043508325 }, + { start = 2315331822, end = 2333060126 }, + { start = 2344084950, end = 2344090066 }, +] +hash.original.full = "6ac5ff2e14dc16755ea4ea30e6dbe25103b889a36a465194ef943bd0d665b91c" +hash.original.stripped = "549522015f0369a3b89385f532ab62235b47c2c39540bd0adaaf6acc81fdda94" +hash.patched.full = "b380720852e2a1e994bcf38064f577bac68b18e799cf8166cb6a7edb8a661cb4" +hash.patched.stripped = "560cdf4d7b25fb5bc650f96ae5a3c7593ac229a364bb39887287cfb734f6b377" +hash.avb_images."init_boot.img" = "fac9305ce22b897fbfb193968d5346f4c70f6c18c3060bb106226f134ce5f433" +hash.avb_images."vbmeta.img" = "0b3d719b751dd43bbec95d02b1bf57b5dae62e42a52502895892a207b773e77d" +hash.avb_images."vbmeta_system.img" = "cf8c77dcf0a4474d49b5bdc2a44bdb3646464d5212fbe12aa5d3c5f531742f4f" +hash.avb_images."vbmeta_vendor.img" = "660d8f61acd95a4f8ad416b4cbe126e9c039706462b4236ad723953c72ac49a8" +hash.avb_images."vendor_boot.img" = "c788d4d8eb7926ad1bfa2a9c000343c3c73cedc580449f5270a711feb620f033" + +# Google Pixel 6a +# What's unique: boot (boot v4, no ramdisk) + vendor_boot (vendor v4, 2 ramdisks) +[device.bluejay] +url = "https://dl.google.com/dl/android/aosp/bluejay-ota-tq2a.230305.008.e1-915f9087.zip" +sections = [ + { start = 0, end = 140787 }, + { start = 1060207, end = 21612852 }, + { start = 1886150700, end = 1886158844 }, + { start = 2069112987, end = 2092260102 }, + { start = 2098778558, end = 2098783674 }, +] +hash.original.full = "915f9087b627b6961be9bb447dc63a7a1083b536753a78715e98641eaeb9c9d1" +hash.original.stripped = "a3ee5b6e39e687665c31790118ab9f47715b0b8285ae9847dbf81307f963db14" +hash.patched.full = "bfa7b26d90bdc889a7a199439e1564b219e5e2dbfddc1657bc1c6b73229be67e" +hash.patched.stripped = "4e56ee4a8554f2b08ffc2f1470ad60b9b63d2b6fb469ed1f7c4b1204bbf8ad7d" +hash.avb_images."boot.img" = "a1a705092e7034d20b83c94d78291418e13c343b1573c1b11e0fc884fc00ae62" +hash.avb_images."vbmeta.img" = "3c123705be57ab142d2b43beef9b123eaad129df17a523b59b1d63b9122d28b0" +hash.avb_images."vbmeta_system.img" = "285b83e4290f3257dc3678f0c3191794830bb2d72fb0969b69fc8f09d7ddff12" +hash.avb_images."vbmeta_vendor.img" = "981f736586b91a9f4c93c4208a0d191a35ff15118c6fa505d755ee7fda8b2477" +hash.avb_images."vendor_boot.img" = "ceffcb4fdb33aa3bb2c70621060acd5af612206b21cc413d5c2d39fee25144ab" + +# Google Pixel 4a 5G +# What's unique: boot (boot v3) + vendor_boot (vendor v3) +[device.bramble] +url = "https://dl.google.com/dl/android/aosp/bramble-ota-tq2a.230305.008.c1-a925dd09.zip" +sections = [ + { start = 0, end = 140531 }, + { start = 496187, end = 11655082 }, + { start = 1650283561, end = 1650287993 }, + { start = 1884739919, end = 1908081011 }, + { start = 1910894027, end = 1910899144 }, +] +hash.original.full = "a925dd09c8d613d46cf72677c16f4fadee18bc21734d57047c6ccf31f672507b" +hash.original.stripped = "79322b0b417359e8f072032de676d7e5bd2715a3b3554c48ed5cc9e9a25c6866" +hash.patched.full = 
"cf79cd60acd3635f5085d8bb411d4a8dc4d7e62440f62a14e7ea12f5a9b7cd8a" +hash.patched.stripped = "b684a78fe08014f1633e74d1f380e137823ba77a47ddf428bc649364f5548b75" +hash.avb_images."boot.img" = "2bbf2c6d2f82d454426b26ac3b4887b26ca0591b458e6cf137207ecbb8f7649d" +hash.avb_images."vbmeta.img" = "ab3b2487671b3fc28e163898621d6c068164d335b6ddd9b94e4fde9471c95d66" +hash.avb_images."vbmeta_system.img" = "2fcd52d7462916a8510bbb07f2f5a14200afe2de97568396fe75e04c5c283152" +hash.avb_images."vendor_boot.img" = "e27f157c4ebf4e958165997a4b87d4de1cfc34f4dea423a153327a27b053cac7" + +# Google Pixel 4a +# What's unique: boot (boot v2) +[device.sunfish] +url = "https://dl.google.com/dl/android/aosp/sunfish-ota-tq2a.230305.008.c1-174fd16b.zip" +sections = [ + { start = 0, end = 129700 }, + { start = 476996, end = 34035261 }, + { start = 1624751947, end = 1624756371 }, + { start = 1823132563, end = 1823137581 }, +] +hash.original.full = "174fd16b47ef994ea8f3cb0f3fb456df2654b0aa1f9ea6fb8e54e5c6319f2601" +hash.original.stripped = "943ce3ae2aac8a0ccd4a7e9d4e38a9c495c39639f2621c6853be4c7a3fa0fc26" +hash.patched.full = "cdfffa731f0aca0ab9eff1e7d7bfee8c4716054d64cf0a47e50f50da6cbdb849" +hash.patched.stripped = "35b720058c460dd28b00f40bed4bcd1d133522d42d00ba048172b3092a7444ee" +hash.avb_images."boot.img" = "22182f2efc7043f35d79e71abb400ce919c6b3b8419405e16469644932367ee6" +hash.avb_images."vbmeta.img" = "4cbe171bb37515f59cd4082cc2d982ca1edfeba9db09dd62c8959dd318423888" +hash.avb_images."vbmeta_system.img" = "7cdb590bfc1056a5a8c7606ff05e99eb344efe108296682698b5cfe83905e0cd" + +# OnePlus 10 Pro +# Build NE2215_11_C.26 +# What's unique: +# - boot (boot v4) + recovery (boot v4) +# - boot images have VTS signature block filled with all 0s +# - payload.bin uses ZERO blocks +# Build info: +# - Unofficial list of full OTAs: https://forum.xda-developers.com/t/oneplus-10-pro-rom-ota-oxygen-os-repo-of-oxygen-os-builds.4572593/ +# - The North American builds are used because they're the only ones hosted on +# a well known domain +# - The build number can be found in /build.prop since it's not +# obvious from the filename +[device.ossi] +url = "https://android.googleapis.com/packages/ota-api/package/4cacbe5e6a3ab6a6fade68cc40f44d0fa6a2928a.zip" +sections = [ + { start = 0, end = 204048 }, + { start = 19105432, end = 34966775 }, + { start = 2657405750, end = 2657407254 }, + { start = 4984045446, end = 5006377281 }, + { start = 5114504441, end = 5114507197 }, + { start = 5138158449, end = 5138159817 }, + { start = 5140101511, end = 5140105324 }, +] +hash.original.full = "929f892fbd70699cf7f118a119aac1ae1b86351e1ada17715666fa4401e63472" +hash.original.stripped = "4eabaf79b6c2b5df305e3ecdc2b9570c0dd27350b4e8d6434584000c4989ff3d" +hash.patched.full = "8e9cf3159e57d706047325a7b774dc2dc84f0d56baae100ad54c0c723b621469" +hash.patched.stripped = "0b5dcdfdfea742bf6662b286ba260f8572a7b0081bb1129b7274c29214cdfb49" +hash.avb_images."boot.img" = "f5dc3b147c54589be8db00ca15257a3688424a9eb88bd9c2cec82ebf4f6bf859" +hash.avb_images."recovery.img" = "a42c0bf4f023cd24394184a33ee113783a9c89a7cc4c0c582a5f72cc23b72309" +hash.avb_images."vbmeta.img" = "c022cf79da301a8430af5c49704944c490707fa0306031fe3ea22c39ce4734f6" +hash.avb_images."vbmeta_system.img" = "749616b7f04487c05e9e363ad2071a0ab3bae29d497daf1f1a7695f7c8cfa82a" +hash.avb_images."vbmeta_vendor.img" = "a6037fce745384425fb12745b8568386b84fb57ca6f94f6e47bcf754de341ae4" diff --git a/tests/keys/TEST_KEY_DO_NOT_USE_avb.key b/e2e/keys/TEST_KEY_DO_NOT_USE_avb.key similarity index 100% rename from 
tests/keys/TEST_KEY_DO_NOT_USE_avb.key rename to e2e/keys/TEST_KEY_DO_NOT_USE_avb.key diff --git a/e2e/keys/TEST_KEY_DO_NOT_USE_avb.passphrase b/e2e/keys/TEST_KEY_DO_NOT_USE_avb.passphrase new file mode 100644 index 0000000..b591b27 --- /dev/null +++ b/e2e/keys/TEST_KEY_DO_NOT_USE_avb.passphrase @@ -0,0 +1 @@ +XltUCz36vqCNSzspPZxFMGXah3kLyrTXDwfmasgn6nL4CtZDw5OeeLwlmkDuV2Im diff --git a/e2e/keys/TEST_KEY_DO_NOT_USE_avb_pkmd.bin b/e2e/keys/TEST_KEY_DO_NOT_USE_avb_pkmd.bin new file mode 100644 index 0000000000000000000000000000000000000000..b4cb3684d8e5330db6644936441325cfe5350a3d GIT binary patch literal 1032 zcmV+j1o!&@01yCxuPN)S?jOLXmDv__s_Rcx^wVpm_?g|cL%v-c$avIUcW>q)8Z!(C zLoy>YwvgMqoxwk(c0#MiSc%)ZdB`fAx*5hzObMShBnNeaPMwAUKSMeis<6J%@*yq7 zJo4FR-qkV*!R6~)Ry_dI2f!h@($IwH4Ba56l_Ff@9)Gk4%?FXIn?eiB= z=X~Tk>1Z3r_RLOLlv8HAWmQn%ebN={QD_zEMSt%3FdKI-k|`h6b%%oT2Q;~?b>~K( zLjTkSTI4!|0i%9a-`Y;zUSpr^SF3Vp*m!Y}CW%s_ert|D{7^SpuQ7e3pmzN>{$!c) zUE+O#+@)sq0E3Maq(ZPB%^&@?Rq8JHdAhMY*y^@PKrF$d_!LC#ByoCtKc-s3xYJ@( zP^#fub<^B8>0fUQMWWMNnf7HgaBzx|#RWf6QU1dk_+HhZ3%DEBQsgJkK*ayGc^V{+G z8pG%}x^?PR625XqzvpPG9^WIxY1@b2mElE9v3;U@yDus`#HC`*w$+4twWr4}SbrYF*i_?F_<(H%aqr3&XSy^X#NX|;B z`fk;&(`tSZ5phys3l5KlX(nlcQU>c?DSdx;(Gq7}Lm1)AAt_5iQiY!`R_I9rSxi< zB8u=?&C&2^UmKJKzOdv^=k<`a5ALB=PK(<2zJJm#Zv_f>QF9#E_;Y(&%aS*Zsnw8Z z5N)DcFo-I;)MB0gY^lV+&r>(QEh?UjVS?P&*T#X`0pDy9(6cqqADAIlIXq5RE%iXo z;>9{F6m9I5M7~ae%;F97%M)ul2>RKfhQ5g{W>Yf_5z4{oZ7`3}(ur(}6dvEb%E%|p z`Fh2H^pkQNnxQaOXWua*cgU+}0K=(x=K0VIIG?KOsheH`J95SK&WK;4x`A)}b5)u8 z(c_10k);9eyYVq!LZt)U*P?28NgFM^e, + + /// All device configs. + #[arg(short, long, conflicts_with = "device")] + pub all: bool, +} + +#[derive(Debug, Args)] +pub struct DownloadGroup { + /// Revalidate hash of existing download. + #[arg(long)] + pub revalidate: bool, + + /// Download the stripped OTA instead of the full OTA. + #[arg(long)] + pub stripped: bool, +} + +#[derive(Debug, Args)] +pub struct PatchGroup { + /// Delete patched output files on success. + #[arg(long)] + pub delete_on_success: bool, + + /// Suffix for patched output files. + #[arg(long = "output-file-suffix", default_value = ".patched", value_parser)] + pub suffix: OsString, +} + +#[derive(Debug, Args)] +pub struct ConfigGroup { + /// Path to config file. + #[arg( + short, + long, + value_name = "FILE", + value_parser, + default_value = "e2e.toml" + )] + pub config: PathBuf, + + /// Working directory for storing images. + #[arg( + short, + long, + value_name = "DIRECTORY", + value_parser, + default_value = "files" + )] + pub work_dir: PathBuf, +} + +/// Convert a full OTA to stripped form. +/// +/// A stripped OTA omits byte regions of the OTA that aren't needed for testing +/// avbroot's patching logic (eg. the system partition image). This reduces the +/// size of the test files by about two orders of magnitude. +#[derive(Debug, Parser)] +pub struct StripCli { + /// Path to original OTA zip. + #[arg(short, long, value_name = "FILE", value_parser)] + pub input: PathBuf, + + /// Path to new stripped OTA zip. + #[arg(short, long, value_name = "FILE", value_parser)] + pub output: PathBuf, +} + +#[derive(Debug, Clone)] +pub struct Sha256Arg(pub [u8; 32]); + +impl FromStr for Sha256Arg { + type Err = hex::FromHexError; + + fn from_str(s: &str) -> Result { + let mut data = [0u8; 32]; + hex::decode_to_slice(s, &mut data)?; + Ok(Self(data)) + } +} + +/// Add a new OTA image to the test config. +/// +/// This will download the OTA image, strip it, patch both images, and add the +/// resulting metadata (eg. checksums) to the specified test config file. 
+#[derive(Debug, Parser)] +pub struct AddCli { + /// URL to the full OTA zip. + #[arg(short, long)] + pub url: String, + + /// Device config name. + #[arg(short, long, value_name = "NAME")] + pub device: String, + + /// Expected sha256 hash of the full OTA zip. + #[arg(short = 'H', long, value_name = "SHA256_HEX", value_parser)] + pub hash: Option, + + #[command(flatten)] + pub patch: PatchGroup, + + #[command(flatten)] + pub config: ConfigGroup, + + /// Skip verifying OTA and AVB signatures. + /// + /// OTAs for some devices (eg. ossi) ship with vbmeta partitions containing + /// invalid hashes. These will normally fail during validation. + #[arg(long)] + pub skip_verify: bool, +} + +/// Download a device image. +#[derive(Debug, Parser)] +pub struct DownloadCli { + /// Download the Magisk APK. + #[arg(short, long)] + pub magisk: bool, + + #[command(flatten)] + pub device: DeviceGroup, + + #[command(flatten)] + pub download: DownloadGroup, + + #[command(flatten)] + pub config: ConfigGroup, +} + +/// Run tests. +#[derive(Debug, Parser)] +pub struct TestCli { + #[command(flatten)] + pub device: DeviceGroup, + + #[command(flatten)] + pub download: DownloadGroup, + + #[command(flatten)] + pub patch: PatchGroup, + + #[command(flatten)] + pub config: ConfigGroup, +} + +/// List devices in config file. +#[derive(Debug, Parser)] +pub struct ListCli { + #[command(flatten)] + pub config: ConfigGroup, +} + +#[derive(Debug, Subcommand)] +pub enum Command { + Strip(StripCli), + Add(AddCli), + Download(DownloadCli), + Test(TestCli), + List(ListCli), +} + +#[derive(Debug, Parser)] +pub struct Cli { + #[command(subcommand)] + pub command: Command, +} diff --git a/e2e/src/config.rs b/e2e/src/config.rs new file mode 100644 index 0000000..0969eb9 --- /dev/null +++ b/e2e/src/config.rs @@ -0,0 +1,140 @@ +/* + * SPDX-FileCopyrightText: 2023 Andrew Gunnerson + * SPDX-License-Identifier: GPL-3.0-only + */ + +use std::{collections::BTreeMap, fs, ops::Range, path::Path}; + +use anyhow::{anyhow, Context, Result}; +use serde::{Deserialize, Serialize}; +use toml_edit::{ + ser::ValueSerializer, + visit_mut::{self, VisitMut}, + Array, Document, InlineTable, Item, KeyMut, Table, Value, +}; + +#[derive(Serialize, Deserialize)] +pub struct Sha256Hash( + #[serde( + serialize_with = "hex::serialize", + deserialize_with = "hex::deserialize" + )] + pub [u8; 32], +); + +#[derive(Serialize, Deserialize)] +pub struct Magisk { + pub url: String, + pub hash: Sha256Hash, +} + +#[derive(Serialize, Deserialize)] +pub struct OtaHashes { + pub full: Sha256Hash, + pub stripped: Sha256Hash, +} + +#[derive(Serialize, Deserialize)] +pub struct ImageHashes { + pub original: OtaHashes, + pub patched: OtaHashes, + pub avb_images: BTreeMap, +} + +#[derive(Serialize, Deserialize)] +pub struct Device { + pub url: String, + pub sections: Vec>, + pub hash: ImageHashes, +} + +#[derive(Serialize, Deserialize)] +pub struct Config { + pub magisk: Magisk, + pub device: BTreeMap, +} + +struct ConfigFormatter; + +impl VisitMut for ConfigFormatter { + fn visit_table_like_kv_mut(&mut self, key: KeyMut<'_>, node: &mut Item) { + // Convert non-array-of-tables inline tables into regular tables. + if let Item::Value(Value::InlineTable(t)) = node { + let inline_table = std::mem::replace(t, InlineTable::new()); + *node = Item::Table(inline_table.into_table()); + } + + // But for hashes, use dotted notation until TOML 1.1, which allows + // newlines in inline tables, is released. 
+ if key == "hash" || key == "original" || key == "patched" || key == "avb_images" { + if let Some(t) = node.as_table_like_mut() { + t.set_dotted(true); + } + } + + visit_mut::visit_table_like_kv_mut(self, key, node); + } + + fn visit_table_mut(&mut self, node: &mut Table) { + // Make tables implicit unless they are empty, which may be meaningful. + if !node.is_empty() { + node.set_implicit(true); + } + + visit_mut::visit_table_mut(self, node); + } + + fn visit_array_mut(&mut self, node: &mut Array) { + visit_mut::visit_array_mut(self, node); + + // Put array elements on their own indented lines. + if node.is_empty() { + node.set_trailing(""); + node.set_trailing_comma(false); + } else { + for item in node.iter_mut() { + item.decor_mut().set_prefix("\n "); + } + node.set_trailing("\n"); + node.set_trailing_comma(true); + } + } +} + +/// Add a device to the config file. This leaves all comments intact, except for +/// those contained within the existing device section if it exists. +pub fn add_device(document: &mut Document, name: &str, device: &Device) -> Result<()> { + let device_table = document.entry("device").or_insert_with(|| { + let mut t = toml_edit::Table::new(); + t.set_implicit(true); + Item::Table(t) + }); + let old_table = device_table.get(name).and_then(|i| i.as_table()); + + let value = device.serialize(ValueSerializer::new())?; + let Value::InlineTable(inline_table) = value else { + unreachable!("Device did not serialize as an inline table"); + }; + let mut table = inline_table.into_table(); + + ConfigFormatter.visit_table_mut(&mut table); + + // Keep top-level comment on the table. + if let Some(t) = old_table { + *table.decor_mut() = t.decor().clone(); + } + + device_table[name] = Item::Table(table); + + Ok(()) +} + +pub fn load_config(path: &Path) -> Result<(Config, Document)> { + let contents = + fs::read_to_string(path).with_context(|| anyhow!("Failed to read config: {path:?}"))?; + let config: Config = toml_edit::de::from_str(&contents) + .with_context(|| anyhow!("Failed to parse config: {path:?}"))?; + let document: Document = contents.parse().unwrap(); + + Ok((config, document)) +} diff --git a/e2e/src/download.rs b/e2e/src/download.rs new file mode 100644 index 0000000..0e897c9 --- /dev/null +++ b/e2e/src/download.rs @@ -0,0 +1,456 @@ +/* + * SPDX-FileCopyrightText: 2023 Andrew Gunnerson + * SPDX-License-Identifier: GPL-3.0-only + */ + +use std::{ + cmp, + collections::{HashMap, VecDeque}, + fs::{self, OpenOptions}, + io::{self, Seek, SeekFrom, Write}, + ops::Range, + path::{Path, PathBuf}, + time::{Duration, Instant}, +}; + +use anyhow::{anyhow, bail, Context, Result}; +use avbroot::stream::PSeekFile; +use serde::{Deserialize, Serialize}; +use tokio::{ + runtime::Runtime, + signal::ctrl_c, + sync::{mpsc, oneshot}, + task::{self, JoinSet}, +}; +use tokio_stream::StreamExt; + +/// Minimum download chunk size per thread +const MIN_CHUNK_SIZE: u64 = 1024 * 1024; + +pub trait ProgressDisplay { + fn progress(&mut self, current: u64, total: u64); + + fn error(&mut self, msg: &str); + + fn finish(&mut self); +} + +pub struct BasicProgressDisplay { + current: u64, + total: u64, + interval: Duration, + last_render: Instant, + avg: VecDeque<(Instant, u64)>, +} + +// Speed is a simple moving average over 5 seconds. 
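+// With one speed sample recorded at most every AVG_INTERVAL (100 ms), a 5 s
+// window works out to 5000 / 100 = 50 samples.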
+static AVG_INTERVAL: Duration = Duration::from_millis(100);
+static AVG_WINDOW_SIZE: usize = 5000 / AVG_INTERVAL.as_millis() as usize;
+
+impl BasicProgressDisplay {
+    pub fn new(interval: Duration) -> Self {
+        Self {
+            current: 0,
+            total: 0,
+            interval,
+            last_render: Instant::now() - interval,
+            avg: VecDeque::new(),
+        }
+    }
+
+    fn clear_line(&self) {
+        eprint!("\x1b[2K\r");
+    }
+}
+
+impl ProgressDisplay for BasicProgressDisplay {
+    fn progress(&mut self, current: u64, total: u64) {
+        self.current = current;
+        self.total = total;
+
+        let now = Instant::now();
+
+        if self.avg.is_empty() || (now - self.avg.back().unwrap().0) > AVG_INTERVAL {
+            if self.avg.len() == AVG_WINDOW_SIZE {
+                self.avg.pop_front();
+            }
+
+            self.avg.push_back((now, current));
+        }
+
+        if now - self.last_render > self.interval {
+            let current_mib = current as f64 / 1024.0 / 1024.0;
+            let total_mib = total as f64 / 1024.0 / 1024.0;
+
+            let front = self.avg.front().unwrap();
+            let back = self.avg.back().unwrap();
+
+            let avg_window_mib = (back.1 - front.1) as f64 / 1024.0 / 1024.0;
+            let avg_window_duration = back.0 - front.0;
+
+            let speed_mib_s = if avg_window_duration.is_zero() {
+                0.0
+            } else {
+                avg_window_mib / avg_window_duration.as_secs_f64()
+            };
+
+            self.clear_line();
+            eprint!("{current_mib:.1} / {total_mib:.1} MiB ({speed_mib_s:.1} MiB/s)");
+
+            self.last_render = now;
+        }
+    }
+
+    fn error(&mut self, msg: &str) {
+        self.clear_line();
+        eprintln!("{msg}");
+    }
+
+    fn finish(&mut self) {
+        self.clear_line();
+    }
+}
+
+#[derive(Debug)]
+struct ProgressMessage {
+    task_id: u64,
+    bytes: u64,
+    // Controller replies with new ending offset.
+    resp: oneshot::Sender<u64>,
+}
+
+/// Download a contiguous byte range. The number of bytes downloaded per loop
+/// iteration will be sent to the specified channel via a `ProgressMessage`. The
+/// receiver of the message must reply with the new ending offset for this
+/// download via the oneshot channel in the `resp` field. An appropriate error
+/// will be returned if the full range (subject to modification) cannot be fully
+/// downloaded (eg. premature EOF is an error).
+async fn download_range(
+    task_id: u64,
+    url: &str,
+    mut file: PSeekFile,
+    initial_range: Range<u64>,
+    channel: mpsc::Sender<ProgressMessage>,
+) -> Result<()> {
+    assert!(initial_range.start < initial_range.end);
+
+    let client = reqwest::ClientBuilder::new().build()?;
+
+    let response = client
+        .get(url)
+        .header(
+            reqwest::header::RANGE,
+            format!("bytes={}-{}", initial_range.start, initial_range.end - 1),
+        )
+        .send()
+        .await
+        .and_then(|r| r.error_for_status())
+        .with_context(|| anyhow!("Failed to start download for range: {initial_range:?}"))?;
+    let mut stream = response.bytes_stream();
+    let mut range = initial_range.clone();
+
+    while range.start < range.end {
+        let data = if let Some(x) = stream.next().await {
+            x?
+        } else {
+            return Err(anyhow!("Unexpected EOF from server"));
+        };
+
+        // This may overlap with another task's write when a range split occurs,
+        // but the same data will be written anyway, so it's not a huge deal.
+        task::block_in_place(|| {
+            file.seek(SeekFrom::Start(range.start))?;
+            file.write_all(&data)
+        })
+        .with_context(|| {
+            format!(
+                "Failed to write {} bytes to output file at offset {}",
+                data.len(),
+                range.start,
+            )
+        })?;
+
+        let consumed = cmp::min(range.end - range.start, data.len() as u64);
+        range.start += consumed;
+
+        // Report progress to the controller.
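+        // The reply doubles as a work-stealing handshake: the controller may
+        // answer with a smaller ending offset if the tail of this range has
+        // been handed to another task.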
+        let (tx, rx) = oneshot::channel();
+        let msg = ProgressMessage {
+            task_id,
+            bytes: consumed,
+            resp: tx,
+        };
+        channel.send(msg).await?;
+
+        // Get new ending offset from controller.
+        let new_end = rx.await?;
+        if new_end != range.end {
+            debug_assert!(new_end <= range.end);
+            range.end = new_end;
+        }
+    }
+
+    Ok(())
+}
+
+/// Create download task for a byte range. This just calls [`download_range()`]
+/// and returns a tuple containing the task ID and the result.
+async fn download_task(
+    task_id: u64,
+    url: String,
+    file: PSeekFile,
+    initial_range: Range<u64>,
+    channel: mpsc::Sender<ProgressMessage>,
+) -> (u64, Result<()>) {
+    (
+        task_id,
+        download_range(task_id, &url, file, initial_range, channel).await,
+    )
+}
+
+/// Send a HEAD request to get the value of the Content-Length header.
+async fn get_content_length(url: &str) -> Result<u64> {
+    let response = reqwest::Client::new()
+        .head(url)
+        .send()
+        .await
+        .and_then(|r| r.error_for_status())
+        .context("Failed to send HEAD request to get Content-Length")?;
+
+    response
+        .headers()
+        .get("content-length")
+        .and_then(|h| h.to_str().ok())
+        .and_then(|h| h.parse().ok())
+        .ok_or_else(|| anyhow!("HEAD request did not return a valid Content-Length"))
+}
+
+/// Download a set of file chunks in parallel. Expected or recoverable errors
+/// are printed to stderr. Unrecoverable errors are returned as an Err. Download
+/// progress is reported via `display`. Returns the remaining ranges that need
+/// to be downloaded.
+async fn download_ranges(
+    url: &str,
+    output: &Path,
+    initial_ranges: Option<&[Range<u64>]>,
+    display: &mut dyn ProgressDisplay,
+    max_tasks: usize,
+    max_errors: u8,
+) -> Result<Vec<Range<u64>>> {
+    let file_size = get_content_length(url).await?;
+
+    // Open for writing, but without truncation.
+    let file = task::block_in_place(|| {
+        OpenOptions::new()
+            .write(true)
+            .create(true)
+            .open(output)
+            .map(PSeekFile::new)
+            .with_context(|| anyhow!("Failed to open for writing: {output:?}"))
+    })?;
+
+    task::block_in_place(|| file.set_len(file_size))
+        .with_context(|| anyhow!("Failed to set file size: {output:?}"))?;
+
+    // Queue of ranges that need to be downloaded.
+    let mut remaining = VecDeque::from(match initial_ranges {
+        Some(r) => r.to_vec(),
+        #[allow(clippy::single_range_in_vec_init)]
+        None => vec![0..file_size],
+    });
+    // Ranges that have failed.
+    let mut failed = Vec::<Range<u64>>::new();
+    // Ranges for currently running tasks.
+    let mut task_ranges = HashMap::<u64, Range<u64>>::new();
+
+    // Overall progress.
+    let mut progress = file_size - remaining.iter().map(|r| r.end - r.start).sum::<u64>();
+    display.progress(progress, file_size);
+
+    let mut tasks = JoinSet::new();
+    let mut next_task_id = 0;
+    let mut error_count = 0u8;
+    // Progress messages from tasks.
+    let (tx, mut rx) = mpsc::channel(max_tasks);
+
+    loop {
+        // Spawn new tasks.
+        while tasks.len() < max_tasks {
+            if remaining.is_empty() && !tasks.is_empty() {
+                // No more ranges to download. Split another task's range.
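+                // Work stealing: take the task with the largest remaining
+                // range and, if at least MIN_CHUNK_SIZE is left, queue its
+                // upper half for a new task.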
+                let (_, old_range) = task_ranges
+                    .iter_mut()
+                    .max_by_key(|(_, r)| r.end - r.start)
+                    .unwrap();
+                let size = old_range.end - old_range.start;
+
+                if size >= MIN_CHUNK_SIZE {
+                    let new_range = old_range.start + size / 2..old_range.end;
+                    old_range.end = new_range.start;
+                    remaining.push_back(new_range);
+                }
+            }
+
+            if let Some(task_range) = remaining.pop_front() {
+                tasks.spawn(download_task(
+                    next_task_id,
+                    url.to_owned(),
+                    file.clone(),
+                    task_range.clone(),
+                    tx.clone(),
+                ));
+
+                task_ranges.insert(next_task_id, task_range);
+                next_task_id += 1;
+            } else {
+                // No pending ranges and no running tasks can be split.
+                break;
+            }
+        }
+
+        tokio::select! {
+            // Interrupted by user.
+            c = ctrl_c() => {
+                c?;
+                break;
+            }
+
+            // Received progress notification.
+            msg = rx.recv() => {
+                let msg = msg.unwrap();
+
+                progress += msg.bytes;
+                display.progress(progress, file_size);
+
+                let task_range = task_ranges.get_mut(&msg.task_id).unwrap();
+                task_range.start += msg.bytes;
+
+                msg.resp.send(task_range.end).unwrap();
+            }
+
+            // Received completion message.
+            r = tasks.join_next() => {
+                match r {
+                    // All tasks exited
+                    None => {
+                        break;
+                    },
+
+                    // Download task panicked
+                    Some(Err(e)) => {
+                        return Err(e).context("Unexpected panic in download task");
+                    }
+
+                    // Task completed successfully
+                    Some(Ok((task_id, Ok(_)))) => {
+                        task_ranges.remove(&task_id).unwrap();
+                    }
+
+                    // Task failed
+                    Some(Ok((task_id, Err(e)))) => {
+                        display.error(&format!("[Task#{task_id}] {e}"));
+                        error_count += 1;
+
+                        let range = task_ranges.remove(&task_id).unwrap();
+
+                        if error_count < max_errors {
+                            remaining.push_back(range);
+                        } else {
+                            failed.push(range);
+                        }
+                    }
+                }
+            }
+        }
+    }
+
+    display.finish();
+
+    failed.extend(remaining.into_iter());
+    failed.extend(task_ranges.into_values());
+
+    Ok(failed)
+}
+
+#[derive(Serialize, Deserialize)]
+struct State {
+    ranges: Vec<Range<u64>>,
+}
+
+fn read_state(path: &Path) -> Result<Option<State>> {
+    let data = match fs::read_to_string(path) {
+        Ok(f) => f,
+        Err(e) if e.kind() == io::ErrorKind::NotFound => return Ok(None),
+        Err(e) => Err(e).with_context(|| anyhow!("Failed to read download state: {path:?}"))?,
+    };
+
+    let state = toml_edit::de::from_str(&data)
+        .with_context(|| anyhow!("Failed to parse download state: {path:?}"))?;
+
+    Ok(Some(state))
+}
+
+fn write_state(path: &Path, state: &State) -> Result<()> {
+    let data = toml_edit::ser::to_string(state).unwrap();
+
+    fs::write(path, data).with_context(|| anyhow!("Failed to write download state: {path:?}"))?;
+
+    Ok(())
+}
+
+fn delete_if_exists(path: &Path) -> Result<()> {
+    if let Err(e) = fs::remove_file(path) {
+        if e.kind() != io::ErrorKind::NotFound {
+            return Err(e).context(format!("Failed to delete file: {path:?}"));
+        }
+    }
+
+    Ok(())
+}
+
+pub fn state_path(path: &Path) -> PathBuf {
+    let mut s = path.as_os_str().to_owned();
+    s.push(".state");
+    PathBuf::from(s)
+}
+
+/// Download `url` to `output` with parallel threads.
+///
+/// If `initial_ranges` is specified, only those sections of the file will be
+/// downloaded. The empty regions are left untouched (i.e. filled with zeroes).
+/// A `.state` file is written if the download is interrupted. If the state
+/// file exists when this function is called, `initial_ranges` is ignored and
+/// the ranges from the state file are used to resume the download.
+pub fn download(
+    url: &str,
+    output: &Path,
+    initial_ranges: Option<&[Range<u64>]>,
+    display: &mut dyn ProgressDisplay,
+    max_tasks: usize,
+    max_errors: u8,
+) -> Result<()> {
+    let state_path = state_path(output);
+    let ranges = match read_state(&state_path)? {
+        Some(r) => Some(r.ranges),
+        None => initial_ranges.map(|r| r.to_vec()),
+    };
+
+    let runtime = Runtime::new()?;
+    let remaining = runtime.block_on(download_ranges(
+        url,
+        output,
+        ranges.as_deref(),
+        display,
+        max_tasks,
+        max_errors,
+    ))?;
+
+    if remaining.is_empty() {
+        delete_if_exists(&state_path)?;
+    } else {
+        write_state(&state_path, &State { ranges: remaining })?;
+        bail!("Download was interrupted");
+    }
+
+    Ok(())
+}
diff --git a/e2e/src/main.rs b/e2e/src/main.rs
new file mode 100644
index 0000000..d0ab36c
--- /dev/null
+++ b/e2e/src/main.rs
@@ -0,0 +1,794 @@
+/*
+ * SPDX-FileCopyrightText: 2023 Andrew Gunnerson
+ * SPDX-FileCopyrightText: 2023 Pascal Roeleven
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+mod cli;
+mod config;
+mod download;
+
+use std::{
+    collections::{BTreeMap, BTreeSet, HashSet},
+    ffi::{OsStr, OsString},
+    fs::{self, File},
+    io::{self, BufReader, BufWriter, Seek, SeekFrom},
+    ops::Range,
+    path::{Path, PathBuf},
+    sync::{
+        atomic::{AtomicBool, Ordering},
+        Arc,
+    },
+    time::Duration,
+};
+
+use anyhow::{anyhow, bail, Context, Result};
+use avbroot::{
+    cli::ota::{ExtractCli, PatchCli, VerifyCli},
+    format::{ota, payload::PayloadHeader},
+    stream::{self, FromReader, HashingReader, PSeekFile, SectionReader},
+};
+use clap::Parser;
+use tempfile::TempDir;
+use zip::ZipArchive;
+
+use crate::{
+    cli::{AddCli, Cli, Command, DeviceGroup, DownloadCli, ListCli, StripCli, TestCli},
+    config::{Config, Device, ImageHashes, OtaHashes, Sha256Hash},
+};
+
+const DOWNLOAD_TASKS: usize = 4;
+const DOWNLOAD_RETRIES: u8 = 3;
+const DOWNLOAD_PROGRESS_INTERVAL: Duration = Duration::from_millis(50);
+
+/// Sort and merge overlapping intervals.
+fn merge_overlapping(sections: &[Range<u64>]) -> Vec<Range<u64>> {
+    let mut sections = sections.to_vec();
+    sections.sort_by_key(|r| (r.start, r.end));
+
+    let mut result = Vec::<Range<u64>>::new();
+
+    for section in sections {
+        if let Some(last) = result.last_mut() {
+            if section.start <= last.end {
+                // Extend, but never shrink, in case this section is fully
+                // contained in the previous one.
+                last.end = last.end.max(section.end);
+                continue;
+            }
+        }
+
+        result.push(section);
+    }
+
+    result
+}
+
+/// Convert an exclusion list into an inclusion list in the range [start, end).
+fn exclusion_to_inclusion(
+    holes: &[Range<u64>],
+    file_range: Range<u64>,
+) -> Result<Vec<Range<u64>>> {
+    let exclusions = merge_overlapping(holes);
+
+    if let (Some(first), Some(last)) = (exclusions.first(), exclusions.last()) {
+        if first.start < file_range.start || last.end > file_range.end {
+            bail!("Sections are outside of the range {file_range:?}");
+        }
+    }
+
+    let flattened = exclusions.iter().flat_map(|p| [p.start, p.end]);
+    let points = [file_range.start]
+        .into_iter()
+        .chain(flattened)
+        .chain([file_range.end])
+        .collect::<Vec<_>>();
+
+    Ok(points.chunks_exact(2).map(|c| c[0]..c[1]).collect())
+}
+
+/// Convert a full OTA to a stripped OTA with all non-AVB-related partitions
+/// removed from the payload. No headers are updated, so the output file will
+/// have invalid hashes and signatures.
+///
+/// Returns the list of file sections and the sha256 digest.
+fn strip_image(
+    input: &Path,
+    output: &Path,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<(Vec<Range<u64>>, [u8; 32])> {
+    println!("Stripping {input:?} to {output:?}");
+
+    let mut raw_reader = File::open(input)
+        .map(PSeekFile::new)
+        .with_context(|| anyhow!("Failed to open for reading: {input:?}"))?;
+    let mut zip_reader = ZipArchive::new(BufReader::new(raw_reader.clone()))
+        .with_context(|| anyhow!("Failed to read zip: {input:?}"))?;
+    let payload_entry = zip_reader
+        .by_name(ota::PATH_PAYLOAD)
+        .with_context(|| anyhow!("Failed to open zip entry: {:?}", ota::PATH_PAYLOAD))?;
+    let payload_offset = payload_entry.data_start();
+    let payload_size = payload_entry.size();
+
+    // Open the payload data directly.
+    let mut payload_reader = SectionReader::new(
+        BufReader::new(raw_reader.clone()),
+        payload_offset,
+        payload_size,
+    )?;
+
+    let header = PayloadHeader::from_reader(&mut payload_reader)
+        .with_context(|| anyhow!("Failed to load OTA payload header"))?;
+
+    let required_images =
+        avbroot::cli::ota::get_required_images(&header.manifest, "@gki_ramdisk", true)?
+            .into_values()
+            .collect::<HashSet<_>>();
+    let mut data_holes = vec![];
+
+    use avbroot::protobuf::chromeos_update_engine::mod_InstallOperation::Type;
+
+    for p in &header.manifest.partitions {
+        if !required_images.contains(&p.partition_name) {
+            for op in &p.operations {
+                match op.type_pb {
+                    Type::ZERO | Type::DISCARD => continue,
+                    _ => {
+                        let start = payload_offset
+                            + header.blob_offset
+                            + op.data_offset.expect("Missing data_offset");
+                        let end = start + op.data_length.expect("Missing data_length");
+
+                        data_holes.push(start..end);
+                    }
+                }
+            }
+        }
+    }
+
+    // Keep all sections outside of the skipped partitions.
+    let file_size = raw_reader.seek(SeekFrom::End(0))?;
+    let sections_to_keep = exclusion_to_inclusion(&data_holes, 0..file_size)?;
+
+    let mut context = ring::digest::Context::new(&ring::digest::SHA256);
+    let raw_writer =
+        File::create(output).with_context(|| anyhow!("Failed to open for writing: {output:?}"))?;
+    raw_writer
+        .set_len(file_size)
+        .with_context(|| anyhow!("Failed to set file size: {output:?}"))?;
+    let mut buf_writer = BufWriter::new(raw_writer);
+    let mut buf_reader = BufReader::new(raw_reader);
+
+    buf_reader.rewind()?;
+
+    for section in &sections_to_keep {
+        let offset = buf_reader.stream_position()?;
+
+        // Hash holes as zeros.
+        if offset != section.start {
+            stream::copy_n_inspect(
+                io::repeat(0),
+                io::sink(),
+                section.start - offset,
+                |data| context.update(data),
+                cancel_signal,
+            )?;
+
+            buf_reader.seek(SeekFrom::Start(section.start))?;
+            buf_writer.seek(SeekFrom::Start(section.start))?;
+        }
+
+        stream::copy_n_inspect(
+            &mut buf_reader,
+            &mut buf_writer,
+            section.end - section.start,
+            |data| context.update(data),
+            cancel_signal,
+        )?;
+    }
+
+    // There can't be a hole at the end of a zip, so nothing to hash.
+
+    let digest = context.finish();
+    Ok((sections_to_keep, digest.as_ref().try_into().unwrap()))
+}
+
+fn url_filename(url: &str) -> Result<&str> {
+    url.rsplit_once('/')
+        .map(|(_, name)| name)
+        .ok_or_else(|| anyhow!("Failed to determine filename from URL: {url}"))
+}
+
+fn hash_file(path: &Path, cancel_signal: &Arc<AtomicBool>) -> Result<[u8; 32]> {
+    println!("Calculating hash of {path:?}");
+
+    let raw_reader =
+        File::open(path).with_context(|| anyhow!("Failed to open for reading: {path:?}"))?;
+    let buf_reader = BufReader::new(raw_reader);
+    let context = ring::digest::Context::new(&ring::digest::SHA256);
+    let mut hashing_reader = HashingReader::new(buf_reader, context);
+
+    stream::copy(&mut hashing_reader, io::sink(), cancel_signal)?;
+
+    let (_, context) = hashing_reader.finish();
+    let digest = context.finish();
+
+    Ok(digest.as_ref().try_into().unwrap())
+}
+
+fn verify_hash(path: &Path, sha256: &[u8; 32], cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    let digest = hash_file(path, cancel_signal)?;
+
+    if sha256 != digest.as_ref() {
+        bail!(
+            "Expected sha256 {}, but have {}: {path:?}",
+            hex::encode(sha256),
+            hex::encode(digest),
+        );
+    }
+
+    Ok(())
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+enum Validate {
+    Always,
+    IfNew,
+    Never,
+}
+
+fn download_file(
+    path: &Path,
+    url: &str,
+    sha256: &[u8; 32],
+    sections: Option<&[Range<u64>]>,
+    path_is_dir: bool,
+    validate: Validate,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<PathBuf> {
+    let path = if path_is_dir {
+        path.join(url_filename(url)?)
+    } else {
+        path.to_owned()
+    };
+
+    if let Some(parent) = path.parent() {
+        fs::create_dir_all(parent)
+            .with_context(|| anyhow!("Failed to create directory: {parent:?}"))?;
+    }
+
+    let mut do_validate = validate != Validate::Never;
+
+    if path.exists() && !download::state_path(&path).exists() {
+        if validate == Validate::IfNew {
+            do_validate = false;
+        }
+    } else {
+        println!("Downloading {url} to {path:?}");
+
+        let mut display = download::BasicProgressDisplay::new(DOWNLOAD_PROGRESS_INTERVAL);
+
+        download::download(
+            url,
+            &path,
+            sections,
+            &mut display,
+            DOWNLOAD_TASKS,
+            DOWNLOAD_RETRIES,
+        )?;
+    }
+
+    if do_validate {
+        verify_hash(&path, sha256, cancel_signal)?;
+    }
+
+    Ok(path)
+}
+
+fn download_magisk(
+    config: &Config,
+    work_dir: &Path,
+    revalidate: bool,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<PathBuf> {
+    download_file(
+        &work_dir.join("magisk"),
+        &config.magisk.url,
+        &config.magisk.hash.0,
+        None,
+        true,
+        if revalidate {
+            Validate::Always
+        } else {
+            Validate::IfNew
+        },
+        cancel_signal,
+    )
+}
+
+fn download_image(
+    config: &Config,
+    device: &str,
+    work_dir: &Path,
+    stripped: bool,
+    revalidate: bool,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<PathBuf> {
+    let info = &config.device[device];
+    let mut path = work_dir.join(device);
+    path.push(url_filename(&info.url)?);
+    let mut sha256 = &info.hash.original.full.0;
+    let mut sections = None;
+
+    if stripped {
+        path.as_mut_os_string().push(".stripped");
+        sha256 = &info.hash.original.stripped.0;
+        sections = Some(info.sections.as_slice());
+    }
+
+    download_file(
+        &path,
+        &info.url,
+        sha256,
+        sections,
+        false,
+        if revalidate {
+            Validate::Always
+        } else {
+            Validate::IfNew
+        },
+        cancel_signal,
+    )
+}
+
+#[rustfmt::skip]
+fn test_keys() -> Result<(TempDir, Vec<OsString>, Vec<OsString>)> {
+    let avb_key = include_bytes!(concat!(
+        env!("CARGO_MANIFEST_DIR"),
+        "/keys/TEST_KEY_DO_NOT_USE_avb.key",
+    ));
+    let avb_pass = include_bytes!(concat!(
+        env!("CARGO_MANIFEST_DIR"),
+        "/keys/TEST_KEY_DO_NOT_USE_avb.passphrase",
+    ));
+    let avb_pkmd = include_bytes!(concat!(
env!("CARGO_MANIFEST_DIR"), + "/keys/TEST_KEY_DO_NOT_USE_avb_pkmd.bin", + )); + let ota_key = include_bytes!(concat!( + env!("CARGO_MANIFEST_DIR"), + "/keys/TEST_KEY_DO_NOT_USE_ota.key", + )); + let ota_pass = include_bytes!(concat!( + env!("CARGO_MANIFEST_DIR"), + "/keys/TEST_KEY_DO_NOT_USE_ota.passphrase", + )); + let ota_cert = include_bytes!(concat!( + env!("CARGO_MANIFEST_DIR"), + "/keys/TEST_KEY_DO_NOT_USE_ota.crt", + )); + + let temp_dir = TempDir::new().context("Failed to create temporary directory for test keys")?; + let mut patch_args = Vec::::new(); + let mut verify_args = Vec::::new(); + + for (name, data, patch_arg, verify_arg) in [ + ("avb.key", &avb_key[..], Some("--key-avb"), None), + ("avb.pass", &avb_pass[..], Some("--pass-avb-file"), None), + ("avb.pkmd", &avb_pkmd[..], None, Some("--public-key-avb")), + ("ota.key", &ota_key[..], Some("--key-ota"), None), + ("ota.pass", &ota_pass[..], Some("--pass-ota-file"), None), + ("ota.crt", &ota_cert[..], Some("--cert-ota"), Some("--cert-ota")), + ] { + let path = temp_dir.path().join(name); + fs::write(&path, data).with_context(|| anyhow!("Failed to write test key: {path:?}"))?; + + if let Some(arg) = patch_arg { + patch_args.push(arg.into()); + patch_args.push(path.as_os_str().to_owned()); + } + if let Some(arg) = verify_arg { + verify_args.push(arg.into()); + verify_args.push(path.as_os_str().to_owned()); + } + } + + Ok((temp_dir, patch_args, verify_args)) +} + +fn patch_image( + input_file: &Path, + output_file: &Path, + extra_args: &[OsString], + cancel_signal: &Arc, +) -> Result<()> { + println!("Patching {input_file:?}"); + + let (_temp_key_dir, key_args, _) = test_keys()?; + + // We're intentionally using the CLI interface. + let mut args: Vec = vec![ + "patch".into(), + "--input".into(), + input_file.as_os_str().into(), + "--output".into(), + output_file.as_os_str().into(), + ]; + args.extend(key_args); + args.extend_from_slice(extra_args); + + if args.contains(&OsStr::new("--magisk").into()) { + // This doesn't need to be correct. The test outputs aren't meant to + // be booted on real devices. 
+        args.push("--magisk-preinit-device".into());
+        args.push("metadata".into());
+    }
+
+    let cli = PatchCli::try_parse_from(args)?;
+    avbroot::cli::ota::patch_subcommand(&cli, cancel_signal)?;
+
+    Ok(())
+}
+
+fn extract_image(
+    input_file: &Path,
+    output_dir: &Path,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<()> {
+    println!("Extracting AVB partitions from {input_file:?}");
+
+    let cli = ExtractCli::try_parse_from([
+        OsStr::new("extract"),
+        OsStr::new("--input"),
+        input_file.as_os_str(),
+        OsStr::new("--directory"),
+        output_dir.as_os_str(),
+    ])?;
+    avbroot::cli::ota::extract_subcommand(&cli, cancel_signal)?;
+
+    Ok(())
+}
+
+fn verify_image(input_file: &Path, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    println!("Verifying signatures in {input_file:?}");
+
+    let (_temp_key_dir, _, key_args) = test_keys()?;
+
+    let mut args: Vec<OsString> = vec![
+        "verify".into(),
+        "--input".into(),
+        input_file.as_os_str().into(),
+    ];
+    args.extend(key_args);
+
+    let cli = VerifyCli::try_parse_from(args)?;
+    avbroot::cli::ota::verify_subcommand(&cli, cancel_signal)?;
+
+    Ok(())
+}
+
+fn get_magisk_partition(path: &Path) -> Result<String> {
+    let raw_reader =
+        File::open(path).with_context(|| anyhow!("Failed to open for reading: {path:?}"))?;
+    let mut zip = ZipArchive::new(BufReader::new(raw_reader))
+        .with_context(|| anyhow!("Failed to read zip: {path:?}"))?;
+    let payload_entry = zip
+        .by_name(ota::PATH_PAYLOAD)
+        .with_context(|| anyhow!("Failed to open zip entry: {:?}", ota::PATH_PAYLOAD))?;
+    let payload_offset = payload_entry.data_start();
+    let payload_size = payload_entry.size();
+
+    drop(payload_entry);
+    let buf_reader = zip.into_inner();
+
+    // Open the payload data directly.
+    let mut payload_reader = SectionReader::new(buf_reader, payload_offset, payload_size)?;
+
+    let header = PayloadHeader::from_reader(&mut payload_reader)
+        .with_context(|| anyhow!("Failed to load OTA payload header"))?;
+    let images = avbroot::cli::ota::get_partitions_by_type(&header.manifest)?;
+
+    Ok(images["@gki_ramdisk"].clone())
+}
+
+fn filter_devices<'a>(config: &'a Config, cli: &'a DeviceGroup) -> Result<BTreeSet<&'a str>> {
+    let mut devices = config
+        .device
+        .keys()
+        .map(|d| d.as_str())
+        .collect::<BTreeSet<_>>();
+
+    if !cli.all {
+        let invalid = cli
+            .device
+            .iter()
+            .filter(|d| !devices.contains(d.as_str()))
+            .collect::<Vec<_>>();
+        if !invalid.is_empty() {
+            bail!("Invalid devices: {invalid:?}");
+        }
+
+        devices = cli.device.iter().map(|d| d.as_str()).collect();
+    }
+
+    Ok(devices)
+}
+
+fn strip_subcommand(cli: &StripCli, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    let (sections, sha256) = strip_image(&cli.input, &cli.output, cancel_signal)?;
+
+    println!("Preserved sections:");
+    for section in sections {
+        println!("- {section:?}");
+    }
+
+    println!("SHA256: {}", hex::encode(sha256));
+
+    Ok(())
+}
+
+fn add_subcommand(cli: &AddCli, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    let (config, mut document) = config::load_config(&cli.config.config)?;
+
+    let image_dir = cli.config.work_dir.join(&cli.device);
+
+    let full_ota = image_dir.join(url_filename(&cli.url)?);
+    let mut full_ota_patched = full_ota.clone();
+    full_ota_patched.as_mut_os_string().push(&cli.patch.suffix);
+    let mut stripped_ota = full_ota.clone();
+    stripped_ota.as_mut_os_string().push(".stripped");
+    let mut stripped_ota_patched = stripped_ota.clone();
+    stripped_ota_patched
+        .as_mut_os_string()
+        .push(&cli.patch.suffix);
+
+    let full_ota_hash = cli.hash.as_ref().map(|h| h.0);
+
+    download_file(
+        &full_ota,
+        &cli.url,
+        &full_ota_hash.unwrap_or_default(),
+        None,
+        false,
+        if full_ota_hash.is_some() {
+            Validate::Always
+        } else {
+            Validate::Never
+        },
+        cancel_signal,
+    )?;
+
+    // Calculate the hash ourselves if one wasn't provided.
+    let full_ota_hash = match full_ota_hash {
+        Some(h) => h,
+        None => hash_file(&full_ota, cancel_signal)?,
+    };
+
+    let magisk_file = download_magisk(&config, &cli.config.work_dir, true, cancel_signal)?;
+    let magisk_args = [OsString::from("--magisk"), magisk_file.into_os_string()];
+
+    // Patch the full image.
+    patch_image(&full_ota, &full_ota_patched, &magisk_args, cancel_signal)?;
+    let full_ota_patched_hash = hash_file(&full_ota_patched, cancel_signal)?;
+
+    // Check that the patched full image looks good.
+    if cli.skip_verify {
+        println!("OTA and AVB signature validation skipped");
+    } else {
+        verify_image(&full_ota_patched, cancel_signal)?;
+    }
+
+    // Strip the full image.
+    let (sections, stripped_ota_hash) = strip_image(&full_ota, &stripped_ota, cancel_signal)?;
+
+    // Patch the stripped image. This doesn't fail zip's CRC checks because the
+    // `ota patch` command reads the payload directly from the raw backing
+    // file.
+    patch_image(
+        &stripped_ota,
+        &stripped_ota_patched,
+        &magisk_args,
+        cancel_signal,
+    )?;
+    let stripped_ota_patched_hash = hash_file(&stripped_ota_patched, cancel_signal)?;
+
+    // Hash all of the AVB-related partition images so that `e2e test` can fail
+    // fast if something goes wrong.
+    let mut avb_images = BTreeMap::<String, Sha256Hash>::new();
+
+    {
+        let temp_dir = TempDir::new().context("Failed to create temp directory")?;
+        extract_image(&full_ota_patched, temp_dir.path(), cancel_signal)?;
+
+        for entry in fs::read_dir(temp_dir.path())? {
+            let entry = entry?;
+            let hash = hash_file(&entry.path(), cancel_signal)?;
+
+            avb_images.insert(entry.file_name().into_string().unwrap(), Sha256Hash(hash));
+        }
+    }
+
+    println!("Adding {} to config file", cli.device);
+
+    let device = Device {
+        url: cli.url.clone(),
+        sections,
+        hash: ImageHashes {
+            original: OtaHashes {
+                full: Sha256Hash(full_ota_hash),
+                stripped: Sha256Hash(stripped_ota_hash),
+            },
+            patched: OtaHashes {
+                full: Sha256Hash(full_ota_patched_hash),
+                stripped: Sha256Hash(stripped_ota_patched_hash),
+            },
+            avb_images,
+        },
+    };
+
+    config::add_device(&mut document, &cli.device, &device)?;
+
+    let config_serialized = document.to_string();
+    fs::write(&cli.config.config, config_serialized)
+        .with_context(|| anyhow!("Failed to write config: {:?}", cli.config.config))?;
+
+    if cli.patch.delete_on_success {
+        for path in [full_ota_patched, stripped_ota_patched] {
+            fs::remove_file(&path).with_context(|| anyhow!("Failed to delete file: {path:?}"))?;
+        }
+    }
+
+    Ok(())
+}
+
+fn download_subcommand(cli: &DownloadCli, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    let (config, _) = config::load_config(&cli.config.config)?;
+    let devices = filter_devices(&config, &cli.device)?;
+
+    if !cli.magisk && devices.is_empty() {
+        bail!("No downloads selected");
+    }
+
+    if cli.magisk {
+        download_magisk(
+            &config,
+            &cli.config.work_dir,
+            cli.download.revalidate,
+            cancel_signal,
+        )?;
+    }
+
+    for device in devices {
+        download_image(
+            &config,
+            device,
+            &cli.config.work_dir,
+            cli.download.stripped,
+            cli.download.revalidate,
+            cancel_signal,
+        )?;
+    }
+
+    Ok(())
+}
+
+fn test_subcommand(cli: &TestCli, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    let (config, _) = config::load_config(&cli.config.config)?;
+    let devices = filter_devices(&config, &cli.device)?;
+
+    if devices.is_empty() {
+        bail!("No devices selected");
+    }
+
+    let magisk_file = download_magisk(
+        &config,
+        &cli.config.work_dir,
+        cli.download.revalidate,
+        cancel_signal,
+    )?;
+    let magisk_args = [OsString::from("--magisk"), magisk_file.into_os_string()];
+
+    for device in devices {
+        let info = &config.device[device];
+
+        let image_file = download_image(
+            &config,
+            device,
+            &cli.config.work_dir,
+            cli.download.stripped,
+            cli.download.revalidate,
+            cancel_signal,
+        )?;
+
+        let mut patched_file = image_file.clone();
+        patched_file.as_mut_os_string().push(&cli.patch.suffix);
+
+        let patched_hash = if cli.download.stripped {
+            &info.hash.patched.stripped.0
+        } else {
+            &info.hash.patched.full.0
+        };
+
+        patch_image(&image_file, &patched_file, &magisk_args, cancel_signal)?;
+
+        let temp_dir = TempDir::new().context("Failed to create temp directory")?;
+
+        // Check partitions first so we fail fast if the issue is with AVB.
+        extract_image(&patched_file, temp_dir.path(), cancel_signal)?;
+
+        let mut expected = info.hash.avb_images.keys().collect::<BTreeSet<_>>();
+
+        for entry in fs::read_dir(temp_dir.path())? {
+            let entry = entry?;
+            let name = entry.file_name().into_string().unwrap();
+            let hash = info
+                .hash
+                .avb_images
+                .get(&name)
+                .ok_or_else(|| anyhow!("Missing AVB image hash for {name}"))?;
+
+            verify_hash(&entry.path(), &hash.0, cancel_signal)?;
+            expected.remove(&name);
+        }
+
+        if !expected.is_empty() {
+            bail!("Missing AVB images: {expected:?}");
+        }
+
+        // Then, validate the hash of everything.
+        verify_hash(&patched_file, patched_hash, cancel_signal)?;
+
+        // Patch again, but this time, use the previously patched boot image
+        // instead of applying the Magisk patch.
+        let magisk_partition = get_magisk_partition(&patched_file)?;
+        let prepatched_args = [
+            OsStr::new("--prepatched").to_owned(),
+            temp_dir
+                .path()
+                .join(format!("{magisk_partition}.img"))
+                .into_os_string(),
+        ];
+
+        fs::remove_file(&patched_file)
+            .with_context(|| anyhow!("Failed to delete file: {patched_file:?}"))?;
+
+        patch_image(&image_file, &patched_file, &prepatched_args, cancel_signal)?;
+
+        verify_hash(&patched_file, patched_hash, cancel_signal)?;
+
+        if cli.patch.delete_on_success {
+            fs::remove_file(&patched_file)
+                .with_context(|| anyhow!("Failed to delete file: {patched_file:?}"))?;
+        }
+    }
+
+    Ok(())
+}
+
+fn list_subcommand(cli: &ListCli) -> Result<()> {
+    let (config, _) = config::load_config(&cli.config.config)?;
+
+    for device in config.device.keys() {
+        println!("{device}");
+    }
+
+    Ok(())
+}
+
+fn main() -> Result<()> {
+    // Set up a cancel signal so we can properly clean up any temporary files.
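+    // The flag is passed down to avbroot's stream helpers, which poll it and
+    // bail out of long-running copies when it is set.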
+ let cancel_signal = Arc::new(AtomicBool::new(false)); + { + let signal = cancel_signal.clone(); + + ctrlc::set_handler(move || { + signal.store(true, Ordering::SeqCst); + }) + .expect("Failed to set signal handler"); + } + + let cli = Cli::parse(); + + match cli.command { + Command::Strip(c) => strip_subcommand(&c, &cancel_signal), + Command::Add(c) => add_subcommand(&c, &cancel_signal), + Command::Download(c) => download_subcommand(&c, &cancel_signal), + Command::Test(c) => test_subcommand(&c, &cancel_signal), + Command::List(c) => list_subcommand(&c), + } +} diff --git a/external/avb b/external/avb deleted file mode 160000 index 3210440..0000000 --- a/external/avb +++ /dev/null @@ -1 +0,0 @@ -Subproject commit 3210440973140a2646a2c88268ef899993e721ee diff --git a/external/build b/external/build deleted file mode 160000 index 2014bbb..0000000 --- a/external/build +++ /dev/null @@ -1 +0,0 @@ -Subproject commit 2014bbb8e7bf66085bf6e9d0f1460150bee98a13 diff --git a/external/update_engine b/external/update_engine deleted file mode 160000 index e47767a..0000000 --- a/external/update_engine +++ /dev/null @@ -1 +0,0 @@ -Subproject commit e47767a5a860bbdee11798d6e95f6afbfef04e5b diff --git a/extra/README.md b/extra/README.md deleted file mode 100644 index 94f7d05..0000000 --- a/extra/README.md +++ /dev/null @@ -1,62 +0,0 @@ -# avbroot extra - -This directory contains some extra scripts that aren't required for avbroot's operation, but may be useful for troubleshooting. - -## `bootimagetool` - -This is a frontend to the [`avbroot/formats/bootimage.py`](../avbroot/formats/bootimage.py) library for working with boot images. - -### Unpacking a boot image - -```bash -python bootimagetool.py unpack -``` - -This subcommand unpacks all of the components of the boot image into the current directory by default (see `--help`). The header fields are saved to `header.json` and each blob section is saved to a separate file. Each blob is written to disk as-is, without decompression. - -### Packing a boot image - -```bash -python bootimagetool.py pack -``` - -This subcommand packs a new boot image from the individual components in the current directory by default (see `--help`). The default input filenames are the same as the output filenames for the `unpack` subcommand. - -### Repacking a boot image - -```bash -python bootimagetool.py repack -``` - -This subcommand repacks a boot image without writing the individual components to disk first. This is useful for roundtrip testing of avbroot's boot image parser. The output should be identical to the input, minus any footers, like the AVB footer. The only exception is the VTS signature for v4 boot images, which is always stripped out. - -## `cpiotool` - -This is a frontend to the [`avbroot/formats/compression.py`](../avbroot/formats/compression.py) and [`avbroot/formats/cpio.py`](../avbroot/formats/cpio.py) libraries. It is useful for inspecting compressed and uncompressed cpio archives. - -### Dumping a cpio archive - -```bash -python cpiotool.py dump -``` - -This subcommand dumps all information about a cpio archive to stdout. This includes the compression format, all header fields (including the trailer entry), and all data. If an entry's data can be decoded as UTF-8, then it is printed out as text. Otherwise, the binary data is printed out base64-encoded. The base64-encoded data is truncated to 5 lines by default to avoid outputting too much data, but this behavior can be disabled with `--no-truncate`. 
- -### Repacking a cpio archive - -```bash -python cpiotool.py repack -``` - -This subcommand repacks a cpio archive, including recompression if needed. This is useful for roundtrip testing of avbroot's cpio parser and compression handling. The uncompressed output should be identical to the uncompressed input, except: - -* files are sorted by name -* inodes are reassigned, starting from 300000 -* there is no excess padding at the end of the file - -The compressed output may differ from what other tools produce because: - -* LZ4 legacy chunks are packed to exactly 8 MiB, except for the last chunk, which may be smaller. -* LZ4 legacy uses the high compression mode with a compression level of 12. -* The GZIP header has the modification timestamp set to 0 (Unix epoch time). -* GZIP uses a compression level of 9. diff --git a/extra/bootimagetool.py b/extra/bootimagetool.py deleted file mode 100755 index ede9d26..0000000 --- a/extra/bootimagetool.py +++ /dev/null @@ -1,168 +0,0 @@ -#!/usr/bin/env python3 - -import argparse -import itertools -import json -import os -import sys - -sys.path.append(os.path.join(sys.path[0], '..')) -from avbroot.formats import bootimage - - -class BytesDecoder(json.JSONDecoder): - def __init__(self): - super().__init__(object_hook=self.from_dict) - - @staticmethod - def from_dict(d): - # This is insufficient for arbitrary data, but we're not dealing with - # arbitrary data - if 'type' in d: - if d['type'] == 'UTF-8': - return d['data'].encode('UTF-8') - elif d['type'] == 'hex': - return bytes.fromhex(d['data']) - - return d - - -class BytesEncoder(json.JSONEncoder): - def default(self, obj): - if isinstance(obj, bytes): - if b'\0' not in obj: - try: - return { - 'type': 'UTF-8', - 'data': obj.decode('UTF-8'), - } - except UnicodeDecodeError: - pass - - return { - 'type': 'hex', - 'data': obj.hex(), - } - - return super().default(obj) - - -def read_or_none(path): - try: - with open(path, 'rb') as f: - return f.read() - except FileNotFoundError: - return None - - -def write_if_not_none(path, data): - if data is not None: - with open(path, 'wb') as f: - f.write(data) - - -def parse_args(): - parser_kwargs = {'formatter_class': argparse.ArgumentDefaultsHelpFormatter} - - parser = argparse.ArgumentParser(**parser_kwargs) - subparsers = parser.add_subparsers(dest='subcommand', required=True, - help='Subcommands') - - base = argparse.ArgumentParser(add_help=False) - base.add_argument('-q', '--quiet', action='store_true', - help='Do not print header information') - - pack = subparsers.add_parser('pack', help='Pack a boot image', - parents=[base], **parser_kwargs) - unpack = subparsers.add_parser('unpack', help='Unpack a boot image', - parents=[base], **parser_kwargs) - repack = subparsers.add_parser('repack', help='Repack a boot image', - parents=[base], **parser_kwargs) - - for p in (pack, unpack): - prefix = '--input-' if p == pack else '--output-' - - p.add_argument('boot_image', help='Path to boot image') - - p.add_argument(prefix + 'header', default='header.json', - help='Path to header JSON') - p.add_argument(prefix + 'kernel', default='kernel.img', - help='Path to kernel') - p.add_argument(prefix + 'ramdisk-prefix', default='ramdisk.img.', - help='Path prefix for ramdisk') - p.add_argument(prefix + 'second', default='second.img', - help='Path to second stage bootloader') - p.add_argument(prefix + 'recovery-dtbo', default='recovery_dtbo.img', - help='Path to recovery dtbo/acpio') - p.add_argument(prefix + 'dtb', default='dtb.img', - help='Path to device tree blob') - 
p.add_argument(prefix + 'bootconfig', default='bootconfig.txt', - help='Path to bootconfig') - - repack.add_argument('input', help='Path to input boot image') - repack.add_argument('output', help='Path to output boot image') - - return parser.parse_args() - - -def main(): - args = parse_args() - - if args.subcommand == 'pack': - with open(args.input_header, 'r') as f: - data = json.load(f, cls=BytesDecoder) - - img = bootimage.create_from_dict(data) - - img.kernel = read_or_none(args.input_kernel) - img.second = read_or_none(args.input_second) - img.recovery_dtbo = read_or_none(args.input_recovery_dtbo) - img.dtb = read_or_none(args.input_dtb) - img.bootconfig = read_or_none(args.input_bootconfig) - - for i in itertools.count(): - ramdisk = read_or_none(f'{args.input_ramdisk_prefix}{i}') - if ramdisk is None: - break - - img.ramdisks.append(ramdisk) - - if not args.quiet: - print(img) - - with open(args.boot_image, 'wb') as f: - img.generate(f) - - elif args.subcommand == 'unpack': - with open(args.boot_image, 'rb') as f: - img = bootimage.load_autodetect(f) - if not args.quiet: - print(img) - - with open(args.output_header, 'w') as f: - json.dump(img.to_dict(), f, indent=4, cls=BytesEncoder) - - write_if_not_none(args.output_kernel, img.kernel) - write_if_not_none(args.output_second, img.second) - write_if_not_none(args.output_recovery_dtbo, img.recovery_dtbo) - write_if_not_none(args.output_dtb, img.dtb) - write_if_not_none(args.output_bootconfig, img.bootconfig) - - for i, ramdisk in enumerate(img.ramdisks): - write_if_not_none(f'{args.output_ramdisk_prefix}{i}', ramdisk) - - elif args.subcommand == 'repack': - with open(args.input, 'rb') as f: - img = bootimage.load_autodetect(f) - if not args.quiet: - print(img) - - with open(args.output, 'wb') as f: - img.generate(f) - - else: - raise NotImplementedError() - - -if __name__ == '__main__': - main() diff --git a/extra/cpiotool.py b/extra/cpiotool.py deleted file mode 100755 index 6b6153d..0000000 --- a/extra/cpiotool.py +++ /dev/null @@ -1,120 +0,0 @@ -#!/usr/bin/env python3 - -import argparse -import base64 -import os -import sys - -sys.path.append(os.path.join(sys.path[0], '..')) -from avbroot.formats import compression -from avbroot.formats import cpio - - -CONTENT_BEGIN = '----- BEGIN UTF-8 CONTENT -----' -CONTENT_END = '----- END UTF-8 CONTENT -----' -CONTENT_END_NO_NEWLINE = '----- END UTF-8 CONTENT (NO NEWLINE) -----' - -BASE64_BEGIN = '----- BEGIN BASE64 CONTENT -----' -BASE64_END = '----- END BASE64 CONTENT -----' -BASE64_END_TRUNCATED = '----- END BASE64 CONTENT (TRUNCATED) -----' - -NO_DATA = '----- NO DATA -----' - - -def print_content(data, truncate=False): - if not data: - print(NO_DATA) - return - - if b'\0' not in data: - try: - data_str = data.decode('UTF-8') - - if CONTENT_BEGIN not in data_str \ - and CONTENT_END not in data_str \ - and CONTENT_END_NO_NEWLINE not in data_str: - print(CONTENT_BEGIN) - print(data_str, end='') - if data_str[-1] != '\n': - print() - print(CONTENT_END_NO_NEWLINE) - else: - print(CONTENT_END) - - return - except UnicodeDecodeError: - pass - - data_base64 = base64.b64encode(data).decode('ascii') - - print(BASE64_BEGIN) - for i, offset in enumerate(range(0, len(data_base64), 76)): - if truncate and i == 5: - print(BASE64_END_TRUNCATED) - return - - print(data_base64[offset:offset + 76]) - print(BASE64_END) - - -def parse_args(): - parser = argparse.ArgumentParser() - subparsers = parser.add_subparsers(dest='subcommand', required=True, - help='Subcommands') - - dump = 
subparsers.add_parser('dump', help='Dump cpio headers and data') - repack = subparsers.add_parser('repack', help='Repack cpio archive') - - dump.add_argument('--no-truncate', action='store_true', - help='Do not truncate binary file contents') - - for p in (dump, repack): - p.add_argument('input', help='Path to input cpio file') - - repack.add_argument('output', help='Path to output cpio file') - - return parser.parse_args() - - -def load_archive(path, **cpio_kwargs): - with open(path, 'rb') as f_raw: - with compression.CompressedFile(f_raw, 'rb', raw_if_unknown=True) as f: - return cpio.load(f.fp, **cpio_kwargs), f.format - - -def save_archive(path, entries, format): - with open(path, 'wb') as f_raw: - with compression.CompressedFile(f_raw, 'wb', format=format, - raw_if_unknown=True) as f: - cpio.save(f.fp, entries) - - -def main(): - args = parse_args() - - if args.subcommand == 'dump': - entries, format = load_archive( - args.input, - # We want to show the headers exactly as they are - include_trailer=True, - reassign_inodes=False, - ) - - print('Compression format:', format) - print() - - for entry in entries: - print(entry) - print_content(entry.content, truncate=not args.no_truncate) - print() - - elif args.subcommand == 'repack': - entries, format = load_archive(args.input) - save_archive(args.output, entries, format) - - else: - raise NotImplementedError() - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/protobuf/ota_metadata.proto b/protobuf/ota_metadata.proto new file mode 100644 index 0000000..689ce80 --- /dev/null +++ b/protobuf/ota_metadata.proto @@ -0,0 +1,115 @@ +/* + * Copyright (C) 2020 The Android Open Source Project + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +// If you change this file, +// Please update ota_metadata_pb2.py by executing +// protoc ota_metadata.proto --python_out +// $ANDROID_BUILD_TOP/build/tools/releasetools + +syntax = "proto3"; + +package build.tools.releasetools; +option optimize_for = LITE_RUNTIME; +option java_package = "android.ota"; +option java_outer_classname = "OtaPackageMetadata"; + +// The build information of a particular partition on the device. +message PartitionState { + string partition_name = 1; + repeated string device = 2; + repeated string build = 3; + // The version string of the partition. It's usually timestamp if present. + // One known exception is the boot image, who uses the kmi version, e.g. + // 5.4.42-android12-0 + string version = 4; + + // TODO(xunchang), revisit other necessary fields, e.g. security_patch_level. +} + +// The build information on the device. The bytes of the running images are thus +// inferred from the device state. For more information of the meaning of each +// subfield, check +// https://source.android.com/compatibility/android-cdd#3_2_2_build_parameters +message DeviceState { + // device name. i.e. ro.product.device; if the field has multiple values, it + // means the ota package supports multiple devices. 
This usually happens when
+  // we use the same image to support multiple skus.
+  repeated string device = 1;
+  // device fingerprint. Up to R build, the value reads from
+  // ro.build.fingerprint.
+  repeated string build = 2;
+  // A value that specify a version of the android build.
+  string build_incremental = 3;
+  // The timestamp when the build is generated.
+  int64 timestamp = 4;
+  // The version of the currently-executing Android system.
+  string sdk_level = 5;
+  // A value indicating the security patch level of a build.
+  string security_patch_level = 6;
+
+  // The detailed state of each partition. For partial updates or devices with
+  // mixed build of partitions, some of the above fields may left empty. And the
+  // client will rely on the information of specific partitions to target the
+  // update.
+  repeated PartitionState partition_state = 7;
+}
+
+message ApexInfo {
+  string package_name = 1;
+  int64 version = 2;
+  bool is_compressed = 3;
+  int64 decompressed_size = 4;
+  // Used in OTA
+  int64 source_version = 5;
+}
+
+// Just a container to hold repeated apex_info, so that we can easily serialize
+// a list of apex_info to string.
+message ApexMetadata {
+  repeated ApexInfo apex_info = 1;
+}
+
+// The metadata of an OTA package. It contains the information of the package
+// and prerequisite to install the update correctly.
+message OtaMetadata {
+  enum OtaType {
+    UNKNOWN = 0;
+    AB = 1;
+    BLOCK = 2;
+    BRICK = 3;
+  };
+  OtaType type = 1;
+  // True if we need to wipe after the update.
+  bool wipe = 2;
+  // True if the timestamp of the post build is older than the pre build.
+  bool downgrade = 3;
+  // A map of name:content of property files, e.g. ota-property-files.
+  map<string, string> property_files = 4;
+
+  // The required device state in order to install the package.
+  DeviceState precondition = 5;
+  // The expected device state after the update.
+  DeviceState postcondition = 6;
+
+  // True if the ota that updates a device to support dynamic partitions, where
+  // the source build doesn't support it.
+  bool retrofit_dynamic_partitions = 7;
+  // The required size of the cache partition, only valid for non-A/B update.
+  int64 required_cache = 8;
+
+  // True iff security patch level downgrade is permitted on this OTA.
+  bool spl_downgrade = 9;
+}
diff --git a/protobuf/update_metadata.proto b/protobuf/update_metadata.proto
new file mode 100644
index 0000000..3881464
--- /dev/null
+++ b/protobuf/update_metadata.proto
@@ -0,0 +1,437 @@
+//
+// Copyright (C) 2010 The Android Open Source Project
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+// Update file format: An update file contains all the operations needed
+// to update a system to a specific version. It can be a full payload which
+// can update from any version, or a delta payload which can only update
+// from a specific version.
+// The update format is represented by this struct pseudocode: +// struct delta_update_file { +// char magic[4] = "CrAU"; +// uint64 file_format_version; // payload major version +// uint64 manifest_size; // Size of protobuf DeltaArchiveManifest +// +// // Only present if format_version >= 2: +// uint32 metadata_signature_size; +// +// // The DeltaArchiveManifest protobuf serialized, not compressed. +// char manifest[manifest_size]; +// +// // The signature of the metadata (from the beginning of the payload up to +// // this location, not including the signature itself). This is a serialized +// // Signatures message. +// char metadata_signature_message[metadata_signature_size]; +// +// // Data blobs for files, no specific format. The specific offset +// // and length of each data blob is recorded in the DeltaArchiveManifest. +// struct { +// char data[]; +// } blobs[]; +// +// // The signature of the entire payload, everything up to this location, +// // except that metadata_signature_message is skipped to simplify signing +// // process. These two are not signed: +// uint64 payload_signatures_message_size; +// // This is a serialized Signatures message. +// char payload_signatures_message[payload_signatures_message_size]; +// +// }; + +// The DeltaArchiveManifest protobuf is an ordered list of InstallOperation +// objects. These objects are stored in a linear array in the +// DeltaArchiveManifest. Each operation is applied in order by the client. + +// The DeltaArchiveManifest also contains the initial and final +// checksums for the device. + +// The client will perform each InstallOperation in order, beginning even +// before the entire delta file is downloaded (but after at least the +// protobuf is downloaded). The types of operations are explained: +// - REPLACE: Replace the dst_extents on the drive with the attached data, +// zero padding out to block size. +// - REPLACE_BZ: bzip2-uncompress the attached data and write it into +// dst_extents on the drive, zero padding to block size. +// - MOVE: Copy the data in src_extents to dst_extents. Extents may overlap, +// so it may be desirable to read all src_extents data into memory before +// writing it out. (deprecated) +// - SOURCE_COPY: Copy the data in src_extents in the old partition to +// dst_extents in the new partition. There's no overlapping of data because +// the extents are in different partitions. +// - BSDIFF: Read src_length bytes from src_extents into memory, perform +// bspatch with attached data, write new data to dst_extents, zero padding +// to block size. (deprecated) +// - SOURCE_BSDIFF: Read the data in src_extents in the old partition, perform +// bspatch with the attached data and write the new data to dst_extents in the +// new partition. +// - ZERO: Write zeros to the destination dst_extents. +// - DISCARD: Discard the destination dst_extents blocks on the physical medium. +// the data read from those blocks is undefined. +// - REPLACE_XZ: Replace the dst_extents with the contents of the attached +// xz file after decompression. The xz file should only use crc32 or no crc at +// all to be compatible with xz-embedded. +// - PUFFDIFF: Read the data in src_extents in the old partition, perform +// puffpatch with the attached data and write the new data to dst_extents in +// the new partition. +// +// The operations allowed in the payload (supported by the client) depend on the +// major and minor version. See InstallOperation.Type below for details. 
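To make the struct pseudocode above concrete, here is a minimal, hedged sketch of reading the fixed-size preamble that precedes the serialized DeltaArchiveManifest. It is not part of this patch (avbroot's actual parser is `PayloadHeader::from_reader`); the names are illustrative and it only assumes the layout documented above, with integers stored big-endian:

```rust
use std::io::{self, Read};

// Illustrative struct; the field layout follows the pseudocode above.
struct PayloadPreamble {
    major_version: u64,
    manifest_size: u64,
    metadata_signature_size: u32, // only present when major_version >= 2
}

fn read_preamble(mut r: impl Read) -> io::Result<PayloadPreamble> {
    // Magic bytes: "CrAU".
    let mut magic = [0u8; 4];
    r.read_exact(&mut magic)?;
    if &magic != b"CrAU" {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "bad payload magic"));
    }

    let mut buf = [0u8; 8];
    r.read_exact(&mut buf)?;
    let major_version = u64::from_be_bytes(buf);

    r.read_exact(&mut buf)?;
    let manifest_size = u64::from_be_bytes(buf);

    let metadata_signature_size = if major_version >= 2 {
        let mut buf4 = [0u8; 4];
        r.read_exact(&mut buf4)?;
        u32::from_be_bytes(buf4)
    } else {
        0
    };

    Ok(PayloadPreamble { major_version, manifest_size, metadata_signature_size })
}
```

The `manifest_size` bytes that follow the preamble are the serialized DeltaArchiveManifest described by the messages below.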
+ +syntax = "proto2"; + +package chromeos_update_engine; + +// Data is packed into blocks on disk, always starting from the beginning +// of the block. If a file's data is too large for one block, it overflows +// into another block, which may or may not be the following block on the +// physical partition. An ordered list of extents is another +// representation of an ordered list of blocks. For example, a file stored +// in blocks 9, 10, 11, 2, 18, 12 (in that order) would be stored in +// extents { {9, 3}, {2, 1}, {18, 1}, {12, 1} } (in that order). +// In general, files are stored sequentially on disk, so it's more efficient +// to use extents to encode the block lists (this is effectively +// run-length encoding). +// A sentinel value (kuint64max) as the start block denotes a sparse-hole +// in a file whose block-length is specified by num_blocks. + +message Extent { + optional uint64 start_block = 1; + optional uint64 num_blocks = 2; +} + +// Signatures: Updates may be signed by the OS vendor. The client verifies +// an update's signature by hashing the entire download. The section of the +// download that contains the signature is at the end of the file, so when +// signing a file, only the part up to the signature part is signed. +// Then, the client looks inside the download's Signatures message for a +// Signature message that it knows how to handle. Generally, a client will +// only know how to handle one type of signature, but an update may contain +// many signatures to support many different types of client. Then client +// selects a Signature message and uses that, along with a known public key, +// to verify the download. The public key is expected to be part of the +// client. + +message Signatures { + message Signature { + optional uint32 version = 1 [deprecated = true]; + optional bytes data = 2; + + // The DER encoded signature size of EC keys is nondeterministic for + // different input of sha256 hash. However, we need the size of the + // serialized signatures protobuf string to be fixed before signing; + // because this size is part of the content to be signed. Therefore, we + // always pad the signature data to the maximum possible signature size of + // a given key. And the payload verifier will truncate the signature to + // its correct size based on the value of |unpadded_signature_size|. + optional fixed32 unpadded_signature_size = 3; + } + repeated Signature signatures = 1; +} + +message PartitionInfo { + optional uint64 size = 1; + optional bytes hash = 2; +} + +message InstallOperation { + enum Type { + REPLACE = 0; // Replace destination extents w/ attached data. + REPLACE_BZ = 1; // Replace destination extents w/ attached bzipped data. + MOVE = 2 [deprecated = true]; // Move source extents to target extents. + BSDIFF = 3 [deprecated = true]; // The data is a bsdiff binary diff. + + // On minor version 2 or newer, these operations are supported: + SOURCE_COPY = 4; // Copy from source to target partition + SOURCE_BSDIFF = 5; // Like BSDIFF, but read from source partition + + // On minor version 3 or newer and on major version 2 or newer, these + // operations are supported: + REPLACE_XZ = 8; // Replace destination extents w/ attached xz data. + + // On minor version 4 or newer, these operations are supported: + ZERO = 6; // Write zeros in the destination. + DISCARD = 7; // Discard the destination blocks, reading as undefined. + BROTLI_BSDIFF = 10; // Like SOURCE_BSDIFF, but compressed with brotli. 
+
+    // On minor version 5 or newer, these operations are supported:
+    PUFFDIFF = 9;  // The data is in puffdiff format.
+
+    // On minor version 8 or newer, these operations are supported:
+    ZUCCHINI = 11;
+
+    // On minor version 9 or newer, these operations are supported:
+    LZ4DIFF_BSDIFF = 12;
+    LZ4DIFF_PUFFDIFF = 13;
+  }
+  required Type type = 1;
+
+  // Only minor version 6 or newer supports 64-bit |data_offset| and
+  // |data_length|; older clients will read them as uint32.
+  // The offset into the delta file (after the protobuf)
+  // where the data (if any) is stored
+  optional uint64 data_offset = 2;
+  // The length of the data in the delta file
+  optional uint64 data_length = 3;
+
+  // Ordered list of extents that are read from (if any) and written to.
+  repeated Extent src_extents = 4;
+  // Byte length of src, equal to the number of blocks in src_extents *
+  // block_size. It is used for BSDIFF and SOURCE_BSDIFF, because we need to
+  // pass that external program the number of bytes to read from the blocks we
+  // pass it. This is not used in any other operation.
+  optional uint64 src_length = 5;
+
+  repeated Extent dst_extents = 6;
+  // Byte length of dst, equal to the number of blocks in dst_extents *
+  // block_size. Used for BSDIFF and SOURCE_BSDIFF, but not in any other
+  // operation.
+  optional uint64 dst_length = 7;
+
+  // Optional SHA-256 hash of the blob associated with this operation.
+  // This is used as a primary validation for http-based downloads and
+  // as a defense-in-depth validation for https-based downloads. If
+  // the operation doesn't refer to any blob, this field will have
+  // zero bytes.
+  optional bytes data_sha256_hash = 8;
+
+  // Indicates the SHA-256 hash of the source data referenced in src_extents
+  // at the time of applying the operation. If present, the update_engine
+  // daemon MUST read and verify the source data before applying the
+  // operation.
+  optional bytes src_sha256_hash = 9;
+}
+
+// Hints to VAB snapshot to skip writing some blocks if these blocks are
+// identical to the ones on the source image. The src & dst extents for each
+// CowMergeOperation should be contiguous, and they're a subset of an OTA
+// InstallOperation.
+// During merge time, we need to follow the pre-computed sequence to avoid
+// read-after-write, similar to the in-place update schema.
+message CowMergeOperation {
+  enum Type {
+    COW_COPY = 0;     // identical blocks
+    COW_XOR = 1;      // used when src/dst blocks are highly similar
+    COW_REPLACE = 2;  // raw replace operation
+  }
+  optional Type type = 1;
+
+  optional Extent src_extent = 2;
+  optional Extent dst_extent = 3;
+  // For COW_XOR, the source location might be unaligned, so this field is in
+  // the range [0, block_size), representing how far the src_extent should
+  // shift toward larger block numbers. If this field is non-zero, then
+  // src_extent will include 1 extra block at the end, as the merge op
+  // actually references the first |src_offset| bytes of that extra block.
+  // For example, if |dst_extent| is [10, 15] and |src_offset| is 500, then
+  // src_extent might look like [25, 31]. Note that |src_extent| contains one
+  // more block than |dst_extent|.
+  optional uint32 src_offset = 4;
+}
+
+// Describes the update to apply to a single partition.
+message PartitionUpdate {
+  // A platform-specific name to identify the partition set being updated. For
+  // example, in Chrome OS this could be "ROOT" or "KERNEL".
+  required string partition_name = 1;
+
+  // Whether this partition carries a filesystem with a post-install program
+  // that must be run to finalize the update process. See also
+  // |postinstall_path| and |filesystem_type|.
+  optional bool run_postinstall = 2;
+
+  // The path of the executable program to run during the post-install step,
+  // relative to the root of this filesystem. If not set, the default
+  // "postinst" will be used. This setting is only used when |run_postinstall|
+  // is set and true.
+  optional string postinstall_path = 3;
+
+  // The filesystem type as passed to the mount(2) syscall when mounting the
+  // new filesystem to run the post-install program. If not set, a fixed list
+  // of filesystems will be attempted. This setting is only used if
+  // |run_postinstall| is set and true.
+  optional string filesystem_type = 4;
+
+  // If present, a list of signatures of the new_partition_info.hash signed
+  // with different keys. If the update_engine daemon requires vendor-signed
+  // images and has its public key installed, one of the signatures should be
+  // valid for /postinstall to run.
+  repeated Signatures.Signature new_partition_signature = 5;
+
+  optional PartitionInfo old_partition_info = 6;
+  optional PartitionInfo new_partition_info = 7;
+
+  // The list of operations to be performed to apply this PartitionUpdate.
+  // The associated operation blobs (in operations[i].data_offset,
+  // data_length) should be stored contiguously and in the same order.
+  repeated InstallOperation operations = 8;
+
+  // Whether a failure in the postinstall step for this partition should be
+  // ignored.
+  optional bool postinstall_optional = 9;
+
+  // On minor version 6 or newer, these fields are supported:
+
+  // The extent for data covered by the verity hash tree.
+  optional Extent hash_tree_data_extent = 10;
+
+  // The extent to store the verity hash tree.
+  optional Extent hash_tree_extent = 11;
+
+  // The hash algorithm used in the verity hash tree.
+  optional string hash_tree_algorithm = 12;
+
+  // The salt used for the verity hash tree.
+  optional bytes hash_tree_salt = 13;
+
+  // The extent for data covered by FEC.
+  optional Extent fec_data_extent = 14;
+
+  // The extent to store FEC.
+  optional Extent fec_extent = 15;
+
+  // The number of FEC roots.
+  optional uint32 fec_roots = 16 [default = 2];
+
+  // Per-partition version used for downgrade detection, added
+  // as an effort to support partial updates. For most partitions,
+  // this is the build timestamp.
+  optional string version = 17;
+
+  // A sorted list of CowMergeOperation. When writing the cow, we can choose
+  // to skip writing the raw bytes for these extents. During snapshot merge,
+  // the bytes will be read from the source partitions instead.
+  repeated CowMergeOperation merge_operations = 18;
+
+  // Estimated size for the COW image. This is used by libsnapshot
+  // as a hint. If set to 0, libsnapshot should use alternative
+  // methods for estimating the size.
+  optional uint64 estimate_cow_size = 19;
+}
+
+message DynamicPartitionGroup {
+  // Name of the group.
+  required string name = 1;
+
+  // Maximum size of the group. The sum of the sizes of all partitions in the
+  // group must not exceed the maximum size of the group.
+  optional uint64 size = 2;
+
+  // A list of partitions that belong to the group.
+  repeated string partition_names = 3;
+}
+
+message VABCFeatureSet {
+  optional bool threaded = 1;
+  optional bool batch_writes = 2;
+}
+
+// Metadata related to all dynamic partitions.
+message DynamicPartitionMetadata {
+  // All updatable groups present in |partitions| of this DeltaArchiveManifest.
+  // - If an updatable group is on the device but not in the manifest, it is
+  //   not updated. Hence, the group will not be resized, and partitions
+  //   cannot be added to or removed from the group.
+  // - If an updatable group is in the manifest but not on the device, the
+  //   group is added to the device.
+  repeated DynamicPartitionGroup groups = 1;
+
+  // Whether dynamic partitions have snapshots during the update. If this is
+  // set to true, the update_engine daemon creates snapshots for all dynamic
+  // partitions if possible. If this is unset, the update_engine daemon MUST
+  // NOT create snapshots for dynamic partitions.
+  optional bool snapshot_enabled = 2;
+
+  // If this is set to false, update_engine should not use VABC regardless.
+  // If this is set to true, update_engine may choose to use VABC if the
+  // device supports it, but this is not guaranteed.
+  // VABC stands for Virtual AB Compression.
+  optional bool vabc_enabled = 3;
+
+  // The compression algorithm used by VABC. Available ones are "gz" and
+  // "brotli". See system/core/fs_mgr/libsnapshot/cow_writer.cpp for available
+  // options, as this parameter is ultimately forwarded to libsnapshot's
+  // CowWriter.
+  optional string vabc_compression_param = 4;
+
+  // COW version used by VABC. This represents the major version in the COW
+  // header.
+  optional uint32 cow_version = 5;
+
+  // A collection of knobs to tune Virtual AB Compression.
+  optional VABCFeatureSet vabc_feature_set = 6;
+}
+
+// Definition has been duplicated from
+// $ANDROID_BUILD_TOP/build/tools/releasetools/ota_metadata.proto. Keep in
+// sync.
+message ApexInfo {
+  optional string package_name = 1;
+  optional int64 version = 2;
+  optional bool is_compressed = 3;
+  optional int64 decompressed_size = 4;
+}
+
+// Definition has been duplicated from
+// $ANDROID_BUILD_TOP/build/tools/releasetools/ota_metadata.proto. Keep in
+// sync.
+message ApexMetadata {
+  repeated ApexInfo apex_info = 1;
+}
+
+message DeltaArchiveManifest {
+  // Only present in major version = 1. List of install operations for the
+  // kernel and rootfs partitions. For major version = 2 see the |partitions|
+  // field.
+  reserved 1, 2;
+
+  // (At time of writing) usually 4096
+  optional uint32 block_size = 3 [default = 4096];
+
+  // If signatures are present, the offset into the blobs, generally
+  // tacked onto the end of the file, and the length. We use an offset
+  // rather than a bool to allow for more flexibility in future file formats.
+  // If either is absent, it means signatures aren't supported in this
+  // file.
+  optional uint64 signatures_offset = 4;
+  optional uint64 signatures_size = 5;
+
+  // Fields deprecated in major version 2.
+  reserved 6, 7, 8, 9, 10, 11;
+
+  // The minor version, also referred to as the "delta version", of the
+  // payload. Minor version 0 is a full payload; everything else is a delta
+  // payload.
+  optional uint32 minor_version = 12 [default = 0];
+
+  // Only present in major version >= 2. List of partitions that will be
+  // updated, in the order they will be updated. This field replaces the
+  // |install_operations|, |kernel_install_operations| and the
+  // |{old,new}_{kernel,rootfs}_info| fields used in major version = 1. This
+  // array can have more than two partitions if needed, and they are
+  // identified by the partition name.
+  repeated PartitionUpdate partitions = 13;
+
+  // The maximum timestamp of the OS allowed to apply this payload.
+  // Can be used to prevent downgrading the OS.
+  optional int64 max_timestamp = 14;
+
+  // Metadata related to all dynamic partitions.
+  optional DynamicPartitionMetadata dynamic_partition_metadata = 15;
+
+  // If the payload only updates a subset of partitions on the device.
+  optional bool partial_update = 16;
+
+  // Information on compressed APEXes to figure out how much space is
+  // required for their decompression.
+  repeated ApexInfo apex_info = 17;
+
+  // Security patch level of the device, usually in the format of
+  // yyyy-mm-dd
+  optional string security_patch_level = 18;
+}
diff --git a/requirements.txt b/requirements.txt
deleted file mode 100644
index 55e64da..0000000
--- a/requirements.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-lz4
-# The pregenerated AOSP Python source is for version 3
-protobuf<4
\ No newline at end of file
diff --git a/src/boot.rs b/src/boot.rs
new file mode 100644
index 0000000..511c865
--- /dev/null
+++ b/src/boot.rs
@@ -0,0 +1,755 @@
+/*
+ * SPDX-FileCopyrightText: 2022-2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+use std::{
+    cmp::Ordering,
+    collections::HashMap,
+    fs::File,
+    io::{self, BufRead, BufReader, Cursor, Read, Seek, Write},
+    num::ParseIntError,
+    ops::Range,
+    path::{Path, PathBuf},
+    sync::{atomic::AtomicBool, Arc},
+};
+
+use regex::bytes::Regex;
+use ring::digest::Context;
+use rsa::RsaPrivateKey;
+use thiserror::Error;
+use x509_cert::Certificate;
+use xz2::{
+    stream::{Check, Stream},
+    write::XzEncoder,
+};
+use zip::{result::ZipError, write::FileOptions, CompressionMethod, ZipArchive, ZipWriter};
+
+use crate::{
+    crypto,
+    format::{
+        avb::{self, AlgorithmType, Descriptor},
+        bootimage::{self, BootImage, BootImageExt, RamdiskMeta},
+        compression::{self, CompressedFormat, CompressedReader, CompressedWriter},
+        cpio::{self, CpioEntryNew},
+    },
+    stream::{self, FromReader, HashingWriter, SectionReader, ToWriter},
+    util::EscapedString,
+};
+
+#[derive(Debug, Error)]
+pub enum Error {
+    #[error("Boot image has no vbmeta footer")]
+    NoFooter,
+    #[error("No hash descriptor found in vbmeta footer")]
+    NoHashDescriptor,
+    #[error("Found multiple hash descriptors in vbmeta footer")]
+    MultipleHashDescriptors,
+    #[error("Validation error: {0}")]
+    Validation(String),
+    #[error("Failed to parse Magisk version from line: {0:?}")]
+    ParseMagiskVersion(String, #[source] ParseIntError),
+    #[error("Failed to determine Magisk version from: {0:?}")]
+    FindMagiskVersion(PathBuf),
+    #[error("AVB error")]
+    Avb(#[from] avb::Error),
+    #[error("Boot image error")]
+    BootImage(#[from] bootimage::Error),
+    #[error("Compression error")]
+    Compression(#[from] compression::Error),
+    #[error("Crypto error")]
+    Crypto(#[from] crypto::Error),
+    #[error("CPIO error")]
+    Cpio(#[from] cpio::Error),
+    #[error("XZ stream error")]
+    XzStream(#[from] xz2::stream::Error),
+    #[error("Zip error")]
+    Zip(#[from] ZipError),
+    #[error("I/O error")]
+    IoError(#[from] io::Error),
+}
+
+type Result<T> = std::result::Result<T, Error>;
+
+fn load_ramdisk(data: &[u8]) -> Result<(Vec<CpioEntryNew>, CompressedFormat)> {
+    let raw_reader = Cursor::new(data);
+    let mut reader = CompressedReader::new(raw_reader, false)?;
+    let entries = cpio::load(&mut reader, false)?;
+
+    Ok((entries, reader.format()))
+}
+
+fn save_ramdisk(entries: &[CpioEntryNew], format: CompressedFormat) -> Result<Vec<u8>> {
+    let raw_writer = Cursor::new(vec![]);
+    let mut writer = CompressedWriter::new(raw_writer, format)?;
+    cpio::save(&mut writer, entries, false)?;
+
+    let raw_writer = writer.finish()?;
+
+    Ok(raw_writer.into_inner())
+}
+
+pub trait BootImagePatcher {
+    fn patch(&self, boot_image: &mut BootImage, cancel_signal: &Arc<AtomicBool>) -> Result<()>;
+}
+
+/// Root a boot image with Magisk.
+pub struct MagiskRootPatcher {
+    apk_path: PathBuf,
+    version: u32,
+    preinit_device: Option<String>,
+    random_seed: u64,
+}
+
+impl MagiskRootPatcher {
+    // - Versions <25102 are not supported because they're missing commit
+    //   1f8c063dc64806c4f7320ed66c785ff7bc116383, which would leave devices
+    //   that use Android 13 GKIs unable to boot into recovery
+    // - Versions 25207 through 25210 are not supported because they used the
+    //   RULESDEVICE config option, which stored the writable block device as
+    //   an rdev major/minor pair, which was not consistent across reboots and
+    //   was replaced by PREINITDEVICE
+    const VERS_SUPPORTED: &[Range<u32>] = &[25102..25207, 25211..26200];
+    const VER_PREINIT_DEVICE: Range<u32> =
+        25211..Self::VERS_SUPPORTED[Self::VERS_SUPPORTED.len() - 1].end;
+    const VER_RANDOM_SEED: Range<u32> = 25211..26103;
+
+    pub fn new(
+        path: &Path,
+        preinit_device: Option<&str>,
+        random_seed: Option<u64>,
+        ignore_compatibility: bool,
+        warning_fn: impl Fn(&str) + Send + 'static,
+    ) -> Result<Self> {
+        let version = Self::get_version(path)?;
+
+        if !Self::VERS_SUPPORTED.iter().any(|v| v.contains(&version)) {
+            let msg = format!(
+                "Unsupported Magisk version {} (supported: {:?})",
+                version,
+                Self::VERS_SUPPORTED,
+            );
+
+            if ignore_compatibility {
+                warning_fn(&msg);
+            } else {
+                return Err(Error::Validation(msg));
+            }
+        }
+
+        if preinit_device.is_none() && Self::VER_PREINIT_DEVICE.contains(&version) {
+            let msg = format!(
+                "Magisk version {} ({:?}) requires a preinit device to be specified",
+                version,
+                Self::VER_PREINIT_DEVICE,
+            );
+
+            if ignore_compatibility {
+                warning_fn(&msg);
+            } else {
+                return Err(Error::Validation(msg));
+            }
+        }
+
+        Ok(Self {
+            apk_path: path.to_owned(),
+            version,
+            preinit_device: preinit_device.map(|d| d.to_owned()),
+            // Use a hardcoded random seed by default to ensure byte-for-byte
+            // reproducibility.
+            random_seed: random_seed.unwrap_or(0xfedcba9876543210),
+        })
+    }
+
+    fn get_version(path: &Path) -> Result<u32> {
+        let reader = File::open(path)?;
+        let reader = BufReader::new(reader);
+        let mut zip = ZipArchive::new(reader)?;
+        let entry = zip.by_name("assets/util_functions.sh")?;
+        let mut entry = BufReader::new(entry);
+        let mut line = String::new();
+
+        loop {
+            line.clear();
+            let n = entry.read_line(&mut line)?;
+            if n == 0 {
+                return Err(Error::FindMagiskVersion(path.to_owned()));
+            }
+
+            if let Some(suffix) = line.trim_end().strip_prefix("MAGISK_VER_CODE=") {
+                let version = suffix
+                    .parse()
+                    .map_err(|e| Error::ParseMagiskVersion(suffix.to_owned(), e))?;
+                return Ok(version);
+            }
+        }
+    }
+
+    /// Compare old and new ramdisk entry lists, creating the Magisk `.backup/`
+    /// directory structure. `.backup/.rmlist` will contain a sorted list of
+    /// NULL-terminated strings, listing which files were newly added or
+    /// changed. The old entries for changed files will be added to the new
+    /// entries as `.backup/<path>`.
+    ///
+    /// Both lists and entries within the lists may be mutated.
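+    ///
+    /// For example, if the original `init` is replaced and
+    /// `overlay.d/sbin/magisk64.xz` is newly added, the result contains
+    /// `.backup/init` (holding the original contents) and a
+    /// `.backup/.rmlist` entry listing `overlay.d/sbin/magisk64.xz`.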
+    fn apply_magisk_backup(old_entries: &mut [CpioEntryNew], new_entries: &mut Vec<CpioEntryNew>) {
+        cpio::sort(old_entries);
+        cpio::sort(new_entries);
+
+        let mut rm_list = vec![];
+        let mut to_back_up = vec![];
+
+        let mut old_iter = old_entries.iter().peekable();
+        let mut new_iter = new_entries.iter().peekable();
+
+        loop {
+            match (old_iter.peek(), new_iter.peek()) {
+                (Some(&old), Some(&new)) => match old.name.cmp(&new.name) {
+                    Ordering::Less => {
+                        to_back_up.push(old);
+                        old_iter.next();
+                    }
+                    Ordering::Equal => {
+                        if old.content != new.content {
+                            to_back_up.push(old);
+                        }
+                        old_iter.next();
+                        new_iter.next();
+                    }
+                    Ordering::Greater => {
+                        rm_list.extend(&new.name);
+                        rm_list.push(b'\0');
+                        new_iter.next();
+                    }
+                },
+                (Some(old), None) => {
+                    to_back_up.push(old);
+                    old_iter.next();
+                }
+                (None, Some(new)) => {
+                    rm_list.extend(&new.name);
+                    rm_list.push(b'\0');
+                    new_iter.next();
+                }
+                (None, None) => break,
+            }
+        }
+
+        // Intentionally using 000 permissions to match Magisk.
+        new_entries.push(CpioEntryNew::new_directory(b".backup"));
+
+        for old_entry in to_back_up {
+            let mut new_entry = old_entry.clone();
+            new_entry.name = b".backup/".to_vec();
+            new_entry.name.extend(&old_entry.name);
+            new_entries.push(new_entry);
+        }
+
+        {
+            // Intentionally using 000 permissions to match Magisk.
+            let mut entry = CpioEntryNew::new_file(b".backup/.rmlist");
+            entry.content = rm_list;
+            new_entries.push(entry);
+        }
+    }
+}
+
+impl BootImagePatcher for MagiskRootPatcher {
+    fn patch(&self, boot_image: &mut BootImage, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+        let zip_reader = File::open(&self.apk_path)?;
+        let mut zip = ZipArchive::new(BufReader::new(zip_reader))?;
+
+        // Load the first ramdisk. If it doesn't exist, we have to generate one
+        // from scratch.
+        let ramdisk = match boot_image {
+            BootImage::V0Through2(b) => Some(&b.ramdisk),
+            BootImage::V3Through4(b) => Some(&b.ramdisk),
+            BootImage::VendorV3Through4(b) => b.ramdisks.first(),
+        };
+        let (mut entries, ramdisk_format) = match ramdisk {
+            Some(r) if !r.is_empty() => load_ramdisk(r)?,
+            _ => (vec![], CompressedFormat::Lz4Legacy),
+        };
+
+        let mut old_entries = entries.clone();
+
+        // Create the Magisk directory structure.
+        for (path, perms) in [
+            (b"overlay.d".as_slice(), 0o750),
+            (b"overlay.d/sbin".as_slice(), 0o750),
+        ] {
+            let mut entry = CpioEntryNew::new_directory(path);
+            entry.mode |= perms;
+            entries.push(entry);
+        }
+
+        // Delete the original init.
+        entries.retain(|e| e.name != b"init");
+
+        // Add magiskinit.
+        {
+            let mut zip_entry = zip.by_name("lib/arm64-v8a/libmagiskinit.so")?;
+            let mut data = vec![];
+            zip_entry.read_to_end(&mut data)?;
+
+            let mut entry = CpioEntryNew::new_file(b"init");
+            entry.mode |= 0o750;
+            entry.content = data;
+            entries.push(entry);
+        }
+
+        // Add xz-compressed magisk32 and magisk64.
+        let mut xz_files = HashMap::<&str, &[u8]>::new();
+        xz_files.insert(
+            "lib/armeabi-v7a/libmagisk32.so",
+            b"overlay.d/sbin/magisk32.xz",
+        );
+        xz_files.insert(
+            "lib/arm64-v8a/libmagisk64.so",
+            b"overlay.d/sbin/magisk64.xz",
+        );
+
+        // Add the stub apk, which only exists after Magisk commit
+        // ad0e6511e11ebec65aa9b5b916e1397342850319.
+        if zip.file_names().any(|n| n == "assets/stub.apk") {
+            xz_files.insert("assets/stub.apk", b"overlay.d/sbin/stub.xz");
+        }
+
+        for (source, target) in xz_files {
+            let reader = zip.by_name(source)?;
+            let raw_writer = Cursor::new(vec![]);
+            let stream = Stream::new_easy_encoder(9, Check::Crc32)?;
+            let mut writer = XzEncoder::new_stream(raw_writer, stream);
+
+            stream::copy(reader, &mut writer, cancel_signal)?;
+
+            let raw_writer = writer.finish()?;
+            let mut entry = CpioEntryNew::new_file(target);
+            entry.mode |= 0o644;
+            entry.content = raw_writer.into_inner();
+            entries.push(entry);
+        }
+
+        // Create the Magisk .backup directory structure.
+        Self::apply_magisk_backup(&mut old_entries, &mut entries);
+
+        // Create the Magisk config.
+        let mut magisk_config = String::new();
+        magisk_config.push_str("KEEPVERITY=true\n");
+        magisk_config.push_str("KEEPFORCEENCRYPT=true\n");
+        magisk_config.push_str("PATCHVBMETAFLAG=false\n");
+        magisk_config.push_str("RECOVERYMODE=false\n");
+
+        if Self::VER_PREINIT_DEVICE.contains(&self.version) {
+            magisk_config.push_str(&format!(
+                "PREINITDEVICE={}\n",
+                self.preinit_device.as_ref().unwrap(),
+            ));
+        }
+
+        // Magisk normally saves the original SHA1 digest in its config file.
+        // It uses this to find the original image in
+        // /data/magisk_backup_<sha1> to restore the stock boot image for
+        // uninstallation purposes. This is a feature we cannot ever use, so
+        // just use a dummy value.
+        magisk_config.push_str("SHA1=0000000000000000000000000000000000000000\n");
+
+        if Self::VER_RANDOM_SEED.contains(&self.version) {
+            magisk_config.push_str(&format!("RANDOMSEED={:#x}\n", self.random_seed));
+        }
+
+        {
+            // Intentionally using 000 permissions to match Magisk.
+            let mut entry = CpioEntryNew::new_file(b".backup/.magisk");
+            entry.content = magisk_config.into_bytes();
+            entries.push(entry);
+        }
+
+        // Repack the ramdisk.
+        cpio::sort(&mut entries);
+        cpio::reassign_inodes(&mut entries);
+        let new_ramdisk = save_ramdisk(&entries, ramdisk_format)?;
+
+        match boot_image {
+            BootImage::V0Through2(b) => b.ramdisk = new_ramdisk,
+            BootImage::V3Through4(b) => b.ramdisk = new_ramdisk,
+            BootImage::VendorV3Through4(b) => {
+                if b.ramdisks.is_empty() {
+                    b.ramdisks.push(new_ramdisk);
+
+                    if let Some(v4) = &mut b.v4_extra {
+                        v4.ramdisk_metas.push(RamdiskMeta {
+                            ramdisk_type: bootimage::VENDOR_RAMDISK_TYPE_NONE,
+                            ramdisk_name: String::new(),
+                            board_id: Default::default(),
+                        });
+                    }
+                } else {
+                    b.ramdisks[0] = new_ramdisk;
+                }
+            }
+        }
+
+        Ok(())
+    }
+}
+
+/// Replace the OTA certificates in the vendor_boot/recovery image with the
+/// custom OTA signing certificate.
+pub struct OtaCertPatcher {
+    cert: Certificate,
+}
+
+impl OtaCertPatcher {
+    const OTACERTS_PATH: &[u8] = b"system/etc/security/otacerts.zip";
+
+    pub fn new(cert: Certificate) -> Self {
+        Self { cert }
+    }
+
+    fn patch_ramdisk(&self, data: &mut Vec<u8>) -> Result<bool> {
+        let (mut entries, ramdisk_format) = load_ramdisk(data)?;
+        let Some(entry) = entries.iter_mut().find(|e| e.name == Self::OTACERTS_PATH) else {
+            return Ok(false);
+        };
+
+        // Create a new otacerts archive. The old certs are ignored since
+        // flashing a stock OTA will render the device unbootable.
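+        // (Recovery loads every certificate entry from otacerts.zip, so a
+        // single entry containing the custom certificate is sufficient.)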
+        {
+            let raw_writer = Cursor::new(vec![]);
+            let mut writer = ZipWriter::new(raw_writer);
+            let options = FileOptions::default().compression_method(CompressionMethod::Stored);
+            writer.start_file("ota.x509.pem", options)?;
+
+            crypto::write_pem_cert(&mut writer, &self.cert)?;
+
+            let raw_writer = writer.finish()?;
+            entry.content = raw_writer.into_inner();
+        }
+
+        // Repack the ramdisk.
+        *data = save_ramdisk(&entries, ramdisk_format)?;
+
+        Ok(true)
+    }
+}
+
+impl BootImagePatcher for OtaCertPatcher {
+    fn patch(&self, boot_image: &mut BootImage, _cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+        let patched_any = match boot_image {
+            BootImage::V0Through2(b) => self.patch_ramdisk(&mut b.ramdisk)?,
+            BootImage::V3Through4(b) => self.patch_ramdisk(&mut b.ramdisk)?,
+            BootImage::VendorV3Through4(b) => {
+                let mut patched = false;
+
+                for ramdisk in &mut b.ramdisks {
+                    if self.patch_ramdisk(ramdisk)? {
+                        patched = true;
+                        break;
+                    }
+                }
+
+                patched
+            }
+        };
+
+        // Fail hard if otacerts does not exist. We don't want to lock the user
+        // out of future updates if the OTA certificate mechanism has changed.
+        if !patched_any {
+            return Err(Error::Validation(format!(
+                "No ramdisk contains {}",
+                EscapedString::new(Self::OTACERTS_PATH),
+            )));
+        }
+
+        Ok(())
+    }
+}
+
+/// Replace the boot image with a prepatched boot image if it is compatible.
+///
+/// An image is compatible if all the non-size-related header fields are
+/// identical and the set of included sections (eg. kernel, dtb) are the same.
+/// The only exception is the number of ramdisk sections, which is allowed to
+/// be higher than in the original image.
+pub struct PrepatchedImagePatcher {
+    prepatched: PathBuf,
+    fatal_level: u8,
+    warning_fn: Box<dyn Fn(&str) + Send>,
+}
+
+impl PrepatchedImagePatcher {
+    const MIN_LEVEL: u8 = 0;
+    const MAX_LEVEL: u8 = 2;
+
+    // We compile without Unicode support so we have to use [0-9] instead of \d.
+    const VERSION_REGEX: &str = r"Linux version ([0-9]+\.[0-9]+).[0-9]+-(android[0-9]+)-([0-9]+)-";
+
+    pub fn new(
+        prepatched: &Path,
+        fatal_level: u8,
+        warning_fn: impl Fn(&str) + Send + 'static,
+    ) -> Self {
+        Self {
+            prepatched: prepatched.to_owned(),
+            fatal_level,
+            warning_fn: Box::new(warning_fn),
+        }
+    }
+
+    fn get_kmi_version(kernel: &[u8]) -> Result<Option<String>> {
+        let mut decompressed = vec![];
+        {
+            let raw_reader = Cursor::new(kernel);
+            let mut reader = CompressedReader::new(raw_reader, true)?;
+            reader.read_to_end(&mut decompressed)?;
+        }
+
+        let regex = Regex::new(Self::VERSION_REGEX).unwrap();
+        let Some(captures) = regex.captures(&decompressed) else {
+            return Ok(None);
+        };
+
+        let kmi_version = captures
+            .iter()
+            // Capture #0 is the entire match.
+            .skip(1)
+            .flatten()
+            .map(|c| c.as_bytes())
+            // Our regex only matches ASCII bytes.
+            .map(|c| std::str::from_utf8(c).unwrap())
+            .collect::<Vec<_>>()
+            .join("-");
+
+        Ok(Some(kmi_version))
+    }
+}
+
+impl BootImagePatcher for PrepatchedImagePatcher {
+    fn patch(&self, boot_image: &mut BootImage, _cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+        let prepatched_image = {
+            let raw_reader = File::open(&self.prepatched)?;
+            BootImage::from_reader(BufReader::new(raw_reader))?
+        };
+
+        // Level 0: Warnings that don't affect booting
+        // Level 1: Warnings that may affect booting
+        // Level 2: Warnings that are very likely to affect booting
+        let mut issues = [vec![], vec![], vec![]];
+
+        macro_rules! check2 {
+            ($level:literal, $old:expr, $new:expr $(,)?) => {
+                let old_val = $old;
+                let new_val = $new;
+
+                if old_val != new_val {
+                    issues[$level].push(format!(
+                        "Field differs: {} ({:?}) -> {} ({:?})",
+                        stringify!($old),
+                        old_val,
+                        stringify!($new),
+                        new_val,
+                    ));
+                }
+            };
+        }
+
+        let old_kernel;
+        let new_kernel;
+
+        match (&boot_image, &prepatched_image) {
+            (BootImage::V0Through2(old), BootImage::V0Through2(new)) => {
+                check2!(2, old.header_version(), new.header_version());
+                check2!(2, old.kernel_addr, new.kernel_addr);
+                check2!(2, old.ramdisk_addr, new.ramdisk_addr);
+                check2!(2, old.second_addr, new.second_addr);
+                check2!(2, old.tags_addr, new.tags_addr);
+                check2!(2, old.page_size, new.page_size);
+                check2!(0, old.os_version, new.os_version);
+                check2!(0, &old.name, &new.name);
+                check2!(1, &old.cmdline, &new.cmdline);
+                check2!(0, &old.id, &new.id);
+                check2!(1, &old.extra_cmdline, &new.extra_cmdline);
+                check2!(2, old.kernel.is_empty(), new.kernel.is_empty());
+                check2!(2, old.second.is_empty(), new.second.is_empty());
+
+                if let (Some(old_v1), Some(new_v1)) = (&old.v1_extra, &new.v1_extra) {
+                    check2!(2, old_v1.recovery_dtbo_offset, new_v1.recovery_dtbo_offset);
+                    check2!(
+                        2,
+                        old_v1.recovery_dtbo.is_empty(),
+                        new_v1.recovery_dtbo.is_empty(),
+                    );
+                }
+
+                if let (Some(old_v2), Some(new_v2)) = (&old.v2_extra, &new.v2_extra) {
+                    check2!(2, old_v2.dtb_addr, new_v2.dtb_addr);
+                    check2!(2, old_v2.dtb.is_empty(), new_v2.dtb.is_empty());
+                }
+
+                // We allow adding a ramdisk.
+                if !old.ramdisk.is_empty() || new.ramdisk.is_empty() {
+                    check2!(2, old.ramdisk.is_empty(), new.ramdisk.is_empty());
+                }
+
+                old_kernel = if old.kernel.is_empty() {
+                    None
+                } else {
+                    Some(&old.kernel)
+                };
+                new_kernel = if new.kernel.is_empty() {
+                    None
+                } else {
+                    Some(&new.kernel)
+                };
+            }
+            (BootImage::V3Through4(old), BootImage::V3Through4(new)) => {
+                check2!(2, old.header_version(), new.header_version());
+                check2!(0, old.os_version, new.os_version);
+                check2!(0, old.reserved, new.reserved);
+                check2!(1, &old.cmdline, &new.cmdline);
+                check2!(2, old.kernel.is_empty(), new.kernel.is_empty());
+
+                // We allow adding a ramdisk.
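+                // (A prepatched image, such as one patched by the Magisk app,
+                // may gain a ramdisk that the original GKI image lacked.)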
+                if !old.ramdisk.is_empty() || new.ramdisk.is_empty() {
+                    check2!(2, old.ramdisk.is_empty(), new.ramdisk.is_empty());
+                }
+
+                old_kernel = if old.kernel.is_empty() {
+                    None
+                } else {
+                    Some(&old.kernel)
+                };
+                new_kernel = if new.kernel.is_empty() {
+                    None
+                } else {
+                    Some(&new.kernel)
+                };
+            }
+            (BootImage::VendorV3Through4(old), BootImage::VendorV3Through4(new)) => {
+                check2!(2, old.page_size, new.page_size);
+                check2!(2, old.kernel_addr, new.kernel_addr);
+                check2!(2, old.ramdisk_addr, new.ramdisk_addr);
+                check2!(1, &old.cmdline, &new.cmdline);
+                check2!(2, old.tags_addr, new.tags_addr);
+                check2!(0, &old.name, &new.name);
+                check2!(2, old.dtb.is_empty(), new.dtb.is_empty());
+                check2!(2, old.dtb_addr, new.dtb_addr);
+                check2!(2, old.ramdisks.len(), new.ramdisks.len());
+
+                if let (Some(old_v4), Some(new_v4)) = (&old.v4_extra, &new.v4_extra) {
+                    check2!(2, &old_v4.ramdisk_metas, &new_v4.ramdisk_metas);
+                    check2!(2, &old_v4.bootconfig, &new_v4.bootconfig);
+                }
+
+                old_kernel = None;
+                new_kernel = None;
+            }
+            _ => {
+                return Err(Error::Validation(
+                    "Boot image and prepatched image are different boot image types".to_owned(),
+                ));
+            }
+        }
+
+        if let (Some(old), Some(new)) = (old_kernel, new_kernel) {
+            let old_kmi_version = Self::get_kmi_version(old)?;
+            let new_kmi_version = Self::get_kmi_version(new)?;
+
+            check2!(2, old_kmi_version, new_kmi_version);
+        }
+
+        let mut warnings = vec![];
+        let mut errors = vec![];
+
+        for level in Self::MIN_LEVEL..self.fatal_level {
+            warnings.extend(&issues[level as usize]);
+        }
+        for level in self.fatal_level..=Self::MAX_LEVEL {
+            errors.extend(&issues[level as usize]);
+        }
+
+        if !warnings.is_empty() {
+            let mut msg =
+                "The prepatched boot image may not be compatible with the original:".to_owned();
+            for warning in warnings {
+                msg.push_str("\n- ");
+                msg.push_str(warning);
+            }
+
+            (self.warning_fn)(&msg);
+        }
+
+        if !errors.is_empty() {
+            let mut msg =
+                "The prepatched boot image is not compatible with the original:".to_owned();
+            for error in errors {
+                msg.push_str("\n- ");
+                msg.push_str(error);
+            }
+
+            return Err(Error::Validation(msg));
+        }
+
+        *boot_image = prepatched_image;
+
+        Ok(())
+    }
+}
+
+/// Run each patcher against the boot image with the vbmeta footer stripped off
+/// and then re-sign the image.
+pub fn patch_boot(
+    mut reader: impl Read + Seek,
+    writer: impl Write + Seek,
+    key: &RsaPrivateKey,
+    patchers: &[Box<dyn BootImagePatcher + Sync>],
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<()> {
+    let (mut header, footer, image_size) = avb::load_image(&mut reader)?;
+    let Some(footer) = footer else {
+        return Err(Error::NoFooter);
+    };
+
+    let section_reader = SectionReader::new(reader, 0, footer.original_image_size)?;
+    let mut boot_image = BootImage::from_reader(section_reader)?;
+
+    for patcher in patchers {
+        patcher.patch(&mut boot_image, cancel_signal)?;
+    }
+
+    let mut descriptor_iter = header.descriptors.iter_mut().filter_map(|d| {
+        if let Descriptor::Hash(h) = d {
+            Some(h)
+        } else {
+            None
+        }
+    });
+
+    let Some(descriptor) = descriptor_iter.next() else {
+        return Err(Error::NoHashDescriptor);
+    };
+
+    // Write the new boot image. We reuse the existing salt for the digest.
+    let mut context = Context::new(&ring::digest::SHA256);
+    context.update(&descriptor.salt);
+    let mut hashing_writer = HashingWriter::new(writer, context);
+    boot_image.to_writer(&mut hashing_writer)?;
+    let (mut writer, context) = hashing_writer.finish();
+
+    header.algorithm_type = AlgorithmType::Sha256Rsa4096;
+
+    descriptor.image_size = writer.stream_position()?;
+    descriptor.hash_algorithm = "sha256".to_owned();
+    descriptor.root_digest = context.finish().as_ref().to_vec();
+
+    if descriptor_iter.next().is_some() {
+        return Err(Error::MultipleHashDescriptors);
+    }
+
+    if !header.public_key.is_empty() {
+        header.sign(key)?;
+    }
+
+    avb::write_appended_image(writer, &header, &footer, image_size)?;
+
+    Ok(())
+}
diff --git a/src/cli/args.rs b/src/cli/args.rs
new file mode 100644
index 0000000..116c7e0
--- /dev/null
+++ b/src/cli/args.rs
@@ -0,0 +1,51 @@
+/*
+ * SPDX-FileCopyrightText: 2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+use std::sync::{atomic::AtomicBool, Arc};
+
+use anyhow::Result;
+use clap::{Parser, Subcommand};
+
+use crate::cli::{avb, boot, completion, key, ota, ramdisk};
+
+#[allow(clippy::large_enum_variant)]
+#[derive(Debug, Subcommand)]
+pub enum Command {
+    Avb(avb::AvbCli),
+    Boot(boot::BootCli),
+    Completion(completion::CompletionCli),
+    Key(key::KeyCli),
+    Ota(ota::OtaCli),
+    Ramdisk(ramdisk::RamdiskCli),
+    /// (Deprecated: Use `avbroot ota patch` instead.)
+    Patch(ota::PatchCli),
+    /// (Deprecated: Use `avbroot ota extract` instead.)
+    Extract(ota::ExtractCli),
+    /// (Deprecated: Use `avbroot boot magisk-info` instead.)
+    MagiskInfo(boot::MagiskInfoCli),
+}
+
+#[derive(Debug, Parser)]
+pub struct Cli {
+    #[command(subcommand)]
+    pub command: Command,
+}
+
+pub fn main(cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    let cli = Cli::parse();
+
+    match cli.command {
+        Command::Avb(c) => avb::avb_main(&c, cancel_signal),
+        Command::Boot(c) => boot::boot_main(&c),
+        Command::Completion(c) => completion::completion_main(&c),
+        Command::Key(c) => key::key_main(&c),
+        Command::Ota(c) => ota::ota_main(&c, cancel_signal),
+        Command::Ramdisk(c) => ramdisk::ramdisk_main(&c),
+        // Deprecated aliases.
+        Command::Patch(c) => ota::patch_subcommand(&c, cancel_signal),
+        Command::Extract(c) => ota::extract_subcommand(&c, cancel_signal),
+        Command::MagiskInfo(c) => boot::magisk_info_subcommand(&c),
+    }
+}
diff --git a/src/cli/avb.rs b/src/cli/avb.rs
new file mode 100644
index 0000000..f0e0dfb
--- /dev/null
+++ b/src/cli/avb.rs
@@ -0,0 +1,237 @@
+/*
+ * SPDX-FileCopyrightText: 2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+use std::{
+    collections::{HashMap, HashSet},
+    ffi::OsStr,
+    fs::{self, File},
+    io::{self, BufReader},
+    path::{Path, PathBuf},
+    str,
+    sync::{atomic::AtomicBool, Arc},
+};
+
+use anyhow::{anyhow, bail, Context, Result};
+use clap::{Parser, Subcommand};
+use rayon::prelude::{IntoParallelRefIterator, ParallelIterator};
+use rsa::RsaPublicKey;
+
+use crate::{
+    cli::{status, warning},
+    format::avb::{self, Descriptor},
+    stream::PSeekFile,
+};
+
+fn ensure_name_is_safe(name: &str) -> Result<()> {
+    if Path::new(name).file_name() != Some(OsStr::new(name)) {
+        bail!("Unsafe partition name: {name}");
+    }
+
+    Ok(())
+}
+
+/// Recursively verify an image's vbmeta header and all of the chained images.
+/// `seen` is used to prevent cycles. `descriptors` will contain all of the
+/// hash and hashtree descriptors that need to be verified.
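+///
+/// A typical top-level invocation (hypothetical paths; the partition images
+/// are expected to exist in `directory` as `<name>.img`) might look like:
+///
+/// ```ignore
+/// let mut seen = HashSet::<String>::new();
+/// let mut descriptors = HashMap::<String, Descriptor>::new();
+/// verify_headers(Path::new("extracted"), "vbmeta", None, &mut seen, &mut descriptors)?;
+/// verify_descriptors(Path::new("extracted"), &descriptors, &cancel_signal)?;
+/// ```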
+pub fn verify_headers(
+    directory: &Path,
+    name: &str,
+    expected_key: Option<&RsaPublicKey>,
+    seen: &mut HashSet<String>,
+    descriptors: &mut HashMap<String, Descriptor>,
+) -> Result<()> {
+    if !seen.insert(name.to_owned()) {
+        return Ok(());
+    }
+
+    ensure_name_is_safe(name)?;
+
+    let path = directory.join(format!("{name}.img"));
+    let raw_reader =
+        File::open(&path).with_context(|| anyhow!("Failed to open for reading: {path:?}"))?;
+    let (header, _, _) = avb::load_image(BufReader::new(raw_reader))
+        .with_context(|| anyhow!("Failed to load vbmeta structures: {path:?}"))?;
+
+    // Verify the header's signature.
+    let public_key = header
+        .verify()
+        .with_context(|| anyhow!("Failed to verify header signature: {path:?}"))?;
+
+    if let Some(k) = &public_key {
+        let prefix = format!("{name} has a signed vbmeta header");
+
+        if let Some(e) = expected_key {
+            if k == e {
+                status!("{prefix}");
+            } else {
+                bail!("{prefix}, but is signed by an untrusted key");
+            }
+        } else {
+            warning!("{prefix}, but parent does not list a trusted key");
+        }
+    } else {
+        status!("{name} has an unsigned vbmeta header");
+    }
+
+    for descriptor in &header.descriptors {
+        let Some(target_name) = descriptor.partition_name() else {
+            continue;
+        };
+
+        match descriptor {
+            avb::Descriptor::Hashtree(_) | avb::Descriptor::Hash(_) => {
+                if let Some(prev) = descriptors.get(target_name) {
+                    if prev != descriptor {
+                        bail!("{name} descriptor does not match previous encounter");
+                    }
+                } else {
+                    descriptors.insert(target_name.to_owned(), descriptor.clone());
+                }
+            }
+            avb::Descriptor::ChainPartition(d) => {
+                let target_key = avb::decode_public_key(&d.public_key).with_context(|| {
+                    anyhow!("Failed to decode chained public key for: {target_name}")
+                })?;
+
+                verify_headers(directory, target_name, Some(&target_key), seen, descriptors)?;
+            }
+            _ => {}
+        }
+    }
+
+    Ok(())
+}
+
+pub fn verify_descriptors(
+    directory: &Path,
+    descriptors: &HashMap<String, Descriptor>,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<()> {
+    descriptors
+        .par_iter()
+        .map(|(name, descriptor)| {
+            let path = directory.join(format!("{name}.img"));
+            let reader = match File::open(&path).map(PSeekFile::new) {
+                Ok(f) => f,
+                // Some devices, like bluejay, have vbmeta descriptors that
+                // refer to partitions that exist on the device, but not in the
+                // OTA.
+                Err(e) if e.kind() == io::ErrorKind::NotFound => {
+                    warning!("Partition image does not exist: {path:?}");
+                    return Ok(());
+                }
+                Err(e) => {
+                    Err(e).with_context(|| format!("Failed to open for reading: {path:?}"))?
+                }
+            };
+
+            match descriptor {
+                Descriptor::Hashtree(d) => {
+                    status!("Verifying hashtree descriptor for: {name}");
+                    d.verify(
+                        || Ok(Box::new(BufReader::new(reader.clone()))),
+                        cancel_signal,
+                    )
+                    .with_context(|| anyhow!("Failed to verify hashtree descriptor for: {name}"))?;
+                }
+                Descriptor::Hash(d) => {
+                    status!("Verifying hash descriptor for: {name}");
+                    d.verify(BufReader::new(reader), cancel_signal)
+                        .with_context(|| anyhow!("Failed to verify hash descriptor for: {name}"))?;
+                }
+                _ => unreachable!("Non-verifiable descriptor: {descriptor:?}"),
+            }
+
+            Ok(())
+        })
+        .collect()
+}
+
+pub fn avb_main(cli: &AvbCli, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    match &cli.command {
+        AvbCommand::Dump(c) => {
+            let raw_reader = File::open(&c.input)
+                .with_context(|| anyhow!("Failed to open for reading: {:?}", c.input))?;
+            let reader = BufReader::new(raw_reader);
+            let (header, footer, image_size) = avb::load_image(reader)
+                .with_context(|| anyhow!("Failed to load vbmeta structures: {:?}", c.input))?;
+
+            println!("Image size: {image_size}");
+            println!("Header: {header:#?}");
+            println!("Footer: {footer:#?}");
+        }
+        AvbCommand::Verify(c) => {
+            let public_key = if let Some(p) = &c.public_key {
+                let data = fs::read(p).with_context(|| anyhow!("Failed to read file: {p:?}"))?;
+                let key = avb::decode_public_key(&data)
+                    .with_context(|| anyhow!("Failed to decode public key: {p:?}"))?;
+
+                Some(key)
+            } else {
+                None
+            };
+
+            let directory = c.input.parent().unwrap_or_else(|| Path::new("."));
+            let name = c
+                .input
+                .file_stem()
+                .with_context(|| anyhow!("Path is not a file: {:?}", c.input))?
+                .to_str()
+                .ok_or_else(|| anyhow!("Invalid UTF-8: {:?}", c.input))?;
+
+            let mut seen = HashSet::<String>::new();
+            let mut descriptors = HashMap::<String, Descriptor>::new();
+
+            verify_headers(
+                directory,
+                name,
+                public_key.as_ref(),
+                &mut seen,
+                &mut descriptors,
+            )?;
+            verify_descriptors(directory, &descriptors, cancel_signal)?;
+
+            status!("Successfully verified all vbmeta signatures and hashes");
+        }
+    }
+
+    Ok(())
+}
+
+/// Dump AVB header and footer information.
+#[derive(Debug, Parser)]
+struct DumpCli {
+    /// Path to input image.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    input: PathBuf,
+}
+
+/// Verify vbmeta signatures.
+#[derive(Debug, Parser)]
+struct VerifyCli {
+    /// Path to input image.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    input: PathBuf,
+
+    /// Path to public key in AVB binary format.
+    ///
+    /// If this is not specified, the signatures can only be checked for
+    /// validity, not whether they are trusted.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    public_key: Option<PathBuf>,
+}
+
+#[derive(Debug, Subcommand)]
+enum AvbCommand {
+    Dump(DumpCli),
+    Verify(VerifyCli),
+}
+
+/// Show information about AVB-protected images.
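+///
+/// For example (file names are hypothetical):
+/// `avbroot avb dump -i vbmeta.img` or
+/// `avbroot avb verify -i vbmeta.img -p avb_pkmd.bin`.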
+#[derive(Debug, Parser)]
+pub struct AvbCli {
+    #[command(subcommand)]
+    command: AvbCommand,
+}
diff --git a/src/cli/boot.rs b/src/cli/boot.rs
new file mode 100644
index 0000000..68ebd19
--- /dev/null
+++ b/src/cli/boot.rs
@@ -0,0 +1,493 @@
+/*
+ * SPDX-FileCopyrightText: 2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+use std::{
+    fs::{self, File},
+    io::{self, BufReader, BufWriter, Cursor, Write},
+    path::{Path, PathBuf},
+};
+
+use anyhow::{anyhow, bail, Context, Result};
+use clap::{Parser, Subcommand};
+
+use crate::{
+    format::{avb::Header, bootimage::BootImage, compression::CompressedReader, cpio},
+    stream::{FromReader, ToWriter},
+};
+
+fn read_image(path: &Path) -> Result<BootImage> {
+    let file = File::open(path).with_context(|| format!("Failed to open for reading: {path:?}"))?;
+    let reader = BufReader::new(file);
+    let image = BootImage::from_reader(reader)
+        .with_context(|| format!("Failed to read boot image: {path:?}"))?;
+
+    Ok(image)
+}
+
+fn write_image(path: &Path, image: &BootImage) -> Result<()> {
+    let file =
+        File::create(path).with_context(|| format!("Failed to open for writing: {path:?}"))?;
+    let mut writer = BufWriter::new(file);
+    image
+        .to_writer(&mut writer)
+        .with_context(|| format!("Failed to write boot image: {path:?}"))?;
+    writer.flush()?;
+
+    Ok(())
+}
+
+fn read_header(path: &Path) -> Result<BootImage> {
+    let data = fs::read_to_string(path)
+        .with_context(|| format!("Failed to read header TOML: {path:?}"))?;
+    let image = toml_edit::de::from_str(&data)
+        .with_context(|| format!("Failed to parse header TOML: {path:?}"))?;
+
+    Ok(image)
+}
+
+fn write_header(path: &Path, image: &BootImage) -> Result<()> {
+    let data = toml_edit::ser::to_string_pretty(image)
+        .with_context(|| format!("Failed to serialize header TOML: {path:?}"))?;
+    fs::write(path, data).with_context(|| format!("Failed to write header TOML: {path:?}"))?;
+
+    Ok(())
+}
+
+fn read_data_if_exists(path: &Path) -> Result<Option<Vec<u8>>> {
+    let data = match fs::read(path) {
+        Ok(f) => f,
+        Err(e) if e.kind() == io::ErrorKind::NotFound => return Ok(None),
+        Err(e) => Err(e).with_context(|| format!("Failed to read data: {path:?}"))?,
+    };
+
+    Ok(Some(data))
+}
+
+fn read_text_if_exists(path: &Path) -> Result<Option<String>> {
+    let data = match fs::read_to_string(path) {
+        Ok(f) => f,
+        Err(e) if e.kind() == io::ErrorKind::NotFound => return Ok(None),
+        Err(e) => Err(e).with_context(|| format!("Failed to read text: {path:?}"))?,
+    };
+
+    Ok(Some(data))
+}
+
+fn read_avb_header_if_exists(path: &Path) -> Result<Option<Header>> {
+    let file = match File::open(path) {
+        Ok(f) => f,
+        Err(e) if e.kind() == io::ErrorKind::NotFound => return Ok(None),
+        Err(e) => Err(e).with_context(|| format!("Failed to open for reading: {path:?}"))?,
+    };
+    let header = Header::from_reader(BufReader::new(file))
+        .with_context(|| anyhow!("Failed to read vbmeta header: {path:?}"))?;
+
+    Ok(Some(header))
+}
+
+fn write_data_if_not_empty(path: &Path, data: &[u8]) -> Result<()> {
+    if !data.is_empty() {
+        fs::write(path, data).with_context(|| format!("Failed to write data: {path:?}"))?;
+    }
+
+    Ok(())
+}
+
+fn write_text_if_not_empty(path: &Path, text: &str) -> Result<()> {
+    if !text.is_empty() {
+        fs::write(path, text.as_bytes())
+            .with_context(|| format!("Failed to write text: {path:?}"))?;
+    }
+
+    Ok(())
+}
+
+fn write_avb_header(path: &Path, header: &Header) -> Result<()> {
+    let file =
+        File::create(path).with_context(|| anyhow!("Failed to open for writing: {path:?}"))?;
+    header.to_writer(BufWriter::new(file))?;
+
+    Ok(())
+}
+
+fn display_info(cli: &BootCli, image: &BootImage) {
+    if !cli.quiet {
+        if cli.debug {
+            println!("{image:#?}");
+        } else {
+            println!("{image}");
+        }
+    }
+}
+
+fn unpack_subcommand(boot_cli: &BootCli, cli: &UnpackCli) -> Result<()> {
+    let image = read_image(&cli.input)?;
+    display_info(boot_cli, &image);
+
+    write_header(&cli.output_header, &image)?;
+
+    let mut kernel = None;
+    let mut second = None;
+    let mut recovery_dtbo = None;
+    let mut dtb = None;
+    let mut vts_signature = None;
+    let mut bootconfig = None;
+    let mut ramdisks = vec![];
+
+    match &image {
+        BootImage::V0Through2(b) => {
+            kernel = Some(&b.kernel);
+            second = Some(&b.second);
+            if let Some(v1) = &b.v1_extra {
+                recovery_dtbo = Some(&v1.recovery_dtbo);
+            }
+            if let Some(v2) = &b.v2_extra {
+                dtb = Some(&v2.dtb);
+            }
+            ramdisks.push(&b.ramdisk);
+        }
+        BootImage::V3Through4(b) => {
+            kernel = Some(&b.kernel);
+            if let Some(v4) = &b.v4_extra {
+                vts_signature = v4.signature.as_ref();
+            }
+            ramdisks.push(&b.ramdisk);
+        }
+        BootImage::VendorV3Through4(b) => {
+            dtb = Some(&b.dtb);
+            if let Some(v4) = &b.v4_extra {
+                bootconfig = Some(&v4.bootconfig);
+            }
+            ramdisks.extend(b.ramdisks.iter());
+        }
+    }
+
+    if let Some(data) = kernel {
+        write_data_if_not_empty(&cli.output_kernel, data)?;
+    }
+    if let Some(data) = second {
+        write_data_if_not_empty(&cli.output_second, data)?;
+    }
+    if let Some(data) = recovery_dtbo {
+        write_data_if_not_empty(&cli.output_recovery_dtbo, data)?;
+    }
+    if let Some(data) = dtb {
+        write_data_if_not_empty(&cli.output_dtb, data)?;
+    }
+    if let Some(header) = vts_signature {
+        write_avb_header(&cli.output_vts_signature, header)?;
+    }
+    if let Some(text) = bootconfig {
+        write_text_if_not_empty(&cli.output_bootconfig, text)?;
+    }
+
+    for (i, data) in ramdisks.iter().enumerate() {
+        let mut path = cli.output_ramdisk_prefix.as_os_str().to_owned();
+        path.push(i.to_string());
+
+        write_data_if_not_empty(Path::new(&path), data)?;
+    }
+
+    Ok(())
+}
+
+fn pack_subcommand(boot_cli: &BootCli, cli: &PackCli) -> Result<()> {
+    let mut image = read_header(&cli.input_header)?;
+
+    let kernel = read_data_if_exists(&cli.input_kernel)?;
+    let second = read_data_if_exists(&cli.input_second)?;
+    let recovery_dtbo = read_data_if_exists(&cli.input_recovery_dtbo)?;
+    let dtb = read_data_if_exists(&cli.input_dtb)?;
+    let vts_signature = read_avb_header_if_exists(&cli.input_vts_signature)?;
+    let bootconfig = read_text_if_exists(&cli.input_bootconfig)?;
+    let mut ramdisks = vec![];
+
+    for i in 0.. {
+        let mut path = cli.input_ramdisk_prefix.as_os_str().to_owned();
+        path.push(i.to_string());
+
+        let Some(ramdisk) = read_data_if_exists(Path::new(&path))? else {
+            break;
+        };
+
+        ramdisks.push(ramdisk);
+    }
+
+    match &mut image {
+        BootImage::V0Through2(b) => {
+            b.kernel = kernel.unwrap_or_default();
+            b.second = second.unwrap_or_default();
+            if let Some(v1) = &mut b.v1_extra {
+                v1.recovery_dtbo = recovery_dtbo.unwrap_or_default();
+            }
+            if let Some(v2) = &mut b.v2_extra {
+                v2.dtb = dtb.unwrap_or_default();
+            }
+            if ramdisks.len() > 1 {
+                bail!("Image type only supports a single ramdisk");
+            }
+            b.ramdisk = ramdisks.into_iter().next().unwrap_or_default();
+        }
+        BootImage::V3Through4(b) => {
+            b.kernel = kernel.unwrap_or_default();
+            if let Some(v4) = &mut b.v4_extra {
+                v4.signature = vts_signature;
+            }
+            if ramdisks.len() > 1 {
+                bail!("Image type only supports a single ramdisk");
+            }
+            b.ramdisk = ramdisks.into_iter().next().unwrap_or_default();
+        }
+        BootImage::VendorV3Through4(b) => {
+            b.dtb = dtb.unwrap_or_default();
+            if let Some(v4) = &mut b.v4_extra {
+                v4.bootconfig = bootconfig.unwrap_or_default();
+            }
+            b.ramdisks = ramdisks;
+        }
+    }
+
+    display_info(boot_cli, &image);
+    write_image(&cli.output, &image)?;
+
+    Ok(())
+}
+
+fn repack_subcommand(boot_cli: &BootCli, cli: &RepackCli) -> Result<()> {
+    let image = read_image(&cli.input)?;
+    display_info(boot_cli, &image);
+    write_image(&cli.output, &image)?;
+
+    Ok(())
+}
+
+fn info_subcommand(boot_cli: &BootCli, cli: &InfoCli) -> Result<()> {
+    let image = read_image(&cli.input)?;
+    display_info(boot_cli, &image);
+
+    Ok(())
+}
+
+pub fn magisk_info_subcommand(cli: &MagiskInfoCli) -> Result<()> {
+    let raw_reader = File::open(&cli.image)
+        .with_context(|| anyhow!("Failed to open for reading: {:?}", cli.image))?;
+    let boot_image = BootImage::from_reader(BufReader::new(raw_reader))
+        .with_context(|| anyhow!("Failed to load boot image: {:?}", cli.image))?;
+
+    let mut ramdisks = vec![];
+
+    match &boot_image {
+        BootImage::V0Through2(b) => {
+            if !b.ramdisk.is_empty() {
+                ramdisks.push(&b.ramdisk);
+            }
+        }
+        BootImage::V3Through4(b) => {
+            if !b.ramdisk.is_empty() {
+                ramdisks.push(&b.ramdisk);
+            }
+        }
+        BootImage::VendorV3Through4(b) => {
+            ramdisks.extend(b.ramdisks.iter());
+        }
+    }
+
+    for (i, ramdisk) in ramdisks.iter().enumerate() {
+        let reader = Cursor::new(ramdisk);
+        let reader = CompressedReader::new(reader, true)
+            .with_context(|| anyhow!("Failed to load ramdisk #{i}"))?;
+        let entries = cpio::load(reader, false)
+            .with_context(|| anyhow!("Failed to load ramdisk #{i} cpio"))?;
+
+        if let Some(e) = entries.iter().find(|e| e.name == b".backup/.magisk") {
+            io::stdout().write_all(&e.content)?;
+            return Ok(());
+        }
+    }
+
+    bail!("Not a Magisk-patched boot image");
+}
+
+pub fn boot_main(cli: &BootCli) -> Result<()> {
+    match &cli.command {
+        BootCommand::Unpack(c) => unpack_subcommand(cli, c),
+        BootCommand::Pack(c) => pack_subcommand(cli, c),
+        BootCommand::Repack(c) => repack_subcommand(cli, c),
+        BootCommand::Info(c) => info_subcommand(cli, c),
+        BootCommand::MagiskInfo(c) => magisk_info_subcommand(c),
+    }
+}
+
+/// Unpack a boot image.
+#[derive(Debug, Parser)]
+struct UnpackCli {
+    /// Path to input boot image.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    input: PathBuf,
+
+    /// Path to output header TOML.
+    #[arg(long, value_name = "FILE", value_parser, default_value = "header.toml")]
+    output_header: PathBuf,
+
+    /// Path to output kernel image.
+    #[arg(long, value_name = "FILE", value_parser, default_value = "kernel.img")]
+    output_kernel: PathBuf,
+
+    /// Path prefix for output ramdisk images.
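+    ///
+    /// Each ramdisk is written to <PREFIX><index>, counting up from 0 (e.g.
+    /// ramdisk.img.0 with the default prefix).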
+    #[arg(
+        long,
+        value_name = "FILE",
+        value_parser,
+        default_value = "ramdisk.img."
+    )]
+    output_ramdisk_prefix: PathBuf,
+
+    /// Path to output second stage bootloader image.
+    #[arg(long, value_name = "FILE", value_parser, default_value = "second.img")]
+    output_second: PathBuf,
+
+    /// Path to output recovery dtbo/acpio image.
+    #[arg(
+        long,
+        value_name = "FILE",
+        value_parser,
+        default_value = "recovery_dtbo.img"
+    )]
+    output_recovery_dtbo: PathBuf,
+
+    /// Path to output device tree blob image.
+    #[arg(long, value_name = "FILE", value_parser, default_value = "dtb.img")]
+    output_dtb: PathBuf,
+
+    /// Path to output VTS signature.
+    #[arg(
+        long,
+        value_name = "FILE",
+        value_parser,
+        default_value = "vts_signature.img"
+    )]
+    output_vts_signature: PathBuf,
+
+    /// Path to output bootconfig text.
+    #[arg(
+        long,
+        value_name = "FILE",
+        value_parser,
+        default_value = "bootconfig.txt"
+    )]
+    output_bootconfig: PathBuf,
+}
+
+/// Pack a boot image.
+#[derive(Debug, Parser)]
+struct PackCli {
+    /// Path to output boot image.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    output: PathBuf,
+
+    /// Path to input header TOML.
+    #[arg(long, value_name = "FILE", value_parser, default_value = "header.toml")]
+    input_header: PathBuf,
+
+    /// Path to input kernel image.
+    #[arg(long, value_name = "FILE", value_parser, default_value = "kernel.img")]
+    input_kernel: PathBuf,
+
+    /// Path prefix for input ramdisk images.
+    #[arg(
+        long,
+        value_name = "FILE",
+        value_parser,
+        default_value = "ramdisk.img."
+    )]
+    input_ramdisk_prefix: PathBuf,
+
+    /// Path to input second stage bootloader image.
+    #[arg(long, value_name = "FILE", value_parser, default_value = "second.img")]
+    input_second: PathBuf,
+
+    /// Path to input recovery dtbo/acpio image.
+    #[arg(
+        long,
+        value_name = "FILE",
+        value_parser,
+        default_value = "recovery_dtbo.img"
+    )]
+    input_recovery_dtbo: PathBuf,
+
+    /// Path to input device tree blob image.
+    #[arg(long, value_name = "FILE", value_parser, default_value = "dtb.img")]
+    input_dtb: PathBuf,
+
+    /// Path to input VTS signature.
+    #[arg(
+        long,
+        value_name = "FILE",
+        value_parser,
+        default_value = "vts_signature.img"
+    )]
+    input_vts_signature: PathBuf,
+
+    /// Path to input bootconfig text.
+    #[arg(
+        long,
+        value_name = "FILE",
+        value_parser,
+        default_value = "bootconfig.txt"
+    )]
+    input_bootconfig: PathBuf,
+}
+
+/// Repack a boot image.
+#[derive(Debug, Parser)]
+struct RepackCli {
+    /// Path to input boot image.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    input: PathBuf,
+
+    /// Path to output boot image.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    output: PathBuf,
+}
+
+/// Display boot image header information.
+#[derive(Debug, Parser)]
+struct InfoCli {
+    /// Path to input boot image.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    input: PathBuf,
+}
+
+/// Print Magisk config from a patched boot image.
+#[derive(Debug, Parser)]
+pub struct MagiskInfoCli {
+    /// Path to Magisk-patched boot image.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    pub image: PathBuf,
+}
+
+#[derive(Debug, Subcommand)]
+enum BootCommand {
+    Unpack(UnpackCli),
+    Pack(PackCli),
+    Repack(RepackCli),
+    Info(InfoCli),
+    MagiskInfo(MagiskInfoCli),
+}
+
+/// Pack or unpack boot images.
+#[derive(Debug, Parser)]
+pub struct BootCli {
+    #[command(subcommand)]
+    command: BootCommand,
+
+    /// Don't print boot image header information.
+    #[arg(short, long, global = true)]
+    quiet: bool,
+
+    /// Print boot image header information in debug format.
+    #[arg(short, long, global = true)]
+    debug: bool,
+}
diff --git a/src/cli/completion.rs b/src/cli/completion.rs
new file mode 100644
index 0000000..cf3e2dd
--- /dev/null
+++ b/src/cli/completion.rs
@@ -0,0 +1,26 @@
+use std::io;
+
+use anyhow::Result;
+use clap::{CommandFactory, Parser};
+use clap_complete::Shell;
+
+use crate::cli::args::Cli;
+
+pub fn completion_main(cli: &CompletionCli) -> Result<()> {
+    clap_complete::generate(
+        cli.shell,
+        &mut Cli::command(),
+        env!("CARGO_PKG_NAME"),
+        &mut io::stdout(),
+    );
+
+    Ok(())
+}
+
+/// Generate shell tab completion configs.
+#[derive(Debug, Parser)]
+pub struct CompletionCli {
+    /// The shell to generate completions for.
+    #[arg(short, long, value_name = "SHELL", value_parser)]
+    pub shell: Shell,
+}
diff --git a/src/cli/key.rs b/src/cli/key.rs
new file mode 100644
index 0000000..0667b7a
--- /dev/null
+++ b/src/cli/key.rs
@@ -0,0 +1,168 @@
+/*
+ * SPDX-FileCopyrightText: 2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+use std::{
+    ffi::OsString,
+    fs,
+    path::{Path, PathBuf},
+    time::Duration,
+};
+
+use anyhow::{anyhow, Context, Result};
+use clap::{Args, Parser, Subcommand};
+
+use crate::{
+    crypto::{self, PassphraseSource},
+    format::avb,
+};
+
+fn get_passphrase(group: &PassphraseGroup, key_path: &Path) -> PassphraseSource {
+    if let Some(v) = &group.pass_env_var {
+        PassphraseSource::EnvVar(v.clone())
+    } else if let Some(p) = &group.pass_file {
+        PassphraseSource::File(p.clone())
+    } else {
+        PassphraseSource::Prompt(format!("Enter passphrase for {key_path:?}: "))
+    }
+}
+
+pub fn key_main(cli: &KeyCli) -> Result<()> {
+    match &cli.command {
+        KeyCommand::GenerateKey(c) => {
+            let passphrase = get_passphrase(&c.passphrase, &c.output);
+            let private_key =
+                crypto::generate_rsa_key_pair().context("Failed to generate RSA keypair")?;
+
+            crypto::write_pem_key_file(&c.output, &private_key, &passphrase)
+                .with_context(|| anyhow!("Failed to write private key: {:?}", c.output))?;
+        }
+        KeyCommand::GenerateCert(c) => {
+            let passphrase = get_passphrase(&c.passphrase, &c.key);
+            let private_key = crypto::read_pem_key_file(&c.key, &passphrase)
+                .with_context(|| anyhow!("Failed to load key: {:?}", c.key))?;
+
+            let validity = Duration::from_secs(c.validity * 24 * 60 * 60);
+            let cert = crypto::generate_cert(&private_key, rand::random(), validity, &c.subject)
+                .context("Failed to generate certificate")?;
+
+            crypto::write_pem_cert_file(&c.output, &cert)
+                .with_context(|| anyhow!("Failed to write certificate: {:?}", c.output))?;
+        }
+        KeyCommand::ExtractAvb(c) => {
+            let public_key = if let Some(p) = &c.input.key {
+                let passphrase = get_passphrase(&c.passphrase, p);
+                let private_key = crypto::read_pem_key_file(p, &passphrase)
+                    .with_context(|| anyhow!("Failed to load key: {p:?}"))?;
+
+                private_key.to_public_key()
+            } else if let Some(p) = &c.input.cert {
+                let certificate = crypto::read_pem_cert_file(p)
+                    .with_context(|| anyhow!("Failed to load certificate: {p:?}"))?;
+
+                crypto::get_public_key(&certificate)?
+            } else {
+                unreachable!()
+            };
+
+            let encoded = avb::encode_public_key(&public_key)
+                .with_context(|| anyhow!("Failed to encode public key in AVB format"))?;
+
+            fs::write(&c.output, encoded)
+                .with_context(|| anyhow!("Failed to write public key: {:?}", c.output))?;
+        }
+    }
+
+    Ok(())
+}
+
+#[derive(Debug, Args)]
+#[group(required = true, multiple = false)]
+struct PublicKeyInputGroup {
+    /// Path to private key.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    key: Option<PathBuf>,
+
+    /// Path to certificate.
+    #[arg(short, long, value_name = "FILE", value_parser, conflicts_with_all = ["pass_env_var", "pass_file"])]
+    cert: Option<PathBuf>,
+}
+
+#[derive(Debug, Args)]
+struct PassphraseGroup {
+    /// Environment variable containing private key passphrase.
+    #[arg(long, value_name = "ENV_VAR", value_parser, group = "pass")]
+    pass_env_var: Option<OsString>,
+
+    /// File containing private key passphrase.
+    #[arg(long, value_name = "FILE", value_parser, group = "pass")]
+    pass_file: Option<PathBuf>,
+}
+
+/// Generate a 4096-bit RSA keypair.
+///
+/// The output is saved in the standard PKCS8 format.
+#[derive(Debug, Parser)]
+struct GenerateKeyCli {
+    /// Path to output private key.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    output: PathBuf,
+
+    #[command(flatten)]
+    passphrase: PassphraseGroup,
+}
+
+/// Generate a certificate.
+#[derive(Debug, Parser)]
+struct GenerateCertCli {
+    /// Path to input private key.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    key: PathBuf,
+
+    #[command(flatten)]
+    passphrase: PassphraseGroup,
+
+    /// Path to output certificate.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    output: PathBuf,
+
+    /// Certificate subject with comma-separated components.
+    #[arg(short, long, default_value = "CN=avbroot")]
+    subject: String,
+
+    /// Certificate validity in days.
+    #[arg(short, long, default_value = "10000")]
+    validity: u64,
+}
+
+/// Extract the AVB public key from a private key or certificate.
+///
+/// The public key is stored in both the private key and the certificate. Either
+/// one can be used interchangeably.
+#[derive(Debug, Parser)]
+struct ExtractAvbCli {
+    /// Path to output AVB public key.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    output: PathBuf,
+
+    #[command(flatten)]
+    input: PublicKeyInputGroup,
+
+    #[command(flatten)]
+    passphrase: PassphraseGroup,
+}
+
+#[derive(Debug, Subcommand)]
+enum KeyCommand {
+    GenerateKey(GenerateKeyCli),
+    GenerateCert(GenerateCertCli),
+    ExtractAvb(ExtractAvbCli),
+}
+
+/// Generate and convert keys.
+#[derive(Debug, Parser)]
+pub struct KeyCli {
+    #[command(subcommand)]
+    command: KeyCommand,
+}
diff --git a/src/cli/mod.rs b/src/cli/mod.rs
new file mode 100644
index 0000000..ba167ed
--- /dev/null
+++ b/src/cli/mod.rs
@@ -0,0 +1,27 @@
+/*
+ * SPDX-FileCopyrightText: 2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+pub mod args;
+pub mod avb;
+pub mod boot;
+pub mod completion;
+pub mod key;
+pub mod ota;
+pub mod ramdisk;
+
+macro_rules! status {
+    ($($arg:tt)*) => {
+        println!("\x1b[1m[*] {}\x1b[0m", format!($($arg)*))
+    }
+}
+
+macro_rules! warning {
+    ($($arg:tt)*) => {
+        println!("\x1b[1;31m[WARNING] {}\x1b[0m", format!($($arg)+))
+    }
+}
+
+pub(crate) use status;
+pub(crate) use warning;
diff --git a/src/cli/ota.rs b/src/cli/ota.rs
new file mode 100644
index 0000000..3266ad2
--- /dev/null
+++ b/src/cli/ota.rs
@@ -0,0 +1,1347 @@
+/*
+ * SPDX-FileCopyrightText: 2022-2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+use std::{
+    borrow::Cow,
+    collections::{BTreeSet, HashMap, HashSet},
+    ffi::{OsStr, OsString},
+    fmt::Display,
+    fs::{self, File},
+    io::{self, BufReader, BufWriter, Cursor, Read, Seek, SeekFrom, Write},
+    path::{Path, PathBuf},
+    sync::{atomic::AtomicBool, Arc, Mutex},
+    time::Instant,
+};
+
+use anyhow::{anyhow, bail, Context, Result};
+use clap::{value_parser, ArgAction, Args, Parser, Subcommand};
+use phf::phf_map;
+use rayon::prelude::{IntoParallelIterator, IntoParallelRefMutIterator, ParallelIterator};
+use rsa::RsaPrivateKey;
+use tempfile::{NamedTempFile, TempDir};
+use topological_sort::TopologicalSort;
+use x509_cert::Certificate;
+use zip::{write::FileOptions, CompressionMethod, ZipArchive, ZipWriter};
+
+use crate::{
+    boot::{self, BootImagePatcher, MagiskRootPatcher, OtaCertPatcher, PrepatchedImagePatcher},
+    cli::{self, status, warning},
+    crypto::{self, PassphraseSource},
+    format::{
+        avb::Header,
+        avb::{self, AlgorithmType, Descriptor},
+        ota::{self, SigningWriter, ZipEntry},
+        padding,
+        payload::{self, CompressedPartitionWriter, PayloadHeader, PayloadWriter},
+    },
+    protobuf::{
+        build::tools::releasetools::OtaMetadata, chromeos_update_engine::DeltaArchiveManifest,
+    },
+    stream::{
+        self, CountingWriter, FromReader, HolePunchingWriter, PSeekFile, ReadSeek, SectionReader,
+        ToWriter,
+    },
+};
+
+static PARTITION_PRIORITIES: phf::Map<&'static str, &[&'static str]> = phf_map! {
+    // The kernel is always in boot
+    "@gki_kernel" => &["boot"],
+    // Devices launching with Android 13 use a GKI init_boot ramdisk
+    "@gki_ramdisk" => &["init_boot", "boot"],
+    // OnePlus devices have a recovery image
+    "@otacerts" => &["recovery", "vendor_boot", "boot"],
+};
+
+fn joined(into_iter: impl IntoIterator<Item = impl Display>) -> String {
+    let items = into_iter
+        .into_iter()
+        .map(|i| i.to_string())
+        .collect::<Vec<_>>();
+
+    items.join(", ")
+}
+
+fn sorted<T: Ord>(iter: impl Iterator<Item = T>) -> Vec<T> {
+    let mut items = iter.collect::<Vec<_>>();
+    items.sort();
+    items
+}
+
+/// Get the set of partitions, grouped by type, based on the priorities listed
+/// in [`PARTITION_PRIORITIES`]. The result also includes every vbmeta partition
+/// prefixed with `@vbmeta:`.
+pub fn get_partitions_by_type(manifest: &DeltaArchiveManifest) -> Result<HashMap<String, String>> {
+    let all_partitions = manifest
+        .partitions
+        .iter()
+        .map(|p| p.partition_name.as_str())
+        .collect::<HashSet<_>>();
+    let mut by_type = HashMap::new();
+
+    for (&t, candidates) in &PARTITION_PRIORITIES {
+        let &partition = candidates
+            .iter()
+            .find(|p| all_partitions.contains(*p))
+            .ok_or_else(|| anyhow!("Cannot find partition of type: {t}"))?;
+
+        by_type.insert(t.to_owned(), partition.to_owned());
+    }
+
+    for &partition in &all_partitions {
+        if partition.contains("vbmeta") {
+            by_type.insert(format!("@vbmeta:{partition}"), partition.to_owned());
+        }
+    }
+
+    Ok(by_type)
+}
+
+/// Get the list of partitions, grouped by type, that need to be patched. For
+/// the @vbmeta: type, this may include more partitions than necessary because
+/// it's not yet known which vbmeta partitions cover the contents of the other
+/// partitions.
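+///
+/// For a typical device launching with Android 13, the result might look like
+/// this (illustrative):
+///
+/// * `@otacerts` -> `vendor_boot`
+/// * `@rootpatch` -> `init_boot`
+/// * `@vbmeta:vbmeta` -> `vbmeta`
+/// * `@vbmeta:vbmeta_system` -> `vbmeta_system`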
+pub fn get_required_images(
+    manifest: &DeltaArchiveManifest,
+    boot_partition: &str,
+    with_root: bool,
+) -> Result<HashMap<String, String>> {
+    let all_partitions = manifest
+        .partitions
+        .iter()
+        .map(|p| p.partition_name.as_str())
+        .collect::<HashSet<_>>();
+    let by_type = get_partitions_by_type(manifest)?;
+    let mut images = HashMap::new();
+
+    for (k, v) in &by_type {
+        if k == "@otacerts" || k.starts_with("@vbmeta:") {
+            images.insert(k.clone(), v.clone());
+        }
+    }
+
+    if with_root {
+        if by_type.contains_key(boot_partition) {
+            images.insert("@rootpatch".to_owned(), by_type[boot_partition].clone());
+        } else if all_partitions.contains(boot_partition) {
+            images.insert("@rootpatch".to_owned(), boot_partition.to_owned());
+        } else {
+            bail!("Boot partition not found: {boot_partition}");
+        }
+    }
+
+    Ok(images)
+}
+
+/// Open all input streams listed in `required_images`. If an image has a path
+/// in `external_images`, the real file on the filesystem is opened. Otherwise,
+/// the image is extracted from the payload.
+fn open_input_streams(
+    open_payload: impl Fn() -> io::Result<Box<dyn ReadSeek>> + Sync,
+    required_images: &HashMap<String, String>,
+    external_images: &HashMap<String, PathBuf>,
+    header: &PayloadHeader,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<HashMap<String, Box<dyn ReadSeek>>> {
+    let mut input_streams = HashMap::<String, Box<dyn ReadSeek>>::new();
+
+    // We always include replacement images that the user specifies, even if
+    // they don't need to be patched.
+    let all_images = required_images
+        .values()
+        .chain(external_images.keys())
+        .collect::<HashSet<_>>();
+
+    for name in all_images {
+        if let Some(path) = external_images.get(name) {
+            status!("Opening external image: {name}: {path:?}");
+
+            let file = File::open(path)
+                .with_context(|| anyhow!("Failed to open external image: {path:?}"))?;
+            input_streams.insert(name.clone(), Box::new(file));
+        } else {
+            status!("Extracting from original payload: {name}");
+
+            let stream =
+                payload::extract_image_to_memory(&open_payload, header, name, cancel_signal)
+                    .with_context(|| anyhow!("Failed to extract from original payload: {name}"))?;
+            input_streams.insert(name.clone(), Box::new(stream));
+        }
+    }
+
+    Ok(input_streams)
+}
+
+/// Patch the boot images listed in `required_images`. An [`OtaCertPatcher`] is
+/// always applied to the `@otacerts` image to insert `cert_ota` into the
+/// trusted certificate list. If `root_patcher` is specified, then it is used to
+/// patch the `@rootpatch` image. If the original image is signed, then it will
+/// be re-signed with `key_avb`.
+fn patch_boot_images(
+    required_images: &HashMap<String, String>,
+    input_streams: &mut HashMap<String, Box<dyn ReadSeek>>,
+    root_patcher: Option<Box<dyn BootImagePatcher + Send>>,
+    key_avb: &RsaPrivateKey,
+    cert_ota: &Certificate,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<()> {
+    let mut boot_patchers = HashMap::<&str, Vec<Box<dyn BootImagePatcher + Send>>>::new();
+    boot_patchers
+        .entry(&required_images["@otacerts"])
+        .or_default()
+        .push(Box::new(OtaCertPatcher::new(cert_ota.clone())));
+
+    if let Some(p) = root_patcher {
+        boot_patchers
+            .entry(&required_images["@rootpatch"])
+            .or_default()
+            .push(p);
+    }
+
+    status!(
+        "Patching boot images: {}",
+        joined(sorted(boot_patchers.keys()))
+    );
+
+    // Temporarily take the streams out of input_streams so we can easily
+    // run the patchers in parallel.
+    let patchers_list = boot_patchers
+        .into_iter()
+        .map(|(n, p)| (n, p, input_streams.remove(n).unwrap()))
+        .collect::<Vec<_>>();
+
+    // Patch the boot images. The original readers are dropped.
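+    //
+    // Each image is patched independently, so the work below parallelizes
+    // cleanly with rayon; every result is an in-memory Cursor that replaces
+    // the original stream in `input_streams`.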
+    let patched = patchers_list
+        .into_par_iter()
+        .map(|(n, p, s)| -> Result<(&str, Cursor<Vec<u8>>)> {
+            let mut writer = Cursor::new(Vec::new());
+
+            boot::patch_boot(s, &mut writer, key_avb, &p, cancel_signal)
+                .with_context(|| anyhow!("Failed to patch boot image: {n}"))?;
+
+            Ok((n, writer))
+        })
+        .collect::<Result<Vec<_>>>()?;
+
+    // Put the patched images back into input_streams.
+    for (name, stream) in patched {
+        input_streams.insert(name.to_owned(), Box::new(stream));
+    }
+
+    Ok(())
+}
+
+/// From the set of input images (modified partitions + all vbmeta partitions),
+/// determine the order to patch the vbmeta images so that it can be done in a
+/// single pass.
+fn get_vbmeta_patch_order(
+    images: &mut HashMap<String, Box<dyn ReadSeek>>,
+    vbmeta_images: &HashSet<String>,
+) -> Result<Vec<(String, Header, HashSet<String>)>> {
+    let mut dep_graph = HashMap::<&str, HashSet<String>>::new();
+    let mut headers = HashMap::<&str, Header>::new();
+    let mut missing = images.keys().cloned().collect::<HashSet<_>>();
+
+    for name in vbmeta_images {
+        let reader = images.get_mut(name).unwrap();
+        let (header, footer, _) = avb::load_image(reader)
+            .with_context(|| anyhow!("Failed to load vbmeta image: {name}"))?;
+
+        if let Some(f) = footer {
+            warning!("{name} is a vbmeta partition, but has a footer: {f:?}");
+        }
+
+        dep_graph.insert(name, HashSet::new());
+        missing.remove(name);
+
+        for descriptor in &header.descriptors {
+            let Some(partition_name) = descriptor.partition_name() else {
+                continue;
+            };
+
+            // Ignore partitions that are guaranteed to not be modified.
+            if images.contains_key(partition_name) {
+                dep_graph
+                    .get_mut(name.as_str())
+                    .unwrap()
+                    .insert(partition_name.to_owned());
+                missing.remove(partition_name);
+            }
+        }
+
+        headers.insert(name, header);
+    }
+
+    if !missing.is_empty() {
+        warning!("Partitions aren't protected by AVB: {}", joined(missing));
+    }
+
+    // Prune vbmeta images we don't need.
+    loop {
+        let unneeded = dep_graph
+            .iter()
+            .find(|(_, d)| d.is_empty())
+            .map(|(&n, _)| n.to_owned());
+        match unneeded {
+            Some(name) => {
+                dep_graph.remove(name.as_str());
+                headers.remove(name.as_str());
+
+                for deps in dep_graph.values_mut() {
+                    deps.remove(name.as_str());
+                }
+            }
+            None => break,
+        }
+    }
+
+    // Compute the patching order. This only includes vbmeta images.
+    let mut topo = TopologicalSort::<String>::new();
+    let mut order = vec![];
+
+    for (name, deps) in &dep_graph {
+        for dep in deps {
+            topo.add_dependency(dep, name.to_owned());
+        }
+    }
+
+    while !topo.is_empty() {
+        match topo.pop() {
+            Some(item) => {
+                // Only include vbmeta images that we need to modify.
+                if headers.contains_key(item.as_str()) {
+                    order.push((
+                        item.clone(),
+                        headers.remove(item.as_str()).unwrap(),
+                        dep_graph.remove(item.as_str()).unwrap(),
+                    ));
+                }
+            }
+            None => bail!("vbmeta dependency graph has cycle: {topo:?}"),
+        }
+    }
+
+    Ok(order)
+}
+
+/// Update vbmeta descriptors based on the footers from the specified images and
+/// then re-sign the vbmeta images.
+fn update_vbmeta_descriptors(
+    images: &mut HashMap<String, Box<dyn ReadSeek>>,
+    order: &mut [(String, Header, HashSet<String>)],
+    clear_vbmeta_flags: bool,
+    key: &RsaPrivateKey,
+    block_size: u64,
+) -> Result<()> {
+    for (name, parent_header, deps) in order {
+        if parent_header.flags != 0 {
+            if clear_vbmeta_flags {
+                parent_header.flags = 0;
+            } else {
+                bail!("{name} header flags disable AVB {:#x}", parent_header.flags);
+            }
+        }
+
+        // avbroot doesn't support any other types.
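+        // (The header is re-signed below with the user's AVB key, and avbroot
+        // only supports 4096-bit RSA keys, so the algorithm must end up as
+        // SHA256_RSA4096 regardless of what the stock image used.)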
+        if parent_header.algorithm_type != AlgorithmType::Sha256Rsa4096 {
+            parent_header.algorithm_type = AlgorithmType::Sha256Rsa4096;
+
+            status!(
+                "{} signature algorithm type changed to {:?}",
+                name,
+                parent_header.algorithm_type
+            );
+        }
+
+        for dep in deps.iter() {
+            // This can't fail since the descriptor must have existed for the
+            // dependency to exist.
+            let parent_descriptor = parent_header
+                .descriptors
+                .iter_mut()
+                .find(|d| d.partition_name() == Some(dep))
+                .unwrap();
+
+            let reader = images.get_mut(dep).unwrap();
+            let (header, _, _) = avb::load_image(reader)
+                .with_context(|| anyhow!("Failed to load vbmeta footer from image: {dep}"))?;
+
+            if header.public_key.is_empty() {
+                // vbmeta is unsigned. Use the existing descriptor.
+                let Some(descriptor) = header
+                    .descriptors
+                    .iter()
+                    .find(|d| d.partition_name() == Some(dep))
+                else {
+                    bail!("{name} has no descriptor for itself");
+                };
+
+                match (parent_descriptor, descriptor) {
+                    (Descriptor::Hash(pd), Descriptor::Hash(d)) => {
+                        *pd = d.clone();
+                    }
+                    (Descriptor::Hashtree(pd), Descriptor::Hashtree(d)) => {
+                        *pd = d.clone();
+                    }
+                    _ => {
+                        bail!("{name}'s descriptor for {dep} must match {dep}'s self descriptor");
+                    }
+                }
+            } else {
+                // vbmeta is signed; use a chain descriptor.
+                match parent_descriptor {
+                    Descriptor::ChainPartition(d) => {
+                        d.public_key = header.public_key;
+                    }
+                    _ => {
+                        bail!("{name}'s descriptor for {dep} must be a chain descriptor");
+                    }
+                }
+            }
+        }
+
+        parent_header
+            .sign(key)
+            .with_context(|| anyhow!("Failed to sign vbmeta header for image: {name}"))?;
+
+        let mut writer = Cursor::new(Vec::new());
+        parent_header
+            .to_writer(&mut writer)
+            .with_context(|| anyhow!("Failed to write vbmeta image: {name}"))?;
+
+        padding::write_zeros(&mut writer, block_size)
+            .with_context(|| anyhow!("Failed to write vbmeta padding: {name}"))?;
+
+        *images.get_mut(name).unwrap() = Box::new(writer);
+    }
+
+    Ok(())
+}
+
+/// Compress an image and update the OTA manifest partition entry appropriately.
+fn compress_image(
+    name: &str,
+    mut stream: &mut Box<dyn ReadSeek>,
+    header: &Mutex<PayloadHeader>,
+    block_size: u32,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<()> {
+    stream.rewind()?;
+
+    let writer = Cursor::new(Vec::new());
+    let mut compressed = CompressedPartitionWriter::new(writer, block_size)?;
+
+    stream::copy(&mut stream, &mut compressed, cancel_signal)?;
+
+    let mut header_locked = header.lock().unwrap();
+    let partition = header_locked
+        .manifest
+        .partitions
+        .iter_mut()
+        .find(|p| p.partition_name == name)
+        .unwrap();
+    let writer = compressed.finish(partition)?;
+
+    *stream = Box::new(writer);
+
+    Ok(())
+}
+
+#[allow(clippy::too_many_arguments)]
+fn patch_ota_payload(
+    open_payload: impl Fn() -> io::Result<Box<dyn ReadSeek>> + Sync,
+    writer: impl Write,
+    external_images: &HashMap<String, PathBuf>,
+    boot_partition: &str,
+    root_patcher: Option<Box<dyn BootImagePatcher + Send>>,
+    clear_vbmeta_flags: bool,
+    key_avb: &RsaPrivateKey,
+    key_ota: &RsaPrivateKey,
+    cert_ota: &Certificate,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<(String, u64)> {
+    let header = PayloadHeader::from_reader(open_payload()?)
+        .with_context(|| anyhow!("Failed to load OTA payload header"))?;
+    let header = Mutex::new(header);
+    let header_locked = header.lock().unwrap();
+    let all_partitions = header_locked
+        .manifest
+        .partitions
+        .iter()
+        .map(|p| p.partition_name.as_str())
+        .collect::<HashSet<_>>();
+
+    // Use external partition images if provided. This may be a larger set than
+    // what's needed for our patches.
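+    // (These come from repeated `--replace <PARTITION> <FILE>` pairs on the
+    // command line; see `PatchCli::replace` below.)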
+    for (name, path) in external_images {
+        if !all_partitions.contains(name.as_str()) {
+            bail!("Cannot replace non-existent {name} partition with {path:?}");
+        }
+    }
+
+    // Determine what images need to be patched. For simplicity, we pre-read all
+    // vbmeta images since they're tiny. They're discarded later if they don't
+    // need to be modified.
+    let required_images = get_required_images(
+        &header_locked.manifest,
+        boot_partition,
+        root_patcher.is_some(),
+    )?;
+    let vbmeta_images = required_images
+        .iter()
+        .filter(|(n, _)| n.starts_with("@vbmeta:"))
+        .map(|(_, p)| p.clone())
+        .collect::<HashSet<_>>();
+
+    // The set of source images to be inserted into the new payload, replacing
+    // what was in the original payload. Initially, this refers to either real
+    // files on the filesystem (--replace option) or in-memory files (extracted
+    // from the old payload). The values will be replaced later if the images
+    // need to be patched (eg. boot or vbmeta image).
+    let mut input_streams = open_input_streams(
+        &open_payload,
+        &required_images,
+        external_images,
+        &header_locked,
+        cancel_signal,
+    )?;
+
+    patch_boot_images(
+        &required_images,
+        &mut input_streams,
+        root_patcher,
+        key_avb,
+        cert_ota,
+        cancel_signal,
+    )?;
+
+    let mut vbmeta_order = get_vbmeta_patch_order(&mut input_streams, &vbmeta_images)?;
+
+    status!(
+        "Patching vbmeta images: {}",
+        joined(vbmeta_order.iter().map(|(n, _, _)| n)),
+    );
+
+    // Get rid of input readers for vbmeta partitions we don't need to modify.
+    for name in &vbmeta_images {
+        // Linear search is fast enough.
+        if !vbmeta_order.iter().any(|v| v.0 == *name) {
+            input_streams.remove(name);
+        }
+    }
+
+    update_vbmeta_descriptors(
+        &mut input_streams,
+        &mut vbmeta_order,
+        clear_vbmeta_flags,
+        key_avb,
+        header_locked.manifest.block_size.into(),
+    )?;
+
+    status!(
+        "Compressing replacement images: {}",
+        joined(sorted(input_streams.keys())),
+    );
+
+    let block_size = header_locked.manifest.block_size;
+    drop(header_locked);
+
+    input_streams
+        .par_iter_mut()
+        .map(|(name, stream)| -> Result<()> {
+            compress_image(name, stream, &header, block_size, cancel_signal)
+                .with_context(|| anyhow!("Failed to compress image: {name}"))
+        })
+        .collect::<Result<()>>()?;
+
+    status!("Generating new OTA payload");
+
+    let header_locked = header.lock().unwrap();
+    let mut payload_writer = PayloadWriter::new(writer, header_locked.clone(), key_ota.clone())
+        .with_context(|| anyhow!("Failed to write payload header"))?;
+    let mut orig_payload_reader = open_payload()?;
+
+    while payload_writer
+        .begin_next_operation()
+        .with_context(|| anyhow!("Failed to begin next payload blob entry"))?
+    {
+        let name = payload_writer.partition().unwrap().partition_name.clone();
+        let operation = payload_writer.operation().unwrap();
+
+        let Some(data_length) = operation.data_length else {
+            // Otherwise, this is a ZERO/DISCARD operation.
+            continue;
+        };
+
+        if let Some(mut reader) = input_streams.remove(&name) {
+            // Copy from our replacement image.
+            reader.rewind()?;
+
+            stream::copy_n(&mut reader, &mut payload_writer, data_length, cancel_signal)
+                .with_context(|| anyhow!("Failed to copy from replacement image: {name}"))?;
+        } else {
+            // Copy from the original payload.
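+            // Note that `data_offset` in the manifest is relative to the
+            // start of the blob section, so the header's `blob_offset` must
+            // be added to get an absolute position in the file.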
+            let pi = payload_writer.partition_index().unwrap();
+            let oi = payload_writer.operation_index().unwrap();
+            let orig_partition = &header_locked.manifest.partitions[pi];
+            let orig_operation = &orig_partition.operations[oi];
+
+            let data_offset = orig_operation
+                .data_offset
+                .and_then(|o| o.checked_add(header_locked.blob_offset))
+                .ok_or_else(|| anyhow!("Missing data_offset in partition #{pi} operation #{oi}"))?;
+
+            orig_payload_reader
+                .seek(SeekFrom::Start(data_offset))
+                .with_context(|| anyhow!("Failed to seek original payload to {data_offset}"))?;
+
+            stream::copy_n(
+                &mut orig_payload_reader,
+                &mut payload_writer,
+                data_length,
+                cancel_signal,
+            )
+            .with_context(|| anyhow!("Failed to copy from original payload: {name}"))?;
+        }
+    }
+
+    let (_, properties, metadata_size) = payload_writer
+        .finish()
+        .with_context(|| anyhow!("Failed to finalize payload"))?;
+
+    Ok((properties, metadata_size))
+}
+
+#[allow(clippy::too_many_arguments)]
+fn patch_ota_zip(
+    raw_reader: &PSeekFile,
+    zip_reader: &mut ZipArchive<BufReader<PSeekFile>>,
+    mut zip_writer: &mut ZipWriter<impl Write>,
+    external_images: &HashMap<String, PathBuf>,
+    boot_partition: &str,
+    mut root_patch: Option<Box<dyn BootImagePatcher + Send>>,
+    clear_vbmeta_flags: bool,
+    key_avb: &RsaPrivateKey,
+    key_ota: &RsaPrivateKey,
+    cert_ota: &Certificate,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<(OtaMetadata, u64)> {
+    let mut missing = BTreeSet::from([
+        ota::PATH_METADATA_PB,
+        ota::PATH_OTACERT,
+        ota::PATH_PAYLOAD,
+        ota::PATH_PROPERTIES,
+    ]);
+
+    // Keep in sorted order for reproducibility and to guarantee that the
+    // payload is processed before its properties file.
+    let paths = zip_reader
+        .file_names()
+        .map(|p| p.to_owned())
+        .collect::<BTreeSet<_>>();
+
+    for path in &paths {
+        missing.remove(path.as_str());
+    }
+
+    if !missing.is_empty() {
+        bail!("Missing entries in OTA zip: {}", joined(missing));
+    }
+
+    let mut metadata_pb_raw = None;
+    let mut properties = None;
+    let mut payload_metadata_size = None;
+    let mut entries = vec![];
+    let mut last_entry_used_zip64 = false;
+
+    for path in &paths {
+        let mut reader = zip_reader
+            .by_name(path)
+            .with_context(|| anyhow!("Failed to open zip entry: {path}"))?;
+
+        // Android's libarchive parser is broken and only reads data descriptor
+        // size fields as 64-bit integers if the central directory says the file
+        // size is >= 2^32 - 1. We'll turn on zip64 if the input is above this
+        // threshold. This should be sufficient since the output file is likely
+        // to be larger.
+        let use_zip64 = reader.size() >= 0xffffffff;
+        let options = FileOptions::default()
+            .compression_method(CompressionMethod::Stored)
+            .large_file(use_zip64);
+
+        match path.as_str() {
+            ota::PATH_METADATA => {
+                // Ignore because the plain-text legacy metadata file is
+                // regenerated from the new protobuf metadata.
+                continue;
+            }
+            ota::PATH_METADATA_PB => {
+                // Processed at the end after all other entries are written.
+                let mut buf = vec![];
+                reader
+                    .read_to_end(&mut buf)
+                    .with_context(|| anyhow!("Failed to read OTA metadata: {path}"))?;
+                metadata_pb_raw = Some(buf);
+                continue;
+            }
+            _ => {}
+        }
+
+        // All remaining entries are written immediately.
+        zip_writer
+            .start_file_with_extra_data(path, options)
+            .with_context(|| anyhow!("Failed to begin new zip entry: {path}"))?;
+        let offset = zip_writer
+            .end_extra_data()
+            .with_context(|| anyhow!("Failed to end new zip entry: {path}"))?;
+        let mut writer = CountingWriter::new(&mut zip_writer);
+
+        match path.as_str() {
+            ota::PATH_OTACERT => {
+                // Use the user's certificate.
+                status!("Replacing zip entry: {path}");
+
+                crypto::write_pem_cert(&mut writer, cert_ota)
+                    .with_context(|| anyhow!("Failed to write entry: {path}"))?;
+            }
+            ota::PATH_PAYLOAD => {
+                status!("Patching zip entry: {path}");
+
+                if reader.compression() != CompressionMethod::Stored {
+                    bail!("{path} is not stored uncompressed");
+                }
+
+                let payload_offset = reader.data_start();
+                let payload_size = reader.size();
+
+                let (p, m) = patch_ota_payload(
+                    || {
+                        // The zip library doesn't provide us with a seekable
+                        // reader, so we make our own from the underlying file.
+                        Ok(Box::new(SectionReader::new(
+                            BufReader::new(raw_reader.clone()),
+                            payload_offset,
+                            payload_size,
+                        )?))
+                    },
+                    &mut writer,
+                    external_images,
+                    boot_partition,
+                    // There's only one payload in the OTA.
+                    root_patch.take(),
+                    clear_vbmeta_flags,
+                    key_avb,
+                    key_ota,
+                    cert_ota,
+                    cancel_signal,
+                )
+                .with_context(|| anyhow!("Failed to patch payload: {path}"))?;
+
+                properties = Some(p);
+                payload_metadata_size = Some(m);
+            }
+            ota::PATH_PROPERTIES => {
+                status!("Patching zip entry: {path}");
+
+                // payload.bin is guaranteed to be patched first.
+                writer
+                    .write_all(properties.as_ref().unwrap().as_bytes())
+                    .with_context(|| anyhow!("Failed to write payload properties: {path}"))?;
+            }
+            _ => {
+                status!("Copying zip entry: {path}");
+
+                stream::copy(&mut reader, &mut writer, cancel_signal)
+                    .with_context(|| anyhow!("Failed to copy zip entry: {path}"))?;
+            }
+        }
+
+        // Cannot fail.
+        let size = writer.stream_position()?;
+
+        entries.push(ZipEntry {
+            name: path.clone(),
+            offset,
+            size,
+        });
+
+        last_entry_used_zip64 = use_zip64;
+    }
+
+    status!("Generating new OTA metadata");
+
+    let data_descriptor_size = if last_entry_used_zip64 { 24 } else { 16 };
+    let metadata = ota::add_metadata(
+        &entries,
+        zip_writer,
+        // Offset where next entry would begin
+        entries.last().map(|e| e.offset + e.size).unwrap() + data_descriptor_size,
+        &metadata_pb_raw.unwrap(),
+        payload_metadata_size.unwrap(),
+    )
+    .with_context(|| anyhow!("Failed to write new OTA metadata"))?;
+
+    Ok((metadata, payload_metadata_size.unwrap()))
+}
+
+fn extract_ota_zip(
+    raw_reader: &PSeekFile,
+    directory: &Path,
+    payload_offset: u64,
+    payload_size: u64,
+    header: &PayloadHeader,
+    images: &BTreeSet<String>,
+    cancel_signal: &Arc<AtomicBool>,
+) -> Result<()> {
+    for name in images {
+        if Path::new(name).file_name() != Some(OsStr::new(name)) {
+            bail!("Unsafe partition name: {name}");
+        }
+    }
+
+    fs::create_dir_all(directory)
+        .with_context(|| anyhow!("Failed to create directory: {directory:?}"))?;
+
+    status!("Extracting from the payload: {}", joined(images));
+
+    // Pre-open all output files.
+    let output_files = images
+        .iter()
+        .map(|name| {
+            let path = directory.join(format!("{name}.img"));
+            let file = File::create(&path)
+                .map(PSeekFile::new)
+                .with_context(|| anyhow!("Failed to open for writing: {path:?}"))?;
+            Ok((name.as_str(), file))
+        })
+        .collect::<Result<HashMap<_, _>>>()?;
+
+    // Extract the images. Each time we're asked to open a new file, we just
+    // clone the relevant PSeekFile. We only ever have one actual kernel file
+    // descriptor for each file.
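+    // (PSeekFile performs positioned reads and writes, pread/pwrite style, so
+    // the clones never contend over a shared file offset.)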
+    payload::extract_images(
+        || {
+            Ok(Box::new(SectionReader::new(
+                BufReader::new(raw_reader.clone()),
+                payload_offset,
+                payload_size,
+            )?))
+        },
+        |name| Ok(Box::new(BufWriter::new(output_files[name].clone()))),
+        header,
+        images.iter().map(|n| n.as_str()),
+        cancel_signal,
+    )
+    .with_context(|| anyhow!("Failed to extract images from payload"))?;
+
+    Ok(())
+}
+
+pub fn patch_subcommand(cli: &PatchCli, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    let output = cli.output.as_ref().map_or_else(
+        || {
+            let mut s = cli.input.clone().into_os_string();
+            s.push(".patched");
+            Cow::Owned(PathBuf::from(s))
+        },
+        Cow::Borrowed,
+    );
+
+    let passphrase_avb = if let Some(v) = &cli.pass_avb_env_var {
+        PassphraseSource::EnvVar(v.clone())
+    } else if let Some(p) = &cli.pass_avb_file {
+        PassphraseSource::File(p.clone())
+    } else {
+        PassphraseSource::Prompt(format!("Enter passphrase for {:?}: ", cli.key_avb))
+    };
+    let passphrase_ota = if let Some(v) = &cli.pass_ota_env_var {
+        PassphraseSource::EnvVar(v.clone())
+    } else if let Some(p) = &cli.pass_ota_file {
+        PassphraseSource::File(p.clone())
+    } else {
+        PassphraseSource::Prompt(format!("Enter passphrase for {:?}: ", cli.key_ota))
+    };
+
+    let key_avb = crypto::read_pem_key_file(&cli.key_avb, &passphrase_avb)
+        .with_context(|| anyhow!("Failed to load key: {:?}", cli.key_avb))?;
+    let key_ota = crypto::read_pem_key_file(&cli.key_ota, &passphrase_ota)
+        .with_context(|| anyhow!("Failed to load key: {:?}", cli.key_ota))?;
+    let cert_ota = crypto::read_pem_cert_file(&cli.cert_ota)
+        .with_context(|| anyhow!("Failed to load certificate: {:?}", cli.cert_ota))?;
+
+    if !crypto::cert_matches_key(&cert_ota, &key_ota)? {
+        bail!(
+            "Private key {:?} does not match certificate {:?}",
+            cli.key_ota,
+            cli.cert_ota,
+        );
+    }
+
+    let mut external_images = HashMap::new();
+
+    for item in cli.replace.chunks_exact(2) {
+        let name = item[0]
+            .to_str()
+            .ok_or_else(|| anyhow!("Invalid partition name: {:?}", item[0]))?;
+        let path = Path::new(&item[1]);
+
+        external_images.insert(name.to_owned(), path.to_owned());
+    }
+
+    let root_patcher: Option<Box<dyn BootImagePatcher + Send>> = if cli.root.rootless {
+        None
+    } else if let Some(magisk) = &cli.root.magisk {
+        let patcher = MagiskRootPatcher::new(
+            magisk,
+            cli.magisk_preinit_device.as_deref(),
+            cli.magisk_random_seed,
+            cli.ignore_magisk_warnings,
+            move |s| warning!("{s}"),
+        )
+        .with_context(|| anyhow!("Failed to create Magisk boot image patcher"))?;
+
+        Some(Box::new(patcher))
+    } else if let Some(prepatched) = &cli.root.prepatched {
+        let patcher =
+            PrepatchedImagePatcher::new(prepatched, cli.ignore_prepatched_compat + 1, move |s| {
+                warning!("{s}");
+            });
+
+        Some(Box::new(patcher))
+    } else {
+        unreachable!()
+    };
+
+    let start = Instant::now();
+
+    let raw_reader = File::open(&cli.input)
+        .map(PSeekFile::new)
+        .with_context(|| anyhow!("Failed to open for reading: {:?}", cli.input))?;
+    let mut zip_reader = ZipArchive::new(BufReader::new(raw_reader.clone()))
+        .with_context(|| anyhow!("Failed to read zip: {:?}", cli.input))?;
+
+    // Open the output file for reading too, so we can verify offsets later.
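+    // The output is first written to a temporary file in the destination
+    // directory and is only persisted (atomically renamed) over the final
+    // path once signing and offset verification succeed.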
+    let temp_writer = NamedTempFile::with_prefix_in(
+        output
+            .file_name()
+            .unwrap_or_else(|| OsStr::new("avbroot.tmp")),
+        output.parent().unwrap_or_else(|| Path::new(".")),
+    )
+    .with_context(|| anyhow!("Failed to open temporary output file"))?;
+    let temp_path = temp_writer.path().to_owned();
+    let hole_punching_writer = HolePunchingWriter::new(temp_writer);
+    let buffered_writer = BufWriter::new(hole_punching_writer);
+    let signing_writer = SigningWriter::new(buffered_writer);
+    let mut zip_writer = ZipWriter::new_streaming(signing_writer);
+
+    let (metadata, payload_metadata_size) = patch_ota_zip(
+        &raw_reader,
+        &mut zip_reader,
+        &mut zip_writer,
+        &external_images,
+        &cli.boot_partition,
+        root_patcher,
+        cli.clear_vbmeta_flags,
+        &key_avb,
+        &key_ota,
+        &cert_ota,
+        cancel_signal,
+    )
+    .with_context(|| anyhow!("Failed to patch OTA zip"))?;
+
+    let sign_writer = zip_writer
+        .finish()
+        .with_context(|| anyhow!("Failed to finalize output zip"))?;
+    let buffered_writer = sign_writer
+        .finish(&key_ota, &cert_ota)
+        .with_context(|| anyhow!("Failed to sign output zip"))?;
+    let hole_punching_writer = buffered_writer
+        .into_inner()
+        .with_context(|| anyhow!("Failed to flush output zip"))?;
+    let mut temp_writer = hole_punching_writer.into_inner();
+    temp_writer
+        .flush()
+        .with_context(|| anyhow!("Failed to flush output zip"))?;
+
+    // We do a lot of low-level hackery. Reopen and verify offsets.
+    status!("Verifying metadata offsets");
+    temp_writer.rewind()?;
+    ota::verify_metadata(
+        BufReader::new(&mut temp_writer),
+        &metadata,
+        payload_metadata_size,
+    )
+    .with_context(|| anyhow!("Failed to verify OTA metadata offsets"))?;
+
+    status!("Completed after {:.1}s", start.elapsed().as_secs_f64());
+
+    // NamedTempFile forces 600 permissions on temp files because it's the safe
+    // option for a shared /tmp. Since we're writing to the output file's
+    // directory, just mimic umask.
+    #[cfg(unix)]
+    {
+        use std::{fs::Permissions, os::unix::prelude::PermissionsExt};
+
+        use rustix::{fs::Mode, process::umask};
+
+        let mask = umask(Mode::empty());
+        umask(mask);
+
+        // Mac uses a 16-bit value.
+        #[allow(clippy::useless_conversion)]
+        let mode = u32::from(0o666 & !mask.bits());
+
+        temp_writer
+            .as_file()
+            .set_permissions(Permissions::from_mode(mode))
+            .with_context(|| anyhow!("Failed to set permissions to {mode:o}: {temp_path:?}"))?;
+    }
+
+    temp_writer.persist(output.as_ref()).with_context(|| {
+        anyhow!("Failed to move temporary file to output path: {temp_path:?} -> {output:?}")
+    })?;
+
+    Ok(())
+}
+
+pub fn extract_subcommand(cli: &ExtractCli, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    let raw_reader = File::open(&cli.input)
+        .map(PSeekFile::new)
+        .with_context(|| anyhow!("Failed to open for reading: {:?}", cli.input))?;
+    let mut zip = ZipArchive::new(BufReader::new(raw_reader.clone()))
+        .with_context(|| anyhow!("Failed to read zip: {:?}", cli.input))?;
+    let payload_entry = zip
+        .by_name(ota::PATH_PAYLOAD)
+        .with_context(|| anyhow!("Failed to open zip entry: {:?}", ota::PATH_PAYLOAD))?;
+    let payload_offset = payload_entry.data_start();
+    let payload_size = payload_entry.size();
+
+    // Open the payload data directly.
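+    // SectionReader limits reads to the byte range of the payload.bin entry,
+    // which is valid because OTA payloads are stored in the zip without
+    // compression.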
+    let mut payload_reader = SectionReader::new(
+        BufReader::new(raw_reader.clone()),
+        payload_offset,
+        payload_size,
+    )?;
+
+    let header = PayloadHeader::from_reader(&mut payload_reader)
+        .with_context(|| anyhow!("Failed to load OTA payload header"))?;
+    let mut unique_images = BTreeSet::new();
+
+    if cli.all {
+        unique_images.extend(
+            header
+                .manifest
+                .partitions
+                .iter()
+                .map(|p| &p.partition_name)
+                .cloned(),
+        );
+    } else {
+        let images = get_required_images(&header.manifest, &cli.boot_partition, true)?;
+
+        if cli.boot_only {
+            unique_images.insert(images["@rootpatch"].clone());
+        } else {
+            unique_images.extend(images.into_values());
+        }
+    }
+
+    extract_ota_zip(
+        &raw_reader,
+        &cli.directory,
+        payload_offset,
+        payload_size,
+        &header,
+        &unique_images,
+        cancel_signal,
+    )?;
+
+    Ok(())
+}
+
+pub fn verify_subcommand(cli: &VerifyCli, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    let raw_reader = File::open(&cli.input)
+        .map(PSeekFile::new)
+        .with_context(|| anyhow!("Failed to open for reading: {:?}", cli.input))?;
+    let mut reader = BufReader::new(raw_reader);
+
+    status!("Verifying whole-file signature");
+
+    let embedded_cert = ota::verify_ota(&mut reader, cancel_signal)?;
+
+    let (metadata, ota_cert, header, properties) = ota::parse_zip_ota_info(&mut reader)?;
+    if embedded_cert != ota_cert {
+        bail!(
+            "CMS embedded certificate does not match {}",
+            ota::PATH_OTACERT,
+        );
+    } else if let Some(p) = &cli.cert_ota {
+        let verify_cert = crypto::read_pem_cert_file(p)
+            .with_context(|| anyhow!("Failed to load certificate: {:?}", p))?;
+
+        if embedded_cert != verify_cert {
+            bail!("OTA has a valid signature, but was not signed with: {p:?}");
+        }
+    } else {
+        warning!("Whole-file signature is valid, but its trust is unknown");
+    }
+
+    ota::verify_metadata(&mut reader, &metadata, header.blob_offset)
+        .with_context(|| anyhow!("Failed to verify OTA metadata offsets"))?;
+
+    status!("Verifying payload");
+
+    let pfs_raw = metadata
+        .property_files
+        .get(ota::PF_NAME)
+        .ok_or_else(|| anyhow!("Missing property files: {}", ota::PF_NAME))?;
+    let pfs = ota::parse_property_files(pfs_raw)
+        .with_context(|| anyhow!("Failed to parse property files: {}", ota::PF_NAME))?;
+    let pf_payload = pfs
+        .iter()
+        .find(|pf| pf.name == ota::PATH_PAYLOAD)
+        .ok_or_else(|| anyhow!("Missing property files entry: {}", ota::PATH_PAYLOAD))?;
+
+    let section_reader = SectionReader::new(&mut reader, pf_payload.offset, pf_payload.size)?;
+
+    payload::verify_payload(section_reader, &ota_cert, &properties, cancel_signal)?;
+
+    status!("Extracting partition images to temporary directory");
+
+    let temp_dir =
+        TempDir::new().with_context(|| anyhow!("Failed to create temporary directory"))?;
+    let raw_reader = reader.into_inner();
+    let unique_images = header
+        .manifest
+        .partitions
+        .iter()
+        .map(|p| &p.partition_name)
+        .cloned()
+        .collect::<BTreeSet<_>>();
+
+    extract_ota_zip(
+        &raw_reader,
+        temp_dir.path(),
+        pf_payload.offset,
+        pf_payload.size,
+        &header,
+        &unique_images,
+        cancel_signal,
+    )?;
+
+    status!("Verifying AVB signatures");
+
+    let public_key = if let Some(p) = &cli.public_key_avb {
+        let data = fs::read(p).with_context(|| anyhow!("Failed to read file: {p:?}"))?;
+        let key = avb::decode_public_key(&data)
+            .with_context(|| anyhow!("Failed to decode public key: {p:?}"))?;
+
+        Some(key)
+    } else {
+        None
+    };
+
+    let mut seen = HashSet::<String>::new();
+    let mut descriptors = HashMap::<String, Descriptor>::new();
+
+    cli::avb::verify_headers(
+        temp_dir.path(),
+        "vbmeta",
+        public_key.as_ref(),
+        &mut seen,
+        &mut descriptors,
+    )?;
+    cli::avb::verify_descriptors(temp_dir.path(), &descriptors, cancel_signal)?;
+
+    status!("Signatures are all valid!");
+
+    Ok(())
+}
+
+pub fn ota_main(cli: &OtaCli, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+    match &cli.command {
+        OtaCommand::Patch(c) => patch_subcommand(c, cancel_signal),
+        OtaCommand::Extract(c) => extract_subcommand(c, cancel_signal),
+        OtaCommand::Verify(c) => verify_subcommand(c, cancel_signal),
+    }
+}
+
+// We currently use the `conflicts_with_all` option instead of `requires`
+// because the latter currently doesn't work when the dependent is an argument
+// inside a group: https://github.com/clap-rs/clap/issues/4707. Even if that
+// were fixed, the former option's error message is much more user friendly.
+
+#[derive(Debug, Args)]
+#[group(required = true, multiple = false)]
+pub struct RootGroup {
+    /// Path to Magisk APK.
+    #[arg(long, value_name = "FILE", value_parser)]
+    pub magisk: Option<PathBuf>,
+
+    /// Path to prepatched boot image.
+    #[arg(long, value_name = "FILE", value_parser)]
+    pub prepatched: Option<PathBuf>,
+
+    /// Skip applying root patch.
+    #[arg(long)]
+    pub rootless: bool,
+}
+
+/// Patch a full OTA zip.
+#[derive(Debug, Parser)]
+pub struct PatchCli {
+    /// Path to original OTA zip.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    pub input: PathBuf,
+
+    /// Path to new OTA zip.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    pub output: Option<PathBuf>,
+
+    /// Private key for signing vbmeta images.
+    #[arg(long, alias = "privkey-avb", value_name = "FILE", value_parser)]
+    pub key_avb: PathBuf,
+
+    /// Private key for signing the OTA.
+    #[arg(long, alias = "privkey-ota", value_name = "FILE", value_parser)]
+    pub key_ota: PathBuf,
+
+    /// Certificate for OTA signing key.
+    #[arg(long, value_name = "FILE", value_parser)]
+    pub cert_ota: PathBuf,
+
+    /// Environment variable containing AVB private key passphrase.
+    #[arg(
+        long,
+        alias = "passphrase-avb-env-var",
+        value_name = "ENV_VAR",
+        value_parser,
+        group = "pass_avb"
+    )]
+    pub pass_avb_env_var: Option<OsString>,
+
+    /// File containing AVB private key passphrase.
+    #[arg(
+        long,
+        alias = "passphrase-avb-file",
+        value_name = "FILE",
+        value_parser,
+        group = "pass_avb"
+    )]
+    pub pass_avb_file: Option<PathBuf>,
+
+    /// Environment variable containing OTA private key passphrase.
+    #[arg(
+        long,
+        alias = "passphrase-ota-env-var",
+        value_name = "ENV_VAR",
+        value_parser,
+        group = "pass_ota"
+    )]
+    pub pass_ota_env_var: Option<OsString>,
+
+    /// File containing OTA private key passphrase.
+    #[arg(
+        long,
+        alias = "passphrase-ota-file",
+        value_name = "FILE",
+        value_parser,
+        group = "pass_ota"
+    )]
+    pub pass_ota_file: Option<PathBuf>,
+
+    /// Use partition image from a file instead of the original payload.
+    #[arg(long, value_names = ["PARTITION", "FILE"], value_parser = value_parser!(OsString), num_args = 2)]
+    pub replace: Vec<OsString>,
+
+    #[command(flatten)]
+    pub root: RootGroup,
+
+    /// Magisk preinit block device.
+    #[arg(long, value_name = "PARTITION", conflicts_with_all = ["prepatched", "rootless"])]
+    pub magisk_preinit_device: Option<String>,
+
+    /// Magisk random seed.
+    #[arg(long, value_name = "NUMBER", conflicts_with_all = ["prepatched", "rootless"])]
+    pub magisk_random_seed: Option<u64>,
+
+    /// Ignore Magisk compatibility/version warnings.
+    #[arg(long, conflicts_with_all = ["prepatched", "rootless"])]
+    pub ignore_magisk_warnings: bool,
+
+    /// Ignore compatibility issues with prepatched boot images.
+    #[arg(long, action = ArgAction::Count, conflicts_with_all = ["magisk", "rootless"])]
+    pub ignore_prepatched_compat: u8,
+
+    /// Forcibly clear vbmeta flags if they disable AVB.
+    #[arg(long)]
+    pub clear_vbmeta_flags: bool,
+
+    /// Boot partition name.
+    #[arg(long, value_name = "PARTITION", default_value = "@gki_ramdisk")]
+    pub boot_partition: String,
+}
+
+/// Extract partition images from an OTA zip's payload.
+#[derive(Debug, Parser)]
+pub struct ExtractCli {
+    /// Path to OTA zip.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    pub input: PathBuf,
+
+    /// Output directory for extracted images.
+    #[arg(short, long, value_parser, default_value = ".")]
+    pub directory: PathBuf,
+
+    /// Extract all images from the payload.
+    #[arg(short, long, group = "extract")]
+    pub all: bool,
+
+    /// Extract only the boot image.
+    #[arg(long, group = "extract")]
+    pub boot_only: bool,
+
+    /// Boot partition name.
+    #[arg(long, value_name = "PARTITION", default_value = "@gki_ramdisk")]
+    pub boot_partition: String,
+}
+
+/// Verify signatures of an OTA.
+///
+/// This includes both the whole-file signature and the payload signature.
+#[derive(Debug, Parser)]
+pub struct VerifyCli {
+    /// Path to OTA zip.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    pub input: PathBuf,
+
+    /// Certificate for verifying the OTA signatures.
+    ///
+    /// If this is omitted, the check only verifies that the signatures are
+    /// valid, not that they are trusted.
+    #[arg(long, value_name = "FILE", value_parser)]
+    pub cert_ota: Option<PathBuf>,
+
+    /// Public key for verifying the vbmeta signatures.
+    ///
+    /// If this is omitted, the check only verifies that the signatures are
+    /// valid, not that they are trusted.
+    #[arg(long, value_name = "FILE", value_parser)]
+    pub public_key_avb: Option<PathBuf>,
+}
+
+#[allow(clippy::large_enum_variant)]
+#[derive(Debug, Subcommand)]
+enum OtaCommand {
+    Patch(PatchCli),
+    Extract(ExtractCli),
+    Verify(VerifyCli),
+}
+
+/// Patch or extract OTA images.
+#[derive(Debug, Parser)]
+pub struct OtaCli {
+    #[command(subcommand)]
+    command: OtaCommand,
+}
diff --git a/src/cli/ramdisk.rs b/src/cli/ramdisk.rs
new file mode 100644
index 0000000..cad2bc5
--- /dev/null
+++ b/src/cli/ramdisk.rs
@@ -0,0 +1,153 @@
+/*
+ * SPDX-FileCopyrightText: 2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+use std::{
+    fs::File,
+    path::{Path, PathBuf},
+    str,
+};
+
+use anyhow::{Context, Result};
+use clap::{Parser, Subcommand};
+
+use crate::{
+    format::{
+        compression::{CompressedFormat, CompressedReader, CompressedWriter},
+        cpio::{self, CpioEntryNew},
+    },
+    util::EscapedString,
+};
+
+static CONTENT_BEGIN: &str = "----- BEGIN UTF-8 CONTENT -----";
+static CONTENT_END: &str = "----- END UTF-8 CONTENT -----";
+static CONTENT_END_NO_NEWLINE: &str = "----- END UTF-8 CONTENT (NO NEWLINE) -----";
+
+static BINARY_BEGIN: &str = "----- BEGIN BINARY CONTENT -----";
+static BINARY_END: &str = "----- END BINARY CONTENT -----";
+static BINARY_END_TRUNCATED: &str = "----- END BINARY CONTENT (TRUNCATED) -----";
+
+static NO_DATA: &str = "----- NO DATA -----";
+
+fn print_content(data: &[u8], truncate: bool) {
+    if data.is_empty() {
+        println!("{NO_DATA}");
+        return;
+    }
+
+    if !data.contains(&b'\0') {
+        if let Ok(s) = str::from_utf8(data) {
+            if !s.contains(CONTENT_BEGIN)
+                && !s.contains(CONTENT_END)
+                && !s.contains(CONTENT_END_NO_NEWLINE)
+            {
+                println!("{CONTENT_BEGIN}");
+                print!("{s}");
+                if data.last() == Some(&b'\n') {
+                    println!("{CONTENT_END}");
+                } else {
+                    println!();
+                    println!("{CONTENT_END_NO_NEWLINE}");
+                }
+
+                return;
+            }
+        }
+    }
+
+    println!("{BINARY_BEGIN}");
+
+    if data.len() > 512 && truncate {
+        println!("{}", EscapedString::new_unquoted(&data[..512]));
+        println!("{BINARY_END_TRUNCATED}");
+    } else {
+        println!("{}", EscapedString::new_unquoted(data));
+        println!("{BINARY_END}");
+    }
+}
+
+fn load_archive(
+    path: &Path,
+    include_trailer: bool,
+) -> Result<(Vec<CpioEntryNew>, CompressedFormat)> {
+    let file = File::open(path)?;
+    let reader = CompressedReader::new(file, true)?;
+    let format = reader.format();
+    let entries = cpio::load(reader, include_trailer)?;
+
+    Ok((entries, format))
+}
+
+fn save_archive(path: &Path, entries: &[CpioEntryNew], format: CompressedFormat) -> Result<()> {
+    let file = File::create(path)?;
+    let mut writer = CompressedWriter::new(file, format)?;
+    cpio::save(&mut writer, entries, false)?;
+    writer.finish()?;
+
+    Ok(())
+}
+
+pub fn ramdisk_main(cli: &RamdiskCli) -> Result<()> {
+    match &cli.command {
+        RamdiskCommand::Dump(c) => {
+            let (entries, format) = load_archive(&c.input, true)
+                .with_context(|| format!("Failed to read cpio: {:?}", c.input))?;
+
+            println!("Compression format: {format:?}");
+            println!();
+
+            for entry in entries {
+                println!("{entry}");
+                print_content(&entry.content, !c.no_truncate);
+                println!();
+            }
+        }
+        RamdiskCommand::Repack(c) => {
+            let (entries, format) = load_archive(&c.input, false)
+                .with_context(|| format!("Failed to read cpio: {:?}", c.input))?;
+
+            save_archive(&c.output, &entries, format)
+                .with_context(|| format!("Failed to write cpio: {:?}", c.output))?;
+        }
+    }
+
+    Ok(())
+}
+
+/// Dump cpio headers and data.
+#[derive(Debug, Parser)]
+struct DumpCli {
+    /// Path to input cpio file.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    input: PathBuf,
+
+    /// Do not truncate binary file contents.
+    #[arg(long)]
+    no_truncate: bool,
+}
+
+/// Repack cpio archive.
+#[derive(Debug, Parser)]
+struct RepackCli {
+    /// Path to input cpio file.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    input: PathBuf,
+
+    /// Path to output cpio file.
+    #[arg(short, long, value_name = "FILE", value_parser)]
+    output: PathBuf,
+}
+
+#[derive(Debug, Subcommand)]
+enum RamdiskCommand {
+    Dump(DumpCli),
+    Repack(RepackCli),
+}
+
+/// Show information about ramdisk cpio archives.
+#[derive(Debug, Parser)]
+pub struct RamdiskCli {
+    #[command(subcommand)]
+    command: RamdiskCommand,
+}
diff --git a/src/crypto.rs b/src/crypto.rs
new file mode 100644
index 0000000..53d9762
--- /dev/null
+++ b/src/crypto.rs
@@ -0,0 +1,406 @@
+/*
+ * SPDX-FileCopyrightText: 2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+use std::{
+    env::{self, VarError},
+    ffi::OsString,
+    fs::{self, File, OpenOptions},
+    io::{self, BufReader, BufWriter, Read, Write},
+    path::{Path, PathBuf},
+    time::Duration,
+};
+
+use cms::{
+    cert::{CertificateChoices, IssuerAndSerialNumber},
+    content_info::{CmsVersion, ContentInfo},
+    signed_data::{
+        CertificateSet, DigestAlgorithmIdentifiers, EncapsulatedContentInfo, SignatureValue,
+        SignedData, SignerIdentifier, SignerInfo, SignerInfos,
+    },
+};
+use pkcs8::{
+    pkcs5::{pbes2, scrypt},
+    DecodePrivateKey, EncodePrivateKey, EncodePublicKey, EncryptedPrivateKeyInfo, LineEnding,
+    PrivateKeyInfo,
+};
+use rand::RngCore;
+use rsa::{pkcs1v15::SigningKey, Pkcs1v15Sign, RsaPrivateKey, RsaPublicKey};
+use sha2::Sha256;
+use thiserror::Error;
+use x509_cert::{
+    builder::{Builder, CertificateBuilder, Profile},
+    der::{pem::PemLabel, referenced::OwnedToRef, Any, Decode, DecodePem, EncodePem},
+    serial_number::SerialNumber,
+    spki::{AlgorithmIdentifierOwned, SubjectPublicKeyInfoOwned},
+    time::Validity,
+    Certificate,
+};
+
+#[derive(Debug, Error)]
+pub enum Error {
+    #[error("Passphrases do not match")]
+    ConfirmPassphrase,
+    #[error("Failed to read environment variable: {0:?}")]
+    InvalidEnvVar(OsString, #[source] VarError),
+    #[error("PEM has start tag, but no end tag")]
+    PemNoEndTag,
+    #[error("Failed to load encrypted private key")]
+    LoadKeyEncrypted(#[source] pkcs8::Error),
+    #[error("Failed to load unencrypted private key")]
+    LoadKeyUnencrypted(#[source] pkcs8::Error),
+    #[error("Failed to save encrypted private key")]
+    SaveKeyEncrypted(#[source] pkcs8::Error),
+    #[error("Failed to save unencrypted private key")]
+    SaveKeyUnencrypted(#[source] pkcs8::Error),
+    #[error("X509 error")]
+    X509(#[from] x509_cert::builder::Error),
+    #[error("SPKI error")]
+    Spki(#[from] pkcs8::spki::Error),
+    #[error("DER error")]
+    Der(#[from] x509_cert::der::Error),
+    #[error("RSA error")]
+    RsaSign(#[from] rsa::Error),
+    #[error("I/O error")]
+    Io(#[from] io::Error),
+}
+
+type Result<T, E = Error> = std::result::Result<T, E>;
+
+pub enum PassphraseSource {
+    Prompt(String),
+    EnvVar(OsString),
+    File(PathBuf),
+}
+
+impl PassphraseSource {
+    pub fn acquire(&self, confirm: bool) -> Result<String> {
+        let passphrase = match self {
+            Self::Prompt(p) => {
+                let first = rpassword::prompt_password(p)?;
+
+                if confirm {
+                    let second = rpassword::prompt_password("Confirm: ")?;
+
+                    if first != second {
+                        return Err(Error::ConfirmPassphrase);
+                    }
+                }
+
+                first
+            }
+            Self::EnvVar(v) => env::var(v).map_err(|e| Error::InvalidEnvVar(v.clone(), e))?,
+            Self::File(p) => fs::read_to_string(p)?
+                .trim_end_matches(&['\r', '\n'])
+                .to_owned(),
+        };
+
+        Ok(passphrase)
+    }
+}
+
+/// Generate a 4096-bit RSA key pair.
+pub fn generate_rsa_key_pair() -> Result<RsaPrivateKey> {
+    let mut rng = rand::thread_rng();
+
+    // avbroot supports 4096-bit keys only.
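+    // (This matches the SHA256_RSA4096 AVB algorithm that patched vbmeta
+    // headers are signed with.)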
+    let key = RsaPrivateKey::new(&mut rng, 4096)?;
+
+    Ok(key)
+}
+
+/// Generate a self-signed certificate.
+pub fn generate_cert(
+    key: &RsaPrivateKey,
+    serial: u64,
+    validity: Duration,
+    subject: &str,
+) -> Result<Certificate> {
+    let public_key_der = key.to_public_key().to_public_key_der()?;
+    let signing_key = SigningKey::<Sha256>::new(key.clone());
+
+    let builder = CertificateBuilder::new(
+        Profile::Root,
+        SerialNumber::from(serial),
+        Validity::from_now(validity)?,
+        subject.parse()?,
+        SubjectPublicKeyInfoOwned::from_der(public_key_der.as_bytes())?,
+        &signing_key,
+    )?;
+
+    let mut rng = rand::thread_rng();
+    let cert = builder.build_with_rng(&mut rng)?;
+
+    Ok(cert)
+}
+
+/// x509_cert/pem follow rfc7468 strictly instead of implementing a lenient
+/// parser. The PEM decoder rejects lines in the base64 section that are longer
+/// than 64 characters, excluding whitespace. We'll reformat the data to deal
+/// with this because there are certificates that do not follow the spec, like
+/// the signing cert for the Pixel 7 Pro official OTAs.
+fn reformat_pem(data: &[u8]) -> Result<Vec<u8>> {
+    let mut result = vec![];
+    let mut base64 = vec![];
+    let mut inside_base64 = false;
+
+    for mut line in data.split(|&c| c == b'\n') {
+        while !line.is_empty() && line[line.len() - 1].is_ascii_whitespace() {
+            line = &line[..line.len() - 1];
+        }
+
+        if line.is_empty() {
+            continue;
+        } else if line.starts_with(b"-----BEGIN CERTIFICATE-----") {
+            inside_base64 = true;
+        } else if line.starts_with(b"-----END CERTIFICATE-----") {
+            inside_base64 = false;
+
+            for chunk in base64.chunks(64) {
+                result.extend_from_slice(chunk);
+                result.push(b'\n');
+            }
+
+            base64.clear();
+        } else if inside_base64 {
+            base64.extend_from_slice(line);
+            continue;
+        }
+
+        result.extend_from_slice(line);
+        result.push(b'\n');
+    }
+
+    if inside_base64 {
+        return Err(Error::PemNoEndTag);
+    }
+
+    Ok(result)
+}
+
+/// Read PEM-encoded certificate from a reader.
+pub fn read_pem_cert(mut reader: impl Read) -> Result<Certificate> {
+    let mut data = vec![];
+    reader.read_to_end(&mut data)?;
+
+    let data = reformat_pem(&data)?;
+    let certificate = Certificate::from_pem(data)?;
+
+    Ok(certificate)
+}
+
+/// Write PEM-encoded certificate to a writer.
+pub fn write_pem_cert(mut writer: impl Write, cert: &Certificate) -> Result<()> {
+    let data = cert.to_pem(LineEnding::LF)?;
+
+    writer.write_all(data.as_bytes())?;
+
+    Ok(())
+}
+
+/// Read PEM-encoded certificate from a file.
+pub fn read_pem_cert_file(path: &Path) -> Result<Certificate> {
+    let file = File::open(path)?;
+    let reader = BufReader::new(file);
+
+    read_pem_cert(reader)
+}
+
+/// Write PEM-encoded certificate to a file.
+pub fn write_pem_cert_file(path: &Path, cert: &Certificate) -> Result<()> {
+    let file = File::create(path)?;
+    let writer = BufWriter::new(file);
+
+    write_pem_cert(writer, cert)
+}
+
+/// Read PEM-encoded PKCS8 private key from a reader.
+pub fn read_pem_key(mut reader: impl Read, source: &PassphraseSource) -> Result<RsaPrivateKey> {
+    let mut data = String::new();
+    reader.read_to_string(&mut data)?;
+
+    if data.contains("ENCRYPTED") {
+        let passphrase = source.acquire(false)?;
+
+        RsaPrivateKey::from_pkcs8_encrypted_pem(&data, passphrase).map_err(Error::LoadKeyEncrypted)
+    } else {
+        RsaPrivateKey::from_pkcs8_pem(&data).map_err(Error::LoadKeyUnencrypted)
+    }
+}
+
+/// Write PEM-encoded PKCS8 private key to a writer.
+pub fn write_pem_key(
+    mut writer: impl Write,
+    key: &RsaPrivateKey,
+    source: &PassphraseSource,
+) -> Result<()> {
+    let passphrase = source.acquire(true)?;
+
+    let data = if passphrase.is_empty() {
+        key.to_pkcs8_pem(LineEnding::LF)
+            .map_err(Error::SaveKeyUnencrypted)?
+    } else {
+        let mut rng = rand::thread_rng();
+
+        // Normally, we'd just use key.to_pkcs8_encrypted_pem(). However, it
+        // uses scrypt with n = 32768. This is high enough that openssl can no
+        // longer read the file and craps out with `memory limit exceeded`.
+        // Although we can read those files just fine, let's match openssl's
+        // default parameters for better compatibility.
+        //
+        // Per `man openssl-pkcs8`: -scrypt Uses the scrypt algorithm for
+        // private key encryption using default parameters: currently N=16384,
+        // r=8 and p=1 and AES in CBC mode with a 256 bit key.
+
+        let mut salt = [0u8; 16];
+        rng.fill_bytes(&mut salt);
+
+        let mut iv = [0u8; 16];
+        rng.fill_bytes(&mut iv);
+
+        // 14 = log_2(16384), 32 bytes = 256 bits
+        let scrypt_params = scrypt::Params::new(14, 8, 1, 32).unwrap();
+        let pbes2_params = pbes2::Parameters::scrypt_aes256cbc(scrypt_params, &salt, &iv).unwrap();
+
+        let plain_text_der = key.to_pkcs8_der().map_err(Error::SaveKeyEncrypted)?;
+        let private_key_info =
+            PrivateKeyInfo::try_from(plain_text_der.as_bytes()).map_err(Error::SaveKeyEncrypted)?;
+
+        let secret_doc = private_key_info
+            .encrypt_with_params(pbes2_params, passphrase)
+            .map_err(Error::SaveKeyEncrypted)?;
+
+        secret_doc.to_pem(EncryptedPrivateKeyInfo::PEM_LABEL, LineEnding::LF)?
+    };
+
+    writer.write_all(data.as_bytes())?;
+
+    Ok(())
+}
+
+/// Read PEM-encoded PKCS8 private key from a file.
+pub fn read_pem_key_file(path: &Path, source: &PassphraseSource) -> Result<RsaPrivateKey> {
+    let file = File::open(path)?;
+    let reader = BufReader::new(file);
+
+    read_pem_key(reader, source)
+}
+
+/// Save PEM-encoded PKCS8 private key to a file.
+pub fn write_pem_key_file(
+    path: &Path,
+    key: &RsaPrivateKey,
+    source: &PassphraseSource,
+) -> Result<()> {
+    let mut options = OpenOptions::new();
+    options.write(true);
+    options.create(true);
+    options.truncate(true);
+
+    #[cfg(unix)]
+    {
+        use std::os::unix::fs::OpenOptionsExt;
+        options.mode(0o600);
+    }
+
+    let file = options.open(path)?;
+    let writer = BufWriter::new(file);
+
+    write_pem_key(writer, key, source)
+}
+
+/// Get the RSA public key from a certificate.
+pub fn get_public_key(cert: &Certificate) -> Result<RsaPublicKey> {
+    let public_key =
+        RsaPublicKey::try_from(cert.tbs_certificate.subject_public_key_info.owned_to_ref())?;
+
+    Ok(public_key)
+}
+
+/// Check if a certificate matches a private key.
+pub fn cert_matches_key(cert: &Certificate, key: &RsaPrivateKey) -> Result<bool> {
+    let public_key = get_public_key(cert)?;
+
+    Ok(key.to_public_key() == public_key)
+}
+
+/// Parse a CMS [`SignedData`] structure from raw DER-encoded data.
+pub fn parse_cms(data: &[u8]) -> Result<SignedData> {
+    let ci = ContentInfo::from_der(data)?;
+    let sd = ci.content.decode_as::<SignedData>()?;
+
+    Ok(sd)
+}
+
+/// Get a list of all standard X509 certificates contained within a
+/// [`SignedData`] structure.
+pub fn get_cms_certs(sd: &SignedData) -> Vec<Certificate> {
+    sd.certificates.as_ref().map_or_else(Vec::new, |certs| {
+        certs
+            .0
+            .iter()
+            .filter_map(|cc| {
+                if let CertificateChoices::Certificate(c) = cc {
+                    Some(c.clone())
+                } else {
+                    None
+                }
+            })
+            .collect()
+    })
+}
+
+/// Create a CMS signature from an external digest. This implementation does not
+/// use signed attributes because AOSP recovery's otautil/verifier.cpp is not
+/// actually CMS compliant. It simply uses the CMS [`SignedData`] structure as
+/// a transport mechanism for a raw signature. Thus, we need to ensure that the
+/// signature covers nothing but the raw data.
+pub fn cms_sign_external(
+    key: &RsaPrivateKey,
+    cert: &Certificate,
+    digest: &[u8],
+) -> Result<ContentInfo> {
+    let scheme = Pkcs1v15Sign::new::<Sha256>();
+    let signature = key.sign(scheme, digest)?;
+
+    let digest_algorithm = AlgorithmIdentifierOwned {
+        oid: const_oid::db::rfc5912::ID_SHA_256,
+        parameters: None,
+    };
+
+    let signed_data = SignedData {
+        version: CmsVersion::V1,
+        digest_algorithms: DigestAlgorithmIdentifiers::try_from(vec![digest_algorithm.clone()])?,
+        encap_content_info: EncapsulatedContentInfo {
+            econtent_type: const_oid::db::rfc5911::ID_DATA,
+            econtent: None,
+        },
+        certificates: Some(CertificateSet::try_from(vec![
+            CertificateChoices::Certificate(cert.clone()),
+        ])?),
+        crls: None,
+        signer_infos: SignerInfos::try_from(vec![SignerInfo {
+            version: CmsVersion::V1,
+            sid: SignerIdentifier::IssuerAndSerialNumber(IssuerAndSerialNumber {
+                issuer: cert.tbs_certificate.issuer.clone(),
+                serial_number: cert.tbs_certificate.serial_number.clone(),
+            }),
+            digest_alg: digest_algorithm,
+            signed_attrs: None,
+            signature_algorithm: AlgorithmIdentifierOwned {
+                oid: const_oid::db::rfc5912::SHA_256_WITH_RSA_ENCRYPTION,
+                parameters: None,
+            },
+            signature: SignatureValue::new(signature)?,
+            unsigned_attrs: None,
+        }])?,
+    };
+
+    let signed_data = ContentInfo {
+        content_type: const_oid::db::rfc5911::ID_SIGNED_DATA,
+        content: Any::encode_from(&signed_data)?,
+    };
+
+    Ok(signed_data)
+}
diff --git a/src/format/avb.rs b/src/format/avb.rs
new file mode 100644
index 0000000..f9a02a3
--- /dev/null
+++ b/src/format/avb.rs
@@ -0,0 +1,1691 @@
+/*
+ * SPDX-FileCopyrightText: 2023 Andrew Gunnerson
+ * SPDX-License-Identifier: GPL-3.0-only
+ */
+
+use std::{
+    cmp, fmt,
+    io::{self, Cursor, Read, Seek, SeekFrom, Write},
+    str,
+    sync::{
+        atomic::{AtomicBool, Ordering},
+        Arc,
+    },
+};
+
+use byteorder::{BigEndian, ReadBytesExt, WriteBytesExt};
+use num_bigint_dig::{ModInverse, ToBigInt};
+use num_traits::{Pow, ToPrimitive};
+use rayon::prelude::{IntoParallelIterator, ParallelIterator};
+use ring::digest::{Algorithm, Context};
+use rsa::{traits::PublicKeyParts, BigUint, Pkcs1v15Sign, RsaPrivateKey, RsaPublicKey};
+use sha2::{Digest, Sha256, Sha512};
+use thiserror::Error;
+
+use crate::{
+    format::padding,
+    stream::{
+        self, CountingReader, FromReader, ReadDiscardExt, ReadSeek, ReadStringExt, ToWriter,
+        WriteStringExt, WriteZerosExt,
+    },
+    util::{self, EscapedString},
+};
+
+pub const VERSION_MAJOR: u32 = 1;
+pub const VERSION_MINOR: u32 = 2;
+pub const VERSION_SUB: u32 = 0;
+
+pub const FOOTER_VERSION_MAJOR: u32 = 1;
+pub const FOOTER_VERSION_MINOR: u32 = 0;
+
+pub const HEADER_MAGIC: [u8; 4] = *b"AVB0";
+pub const FOOTER_MAGIC: [u8; 4] = *b"AVBf";
+
+#[derive(Debug, Error)]
+pub enum Error {
+    #[error("Failed to read {0:?} field: {1}")]
+    ReadFieldError(&'static str, io::Error),
+    #[error("Failed to write {0:?} field: {1}")]
+    WriteFieldError(&'static str, io::Error),
+    #[error("{0:?} field does not have NULL terminator")]
+    StringNotNullTerminated(&'static str),
+    #[error("{0:?} field is not ASCII encoded: {1:?}")]
+    StringNotAscii(&'static str, String),
+    #[error("{0:?} field exceeds integer bounds")]
+    IntegerTooLarge(&'static str),
+    #[error("Descriptor padding is too long or data was not consumed")]
+    #[error("Descriptor padding is too long or data was not consumed")]
+    PaddingTooLong,
+    #[error("{0:?} field padding contains non-zero bytes")]
+    PaddingNotZero(&'static str),
+    #[error("{0:?} field + {1:?} field is out of bounds")]
+    OutOfBounds(&'static str, &'static str),
+    #[error("{0:?} field size does not equal size of contained items")]
+    IncorrectCombinedSize(&'static str),
+    #[error("Invalid VBMeta header magic: {0:?}")]
+    InvalidHeaderMagic([u8; 4]),
+    #[error("Invalid VBMeta footer magic: {0:?}")]
+    InvalidFooterMagic([u8; 4]),
+    #[error("RSA public key exponent not supported: {0}")]
+    UnsupportedRsaPublicExponent(BigUint),
+    #[error("Signature algorithm not supported: {0:?}")]
+    UnsupportedAlgorithm(AlgorithmType),
+    #[error("Hashing algorithm not supported: {0:?}")]
+    UnsupportedHashAlgorithm(String),
+    #[error("Incorrect key size ({0} bytes) for algorithm {1:?} ({2} bytes)")]
+    IncorrectKeySize(usize, AlgorithmType, usize),
+    #[error("Expected root digest {0}, but have {1}")]
+    InvalidRootDigest(String, String),
+    #[error("Expected hash tree {0}, but have {1}")]
+    InvalidHashtree(String, String),
+    #[error("Failed to RSA sign digest")]
+    RsaSignError(rsa::Error),
+    #[error("Failed to RSA verify signature")]
+    RsaVerifyError(rsa::Error),
+    #[error("{0} byte image size is too small to fit header or footer")]
+    ImageSizeTooSmall(u64),
+    #[error("I/O error")]
+    IoError(#[from] io::Error),
+}
+
+type Result<T> = std::result::Result<T, Error>;
+
+#[derive(Clone, Copy, Debug, Eq, PartialEq)]
+pub enum AlgorithmType {
+    None,
+    Sha256Rsa2048,
+    Sha256Rsa4096,
+    Sha256Rsa8192,
+    Sha512Rsa2048,
+    Sha512Rsa4096,
+    Sha512Rsa8192,
+    Unknown(u32),
+}
+
+impl AlgorithmType {
+    pub fn from_raw(value: u32) -> Self {
+        match value {
+            0 => Self::None,
+            1 => Self::Sha256Rsa2048,
+            2 => Self::Sha256Rsa4096,
+            3 => Self::Sha256Rsa8192,
+            4 => Self::Sha512Rsa2048,
+            5 => Self::Sha512Rsa4096,
+            6 => Self::Sha512Rsa8192,
+            v => Self::Unknown(v),
+        }
+    }
+
+    pub fn to_raw(self) -> u32 {
+        match self {
+            Self::None => 0,
+            Self::Sha256Rsa2048 => 1,
+            Self::Sha256Rsa4096 => 2,
+            Self::Sha256Rsa8192 => 3,
+            Self::Sha512Rsa2048 => 4,
+            Self::Sha512Rsa4096 => 5,
+            Self::Sha512Rsa8192 => 6,
+            Self::Unknown(v) => v,
+        }
+    }
+
+    pub fn hash_len(self) -> usize {
+        match self {
+            Self::None | Self::Unknown(_) => 0,
+            Self::Sha256Rsa2048 | Self::Sha256Rsa4096 | Self::Sha256Rsa8192 => {
+                Sha256::output_size()
+            }
+            Self::Sha512Rsa2048 | Self::Sha512Rsa4096 | Self::Sha512Rsa8192 => {
+                Sha512::output_size()
+            }
+        }
+    }
+
+    pub fn signature_len(self) -> usize {
+        match self {
+            Self::None | Self::Unknown(_) => 0,
+            Self::Sha256Rsa2048 | Self::Sha512Rsa2048 => 256,
+            Self::Sha256Rsa4096 | Self::Sha512Rsa4096 => 512,
+            Self::Sha256Rsa8192 | Self::Sha512Rsa8192 => 1024,
+        }
+    }
+
+    pub fn public_key_len(self) -> usize {
+        match self {
+            Self::None | Self::Unknown(_) => 0,
+            Self::Sha256Rsa2048 | Self::Sha512Rsa2048 => 8 + 2 * 2048 / 8,
+            Self::Sha256Rsa4096 | Self::Sha512Rsa4096 => 8 + 2 * 4096 / 8,
+            Self::Sha256Rsa8192 | Self::Sha512Rsa8192 => 8 + 2 * 8192 / 8,
+        }
+    }
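+    // The sizes returned by public_key_len() above follow the binary layout
+    // produced by encode_public_key() at the bottom of this file: a 4-byte
+    // key size, a 4-byte n0inv, then the modulus and r^2 mod n at the key
+    // size each - hence 8 + 2 * bits / 8 bytes in total.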
+    pub fn hash(self, data: &[u8]) -> Vec<u8> {
+        match self {
+            Self::None | Self::Unknown(_) => vec![],
+            Self::Sha256Rsa2048 | Self::Sha256Rsa4096 | Self::Sha256Rsa8192 => {
+                Sha256::digest(data).to_vec()
+            }
+            Self::Sha512Rsa2048 | Self::Sha512Rsa4096 | Self::Sha512Rsa8192 => {
+                Sha512::digest(data).to_vec()
+            }
+        }
+    }
+
+    pub fn sign(self, key: &RsaPrivateKey, digest: &[u8]) -> Result<Vec<u8>> {
+        let signature = match self {
+            Self::None | Self::Unknown(_) => vec![],
+            Self::Sha256Rsa2048 | Self::Sha256Rsa4096 | Self::Sha256Rsa8192 => {
+                let scheme = Pkcs1v15Sign::new::<Sha256>();
+                key.sign(scheme, digest).map_err(Error::RsaSignError)?
+            }
+            Self::Sha512Rsa2048 | Self::Sha512Rsa4096 | Self::Sha512Rsa8192 => {
+                let scheme = Pkcs1v15Sign::new::<Sha512>();
+                key.sign(scheme, digest).map_err(Error::RsaSignError)?
+            }
+        };
+
+        Ok(signature)
+    }
+
+    pub fn verify(self, key: &RsaPublicKey, digest: &[u8], signature: &[u8]) -> Result<()> {
+        match self {
+            Self::None | Self::Unknown(_) => {}
+            Self::Sha256Rsa2048 | Self::Sha256Rsa4096 | Self::Sha256Rsa8192 => {
+                let scheme = Pkcs1v15Sign::new::<Sha256>();
+                key.verify(scheme, digest, signature)
+                    .map_err(Error::RsaVerifyError)?;
+            }
+            Self::Sha512Rsa2048 | Self::Sha512Rsa4096 | Self::Sha512Rsa8192 => {
+                let scheme = Pkcs1v15Sign::new::<Sha512>();
+                key.verify(scheme, digest, signature)
+                    .map_err(Error::RsaVerifyError)?;
+            }
+        }
+
+        Ok(())
+    }
+}
+
+trait DescriptorTag {
+    const TAG: u64;
+
+    fn get_tag(&self) -> u64 {
+        Self::TAG
+    }
+}
+
+#[derive(Clone, Eq, PartialEq)]
+pub struct PropertyDescriptor {
+    pub key: String,
+    pub value: Vec<u8>,
+}
+
+impl fmt::Debug for PropertyDescriptor {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        f.debug_struct("PropertyDescriptor")
+            .field("key", &self.key)
+            .field("value", &EscapedString::new(&self.value))
+            .finish()
+    }
+}
+
+impl DescriptorTag for PropertyDescriptor {
+    const TAG: u64 = 0;
+}
+
+impl<R: Read> FromReader<R> for PropertyDescriptor {
+    type Error = Error;
+
+    fn from_reader(mut reader: R) -> Result<Self> {
+        let key_size = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("key_size"))?;
+        let value_size = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("value_size"))?;
+
+        let key = reader
+            .read_string_exact(key_size)
+            .map_err(|e| Error::ReadFieldError("key", e))?;
+
+        let mut null = [0u8; 1];
+        reader.read_exact(&mut null)?;
+        if null[0] != b'\0' {
+            return Err(Error::StringNotNullTerminated("key"));
+        }
+
+        let mut value = vec![0u8; value_size];
+        reader.read_exact(&mut value)?;
+
+        // The non-string value is also null terminated
+        reader.read_exact(&mut null)?;
+        if null[0] != b'\0' {
+            return Err(Error::StringNotNullTerminated("value"));
+        }
+
+        Ok(Self { key, value })
+    }
+}
+
+impl<W: Write> ToWriter<W> for PropertyDescriptor {
+    type Error = Error;
+
+    fn to_writer(&self, mut writer: W) -> Result<()> {
+        writer.write_u64::<BigEndian>(self.key.len().to_u64().unwrap())?;
+        writer.write_u64::<BigEndian>(self.value.len().to_u64().unwrap())?;
+        writer.write_all(self.key.as_bytes())?;
+        writer.write_all(b"\0")?;
+        writer.write_all(&self.value)?;
+        writer.write_all(b"\0")?;
+
+        Ok(())
+    }
+}
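+// Illustrative example (not from the original source): a property with key
+// "foo" and value "bar" serializes as:
+//
+//   00 00 00 00 00 00 00 03   key_size = 3 (big endian)
+//   00 00 00 00 00 00 00 03   value_size = 3
+//   66 6f 6f 00               "foo" + NUL
+//   62 61 72 00               "bar" + NUL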
+#[derive(Clone, Eq, PartialEq)]
+pub struct HashtreeDescriptor {
+    pub dm_verity_version: u32,
+    pub image_size: u64,
+    pub tree_offset: u64,
+    pub tree_size: u64,
+    pub data_block_size: u32,
+    pub hash_block_size: u32,
+    pub fec_num_roots: u32,
+    pub fec_offset: u64,
+    pub fec_size: u64,
+    pub hash_algorithm: String,
+    pub partition_name: String,
+    pub salt: Vec<u8>,
+    pub root_digest: Vec<u8>,
+    pub flags: u32,
+    pub reserved: [u8; 60],
+}
+
+impl fmt::Debug for HashtreeDescriptor {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        f.debug_struct("HashtreeDescriptor")
+            .field("dm_verity_version", &self.dm_verity_version)
+            .field("image_size", &self.image_size)
+            .field("tree_offset", &self.tree_offset)
+            .field("tree_size", &self.tree_size)
+            .field("data_block_size", &self.data_block_size)
+            .field("hash_block_size", &self.hash_block_size)
+            .field("fec_num_roots", &self.fec_num_roots)
+            .field("fec_offset", &self.fec_offset)
+            .field("fec_size", &self.fec_size)
+            .field("hash_algorithm", &self.hash_algorithm)
+            .field("partition_name", &self.partition_name)
+            .field("salt", &hex::encode(&self.salt))
+            .field("root_digest", &hex::encode(&self.root_digest))
+            .field("flags", &self.flags)
+            .field("reserved", &hex::encode(self.reserved))
+            .finish()
+    }
+}
+
+impl HashtreeDescriptor {
+    /// Calculate the hash tree digests for a single level of the tree. If the
+    /// reader's position is block-aligned and `image_size` is a multiple of
+    /// the block size, then this function can also be used to calculate the
+    /// digests for a portion of a level.
+    ///
+    /// NOTE: The result is **not** padded to the block size.
+    fn hash_one_level(
+        mut reader: impl Read,
+        mut image_size: u64,
+        block_size: u32,
+        algorithm: &'static Algorithm,
+        salt: &[u8],
+        cancel_signal: &Arc<AtomicBool>,
+    ) -> io::Result<Vec<u8>> {
+        // Each digest is padded to a power-of-2 size.
+        let digest_padding = algorithm.output_len.next_power_of_two() - algorithm.output_len;
+        let mut buf = vec![0u8; block_size as usize];
+        let mut result = vec![];
+
+        while image_size > 0 {
+            if cancel_signal.load(Ordering::SeqCst) {
+                return Err(io::Error::new(
+                    io::ErrorKind::Interrupted,
+                    "Received cancel signal",
+                ));
+            }
+
+            let n = image_size.min(buf.len() as u64) as usize;
+            reader.read_exact(&mut buf[..n])?;
+
+            // For undersized blocks, we still hash the whole buffer, except
+            // with padding.
+            buf[n..].fill(0);
+
+            let mut context = Context::new(algorithm);
+            context.update(salt);
+            context.update(&buf);
+
+            // Add the digest to the tree level. Each tree node must be a
+            // power of two.
+            let digest = context.finish();
+            result.extend(digest.as_ref());
+            result.resize(result.len() + digest_padding, 0);
+
+            image_size -= n as u64;
+        }
+
+        Ok(result)
+    }
+
+    /// Calls [`Self::hash_one_level()`] in parallel.
+    ///
+    /// NOTE: The result is **not** padded to the block size.
+    fn hash_one_level_parallel(
+        open_input: impl Fn() -> io::Result<Box<dyn ReadSeek>> + Sync,
+        image_size: u64,
+        block_size: u32,
+        algorithm: &'static Algorithm,
+        salt: &[u8],
+        cancel_signal: &Arc<AtomicBool>,
+    ) -> io::Result<Vec<u8>> {
+        assert!(
+            image_size > block_size as u64,
+            "Images smaller than block size must use a normal hash",
+        );
+
+        // Parallelize in 16 MiB chunks to avoid too much seek thrashing.
+        let chunk_size = padding::round(16 * 1024 * 1024, u64::from(block_size)).unwrap();
+        let chunk_count = image_size / chunk_size + u64::from(image_size % chunk_size != 0);
+
+        let pieces = (0..chunk_count)
+            .into_par_iter()
+            .map(|c| -> io::Result<Vec<u8>> {
+                let start = c * chunk_size;
+                let size = chunk_size.min(image_size - start);
+
+                let mut reader = open_input()?;
+                reader.seek(SeekFrom::Start(start))?;
+
+                Self::hash_one_level(reader, size, block_size, algorithm, salt, cancel_signal)
+            })
+            .collect::<io::Result<Vec<_>>>()?;
+
+        Ok(pieces.into_iter().flatten().collect())
+    }
+
+    /// Calculate the hash tree for the given input in parallel.
+    fn calculate_hash_tree(
+        open_input: impl Fn() -> io::Result<Box<dyn ReadSeek>> + Sync,
+        image_size: u64,
+        block_size: u32,
+        algorithm: &'static Algorithm,
+        salt: &[u8],
+        cancel_signal: &Arc<AtomicBool>,
+    ) -> io::Result<(Vec<u8>, Vec<u8>)> {
+        // Small files are hashed directly, exactly like a hash descriptor.
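+        // Worked example (illustrative): with 4096-byte blocks and SHA-256
+        // (32-byte digests, already a power of two), a 64 MiB image has
+        // 16384 data blocks. Level 0 is 16384 * 32 = 512 KiB, hashing it
+        // again yields 4 KiB (exactly one block), and the root digest is
+        // then computed over that final block.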
+        if image_size <= u64::from(block_size) {
+            let mut reader = open_input()?;
+            let mut buf = vec![0u8; block_size as usize];
+            reader.read_exact(&mut buf)?;
+
+            let mut context = Context::new(algorithm);
+            context.update(salt);
+            context.update(&buf);
+            let digest = context.finish();
+
+            return Ok((digest.as_ref().to_vec(), vec![]));
+        }
+
+        // Large files use the hash tree.
+        let mut levels = Vec::<Vec<u8>>::new();
+        let mut level_size = image_size;
+
+        while level_size > u64::from(block_size) {
+            let mut level = if let Some(prev_level) = levels.last() {
+                // Hash the previous level.
+                Self::hash_one_level(
+                    Cursor::new(prev_level),
+                    level_size,
+                    block_size,
+                    algorithm,
+                    salt,
+                    cancel_signal,
+                )?
+            } else {
+                // Initially read from file.
+                Self::hash_one_level_parallel(
+                    &open_input,
+                    level_size,
+                    block_size,
+                    algorithm,
+                    salt,
+                    cancel_signal,
+                )?
+            };
+
+            // Pad to the block size.
+            level.resize(padding::round(level.len(), block_size as usize).unwrap(), 0);
+
+            level_size = level.len() as u64;
+            levels.push(level);
+        }
+
+        // Calculate the root hash.
+        let mut context = Context::new(algorithm);
+        context.update(salt);
+        context.update(levels.last().unwrap());
+        let root_hash = context.finish().as_ref().to_vec();
+
+        // The tree is oriented such that the leaves are at the end.
+        let hash_tree = levels.into_iter().rev().flatten().collect();
+
+        Ok((root_hash, hash_tree))
+    }
+
+    /// Verify the root hash and hash tree against the input reader in
+    /// parallel. `open_input` will be called from multiple threads and must
+    /// return independently seekable handles to the same file.
+    pub fn verify(
+        &self,
+        open_input: impl Fn() -> io::Result<Box<dyn ReadSeek>> + Sync,
+        cancel_signal: &Arc<AtomicBool>,
+    ) -> Result<()> {
+        let algorithm = match self.hash_algorithm.as_str() {
+            "sha256" => &ring::digest::SHA256,
+            "sha512" => &ring::digest::SHA512,
+            a => return Err(Error::UnsupportedHashAlgorithm(a.to_owned())),
+        };
+        let tree_size = self
+            .tree_size
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("tree_size"))?;
+
+        let (actual_root_digest, actual_hash_tree) = Self::calculate_hash_tree(
+            &open_input,
+            self.image_size,
+            self.data_block_size,
+            algorithm,
+            &self.salt,
+            cancel_signal,
+        )?;
+
+        if self.root_digest != actual_root_digest {
+            return Err(Error::InvalidRootDigest(
+                hex::encode(&self.root_digest),
+                hex::encode(actual_root_digest),
+            ));
+        }
+
+        let mut reader = open_input()?;
+        reader.seek(SeekFrom::Start(self.tree_offset))?;
+
+        let mut hash_tree = vec![0u8; tree_size];
+        reader.read_exact(&mut hash_tree)?;
+
+        if hash_tree != actual_hash_tree {
+            // These are multiple megabytes, so only report the hashes.
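+            // Note: the root digest above was derived from *our* computed
+            // tree, so reaching this point means the tree stored in the
+            // image differs from the recomputed one even though the data
+            // itself hashes correctly.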
+            let expected = ring::digest::digest(algorithm, &hash_tree);
+            let actual = ring::digest::digest(algorithm, &actual_hash_tree);
+
+            return Err(Error::InvalidHashtree(
+                hex::encode(expected),
+                hex::encode(actual),
+            ));
+        }
+
+        Ok(())
+    }
+}
+
+impl DescriptorTag for HashtreeDescriptor {
+    const TAG: u64 = 1;
+}
+
+impl<R: Read> FromReader<R> for HashtreeDescriptor {
+    type Error = Error;
+
+    fn from_reader(mut reader: R) -> Result<Self> {
+        let dm_verity_version = reader.read_u32::<BigEndian>()?;
+        let image_size = reader.read_u64::<BigEndian>()?;
+        let tree_offset = reader.read_u64::<BigEndian>()?;
+        let tree_size = reader.read_u64::<BigEndian>()?;
+        let data_block_size = reader.read_u32::<BigEndian>()?;
+        let hash_block_size = reader.read_u32::<BigEndian>()?;
+        let fec_num_roots = reader.read_u32::<BigEndian>()?;
+        let fec_offset = reader.read_u64::<BigEndian>()?;
+        let fec_size = reader.read_u64::<BigEndian>()?;
+
+        let hash_algorithm = reader
+            .read_string_padded(32)
+            .map_err(|e| Error::ReadFieldError("hash_algorithm", e))?;
+        if !hash_algorithm.is_ascii() {
+            return Err(Error::StringNotAscii("hash_algorithm", hash_algorithm));
+        }
+
+        let partition_name_len = reader.read_u32::<BigEndian>()?;
+        let salt_len = reader.read_u32::<BigEndian>()?;
+        let root_digest_len = reader.read_u32::<BigEndian>()?;
+        let flags = reader.read_u32::<BigEndian>()?;
+
+        let mut reserved = [0u8; 60];
+        reader.read_exact(&mut reserved)?;
+
+        // Not NULL terminated
+        let partition_name = reader
+            .read_string_exact(partition_name_len.to_usize().unwrap())
+            .map_err(|e| Error::ReadFieldError("partition_name", e))?;
+
+        let mut salt = vec![0u8; salt_len.to_usize().unwrap()];
+        reader.read_exact(&mut salt)?;
+
+        let mut root_digest = vec![0u8; root_digest_len.to_usize().unwrap()];
+        reader.read_exact(&mut root_digest)?;
+
+        let descriptor = Self {
+            dm_verity_version,
+            image_size,
+            tree_offset,
+            tree_size,
+            data_block_size,
+            hash_block_size,
+            fec_num_roots,
+            fec_offset,
+            fec_size,
+            hash_algorithm,
+            partition_name,
+            salt,
+            root_digest,
+            flags,
+            reserved,
+        };
+
+        Ok(descriptor)
+    }
+}
+
+impl<W: Write> ToWriter<W> for HashtreeDescriptor {
+    type Error = Error;
+
+    fn to_writer(&self, mut writer: W) -> Result<()> {
+        writer.write_u32::<BigEndian>(self.dm_verity_version)?;
+        writer.write_u64::<BigEndian>(self.image_size)?;
+        writer.write_u64::<BigEndian>(self.tree_offset)?;
+        writer.write_u64::<BigEndian>(self.tree_size)?;
+        writer.write_u32::<BigEndian>(self.data_block_size)?;
+        writer.write_u32::<BigEndian>(self.hash_block_size)?;
+        writer.write_u32::<BigEndian>(self.fec_num_roots)?;
+        writer.write_u64::<BigEndian>(self.fec_offset)?;
+        writer.write_u64::<BigEndian>(self.fec_size)?;
+
+        if !self.hash_algorithm.is_ascii() {
+            return Err(Error::StringNotAscii(
+                "hash_algorithm",
+                self.hash_algorithm.clone(),
+            ));
+        }
+        writer
+            .write_string_padded(&self.hash_algorithm, 32)
+            .map_err(|e| Error::WriteFieldError("hash_algorithm", e))?;
+
+        let partition_name_len = self
+            .partition_name
+            .len()
+            .to_u32()
+            .ok_or_else(|| Error::IntegerTooLarge("partition_name_len"))?;
+        writer.write_u32::<BigEndian>(partition_name_len)?;
+
+        let salt_len = self
+            .salt
+            .len()
+            .to_u32()
+            .ok_or_else(|| Error::IntegerTooLarge("salt_len"))?;
+        writer.write_u32::<BigEndian>(salt_len)?;
+
+        let root_digest_len = self
+            .root_digest
+            .len()
+            .to_u32()
+            .ok_or_else(|| Error::IntegerTooLarge("root_digest_len"))?;
+        writer.write_u32::<BigEndian>(root_digest_len)?;
+
+        writer.write_u32::<BigEndian>(self.flags)?;
+        writer.write_all(&self.reserved)?;
+        writer.write_all(self.partition_name.as_bytes())?;
+        writer.write_all(&self.salt)?;
+        writer.write_all(&self.root_digest)?;
+
+        Ok(())
+    }
+}
+#[derive(Clone, Eq, PartialEq)]
+pub struct HashDescriptor {
+    pub image_size: u64,
+    pub hash_algorithm: String,
+    pub partition_name: String,
+    pub salt: Vec<u8>,
+    pub root_digest: Vec<u8>,
+    pub flags: u32,
+    pub reserved: [u8; 60],
+}
+
+impl fmt::Debug for HashDescriptor {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        f.debug_struct("HashDescriptor")
+            .field("image_size", &self.image_size)
+            .field("hash_algorithm", &self.hash_algorithm)
+            .field("partition_name", &self.partition_name)
+            .field("salt", &hex::encode(&self.salt))
+            .field("root_digest", &hex::encode(&self.root_digest))
+            .field("flags", &self.flags)
+            .field("reserved", &hex::encode(self.reserved))
+            .finish()
+    }
+}
+
+impl HashDescriptor {
+    /// Verify the root hash against the input reader.
+    pub fn verify(&self, reader: impl Read, cancel_signal: &Arc<AtomicBool>) -> Result<()> {
+        let algorithm = match self.hash_algorithm.as_str() {
+            "sha256" => &ring::digest::SHA256,
+            "sha512" => &ring::digest::SHA512,
+            a => return Err(Error::UnsupportedHashAlgorithm(a.to_owned())),
+        };
+
+        let mut context = Context::new(algorithm);
+        context.update(&self.salt);
+
+        stream::copy_n_inspect(
+            reader,
+            io::sink(),
+            self.image_size,
+            |data| context.update(data),
+            cancel_signal,
+        )?;
+
+        let digest = context.finish();
+
+        if self.root_digest != digest.as_ref() {
+            return Err(Error::InvalidRootDigest(
+                hex::encode(&self.root_digest),
+                hex::encode(digest),
+            ));
+        }
+
+        Ok(())
+    }
+}
+
+impl DescriptorTag for HashDescriptor {
+    const TAG: u64 = 2;
+}
+
+impl<R: Read> FromReader<R> for HashDescriptor {
+    type Error = Error;
+
+    fn from_reader(mut reader: R) -> Result<Self> {
+        let image_size = reader.read_u64::<BigEndian>()?;
+
+        let hash_algorithm = reader
+            .read_string_padded(32)
+            .map_err(|e| Error::ReadFieldError("hash_algorithm", e))?;
+        if !hash_algorithm.is_ascii() {
+            return Err(Error::StringNotAscii("hash_algorithm", hash_algorithm));
+        }
+
+        let partition_name_len = reader.read_u32::<BigEndian>()?;
+        let salt_len = reader.read_u32::<BigEndian>()?;
+        let root_digest_len = reader.read_u32::<BigEndian>()?;
+        let flags = reader.read_u32::<BigEndian>()?;
+
+        let mut reserved = [0u8; 60];
+        reader.read_exact(&mut reserved)?;
+
+        // Not NULL terminated
+        let partition_name = reader
+            .read_string_exact(partition_name_len.to_usize().unwrap())
+            .map_err(|e| Error::ReadFieldError("partition_name", e))?;
+
+        let mut salt = vec![0u8; salt_len.to_usize().unwrap()];
+        reader.read_exact(&mut salt)?;
+
+        let mut root_digest = vec![0u8; root_digest_len.to_usize().unwrap()];
+        reader.read_exact(&mut root_digest)?;
+
+        let descriptor = Self {
+            image_size,
+            hash_algorithm,
+            partition_name,
+            salt,
+            root_digest,
+            flags,
+            reserved,
+        };
+
+        Ok(descriptor)
+    }
+}
+
+impl<W: Write> ToWriter<W> for HashDescriptor {
+    type Error = Error;
+
+    fn to_writer(&self, mut writer: W) -> Result<()> {
+        writer.write_u64::<BigEndian>(self.image_size)?;
+
+        if !self.hash_algorithm.is_ascii() {
+            return Err(Error::StringNotAscii(
+                "hash_algorithm",
+                self.hash_algorithm.clone(),
+            ));
+        }
+        writer
+            .write_string_padded(&self.hash_algorithm, 32)
+            .map_err(|e| Error::WriteFieldError("hash_algorithm", e))?;
+
+        let partition_name_len = self
+            .partition_name
+            .len()
+            .to_u32()
+            .ok_or_else(|| Error::IntegerTooLarge("partition_name_len"))?;
+        writer.write_u32::<BigEndian>(partition_name_len)?;
+
+        let salt_len = self
+            .salt
+            .len()
+            .to_u32()
+            .ok_or_else(|| Error::IntegerTooLarge("salt_len"))?;
+        writer.write_u32::<BigEndian>(salt_len)?;
+
+        let root_digest_len = self
+            .root_digest
+            .len()
+            .to_u32()
+            .ok_or_else(|| Error::IntegerTooLarge("root_digest_len"))?;
+        writer.write_u32::<BigEndian>(root_digest_len)?;
+
+        writer.write_u32::<BigEndian>(self.flags)?;
+        writer.write_all(&self.reserved)?;
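+        // The variable-length fields follow the fixed-size portion in
+        // declaration order; their lengths were recorded above.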
+        writer.write_all(self.partition_name.as_bytes())?;
+        writer.write_all(&self.salt)?;
+        writer.write_all(&self.root_digest)?;
+
+        Ok(())
+    }
+}
+
+#[derive(Clone, Debug, Eq, PartialEq)]
+pub struct KernelCmdlineDescriptor {
+    pub flags: u32,
+    pub cmdline: String,
+}
+
+impl DescriptorTag for KernelCmdlineDescriptor {
+    const TAG: u64 = 3;
+}
+
+impl<R: Read> FromReader<R> for KernelCmdlineDescriptor {
+    type Error = Error;
+
+    fn from_reader(mut reader: R) -> Result<Self> {
+        let flags = reader.read_u32::<BigEndian>()?;
+        let cmdline_len = reader.read_u32::<BigEndian>()?;
+
+        // Not NULL terminated
+        let cmdline = reader
+            .read_string_exact(cmdline_len.to_usize().unwrap())
+            .map_err(|e| Error::ReadFieldError("cmdline", e))?;
+
+        let descriptor = Self { flags, cmdline };
+
+        Ok(descriptor)
+    }
+}
+
+impl<W: Write> ToWriter<W> for KernelCmdlineDescriptor {
+    type Error = Error;
+
+    fn to_writer(&self, mut writer: W) -> Result<()> {
+        writer.write_u32::<BigEndian>(self.flags)?;
+
+        let cmdline_len = self
+            .cmdline
+            .len()
+            .to_u32()
+            .ok_or_else(|| Error::IntegerTooLarge("cmdline_len"))?;
+        writer.write_u32::<BigEndian>(cmdline_len)?;
+
+        writer.write_all(self.cmdline.as_bytes())?;
+
+        Ok(())
+    }
+}
+
+#[derive(Clone, Eq, PartialEq)]
+pub struct ChainPartitionDescriptor {
+    pub rollback_index_location: u32,
+    pub partition_name: String,
+    pub public_key: Vec<u8>,
+    pub reserved: [u8; 64],
+}
+
+impl fmt::Debug for ChainPartitionDescriptor {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        f.debug_struct("ChainPartitionDescriptor")
+            .field("rollback_index_location", &self.rollback_index_location)
+            .field("partition_name", &self.partition_name)
+            .field("public_key", &hex::encode(&self.public_key))
+            .field("reserved", &hex::encode(self.reserved))
+            .finish()
+    }
+}
+
+impl DescriptorTag for ChainPartitionDescriptor {
+    const TAG: u64 = 4;
+}
+
+impl<R: Read> FromReader<R> for ChainPartitionDescriptor {
+    type Error = Error;
+
+    fn from_reader(mut reader: R) -> Result<Self> {
+        let rollback_index_location = reader.read_u32::<BigEndian>()?;
+        let partition_name_len = reader.read_u32::<BigEndian>()?;
+        let public_key_len = reader.read_u32::<BigEndian>()?;
+
+        let mut reserved = [0u8; 64];
+        reader.read_exact(&mut reserved)?;
+
+        // Not NULL terminated
+        let partition_name = reader
+            .read_string_padded(partition_name_len.to_usize().unwrap())
+            .map_err(|e| Error::ReadFieldError("partition_name", e))?;
+
+        let mut public_key = vec![0u8; public_key_len.to_usize().unwrap()];
+        reader.read_exact(&mut public_key)?;
+
+        let descriptor = Self {
+            rollback_index_location,
+            partition_name,
+            public_key,
+            reserved,
+        };
+
+        Ok(descriptor)
+    }
+}
+
+impl<W: Write> ToWriter<W> for ChainPartitionDescriptor {
+    type Error = Error;
+
+    fn to_writer(&self, mut writer: W) -> Result<()> {
+        writer.write_u32::<BigEndian>(self.rollback_index_location)?;
+
+        let partition_name_len = self
+            .partition_name
+            .len()
+            .to_u32()
+            .ok_or_else(|| Error::IntegerTooLarge("partition_name_len"))?;
+        writer.write_u32::<BigEndian>(partition_name_len)?;
+
+        let public_key_len = self
+            .public_key
+            .len()
+            .to_u32()
+            .ok_or_else(|| Error::IntegerTooLarge("public_key_len"))?;
+        writer.write_u32::<BigEndian>(public_key_len)?;
+
+        writer.write_all(&self.reserved)?;
+        writer.write_all(self.partition_name.as_bytes())?;
+        writer.write_all(&self.public_key)?;
+
+        Ok(())
+    }
+}
+
+#[derive(Clone, Debug, Eq, PartialEq)]
+pub enum Descriptor {
+    Property(PropertyDescriptor),
+    Hashtree(HashtreeDescriptor),
+    Hash(HashDescriptor),
+    KernelCmdline(KernelCmdlineDescriptor),
+    ChainPartition(ChainPartitionDescriptor),
+    Unknown(u64, Vec<u8>),
+}
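+// Tag values follow libavb: 0 = property, 1 = hashtree, 2 = hash,
+// 3 = kernel cmdline, 4 = chain partition (see the DescriptorTag impls
+// above); anything else round-trips through Descriptor::Unknown.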
+impl Descriptor {
+    pub fn partition_name(&self) -> Option<&str> {
+        match self {
+            Self::Hashtree(d) => Some(&d.partition_name),
+            Self::Hash(d) => Some(&d.partition_name),
+            Self::ChainPartition(d) => Some(&d.partition_name),
+            _ => None,
+        }
+    }
+}
+
+impl<R: Read> FromReader<R> for Descriptor {
+    type Error = Error;
+
+    fn from_reader(mut reader: R) -> Result<Self> {
+        let tag = reader.read_u64::<BigEndian>()?;
+        let nbf_len = reader.read_u64::<BigEndian>()?;
+
+        let mut inner_reader = CountingReader::new(reader.take(nbf_len));
+
+        let descriptor = match tag {
+            PropertyDescriptor::TAG => {
+                let d = PropertyDescriptor::from_reader(&mut inner_reader)?;
+                Self::Property(d)
+            }
+            HashtreeDescriptor::TAG => {
+                let d = HashtreeDescriptor::from_reader(&mut inner_reader)?;
+                Self::Hashtree(d)
+            }
+            HashDescriptor::TAG => {
+                let d = HashDescriptor::from_reader(&mut inner_reader)?;
+                Self::Hash(d)
+            }
+            KernelCmdlineDescriptor::TAG => {
+                let d = KernelCmdlineDescriptor::from_reader(&mut inner_reader)?;
+                Self::KernelCmdline(d)
+            }
+            ChainPartitionDescriptor::TAG => {
+                let d = ChainPartitionDescriptor::from_reader(&mut inner_reader)?;
+                Self::ChainPartition(d)
+            }
+            _ => {
+                let nbf = nbf_len
+                    .to_usize()
+                    .ok_or_else(|| Error::IntegerTooLarge("num_bytes_following"))?;
+                let mut data = vec![0u8; nbf];
+                inner_reader.read_exact(&mut data)?;
+
+                Self::Unknown(tag, data)
+            }
+        };
+
+        // The descriptor data is always aligned to 8 bytes.
+        padding::read_discard(&mut inner_reader, 8)?;
+        if inner_reader.stream_position()? != nbf_len {
+            return Err(Error::PaddingTooLong);
+        }
+
+        Ok(descriptor)
+    }
+}
+
+impl<W: Write> ToWriter<W> for Descriptor {
+    type Error = Error;
+
+    fn to_writer(&self, mut writer: W) -> Result<()> {
+        let mut inner_writer = Cursor::new(Vec::new());
+
+        let tag = match self {
+            Self::Property(d) => {
+                d.to_writer(&mut inner_writer)?;
+                d.get_tag()
+            }
+            Self::Hashtree(d) => {
+                d.to_writer(&mut inner_writer)?;
+                d.get_tag()
+            }
+            Self::Hash(d) => {
+                d.to_writer(&mut inner_writer)?;
+                d.get_tag()
+            }
+            Self::KernelCmdline(d) => {
+                d.to_writer(&mut inner_writer)?;
+                d.get_tag()
+            }
+            Self::ChainPartition(d) => {
+                d.to_writer(&mut inner_writer)?;
+                d.get_tag()
+            }
+            Self::Unknown(tag, data) => {
+                inner_writer.write_all(data)?;
+                *tag
+            }
+        };
+
+        let inner_data = inner_writer.into_inner();
+        let inner_len = inner_data.len().to_u64().unwrap();
+        let padding_len = padding::calc(inner_len, 8);
+        let nbf = inner_len
+            .checked_add(padding_len)
+            .ok_or_else(|| Error::IntegerTooLarge("num_bytes_following"))?;
+
+        writer.write_u64::<BigEndian>(tag)?;
+        writer.write_u64::<BigEndian>(nbf)?;
+        writer.write_all(&inner_data)?;
+        writer.write_zeros_exact(padding_len)?;
+
+        Ok(())
+    }
+}
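+// Each serialized descriptor is framed as two big-endian u64s (tag and
+// num_bytes_following) with the payload padded to an 8-byte boundary, so
+// readers can skip unknown descriptor types without understanding them.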
+#[derive(Clone, Eq, PartialEq)]
+pub struct Header {
+    pub required_libavb_version_major: u32,
+    pub required_libavb_version_minor: u32,
+    pub algorithm_type: AlgorithmType,
+    pub hash: Vec<u8>,
+    pub signature: Vec<u8>,
+    pub public_key: Vec<u8>,
+    pub public_key_metadata: Vec<u8>,
+    pub descriptors: Vec<Descriptor>,
+    pub rollback_index: u64,
+    pub flags: u32,
+    pub rollback_index_location: u32,
+    pub release_string: String,
+    pub reserved: [u8; 80],
+}
+
+impl fmt::Debug for Header {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        f.debug_struct("Header")
+            .field(
+                "required_libavb_version_major",
+                &self.required_libavb_version_major,
+            )
+            .field(
+                "required_libavb_version_minor",
+                &self.required_libavb_version_minor,
+            )
+            .field("algorithm_type", &self.algorithm_type)
+            .field("hash", &hex::encode(&self.hash))
+            .field("signature", &hex::encode(&self.signature))
+            .field("public_key", &hex::encode(&self.public_key))
+            .field(
+                "public_key_metadata",
+                &hex::encode(&self.public_key_metadata),
+            )
+            .field("descriptors", &self.descriptors)
+            .field("rollback_index", &self.rollback_index)
+            .field("flags", &self.flags)
+            .field("rollback_index_location", &self.rollback_index_location)
+            .field("release_string", &self.release_string)
+            .field("reserved", &hex::encode(self.reserved))
+            .finish()
+    }
+}
+
+impl Header {
+    pub const SIZE: usize = 256;
+
+    fn to_writer_internal(&self, mut writer: impl Write, skip_auth_block: bool) -> Result<()> {
+        let mut descriptors_writer = Cursor::new(Vec::new());
+        for d in &self.descriptors {
+            d.to_writer(&mut descriptors_writer)?;
+        }
+        let descriptors_raw = descriptors_writer.into_inner();
+
+        // Auth block
+
+        let hash_offset = 0u64;
+        let hash_size = self.hash.len().to_u64().unwrap();
+
+        let signature_offset = hash_offset
+            .checked_add(hash_size)
+            .ok_or_else(|| Error::IntegerTooLarge("signature_offset"))?;
+        let signature_size = self.signature.len().to_u64().unwrap();
+
+        let auth_block_data_size = signature_offset
+            .checked_add(signature_size)
+            .ok_or_else(|| Error::IntegerTooLarge("authentication_data_block_size"))?;
+        let auth_block_padding_size = padding::calc(auth_block_data_size, 64);
+        let auth_block_size = auth_block_data_size
+            .checked_add(auth_block_padding_size)
+            .ok_or_else(|| Error::IntegerTooLarge("authentication_data_block_size"))?;
+
+        // Aux block
+
+        let descriptors_offset = 0u64;
+        let descriptors_size = descriptors_raw.len().to_u64().unwrap();
+
+        let public_key_offset = descriptors_offset
+            .checked_add(descriptors_size)
+            .ok_or_else(|| Error::IntegerTooLarge("public_key_offset"))?;
+        let public_key_size = self.public_key.len().to_u64().unwrap();
+
+        let public_key_metadata_offset = public_key_offset
+            .checked_add(public_key_size)
+            .ok_or_else(|| Error::IntegerTooLarge("public_key_metadata_offset"))?;
+        let public_key_metadata_size = self.public_key_metadata.len().to_u64().unwrap();
+
+        let aux_block_data_size = public_key_metadata_offset
+            .checked_add(public_key_metadata_size)
+            .ok_or_else(|| Error::IntegerTooLarge("auxiliary_data_block_size"))?;
+        let aux_block_padding_size = padding::calc(aux_block_data_size, 64);
+        let aux_block_size = aux_block_data_size
+            .checked_add(aux_block_padding_size)
+            .ok_or_else(|| Error::IntegerTooLarge("auxiliary_data_block_size"))?;
+
+        writer.write_all(&HEADER_MAGIC)?;
+        writer.write_u32::<BigEndian>(self.required_libavb_version_major)?;
+        writer.write_u32::<BigEndian>(self.required_libavb_version_minor)?;
+        writer.write_u64::<BigEndian>(auth_block_size)?;
+        writer.write_u64::<BigEndian>(aux_block_size)?;
+        writer.write_u32::<BigEndian>(self.algorithm_type.to_raw())?;
+        writer.write_u64::<BigEndian>(hash_offset)?;
+        writer.write_u64::<BigEndian>(hash_size)?;
+        writer.write_u64::<BigEndian>(signature_offset)?;
+        writer.write_u64::<BigEndian>(signature_size)?;
+        writer.write_u64::<BigEndian>(public_key_offset)?;
+        writer.write_u64::<BigEndian>(public_key_size)?;
+        writer.write_u64::<BigEndian>(public_key_metadata_offset)?;
+        writer.write_u64::<BigEndian>(public_key_metadata_size)?;
+        writer.write_u64::<BigEndian>(descriptors_offset)?;
+        writer.write_u64::<BigEndian>(descriptors_size)?;
+        writer.write_u64::<BigEndian>(self.rollback_index)?;
+        writer.write_u32::<BigEndian>(self.flags)?;
+        writer.write_u32::<BigEndian>(self.rollback_index_location)?;
+
+        writer
+            .write_string_padded(&self.release_string, 48)
+            .map_err(|e| Error::WriteFieldError("release_string", e))?;
+
+        writer.write_all(&self.reserved)?;
+
+        // Auth block
+        if !skip_auth_block {
+            writer.write_all(&self.hash)?;
+            writer.write_all(&self.signature)?;
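+            // Both the auth and aux blocks are padded to 64-byte boundaries;
+            // the block sizes written into the header fields above already
+            // include this padding.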
+            writer.write_zeros_exact(auth_block_padding_size)?;
+        }
+
+        // Aux block
+        writer.write_all(&descriptors_raw)?;
+        writer.write_all(&self.public_key)?;
+        writer.write_all(&self.public_key_metadata)?;
+        writer.write_zeros_exact(aux_block_padding_size)?;
+
+        Ok(())
+    }
+
+    pub fn sign(&mut self, key: &RsaPrivateKey) -> Result<()> {
+        let key_raw = encode_public_key(&key.to_public_key())?;
+
+        // RustCrypto does not support 8192-bit keys
+        match self.algorithm_type {
+            AlgorithmType::Sha256Rsa8192
+            | AlgorithmType::Sha512Rsa8192
+            | AlgorithmType::Unknown(_) => {
+                return Err(Error::UnsupportedAlgorithm(self.algorithm_type));
+            }
+            _ => {}
+        }
+
+        if key_raw.len() != self.algorithm_type.public_key_len() {
+            return Err(Error::IncorrectKeySize(
+                key_raw.len(),
+                self.algorithm_type,
+                self.algorithm_type.public_key_len(),
+            ));
+        }
+
+        // The public key and the sizes of the hash and signature are included
+        // in the data that's about to be signed.
+        self.public_key = key_raw;
+        self.hash.resize(self.algorithm_type.hash_len(), 0);
+        self.signature
+            .resize(self.algorithm_type.signature_len(), 0);
+
+        let mut without_auth_writer = Cursor::new(Vec::new());
+        self.to_writer_internal(&mut without_auth_writer, true)?;
+        let without_auth = without_auth_writer.into_inner();
+
+        let hash = self.algorithm_type.hash(&without_auth);
+        let signature = self.algorithm_type.sign(key, &hash)?;
+
+        self.hash = hash;
+        self.signature = signature;
+
+        Ok(())
+    }
+
+    /// Verify the header's digest and signature against the embedded public
+    /// key and return the public key. If the header is not signed, then
+    /// `None` is returned.
+    pub fn verify(&self) -> Result<Option<RsaPublicKey>> {
+        // RustCrypto does not support 8192-bit keys
+        match self.algorithm_type {
+            AlgorithmType::None => return Ok(None),
+            a @ AlgorithmType::Sha256Rsa8192
+            | a @ AlgorithmType::Sha512Rsa8192
+            | a @ AlgorithmType::Unknown(_) => return Err(Error::UnsupportedAlgorithm(a)),
+            _ => {}
+        }
+
+        // Reconstruct the public key.
+        let public_key = decode_public_key(&self.public_key)?;
+
+        let mut without_auth_writer = Cursor::new(Vec::new());
+        self.to_writer_internal(&mut without_auth_writer, true)?;
+        let without_auth = without_auth_writer.into_inner();
+
+        let hash = self.algorithm_type.hash(&without_auth);
+        self.algorithm_type
+            .verify(&public_key, &hash, &self.signature)?;
+
+        Ok(Some(public_key))
+    }
+}
+impl<R: Read> FromReader<R> for Header {
+    type Error = Error;
+
+    fn from_reader(mut reader: R) -> Result<Self> {
+        let mut magic = [0u8; 4];
+        reader.read_exact(&mut magic)?;
+
+        if magic != HEADER_MAGIC {
+            return Err(Error::InvalidHeaderMagic(magic));
+        }
+
+        let required_libavb_version_major = reader.read_u32::<BigEndian>()?;
+        let required_libavb_version_minor = reader.read_u32::<BigEndian>()?;
+        let authentication_data_block_size = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("authentication_data_block_size"))?;
+        let auxiliary_data_block_size = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("auxiliary_data_block_size"))?;
+
+        let algorithm_type_raw = reader.read_u32::<BigEndian>()?;
+        let algorithm_type = AlgorithmType::from_raw(algorithm_type_raw);
+
+        let hash_offset = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("hash_offset"))?;
+        let hash_size = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("hash_size"))?;
+        let signature_offset = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("signature_offset"))?;
+        let signature_size = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("signature_size"))?;
+
+        let auth_block_combined = hash_size + signature_size;
+        let auth_block_padding = padding::calc(auth_block_combined, 64);
+        if authentication_data_block_size != auth_block_combined + auth_block_padding {
+            return Err(Error::IncorrectCombinedSize(
+                "authentication_data_block_size",
+            ));
+        }
+
+        let public_key_offset = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("public_key_offset"))?;
+        let public_key_size = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("public_key_size"))?;
+        let public_key_metadata_offset = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("public_key_metadata_offset"))?;
+        let public_key_metadata_size = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("public_key_metadata_size"))?;
+        let descriptors_offset = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("descriptors_offset"))?;
+        let descriptors_size = reader
+            .read_u64::<BigEndian>()?
+            .to_usize()
+            .ok_or_else(|| Error::IntegerTooLarge("descriptors_size"))?;
+
+        let aux_block_combined = public_key_size + public_key_metadata_size + descriptors_size;
+        let aux_block_padding = padding::calc(aux_block_combined, 64);
+        if auxiliary_data_block_size != aux_block_combined + aux_block_padding {
+            return Err(Error::IncorrectCombinedSize("auxiliary_data_block_size"));
+        }
+
+        let rollback_index = reader.read_u64::<BigEndian>()?;
+        let flags = reader.read_u32::<BigEndian>()?;
+        let rollback_index_location = reader.read_u32::<BigEndian>()?;
+
+        let release_string = reader
+            .read_string_padded(48)
+            .map_err(|e| Error::ReadFieldError("release_string", e))?;
+
+        let mut reserved = [0u8; 80];
+        reader.read_exact(&mut reserved)?;
+
+        let mut auth_block = vec![0u8; authentication_data_block_size];
+        reader.read_exact(&mut auth_block)?;
+
+        let mut aux_block = vec![0u8; auxiliary_data_block_size];
+        reader.read_exact(&mut aux_block)?;
+
+        // When we verify() the signatures, we're doing so on re-serialized
+        // fields. The padding is the only thing that can escape this, so make
+        // sure they don't contain any data.
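+        // Both slice ranges below are in bounds: the combined + padding sizes
+        // were validated against the block sizes above, and each block was
+        // read at exactly that size.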
+        if !util::is_zero(
+            &auth_block[auth_block_combined..auth_block_combined + auth_block_padding],
+        ) {
+            return Err(Error::PaddingNotZero("authentication_data_block"));
+        }
+        if !util::is_zero(&aux_block[aux_block_combined..aux_block_combined + aux_block_padding]) {
+            return Err(Error::PaddingNotZero("auxiliary_data_block"));
+        }
+
+        // Auth block data
+
+        if hash_offset
+            .checked_add(hash_size)
+            .map_or(true, |s| s > auth_block.len())
+        {
+            return Err(Error::OutOfBounds("hash_offset", "hash_size"));
+        }
+        let hash = &auth_block[hash_offset..hash_offset + hash_size];
+
+        if signature_offset
+            .checked_add(signature_size)
+            .map_or(true, |s| s > auth_block.len())
+        {
+            return Err(Error::OutOfBounds("signature_offset", "signature_size"));
+        }
+        let signature = &auth_block[signature_offset..signature_offset + signature_size];
+
+        // Aux block data
+
+        if public_key_offset
+            .checked_add(public_key_size)
+            .map_or(true, |s| s > aux_block.len())
+        {
+            return Err(Error::OutOfBounds("public_key_offset", "public_key_size"));
+        }
+        let public_key = &aux_block[public_key_offset..public_key_offset + public_key_size];
+
+        if public_key_metadata_offset
+            .checked_add(public_key_metadata_size)
+            .map_or(true, |s| s > aux_block.len())
+        {
+            return Err(Error::OutOfBounds(
+                "public_key_metadata_offset",
+                "public_key_metadata_size",
+            ));
+        }
+        let public_key_metadata = &aux_block
+            [public_key_metadata_offset..public_key_metadata_offset + public_key_metadata_size];
+
+        let mut descriptors: Vec<Descriptor> = vec![];
+        let mut descriptor_reader = Cursor::new(&aux_block);
+        let mut pos = descriptor_reader.seek(SeekFrom::Start(descriptors_offset as u64))?;
+
+        while pos < (descriptors_offset + descriptors_size) as u64 {
+            let descriptor = Descriptor::from_reader(&mut descriptor_reader)?;
+            descriptors.push(descriptor);
+            pos = descriptor_reader.stream_position()?;
+        }
+
+        let header = Self {
+            required_libavb_version_major,
+            required_libavb_version_minor,
+            algorithm_type,
+            hash: hash.to_owned(),
+            signature: signature.to_owned(),
+            public_key: public_key.to_owned(),
+            public_key_metadata: public_key_metadata.to_owned(),
+            descriptors,
+            rollback_index,
+            flags,
+            rollback_index_location,
+            release_string,
+            reserved,
+        };
+
+        Ok(header)
+    }
+}
+
+impl<W: Write> ToWriter<W> for Header {
+    type Error = Error;
+
+    fn to_writer(&self, writer: W) -> Result<()> {
+        self.to_writer_internal(writer, false)
+    }
+}
+
+#[derive(Clone, Eq, PartialEq)]
+pub struct Footer {
+    pub version_major: u32,
+    pub version_minor: u32,
+    pub original_image_size: u64,
+    pub vbmeta_offset: u64,
+    pub vbmeta_size: u64,
+    pub reserved: [u8; 28],
+}
+
+impl fmt::Debug for Footer {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        f.debug_struct("Footer")
+            .field("version_major", &self.version_major)
+            .field("version_minor", &self.version_minor)
+            .field("original_image_size", &self.original_image_size)
+            .field("vbmeta_offset", &self.vbmeta_offset)
+            .field("vbmeta_size", &self.vbmeta_size)
+            .field("reserved", &hex::encode(self.reserved))
+            .finish()
+    }
+}
+
+impl Footer {
+    pub const SIZE: usize = 64;
+}
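+// Per the AVB footer format, the footer occupies the final Footer::SIZE
+// bytes of a partition image, which is how load_image() below locates the
+// vbmeta header in images that carry one.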
+impl<R: Read> FromReader<R> for Footer {
+    type Error = Error;
+
+    fn from_reader(mut reader: R) -> Result<Self> {
+        let mut magic = [0u8; 4];
+        reader.read_exact(&mut magic)?;
+
+        if magic != FOOTER_MAGIC {
+            return Err(Error::InvalidFooterMagic(magic));
+        }
+
+        let version_major = reader.read_u32::<BigEndian>()?;
+        let version_minor = reader.read_u32::<BigEndian>()?;
+        let original_image_size = reader.read_u64::<BigEndian>()?;
+        let vbmeta_offset = reader.read_u64::<BigEndian>()?;
+        let vbmeta_size = reader.read_u64::<BigEndian>()?;
+
+        let mut reserved = [0u8; 28];
+        reader.read_exact(&mut reserved)?;
+
+        let footer = Self {
+            version_major,
+            version_minor,
+            original_image_size,
+            vbmeta_offset,
+            vbmeta_size,
+            reserved,
+        };
+
+        Ok(footer)
+    }
+}
+
+impl<W: Write> ToWriter<W> for Footer {
+    type Error = Error;
+
+    fn to_writer(&self, mut writer: W) -> Result<()> {
+        writer.write_all(&FOOTER_MAGIC)?;
+        writer.write_u32::<BigEndian>(self.version_major)?;
+        writer.write_u32::<BigEndian>(self.version_minor)?;
+        writer.write_u64::<BigEndian>(self.original_image_size)?;
+        writer.write_u64::<BigEndian>(self.vbmeta_offset)?;
+        writer.write_u64::<BigEndian>(self.vbmeta_size)?;
+        writer.write_all(&self.reserved)?;
+        Ok(())
+    }
+}
+
+/// Encode a public key in the AVB binary format.
+pub fn encode_public_key(key: &RsaPublicKey) -> Result<Vec<u8>> {
+    if key.e() != &BigUint::from(65537u32) {
+        return Err(Error::UnsupportedRsaPublicExponent(key.e().clone()));
+    }
+
+    // libavb expects certain values to be precomputed so that the bootloader's
+    // verification operations can run faster.
+    //
+    // Values:
+    //   n0inv = -1 / n[0] (mod 2 ^ 32)
+    //     - Guaranteed to fit in a u32
+    //   r = 2 ^ (key size in bits)
+    //   rr = r^2 (mod N)
+    //     - Guaranteed to fit in key size bits
+
+    let b = BigUint::from(2u64.pow(32));
+    let n0inv = b.to_bigint().unwrap() - key.n().mod_inverse(&b).unwrap();
+    let r = BigUint::from(2u32).pow(key.n().bits());
+    let rrmodn = r.modpow(&BigUint::from(2u32), key.n());
+
+    let key_bits = (key.size() * 8).to_u32().unwrap();
+
+    let mut data = vec![];
+    data.extend_from_slice(&key_bits.to_be_bytes());
+    data.extend_from_slice(&n0inv.to_u32().unwrap().to_be_bytes());
+
+    let modulus_raw = key.n().to_bytes_be();
+    data.resize(data.len() + key.size() - modulus_raw.len(), 0);
+    data.extend_from_slice(&modulus_raw);
+
+    let rrmodn_raw = rrmodn.to_bytes_be();
+    data.resize(data.len() + key.size() - rrmodn_raw.len(), 0);
+    data.extend_from_slice(&rrmodn_raw);
+
+    Ok(data)
+}
+
+/// Decode a public key from the AVB binary format.
+pub fn decode_public_key(data: &[u8]) -> Result<RsaPublicKey> {
+    let mut reader = Cursor::new(data);
+    let key_bits = reader
+        .read_u32::<BigEndian>()?
+        .to_usize()
+        .ok_or_else(|| Error::IntegerTooLarge("key_bits"))?;
+
+    // Skip n0inv
+    reader.read_discard_exact(4)?;
+
+    let mut modulus_raw = vec![0u8; key_bits / 8];
+    reader.read_exact(&mut modulus_raw)?;
+
+    let modulus = BigUint::from_bytes_be(&modulus_raw);
+    let public_key =
+        RsaPublicKey::new(modulus, BigUint::from(65537u32)).map_err(Error::RsaVerifyError)?;
+
+    Ok(public_key)
+}
+
+/// Load the vbmeta header and footer from the specified reader. A footer is
+/// present only if the file is not a vbmeta partition image (ie. the header
+/// follows actual data).
+pub fn load_image(mut reader: impl Read + Seek) -> Result<(Header, Option