
LZMA data error on s390x #1970

Closed
nikita-fuchs opened this issue Aug 21, 2019 · 39 comments · Fixed by #1972

@nikita-fuchs

Problem
I cannot find a working way to use Rust's beta or nightly releases on an IBM mainframe running Ubuntu:

root@zla19054:~/.cargo/bin# ./rustup default nightly
info: syncing channel updates for 'nightly-s390x-unknown-linux-gnu'
info: latest update on 2019-08-21, rust version 1.39.0-nightly (bea0372a1 2019-08-20)
info: downloading component 'rustc'
 54.3 MiB /  54.3 MiB (100 %)  13.0 MiB/s in  4s ETA:  0s
info: downloading component 'rust-std'
221.3 MiB / 221.3 MiB (100 %)  16.9 MiB/s in 14s ETA:  0s
info: downloading component 'cargo'
  4.5 MiB /   4.5 MiB (100 %)   1.8 MiB/s in  1s ETA:  0s
info: installing component 'rustc'
info: rolling back changes
error: failed to extract package (perhaps you ran out of disk space?)
info: caused by: lzma data error

Steps

  1. Connect to the IBM mainframe of your choice running Ubuntu (reach out to me if you need something here ;) )
  2. Run rustup default nightly (if the link is not created for you, as in my case, run the binary directly: ~/.cargo/bin# ./rustup default nightly)
    2.5 The same happens with the beta
  3. Receive the error

Of course there is enough disk space available.

Possible Solution(s)
Is there a way to obtain these packages for the s390x architecture directly from somewhere, unpack them (or get them already unpacked), and install them manually?

Notes

Output of rustup --version: rustup 1.18.3 (435397f48 2019-05-22)
Output of rustup show:

./rustup show
Default host: s390x-unknown-linux-gnu

stable-s390x-unknown-linux-gnu (default)
rustc 1.37.0 (eae3437df 2019-08-13)
@kinnison
Contributor

Hi,

Based on the info: caused by: lzma data error I'd guess that the s390x nightly archives are somehow damaged/corrupted and that it happened before the channel manifests were created. Are you able to install from a different nightly (e.g. one from a few days ago)?

Thanks,

D.

@pietroalbini
Member

pietroalbini commented Aug 21, 2019

@nikita-fuchs could you try running this command on your s390x machine?

rustup toolchain install nightly-x86_64-unknown-linux-gnu

@nikita-fuchs
Author

Thanks for the fast responses!

@pietroalbini that produces the same error, unfortunately.
@kinnison Sorry, I can't seem to find a list of release dates for that (I'm not a Rust developer, unfortunately, so maybe there is a simple way I just haven't stumbled upon). Could you hint me at a command?

@pietroalbini
Member

@nikita-fuchs then it's not an infrastructure problem: the packages in the archive are fine.

You could download Rust manually, but if there is a miscompilation in rustup I doubt it's going to be reliable anyway. It would be useful to see which nightly introduced the regression though: a nightly is released every day and has a toolchain name of nightly-YYYY-MM-DD (for example nightly-2019-08-21 for today's one), so you could try doing a (manual) binary search on them.
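The suggested bisection is just a binary search over ordered nightly dates. A sketch in Rust, where `nightly_is_good` is a hypothetical stand-in for actually running `rustup toolchain install nightly-YYYY-MM-DD` and checking the result:

```rust
// Sketch of a manual binary search over nightly dates. The real check
// would shell out to `rustup toolchain install nightly-YYYY-MM-DD`;
// here `nightly_is_good` is a hypothetical closure for illustration.
fn bisect_nightlies(days: &[&str], nightly_is_good: impl Fn(&str) -> bool) -> Option<usize> {
    // Returns the index of the first bad nightly, assuming the slice is
    // ordered oldest-to-newest and goes good... good... bad... bad.
    let (mut lo, mut hi) = (0, days.len());
    while lo < hi {
        let mid = lo + (hi - lo) / 2;
        if nightly_is_good(days[mid]) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    if lo < days.len() { Some(lo) } else { None }
}

fn main() {
    let days = ["2019-07-03", "2019-07-04", "2019-07-05", "2019-07-06", "2019-07-07"];
    // Pretend everything from 2019-07-06 onwards is broken.
    let first_bad = bisect_nightlies(&days, |d| d < "2019-07-06");
    println!("first bad nightly: {:?}", first_bad.map(|i| days[i]));
    // → first bad nightly: Some("2019-07-06")
}
```

With roughly 45 nightlies between the last known-good stable and today, this needs about six installs instead of 45.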

@nikita-fuchs
Author

nikita-fuchs commented Aug 21, 2019

@pietroalbini ./rustup default nightly-2019-07-05 works:

  nightly-2019-07-05-s390x-unknown-linux-gnu installed - rustc 1.37.0-nightly (24a9bcbb7 2019-07-04)

(note it installs the commit from the day before)

./rustup default nightly-2019-07-06 doesn't, throwing that out-of-space error. In the issue where I referenced this one (see above), somebody mentioned that this behavior would also appear in the next stable if it's not fixed?

Thank you very much for your help.

@kinnison
Contributor

The "maybe you're out of space?" part is purely a hint, because that failure mode is the most common. The lzma data error more likely means the lzma library is flaky on s390x, and the issue somehow isn't tickled by the 2019-07-05 toolchain or by previous stables.

Are you able to try a local rustup build?

$ git clone https://github.com/rust-lang/rustup.rs
$ cd rustup.rs
$ cargo build --features vendored-openssl
$ target/debug/rustup-init --no-modify-path -y
$ rustup --no-self-update toolchain install nightly

@kinnison kinnison changed the title "error: failed to extract package (perhaps you ran out of disk space?)" LZMA data error on s390x Aug 21, 2019
@pietroalbini
Member

PRs merged between nightly-2019-07-05 and nightly-2019-07-06:

rust-lang/rust#62153 seems really suspicious, but I don't have time to investigate right now.

In the issue where I referenced this one (see above), somebody mentioned that this behavior would also appear in the next stable if it's not fixed?

That's correct.

@pietroalbini
Member

Oh, another thing that might be useful to debug: could you download this file and extract it manually?

@nikita-fuchs
Author

@pietroalbini it extracts without an issue, seems it's all there:

root@zla19054:~/.cargo/bin/nightly-manual-download/rustc-nightly-s390x-unknown-linux-gnu# ls -l
total 88
-rw-rw-r-- 1 buwd buwd     6 Aug 20 21:59 components
-rw-r--r-- 1 buwd buwd  9322 Aug 20 21:59 COPYRIGHT
-rw-rw-r-- 1 buwd buwd    40 Aug 20 21:59 git-commit-hash
-rwxr-xr-x 1 buwd buwd 27888 Aug 20 21:59 install.sh
-rw-r--r-- 1 buwd buwd 10847 Aug 20 21:59 LICENSE-APACHE
-rw-r--r-- 1 buwd buwd  1023 Aug 20 21:59 LICENSE-MIT
-rw-r--r-- 1 buwd buwd  9579 Aug 20 21:59 README.md
drwxrwxr-x 5 buwd buwd  4096 Aug 21 12:02 rustc
-rw-rw-r-- 1 buwd buwd     2 Aug 20 21:59 rust-installer-version
-rw-rw-r-- 1 buwd buwd    37 Aug 20 21:59 version

any chance to roll with that somehow?

@pietroalbini
Member

@kinnison suggested we could switch rustup to use gzip instead of xz on s390x, and that could work, but I agree with them it would be best to find out the root cause of this if possible.

cc @alexcrichton: apparently, after rust-installer started using multithreaded compression, it's no longer possible to extract any XZ tarball on s390x with rustup (tar-rs and xz2-rs), while it works fine with the tar CLI. The error is just lzma data error, and on x86 everything is fine. Do you have any idea of the cause?

@alexcrichton
Member

Hm interesting! This does indeed sort of sound like a bug in xz2-rs on s390x, although I wouldn't really know how best to track that down.

@nikita-fuchs would you be able to check out https://github.com/alexcrichton/xz2-rs on your s390x machine and run the test suite in debug/release mode? Or perhaps write a small example using XzDecoder to decode one of the bad tarballs?
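A small decode program along those lines might look like this (a sketch assuming the xz2 crate as a dependency; the tarball path is a placeholder):

```rust
// Sketch: decode an XZ tarball with xz2's XzDecoder, as suggested above.
// Assumes the xz2 crate is in Cargo.toml; the path is a placeholder.
use std::fs::File;
use std::io::{self, Read};

use xz2::read::XzDecoder;

fn main() -> io::Result<()> {
    let file = File::open("rustc-nightly-s390x-unknown-linux-gnu.tar.xz")?;
    let mut decoder = XzDecoder::new(file);
    let mut decompressed = Vec::new();
    // An "lzma data error" on s390x would surface here as an io::Error.
    decoder.read_to_end(&mut decompressed)?;
    println!("decompressed {} bytes", decompressed.len());
    Ok(())
}
```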

@nikita-fuchs
Author

@alexcrichton I would love to, but my Rust knowledge is near zero unfortunately; I'm just trying to build an existing application on the mainframe. If you give me the commands for what you suggested, I'll try them. I'm also talking to IBM about providing the community some means of access to mainframes; it's in their best interest.

@tesuji
Contributor

tesuji commented Aug 21, 2019

I believe all you want to do in this case is:

cargo test
cargo test --features tokio
LZMA_API_STATIC=1 cargo run --manifest-path systest/Cargo.toml

but Alex could confirm.

@nikita-fuchs
Author

thank you @lzutao, it fails at cargo test:

error: failed to run custom build command for `lzma-sys v0.1.14 (/home/pocadm2/rustDebugging/xz2-rs/lzma-sys)`
process didn't exit successfully: `/home/pocadm2/rustDebugging/xz2-rs/target/debug/build/lzma-sys-dfccfe5e9677a392/build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-env-changed=LZMA_API_STATIC
cargo:include=/home/pocadm2/rustDebugging/xz2-rs/lzma-sys/xz-5.2/src/liblzma/api

--- stderr
thread 'main' panicked at 'failed to read dir xz-5.2/src/liblzma/common: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:997:5
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

warning: build failed, waiting for other jobs to finish...
error: build failed

same with the other two commands.

@nikita-fuchs
Author

$ rustup --no-self-update toolchain install nightly

@kinnison the local rustup build unfortunately fails with the same old issue:

info: installing component 'rustc'
info: rolling back changes
error: failed to extract package (perhaps you ran out of disk space?)
info: caused by: lzma data error

thank you all for your great support.

@mati865
Contributor

mati865 commented Aug 22, 2019

@nikita-fuchs looks like you are missing git submodule.
Please run git submodule update --init --recursive inside xz2-rs and test xz2 again.

@nikita-fuchs
Author

nikita-fuchs commented Aug 22, 2019

Edit: LZMA_API_STATIC=1 cargo run --manifest-path systest/Cargo.toml passes!

RUNNING ALL TESTS
PASSED 470 tests

@mati865 thanks, unfortunately after this, ~/rustDebugging/xz2-rs$ cargo test returns

test write::tests::smoke ... FAILED
test read::tests::qc ... FAILED

failures:

---- write::tests::smoke stdout ----
thread 'write::tests::smoke' panicked at 'called `Result::unwrap()` on an `Err` value: Custom { kind: Other, error: Data }', src/libcore/result.rs:999:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

---- read::tests::qc stdout ----
thread 'read::tests::qc' panicked at 'called `Result::unwrap()` on an `Err` value: Custom { kind: Other, error: Data }', src/libcore/result.rs:999:5
(the same panic line repeats for each failing quickcheck case)
thread 'read::tests::qc' panicked at '[quickcheck] TEST FAILED (runtime error). Arguments: ([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1])
Error: "called `Result::unwrap()` on an `Err` value: Custom { kind: Other, error: Data }"', /home/pocadm2/.cargo/registry/src/github.com-eae4ba8cbf2ce1c7/quickcheck-0.8.5/src/tester.rs:176:28


failures:
    read::tests::qc
    write::tests::smoke

test result: FAILED. 10 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out

error: test failed, to rerun pass '--lib'

same for --tokio

@kinnison
Contributor

This definitely sounds like @alexcrichton and @nikita-fuchs could do with putting their heads together over xz2-rs :D

@alexcrichton
Member

Ok, thanks for the confirmation @nikita-fuchs! I've done some testing myself and was able to reproduce this using QEMU emulation.

I believe we were building liblzma incorrectly (apparently we needed to tell its build system it was a big-endian target) and tests are now green! I've published a new version of lzma-sys with the fix, but can you confirm that the tests pass locally for you? If so, rustup likely just needs to update its lockfile and this issue should be good to go.
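For illustration only (this is not the actual liblzma fix): XZ streams store multi-byte fields in a fixed byte order, so code that decodes them with the host's native order goes wrong on a big-endian target like s390x while appearing fine on x86. A minimal Rust sketch of that class of bug:

```rust
// Illustration of the endianness class of bug (not the liblzma fix itself):
// decoding a multi-byte field with the host's native byte order yields
// garbage on a big-endian target such as s390x.
fn decode_le(bytes: [u8; 4]) -> u32 {
    // Correct: the on-disk format fixes the byte order explicitly.
    u32::from_le_bytes(bytes)
}

fn decode_native(bytes: [u8; 4]) -> u32 {
    // Wrong: this silently depends on the host's endianness.
    u32::from_ne_bytes(bytes)
}

fn main() {
    let field = [0x01, 0x00, 0x00, 0x00]; // little-endian encoding of 1
    assert_eq!(decode_le(field), 1); // correct everywhere
    if cfg!(target_endian = "little") {
        // On x86 both agree, which is why the bug hid there.
        assert_eq!(decode_native(field), 1);
    } else {
        // On s390x this would be 0x01000000 = 16777216, not 1.
        assert_eq!(decode_native(field), 16_777_216);
    }
    println!("endianness checks passed on this host");
}
```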

@kinnison
Contributor

Wonderful news @alexcrichton thank you!

@nikita-fuchs If you can run the test suite for xz2-rs and it works, you can build a local rustup updated with it and check that (cargo update -p lzma-sys in your rustup checkout, then rebuild and install as per the above comment). Assuming that goes well, I can push an update to rustup's lockfile before we make our new release, which is due soon.

@nikita-fuchs
Author

Hey, sorry for coming back late, I had to be abroad.

@kinnison I hope I did everything correctly, but after reinstalling rustup and running
rustup toolchain install nightly I'm getting the same old lzma data error.

@kinnison
Contributor

kinnison commented Sep 4, 2019

Did you build that rustup from master? The change was merged but not yet released.

@nikita-fuchs
Author

Almost thought I needed to ;)
So building from master (simply cargo build is the way to go, right?) returned:

error: failed to run custom build command for `openssl-sys v0.9.49`

Caused by:
  process didn't exit successfully: `/home/pocadm2/rustDebugging/rustup.rs/target/debug/build/openssl-sys-b30631ed0c8389a9/build-script-main` (exit code: 101)
--- stdout
cargo:rustc-cfg=const_fn
cargo:rerun-if-env-changed=S390X_UNKNOWN_LINUX_GNU_OPENSSL_LIB_DIR
S390X_UNKNOWN_LINUX_GNU_OPENSSL_LIB_DIR unset
cargo:rerun-if-env-changed=OPENSSL_LIB_DIR
OPENSSL_LIB_DIR unset
cargo:rerun-if-env-changed=S390X_UNKNOWN_LINUX_GNU_OPENSSL_INCLUDE_DIR
S390X_UNKNOWN_LINUX_GNU_OPENSSL_INCLUDE_DIR unset
cargo:rerun-if-env-changed=OPENSSL_INCLUDE_DIR
OPENSSL_INCLUDE_DIR unset
cargo:rerun-if-env-changed=S390X_UNKNOWN_LINUX_GNU_OPENSSL_DIR
S390X_UNKNOWN_LINUX_GNU_OPENSSL_DIR unset
cargo:rerun-if-env-changed=OPENSSL_DIR
OPENSSL_DIR unset
run pkg_config fail: "Failed to run `\"pkg-config\" \"--libs\" \"--cflags\" \"openssl\"`: No such file or directory (os error 2)"

--- stderr
thread 'main' panicked at '

Could not find directory of OpenSSL installation, and this `-sys` crate cannot
proceed without this knowledge. If OpenSSL is installed and this crate had
trouble finding it,  you can set the `OPENSSL_DIR` environment variable for the
compilation process.

Make sure you also have the development packages of openssl installed.
For example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.

If you're in a situation where you think the directory *should* be found
automatically, please open a bug at https://github.com/sfackler/rust-openssl
and include information about your system as well as this message.

$HOST = s390x-unknown-linux-gnu
$TARGET = s390x-unknown-linux-gnu
openssl-sys = 0.9.49


It looks like you're compiling on Linux and also targeting Linux. Currently this
requires the `pkg-config` utility to find OpenSSL but unfortunately `pkg-config`
could not be found. If you have OpenSSL installed you can likely fix this by
installing `pkg-config`.

', /root/.cargo/registry/src/github.com-eae4ba8cbf2ce1c7/openssl-sys-0.9.49/build/find_normal.rs:150:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

warning: build failed, waiting for other jobs to finish...
error: build failed

@tesuji
Contributor

tesuji commented Sep 4, 2019

You need cargo build --release --features vendored-openssl, or else install libssl-dev.

@nikita-fuchs
Author

The build succeeded; how do I install from it?

@tesuji
Contributor

tesuji commented Sep 4, 2019

Just copy it to ~/.cargo/bin/rustup.

@nikita-fuchs
Author

Sorry for failing with basic things, but where does the build put the rustup binary to copy over to ~/.cargo/bin/rustup?

@tesuji
Contributor

tesuji commented Sep 4, 2019

Sorry, I should have made it clearer. There is a rustup-init file in the target/release directory.

@mati865
Contributor

mati865 commented Sep 4, 2019

Another option is to run cargo install --path . -f inside the rustup clone.

@kinnison
Contributor

kinnison commented Sep 4, 2019

@nikita-fuchs The easiest way is, assuming you built in debug mode, to run target/debug/rustup-init --no-modify-path -y which will install rustup over the top of your current installation. Swap debug for release if you built in release mode.

You should then remember to always pass --no-self-update to any rustup install or rustup update you run, otherwise it will "update" itself back to the release.

If you give me a 👍 then I'll consider that pretty much the last flag before I need to start turning the release crank on a proper release which contains the fix.

@nikita-fuchs
Author

nikita-fuchs commented Sep 4, 2019

Thanks everybody - @kinnison with the built release I get this when installing the nightly:

rustup toolchain install nightly --no-self-update
info: syncing channel updates for 'nightly-s390x-unknown-linux-gnu'
info: latest update on 2019-09-04, rust version 1.39.0-nightly (b9de4ef89 2019-09-03)
info: downloading component 'rustc'
info: downloading component 'rust-std'
224.4 MiB / 224.4 MiB (100 %) 196.4 MiB/s in  1s ETA:  0s
info: downloading component 'cargo'
info: installing component 'rustc'
 54.4 MiB /  54.4 MiB (100 %)  12.6 MiB/s in  4s ETA:  0s
info: installing component 'rust-std'
info: rolling back changes
error: File too big rust-std-nightly-s390x-unknown-linux-gnu/rust-std-s390x-unknown-linux-gnu/lib/rustlib/s390x-unknown-linux-gnu/lib/librustc-0b4e15ecb8f78581.rlib 107228482

And it's not the disk:

df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                      7.8G     0  7.8G   0% /dev
tmpfs                     1.6G  340K  1.6G   1% /run
/dev/mapper/mpatha-part1  196G   37G  150G  20% /
tmpfs                     7.8G     0  7.8G   0% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
tmpfs                     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/loop1                 86M   86M     0 100% /snap/docker/378
/dev/loop2                 53M   53M     0 100% /snap/snapcraft/3255
/dev/loop3                 86M   86M     0 100% /snap/docker/385
/dev/loop4                 81M   81M     0 100% /snap/core/7271
/dev/loop6                 81M   81M     0 100% /snap/core/7394
/dev/loop5                 53M   53M     0 100% /snap/snapcraft/3309
tmpfs                     1.6G     0  1.6G   0% /run/user/1004

@kinnison
Contributor

kinnison commented Sep 4, 2019

Oh fascinating -- we have a check in Rustup that ensures that the unpacked files don't appear bogus -- assuming any given file won't be larger than 100 megs.

I'mma file a bug now.

@mati865
Contributor

mati865 commented Sep 4, 2019

@kinnison this check is 95.37 MiB BTW 😁

@kinnison
Contributor

kinnison commented Sep 4, 2019

@nikita-fuchs If you change the line pointed to in #1982 to something like 200_000_000 then you can test your install.
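The guard in question amounts to a per-file size check during unpacking. A simplified sketch for illustration (not rustup's actual code; the real constant lives at the line referenced in #1982):

```rust
// Simplified sketch of a per-file size sanity check like the one rustup
// applies while unpacking (not the actual rustup code). Files larger
// than the limit are rejected as a probably-corrupt archive.
const MAX_FILE_SIZE: u64 = 200_000_000; // bumped from the ~95 MiB default for testing

fn check_file_size(path: &str, size: u64) -> Result<(), String> {
    if size > MAX_FILE_SIZE {
        Err(format!("File too big {} {}", path, size))
    } else {
        Ok(())
    }
}

fn main() {
    // The rlib from the report is ~107 MB, which the old limit rejected
    // but a 200 MB limit accepts:
    let r = check_file_size("lib/librustc-0b4e15ecb8f78581.rlib", 107_228_482);
    println!("{:?}", r);
}
```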

@nikita-fuchs
Author

@kinnison I tried, but for some strange reason I cannot build at all anymore:

cargo build --release --features vendored-openssl

gives me:

 Compiling rustup v1.18.3 (/home/pocadm2/rustDebugging/rustup.rs)
error: enum variants on type aliases are experimental
  --> src/config.rs:27:13
   |
27 |             Self::Environment => write!(f, "environment override by RUSTUP_TOOLCHAIN"),
   |             ^^^^^^^^^^^^^^^^^

error: enum variants on type aliases are experimental
  --> src/config.rs:28:13
   |
28 |             Self::OverrideDB(path) => write!(f, "directory override for '{}'", path.display()),
   |             ^^^^^^^^^^^^^^^^^^^^^^

error: enum variants on type aliases are experimental
  --> src/config.rs:29:13
   |
29 |             Self::ToolchainFile(path) => write!(f, "overridden by '{}'", path.display()),
   |             ^^^^^^^^^^^^^^^^^^^^^^^^^

error: enum variants on type aliases are experimental
  --> src/diskio/threaded.rs:23:9
   |
23 |         Self::Sentinel
   |         ^^^^^^^^^^^^^^

error: enum variants on type aliases are experimental
   --> src/dist/manifest.rs:288:13
    |
288 |             Self::Wildcard(tpkg) => Some(tpkg),
    |             ^^^^^^^^^^^^^^^^^^^^

error: enum variants on type aliases are experimental
   --> src/dist/manifest.rs:289:13
    |
289 |             Self::Targeted(tpkgs) => tpkgs.get(target),
    |             ^^^^^^^^^^^^^^^^^^^^^

error: enum variants on type aliases are experimental
   --> src/dist/manifest.rs:294:13
    |
294 |             Self::Wildcard(tpkg) => Some(tpkg),
    |             ^^^^^^^^^^^^^^^^^^^^

error: enum variants on type aliases are experimental
   --> src/dist/manifest.rs:295:13
    |
295 |             Self::Targeted(tpkgs) => tpkgs.get_mut(target),
    |             ^^^^^^^^^^^^^^^^^^^^^

error: enum variants on type aliases are experimental
   --> src/utils/raw.rs:280:13
    |
280 |             Self::Io(e) => write!(f, "Io: {}", e),
    |             ^^^^^^^^^^^

error: enum variants on type aliases are experimental
   --> src/utils/raw.rs:281:13
    |
281 |             Self::Status(s) => write!(f, "Status: {}", s),
    |             ^^^^^^^^^^^^^^^

error: aborting due to 10 previous errors

error: Could not compile `rustup`.

To learn more, run the command again with --verbose.

cargo is on v1.36, if that matters somehow.
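For context, the errors above come from `Self::Variant` paths in match arms ("type alias enum variants"), which only stabilized in Rust 1.37; hence the failure on 1.36. A minimal illustration, using a hypothetical enum loosely modeled on rustup's src/config.rs:

```rust
use std::fmt;

// `Self::Variant` paths for enum variants stabilized in Rust 1.37; on
// 1.36 the enum name must be spelled out, which is why the build fails.
// This enum is a simplified stand-in, not rustup's actual type.
enum Override {
    Environment,
    OverrideDB(String),
}

impl fmt::Display for Override {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // 1.37+ spelling, as used in the rustup sources above:
            Self::Environment => write!(f, "environment override by RUSTUP_TOOLCHAIN"),
            // Pre-1.37 code would have to write `Override::OverrideDB(path)` here.
            Self::OverrideDB(path) => write!(f, "directory override for '{}'", path),
        }
    }
}

fn main() {
    println!("{}", Override::OverrideDB("/tmp/project".to_string()));
    // → directory override for '/tmp/project'
}
```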

@kinnison
Contributor

kinnison commented Sep 5, 2019

Oh dear, I have a feeling you might need v1.37 since we're moving forward with clippy lint updates as we go. Give me a bit and I'll see if I can sort out a cross-build for you.

@kinnison
Contributor

kinnison commented Sep 5, 2019

@nikita-fuchs I obviously can't vouch for it since I just cross-built it, but...

rustup-init-s390x.zip

Give that a go?

@nikita-fuchs
Author

nikita-fuchs commented Sep 5, 2019

I can't believe it:

  • The 1.37 build worked
  • Building rustup from master worked
  • Installing the nightly worked

🥳

What can be done to support you in supporting s390x better? I'll talk with some people at IBM again about providing some access to mainframes; their wish for long-term support of their architecture will remain unheard as long as maintainers don't have access to their machines.

@kinnison
Contributor

kinnison commented Sep 5, 2019

What can be done to support you in supporting s390x better? I'll talk with some people at IBM again about providing some access to mainframes; their wish for long-term support of their architecture will remain unheard as long as maintainers don't have access to their machines.

From our perspective, all we can do is run builds and tests on services which integrate with GitHub PRs etc. We use Travis and AppVeyor for now, and may switch to Azure Pipelines in the future. If you know of an s390x-capable service which binds to GitHub and which we could use to build and run tests on s390x, I'd be interested; if you do, please open an issue to add support for it :D In the meantime, be careful to always use --no-self-update until the next rustup release.
