
Strange performance and scalability issues with some of the build systems #3163

Closed
VorpalBlade opened this issue Jun 18, 2022 · 17 comments · Fixed by #3165 or #3167

@VorpalBlade

Describe the bug

After reading a recent Phoronix benchmark (a bit down the page), I decided to investigate why zstd on Arch Linux was so much slower (10-20x). It turned out that something is wrong with some of the build systems included with zstd!

When zstd is built with the cmake or meson build systems there is negative scaling with the number of threads, while when building with the Makefile in the top level directory, there is positive scaling with the number of threads.

To Reproduce
Steps to reproduce the behavior:

  1. Build with the build system you want to investigate. One of:
    • Plain make
    • mkdir build && cd build && cmake ../zstd-1.5.2/build/cmake/ && make
    • meson setup builddir && cd builddir && ninja
  2. Test the resulting binary on a large file. I used the FreeBSD image, as that is what the Phoronix Test Suite used (albeit an older version that I can't find); I can reproduce the same issue with the linked file. Use the following pair of benchmark commands and compare scaling:
  • path/to/zstd -T1 -b4 path/to/FreeBSD-13.1-RELEASE-amd64-memstick.img
  • path/to/zstd -T6 -b4 path/to/FreeBSD-13.1-RELEASE-amd64-memstick.img (adjust -T6 based on the number of cores you have)

Note: I see the same pattern at other compression levels, such as 6 and 8, not just 4. The exact level does not appear to matter, as long as it is kept consistent between runs.

Expected behavior
I expect that all build systems should result in binaries with roughly the same behaviour. Performance and scaling should be similar.

Actual results

The output below has been abbreviated for clarity: repeated command lines have been elided, showing only the output. Three runs were performed for each combination of program and flags. As can be seen, the results are reasonably consistent run-to-run (at least relative to the huge discrepancies between builds).

  1. CMake
$ programs/zstd -T1 -b4 ~/Downloads/FreeBSD-13.1-RELEASE-amd64-memstick.img
4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1108.5 MB/s, 4999.1 MB/s
4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1152.9 MB/s, 5006.7 MB/s
4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1102.1 MB/s, 4978.5 MB/s

$ programs/zstd -T6 -b4 ~/Downloads/FreeBSD-13.1-RELEASE-amd64-memstick.img
4#md64-memstick.img :1172165120 -> 781637623 (x1.500),  717.0 MB/s, 4940.0 MB/s
4#md64-memstick.img :1172165120 -> 781637623 (x1.500),  759.3 MB/s, 4893.5 MB/s
4#md64-memstick.img :1172165120 -> 781637623 (x1.500),  697.5 MB/s, 4869.6 MB/s
  2. Meson
$ programs/zstd -T1 -b4 ~/Downloads/FreeBSD-13.1-RELEASE-amd64-memstick.img
4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1097.0 MB/s, 5029.3 MB/s
4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1098.2 MB/s, 4970.2 MB/s
4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1117.8 MB/s, 4952.6 MB/s

$ programs/zstd -T6 -b4 ~/Downloads/FreeBSD-13.1-RELEASE-amd64-memstick.img
4#md64-memstick.img :1172165120 -> 781637623 (x1.500),  735.0 MB/s, 4982.8 MB/s
4#md64-memstick.img :1172165120 -> 781637623 (x1.500),  758.6 MB/s, 4966.9 MB/s
4#md64-memstick.img :1172165120 -> 781637623 (x1.500),  727.9 MB/s, 4949.7 MB/s
  3. Makefile
$ ./zstd -T1 -b4 ~/Downloads/FreeBSD-13.1-RELEASE-amd64-memstick.img
4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1118.2 MB/s, 4971.0 MB/s
4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1105.4 MB/s, 4931.3 MB/s
4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1150.6 MB/s, 4930.1 MB/s

$ ./zstd -T6 -b4 ~/Downloads/FreeBSD-13.1-RELEASE-amd64-memstick.img
4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 3518.0 MB/s, 4898.2 MB/s
4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 3486.3 MB/s, 4917.0 MB/s
4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 3528.1 MB/s, 4900.8 MB/s

Analysis of results

For CMake and Meson: it can be seen that the performance goes down between 1 thread and 6 threads: ~1100 MB/s to ~700 MB/s.

For plain make, the performance goes up between 1 thread and 6 threads: ~1100 MB/s to ~3500 MB/s.

Decompression speed (the second value) does not seem to vary significantly across the experiments however.

Desktop (please complete the following information):

  • OS: Arch Linux
  • Version 1.5.2 (upstream tarball)
  • Compiler: GCC 12.1.0
  • Flags: Defaults for each build system, though I also tested basics such as -O2; it did not affect the overall behaviour.
  • Other relevant hardware specs: AMD Ryzen 5 5600X 6-Core Processor
  • Build system: Multiple ones, that is the whole point of this bug


@VorpalBlade
Author

VorpalBlade commented Jun 18, 2022

In the downstream bug report for Arch (https://bugs.archlinux.org/task/75104), "Antonio Rojas (arojas)" made an interesting discovery: Apparently the issue is with line 28 in AddZstdCompilationFlags.cmake:

EnableCompilerFlag("-std=c99" true false)

Commenting out this line brings performance up to par with the plain make build. Setting it to c11 also fixes the performance. I have not checked for similar issues in the meson build system.

I have no idea why the C standard version would matter for threaded performance, but if that is the cause, the fix is rather simple.

Sidenote: It seems silly to maintain so many different build systems. The risk of something like this happening is much greater, and the testing matrix becomes much larger.

@StefanBruens

The default for GCC would be -std=gnu18, i.e. using -std=c99 disables all "GNU" GCC extensions.

man gcc

gnu17
gnu18
GNU dialect of ISO C17. This is the default for C code.

@berolinux

Confirmed on OpenMandriva (which builds with clang rather than gcc by default) as well. -std=c99 lowers performance significantly (especially with multiple threads), while any higher standard is fine. There is no significant difference between cXX and gnuXX.
We'll stick with building with cmake, but have added
sed -i -e 's,c99,c18,g' build/cmake/CMakeModules/AddZstdCompilationFlags.cmake
to the build script.

Also confirmed that (even on a machine with many CPUs -- ThreadRipper 1950x, 16 cores, 32 threads) adding more threads beyond a certain point actually lowers performance.

$ zstd -T1 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781156418 (x1.501),  887.9 MB/s, 3376.3 MB/s
$ zstd -T2 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 1435.1 MB/s, 3326.1 MB/s
$ zstd -T3 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 1874.7 MB/s, 3334.7 MB/s
$ zstd -T4 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 2093.7 MB/s, 3348.9 MB/s
$ zstd -T5 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 1934.6 MB/s, 3382.0 MB/s
$ zstd -T6 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 2206.8 MB/s, 3326.2 MB/s
$ zstd -T7 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 2285.5 MB/s, 3312.2 MB/s
$ zstd -T8 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 2290.9 MB/s, 3363.8 MB/s
$ zstd -T9 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 2073.0 MB/s, 3330.5 MB/s
$ zstd -T10 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 2083.5 MB/s, 3343.3 MB/s
[...] keeps going down slightly from there [...]
$ zstd -T16 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 2011.2 MB/s, 3375.1 MB/s
$ zstd -T32 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 1690.4 MB/s, 3342.7 MB/s

Might be good to cap -T0 at something way lower than the number of available CPUs.

@DanielGibson

DanielGibson commented Jun 19, 2022

Regarding the degradation with more threads: I can observe this on my system as well, but the turning point is different.
I have a Ryzen 5950X (also 16 cores, 32 threads), running (X)Ubuntu 22.04 with kernel 5.15.0-39-generic, zstd 1.4.8+dfsg-3build1 (from the Ubuntu package, not compiled myself).

For me the peak is at

$ zstd -T17 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),4173.4 MB/s ,4231.1 MB/s 

afterwards it degrades slowly (though sometimes one more thread is a bit faster again; see below for all results).
It probably makes sense that the peak is at (about) the number of physical cores, though I'm not sure why it's at 8 threads for @berolinux, who also has 16 cores with 32 threads.
Maybe because the Threadripper 1950X has 32MB of L3 cache, while my Ryzen 5950X has 64MB? (L2 cache per core is identical for both CPUs at 512KB.)
Another factor might be how much the cores clock down when more of them are under load.

Anyway, the point is that it might not be straightforward to automatically decide the ideal number of threads to use, because it seems to depend heavily on the specific CPU (and not just its core/thread count).

Click to see all the benchmarking results
$ zstd -T1 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781297398 (1.500), 800.4 MB/s ,4267.4 MB/s 
$ zstd -T2 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),1470.3 MB/s ,4298.6 MB/s 
$ zstd -T3 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),2035.4 MB/s ,4286.4 MB/s 
$ zstd -T4 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),2370.5 MB/s ,4262.8 MB/s 
$ zstd -T5 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),2721.0 MB/s ,4273.2 MB/s 
$ zstd -T6 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3082.8 MB/s ,4236.2 MB/s 
$ zstd -T7 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3323.1 MB/s ,4165.4 MB/s 
$ zstd -T8 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3425.0 MB/s ,4273.7 MB/s 
$ zstd -T9 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3711.1 MB/s ,4176.9 MB/s 
$ zstd -T10 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3828.7 MB/s ,4235.2 MB/s 
$ zstd -T11 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3860.6 MB/s ,4192.0 MB/s 
$ zstd -T12 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3893.5 MB/s ,4199.7 MB/s 
$ zstd -T13 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3928.0 MB/s ,4236.2 MB/s 
$ zstd -T14 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3968.6 MB/s ,4277.2 MB/s 
$ zstd -T15 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3981.8 MB/s ,4253.3 MB/s 
$ zstd -T16 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),4028.1 MB/s ,4145.1 MB/s 
$ zstd -T17 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),4173.4 MB/s ,4231.1 MB/s 
$ zstd -T18 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),4105.2 MB/s ,4220.5 MB/s 
$ zstd -T19 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),4029.3 MB/s ,4195.1 MB/s 
$ zstd -T20 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),4128.9 MB/s ,4234.0 MB/s 
$ zstd -T21 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),4054.9 MB/s ,4234.9 MB/s 
$ zstd -T22 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3959.1 MB/s ,4212.5 MB/s 
$ zstd -T23 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3973.4 MB/s ,4207.6 MB/s 
$ zstd -T24 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3799.0 MB/s ,4167.8 MB/s 
$ zstd -T25 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3744.3 MB/s ,4275.2 MB/s 
$ zstd -T26 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3791.0 MB/s ,4185.5 MB/s 
$ zstd -T27 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3761.7 MB/s ,4269.4 MB/s 
$ zstd -T28 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3632.5 MB/s ,4316.9 MB/s 
$ zstd -T29 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3661.5 MB/s ,4277.4 MB/s 
$ zstd -T30 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3524.4 MB/s ,4301.5 MB/s 
$ zstd -T31 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3583.2 MB/s ,4222.2 MB/s 
$ zstd -T32 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3540.3 MB/s ,4176.2 MB/s 
$ zstd -T33 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3514.0 MB/s ,4184.6 MB/s 
$ zstd -T34 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781785560 (1.499),3358.4 MB/s ,4257.7 MB/s

Update: Just confirmed that the behavior is similar with 1.5.2 (built with just make), though there the peak is at -T16 instead of -T17 and the performance is generally better than with 1.4.8:

Click for results
$ ./zstd -T1 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781156418 (x1.501), 1043.2 MB/s, 4763.3 MB/s
$ ./zstd -T2 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 1827.6 MB/s, 4722.2 MB/s
...
$ ./zstd -T8 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 3898.2 MB/s, 4722.0 MB/s
$ ./zstd -T9 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 4161.9 MB/s, 4696.4 MB/s
...
$ ./zstd -T15 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 4592.8 MB/s, 4754.7 MB/s
$ ./zstd -T16 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 4618.3 MB/s, 4671.8 MB/s
$ ./zstd -T17 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 4583.5 MB/s, 4709.7 MB/s
$ ./zstd -T18 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 4349.8 MB/s, 4705.1 MB/s
...
$ ./zstd -T24 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 4135.3 MB/s, 4655.8 MB/s
...
$ ./zstd -T31 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 3931.4 MB/s, 4725.2 MB/s
$ ./zstd -T32 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 3892.2 MB/s, 4755.7 MB/s
$ ./zstd -T33 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 3756.0 MB/s, 4790.4 MB/s
$ ./zstd -T34 -b4 FreeBSD-13.1-RELEASE-amd64-memstick.img 
 4#md64-memstick.img :1172165120 -> 781637623 (x1.500), 3645.7 MB/s, 4772.4 MB/s

@jbeich

jbeich commented Jun 19, 2022

Also discovered on FreeBSD half a year ago, see
https://lists.freebsd.org/archives/freebsd-current/2021-December/001183.html

@eli-schwartz
Contributor

Sidenote: It seems silly to maintain so many different build systems. The risk of something like this happening is much greater, and the testing matrix becomes much larger.

Officially only Make is supported, the others are all third-party contributions and, I believe, primarily exist to enable functionality such as Meson subproject wraps or cmake add_subdirectory().

Somewhat relevant as well: #2261

Also interesting: #1609 changed meson from c99 to gnu99 for unclear reasons, which is inconsistent with both cmake and make.

@ismail

ismail commented Jun 19, 2022

Can't confirm on macOS 12.4 arm64; hence it seems to be a gcc issue.

@mirh

mirh commented Jun 19, 2022

Sidenote: It seems silly to maintain so many different build systems. The risk of something like this happening is much greater, and the testing matrix becomes much larger.

The build systems have no inherent fault; the culprits are presumably the compiler and the code here relying on different clock assumptions.
Having different defaults was ironically a good thing, insofar as it allowed this mind-blowing bug to be discovered.

@th0rex

th0rex commented Jun 19, 2022

I had a quick look at the code; maybe this is due to this fallback (/* relies on standard C90 (note : clock_t measurements can be wrong when using multi-threading) */)? That comment indicates that timing can be wrong with multithreading, and I think the #else branch is what gets compiled in c90/c99 mode.

To test I ran it under hyperfine (i.e. not trusting the numbers the program outputs).
This is the result for a zstd built with make:

  Time (mean ± σ):      1.898 s ±  0.175 s    [User: 5.375 s, System: 3.132 s]
  Range (min … max):    1.711 s …  2.209 s    10 runs

And this is the result for cmake:

  Time (mean ± σ):      1.962 s ±  0.196 s    [User: 5.102 s, System: 3.199 s]
  Range (min … max):    1.592 s …  2.224 s    10 runs

Invocation for both cases was

hyperfine --warmup 3 --show-output 'yes | ./programs/zstd -T6 ~/some_large_file'

Maybe the way I'm benchmarking with hyperfine is flawed, or I'm missing something, but to me it looks like the performance of the binaries is basically identical (at least nowhere near as different as in the original comment), and the timing used to calculate throughput is simply wrong with multithreading and c90/c99 (this would also explain why the results stay the same in the single-threaded case). What do others think?

Edit: Oh, this was already discovered on the freebsd mailing list post, sorry.

@firasuke

What's the official upstream build system for zstd anyway? And why is that not clarified in the README file?

It also seems that the different build systems have different install targets and different configuration options as well...

Cyan4973 added a commit that referenced this issue Jun 19, 2022
it's not expected to be useful
and can actually lead to subtle side effects
such as #3163.
@Technetium1

This issue has been featured on Phoronix.

@rmader

rmader commented Jun 20, 2022

@Cyan4973: from what I understand #3167 should only fix the CMake case but not Meson. Is that correct and should the issue be reopened until that is fixed as well?

@Cyan4973
Contributor

I don't see any c99 in the meson script.

@DanielGibson

No, but gnu99 :)
https://github.com/facebook/zstd/blob/dev/build/meson/meson.build#L15

@Cyan4973
Contributor

fixed

@Cyan4973
Contributor

Cyan4973 commented Feb 1, 2023

yep
