Implement some sort of testing #4

Open
spasa47 opened this issue Jun 13, 2024 · 3 comments
Labels
enhancement (New feature or request), question (Further information is requested)

Comments

@spasa47
Owner

spasa47 commented Jun 13, 2024

I am not sure how to do this yet. Maybe use different input files in different formats,

  • and check whether the result (PDF) looks as expected?
  • or check whether the correct OpTeX output has been produced?
    • but that would require us to parse OpTeX code...
spasa47 added the enhancement (New feature or request) and question (Further information is requested) labels on Jun 13, 2024
@robertbachmann

Here's what I do for pandoc-optex:

  1. I have a test input like tests/<CASENAME>.{md,native,html}
  2. I have an optional file tests/<CASENAME>.flags which holds command-line flags (e.g. --standalone or -Mtoc)
  3. I have the expected result in tests/<CASENAME>.tex
  4. I have a run-test.sh script that runs pandoc on a test case and compares the results with diff -u.

e.g. run-test.sh tests/case1.md

# pseudo-code for run-test.sh

TESTNAME=$1
BASENAME=$(echo "$1" | sed -e 's/\.md//g' -e 's/\.native//g' -e 's/\.html//g') # strip suffix
FLAGS=""
if [[ -f "$BASENAME.flags" ]] ; then
    FLAGS=$(cat "$BASENAME.flags")
fi
# $FLAGS is left unquoted on purpose so multiple flags are split into separate arguments
pandoc -t optex-writer-script.lua $FLAGS "$TESTNAME" -o "$BASENAME.tmp"

if diff -u "$BASENAME.tex" "$BASENAME.tmp" > "$BASENAME-diff.tmp" ; then
    echo "PASS: $TESTNAME"
else
    echo "FAIL: $TESTNAME"
    cat "$BASENAME-diff.tmp"
fi

@robertbachmann

Big picture concept:

  1. Most of my tests use the approach mentioned above. The goal of these tests is to check the converter, i.e. given input X, does it produce Y?
  2. Additionally, I have a couple of "normal" Lua unit tests that check the results of various custom utility functions I use (e.g. parse_margin(), parse_picture_dimension(), parse_color()).
  3. The tests from (1) and (2) I just run with make test. The path to the pandoc executable can be overridden with PANDOC=/.../pandoc-3.1 make test so I can easily test multiple pandoc versions.
  4. I have an additional make test-pdf target that then typesets the .tex files with optex -interaction=batchmode and just checks whether a .tex file could be typeset/"compiled" or not (see the sketch after this list). If I add/update a test case I do a manual visual inspection of the .pdf file.
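
(A rough, hypothetical sketch of the test-pdf check from point 4, written in the same shell style as run-test.sh above; the tests/*.tex layout and the optex -interaction=batchmode call are taken from this thread, while the loop and reporting are assumptions, not the actual Makefile target.)

# hypothetical sketch of the test-pdf check
# try to typeset every expected-output .tex file and report whether it compiles
for TEX in tests/*.tex ; do
    if optex -interaction=batchmode "$TEX" > /dev/null 2>&1 ; then
        echo "PASS (typesets): $TEX"
    else
        echo "FAIL (typesets): $TEX"
    fi
done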

Technical notes:

  • make test-pdf needs a working TeX Live installation (which is a fairly large download), so I'm not going to run it on GitLab's CI infrastructure.
  • make test only needs make, bash and pandoc, so it's easy and fast to get it running in a GitLab CI job, e.g.:
- export V=3.2.1
- curl -L https://github.com/jgm/pandoc/releases/download/$V/pandoc-$V-linux-amd64.tar.gz -o pandoc.tgz
- tar -xzf pandoc.tgz
- export PANDOC=$PWD/pandoc-$V/bin/pandoc
- make test
  • On a physical machine it makes sense to execute tests in parallel (for example using xargs -P8 or make -j8; see the sketch below). I currently have ~370 test files. Running them sequentially takes about 35 seconds, while using -P8 brings it down to 7 seconds.
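
(For illustration only, one minimal way to get that parallelism with the run-test.sh script sketched above; the -P8 value and the tests/*.{md,native,html} layout come from this thread, the exact invocation is an assumption.)

# run all test cases with 8 parallel jobs (sketch, assumes run-test.sh from above)
printf '%s\n' tests/*.md tests/*.native tests/*.html | xargs -P8 -n1 ./run-test.sh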

P.S.: I put some general notes (not related to testing) here: olsak/OpTeX#85 (comment)

@spasa47
Owner Author

spasa47 commented Jul 25, 2024

Thanks for your input; I am definitely going to implement something along the lines of what you described. I am currently thinking of giving the bats project a try, so as not to reinvent the wheel when it comes to unit testing in bash.
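
(For illustration, a minimal bats test wrapping the run-test.sh idea from above could look roughly like this; the case name is a placeholder taken from this thread, and it assumes run-test.sh exits non-zero on failure, which is not specified above.)

#!/usr/bin/env bats
# sketch of a bats test case, assuming a run-test.sh like the one above
# (and assuming it exits with a non-zero status when the diff fails)

@test "case1.md converts to the expected OpTeX output" {
    run ./run-test.sh tests/case1.md
    [ "$status" -eq 0 ]
}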

Concerning CI, it seems that it should be possible to get a full TeX Live distribution using GitHub Actions, which would eliminate the need to install and set up TeX Live on every pipeline run. I have zero experience with GitHub CI/CD (I've only ever used GitLab CI/CD), so there might be some surprises waiting for me, but as it looks now, it should be possible to run all tests via CI/CD.

I've also read through your general notes and will implement some of those ideas in the future, but for now I will focus on testing and on unsupported elements (such as span). 🙏
