Pipeline actions are executed when a student submits a new solution. They can be used to build the submitted program to check that all parts of the source code were uploaded, or to evaluate the solution against prepared tests.
Actions are defined in the `config.yml` file in the task directory.
Built-in actions are implemented directly in the Kelvin source code, but it is also possible to run an action in a Docker container with specific compilers, so they do not need to be installed on the server.
If you add a new Docker pipeline, don't forget to also modify the config validation rules in `frontend/src/PipelineValidation.js` (the `rules` variable).
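A minimal `config.yml` might therefore combine a build action with the test runner. This is a sketch composed only from the actions described below:

```yaml
# config.yml (minimal sketch)
pipeline:
  - type: gcc    # build the submitted sources
  - type: tests  # run the input/output tests defined in the task directory
```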
Submits are evaluated in a sandboxed environment using Docker. Execution of the student's submit is constrained by default limits on wall-clock time, memory, the number of created processes, etc. These defaults can be overridden in the YAML configuration:
```yaml
limits:
  fsize: 5M     # max file size
  wall-time: 5  # max 5 seconds per test
  cg-mem: 5M    # max 5 MB of memory usage
```
Action for compiling all source files in a submitted solution, with or without a Makefile. Errors and warnings are shown in the Result tab.
```yaml
pipeline:
  - type: gcc
    output: main
    flags: -Wall -Wextra -g -fsanitize=address
    ldflags: -lm
```
Action for compiling a .NET (Core) application and producing a standalone binary executable as output. Optionally, you can also build and run unit tests.
```yaml
pipeline:
  - type: dotnet
    unittests: true
```
Verifies the student's program against predefined input/output/file tests. All tests are automatically collected and provided to students. Tests must be enabled in the pipeline:
```yaml
pipeline:
  - type: gcc
  - type: tests
```
Static tests can be defined by files in the task directory. In the examples below, the first line with the hash denotes the filename. Files are grouped together by a shared filename prefix, which identifies a single test or scenario. It is recommended to prepend a number to each test name, because tests are ordered by name.
This test executes the student's program and checks that it prints 2020 on the standard output.
```
# 01_year.out
2020
```
The standard input is passed to the program, and the program's output is then compared against the expected stdout.
```
# 02_sum.in
1 2 3 4
# 02_sum.out
10
```
Checks that the student's program created the file `result.txt` with the expected content.
```
# 03_nums.file_out.result.txt
1 2 3 4 5 6 7 8 9 10
```
Provides the input file `data.txt` to the student's program. This can be combined with stdout or file comparison (see the combined example below).
```
# 04_nums.file_in.data.txt
1 2 3 4 5 6 7 8 9 10
```
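As an illustration of such a combination, a single prefix can group an input file with an expected stdout. The test name `05_count` and its task (print how many numbers the file contains) are hypothetical:

```
# 05_count.file_in.data.txt
1 2 3 4 5
# 05_count.out
5
```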
Arguments passed to the program can be defined in the YAML configuration:
```yaml
tests:
  - name: 06_argumenty_programu
    title: 5. argumenty programu
    args:
      - 127.0.0.1
      - 80
```
The program's exit code must be zero for the test to pass. A different expected exit code, or disabling this check entirely, can be configured in YAML:
```yaml
tests:
  - name: 07_exit_code
    exit_code: 42
  - name: 08_any_exit_code
    exit_code: null
```
Larger or dynamic tests can be configured with a `script.py` in the task directory. Tests are created in the `gen_tests` function; you can also use numpy to generate the expected output for your random input. This function can also be used to generate variants of student tasks, which can be defined simply in Markdown files.
```python
# script.py
import random

def gen_tests(evaluation):
    r = random.randint(0, 100)

    test = evaluation.create_test('01_dynamic_test')
    test.args = [f'input{r}.txt', f'output{r}.txt', str(r), evaluation.meta['login']]
    test.exit_code = r

    f = test.add_memory_file('stdin', input=True)
    f.write(f'stdin {evaluation.meta["login"]}'.encode('utf-8'))

    f = test.add_memory_file('stdout')
    f.write(f'stdout {evaluation.meta["login"]}'.encode('utf-8'))

    f = test.add_memory_file('stderr')
    f.write(f'stderr {evaluation.meta["login"]}'.encode('utf-8'))

    f = test.add_memory_file('input.txt', input=True)
    f.write(f'input.txt {evaluation.meta["login"]}'.encode('utf-8'))

    f = test.add_memory_file('output.txt')
    f.write(f'output.txt {evaluation.meta["login"]}'.encode('utf-8'))
```
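For instance, a randomized test whose expected output is computed with numpy could look like this. This is a sketch that reuses only the `create_test` and `add_memory_file` calls shown above; the test name and its task (summing the numbers on stdin) are illustrative:

```python
# script.py (sketch)
import numpy as np

def gen_tests(evaluation):
    # random input vector; the expected answer is computed with numpy
    nums = np.random.randint(0, 100, size=10)

    test = evaluation.create_test('02_random_sum')

    f = test.add_memory_file('stdin', input=True)
    f.write(' '.join(map(str, nums)).encode('utf-8'))

    f = test.add_memory_file('stdout')
    f.write(str(np.sum(nums)).encode('utf-8'))
```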
Automatically assigns the points gained in the test evaluation. Manually assigned points are replaced when the submit is reevaluated - you have been warned.
```yaml
pipeline:
  - type: auto_grader
    propose: false  # show points in the result instead of assigning them directly
    overwrite: false  # overwrite points if they are already assigned to THAT submit
    after_deadline_multiplier: 0.9  # give only 90% of the maximal points for submits after the deadline
```
Adds comments to the source code from the clang-tidy linter. The checks can give students helpful feedback about leaked memory, misused pointers, or a mistyped assignment in a condition instead of a comparison.
Individual checks can be enabled or disabled. The following example enables all checks with `*` and disables the remaining ones with a `-` prefix.
```yaml
pipeline:
  - type: clang-tidy
    checks:
      - '*'
      - '-cppcoreguidelines-avoid-magic-numbers'
      - '-readability-magic-numbers'
      - '-cert-*'
      - '-llvm-include-order'
      - '-cppcoreguidelines-init-variables'
      - '-clang-analyzer-security*'
    files:
      - main.cpp
      - lib.cpp
```
Custom programs can be executed in a Docker container. This can be used to simply execute the student's program and show the output in the Result tab.
```yaml
pipeline:
  - type: run
    commands:
      - ./main
      - ./main > out
      - cat /etc/passwd | ./main
      - cmd: timeout 2 ./main
        cmd_show: ./main
      - '# apt install faketime # executed but hidden from the output'
      - cmd: timeout 5 ./main || true
        cmd_show: ./main
        asciinema: true
      - display: ['*.ppm', '*.pgm', '*.jpg']
```
Your own private actions can be implemented in any language, packaged in a Docker container, and published to the official Docker Hub.
Currently, the action has access to all source files and to artifacts from previously executed actions, such as the built executable.
When the action starts, the Docker entry point program is executed.
The action can generate a `result.html`, which will be shown to students in the Result tab.
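The exact configuration keys for referencing such a container are implementation-specific. As a purely illustrative sketch, where the `type: docker` and `image` keys are assumptions rather than confirmed Kelvin syntax, a custom action might be wired into the pipeline like this:

```yaml
pipeline:
  - type: docker                   # assumption: action type for custom containers
    image: youruser/kelvin-grader  # hypothetical image published to Docker Hub
```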