This directory is the root of a fully-worked simple example that allows a user to create a claim for one or more S3 buckets and get their endpoints and IAM policy.
The example is itself trivial but exercises an end-to-end solution that is completely written in cue, including the generation of the claims interface, schemas, XRDs, composition definitions, and the runtime code that returns composed objects.
While it is entirely possible to define all XRDs, functions, and compositions in YAML and only rely on cue for the composition implementation, that approach forgoes the benefits of having the entire code base in cue, with schemas and validations used everywhere.
The `Makefile` has all the commands used for generation.

Command | Description |
---|---|
`make` | run all generation tasks, run tests, and create helm artifacts |
`make help` | see available commands |
`make k8s` | render the YAML for XRDs, functions, and compositions |
`make user-s3` | see the YAML for claims and namespaces for the S3 bucket XR |
`make test` | run unit tests for compositions outside of a crossplane context |
- `cue.mod/` - contains the module definition and the required cue libraries for crossplane and k8s (as cue schemas, populated via `cue get go`).
- `pkg/` - root of all cue source files excluding generated files and tests. Allows `grep -r` on source code.
  - `api/` - has the schema for the user facing types (i.e. what the user needs to set in a claim). Since types and values look the same in cue, it also serves as an example of the input the user can provide (see the sketch after this layout). One file per XRD.
  - `compositions/` - root for all composition code including XRDs, function definitions, composition definitions and their implementation.
    - `s3bucket/` - composition implementation for the S3 bucket XRD.
    - `s3bucket.cue` - XRD and composition definition for the S3 bucket.
    - `index.cue` - returns all XRDs and composition definitions.
  - `resources.cue` - returns all XRDs, composition definitions and function definitions.
- `tests/` - unit test files
  - `compositions/s3bucket/` - unit test files for the S3 bucket composition
- `user/` - the "user" objects like namespaces and test claims.
- `zz_generated/` - root for all generated files
  - `schemas/generated-schemas.cue` - schemas for XRDs generated from `pkg/api/`
- `helm/` - root of the helm chart
  - `zz_generated/*.yaml` - YAML files generated by `make helm`
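To illustrate the "types and values look the same" point above, here is a minimal sketch of what a user-facing type in `pkg/api/` could look like. The field names are made up for illustration and are not the actual schema of this example; the point is that the defaults make the schema double as a sample claim input.

```cue
package api

// Hypothetical user-facing parameters for the S3 bucket claim. Because CUE
// types and values share one syntax, the defaults below also read as an
// example of what a user could provide.
#S3BucketParameters: {
	// AWS region for the bucket(s); defaults to a concrete value.
	region: string | *"us-east-1"

	// Number of secondary buckets to create in addition to the primary one.
	additionalBucketCount: int & >=0 | *0

	// Free-form tags applied to every bucket.
	tags: [string]: string
}
```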
In addition:

- the `Makefile` has commands to generate schemas and self-contained cue scripts as the function implementation, and to render and apply k8s resources for all objects.
- `.cuelibs` contains the list of libraries that the Makefile uses to run `cue get go`.
- `schema.go` contains blank imports for all external types that are pulled in using `cue get go`. The versions of these external dependencies are declared in the `go.mod` file, so you can always use `go mod tidy` in this directory.
- You start by defining the api you want to expose (example).
- You then use `make schemas` to generate the openAPI schemas corresponding to the cue types (example).
- You then define an XRD that pulls in the types from the schema generated in the previous step (example).
- You create the composition subpackage. Initially you just return an empty object from the implementation so that it is a no-op. The file will look like this: `_request: {...}`
- You generate a script for the composition using `make scripts`.
- Now you can create the composition object that embeds this script in the composition function pipeline (example).
- At this point, you can render all objects using `make k8s` and apply them using `make k8s-apply`. You will get function definitions, an XRD, and a composition against which you can write a claim.
- You create a namespace and a claim using `make user-s3-apply`. This starts the composition function and shows you the inputs in the debug logs for the function pod.
- You can now use this input to see exactly what is available and write the composition implementation. You can develop this incrementally, focusing on one managed resource at a time, and iterate over it for multiple objects.
The basic idea is to get the request from the function runner and transform it into a set of managed resources.
- You start with an implementation that returns an empty object.
- After applying the composition with debug turned on, you'll see the request object in the function pod logs.
- Copy this locally and write your response based on what you see in this object. Start with one managed resource (see the sketch after this list).
- Re-apply the composition with the new script and debug.
- Rinse and repeat until all the functionality is present.
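To make the loop concrete, here is a minimal sketch of what an early iteration of the implementation could look like. It assumes the runner injects the request as `_request` and treats the package's other top-level fields as the response; the field paths and the `desired.resources` layout are illustrative only, so derive the real shapes from what you see in the function pod logs.

```cue
package s3bucket

// In the real package this stays open (`_request: {...}`) and the function
// runner fills it in; a concrete stand-in is used here so the snippet
// evaluates on its own.
_request: observed: composite: resource: spec: parameters: {
	region:                "us-east-1"
	additionalBucketCount: 1
}

_params: _request.observed.composite.resource.spec.parameters

// First iteration: emit only the primary bucket as a managed resource.
// The response layout below is a placeholder, not a documented contract.
desired: resources: "primary-bucket": resource: {
	apiVersion: "s3.aws.upbound.io/v1beta1"
	kind:       "Bucket"
	spec: forProvider: region: _params.region
}
```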
The nice thing about cue is how it unifies pieces of an object and puts them together. This allows you to write a "module" (i.e. a separate file) for each resource that you want to compose.
For example, the example implementation has self-contained files: one for creating the primary bucket and setting its ready state and status, one for the secondary buckets, and another for the IAM policy.
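As a hedged sketch of that idea, two such "modules" (normally separate files in the same package, shown in one snippet here) each declare their own slice of the output and CUE unifies them into a single response; the resource kinds and field names are illustrative.

```cue
package s3bucket

// buckets.cue: contributes the primary bucket.
desired: resources: "primary-bucket": resource: {
	apiVersion: "s3.aws.upbound.io/v1beta1"
	kind:       "Bucket"
}

// iampolicy.cue: contributes the IAM policy, written independently but
// unified into the same desired.resources map.
desired: resources: "bucket-policy": resource: {
	apiVersion: "iam.aws.upbound.io/v1beta1"
	kind:       "Policy"
}
```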
`xp-function-cue` has a subcommand called `cue-test` that allows you to write unit tests for various inputs and outputs.
This is extremely primitive but still very useful.
It works as follows:
- Your composition implementation lives in a specific package (e.g. `pkg/compositions/s3bucket`) and your tests live in a different package (e.g. `tests/compositions/s3bucket`).
- Every test file is a cue file guarded by an `@if` tag with the same name as the test file. For example, a file called `initial.cue` will have a line `@if(initial)` at the top of the file (*). This means that at any point only one file actually produces output, based on the tag that is set.
- In this test file you define the `_request` object fully (by copying it from the pod logs) and write what the response to the request should be. You can copy the response from the function's pod logs as well if you have already implemented something and have manually checked the output.
- When you run `xp-function-cue cue-test --test-dir ./tests/compositions/s3bucket --pkg ./pkg/compositions/s3bucket` it does the following:
  - creates a self-contained script from the code package, just like `xp-function-cue package-script` would do.
  - figures out the tags for tests using the file names in the `tests/` directory.
  - for each such tag:
    - it evaluates the `tests/` subdirectory with that tag turned on and extracts the `_request` object from it.
    - it does the same evaluation but now extracts the expected response as the full object that is returned.
    - it runs the script with the `_request` object as obtained from the test and gets the actual result.
    - it compares the expected and actual results using a YAML diff so that the differences are clearly visible.
- See the examples for more details.
(*) The assumptions around tag names and the files they live in are antithetical to cue's principle of "put whatever you want anywhere". We'd like some cue experts to weigh in on how they would have approached the unit testing problem.
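As a rough sketch (the package name, request fields, and response shape are all hypothetical; copy the real ones from your pod logs), a test file such as `tests/compositions/s3bucket/initial.cue` could look like this:

```cue
@if(initial)

package s3bucket

// The full request, copied verbatim from the function pod's debug logs
// (heavily trimmed here).
_request: observed: composite: resource: spec: parameters: region: "us-east-1"

// The remaining top-level fields form the expected response, i.e. the full
// object the composition package should return for this request.
desired: resources: "primary-bucket": resource: {
	apiVersion: "s3.aws.upbound.io/v1beta1"
	kind:       "Bucket"
	spec: forProvider: region: "us-east-1"
}
```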
Cue is an awesome language that feels like magic and makes it really easy to create complex output. Its ability
to consolidate all reachable definitions via `cue def --inline-imports` and create a self-contained program is
amazing in concept. The unification of resources allows you to work piecemeal on different resources independently
without having to create the response as one giant object. Community support on the slack channel is also great.
That said, there are still a few rough edges and bugs that can frustrate the cue composition writer, and some hardening is needed :)
- It has a steep learning curve and meager documentation. The use-case that we use it for (getting a dynamic, mostly-schemaless object and turning it into a set of resources) is not a first-class use-case in the available docs.
- The functional programming paradigm takes getting used to.
- Conditional statements are extremely verbose, requiring knowledge of various patterns (see the sketch at the end of this list). Unfortunately, for compositions, we need to use them quite a bit since what we emit depends on observed statuses of various objects that change and may or may not even be available at different points in time. The builtins proposal, when implemented, would go a long way in making this much simpler to implement.
- The code needs to be hardened and has some bugs. Examples: `cue def`, `cue fmt`.
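For a feel of the verbosity, here is a sketch of the kind of guarded lookup a composition needs whenever an observed value may not exist yet; the field paths are hypothetical.

```cue
package s3bucket

// Illustrative observed state; in the real composition this comes from _request.
_observed: resources: "primary-bucket": resource: status: atProvider: arn: "arn:aws:s3:::example"

// Surfacing the ARN on the composite's status only once it has been observed
// requires an explicit guard with a bottom check for every such optional value.
if _observed.resources["primary-bucket"].resource.status.atProvider.arn != _|_ {
	status: primaryBucketArn: _observed.resources["primary-bucket"].resource.status.atProvider.arn
}
```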