1 Introduction
This document describes the acceptance criteria for the Radicle CI broker, as well as how to verify that they are met. Acceptance criteria here means a requirement that must be met for the software to be acceptable to its stakeholders.
This file is used by Subplot to generate and run test code as part of running cargo test.
2 Stakeholders
For the purposes of this document, a stakeholder is someone whose opinion matters for setting acceptance criteria. The CI broker has the following stakeholders, grouped so that specific people only need to be named in one place:
cib-devs – the people who develop the CI broker itself
- Lars Wirzenius
adapter-devs – the people who develop adapters
- Lars Wirzenius
- Michalis
- Yorgos Saslis
node-ops – the people operating a Radicle node, when they also run Radicle CI on it
- Lars Wirzenius
- Yorgos Saslis
devs – the people for whose repositories Radicle CI runs; this means the people who contribute to any repository hosted on Radicle, when any node runs CI for that repository, as opposed to the people who develop the Radicle CI software
- Lars Wirzenius
- Michalis
Some stakeholders are named explicitly so that it will be easier to ask them for more information than is captured in this document. Note that the list will evolve over time. Please suggest missing stakeholders to the developers and maintainers of the CI broker.
3 Data files shared between scenarios
3.1 Broker configuration
db: ci-broker.db
report_dir: reports
default_adapter: mcadapterface
queue_len_interval: 1min
adapters:
  mcadapterface:
    command: ./adapter.sh
    config:
      foo: bar
    config_env: RADICLE_NATIVE_CI
    env:
      PATH: /bin
    sensitive_env:
      API_KEY: xyzzy
filters:
  - !Branch "main"
db: ci-broker.db
report_dir: reports
queue_len_interval: 1min
adapters:
  mcadapterface:
    command: ./adapter.sh
    env:
      RADICLE_NATIVE_CI: native-ci.yaml
    sensitive_env:
      API_KEY: xyzzy
triggers:
  - adapter: mcadapterface
    filters:
      - !Branch "main"
db: ci-broker.db
report_dir: reports
queue_len_interval: 1min
adapters:
  mcadapterface:
    command: ./adapter.sh
    env:
      RADICLE_NATIVE_CI: native-ci.yaml
    sensitive_env:
      API_KEY: xyzzy
triggers:
  - adapter: mcadapterface
    filters:
      - !Branch "main"
db: ci-broker.db
report_dir: reports
queue_len_interval: 1min
adapters:
  mcadapterface:
    command: ./adapter.sh
    env:
      RADICLE_NATIVE_CI: native-ci.yaml
    sensitive_env:
      API_KEY: xyzzy
triggers:
  - adapter: mcadapterface
    filters:
      - !Branch "main"
  - adapter: mcadapterface
    filters:
      - !Branch "main"
3.2 A dummy adapter
This adapter does nothing, just reports a run ID and a successful run.
Note that this adapter always outputs a message to its standard error output, even though it doesn't fail. This is useful for verifying that the CI broker logs adapter error output, and doesn't harm other uses of the adapter.
#!/bin/sh
set -eu
cat > /dev/null
echo '{"response":"triggered","run_id":{"id":"xyzzy"}}'
echo '{"response":"finished","result":"success"}'
(
    echo "This is an adapter error: Mordor"
    echo "Environment:"
    env
    if [ "${RADICLE_NATIVE_CI:-}" != "" ]; then
        echo "Adapter config:"
        nl "$RADICLE_NATIVE_CI"
    fi
) 1>&2
3.3 A failing adapter with a successful run
This adapter does nothing, just reports a run ID and a successful run, but then fails.
#!/bin/sh
set -eu
cat > /dev/null
echo '{"response":"triggered","run_id":{"id":"xyzzy"}}'
echo '{"response":"finished","result":"success"}'
(
    echo "This is an adapter error: Mordor"
    echo "Environment:"
    env
    if [ "${RADICLE_NATIVE_CI:-}" != "" ]; then
        echo "Adapter config:"
        nl "$RADICLE_NATIVE_CI"
    fi
) 1>&2
exit 1
3.4 A failing adapter with a failed run
This adapter does nothing, just reports a run ID and a failed run, but then fails.
#!/bin/sh
set -eu
cat > /dev/null
echo '{"response":"triggered","run_id":{"id":"xyzzy"}}'
echo '{"response":"finished","result":"failure"}'
(
    echo "This is an adapter error: Mordor"
    echo "Environment:"
    env
    if [ "${RADICLE_NATIVE_CI:-}" != "" ]; then
        echo "Adapter config:"
        nl "$RADICLE_NATIVE_CI"
    fi
) 1>&2
exit 1
3.5 List job COBs
Job COBs are a way for the CI broker to record that it's run CI for a change. This script lists the job COBs in a given repository.
#!/bin/bash
set -euo pipefail
RID="$(rad ls --all | awk -v R="$1" '$2 == R { print $3 }')"
if [ -z "$RID" ]; then
    echo "Unknown repository $1" 1>&2
    exit 1
fi
rad cob list --repo "$RID" --type xyz.radworks.job
4 Custom scenario steps
In this document we use scenarios to show how to verify that the CI broker does what we expect of it. For this, we define several custom scenario steps. In this chapter we describe those steps, and also verify that the steps work.
4.1 Set up a node
This step creates a Radicle node, the Radicle CI broker, and a CI adapter.
given a Radicle node, with CI configured with {config} and adapter {adapter}
The captured parts of the step are:
config — the name of the embedded file (somewhere in this document) with the configuration for the CI broker
adapter — the name of the embedded file with the CI adapter implementation; we use simple shell script dummy adapter implementations, as in this document we only care about the broker/adapter interface, not that the adapter actually performs a CI run
This step installs binaries (or makes them available to be run), and creates some files. It does not start long-lived processes, in particular not the Radicle node process.
We verify that this scenario works by examining the results. For clarity, we split the scenario into many snippets.
The programs we'll need are available to run. To check this, we use a helper shell script. This works around a limitation in Subplot's command parsing: Subplot does not parse steps the way the shell does, so there is no way to pass text that contains space characters to a command as a single argument.
#!/bin/sh
# We use the shell built-in "command", as that's portable. "which" is not.
command -v "$1"
The configuration file must now exist.
The adapter is to be installed as adapter.sh and it must be executable.
There is a Radicle home directory.
We also need a way to set up environment variables for commands we run, especially for rad to use the right node. Subplot does not have built-in support for this (at least not yet), but we work around that by creating a shell script env.sh that sets them up.
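For illustration, a minimal env.sh could look like the sketch below. This is an assumption about its shape, not the script the step actually generates; in particular, using RAD_HOME to point rad at the test node's Radicle home is illustrative.

#!/bin/sh
# Illustrative sketch only: point rad (and anything it spawns) at the test
# node's Radicle home, then run the requested command with its arguments.
set -eu
export RAD_HOME="$(pwd)/.radicle"
exec "$@"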
4.2 Create a repository
This step creates a Git repository and makes it into a Radicle repository.
given a Git repository {name} in the Radicle node
The captured part of the step is:
name — the Git and Radicle repository name
We run the step and look at the results. We need the node creation step first.
The Git repository must exist.
It must also be a Radicle repository and in the local node.
4.3 Queue a node event for processing
This step queues a node event to be processed later by the synthetic-events test helper tool that is part of the CI broker. The step does this by creating a fake refsUpdated node event and writing it to a file with a specific name.
given the Radicle node emits a refsUpdated event for {repodir}
The captured part of the step is:
repodir — the directory of the repository for which the event is created
To set up this step, we need to have a node and a repository first.
We check that the event file looks roughly correct by querying it with the jq tool.
This is a very rudimentary check, but if the event file is incorrect, then Radicle code will reject it. We don't want to duplicate the logic to do that verification in detail.
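For illustration, the kind of rudimentary check meant here might look like the sketch below; the file name and the string being looked for are assumptions, not the actual query the scenario performs.

# Illustrative only: confirm the file parses as JSON and mentions the
# refsUpdated event type somewhere in it.
jq . refs-updated-event.json > /dev/null
grep -q refsUpdated refs-updated-event.json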
5 Acceptance criteria
5.1 Shows config as JSON
Want: The CI broker can write out the configuration it uses at run time as JSON.
Why: This is helpful for the node operator to verify that they have configured the program correctly.
Who: cib-devs
Our verification here is quite simplistic, and only checks that the output is in the JSON format. It does not try to make sure the JSON matches the YAML semantically.
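The check amounts to "does the output parse as JSON?". A sketch of such a check with jq is below; the config.json file name is an assumption standing in for wherever the scenario captures the broker's output.

# Illustrative only: jq exits non-zero if the captured output is not JSON.
jq . < config.json > /dev/null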
5.2 Shows adapter specification
Want: The CI broker can write out the specification for an adapter.
Why: This is helpful for the node operator to verify that they have specified the adapter correctly.
Who: cib-devs
5.3 Refuses config with an unknown field
Want: The CI broker refuses to load a configuration file that has unknown fields.
Why: This is helpful for detecting typos and other mistakes in configuration files instead of ignoring them silently.
Who: cib-devs, node-ops
db: ci-broker.db
report_dir: reports
queue_len_interval: 1min
adapters:
  mcadapterface:
    command: ./adapter.sh
    env:
      RADICLE_NATIVE_CI: native-ci.yaml
    sensitive_env:
      API_KEY: xyzzy
triggers:
  - adapter: mcadapterface
    filters:
      - !Branch "main"
xyzzy: "this field is unknown"
5.4 Smoke test: Runs adapter
Want: CI broker can run its adapter.
Why: This is obviously necessary. If this doesn't work, nothing else has a hope of working.
Who: cib-devs
5.5 Handles adapter failing on a successful run
Want: If the adapter fails, the CI broker creates a job COB and report pages anyway.
Why: This is necessary for the CI broker to be robust.
Who: cib-devs
5.6 Handles adapter failing on a failed run
Want: If the adapter fails, the CI broker creates a job COB and report pages anyway.
Why: This is necessary for the CI broker to be robust.
Who: cib-devs
5.7 Runs adapter with configuration
Want: CI broker can run its adapter and give it the configuration in the CI broker adapter specification.
Why: Being able to embed the adapter configuration in the cib configuration file makes it more convenient for node operators to specify different adapter configurations for different purposes.
Who: node-ops
5.8 Runs adapter without a report directory
Want: CI broker can run without a report directory.
Why: We don't require the report directory to be specified, or to exist, but we do require cib to handle this.
Who: cib-devs
5.9 Runs adapters for all matching triggers
Want: The CI broker runs an adapter for every trigger whose filters match an event.
Why: A node operator can configure several triggers, and each matching trigger must result in a CI run.
Who: cib-devs
5.10 Runs adapter on each type of event
Want: CI broker runs the adapter for each type of CI event.
Why: The adapter needs to handle each type of CI event.
Who: cib-devs
We verify this by adding CI events to the event queue using cibtool and checking that cib can process them. This is simpler and more direct than emitting node events that result in the desired CI events. We are not concerned here about whether cib handles node events or turns them into the correct CI events: we verify that in other ways.
We first set things up, including creating a repository xyzzy, and a Radicle patch in that repository. The id of the patch is in the file patch-id.txt so that it can be used.
Verify that cib can process a branch creation event.
when I run rm -f ci-broker.db
when I run cibtool --db ci-broker.db event add --repo xyzzy --kind branch-created --id-file id.txt
when I run ./env.sh cib --config broker.yaml queued
when I run cibtool --db ci-broker.db run list
then stdout has one line
Verify that cib can process a branch update event.
when I run rm -f ci-broker.db
when I run cibtool --db ci-broker.db event add --repo xyzzy --ref brancy --base main --kind branch-updated --id-file id.txt
when I run ./env.sh cib --config broker.yaml queued
when I run cibtool --db ci-broker.db run list
then stdout has one line
Verify that cib can process a branch deletion event.
when I run rm -f ci-broker.db
when I run cibtool --db ci-broker.db event add --repo xyzzy --kind branch-deleted --id-file id.txt
when I run ./env.sh cib --config broker.yaml queued
when I run cibtool --db ci-broker.db run list
then stdout has one line
Verify that cib can process a patch creation event.
when I run rm -f ci-broker.db
when I run cibtool --db ci-broker.db event add --repo xyzzy --kind patch-created --patch-id-file patch-id.txt --id-file id.txt
when I run ./env.sh cib --config broker.yaml queued
when I run cibtool --db ci-broker.db run list
then stdout has one line
Verify that cib can process a patch update event.
when I run rm -f ci-broker.db
when I run cibtool --db ci-broker.db event add --repo xyzzy --kind patch-updated --patch-id-file patch-id.txt --id-file id.txt
when I run ./env.sh cib --config broker.yaml queued
when I run cibtool --db ci-broker.db run list
then stdout has one line
#!/bin/sh
set -eu
touch foo
git add foo
git commit -m foo
EDITOR=/bin/true git push rad HEAD:refs/patches
rad patch list | awk 'NR == 4 { print $3 }' | xargs rad patch show | awk 'NR == 3 { print $3 }' >"$1"
5.11 Reports its version
Want: cib and cibtool report their version, if invoked with the --version option.
Why: This helps node operators include the version in any bug reports.
Who: cib-devs
5.12 Adapter can provide URL for info on run
Want: The adapter can provide a URL for information about the run, such as a run log. This is optional.
Why: The CI broker does not itself store the run log, but it's useful to be able to point users at one. The CI broker can put that into a Radicle COB or otherwise store it so that users can see it. Note, however, that the adapter gets to decide which URL to provide: it need not be the run log. It might, for example, be a URL to the web view of a "pipeline" in GitLab CI instead, from which the user can access individual logs.
Who: cib-devs
#!/bin/sh
set -eu
echo '{"response":"triggered","run_id":{"id":"xyzzy"},"info_url":"https://ci.example.com/xyzzy"}'
echo '{"response":"finished","result":"success"}'
5.13 Gives helpful error message if node socket can't be found
Want: If the CI broker can't connect to the Radicle node control socket, it gives an error message that helps the user to understand the problem.
Why: This helps users deal with problems themselves and reduces the support burden on the Radicle project.
Who: cib-devs
5.14 Gives helpful error message if it doesn't understand its configuration file
Want: If the CI broker is given a configuration file that it can't understand, it gives an error message that explains the problem to the user.
Why: This helps users deal with problems themselves and reduces the support burden on the Radicle project.
Who: cib-devs
Comment: This is a very basic scenario. Error handling is by nature a thing that can always be made better. We can later add more scenarios if we tighten the acceptance criteria.
This file is not YAML.
5.15 Stops if the node connection breaks
Want: If the connection to the Radicle node, via its control socket, breaks, the CI broker terminates with a message saying why.
Why: The CI broker can either keep running and trying to re-connect, or it can terminate. Either is workable. However, it's a simpler design and less code to terminate and allow re-starting to be handled by a dedicated system, such as systemd.
Who: cib-devs
5.16 Shuts down when requested
Want: The test suite can request the CI broker to shut down cleanly, and it doesn't result in an error.
Why: In the integration test suite, we need to start and stop the CI broker many times. We need to easily detect errors.
Who: cib-devs
We use a special magic fake node event to signal shutdown: a RefsFetched event with a skipped update for a ref "shutdown" and an object id of all zeros. This should be sufficiently impossible to happen in real life.
5.17 Produces a report page upon request
Want: The node operator can run a command to produce a report of all CI runs a CI broker instance has performed.
Why: This is useful for diagnosis, if nothing else.
Who: cib-devs
This doesn't check that there is a per-repository HTML file, because we have no convenient way to know the repository ID.
5.18 Logs adapter stderr output
What: The CI broker should log, to its own log output, the adapter's stderr output.
Why: This allows the adapter to output its own log to its standard error output. This makes it easier to debug adapter problems.
Who: adapter-devs, node-ops
This adapter outputs a broken response message, and after that something to its stderr. The CI broker is meant to read and log both.
#!/bin/sh
set -eu
cat > /dev/null
echo '{"response":"Rivendell"}}'
echo "This is an adapter error: Mordor" 1>&2
5.19 Allows setting minimum log level
What: The node admin should be able to set the minimum log level for log messages that get output to stderr.
Why: This allows controlling how much log spew log admins have to see.
Who: node-ops
5.20 Fails run if building trigger fails, but does not crash
Want: The CI broker fails a CI run if it can't create a trigger message from a CI event, but it continues running and processing other events.
Why: If it's not possible to create a trigger message, the CI run can't succeed, unless the failure is temporary. However, we have no way of knowing if the failure is temporary, so the safe thing is to mark the CI run as having failed and remove the CI event from the queue. Further, the CI broker should not crash and should process other events.
Who: cib-devs
A failure to create a trigger message happens if the CI event refers to a repository, commit, or Git ref that doesn't exist in the repository on the local node. This should not ever happen, as the CI event is only emitted by the node after the changes are on the node. However, it has happened due to a programming error in the CI broker. By handling the error and removing the event, the CI broker is a little bit more robust.
We verify this by inserting two events into the queue and then running cib queued to process them. We arrange things so that the first event fails, but the second one succeeds.
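A rough sketch of that arrangement is below, reusing cibtool invocations that appear in earlier scenarios. It is illustrative only: the assumption is that an id file containing an all-zeros object id refers to a commit that cannot exist in the repository, so building the trigger message for the first event fails, while the second event uses a real commit id.

# Illustrative sketch, not the literal scenario steps.
printf '0000000000000000000000000000000000000000\n' > bad-id.txt
cibtool --db ci-broker.db event add --repo xyzzy --kind branch-created --id-file bad-id.txt
cibtool --db ci-broker.db event add --repo xyzzy --kind branch-created --id-file id.txt
./env.sh cib --config broker.yaml queued
cibtool --db ci-broker.db run list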
6 Acceptance criteria for event filtering
The scenarios in this chapter verify that the event filters work as intended. Each scenario sets up the event queue with an event, runs cib queued to process the event queue, and then verifies that CI was run, or not run, as appropriate.
In each scenario we verify by running CI twice: once to make sure the filter allows what it should, and once to make sure it doesn't allow what it shouldn't.
6.1 Filter predicate Repository
Want: We can allow an event that is for a specific repository.
Why: We want to constrain CI to a specific repository.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !Repository "REPOID"
#!/bin/sh
set -eu
dir="$1"
yaml="$2"
rid="$(cd "$dir" && rad .)"
sed -i "s/REPOID/$rid/g" "$yaml"
6.2 Filter predicate Node
Want: We can allow an event that originates in a given node.
Why: We want to constrain CI to a specific developer.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !Node "NODEID"
#!/bin/sh
set -eu
dir="$1"
yaml="$2"
rid="$(cd "$dir" && rad self --nid)"
sed -i "s/NODEID/$rid/g" "$yaml"
6.3 Filter predicate Tag
Want: We can allow an event that is about a specific tag.
Why: We want to constrain CI to specific tags, such as for releases.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !Tag "v\\d+(\\.\\d+)"
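As a quick illustration of what that regular expression matches, the sketch below uses grep -E, with [0-9] standing in for \d on the assumption that the broker's regex dialect treats them equivalently; it is not part of the scenario.

# "v1.0" and "v2.3.4" match; "v2" and "release-1" do not, because the
# pattern requires at least one ".<digits>" group after the leading digits.
printf 'v1.0\nv2.3.4\nv2\nrelease-1\n' | grep -E 'v[0-9]+(\.[0-9]+)'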
6.4 Filter predicate Branch
Want: We can allow an event that is about a specific branch.
Why: We want to constrain CI to specific branches, such as the main branch.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !Branch "main"
6.5 Filter predicate BranchCreated
Want: We can allow an event for a branch having been created.
Why: We want to constrain CI to only new branches.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !BranchCreated
6.6 Filter predicate BranchUpdated
Want: We can allow an event for a branch having been updated.
Why: We want to constrain CI to only updated branches, as distinct from new branches.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !BranchUpdated
6.7 Filter predicate BranchDeleted
Want: We can allow an event for a branch having been deleted.
Why: We want to constrain CI to only deleted branches, e.g., to update a mirror.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !BranchDeleted
6.8 Filter predicate Allow
Want: We can allow all events.
Why: This is for consistency.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !Allow
6.9 Filter predicate Deny
Want: We can allow no events.
Why: This is for consistency.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !Deny
6.10 Filter predicate And
Want: We can allow a combination of events if they are all allowed individually.
Why: This is for consistency.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !And
        - !Allow
        - !Allow
6.11 Filter predicate Or
Want: We can allow a combination of events if any of them are allowed individually.
Why: This is for consistency.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !Or
        - !Allow
        - !Deny
6.12 Filter predicate Not
Want: We can allow an event if the contained filter denies it.
Why: This is for consistency.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !Not
        - !Allow
6.13 Filter predicate DefaultBranch
Want: We can allow an event if the event refers to the default branch.
Why: This is so that the user doesn't need to spell out the name explicitly.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !DefaultBranch
6.14 Filter predicate HasFile
Want: We can allow an event if its commit contains a file or directory by this name.
Why: This is so that the user can choose a suitable adapter.
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !HasFile "file.dat"
db: ci-broker.db
adapters:
  default:
    command: ./adapter.sh
triggers:
  - adapter: default
    filters:
      - !HasFile "does-not-exist"
7 Acceptance criteria for test tooling
The event synthesizer is a helper to feed the CI broker node events in a controlled fashion.
7.1 We can run rad
Want: We can run rad.
Why: For many of the verification scenarios for the CI broker we need to run the Radicle rad command line tool. Depending on the environment we use for verification, rad may be installed in various places. Commonly, the Radicle installer installs rad into ~/.radicle/bin and edits the shell initialization file to add that directory to $PATH. However, in a CI context, that initialization is not necessarily done, and so the radenv.sh helper script adds that directory to the $PATH just in case. We verify in this scenario that we can run rad at all.
Who: cib-devs
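For illustration, such a helper could be as small as the sketch below; this is an assumption about its shape rather than the radenv.sh that the test suite actually ships.

#!/bin/sh
# Illustrative sketch: make sure ~/.radicle/bin is on $PATH, then run the
# requested command with its arguments.
set -eu
export PATH="$HOME/.radicle/bin:$PATH"
exec "$@"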
7.2 Dummy adapter runs successfully
Want: The dummy adapter (in embedded file dummy.sh) runs successfully.
Why: Test scenarios using the dummy adapter need to be able to rely on it working.
Who: cib-devs
7.3 Adapter with URL runs successfully
Want: The adapter with a URL (in embedded file adapter-with-url.sh) runs successfully.
Why: Test scenarios using this adapter need to be able to rely on it working.
Who: cib-devs
7.4 Event synthesizer terminates after first connection
Want: The event synthesizer runs in the background, but terminates after the first connection.
Why: This is needed so that it can be invoked in Subplot scenarios.
Who: cib-devs
We use the synthetic-events --client option to connect to the daemon and wait for the daemon to delete the socket file. This is more easily portable than using a generic tool such as nc, which has many variants across operating systems.
We wait for up to ten seconds for the synthetic-events daemon to remove the socket file before we check that it's been deleted, checking for that once a second. This avoids the trap of waiting for a fixed time: if the time is too short, the scenario fails spuriously, and if it's very long, the scenario takes longer than necessary.
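The polling idea is roughly the sketch below, using the synt.sock socket name that appears later in this document; the exact loop in the test suite may differ.

# Illustrative sketch: wait up to ten seconds for the daemon to remove its
# socket file, checking once per second, then assert that it is gone.
i=0
while [ "$i" -lt 10 ] && [ -e synt.sock ]; do
    sleep 1
    i=$((i + 1))
done
test ! -e synt.sock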
8 Acceptance criteria for persistent database
The CI broker uses an SQLite database for persistent data. Many processes may need to access or modify the database at the same time. While SQLite is good at managing that, it needs to be used in the right way for everything to work correctly. The acceptance criteria in this chapter address that.
To enable the verification of these acceptance criteria, the CI broker database allows for a "counter", as a single row in a dedicated table. Concurrency is tested by having multiple processes update the counter at the same time and verifying the end result is as intended and that every value is set exactly once.
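For reference, the counter can be inspected directly with sqlite3; the query below uses the table and column names that appear in the count.sh helper later in this chapter, and assumes the database file is called count.db as in that helper.

# Illustrative: read the single-row test counter from the database.
sqlite3 count.db 'select counter from counter_test'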
8.1 Count in a single process
Want: A single process can increment the test counter correctly.
Why: If this doesn't work with a single process, it won't work with multiple processes, either.
Who: cib-devs
8.2 Insert events into queue
Want: Insert broker events generated from node events into persistent event queue in the database, when allowed by the CI broker event filter.
Why: This is fundamental for running CI when repositories in a node change.
Who: cib-devs
8.3 Insert many events into queue
Want: Insert many events that arrive quickly.
Why: We need at least some rudimentary performance testing.
Who: cib-devs
when I run synthetic-events synt.sock refsfetched.json --log synt.log --repeat 1000
8.4 Process queued events
Want: It's possible to run the CI broker in a mode where it only processes events from its persistent event queue.
Why: This is primarily useful for testing the CI broker queuing implementation.
Who: cib-devs
We verify this by adding events to the queue with cibtool, and then running the CI broker and verifying it terminates after processing the events. We carefully add a shutdown event so that the CI broker shuts down.
8.5 Count in concurrent processes
Want: Two processes can concurrently increment the test counter correctly.
Why: This is necessary, if not necessarily sufficient, for concurrent database use to work correctly.
Who: cib-devs
Due to limitations in Subplot we manage the concurrent processes using a helper shell script, count.sh, found below. It runs two concurrent cibtool processes that update the same database file and count to a desired goal. The script then verifies that everything went correctly.
#!/bin/sh
set -eu

run() {
    cibtool --db "$DB" counter count --goal "$goal"
}

DB=count.db
goal="$1"
reps="$2"

for x in $(seq "$reps"); do
    echo "Repetition $x"
    rm -f "$DB" ./?.out

    run >1.out 2>&1 &
    one=$!
    run >2.out 2>&1 &
    two=$!

    if ! wait "$one"; then
        echo "first run failed"
        cat 1.out
        exit 1
    fi
    if ! wait "$two"; then
        echo "second run failed"
        cat 2.out
        exit 1
    fi

    if grep ERROR ./?.out; then
        echo found ERRORs
        exit 1
    fi

    n="$(sqlite3 "$DB" 'select counter from counter_test')"
    [ "$n" = "$goal" ] || ( echo "wrong count $n"; exit 1 )

    if awk '/increment to/ { print $NF }' ./?.out | sort -n | uniq -d | grep .; then
        echo "duplicate increments"
        exit 1
    fi
done
echo OK
9 Acceptance criteria for management tool
The cibtool management tool can be used to examine and change the CI broker database, and thus indirectly manage what the CI broker does.
9.1 Events can be queued and removed from queue
Want: cibtool can show the queued events, can inject an event, and remove an event.
Why: This is the minimum functionality needed to manage the event queue.
Who: cib-devs
We verify that this works by adding a new broker event, and then removing it. We arbitrarily choose the repository id of the CI broker itself for this test, but the id shouldn't matter; it just needs to be of the correct form.
9.2 Can remove all queued events
Want: cibtool can remove all queued events in one operation.
Why: This will be useful if the CI broker again changes CI events or their serialization in an incompatible way, or when the node operator wants to prevent many CI runs from happening.
Who: cib-devs, node-ops
9.3 Can add shutdown event to queue
Want: cibtool can add a shutdown event to the queued events.
Why: This is needed for testing, and for the node operator to be able to do this cleanly.
Who: cib-devs
9.4 Can add a branch creation event to queue
Want: cibtool can add an event for a branch being created to the queued events.
Why: This is needed for testing.
Who: cib-devs
9.5 Can add a branch update event to queue
Want: cibtool can add an event for a branch being updated to the queued events.
Why: This is needed for testing.
Who: cib-devs
9.6 Can add a branch deletion event to queue
Want: cibtool can add an event for a branch being deleted to the queued events.
Why: This is needed for testing.
Who: cib-devs
9.7 Can add a patch creation event to queue
Want: cibtool can add an event for a patch being created to the queued events.
Why: This is needed for testing.
Who: cib-devs
9.8 Can add a patch update event to queue
Want: cibtool can add an event for a patch being updated to the queued events.
Why: This is needed for testing.
Who: cib-devs
9.9 Can trigger a CI run
Want: The node operator can easily trigger a CI run without changing the repository.
Why: This allows running CI on a schedule, for example. It's also useful for CI broker development.
Who: cib-devs
9.10 Can output trigger message for a CI run
Want: The cibtool command can output the CI event to trigger a CI run to the standard output or a file.
Why: This is helpful for debugging the CI broker at least.
Who: cib-devs
9.11 Add information about triggered run to database
Want: cibtool can add information about a triggered CI run.
Why: This is primarily needed for testing.
Who: cib-devs
9.12 Add information about run that's running to database
Want: cibtool can add information about a CI run that's running.
Why: This is primarily needed for testing.
Who: cib-devs
9.13 Add information about run that's finished successfully to database
Want: cibtool can add information about a CI run that's finished successfully.
Why: This is primarily needed for testing.
Who: cib-devs
9.14 Add information about run that's finished in failure to database
Want: cibtool can add information about a CI run that's failed.
Why: This is primarily needed for testing.
Who: cib-devs
9.15 Remove information about a run from the database
Want: cibtool can remove information about a CI run.
Why: This is primarily for completeness.
Who: cib-devs
9.16 Update and show information about run to running
Want: cibtool can update information about a CI run.
Why: This is primarily needed for testing.
Who: cib-devs
9.17 Don't insert event for non-existent repository
Want: cibtool won't insert an event to the queue for a repository that isn't in the local node.
Why: This prevents adding events that can't ever trigger a CI run.
Who: cib-devs
Note that we verify lookup both by name and by repository ID, and via both cibtool event add and cibtool trigger, to cover all the cases.
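A sketch of one such check is below, using command forms that appear elsewhere in this document; the repository name is made up, and the exact error behaviour checked by the scenario is not reproduced here.

# Illustrative: both commands are expected to fail for a repository name
# that the local node does not have.
! cibtool --db ci-broker.db event add --repo no-such-repo --kind branch-created --id-file id.txt
! cibtool --db ci-broker.db trigger --repo no-such-repo --ref main --commit HEAD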
9.18 Record node events
What: Node operator can record node events into a file.
Why: This can be helpful for remote debugging, it's very helpful for CI broker development to see what events actually happen, and it's useful for gathering data for trying out event filters.
Who: cib-devs, node-ops
9.19 Convert recorded node events into CI events
What: Node operator can see what CI events are created from node events.
Why: This is helpful so that node operators can see what CI events are created from node events, which may have been previously recorded. It's also helpful for CI broker developers as a development tool.
Who: cib-devs, node-ops
9.20 Filter recorded CI events
What: Node operator can see what CI events an event filter allows.
Why: This is helpful so that node operators can verify that their event filters work as they expect.
Who: cib-devs, node-ops
filters: - !Branch "main"
filters: - !Branch "this-does-not-exist"
9.21 Extract cib log from journald and pretty print
Want: cibtool can extract cib log messages from the systemd journal sub-system, pretty print them, and optionally filter the messages.
Why: systemd is the common service manager for Linux systems, and it needs to be convenient to extract cib log messages from its system logging sub-system, journald. This is especially important for CI broker developers who need to diagnose problems on remote cib instances to which they don't have direct access.
Who: cib-devs
{"__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=31360;b=da4ba18425ea4e34bd0f0731f0270572;m=c3256a61;t=6290db5b772ca;x=280fbc307405cb81","_SYSTEMD_UNIT":"radicle-ci-broker.service","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","_SYSTEMD_SLICE":"system.slice","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_PID":"526","MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:00.276073Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","_RUNTIME_SCOPE":"system","_CAP_EFFECTIVE":"0","_GID":"1001","_SELINUX_CONTEXT":"unconfined\n","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","__MONOTONIC_TIMESTAMP":"3274009185","PRIORITY":"6","_COMM":"cib","_UID":"1001","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","_HOSTNAME":"radicle-ci","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","__REALTIME_TIMESTAMP":"1733988720276170","SYSLOG_IDENTIFIER":"cib","_TRANSPORT":"stdout","_EXE":"/usr/bin/cib","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","SYSLOG_FACILITY":"3"} {"_GID":"1001","_RUNTIME_SCOPE":"system","_UID":"1001","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","_PID":"526","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_CAP_EFFECTIVE":"0","_SELINUX_CONTEXT":"unconfined\n","_TRANSPORT":"stdout","SYSLOG_IDENTIFIER":"cib","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:01.276463Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","_SYSTEMD_SLICE":"system.slice","__MONOTONIC_TIMESTAMP":"3275009597","PRIORITY":"6","SYSLOG_FACILITY":"3","_HOSTNAME":"radicle-ci","_EXE":"/usr/bin/cib","_COMM":"cib","_SYSTEMD_UNIT":"radicle-ci-broker.service","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=31361;b=da4ba18425ea4e34bd0f0731f0270572;m=c334ae3d;t=6290db5c6b6a5;x=a9c49aef9ad2a509","__REALTIME_TIMESTAMP":"1733988721276581"} {"__MONOTONIC_TIMESTAMP":"3276010022","SYSLOG_IDENTIFIER":"cib","_CAP_EFFECTIVE":"0","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","_HOSTNAME":"radicle-ci","_EXE":"/usr/bin/cib","_UID":"1001","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","_SYSTEMD_SLICE":"system.slice","_PID":"526","SYSLOG_FACILITY":"3","_COMM":"cib","PRIORITY":"6","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","_SELINUX_CONTEXT":"unconfined\n","_TRANSPORT":"stdout","_SYSTEMD_UNIT":"radicle-ci-broker.service","MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:02.276881Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=31362;b=da4ba18425ea4e34bd0f0731f0270572;m=c343f226;t=6290db5d5fa8e;x=843418c04ac1932f","_GID":"1001","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","__REALTIME_TIMESTAMP":"1733988722277006","_RUNTIME_SCOPE":"system","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events"} 
{"SYSLOG_FACILITY":"3","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","__REALTIME_TIMESTAMP":"1733988723277782","_CAP_EFFECTIVE":"0","_PID":"526","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","_EXE":"/usr/bin/cib","_COMM":"cib","_RUNTIME_SCOPE":"system","_UID":"1001","__MONOTONIC_TIMESTAMP":"3277010798","_TRANSPORT":"stdout","_SYSTEMD_UNIT":"radicle-ci-broker.service","_SYSTEMD_SLICE":"system.slice","_SELINUX_CONTEXT":"unconfined\n","MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:03.277600Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","PRIORITY":"6","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","SYSLOG_IDENTIFIER":"cib","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","_GID":"1001","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=31363;b=da4ba18425ea4e34bd0f0731f0270572;m=c353376e;t=6290db5e53fd6;x=f0840763406ccc13","_HOSTNAME":"radicle-ci","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572"} {"_SYSTEMD_UNIT":"radicle-ci-broker.service","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=31364;b=da4ba18425ea4e34bd0f0731f0270572;m=c3627c05;t=6290db5f4846c;x=5bded561567c656f","MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:04.278174Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","_RUNTIME_SCOPE":"system","_HOSTNAME":"radicle-ci","_GID":"1001","PRIORITY":"6","_SELINUX_CONTEXT":"unconfined\n","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","__MONOTONIC_TIMESTAMP":"3278011397","_UID":"1001","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","SYSLOG_IDENTIFIER":"cib","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","_TRANSPORT":"stdout","_PID":"526","_COMM":"cib","__REALTIME_TIMESTAMP":"1733988724278380","SYSLOG_FACILITY":"3","_CAP_EFFECTIVE":"0","_SYSTEMD_SLICE":"system.slice","_EXE":"/usr/bin/cib","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e"} {"PRIORITY":"6","SYSLOG_FACILITY":"3","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","_TRANSPORT":"stdout","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","_GID":"1001","__REALTIME_TIMESTAMP":"1733988725278778","MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:05.278657Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","_PID":"526","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_SYSTEMD_UNIT":"radicle-ci-broker.service","_EXE":"/usr/bin/cib","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","SYSLOG_IDENTIFIER":"cib","_HOSTNAME":"radicle-ci","_COMM":"cib","_SYSTEMD_SLICE":"system.slice","__MONOTONIC_TIMESTAMP":"3279011794","_RUNTIME_SCOPE":"system","_UID":"1001","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=31367;b=da4ba18425ea4e34bd0f0731f0270572;m=c371bfd2;t=6290db603c83a;x=b50947c211c1edcb","_SELINUX_CONTEXT":"unconfined\n","_CAP_EFFECTIVE":"0"} 
{"SYSLOG_IDENTIFIER":"cib","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","_TRANSPORT":"stdout","_PID":"526","PRIORITY":"6","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","_GID":"1001","_SYSTEMD_SLICE":"system.slice","__REALTIME_TIMESTAMP":"1733988726279175","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","_COMM":"cib","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=31368;b=da4ba18425ea4e34bd0f0731f0270572;m=c381039f;t=6290db6130c07;x=7b8169758b14d38c","_SELINUX_CONTEXT":"unconfined\n","_RUNTIME_SCOPE":"system","_EXE":"/usr/bin/cib","_CAP_EFFECTIVE":"0","MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:06.279078Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","_HOSTNAME":"radicle-ci","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","_SYSTEMD_UNIT":"radicle-ci-broker.service","SYSLOG_FACILITY":"3","_UID":"1001","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","__MONOTONIC_TIMESTAMP":"3280012191"} {"MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:07.279440Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","_EXE":"/usr/bin/cib","_RUNTIME_SCOPE":"system","_TRANSPORT":"stdout","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","_SELINUX_CONTEXT":"unconfined\n","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_PID":"526","_HOSTNAME":"radicle-ci","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","__REALTIME_TIMESTAMP":"1733988727279541","_SYSTEMD_UNIT":"radicle-ci-broker.service","PRIORITY":"6","_GID":"1001","_UID":"1001","SYSLOG_FACILITY":"3","_COMM":"cib","SYSLOG_IDENTIFIER":"cib","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=31369;b=da4ba18425ea4e34bd0f0731f0270572;m=c390474d;t=6290db6224fb5;x=faafb09a153285ff","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","_CAP_EFFECTIVE":"0","_SYSTEMD_SLICE":"system.slice","__MONOTONIC_TIMESTAMP":"3281012557"} {"_PID":"526","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","_HOSTNAME":"radicle-ci","__MONOTONIC_TIMESTAMP":"3282012856","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","_TRANSPORT":"stdout","SYSLOG_IDENTIFIER":"cib","_SYSTEMD_SLICE":"system.slice","_COMM":"cib","_CAP_EFFECTIVE":"0","_SELINUX_CONTEXT":"unconfined\n","__REALTIME_TIMESTAMP":"1733988728279841","_SYSTEMD_UNIT":"radicle-ci-broker.service","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","_GID":"1001","SYSLOG_FACILITY":"3","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=3136a;b=da4ba18425ea4e34bd0f0731f0270572;m=c39f8ab8;t=6290db6319321;x=d04fd63cc7903199","PRIORITY":"6","_EXE":"/usr/bin/cib","MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:08.279755Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","_RUNTIME_SCOPE":"system","_UID":"1001"} {"MESSAGE":"{\"timestamp\":\"2024-12-12T07:32:09.280049Z\",\"level\":\"TRACE\",\"fields\":{\"message\":\"event queue 
length\",\"msg_id\":\"QueueProcQueueLength\",\"kind\":\"debug\",\"len\":\"0\"}}","_UID":"1001","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_HOSTNAME":"radicle-ci","_PID":"526","_TRANSPORT":"stdout","__REALTIME_TIMESTAMP":"1733988729280146","_SELINUX_CONTEXT":"unconfined\n","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","_RUNTIME_SCOPE":"system","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","_GID":"1001","PRIORITY":"6","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","_CAP_EFFECTIVE":"0","_SYSTEMD_UNIT":"radicle-ci-broker.service","_EXE":"/usr/bin/cib","SYSLOG_IDENTIFIER":"cib","_COMM":"cib","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","_SYSTEMD_SLICE":"system.slice","SYSLOG_FACILITY":"3","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=3136b;b=da4ba18425ea4e34bd0f0731f0270572;m=c3aece29;t=6290db640d692;x=b38aefb25148da02","__MONOTONIC_TIMESTAMP":"3283013161"} {"__MONOTONIC_TIMESTAMP":"1284501864","_PID":"526","__REALTIME_TIMESTAMP":"1733986730768848","_EXE":"/usr/bin/cib","__CURSOR":"s=f30566e7c34e4ba190c6c671ff826507;i=2d8f2;b=da4ba18425ea4e34bd0f0731f0270572;m=4c8ff168;t=6290d3f21f9d0;x=d9b8c7ef2053cc0f","_SYSTEMD_CGROUP":"/system.slice/radicle-ci-broker.service","_CAP_EFFECTIVE":"0","_SYSTEMD_SLICE":"system.slice","_SELINUX_CONTEXT":"unconfined\n","_CMDLINE":"/bin/cib --log-level trace --config /home/_rad/ci-broker.yaml process-events","_UID":"1001","SYSLOG_IDENTIFIER":"cib","_RUNTIME_SCOPE":"system","_BOOT_ID":"da4ba18425ea4e34bd0f0731f0270572","_MACHINE_ID":"35c654cc069c402d8cb0b34b91f12e5e","_SYSTEMD_UNIT":"radicle-ci-broker.service","_SYSTEMD_INVOCATION_ID":"949b57247eb24bb5aefcec93f049beab","SYSLOG_FACILITY":"3","_COMM":"cib","_STREAM_ID":"9c46f4f6f0074e07986a0c3769f6545e","_HOSTNAME":"radicle-ci","_GID":"1001","MESSAGE":"{\"timestamp\":\"2024-12-12T06:58:50.768787Z\",\"level\":\"INFO\",\"fields\":{\"message\":\"Finish CI run\",\"msg_id\":\"BrokerRunEnd\",\"kind\":\"finish_run\",\"run\":\"Run { broker_run_id: RunId { id: \\\"62c45727-a4d8-4a29-9dae-88c6e8b61655\\\" }, adapter_run_id: Some(RunId { id: \\\"fa233c62-51df-4812-865c-7b989915c1f3\\\" }), adapter_info_url: Some(\\\"http://radicle-ci/fa233c62-51df-4812-865c-7b989915c1f3/log.html\\\"), repo_id: RepoId(rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5), repo_name: \\\"heartwood\\\", timestamp: \\\"2024-12-12 06:58:03Z\\\", whence: Branch { name: \\\"master\\\", commit: Oid(d9c76893a144fd787654613f2bfb919613014a71), who: Some(\\\"did:key:z6MkiB8T5cBEQHnrs2MgjMVqvpSVj42X81HjKfFi2XBoMbtr (radicle-ci)\\\") }, state: Finished, result: Some(Failure) }\"},\"span\":{\"broker_run_id\":\"62c45727-a4d8-4a29-9dae-88c6e8b61655\",\"name\":\"execute_ci_run\"},\"spans\":[{\"broker_run_id\":\"62c45727-a4d8-4a29-9dae-88c6e8b61655\",\"name\":\"execute_ci_run\"}]}","_TRANSPORT":"stdout","PRIORITY":"6"}
10 Acceptance criteria for logging
The CI broker writes log messages to its standard error output (stderr), which the node operator can capture to a suitable persistent location. The logs are structured: each line is a JSON object. The structured logs are meant to be easier to process by programs, for example to extract information for monitoring, and alerting the node operator about problems.
An example log message might look like below (here formatted on multiple lines for human consumption):
{ "msg": "CI broker starts", "level": "INFO", "ts": "2024-08-14T13:38:36.733953135Z", }
Because logs are crucial for managing a system, we record acceptance criteria for the minimum logging that the CI broker needs to do.
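As an illustration of processing these structured logs, the sketch below picks out only warnings and errors with jq, using the level field shown in the example above; the log file name and the exact level names are assumptions.

# Illustrative: keep only WARN and ERROR lines from a captured log file,
# assuming the usual Rust tracing level names.
jq -c 'select(.level == "WARN" or .level == "ERROR")' < cib.log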
10.1 Logs start and successful end
What: cib logs a message when it starts and ends.
Why: The program starting to run can be important information, for example, to know when it's not running. It's also important to know if the CI broker terminates successfully.
Who: cib-devs
We verify this by starting cib in a mode where it processes any events already in the event queue, and then terminates. We don't add any events, so cib just terminates at once. All of this will work when properly set up.
10.2 Logs termination due to error
What: cib logs a message when it ends due to an unrecoverable error.
Why: It's quite important to know this. Note that a recoverable error does not terminate the CI broker.
Who: cib-devs
We check this by running the CI broker without a local node. This is an error it can't recover from.
11 Acceptance criteria for reports
The CI broker creates HTML and JSON reports on a schedule, as well as when CI runs end. The scenarios in this chapter verify that those reports are as wanted.
11.1 Produces a JSON status file
What: cib produces a JSON status file with information about the current state of the CI broker.
Why: This makes it easy to monitor the CI broker using an automated monitoring system.
Who: node-ops
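For illustration, the very least an automated monitor could do with such a file is check that it parses; the sketch below assumes the file is called status.json, which is not specified by this document.

# Illustrative: jq exits non-zero if the status file is not valid JSON.
jq . < status.json > /dev/null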
12 Acceptance criteria for upgrades
What: The node operator can safely upgrade the CI broker. At the very least, the CI broker developers need to know if they are making a breaking change.
Why: If software upgrades are tedious or risky, they happen less often, to the detriment of everyone.
Who: cib-devs, node-ops
It is important that those running the CI broker can upgrade confidently. This requires, at least, that CI broker upgrades in existing installations do not break anything, or at least not without warning. The scenario in this chapter verifies that in a simple, even simplistic manner.
Note that this upgrade testing is very much in its infancy. It is expected to be fleshed out over time. There will probably be more scenarios later.
The overall approach is as follows:
- we run various increasing versions of the CI broker
- we use the same configuration file and database for each version
- we have an isolated test node so that the CI broker can validate the repository and commit
- for each version, we use cibtool trigger and cib queued to run CI
- after each version, we verify that the database has all the CI runs it had before running the version, plus one more
Note that because this scenario may be run outside the developer's development environment, it is currently difficult to access the Git tags that represent the CI broker releases. Thus we refer to the releases by Git commit identifiers instead. Note that these should be commits, not tag objects, as the tests may need to run in a clone of the Git repository without tags.
This scenario needs to be updated when a new release has been made, to avoid the test suite taking too long to run. The goal is to verify, across releases, that the upgrade from each release to the next works. Thus, given releases 1, 2, 3, etc., we amend the scenario to drop all but the latest releases, and add any missing release. However, if we've neglected to update the scenario for a release, we make sure we don't break the chain.
release | scenario has
---|---
1 | none
2 | 1 HEAD
3 | 1 2 HEAD
4 | 2 3 HEAD
5 | 2 3 HEAD
6 | 3 4 5 HEAD
7 | 5 6 HEAD
Release 1 can't do upgrade tests, but it's long in the past so that's OK. Release 2 upgrades from release 1 to HEAD, the current tip of the branch. Release 3 upgrades from 1 to 2 to HEAD. Release 4 can drop release 1, but adds 3. After release 5 we forgot to update the scenario, so for release 6 we include testing the upgrade to release 4. For release 7 we can again trim the list.
This doesn't verify that upgrades work if we skip releases. We're OK with that, until users say they want to skip and are having trouble.
#!/bin/bash
# (bash, not sh: the script uses process substitution below.)
#
# Given a list of CI runs and a CI broker version, build and run that
# version so that it triggers and runs CI on a given change. Then
# verify the CI broker database has the CI runs in the list, plus one
# more, and then update the list.

set -eu

REPO="testy"
LIST="$1"
VERSION="$2"

# Unset this so that the Cargo cache doesn't get messed up. (This
# smells like a caching bug, or my misunderstanding.)
unset CARGO_TARGET_DIR

# Remember where various things are.
db="$(pwd)/ci-broker.db"
reports="$(pwd)/reports"
adapter="$(pwd)/adapter.sh"

# Remember where the config is and update config to use correct
# database and report directory.
config="$(pwd)/broker.yaml"
sed -i "s,^db:.*,db: $db," "$config"
sed -i "s,^report_dir:.*,report_dir: $reports," "$config"
sed -i "s,command:.*,command: $adapter," "$config"
nl "$config"

# Get source code for CI broker. The scenario that uses this script
# sets $SRCDIR to point at the source tree, so we get the source code
# from there to avoid having to fetch things from the network.
rm -rf ci-broker html
mkdir ci-broker html
export SRCDIR="$CARGO_MANIFEST_DIR"
(cd "$SRCDIR" && git archive "$VERSION") | tar -C ci-broker -xf -

# Do things in the exported CI broker source tree. Capture stdout to a
# new list of CI runs.
(
    cd ci-broker

    # Build source code.
    find -name '*.rs' -exec sed -Ei '/\[deny\(/d' '{}' +
    cargo build --all-targets

    (
        echo "Old CI run lists:"
        cargo run -q --bin cibtool -- --db "$db" run list 1>&2
        cargo run -q --bin cibtool -- --db "$db" run list --json
    ) 1>&2

    # Trigger a CI run. Hide the event ID that cibtool writes to
    # stdout.
    cargo run -q --bin cibtool -- --db "$db" trigger --repo "$REPO" --ref main --commit HEAD >/dev/null

    # Run CI on queued events.
    cargo run -q --bin cib -- --config "$config" queued

    # List CI runs now in database.
    cargo run -q --bin cibtool -- --db "$db" run list
) >"$LIST.new"

# Check that new list contains everything in old list, plus one more.
removed="$(diff -u <(sort "$LIST") <(sort "$LIST.new") | sed '1,/^@@/d' | grep -c "^-" || true)"
added="$(diff -u <(sort "$LIST") <(sort "$LIST.new") | sed '1,/^@@/d' | grep -c "^+" || true)"
if [ "$removed" = 0 ] && [ "$added" = 1 ]; then
    echo "CI broker $VERSION ran OK"
    mv "$LIST.new" "$LIST"
else
    echo "CI broker removed $removed, added $added CI runs." 1>&2
    exit 1
fi