With the continued development and integration of
oconfigure underway, I soon ran into
problems where one portability measure (e.g., adding
_DEFAULT_SOURCE on a glibc
Linux for including crypt(3)) would cause fallout
on other systems. In this example, _XOPEN_SOURCE had already been defined elsewhere and conflicted with the new define.
I needed a tool for making sure that commits to the system under current focus were not causing fallout on other systems.
With only one other system, I could keep both open in terminals.
As that number grew, it became impractical.
This is where CI (continuous integration) tools are handy: they automatically verify changes across many systems.
Searching for CI tools produced lots of useless information: "CI" is a buzzword, and results were heavily skewed toward heavy-weight vendors. I'm sure there are many high-quality, light-weight systems hidden in the results, but I couldn't find them.
In the end, it seemed much simpler to write one myself. At heart, a CI tool consists of a test runner, which verifies the sources on a single host, and a report server, which manages the reports of the test runners. A test runner for BSD.lv tools would need only to fetch the sources, configure them, and run make with various rules. The server would need only to accept reports and display them.
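The runner half can be sketched in a few lines of POSIX shell. This is only an illustration, not minci's actual code: the configure script and make targets below are hypothetical stand-ins for a real project.

```shell
#!/bin/sh
# Minimal test-runner sketch: run_ci configures, builds, and tests a
# source tree, then reports pass or fail.  The configure script and
# make targets are hypothetical stand-ins for a real project.
run_ci() {
	(cd "$1" && ./configure && make && make regress) >/dev/null 2>&1 \
		&& echo pass || echo fail
}

# Demonstrate against a stub project that always succeeds.
demo=$(mktemp -d)
printf '#!/bin/sh\nexit 0\n' > "$demo/configure"
chmod +x "$demo/configure"
printf 'all:\n\ttrue\nregress:\n\ttrue\n' > "$demo/Makefile"
run_ci "$demo"		# prints "pass"
rm -rf "$demo"
```

A real runner would then POST the result to the report server instead of printing it.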
All of this was accomplished with a shell script, an openradtool data model, some driver code, and some nice CSS.
I finished with the report server, but I'll start with it here. The report server handles the set of test reports, which it must both accept and produce. Each report consists of the results of building and testing a source tree on a particular system.
For simplicity, I combined the accept and produce functionality into one tool backed by openradtool. Reports are accepted over a simple HTTP POST and produced in HTML5 from an HTTP GET from the same CGI resource.
The data model is described in db.ort and the back-end driver in main.c. The driver both accepts reports and formats them in HTML. The rest is just CSS.
For the test runner, I opted for a simple POSIX shell script. So far this has worked fine on all target systems, with some snags along the way in portable idioms for UNIX tools, e.g., head -n1 vs. head -1, or the non-portable echo -n.
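The usual portable replacement for such idioms is printf(1), which POSIX specifies precisely; for example:

```shell
#!/bin/sh
# echo -n is not portable: some shells print "-n" literally.
# printf(1) behaves the same everywhere.
printf '%s' 'no trailing newline'	# suppresses the newline portably
printf '\n'
printf 'name=%s id=%d\n' minci 7	# prints "name=minci id=7"
```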
The HTTP POST component is handled by curl, which is available on all systems I use, often as part of the base system.
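A submission along these lines might look as follows. The endpoint and form-field names are hypothetical, not minci's actual protocol, and the leading echo makes this a dry run that only prints the command it would execute:

```shell
#!/bin/sh
# Sketch of submitting a report with curl(1).  The endpoint and the
# form-field names are hypothetical; the real fields differ.
# The "echo" makes this a dry run that only prints the command.
SERVER=https://example.org/cgi-bin/minci.cgi
echo curl -sf \
	--form "commit=0123abcd" \
	--form "system=$(uname -s)" \
	--form "status=pass" \
	"$SERVER"
```

Dropping the echo would perform the actual multipart POST.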
The BSD.lv tools are all mirrored from a local CVS repository to GitHub, making it easy to fetch sources from one place.
The reason for using git(1) instead of
anoncvs is the concept of a
source tree identified by the last commit.
This way, if any two source trees have the same last-commit identifier, they're identical.
CVS doesn't have this concept, so tracking the same source trees is more difficult.
The minci.sh test runner
uses this identifier to check whether sources need to be updated and re-checked.
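The freshness check can be sketched as follows; the cache-file location and function name are my own choices for this example, not minci's:

```shell
#!/bin/sh
# Sketch of the freshness check: compare the checked-out head against
# a cached commit identifier and succeed only when they differ.
# A real runner would "git pull" before checking.
needs_update() {
	# $1: checked-out repository   $2: cache file
	new=$(git -C "$1" rev-parse HEAD)
	old=$(cat "$2" 2>/dev/null || echo none)
	[ "$new" != "$old" ] && echo "$new" > "$2"
}

# Demonstrate with a fresh one-commit repository.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=t@t -c user.name=t \
	commit -q --allow-empty -m init
cache=$repo/.minci-commit
needs_update "$repo" "$cache" && echo update || echo up-to-date	# prints "update"
needs_update "$repo" "$cache" && echo update || echo up-to-date	# prints "up-to-date"
rm -rf "$repo"
```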
The test runner runs periodically via crontab(1). This introduces a delay in testing sources, but that's at worst a minor inconvenience. Most heavy-weight continuous integration tools are triggered on each commit, but I considered this unnecessary complexity.
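An entry like the following, installed with crontab -e, runs the runner at the top of every hour; the script path is a hypothetical example:

```
# m h dom mon dow	command
0 * * * *	/home/ci/minci.sh
```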
Instead of relying on complicated usernames and passwords, the test runner identifies itself with an email address and shared secret token (hashed in the POST submission) in its configuration file. Very straightforward and very effective. Adding a new tester to the database is as simple as generating a token and assigning that to an e-mail address.
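Deriving the submitted hash might look like the sketch below. The exact fields and hashing scheme are assumptions for illustration, not minci's actual protocol, and sha256sum is the Linux spelling (BSDs ship sha256(1) instead):

```shell
#!/bin/sh
# Sketch of identifying a runner: hash the shared secret together
# with the e-mail address before submission.  The field layout and
# hash tool here are assumptions, not minci's actual protocol.
EMAIL=tester@example.org
SECRET=hunter2

hash=$(printf '%s:%s' "$EMAIL" "$SECRET" | sha256sum | cut -d' ' -f1)
echo "$hash"
```

The server recomputes the same hash from its stored token and compares.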
Since the BSD.lv tools have been fitted to use pkg-config for detecting libraries, it's easy for the test runner user to override system libraries, which might be out of date, with those installed locally. The user need only set PKG_CONFIG_PATH to where the package specification files are installed.
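For example, a small helper can prepend a directory of .pc files so pkg-config consults it first; the directory name below is a hypothetical example:

```shell
#!/bin/sh
# Prepend a directory of .pc files to PKG_CONFIG_PATH so that
# pkg-config finds locally installed libraries before system ones.
# The directory used in the demonstration is a hypothetical example.
prepend_pkg_path() {
	if [ -n "${PKG_CONFIG_PATH:-}" ]; then
		PKG_CONFIG_PATH=$1:$PKG_CONFIG_PATH
	else
		PKG_CONFIG_PATH=$1
	fi
	export PKG_CONFIG_PATH
}

unset PKG_CONFIG_PATH
prepend_pkg_path "$HOME/local/lib/pkgconfig"
echo "$PKG_CONFIG_PATH"
```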
This comes in particularly handy for downloading LibreSSL or a newer version of OpenSSL.
For systems like
kcaldav, I was also able to use this for local
versions of kcgi when it wasn't available from a
system's third-party package manager.
At present, there's no automated way to deploy dependencies for the test runners: they must be manually installed. As I introduce more inter-depending systems (like kcaldav), this will probably change.
The last feature added to the test runner was an auto-update, which works in much the same way as the other checks for source freshness. Auto-updating isn't a long-term feature, but it's perfect for in situ updates as I flesh out the continuous integration tool itself.
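The core of such a self-update can be sketched as below; the function name and file handling are assumptions for illustration. A real runner would also exec the new copy after replacing itself:

```shell
#!/bin/sh
# Sketch of an auto-update: if a freshly fetched copy of the runner
# differs from the one on disk, replace it.  A real runner would
# then re-exec itself; this sketch only reports what happened.
self_update() {
	# $1: installed script   $2: freshly fetched copy
	if ! cmp -s "$1" "$2"; then
		cp "$2" "$1"
		echo updated
	else
		echo current
	fi
}

# Demonstrate with two temporary files.
old=$(mktemp); new=$(mktemp)
echo 'version 1' > "$old"
echo 'version 2' > "$new"
self_update "$old" "$new"	# prints "updated"
self_update "$old" "$new"	# prints "current"
rm -f "$old" "$new"
```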
With continuous integration in place on all platforms, I'm free to make changes to individual repositories knowing that within an hour or so, I can see whether the changes have caused fallout on any platforms.
You can see the results for the BSD.lv tools in action at kristaps.bsd.lv/cgi-bin/minci.cgi.
One unexpected (but excellent) result of minci is that it puts more pressure on me to provide regression tests. This is especially valuable on the SPARC64 hardware: it demonstrates that standard operation of my tools doesn't hide alignment issues. Since starting this continuous integration, I've added dozens of regression tests to oconfigure, kcgi, and even kcaldav.
For future work, I'll add a notification tool for the report server. The idea is to store a token of the last check time, run a check for reports since then that have passed or failed, then pass these along to a script, much as hotplugd(8) and other tools do. This way, I needn't actually check the reports dashboard, and can just wait for an e-mail if updates have failed on any systems.
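The check itself could be as small as the sketch below. The "epoch status" report format is a made-up stand-in for querying the real report server, and in practice the selected lines would be piped to mail(1) rather than printed:

```shell
#!/bin/sh
# Sketch of the planned notifier: keep the epoch time of the last
# check in a stamp file and emit only failures recorded since then.
# The "epoch status" report format is a made-up stand-in for
# querying the real report server.
notify_failures() {
	# $1: report file of "epoch status" lines   $2: stamp file
	last=$(cat "$2" 2>/dev/null || echo 0)
	awk -v last="$last" '$1 > last && $2 == "fail"' "$1"
	date +%s > "$2"
}

# Demonstrate with a fake report list.
reports=$(mktemp)
stamp="$reports.stamp"
printf '100 pass\n200 fail\n300 pass\n' > "$reports"
notify_failures "$reports" "$stamp"	# prints "200 fail"
rm -f "$reports" "$stamp"
```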