You have a dozen systems and aim to port software to all of them.
How do you make sure that your porting efforts work on all systems?
Let's play portability whack-a-mole!
This section of the portability series introduces how developing
minci
helped us keep on top of portability across all systems.
Building minci was a straightforward choice
based upon our needs and the complexity of other available tools, but either way you're going to
need some sort of machinery to verify that your continued portability efforts don't cause
fallout on other systems.
This is especially true with umbrella systems, such as Linux distributions or SunOS derivatives,
where a portability measure aimed at the umbrella as a whole can cause fallout for specific systems within it.
why continuous integration
With the continued development and integration of
oconfigure underway, I soon ran into
problems where one portability measure (e.g., adding _DEFAULT_SOURCE on a glibc
Linux system to expose crypt(3)) would cause fallout
on other systems. In this example, the fallout arose because _XOPEN_SOURCE had already been
defined and conflicted with the new define.
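To make the collision concrete, here's a small test program in the spirit of what a configure
script might compile to probe for crypt(3). It is only a sketch: the macro values and the
crypt.h include guard are illustrative, not lifted from oconfigure.

/*
 * Illustrative probe for crypt(3) with both feature-test macros in
 * play.  On glibc, _DEFAULT_SOURCE (or _GNU_SOURCE) is what exposes
 * crypt(); if _XOPEN_SOURCE is already defined for some other
 * interface, the visible declarations can differ from system to
 * system, which is exactly the fallout described above.
 */
#define _XOPEN_SOURCE 700	/* already defined for another interface */
#define _DEFAULT_SOURCE		/* added to expose crypt(3) on glibc */

#include <unistd.h>
#if defined(__linux__)
# include <crypt.h>		/* newer glibc declares crypt() here */
#endif

int
main(void)
{
	/* On glibc, link with -lcrypt. */
	return crypt("password", "ab") == NULL;
}

Whether this compiles, links, and runs cleanly depends on the system: that variability is what
the continuous integration below is meant to catch.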
I needed a tool for making sure that commits made while focusing on one system were not causing
fallout on other, already-ported systems.
With only one other system, I could keep both open in terminals.
As that number grew, it became impractical.
This is where CI (continuous integration) tools are handy for verifying…
compilation: that the source code compiles and links (catches missing
functions and required libraries, feature tests, …);
regressions: run-time testing of the system in motion (catches
alignment issues, different function behaviour, …); and
distribution: assets bundled with the distributed system are sufficient
for building, regressions, and installation (catches forgotten assets).
Searching for CI tools produced lots of useless information.
It's a buzzword and results were heavily skewed toward heavy-weight vendors.
I'm sure there are many high-quality, light-weight systems hidden in the results, but I couldn't
find any.
In the end, it seemed much simpler to write one myself.
At heart, a CI tool consists of a test runner, which performs the verification on a single host; and a
report server, which manages the reports from the test runners.
A test runner for BSD.lv tools would need only:
freshen sources
configure sources (run configure)
build and test regressions and distribution (run make with various rules)
report success or failure to the server
The server would need only:
collect reports
display reports
In the end, this was accomplished with a shell script, an
openradtool data model, some driver code, and
some nice CSS.
building a continuous integratationator
I ended with the report server (but am starting with it here). The report server handles the set of
test reports, which it must both accept and produce. Each report consists of:
what software was tested
what system it was run on
when the stages of freshening, configuring, and building finished (or not)
identity of the tester
For simplicity, I combined the accept and produce functionality into one tool backed by
openradtool.
Reports are accepted over a simple HTTP POST and produced in HTML5 in response to an HTTP
GET on the same CGI resource.
The data model is described in
db.ort
and the back-end driver in
main.c.
The driver both accepts reports and formats them in HTML.
The rest is just CSS.
For the test runner, I opted for a simple POSIX shell script,
minci.sh.
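In outline it does no more than the four steps listed earlier. The following is a heavily
abridged sketch of that shape, not the real minci.sh: the repository, make rules, report URL,
and form-field names are all placeholders.

#!/bin/sh
# Abridged sketch of a test runner; NOT the real minci.sh.  The
# repository, make rules, report URL, and form fields are placeholders.

repo=https://github.com/kristapsdz/oconfigure
dir=$HOME/ci/oconfigure
server=https://ci.example.com/cgi-bin/minci

# 1. Freshen sources.
if [ -d "$dir" ]; then
	(cd "$dir" && git pull -q) || exit 1
else
	git clone -q "$repo" "$dir" || exit 1
fi
cd "$dir" || exit 1

# 2.-3. Configure, build, run regressions and the distribution check,
# remembering the last stage that succeeded.
stage=start
./configure && stage=configure &&
	make && stage=build &&
	make regress && stage=regress &&
	make distcheck && stage=distcheck

# 4. Report how far we got.
curl -sS -o /dev/null \
	-F "project=oconfigure" \
	-F "system=$(uname -rs)" \
	-F "stage=$stage" \
	"$server"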
So far this has worked fine on all target systems, with some snags along the way in portable
idioms for UNIX tools, e.g., head -n1 vs. head -1 or
echo -n vs. printf.
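For instance, POSIX leaves echo -n unspecified and only standardises the -n form of head(1),
so a portable script sticks to spellings along these lines:

# echo -n is unspecified by POSIX (some shells print a literal "-n");
# printf(1) is the portable way to suppress the trailing newline.
printf '%s' "no trailing newline"

# POSIX head(1) only specifies the -n form of the line count.
uname | head -n 1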
I have the HTTP POST component handled by
curl, which is available on all systems I use, often as
part of the base system.
The BSD.lv tools are all mirrored from a local CVS repository to
GitHub, making it easy to fetch sources from one
location.
The reason for using git(1) instead of
cvs(1) via anoncvs is the concept of a
source tree identified by the last commit.
This way, if any two source trees have the same last-commit identifier, they're identical.
CVS doesn't have this concept, so tracking the same source trees is more difficult.
The minci.sh test runner
uses this identifier to check whether sources need to be updated and re-checked.
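A sketch of that check, assuming the runner records the last-tested commit identifier in a
state file; the file name and paths here are made up for illustration.

# Skip the run entirely when the tree hasn't changed since last time.
dir=$HOME/ci/oconfigure
state=$HOME/ci/oconfigure.last

cd "$dir" || exit 1
git pull -q || exit 1

last=$(cat "$state" 2>/dev/null)
now=$(git rev-parse HEAD)

if [ "$now" = "$last" ]; then
	exit 0
fi

# ...configure, build, test, and report as above...

printf '%s\n' "$now" > "$state"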
The test runner runs periodically via
crontab(1).
This introduces a delay in testing sources, but this is at worst a minor inconvenience.
Most heavy-weight continuous integration tools are triggered on each commit, but I considered
this unnecessary complexity.
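The crontab(1) entry itself is as simple as it sounds; something along these lines, with the
path and hourly schedule being examples rather than the real configuration:

# Run the test suite at the top of every hour.
0 * * * *	/home/ci/minci.sh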
Instead of relying on complicated usernames and passwords, the test runner identifies itself
with an email address and shared secret token (hashed in the POST submission)
in its configuration file.
Very straightforward and very effective.
Adding a new tester to the database is as simple as generating a token and assigning that to an
e-mail address.
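Generating that token needs nothing fancier than a random string, and "hashed in the POST
submission" can be as simple as a keyed digest over the submitted fields. The commands below
only sketch that idea; they are not minci's actual scheme, and the secret and payload are
placeholders.

# Mint a shared secret for a new tester; any strong random string works.
openssl rand -hex 32

# Sketch of the "hashed in the POST" idea: send a keyed digest of the
# submitted fields rather than the secret itself.
secret=example-secret-from-config-file
payload="project=oconfigure&stage=distcheck"
printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret"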
local test-runner configuration
Since the BSD.lv tools have been fitted to use
pkg-config(1)
for detecting libraries, it's easy for the test runner user to override system libraries, which
might be out of date, with those installed locally.
The user need only set PKG_CONFIG_PATH to where the package specification files are
found.
This comes in particularly handy for downloading LibreSSL
or a newer version of OpenSSL.
For systems like
kcaldav, I was also able to use this for local
versions of kcgi when it wasn't available from a
system's third-party package manager.
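Concretely, assuming the local LibreSSL or kcgi was installed under $HOME/local (the prefix is
only an example), the override is a single variable:

# Prefer the locally-installed libraries' .pc files over the system's.
PKG_CONFIG_PATH=$HOME/local/lib/pkgconfig
export PKG_CONFIG_PATH
./configure
make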
At present, there's no automated way to deploy dependencies for the test runners: they must be
manually installed.
As I introduce more interdependent systems (like
kcaldav), this will probably change.
The last feature added to the test runner was an auto-update, which works much in the same way
as the other parts checking for source freshness.
Auto-updating isn't a long-term feature, but is perfect for in situ updates as I flesh
out the continuous integration tool itself.
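In the same spirit as the source-freshness check, the auto-update boils down to something like
the following sketch, re-executing the runner when its own repository has moved on; the paths
are illustrative only.

# If the runner's own checkout has new commits, switch to the new copy.
minci_dir=$HOME/ci/minci
before=$(cd "$minci_dir" && git rev-parse HEAD)
(cd "$minci_dir" && git pull -q) || exit 1
after=$(cd "$minci_dir" && git rev-parse HEAD)

if [ "$before" != "$after" ]; then
	exec sh "$minci_dir/minci.sh" "$@"
fi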
results and future work
With continuous integration in place on all platforms, I'm free to make changes to individual
repositories knowing that, within an hour or so, I can see whether the changes have affected
other platforms.
One unexpected (but excellent) result of minci is that
it puts more pressure on me to provide regression tests.
This is especially valuable on the SPARC64 hardware: running the regressions there demonstrates
that the standard operation of my tools doesn't have hidden alignment issues.
Since starting this continuous integration, I've added dozens of regression tests to
oconfigure,
kcgi, and even
kcaldav.
For future work, I'll add a notification tool for the report server.
The idea is to store a token of the last check time, check for reports since then that have passed
or failed, then pass these along to a script, much as
hotplugd(8) and similar tools do.
This way, I needn't actually check the reports dashboard: I can just wait for an e-mail if updates
have failed on any systems.