kcgi is an open source CGI and FastCGI library for C web applications. It is minimal, secure, and auditable, and a useful addition to your BCHS application stack. To start, install the library, then read the usage guide. Use the GitHub tracker for questions or comments, or find contact information there. kcgi is a BSD.lv project.
For a fuller example, see sample.c, or jump to the Usage section.
kcgi supports many features: automatic compression, handling of all HTTP input (query strings, cookies, page bodies, multipart) with validation, authentication, configurable output caching, request debugging, and so on. Its strongest differentiating feature is its use of sandboxing and process separation on the untrusted input path.
First, check whether kcgi is already a third-party port for your system, such as on OpenBSD or FreeBSD. If so, install it using that system's package manager.
If not, you'll need a modern UNIX system.
To date, kcgi has been built and run on GNU/Linux machines, BSD (OpenBSD, FreeBSD), and Mac OS X (Snow Leopard, Lion) on i386 and AMD64.
It has been deployed under Apache, nginx, and OpenBSD's httpd(8) (the latter two natively over FastCGI).
The only hard dependency is GNU make.
If you're running the regression tests (see Testing), you'll need libcurl and libbsd (on Linux).
Begin by downloading kcgi.tgz and verifying the archive against kcgi.tgz.sha512.
Once downloaded, compile the software with make (gmake on OpenBSD systems), which will automatically run a configuration script to conditionally deploy portability glue. Finally, install the software using make install, optionally specifying PREFIX if you don't intend to use /usr/local.
If you'd like to contribute to kcgi or to use the bleeding-edge version between releases, the CVS repository is mirrored on GitHub. Installation instructions tracking the repository version may be found on that page.
If kcgi doesn't compile, please send me the config.log
file and the output of the failed compilation.
If it's non-trivial, it'll help if I have access to the system (or one like it) where the error occurred.
If you're running on an operating system with an unsupported sandbox, let me know and we can work
together to fit it into the configuration and portability layer.
Lastly, I'd love for kcgi to compile with mingw for Microsoft systems: please contact me if you can do the small amount of work (I think?) needed to port the remaining non-portable functions.
The kcgi manpages, starting with kcgi(3), are the canonical source of documentation. You can also browse the full list of functions; or, if it's easier to start by example, you can use kcgi-framework as initial boilerplate for your project. The following are introductory materials to the system.
In addition to these resources, a number of conference sessions have referenced kcgi, including several relating to extending standards.
Applications using kcgi behave just like any other application.
To compile kcgi applications, include the kcgi.h header file and make sure it appears in the compiler inclusion path. (You'll need to include stdint.h before it for the int64_t type, stdarg.h for the va_list type, and stddef.h for the size_t type.) Linking is similarly straightforward: link to libkcgi and, if your system has compression support, libz.
Well-configured web servers, such as OpenBSD's default httpd(8), run within a chroot(2). If this is the case, you'll need to statically link your binary.

If running within a chroot(2) on OpenBSD prior to 5.9 (i.e., with systrace(4)), be aware that the sandbox method requires /dev/systrace within the chroot. By default, this file does not exist in the web server root. Moreover, the default web server root mount-point, /var, is mounted nodev. This complication does not exist for the other sandboxes.
FastCGI applications may either be started directly by the web server (popular with Apache) or externally, with a socket provided by kfcgi(8) (the latter method is normative for OpenBSD's httpd(8) and suggested for the security precautions taken by the wrapper).
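Under OpenBSD's httpd(8), for example, the wiring might look roughly like this; socket paths and names are purely illustrative, so see kfcgi(8) and httpd.conf(5) for the exact syntax:

```
# Launch the application via the kfcgi(8) wrapper, which opens the
# socket and forks worker processes:
#   kfcgi -s /var/www/run/app.sock -- /var/www/cgi-bin/app

# httpd.conf: forward matching requests to that socket
# (socket paths here are relative to httpd's /var/www chroot):
server "example.com" {
	listen on * port 80
	location "/app*" {
		fastcgi socket "/run/app.sock"
	}
}
```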
The bulk of kcgi's CGI handling lies in khttp_parse(3), which fully parses the HTTP request. Application developers must invoke this function before all others. For FastCGI, this function is split between khttp_fcgi_init(3), which initialises context; and khttp_fcgi_parse(3), which receives new parsed requests. In either case, requests must be freed by khttp_free(3).
All functions isolate the parsing and validation of untrusted network data within a sandboxed child process. Sandboxes limit the environment available to a process, so exploitable errors in the parsing process (or in validation with third-party libraries) cannot touch the system environment. The parsed data is returned to the parent process over a socket. The HTTP parser and input validator manage a single HTTP request, while the connection delegator accepts new HTTP requests and passes them along.
This method of sandboxing the untrusted parsing process follows OpenSSH, and requires special handling for each operating system:

On Linux, seccomp(2) filtering supplemented by setrlimit(2) limiting. For the time being, this feature is only available for the x86, x86_64, and arm architectures. If you're using another one, please send me your uname -m and, if you know it, the correct audit architecture.

On OpenBSD, pledge(2) (or, prior to 5.9, systrace(4)) supplemented by chroot(2), which is strongly suggested. If you're using a stock OpenBSD, make sure that the mount-point of /dev/systrace isn't mounted nodev.

On FreeBSD, capsicum(4).

On Mac OS X, sandbox_init(3) with the "pure computation" profile as provided in Mac OS X Leopard and later. This is supplemented by resource limiting via setrlimit(2).
Since validation occurs within the sandbox, special care must be taken that validation routines don't access the environment (e.g., by opening files or network connections), as the child might be abruptly killed by the sandbox facility. (Not all sandboxes do this.) If required, this kind of validation can take place after the parse-and-validation sequence.
The connection delegator is similar, but has different sandboxing rules, as it must manage an open socket connection and respond to new requests.
kcgi ships with a fully automated testing framework, executed with make regress.
Interfacing systems can also make use of this by working with the kcgiregress(3) library.
This framework acts as a mini-webserver, listening on a local port, translating an HTTP document into a
minimal CGI request, and passing the request to a kcgi CGI client.
For internal tests, test requests are constructed with libcurl.
The local port bound by the framework is fixed: if you plan on running the regression suite, you may need to tweak its access port to avoid conflicts.
Another testing framework exists for use with the American fuzzy lop (AFL) fuzzer. To use this, you'll need to build the make afl target with your compiler of choice, e.g., make clean, then make afl CC=afl-gcc. Then run the afl-fuzz tool on the afl-urlencoded and the other afl binaries using the test cases (and dictionaries, for the first) provided.
The system has also been passed through a Coverity scan, with the results available at projects/kcgi. Coverity has been discontinued as an ongoing mechanism due to the lack of an OpenBSD client.
Security comes at a price, but not a stiff one. By design, kcgi incurs overhead in three ways: first, spawning a child to process the untrusted network data; second, enacting the sandbox framework; and third, passing parsed pairs back to the parent context. When running CGI scripts, kcgi performance is bound to the operating system's ability to spawn and reap processes. For FastCGI, the bottleneck becomes the transfer of data. The following compares the responsiveness of kcgi against the baseline web-server performance.
This shows the empirical cumulative distribution of a statistically significant number of page requests as measured by ab(1) with 10 concurrent requests. The CGI line is the CGI sample included in the source; the FastCGI line is the FastCGI sample; the CGI (simple) line simply emits a 200 HTTP status and Hello, World; and the static line is a small static file on the web server. The operating system is Mac OS X 10.7.5 running on a MacBook Air laptop (1.86 GHz Intel Core 2 Duo, 2 GB RAM). The FastCGI server was started using the kfcgi(8) defaults.