FastCGI Extensions for Management Control

Introduction

One significant weakness of FastCGI is that the usual application model (not the protocol itself) prevents the worker manager from growing the worker pool in response to load. The restriction arises from section 3.2 of the specification, Accepting Transport Connections, which stipulates that workers must themselves accept incoming FastCGI connections on the transport socket.

[Figure: Relationship between web server, manager, and workers.]

In the figure, the manager initiates the transport socket and starts its workers (steps 1–3). The workers then inherit the open transport socket and wait to accept transport connections, which arrive directly from the web server (steps 4–6). Unfortunately, if the transport socket's backlog fills, there is no way in the FastCGI specification for the manager to be apprised of the fact: no (known) systems supported by kcgi allow querying the backlog size or being notified of backlog saturation. Since the manager is blind to anything but worker process status, the burden falls to the operator to pre-allocate workers.
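
To make the blind spot concrete, the conventional start-up can be sketched as follows. This is a minimal, BSD-flavoured illustration only, not kcgi's actual manager code; the socket path and worker count are arbitrary. The manager binds and listens on the transport socket, then forks workers that inherit it as FCGI_LISTENSOCK_FILENO (standard input) and accept connections themselves.

  /*
   * Sketch only: the manager opens the transport socket, the workers
   * inherit it on standard input (FCGI_LISTENSOCK_FILENO) and accept()
   * there themselves.
   */
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <sys/wait.h>
  #include <err.h>
  #include <string.h>
  #include <unistd.h>

  int
  main(void)
  {
      struct sockaddr_un sun;
      int fd, i;

      if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)
          err(1, "socket");
      memset(&sun, 0, sizeof(sun));
      sun.sun_family = AF_UNIX;
      strlcpy(sun.sun_path, "/var/www/run/app.sock", sizeof(sun.sun_path));
      unlink(sun.sun_path);
      if (bind(fd, (struct sockaddr *)&sun, sizeof(sun)) == -1)
          err(1, "bind");
      if (listen(fd, 5) == -1)        /* fixed backlog the manager cannot observe */
          err(1, "listen");

      for (i = 0; i < 5; i++)         /* pre-allocated workers */
          if (fork() == 0) {
              dup2(fd, STDIN_FILENO); /* the worker sees the socket as fd 0 */
              execlp("./worker", "worker", (char *)NULL);
              err(1, "execlp");
          }

      /* The manager sees only worker exit status, never the backlog. */
      while (wait(NULL) != -1)
          ;
      return 0;
  }

Once the workers have been started, nothing in this arrangement tells the manager whether five workers are too few: the kernel silently queues (or rejects) connections on its behalf.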

Existing Solutions

The usual solution is to pre-allocate workers and simply fail on resource exhaustion: when the web server tries connecting to a saturated transport socket, the connection fails and the web server rejects the request. This is the method used by httpd(8) and other servers when using the external process mode. Obviously, this is sub-optimal. Many web servers address this by acting as managers themselves.

[Figure: Web server assuming the role of a manager.]

In the figure's configuration (which is standard in mod_fcgid, among other FastCGI implementations), the web server opens the transport socket and manages connections itself. Since it keeps track of worker activity by passing HTTP connections into the transport socket, it is able to allocate new workers on demand.

While this is an attractive solution, it puts a considerable burden of complexity on the web server, which must act both as an I/O broker and as a process manager. Moreover, the security model of the web server is compromised: since the FastCGI clients may need to run in the root file-system or without resource constraints, the web server must also run in this environment. This poses a considerable burden on the server developer: to maintain separation of capabilities, it must manage connections in one process and worker processes in another, with a channel of communication between the two.

Potential Solutions

One method of solving this, and perhaps the best, is for the worker manager to allocate a transport socket for each of its workers. It would then accept transport connections on behalf of the workers and channel data between the main transport socket and the workers' sockets.

[Figure: Process manager multiplexing transport sockets.]

While an attractive fix, this puts a considerable burden on the manager to act both as a process manager and an I/O broker, which is the same problem described in Existing Solutions above, only for the manager instead of the web server. Moreover, it puts considerable I/O overhead on the system for copying data: the manager will not be apprised of terminating transport connections unless it inspects the data itself. The result is that FastCGI responses cannot be spliced: they must be analysed.

Another option is to give each worker the ability to notify the manager of connection saturation. A saturated socket is one where accepting a connection happens instantly; this is easily reckoned by making a non-blocking poll on the socket prior to accepting and seeing whether a connection is immediately available. If so, the socket is saturated at that moment and the manager might want to add more workers. Unfortunately, there is no trivial way for the worker to talk back to the manager: signals are consolidated, so multiple "I'm saturated" signals may collapse into one, and other means (such as shared memory) are increasingly complex.
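
Detecting saturation itself is straightforward. The following is a minimal sketch (not kcgi's implementation) of the check described above, assuming the worker accepts on FCGI_LISTENSOCK_FILENO: a zero-timeout poll(2) just before accept(2) reveals whether a connection is already queued.

  /*
   * Sketch: a zero-timeout poll before accepting.  If a connection is
   * already pending, accept() would return instantly, so the listen
   * queue is non-empty at this moment ("saturated" in the above sense).
   */
  #include <poll.h>
  #include <err.h>

  static int
  listen_queue_nonempty(int sock)
  {
      struct pollfd pfd;

      pfd.fd = sock;
      pfd.events = POLLIN;
      if (poll(&pfd, 1, 0) == -1)     /* zero timeout: never blocks */
          err(1, "poll");
      return (pfd.revents & POLLIN) != 0;
  }

The hard part, as noted, is not the check but reporting its result back to the manager without signals or shared state.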

Implemented Solution

The solution implemented by kcgi is very simple, but extends the language of the specification. In short, if the FCGI_LISTENSOCK_DESCRIPTORS environment variable is passed to the worker, the worker waits to receive a file descriptor (and a cookie) over the descriptor named by that variable instead of accepting the connection itself. Then, when the connection is complete, the worker must write the cookie back to that descriptor.

[Figure: Process manager passing file descriptors to workers.]

In the figure, the manager listens on the transport socket and passes accepted descriptors directly to the workers, which then operate on the FastCGI data. This avoids the penalty (and complexity) of channeling I/O, yet allows the manager to keep track of connections and allocate more workers if necessary. This changes the current logic of section 3.2 from the following:

  1. accept a descriptor on the FCGI_LISTENSOCK_FILENO socket (standard input)
  2. operate on FastCGI data
  3. close descriptor

to the following, noting the additional shut-down step, which also changes section 3.5:

  1. read a descriptor and a 64-bit cookie from the FCGI_LISTENSOCK_DESCRIPTORS descriptor specified in the environment
  2. operate on FastCGI data
  3. close descriptor
  4. write the 64-bit cookie value back to the FCGI_LISTENSOCK_DESCRIPTORS descriptor

Applications implementing this can check whether the FCGI_LISTENSOCK_DESCRIPTORS value is a valid natural number to decide between the existing and the new functionality.
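
The following is a worker-side sketch of that check and of the extended loop. It assumes the descriptor and cookie are passed as SCM_RIGHTS ancillary data plus an 8-byte in-band payload; the helper recv_fd_and_cookie() is hypothetical and error handling is abbreviated, so this is illustrative rather than kcgi's actual implementation.

  /*
   * Hypothetical worker: if FCGI_LISTENSOCK_DESCRIPTORS names a valid
   * descriptor, receive connections (and a 64-bit cookie) over it and
   * echo the cookie back when done; otherwise fall back to the
   * standard accept loop of section 3.2.
   */
  #include <sys/socket.h>
  #include <sys/uio.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  static int
  recv_fd_and_cookie(int ctrl, uint64_t *cookie)
  {
      struct msghdr msg;
      struct iovec iov;
      struct cmsghdr *cmsg;
      union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
      int fd = -1;

      memset(&msg, 0, sizeof(msg));
      iov.iov_base = cookie;              /* the 64-bit cookie travels in-band */
      iov.iov_len = sizeof(*cookie);
      msg.msg_iov = &iov;
      msg.msg_iovlen = 1;
      msg.msg_control = u.buf;            /* the descriptor travels as ancillary data */
      msg.msg_controllen = sizeof(u.buf);
      if (recvmsg(ctrl, &msg, 0) <= 0)
          return -1;
      for (cmsg = CMSG_FIRSTHDR(&msg); cmsg != NULL; cmsg = CMSG_NXTHDR(&msg, cmsg))
          if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS)
              memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
      return fd;
  }

  void
  worker_loop(void)
  {
      const char *ev = getenv("FCGI_LISTENSOCK_DESCRIPTORS");
      char *ep;
      long ctrl;
      uint64_t cookie;
      int fd;

      /* Switch on whether the variable holds a valid descriptor number. */
      if (ev != NULL && *ev != '\0' &&
          (ctrl = strtol(ev, &ep, 10)) >= 0 && *ep == '\0') {
          for (;;) {
              if ((fd = recv_fd_and_cookie((int)ctrl, &cookie)) == -1)
                  break;
              /* ...operate on FastCGI data over fd... */
              close(fd);
              /* Acknowledge completion so the manager can track load. */
              write((int)ctrl, &cookie, sizeof(cookie));
          }
      } else {
          for (;;) {
              if ((fd = accept(STDIN_FILENO, NULL, NULL)) == -1)
                  break;
              /* ...operate on FastCGI data over fd... */
              close(fd);
          }
      }
  }

Because the manager sees every descriptor it hands out and every cookie that comes back, it knows how many connections are outstanding per worker and can start more workers when all of them are busy.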

Drawbacks

There are some minor drawbacks to this approach. First, it is not supported on operating systems that cannot pass file descriptors between processes. Second, it depends upon an environment variable, which may be undesirable.

Last, and most significantly, a manager wishing to use this feature can only do so with workers that have been compiled to support it. In other words, there is no fall-back mechanism for the manager.