Lactate
A simple yet featureful static file server for Node.js
Lactate is a static file handler for Node. It integrates with the basic Node HTTP server, Express, and theoretically any framework on top of Node that adheres to the (req, res) pattern. It was designed with performance in mind, and it outperforms most other solutions out of the box. Lactate uses streams wherever possible, gzips when it can, handles client-side expiration, caches files in memory, and watches files for updates. To learn more about Lactate's features, see the features section.
Why
Lactate began its existence in a pet project whose requirements were not met by the available options. After much research it was decided that your author would feel more comfortable knowing precisely what's under the hood. If that is your interest, go for it. Otherwise, notify Weltschmerz of any feature requests or issues and he will rapidly fix any oversight to save himself further embarrassment.
If the name Lactate strikes your curiosity, I have only to say that it is symbolic for the act of distributing resources. These guys like resources.
Features
- In-memory caching with options for expiration, max items and max size
- Client-side caching using Last-Modified and Cache-Control response headers
- Human-readable max-age settings, e.g. one day or ten years
- Automatic gzipping
- Automatic CSS / JS minification
- Asset bundling (combining and minifying scripts / styles)
- Custom 404 pages and functions
- Custom response headers
- Connect middleware export
- Drop-in replacement for Express.static
- Default error pages for Not Found, Forbidden and Internal Error responses
- on(status) response code listeners
- Colored log output
- Global executable with full options, using cluster for utilization of all CPU cores
A brief comparison of Lactate's features to those of popular solutions.
| Module | Hits fs per request | Streams | Client caching | In-memory caching | Watch files | Gzip | Minify |
|---|---|---|---|---|---|---|---|
| Lactate | no [1] | yes | yes | yes | yes | yes [2] | yes |
| node-static | yes | no [3] | yes | yes [4] | no | no | no |
| connect | yes | yes | yes | no [5] | no | no | no |
| ecstatic | yes | yes | yes | no | no | no | no |
Notes
- [1] Hits fs per request means that the filesystem is hit for each new client requesting a file. All of the solutions support client-side caching, so multiple requests from the same client do not usually cause another hit. But n unique client requests mean n filesystem hits, usually in the form of fs.stat() or fs.exists(), unless you are using Lactate.
- [2] Of the tested modules, Lactate is the only one that supports gzip response encoding, despite gzip's widespread client support.
- [3] Though node-static uses fs.createReadStream(), it does not actually write any data until the file has been fully read, so it behaves identically to a callback-oriented design.
- [4] node-static caches files in memory so that they don't need to be fully read for every request; however, each request still calls fs.stat() to compare mtimes and decide whether to send a 304 Not Modified response. Lactate avoids this issue by watching files for modifications and updating the in-memory cache: if a file exists in Lactate's cache, it is fresh.
- [5] connect et al. do not cache because they assume you will put a separate caching layer such as Varnish in front. As such, connect does not concern itself with anything beyond streaming files. This means minimal complexity, which might be just fine for you. See the section on caveats for details.
- Note that connect is analogous to Express.static.
Performance
Despite its features, Lactate does not suffer a performance hit. In fact it is faster than popular solutions. The following plot was generated using the R statistical programming environment. It represents three runs of Apache Bench.
ab -n 10000 -c 100 http://localhost:8080/jquery.min.js
Notes
- Latest version of each module as of Thu Oct 25 15:26:04 CDT 2012
- Requested file is jquery.min.js
- File size is ~96kb
- Node version is v0.8.10
- ab is run with 10,000 requests, 100 concurrent
- These tests utilize one CPU core of an Intel Core i5 (Sandy Bridge) @ 2.3 GHz
- The disparity between connect and node-static exists because connect does not cache files. This behavior is problematic for benchmarking, but in practice you might put a higher-performing caching layer between connect et al. and the client. The point here is that if you are expecting performance out of a pure-Node static file server, connect (AKA Express.static) is not a good choice.
Lactate is published via NPM under the name lactate.
For help installing Node, see the Installation Guide on the Node wiki. For more information about using NPM, see npmjs.org
Local NPM install
npm install lactate
This will place Lactate in your node_modules directory for programmatic access. You can now require('lactate') from within any node program.
Global NPM install
sudo npm install -g lactate
Installing Lactate globally will provide you with the lactate command, which will run a clustered Lactate server in the current working directory, utilizing all CPU cores with the cluster module. Try lactate --help for a list of the available options.
Lactate may be used with a plain node HTTP server, with Express as a drop-in replacement for Express.static, or with anything that looks like Express. There are also adaptors for the node-static API. Simply, there's no wrong way to Lactate a file.
See the Github page for more documentation.
Creating a Lactate server
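A minimal sketch with the plain Node HTTP server. The dir() factory and the handler's serve(req, res) method are assumptions about the API (the directory handler object itself is referenced in the caveats section via dir.bundle); check the Github page for the exact method names.

```js
var http    = require('http');
var lactate = require('lactate');

// Assumed API: dir() returns a handler bound to the given directory.
var files = lactate.dir('public');

http.createServer(function(req, res) {
  // Assumed API: serve(req, res) streams the requested file.
  files.serve(req, res);
}).listen(8080);
```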
Creating a directory handler
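A sketch of creating a handler bound to a single directory, passing a couple of the documented options at creation time. The dir() factory and the option spellings are assumptions; the features themselves (human-readable max-age, gzip) come from the list above.

```js
var lactate = require('lactate');

// dir() and the option names are assumed spellings; see the Github page.
var assets = lactate.dir('assets', {
  max_age: 'one day', // human-readable client cache lifetime
  gzip: true          // gzip responses when the client accepts it
});
```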
Using directory middleware
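The features list mentions a Connect middleware export. A sketch, assuming the handler exposes a toMiddleware() adaptor:

```js
var connect = require('connect');
var lactate = require('lactate');

var app = connect();

// Assumed API: toMiddleware() adapts the handler to (req, res, next).
app.use(lactate.dir('public').toMiddleware());

app.listen(8080);
```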
Integrating with Express
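The same middleware presumably plugs straight into Express alongside your routes; again, toMiddleware() is an assumed name:

```js
var express = require('express');
var lactate = require('lactate');

var app = express();

// Serve static assets through Lactate, everything else through routes.
app.use(lactate.dir('public').toMiddleware());

app.get('/', function(req, res) {
  res.send('hello');
});

app.listen(8080);
```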
Using Express.static API
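Since Lactate is described as a drop-in replacement for Express.static, usage should mirror it; the static() method name below is an assumption:

```js
var express = require('express');
var lactate = require('lactate');

var app = express();

// In place of app.use(express.static(__dirname + '/public')):
app.use(lactate.static(__dirname + '/public')); // assumed method name

app.listen(8080);
```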
Using node-static API
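For the node-static adaptor, a sketch assuming a Server-style constructor that mirrors node-static's new Server(root) / serve(req, res) pattern:

```js
var http    = require('http');
var lactate = require('lactate');

// Assumed adaptor mirroring node-static's API.
var server = new lactate.Server('./public');

http.createServer(function(req, res) {
  server.serve(req, res);
}).listen(8080);
```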
Serving individual files
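A sketch of serving one file directly from a request handler; the file() method and its signature are assumptions:

```js
var http    = require('http');
var lactate = require('lactate');

http.createServer(function(req, res) {
  // Assumed API: serve a single file regardless of the request path.
  lactate.file('pages/landing.html', req, res);
}).listen(8080);
```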
Setting options
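Options can presumably be passed at creation time or set afterwards. Both the option spellings and the set() method are illustrative assumptions that mirror the documented features:

```js
var lactate = require('lactate');

var files = lactate.dir('public', {
  max_age: 'ten years', // human-readable client cache lifetime
  minify:  true,        // minify CSS / JS as it is requested
  gzip:    true         // gzip when the request accepts it
});

// Assumed setter-style alternative.
files.set('max_age', 'one day');
```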
Bundling assets
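The caveats section below names dir.bundle(type, name, callback) for pre-bundling, so only the argument values here are illustrative:

```js
var lactate = require('lactate');
var files   = lactate.dir('public'); // dir() itself is an assumed factory

// Combine and minify every script into one bundle named 'common.js'.
// Signature taken from the caveats section: bundle(type, name, callback).
files.bundle('js', 'common.js', function(err) {
  if (err) throw err;
  console.log('bundle written');
});
```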
Using custom 404 pages
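The features list mentions custom 404 pages and functions. A sketch of both forms, with an assumed option name:

```js
var lactate = require('lactate');

// A static 404 page (assumed option name).
var files = lactate.dir('public', { not_found: 'pages/404.html' });

// Or a function, for dynamic 404 handling.
var assets = lactate.dir('assets', {
  not_found: function(req, res) {
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Nothing here: ' + req.url);
  }
});
```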
Using custom response headers
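A sketch of attaching custom headers to every response, with an assumed option name:

```js
var lactate = require('lactate');

// Assumed option name `headers`: extra headers sent with every response.
var files = lactate.dir('public', {
  headers: {
    'X-Powered-By': 'Lactate'
  }
});
```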
Setting cache options
Lactate's in-memory cache has the following configurable limits (a sketch of setting them follows this list):
- max size: the maximum size, in megabytes, of file data to store in memory. Adding items never fails; older items are pruned to make room instead.
- max keys: the maximum number of file paths to keep track of.
- expiration: a duration in seconds to keep a file in memory. When a file is touched, the clock is reset.
- segmentation threshold: this option refers to file size in KB. Once a cached file exceeds this threshold, Lactate uses a separate strategy for serving it: multiple writes wrapped in process.nextTick() instead of a single call to .write(). This is a safeguard against slow clients and large files. The relevant behavior can be found in the source.
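A sketch of supplying these limits when creating a handler; the dir() factory and the option spellings are assumptions that simply mirror the names above:

```js
var lactate = require('lactate');

// Assumed spellings for the cache limits described above.
var files = lactate.dir('public', {
  max_size: 50,           // megabytes of file data to keep in memory
  max_keys: 1000,         // number of file paths to track
  expire: 60 * 60,        // seconds before an untouched file is evicted
  segment_threshold: 200  // KB; larger cached files are served in segments
});
```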
Caveats
In no particular order:
- Lactate delivers reasonably fast performance with cached files. With caching disabled, Lactate can be slower than some modules that have no caching ability at all. Assuming that you have a caching mechanism such as Varnish in front, which is the assumption that connect makes by default, you shouldn't suffer much from disabled caching with either Lactate or connect. The point here is that you have nothing to lose, but something to gain, as all of Lactate's extra abilities can be disabled.
- Minification is achieved with child processes piped to either YUICompressor or UglifyJS. It's quite slow and resource-intensive, so it is advisable to pre-bundle your styles and scripts using dir.bundle(type, name, callback). Another option is to enable minify, which will automatically minify scripts and styles as they are requested. With caching enabled, only the first request takes ages; subsequent requests simply access the already-minified script or style. Minification and gzipping are not mutually exclusive; you may use either or both. Of course, you needn't use Lactate at all for minification or bundling.
- Lactate comprehends gzip, but doesn't care for deflate compression, because of gzip's long history of widespread support; pretty much anything will accept gzip encoding. There is no risk in enabling the gzip option, since Lactate only gzips when the request carries an appropriate Accept-Encoding header. Lactate does not re-gzip on every request; it stores the fully processed (gzipped and, if enabled, minified) files in its cache.