Providing trackable download servers seems to be a tricky business. I wanted to reliably log downloads over HTTP and associate them with an authenticated user, where that authentication is carried out over HTTPS (from WordPress), without passing basic-auth credentials over HTTP in the clear, for obvious reasons. Further, I wanted to track the outcome of each download in an easily parseable manner.

As far as I can tell, there is no easy way to do this. So I wrote a WordPress module and a Perl CGI script to do it. It is intended to be used where:

  • A website (such as a WordPress website) wants to offer a download facility and record the downloads made. Let’s call this the ‘source website’.
  • The download should come from a different website (possibly because the source website is HTTPS and large downloads over HTTPS are resource-intensive). Let’s call this the ‘download website’.
  • It is imperative that each download be tracked by authenticated username, by time, and by success or failure.

The strategy used is as follows:

  1. The source website contains a link to a download page, which appears to be on the source website but is in fact a redirect page.
  2. The redirect page redirects to a dynamically constructed URL on the download site. That URL points to a CGI script and carries the following parameters: the file to be downloaded (or the name of a symlink to such a file), the id of the user against whom the download is to be logged, the current UNIX time (seconds since the epoch), and a hash of the above parameters plus a shared secret.
  3. The download script checks the parameters, checks that the time is within a few seconds of the current time, and checks the hash value. If these match, it serves the file, logging start, success and errors. The purpose of the time check is so that the URL can’t realistically be distributed to others. The hash prevents tampering with the parameters.
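The signing and verification steps above can be sketched as follows. This is a minimal illustration in Python, not the actual Perl/PHP code: the parameter names, the SHA-256 construction, and the ten-second skew window are assumptions for the sketch; the real scripts may combine the fields and choose the tolerance differently.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret known to both the source and download sites.
SECRET = "example-shared-secret"

def sign(filename, user_id, timestamp):
    """Hash the parameters together with the shared secret.

    SHA-256 over a joined string is used here for illustration; the
    original script's exact hash construction may differ."""
    payload = f"{filename}|{user_id}|{timestamp}|{SECRET}"
    return hashlib.sha256(payload.encode()).hexdigest()

def build_download_url(base, filename, user_id):
    """Source-site side: build the signed redirect URL."""
    ts = int(time.time())
    h = sign(filename, user_id, ts)
    return f"{base}?file={filename}&user={user_id}&t={ts}&h={h}"

def verify(filename, user_id, timestamp, given_hash, max_skew=10, now=None):
    """Download-site side: reject stale or tampered requests."""
    now = int(time.time()) if now is None else now
    if abs(now - int(timestamp)) > max_skew:
        return False  # URL too old: can't usefully be passed to others
    # Constant-time comparison avoids leaking the hash byte by byte.
    return hmac.compare_digest(sign(filename, user_id, timestamp), given_hash)
```

The time check and the hash work together: the hash stops anyone altering the file name or user id, and the timestamp (itself covered by the hash) means a leaked URL expires almost immediately.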

It seems to work. I’ve put it under the MIT licence. Basically you can do anything you want with it. It’s in git here. Comments welcome.