RSS Filters

Once you get used to RSS aggregators (there are plenty of introductions around), sites without feeds become measurably inconvenient (especially if they’re keen on lying about Last-Modified). These are scrapers I’ve written for a few sites to produce RSS that, at the very least, flags new content. Rather than having to flit aimlessly from site to site, I’ve essentially reduced my interaction with the web to a single giant “reload” button — but in a good way.

Hosted

Live filtered feeds:

(Updated every six hours; guaranteed to break eventually. If you find these useful, please ask the site operators to support a proper feed.)

Alumni

Sites with feeds:

For those playing at home...

You can run these locally, too, in case you want to fix them up, or integrate them into something even more Heath Robinsonesque. Why not set up a web service to convert them from RSS into Atom?

Example

  1. Download, unzip and make sure the necessaries are installed (apt-get install libdatetime-perl libdatetime-timezone-perl libhtml-parser-perl libxml-writer-perl xsltproc tidy curl)
  2. Run a filter: ./fetch-and-filter filter-toothpastefordinner.pl >toothpaste.rss
  3. Examine: less toothpaste.rss (a quick well-formedness check follows this list)
  4. Use the resultant file: portalizer.pl output.html toothpaste.rss
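Between steps 3 and 4, it’s worth confirming the output is well-formed XML. One way (this assumes XML::LibXML, i.e. libxml-libxml-perl, which isn’t in the package list above):

  perl -MXML::LibXML -e 'XML::LibXML->new->parse_file(shift); print "OK\n"' toothpaste.rss

It dies noisily on a malformed feed and prints OK otherwise.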

Detail

There are three main types — plain text-parsing Perl, event-driven Perl built on HTML::Parser, and XSLT applied to tidy’d output (an idea from Bill Humphries via Danny O’Brien) — and all three are about equally likely to break when the site structure changes.
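For a feel of the first kind, here is a minimal sketch of a text-parsing filter. It isn’t one from the tarball; the site, the regex and the feed details are all invented. It reads a page on stdin and writes RSS 2.0 on stdout with XML::Writer:

  #!/usr/bin/perl
  # Sketch of a text-parsing filter: HTML in on stdin, RSS out on stdout.
  # The pattern below is made up; each real filter matches its own site's markup.
  use strict;
  use warnings;
  use XML::Writer;

  my @items;
  while (<STDIN>) {
      # Hypothetical pattern: pull (URL, title) pairs out of the page.
      while (m{<a href="(/archive/[^"]+)">([^<]+)</a>}g) {
          push @items, { link => "http://www.example.com$1", title => $2 };
      }
  }

  my $w = XML::Writer->new(OUTPUT => \*STDOUT, DATA_MODE => 1, DATA_INDENT => 1);
  $w->xmlDecl('UTF-8');
  $w->startTag('rss', version => '2.0');
  $w->startTag('channel');
  $w->dataElement('title', 'Example scraped feed');
  $w->dataElement('link', 'http://www.example.com/');
  $w->dataElement('description', 'New items, flagged by a scraper');
  for my $item (@items) {
      $w->startTag('item');
      $w->dataElement('title', $item->{title});
      $w->dataElement('link', $item->{link});
      $w->endTag('item');
  }
  $w->endTag('channel');
  $w->endTag('rss');
  $w->end();

The HTML::Parser variety is the same idea with proper event callbacks instead of a regex, which tends to survive markup reshuffles slightly better.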

There’s also filter-directory.pl, for generating a feed from the contents of a directory (specifically, a /junk/ folder). Run it locally with:

  filter-directory.pl <local directory> <public URL> [RSS title] [RSS description]

e.g.,

  filter-directory.pl /srv/junk http://www.example.com/ '/junk/' 'The title says it all...' >/srv/junk/.index.rss

and then publish the output. It ignores dotfiles, and only shows the last ten entries. Arguably, the Right Thing is to parse Apache-generated indexes on the client, but I’m not sure I could participate in that particular feat of convolution with a clean conscience (now, if Apache’s default indexer generated RDF...).
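For the curious, the core of that selection logic probably looks something like this (a sketch, not the script itself; argument checking and the RSS wrapper are elided):

  #!/usr/bin/perl
  # Sketch: list a directory, skip dotfiles, keep the ten most recently
  # modified entries. The real filter-directory.pl also writes the RSS.
  use strict;
  use warnings;

  my ($dir, $baseUrl) = @ARGV;

  opendir(my $dh, $dir) or die "Can't open $dir: $!";
  my @entries = grep { !/^\./ } readdir($dh);
  closedir($dh);

  # Newest first, by modification time; keep at most ten.
  my @recent = grep { defined }
      (sort { (stat("$dir/$b"))[9] <=> (stat("$dir/$a"))[9] } @entries)[0 .. 9];

  print "$baseUrl$_\n" for @recent;    # one <item> per file, in the real thing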

My aggregator uses these filters when it finds non-RSS content in its feeds directory, and there’s a script included that uses the same logic to fetch the current content for a filter and process it. You can probably feed it to your aggregator somehow (it fits pretty well into a Cheesegrater-like model).
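For reference, my guess at the shape of that script (the real one in the tarball will differ, and the URL table here is invented):

  #!/usr/bin/perl
  # Guess at fetch-and-filter: look up the source URL for the named filter,
  # fetch the page with curl, and pipe it through the filter to stdout.
  use strict;
  use warnings;

  # Hypothetical mapping; the kit records each filter's source page somehow,
  # but this table is invented for illustration.
  my %source = (
      'filter-toothpastefordinner.pl' => 'http://www.toothpastefordinner.com/',
  );

  my $filter = shift or die "Usage: $0 <filter-script>\n";
  my $url = $source{$filter} or die "No source URL known for $filter\n";

  open(my $page, '-|', 'curl', '-s', $url) or die "Can't run curl: $!";
  open(my $out, '|-', "./$filter") or die "Can't run $filter: $!";
  print $out $_ while <$page>;
  close($page);
  close($out);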

Sites

Filters are included for:

  http://www.alessonislearned.com/archive.php
  http://www.bash.org/?latest
  http://www.boltcity.com/copper/
  http://www.mcsweeneys.net/links/lists/
  http://www.pennandteller.com/03/coolstuff/roadpenn.html

Download

Name            Size (bytes)  Modified
filters.tar.gz  9813          2008-08-10

Licence

Copyright © 2003, 2004 Joseph Walton <joe@kafsemo.org>

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Joseph Walton, 26th January 2004