Month: September 2009

I’m really enjoying watching Dave reboot the real-time RSS Cloud. We ran cloud-enabled feeds on UserLand’s servers for a year or two, but eventually we turned them off because there were a limited number of RSS readers that supported it. (Radio may have been the only one, but I’m not sure.)

At the time I also think there just wasn’t that much demand, relative to the implementation and operational costs. Part of the promise of the <cloud/> was that you could dial down polling, but there weren’t any aggregators that understood the feature, so in practice it didn’t pan out in the early RSS days.
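For readers who haven’t seen it, the <cloud/> element sits inside a feed’s <channel> and tells aggregators where to register to be notified of updates, instead of polling on a schedule. A minimal sketch — the domain, port, path, and procedure name below are placeholders, not a real endpoint:

```xml
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <link>http://example.org/</link>
    <description>A feed with a cloud element.</description>
    <!-- Where subscribers register for update notifications.
         All attribute values here are placeholders. -->
    <cloud domain="rpc.example.org" port="80" path="/RPC2"
           registerProcedure="pleaseNotify" protocol="xml-rpc"/>
  </channel>
</rss>
```

An aggregator that understands this can subscribe once and wait for a ping, which is exactly the “dial down polling” promise.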

The world has changed since then. Incrementally, and over time, to be sure, but where we are now is quite different from where we were then.

First off, we have lots of social networks that at least appear to be real-time, or near-real-time. Twitter is the example Dave keeps going back to with his “loosely coupled 140-character message network”, which he and Jay Rosen have been discussing quite a bit on the Rebooting The News podcast. But the popularity of Facebook has changed things too, with its SMS capabilities, its iPhone app, and its users’ addiction to the Refresh button.

What’s new now is that an hour old is just too old. Users are beginning to expect that everything happens in or near real-time — “On Demand” as it were. And not just on social networks, and not even just on the Internet.

Look at how commonplace DVRs have become. Every cable and satellite service offers them. Once there was just TiVo, and now DVRs are everywhere. I bought a Series 1 TiVo in 2001, and a couple of years later, explaining RSS to a reporter, I likened it to “TiVo for the Web” — not an original turn of phrase, I’m sure, but it captured something fundamental about how I was using both technologies.

We have Video On Demand from cable companies, and Internet video from all the TV networks, YouTube, and a thousand other places. We get push-email over the air on our mobile devices, and it often arrives in our hands before it shows up on our PCs, in spite of their thin grip on the Net, and slow CPUs.

Prediction: Within two years, users will expect nearly everything to come to them, nearly instantly. The <cloud/> is part of that infrastructure, and I’m glad to have been at UserLand when we were building the 0.9x version.

Now what about OPML Reading Lists?

I certainly hope there’s eventually as much excitement around OPML reading lists, since in some ways they’re a more important and more ground-breaking feature. But like the <cloud/>, they’re a bit complex, and I don’t think users are quite there yet.

I actually implemented an OPML reading list tool for Oracle back in early 2004, when we did the Oracle Blogs site. It wasn’t cloud-aware, but once an hour it updated the list of blogs on the Oracle Blogs home page by reading an OPML file (and its transcluded files).
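A reading list is just an OPML file of feed subscriptions, and transclusion is done with an outline that points at another OPML file, which the aggregator fetches and expands in place. A sketch with placeholder URLs — the type="include" convention is from the later OPML 2.0 spec, so take the attribute names as illustrative rather than what the 2004 tool consumed:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>Example reading list</title>
  </head>
  <body>
    <!-- Each rss outline is one feed subscription. -->
    <outline type="rss" text="Scripting News"
             xmlUrl="http://scripting.com/rss.xml"/>
    <!-- A transcluded reading list: points at another OPML file,
         which gets expanded in place. URL is a placeholder. -->
    <outline type="include" text="More blogs"
             url="http://example.org/more.opml"/>
  </body>
</opml>
```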

There were a few reasons we wanted this:

The first was that we implemented Manila’s RSS aggregator for all the Oracle bloggers, and we wanted to be able to automatically update the subscriptions so that the Oracle bloggers could all easily see each other’s stuff.

The second reason was that we thought the service could eventually outgrow a single server, and we wanted to be able to spread it across multiple machines while still including links to all the blogs on the home page. This was easy because Manila knows how to generate an OPML file for all the sites on a server, so it was a simple matter of pointing at that file.

Last, they had some blogs hosted on other services like WordPress and Blogger, and it was easy to pull them in by just adding them to an OPML file. Voilà: those sites all showed up on the home page, and the aggregator read their feeds. No extra work required!
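The expansion step the tool performed can be sketched in a few lines of Python. This is hypothetical code, not the actual Manila implementation (which isn’t public); the `fetch` parameter is an assumption that lets transcluded lists be resolved without hard-wiring network access:

```python
# Sketch of expanding an OPML reading list, including transcluded
# OPML files, into a flat list of feed URLs.
import xml.etree.ElementTree as ET

def expand_reading_list(opml_text, fetch):
    """Return the feed URLs in an OPML reading list.

    `fetch` is a callable taking a URL and returning OPML text,
    so transcluded lists can be resolved (or stubbed in tests)
    without touching the network.
    """
    feeds = []
    root = ET.fromstring(opml_text)
    for outline in root.iter("outline"):
        kind = outline.get("type", "")
        if kind == "rss" and outline.get("xmlUrl"):
            feeds.append(outline.get("xmlUrl"))
        elif kind in ("include", "link") and outline.get("url"):
            # Transclusion: fetch the referenced OPML file and
            # expand it in place, recursively.
            feeds.extend(expand_reading_list(fetch(outline.get("url")), fetch))
    return feeds
```

Run hourly against the master reading list, something like this is all it takes to keep a home-page blogroll and an aggregator’s subscriptions in sync.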
