That old hopeful feeling
Rogers Cadenhead wants a gluttonous feedreader/publisher that will let him:
- Scan a headlines page, selecting items that sound interesting. Click Submit to put them all on a queue.
- Skim the queue, which adds item descriptions, and visit links. Select items that should be dropped from the queue, then click Submit to dump them.
- Publish the queue once an hour (in my case, using the MetaWeblog and Movable Type APIs to send items over XML-RPC to a Movable Type weblog).
Sounds good to me. He’s hacked up a prototype with MagpieRSS and Edd’s XML-RPC library to suck things down and spit them back out to an MT weblog.
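The hourly publishing step Rogers describes, sending queued items to a Movable Type weblog over XML-RPC, could be sketched in Python with the standard library's XML-RPC client. The endpoint URL, credentials, and the shape of the queued items here are all illustrative, not anything from his prototype; `metaWeblog.newPost(blogid, username, password, struct, publish)` is the real MetaWeblog API call, though:

```python
import xmlrpc.client

def build_post(item):
    """Map a queued feed item onto the struct metaWeblog.newPost expects."""
    return {
        "title": item["title"],
        "description": item["description"],
        "link": item["link"],
    }

def publish_queue(queue, endpoint, blog_id, user, password):
    """Send every queued item to a weblog that speaks the MetaWeblog API."""
    server = xmlrpc.client.ServerProxy(endpoint)
    for item in queue:
        # metaWeblog.newPost(blogid, username, password, struct, publish)
        server.metaWeblog.newPost(blog_id, user, password,
                                  build_post(item), True)
```

A cron job calling `publish_queue` once an hour would cover the "publish the queue once an hour" step; the skim-and-drop steps before it are just list management.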
Les Orchard wants his homegrown Python aggregator (which uses Mark Pilgrim’s Universal Feed Parser library) to keep track of more things:
- What do you think is more important? Do you value one group of feeds over another? Personally, I want to see every single web comic that appears in my queue, most items from Engadget and Boing Boing, and maybe only a few from some of the firehoses I’ve hooked myself up to. Also, there are some bloggers who post somewhat infrequently, but I don’t want to miss a thing when they do post. I need to be able to group and prioritize manually.
- What do you demonstrate as important? Which feeds’ items receive more of your attention, and within those feeds, what topics and phrases appear most frequently? The machine should be able to make some observations about your history of behavior and give some input into the organization of items presented. Also, it should give me some way to give feedback to its recommendations with a simple and lazy thumbs up and thumbs down.
- Republishing of interesting items to a linkblog is a must. On the flip-side, it would be nice to somehow pull in others’ linkblogs in a more meaningful way than simply watching their feeds. I should be able to triangulate some things and get some recommendations based on mutual links predicting future interest in items. We need to start chasing ant trails unconsciously and automatically.
- Time-limited subscriptions which expire after a set time, or request renewal from the user. Use these to track comment threads which offer RSS feeds. (Like this one.)
- More statistics and health monitoring of subscriptions. How active are your feeds? Which are dead and gone, and which merely on hiatus? Have any moved?
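The last two bullets, time-limited subscriptions and subscription health, both come down to clocks on feeds. A minimal sketch, with thresholds and labels that are my own guesses rather than anything from Les's aggregator:

```python
from datetime import datetime, timedelta

def feed_health(last_entry, now, hiatus_after=timedelta(days=30),
                dead_after=timedelta(days=180)):
    """Classify a subscription by how long it has been silent."""
    silence = now - last_entry
    if silence >= dead_after:
        return "dead"
    if silence >= hiatus_after:
        return "hiatus"
    return "active"

def subscription_expired(subscribed, now, ttl=timedelta(days=14)):
    """Time-limited subscriptions (say, a comment-thread feed) lapse
    after a TTL, at which point the aggregator drops them or asks to renew."""
    return now - subscribed >= ttl
```

Feed moves are the one thing a clock can't catch; those need HTTP 301 handling in the fetcher itself.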
Kellan McCrea wants to make it easy to use Feed on Feeds (which, circularly enough, is an aggregator based on Kellan’s MagpieRSS parsing library) to republish items of interest, so, well, he did.
So, what do all three have in common? Several things, all of which are making me feel better about syndication than I have in quite a while.
Not one of them says anything about having any real interest in the format used to deliver the feeds they consume. Why would they? That’s a library function, parsing feeds and normalizing them into native data structures. It’s the data that’s interesting, not the format that you never see.
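To make the "library function" point concrete, here is the idea reduced to a toy: normalize RSS 2.0 and Atom into one native shape so that nothing downstream ever sees the format. Real parsers (Universal Feed Parser, MagpieRSS) handle many more formats and far messier input than this; the Atom 1.0 namespace is shown, though the drafts current when these tools were written used a different one:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def normalize(feed_xml):
    """Reduce an RSS 2.0 or Atom feed to a list of {title, link} dicts."""
    root = ET.fromstring(feed_xml)
    items = []
    if root.tag == "rss":
        for item in root.iter("item"):
            items.append({"title": item.findtext("title"),
                          "link": item.findtext("link")})
    elif root.tag == ATOM + "feed":
        for entry in root.iter(ATOM + "entry"):
            link = entry.find(ATOM + "link")
            items.append({"title": entry.findtext(ATOM + "title"),
                          "link": link.get("href")
                                  if link is not None else None})
    return items
```

Everything interesting an aggregator does happens on the list of dicts that comes out, not on the XML that went in.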
They are looking at ways to handle large numbers of feeds. I read every word (well, not every word about phones, which don’t interest me, but every other word) in, as of today, 310 feeds, but that’s about my limit, and I’m not doing all that well with it. It certainly doesn’t feel like I could add the two or three hundred more I’d like to keep track of, with at least partial attention.
They are looking at ways to painlessly republish. Reading a lot of feeds is useful, but it’s not nearly as useful as reading a lot and sharing the best of what you read. When I swing through my feeds, I usually open twenty or thirty tabs on things I might post about, leave them open for a while, then either bookmark them all, lose them in a crash, or close them in frustration and settle for a linklog post or two about the things I know I’ll want to find again someday. I want to skim more, read less, and share more of the best with less effort.
Most important of all, they are thinking differently about aggregation. We’ve got more than enough desktop aggregators that make syndicated feeds look more-or-less like email. We’ve had only one aggregator that knew more about republishing than that it’s a menu item passing some text and a link off to another program: Radio Userland (and especially its precursor idea, My Userland On The Desktop, which was even closer to what I’m thinking about, back when Radio was much less a weblog-writing app and more an RSS item re-routing app).
I subscribe to probably fifty feeds because the authors sometimes write about Movable Type. I read about their cats, and their diets, and their hobbies, and what movies they’ve seen, and sometimes I find that interesting, and sometimes I don’t. If you are interested in Movable Type news, would you rather subscribe to all fifty feeds, too, whether or not you already have enough feeds written by people whose writing about cats and diets and movies you like, or would you rather subscribe to my republished feed of just their MT news?
Of course, with all things dealing with obscene numbers of feeds, you’ll inevitably think of Scoble, with his 16 million (and dropping) feeds, and the reblogging blog that got him in trouble. Luckily, I think the reason it got him in trouble is pretty simple, and easy to avoid: he published HTML. It wasn’t especially pleasant HTML to read back when it was however much of an entry the feed gave him, and now that it’s an often-broken little fragment or nothing, it’s really awful. It’s now the sort of poorly-abbreviated crud that he says would make him unsubscribe, if it were someone else’s feed, not his own reblog. Well, here’s the simple answer: publish a feed, not an HTML page. People didn’t like seeing their entries republished where it wasn’t obvious who wrote them, and where it would confuse search engines into thinking the wrong copy was the original? Fine, don’t publish HTML.
None of the syndication formats currently make it as easy as they should to republish (you know, syndicate) an entry while keeping all the original metadata intact, but it’s possible to get close enough, and eventually Bob Wyman will have his way, and an Atom entry will be able to stand on its own, bringing along all the metadata it inherited from its original feed. Once that happens, if I throw in an entry from someone you already subscribe to, it will look to your aggregator exactly like the entry you saw in the original feed, and you won’t have to see it again, but when I throw in an entry from a feed you don’t subscribe to, it can be displayed exactly like it would be if you did subscribe.
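Once a republished Atom entry carries its original identity along (Atom's `atom:id` and `atom:source` elements are the pieces that make this possible), the dedup on the aggregator side is almost trivial. A sketch, assuming entries arrive as dicts with a stable `id`:

```python
def dedupe(entry_stream):
    """Show each entry once, no matter how many feeds (original or
    republished) deliver it, keyed on its stable entry id."""
    seen = set()
    for entry in entry_stream:
        if entry["id"] in seen:
            continue
        seen.add(entry["id"])
        yield entry
```

The whole scheme stands or falls on republishers preserving the original id instead of minting a new one; that is the "stand on its own" part.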
Somebody hold me: I’m actually getting excited about weblog syndication again!
Another thing they have in common is that they are all thinking about the concept of ‘items of interest’. Eventually every user of an aggregator hits the information overload wall where there’s too much information to read and no clear way to figure out what should be read and what should be ignored. This is something I definitely want to tackle in RSS Bandit in the coming months.
I’ve been excited lately about this kind of work, in combination with synthetic/search-generated feeds, resulting in some pretty cool apps for creating all-wheat, no-chaff syndication services. Eyebeam did a great job with this on reblog, as well. See Mike and Jonah’s notes here.
Now all I need is an orange button that says 100% Beef.