I’m a voracious consumer of written content. Nothing on the web makes me happier than filling my Instapaper bucket with shiny pebbles. It doesn’t matter that I might not end up reading them all; I will try.
Finding the pebbles is a hard thing that has been made astonishingly easy—too easy, for reasons I’ll explore—by the recent rise in aggregators, which span from the fully automatic to the light-touch. The services I use regularly to help navigate content are the following.
Many more services pop up with every other TechCrunch post, each attempting to snare morsels of rarefied attention by solving the same problem: delivering the most compelling content to you on an internet unfathomably overcrowded with stuff.
Here I’ve deliberately ordered the services by how transparent they are about sourcing the content they feel is most relevant to you.
News.me replaced the excellent Summify for me; Summify disappeared once it was bought by Twitter, and now powers Twitter’s new Discovery features. Taking your Twitter friends as its source, News.me attempts to digest the most relevant links and sends a daily email telling you what to read. Each link is suffixed by the faces of the Twitter friends who shared that piece of content.
The new Digg, built by the same team behind News.me, uses metrics that are far outside your control to determine what to show you. It doesn’t take into account who you follow on Twitter or who you are connected to on Facebook; it aggregates general consensus. This is an important distinction from the old Digg. With the old Digg, you invested time in trusting the community to vote up relevant articles; the new Digg assumes you trust the entire internet’s social community to determine what is good (which, as Twitter makes very transparent, is something you shouldn’t assume).
It now turns out that News.me is due to be superseded by Digg: personalised aggregation to be replaced by automation.
The part missing from these services is the why. The act of sharing a link is, in itself, stripped of nuance. Someone might have shared a link because it was the most extraordinarily wonderful piece of writing, or because it was the most despicable, bigoted piece of writing ever committed to a blog. That nuance is lost, and it’s the nuance that I miss.
This loss was addressed in The Filter Bubble by Eli Pariser: an excellent book if you like pop socio-/psych-/techn-ology books. This passage resonated with me:
David Gelernter, a Yale professor and early supercomputing visionary, believes that computers will only serve us well when they can incorporate dream logic. “One of the hardest, most fascinating problems of this cyber-century is how to add ‘drift’ to the net,” he writes, “so that your view sometimes wanders (as your mind wanders when you’re tired) into places you hadn’t planned to go. Touching the machine brings the original topic back. We need help overcoming rationality sometimes, and allowing our thoughts to wander and metamorphose as they do in sleep.”
In an era of computational aggregation, how do we re-introduce human touch?
An exciting new service that helps bring back verbosity and nuance is Reading.am. It offers you content that your friends are reading right now. It doesn’t matter whether the piece was good or bad, or whether it makes you appear cool or dull. While it demands that your friends use the bookmarklet to mark what they are reading, the output is a very comprehensive set of articles likely to interest you, since you’re interested in your friends.
I don’t use Reading.am in the way it has been designed. I read all my content offline in Instapaper, so I can’t easily share what I am reading. And while it has a commenting system baked in, since I rarely read on screen I’m not compelled to enter into the conversation. I consume the content it outputs, yet I’m very aware that the content behind it is being shared by just a handful of people I respect.
Reading.am is a step towards what I consider the future of curation and aggregation. No amount of natural-language analysis or computation can assess deeply personal taste or quality, nor provide context or meaningful links between content (at least not yet), and so we need to build platforms and services that are high-touch. So—perhaps paradoxically—we will increasingly rely on editors to help us navigate the web.
I keep thinking about the ratio 100:9:1. (I can’t remember who coined or referenced it, so if you know, please let me know.) It refers to there being one creator of content and nine people who share/curate/edit for every one hundred consumers. The internet has helped us build platforms for two segments of this model of consumption: readers have Twitter and countless other ways to consume, and of course creators have extraordinary tools at their disposal. Yet the same cannot be said for the nine editors: tech has jumped to explore computerised, automated curation without considering that a more valuable proposition might lie in the piece in the middle—giving the people who are considered sharers of good content a democratic editorial platform.
I don’t know what this platform might look like yet, but I’m excited to explore it further, to bridge the gap between noisy Twitter and the relative calm of a traditional editor. I’d assert that verbose, human, high-touch content discovery is something we should strive towards to help us find fewer but shinier pebbles.