A good, dumb way to learn from libraries


\"\"

Too bad we can’t put to work the delicious usage data gathered by libraries.

Research libraries may not know as much as click-obsessed Amazon does about how people interact with their books. What they do know, however, reflects the behavior of a community of scholars, and it’s unpolluted by commercial imperatives.

But privacy concerns have forestalled making library usage data available to application developers outside the library staff, and often even to those within it. And the data are the definition of incompatible: Libraries collect them in different formats, at different levels of granularity, and on different time scales, making them hard to work with.

But suppose we could get at them. Library search engines could be tuned to what’s shown itself to be relevant to their communities. Researchers could explore usage patterns over time and across disciplines, schools, geographies, and economies. Libraries could be guided in their acquisitions by what they’ve learned from the behavior of communities around the corner and around the globe.

We can dream, but solving the policy and technical problems intelligently would take many years and probably more will than we can muster. If only there were a big, dumb way to start putting community-usage data to work quickly.

So, here’s an idea: Any library that would like to make its usage data public is encouraged to create a “stackscore” for each item in its collection. A stackscore is a number from 1 to 100 that represents how relevant an item is to the library’s patrons as measured by how they’ve used it.

There are many types of relevant data: Check-ins. Usage broken down by class of patron (faculty? grad student? undergrad?). Renewals. Number of copies in the collection. Whether an item has been put on reserve for a course. Inclusion in a librarian-created guide. Ratings by users on the library’s website. Early call-backs from loans. Citations. Being listed on a syllabus. Being added to a user-created list. Which of these factors should figure into stackscore? It’s the sort of question standards committees argue about until they are red in the face. There is no right answer.

So, stackscore gives up. Each library is left to compute its stackscore using whatever metrics it wants, weighting each factor however it sees fit. In the interest of transparency, libraries should publish their formulae, but no library is beholden to any other’s idea of relevance.
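
To make that concrete, here is a minimal sketch of what one library’s published formula might look like. The metrics, weights, and scaling cap are all hypothetical; the whole point is that each library picks its own:

```python
# A hypothetical stackscore formula. The metrics, weights, and cap are
# illustrative only; each library would choose and publish its own.

def stackscore(checkouts: int, renewals: int, on_reserve: bool,
               syllabus_count: int, cap: float = 500.0) -> int:
    """Collapse assorted usage metrics into a single 1-100 relevance score."""
    raw = (1.0 * checkouts           # each check-out counts once
           + 0.5 * renewals          # renewals count half
           + 25.0 * on_reserve       # course reserve is a strong signal
           + 10.0 * syllabus_count)  # so is appearing on a syllabus
    # Clamp and rescale into 1..100 so every item lands on the same scale.
    return round(1 + 99 * min(raw, cap) / cap)

print(stackscore(checkouts=42, renewals=10, on_reserve=True, syllabus_count=2))
# -> 19 under this formula; another library's formula might well say 60
```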

Here’s how stackscore gets around the two main issues inhibiting the use of library usage data: privacy and interoperability.

Privacy. Libraries are generally paralyzed when it comes to making decisions about what data are safe enough to release publicly. For example, knowing what books are checked out together would be a wonderful way for a library to start seeing how users think items are related. It’s imperfect, of course, for people often check out unrelated works. But statistical patterns could start to emerge.

Libraries generally don’t make that information available, even when thoroughly anonymized. Recent history has taught us that malicious hackers can de-anonymize data that were thought to be innocuous. And then there’s the problem that it’s easy to imagine the feds figuring out whom to interrogate based on the fact that “How to Blow Stuff Up” was recently checked out with “Repair Guide for the 1973 Mustang.” Since it’s not yet clear what constitutes sufficient privacy protection, libraries hesitate to expose any usage data. Better safe than “Sorry, you were hacked.”

Stackscore gets around this by not releasing any usage data directly. Stackscore is a computed number. For example, check-outs might be part of that calculation, but hackers couldn’t work backward from, say, a once-a-year stackscore to conclusions about what was checked out together. Libraries could even include some randomized information in the computation, for a stackscore need not be perfectly accurate to be useful. In fact, “community relevance” is inherently imprecise.
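
A minimal sketch of that randomization, assuming scores are published annually; the jitter width here is an arbitrary choice, not part of the proposal:

```python
import random

def published_stackscore(score: int, jitter: int = 3) -> int:
    """Add a little noise before publishing, so no one can work backward
    from published scores to raw usage counts. A few points of error
    costs little: 'community relevance' is imprecise to begin with."""
    return max(1, min(100, score + random.randint(-jitter, jitter)))
```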

Interoperability. The second problem is a technical one: The data that libraries keep are wildly varied in type and format. Even if libraries were willing to release granular usage data, it would be a nightmare to try to compare this information across libraries. Every time you wanted to run a comparison — what books on, say, hermeneutics are highly relevant to researchers at both the Yale Divinity School and MIT? — the IT folks would have to go to the mat to figure out how to intersect the available data.

With stackscore, it’s easy because it’s dumb. It could not be easier to compare integers between 1 and 100 across institutions. Of course, it’s not quite that simple. For example, to compute cross-library stackscores it would probably help to include distribution curves so that libraries with bell-curve stackscores aren’t overwhelmed by libraries whose “relevancy inflation” gives all their items a score of, say, 50 or above. Providing some guidance on expected curves might be helpful, too.
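
One plausible approach, sketched under the assumption that each library also publishes its full list of scores:

```python
from bisect import bisect_left

def percentile_rank(score: int, published_scores: list[int]) -> float:
    """Re-express a stackscore as a percentile within its own library's
    distribution, so scores from inflating and conservative libraries
    can be compared on an equal footing."""
    ranked = sorted(published_scores)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

# Hypothetical distributions: one library inflates, one doesn't.
inflating = [50, 55, 60, 70, 80, 90, 95, 99]
bell_ish  = [5, 12, 25, 40, 55, 70, 85, 98]
print(percentile_rank(70, inflating))  # 37.5 -- middling there
print(percentile_rank(70, bell_ish))   # 62.5 -- above average here
```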

Even so, would stackscores be so dumb that they’re useless? We have evidence to the contrary.

Over the past four years, the Harvard Library Innovation Lab, which until recently I co-directed, has developed an alternative way of browsing Harvard Library’s 13 million items. StackLife, as it is called, always shows items in a context of other items, which we display as spines on a shelf. We use stackscore to “heat map” each work: the deeper the blue, the higher the stackscore. And we generally sort the shelves in stackscore order. When you’re browsing to explore an unfamiliar area, being guided by a metric of community relevance often turns out to be extraordinarily helpful.
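
The heat-mapping itself is simple enough to sketch; this shows the idea, not StackLife’s actual palette:

```python
def heat_color(stackscore: int) -> str:
    """Map a 1-100 stackscore to a shade of blue for a book spine:
    the higher the score, the deeper the blue. Illustrative only."""
    t = (stackscore - 1) / 99         # normalize to 0.0 .. 1.0
    fade = round(225 * (1 - t))       # red/green fade out as t rises
    return f"rgb({fade},{fade},255)"  # from near-white to pure blue

print(heat_color(5), heat_color(95))  # rgb(216,216,255) rgb(11,11,255)
```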

Of course, it’s not always helpful. And there’s a very real danger to this, for sorting by stackscore turns new eyes to the same old works the community has used, in turn raising the likelihood that those works’ stackscores will increase. This is a self-reinforcing loop to be avoided.

That’s why the interoperability of stackscore is so important. Ultimately, each research community should be learning from every other. We would love for StackLife to show you not only the works that the Harvard community is consulting on a topic, but also the works other research communities are using that Harvard is not.

The need for such a measure is becoming more imperative, for there are currently at least a half-dozen efforts in the United States alone to provide multi-library platforms that gather up and interrelate data of all sorts. These include Zepheira’s LibHub, a proposal from BiblioCommons, work under way by the Digital Public Library of America, OCLC’s massive platform, and the Linked Data for Libraries project initiated by Cornell and underwritten by Mellon, with Stanford Library and the Harvard Lab as partners.

Such platforms could put usage data to wonderful work. But given the policy and technical issues, the only way to get there in a reasonable time requires a stroke of dumbness.

Stackscore.

Author Bio: David Weinberger, a senior researcher at the Berkman Center for Internet and Society at Harvard Law School, is the author, most recently, of Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room (Basic Books, 2012).
