Thoughts On Automated Recommendation Services for Libraries – A Correction

Re-reading my post from last month, Thoughts On Automated Recommendation Services for Libraries, I realize that I was somewhat wrong about the role of locality in automated recommendation systems. Amazon and Netflix can (and I think they do) integrate user location data into their recommendation algorithms. They can skew results based on strong trends that appear among proximate user groups.
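To make the idea concrete, here's a minimal sketch of how a recommender might fold local trends into its rankings. Everything here is hypothetical: the function name, the blending weight, and the data are my own illustration, not anything Amazon, Netflix, or BiblioCommons actually does. The idea is simply to blend a global score with an item's popularity among nearby users.

```python
from collections import Counter

def locality_boosted_scores(base_scores, user_region, regional_checkouts, boost=0.3):
    """Blend global recommendation scores with a region's local popularity.

    base_scores: {title: global score in [0, 1]}
    regional_checkouts: {region: list of titles checked out in that region}
    boost: how much weight local popularity gets (0 = ignore locality)
    """
    counts = Counter(regional_checkouts.get(user_region, []))
    total = sum(counts.values()) or 1  # avoid dividing by zero for unknown regions
    return {
        title: (1 - boost) * score + boost * (counts[title] / total)
        for title, score in base_scores.items()
    }

# Hypothetical data: Title C is globally weak but locally popular.
base = {"Title A": 0.9, "Title B": 0.5, "Title C": 0.4}
checkouts = {"north-branch": ["Title C", "Title C", "Title B"]}

ranked = sorted(
    locality_boosted_scores(base, "north-branch", checkouts).items(),
    key=lambda kv: kv[1],
    reverse=True,
)
```

With these made-up numbers, the local checkout counts lift Title C above Title B in the final ranking, even though its global score is lower. That's the whole trick: the system notices that a local trend exists without having any idea what it means.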

I was also somewhat wrong about how recommendations work in BiblioCommons: ratings and reviews for individual titles, along with user-created reading lists, are aggregated from multiple libraries. Reviews from local library staff are prioritized over others, but I don’t know whether reading lists created by local library users are prioritized. Regardless, these are individually curated pieces of content; these sections aren’t automated the way Netflix’s recommendations are. This content is attached to an individual item in the catalog and doesn’t generate the lists of recommendations one typically expects from Readers’ Advisory services.

The “Similar Titles” content in the BiblioCommons sidebar is what I think of when I look for reading recommendations, and it is also the most directly analogous to Netflix and Amazon. However, this content isn’t generated by BiblioCommons at all: these titles come from third-party services like NoveList.

So locality can be a meaningful factor in automated recommendation systems—such systems are smart enough to recognize that local trends are important, even if they aren’t yet intelligent enough to know what local trends mean.

But libraries still don’t have enough data to make such algorithms work. We still rely on curation and the personal touch to create real value for our patrons.

The Unnatural Phenomenon of Using a Mouse

My first real encounter with a usability challenge in a digital environment happened when I worked as a file clerk in a medical records office. We converted our office from paper records to electronic and had to learn a whole new computer-based record keeping system. One of the women who worked in the file room had never used a computer before. It was my job to teach her this new electronic system.

I’ve told this story before…

The biggest challenge for me was that I failed to comprehend the depth and breadth of my coworker’s digital illiteracy.

She didn’t know how to use a mouse. How do you teach someone to use a data system that functions through a GUI when they don’t know how to use a mouse?

Before I could even begin to teach her the new record keeping system, I had to teach her how to use computer peripherals.


Web Design Can’t Fix Digital Illiteracy

For some time now, I’ve argued that it should be possible to create digital interfaces that are intuitive enough for anyone to pick up and use successfully regardless of previous experience or knowledge.

As an ideal, I think this is a good one.

In practice, of course, it’s a lot more complicated.

I’ve had a couple of conversations recently that brought home to me an obvious fact about designing digital environments:

Usability isn’t just a matter of design. It’s also a matter of digital literacy. But here’s the thing—design can’t make up for a user’s lack of digital literacy.

By itself, web design is an insufficient tool for teaching digital literacy. No matter how easy a website or interface is to use, no matter how intuitively its information architecture is constructed, a user who has no experience with digital technology and doesn’t feel comfortable interacting with a digital environment won’t know what to do. They’re going to be lost.