By Suzanne Grubb
It’s pretty common to talk about information services as a flow, dropping metaphors about content streams and information pipelines (or fire-hose-strength deluges). But it’s much less common to find librarians grappling with the practicalities of working with a flow-based medium. A recent blog post philosophizing about applying flow-pacing processes to UX strategy inspired me to take a deeper look at my own library, and at how our platforms, users, and policies are starting to evolve toward a flow-based model.
Information Events (Static) vs. Information Performances (Dynamic Flow)
It’s traditional to view information sharing as a static event: We provide articles, citations, search results, compiled facts and headlines. We have gotten very good at tracking usage in downloads, page views, and people served. We’ve developed a wide variety of mechanisms to evaluate the success of a library program in terms of the information delivery event — How many times did we connect a person with a resource? On a scale of 1 to 5, how useful was this resource? Do users find what they are looking for?
But we don’t have a lot of ways to track and evaluate our information services as a dynamic “performance” that occurs across time, with varying levels of intensity. When you analyze the flow of information in your library, it raises questions like these:
- At what rate do users typically digest library content (i.e., the amount of information or resources provided / the amount of time set aside during the day for study and review)?
- How many words-per-minute — or resources-per-minute — does a user skim through while searching? How does this rate vary on the library platform versus a standard Google search?
- How often does a user re-read/re-play/re-visit a selected resource…on the day of discovery? … later that week? … later that year?
- How frequently do users re-run the same search queries? … and for what reasons?
- How many times each day (or hour) does a user refresh the data on a dynamically updated page? How does the rate-of-refresh vary…from morning to afternoon? …by location?
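Questions like these reduce to simple rate and recurrence calculations over an access log. Here is a minimal sketch, assuming a hypothetical log of (user, resource, timestamp) entries — the field names and sample data are illustrative, not from any real system:

```python
from collections import Counter
from datetime import datetime

# Hypothetical access-log entries: (user_id, resource_id, timestamp)
log = [
    ("u1", "r42", datetime(2014, 6, 2, 9, 0)),
    ("u1", "r17", datetime(2014, 6, 2, 9, 2)),
    ("u1", "r42", datetime(2014, 6, 2, 9, 30)),
    ("u2", "r42", datetime(2014, 6, 2, 14, 5)),
]

def resources_per_minute(entries):
    """Skim rate: distinct resources viewed per minute of session time."""
    times = [t for _, _, t in entries]
    span_min = (max(times) - min(times)).total_seconds() / 60
    distinct = len({r for _, r, _ in entries})
    return distinct / span_min if span_min else float(distinct)

def revisit_counts(entries):
    """How often each (user, resource) pair recurs beyond the first view."""
    views = Counter((u, r) for u, r, _ in entries)
    return {pair: n - 1 for pair, n in views.items() if n > 1}

u1_entries = [e for e in log if e[0] == "u1"]
print(resources_per_minute(u1_entries))  # 2 distinct resources over a 30-minute span
print(revisit_counts(log))               # u1 returned to r42 once
```

The same two primitives (rate over a time window, recurrence per user-resource pair) generalize to refresh rates and re-run search queries by swapping what you count.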
We live in a world where it no longer makes sense to think of books, articles, reports, and resources as static objects. While tracking information events is a relatively easy, brute-force way to see trends in our reach and demand for services, tracking information flow provides a nuanced, sophisticated model for how our services support the larger organizational/academic/public ecosystem.
More importantly, it forces us to redefine information service delivery in a more strategic, forward-looking way. Instead of asking the old-fashioned question of, “What can we do to deliver the right information to the right people?” we need to start thinking, “What can we do to help people better integrate this information into the existing rhythms of their work/study/life?”
Information Flow in the Wild
Once you make the mental shift from “static” to “dynamic” information systems, it’s easy to spot evidence of a global shift toward flow-based strategies. Here are a few of my favorite examples of ways information publishers, users, and platforms are incorporating components of time and fluidity into their models:
Content Streams and Scholarly Communication
- Many prominent journals have shifted to “continuous publishing” models, releasing new contributions to the science base upon acceptance (“papers in press”) and electronically publishing outside of monthly or quarterly print issues (“online first”). While the concept has been in existence for over a decade, publishers and librarians are still struggling to resolve several technical issues in managing metadata and records for articles that are part of this flow.
- Interestingly, the OCLC white paper Success Strategies for Electronic Content Discovery and Access, released in October, addresses flow-based publishing challenges with recommendations for change-management record standards and scheduled data synchronization.
- Academics and information workers are still figuring out what it means to move idea exchange “from the Cathedral to the Bazaar” where the accessibility of real time exchange and discourse is changing our timescales for scientific discourse, as well as our measures (e.g., altmetrics alongside citation tracking).
- Automated big data information flows have created a valuable, broadly accessible stream of content-snapshot products that force us to redefine the way we deliver, evaluate, and track information products (e.g., the GDELT project has been a recent obsession of mine, with its ability to generate daily trend reports, daily world-leader sentiment analysis, and on-demand ad hoc reports through Google BigQuery).
- Continuous flow of information isn’t just a creator-to-library phenomenon, but also a library-to-user expectation. Traditionally, digital products were delivered to desktops: now, they are delivered to people, wherever and whenever they are. This goes beyond considerations of responsive design into models for just-in-time library services.
- While push notifications, social sharing, and targeted RSS channels have long let users control how they tap into “streams” of information, we’ve only recently started solving the problem of maintaining citation metadata within the flow of user-directed snippets and remixes. My library has recently started experimenting with adding schema.org and Open Graph data to our own websites to help metadata flow with our content, and we’re keeping an eye on emerging services like figshare which promote the citable sharing of figures and other objects traditionally embedded within larger works or repositories.
- One of the most overt nods to the user-time continuum I’ve seen online is the recent inclusion of a calculated “average reading time” for articles on content platforms like Medium, and it will be interesting to see the impact it has on user engagement.
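Embedding Open Graph data in a page is just a matter of emitting `<meta property="og:…">` tags in the head. A minimal sketch of a tag-rendering helper — the function name and defaults are my own, though `og:title`, `og:url`, `og:type`, and `og:site_name` are standard Open Graph properties:

```python
from html import escape

def open_graph_tags(title, url, og_type="article", site_name=None):
    """Render basic Open Graph <meta> tags so citation metadata
    can travel with shared snippets of our content."""
    props = {"og:title": title, "og:url": url, "og:type": og_type}
    if site_name:
        props["og:site_name"] = site_name
    return "\n".join(
        f'<meta property="{p}" content="{escape(v, quote=True)}" />'
        for p, v in props.items()
    )

print(open_graph_tags(
    "Information Flow in the Wild",
    "https://example.org/flow",
    site_name="Example Library",
))
```

Schema.org markup works the same way (as `itemprop` attributes or JSON-LD); the point is that the metadata rides along in the page itself, so a share or remix can carry its citation with it.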
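Medium hasn’t published its exact formula, but a reading-time estimate is typically just word count divided by an assumed skim rate (often cited around 200–275 words per minute). A minimal sketch under that assumption:

```python
import math

WORDS_PER_MINUTE = 200  # assumed skim rate; platforms' exact formulas vary

def reading_time_minutes(text, wpm=WORDS_PER_MINUTE):
    """Estimate reading time: word count / skim rate, rounded up,
    with a floor of one minute for very short pieces."""
    words = len(text.split())
    return max(1, math.ceil(words / wpm))

article = "word " * 950  # stand-in for a ~950-word article
print(reading_time_minutes(article))  # 5
```

Even a crude estimate like this gives users a time-based cue for deciding when, not just whether, to engage with a piece.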
Platforms and Tracking
While we are still largely lacking in metrics and vocabulary to talk about information service delivery in terms of time-based rhythms, the “real time” reporting feature in Google Analytics can be a great help in wrapping your brain around how to start visualizing rates of information flow.
- For my library, I’ve reworked a few of our content analysis reports: Where I previously monitored month-to-month changes in user interest across categories and keywords, I’ve now also started monitoring trends such as the rate of change of user interests across time and categories. Data collection is still in its early stages, but I’m excited to see what this analysis reveals, and whether I can use this information to predict future content needs more strategically.
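The “rate of change of user interests” analysis above amounts to taking first differences of category interest over time. A minimal sketch with hypothetical monthly pageview shares — the category names and numbers are invented for illustration:

```python
# Monthly pageview share by content category (hypothetical data)
monthly_share = {
    "2014-04": {"clinical-trials": 0.30, "regulatory": 0.20},
    "2014-05": {"clinical-trials": 0.35, "regulatory": 0.18},
    "2014-06": {"clinical-trials": 0.42, "regulatory": 0.15},
}

def interest_velocity(series):
    """Month-over-month change per category: a crude first
    difference that flags accelerating or fading topics."""
    months = sorted(series)
    velocity = {}
    for prev, curr in zip(months, months[1:]):
        for cat, share in series[curr].items():
            delta = share - series[prev].get(cat, 0.0)
            velocity.setdefault(cat, []).append(round(delta, 4))
    return velocity

print(interest_velocity(monthly_share))
# clinical-trials is accelerating; regulatory is fading
```

A rising sequence of deltas is an early signal to commission or surface content in that category before demand peaks, which is the predictive use I’m hoping the data will support.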
Of course, that’s just scratching the surface of what’s possible and what’s out there. If anyone else is experimenting with building flow-pacing into their library services, monitoring user information rhythms, or deploying other tools and protocols to evolve into a continuous-information universe, be sure to drop a note or a link in the comments.
Suzanne Grubb is a digital librarian/instructional designer and all-purpose info-geek, currently building a Clinical Research Education Library for a DC-based association.