Document Type

Conference Proceeding

Original Publication Date

2023

Journal/Book/Conference Title

LibPMC - The International Conference on Libraries and Performance Measurement

Comments

This paper was presented at the 2023 LibPMC (The International Conference on Libraries and Performance Measurement).

Date of Submission

October 2023

Abstract

Why did you do this activity, project, or research?
Patron interaction statistics tell compelling stories. However, rich interactions are often reduced to a single number. The READ (Reference Effort Assessment Data) Scale adds a welcome qualitative dimension. Academic libraries embraced the scale to measure the effort and knowledge provided by those staffing information and research service points. The 6-point scale is relatively simple and easy to implement, with interactions requiring no specialized knowledge or effort at one end and those requiring the most at the other. Developed by Gerlich and Berard in 2003, it is still widely used in the United States. In fact, VCU Libraries adopted it over a decade later as part of a move to a new reporting system that incorporated it. Our experience was positive, but it became clear that the application of ratings was inconsistent and that the standard guidance and examples did not always resonate with local and current practice. One major concern was that frontline employees felt the nature of their patron interactions was not reflected in the original READ Scale's examples and therefore frequently rated their patron interactions as less complex than they actually were.

How did you do this?
A workshop with a series of interactive activities was held to bring multiple departments together to calibrate everyone's understanding of the rating levels and collaboratively develop updated READ Scale guidance and examples that reflected the reality of their patron interactions. We began with a pre-test in which employees were asked to anonymously score 10 scenarios using the READ Scale. In the workshop, we divided attendees into small groups and gave each group new scenarios to work on. Group members talked through how they would individually score each scenario and then reached a consensus READ score. This was followed by a full-group reflective discussion, a post-test with the original 10 scenarios, and elicitation of example patron interactions for each READ Scale score. These examples informed the development of our localized READ Scale.

What did you discover? What are the limitations?
The resulting READ Scale documentation and examples provided much-needed clarification and allowed us to gather more accurate statistics on how we engaged with our users and ultimately make better data-informed decisions. The process itself was also transformative as three departments with different roles learned from each other and worked together to improve an important but often frustrating assessment procedure.

How have findings been applied? What lessons did you learn? What is the potential value to the wider performance measurement/assessment/user experience library community?
The updated READ Scale has served us well for over three years. In addition to increasing confidence in our statistics, the experience gave us a common language to continue the discussion as new questions and challenges arose. The original READ Scale offers great promise, but it has become dated since its creation in 2003 and may not reflect local circumstances. Other libraries that currently use the READ Scale, or that are considering its adoption, may find our experience helpful in creating more relevant guidance and examples.

Creative Commons License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Is Part Of

VCU Libraries Faculty and Staff Publications
