I just read an interesting article in the latest issue of Reference and User Services Quarterly. At first it seemed like a discussion about surveys, but on closer reading it is more about users' perception of information literacy, and whether they learn how to evaluate information while at the reference desk. I recommend reading this short article.
Jonathan Miller, Quick and Easy Reference Evaluation: Gathering Users' and Providers' Perspectives, 47 Reference & User Services Quarterly 218 (Spring 2008). The article describes a simple survey to determine whether users and reference providers share the same perceptions about the outcome of a reference interview.
The questions were intended to determine whether:
1. The user gets the information they need.
2. The user learns something about how to find information.
3. The user learns something about how to evaluate information.
4. The user is satisfied with the interaction.
Here is a quote from the article's results section:
The purpose of this column is not to report on a particular research study but to introduce a method for evaluating reference service that captures both the user's and the provider's perspective. If the reader is interested in a full report on our use of the instrument in Pittsburgh academic libraries, the PowerPoint presentation to the Reference Research Forum at the 2006 ALA Annual Conference can be found at http://
This questionnaire measures user and provider perceptions of the success of individual reference transactions as measured by whether users received the information they needed, learned something about how to find information, learned how to evaluate information, and whether both parties were satisfied with the reference interaction. With enough responses it is possible to drill down into the results to explore distinctions between the perceptions of various groups of users or providers (in our test, for instance, the mean responses for those transactions involving a librarian were consistently higher than those for staff) and also, with appropriate statistical software, to be able to explore the relationships between the data. For instance, do undergraduate students perceive that they are learning how to evaluate information during the reference interaction? Or does the provider's perception of a successful reference interaction correspond to the user's perception of the same interaction?
One example of what can be learned from the questionnaire concerns the role of information evaluation in reference. The lowest mean results we received in our test were for questions three and seven, concerning the outcome "the user learns something about how to evaluate information," and that was also the question with the lowest number of paired responses. This result reflects the fact that we got the highest number of "not applicable" responses to this question (and these came largely from providers, not users). We can interpret this result to mean that our users perceived that they were learning how to evaluate information during reference transactions, even when our reference providers did not think that any teaching, learning, or modeling of information evaluation was taking place. Or if such instruction was taking place, providers did not perceive that users were learning how to evaluate information. Evaluation is a major component of information literacy, and as such is a major part of what we do at the academic reference desk. In this case our users seemed to recognize that, but our providers did not. Perhaps the lower mean was a result of that lack of conscious attention to the issue of evaluation by providers. This result was an opportunity to discuss the role of information evaluation during the reference interaction and to find ways to improve our ability to explicitly raise issues of evaluation during reference.
Another example of how the results can improve our understanding of reference service was that the providers' perceptions were lower than those of the users by about one half point on the five-point scale. The differences were not immense, but the gaps were consistent and statistically significant. This could be a reflection of the different perspective of the learner and the teacher, or of either over-confidence on the part of the user or under-confidence (perhaps realism?) on the part of the provider, or both. Whatever the case, it provided a good opportunity to raise morale. Reference providers can be pretty hard on themselves. The nature of the job—dealing every day with struggling users and repeat questions—can give us the impression that we are making no progress. Showing how satisfied our users were and that we consistently rated ourselves lower was a positive lesson for our reference providers.
One final example from our test: the users' mean responses for transactions involving a librarian were consistently higher than for those involving staff. Again, from a management perspective, this is a good thing to know. Librarians are expensive. This result may help us justify why we have them on the reference desk instead of less formally qualified staff. Our users got more help and were more satisfied with the results in transactions involving librarians rather than staff.
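Out of curiosity, here is a minimal sketch of the kind of analysis the article gestures at when it mentions "appropriate statistical software." It uses Python with pandas and scipy (my choice, not the authors'), and the column names and sample ratings are made up for illustration; this is not the authors' instrument or their actual data.

    # A toy dataset: each row is one reference transaction, with the user's
    # and the provider's rating (1-5) of the same outcome, plus who staffed
    # the desk. All names and values here are illustrative assumptions.
    import pandas as pd
    from scipy import stats

    df = pd.DataFrame({
        "user_rating":     [5, 4, 5, 3, 4, 5, 4, 5, 3, 4],
        "provider_rating": [4, 4, 4, 3, 3, 4, 4, 4, 3, 3],
        "provider_type":   ["librarian", "staff", "librarian", "staff",
                            "librarian", "librarian", "staff", "librarian",
                            "staff", "librarian"],
    })

    # Mean gap between user and provider perceptions (the article reports
    # roughly half a point on a five-point scale).
    gap = (df["user_rating"] - df["provider_rating"]).mean()
    print(f"mean user-provider gap: {gap:.2f}")

    # Because both ratings describe the same transaction, a paired t-test
    # asks whether that consistent gap is statistically significant.
    t, p = stats.ttest_rel(df["user_rating"], df["provider_rating"])
    print(f"paired t = {t:.2f}, p = {p:.4f}")

    # Drilling down: mean user ratings for librarian- vs. staff-handled
    # transactions, the comparison the article uses to justify librarians
    # at the desk.
    print(df.groupby("provider_type")["user_rating"].mean())

The pairing is the whole point of the instrument: because each transaction yields both a user and a provider response, you can test the gap between perceptions directly rather than comparing two unrelated pools of survey answers.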