LibQUAL+™ Canada 2013
a national survey project
FREQUENTLY ASKED QUESTIONS
Completing the Survey
LibQUAL+ Survey Overview
The survey typically takes from 10 to 15 minutes to complete.
Do not skip any questions in the LibQUAL+ survey! If you do not wish to answer a question or feel a question does not apply to you, select NA (not applicable). Surveys whose core questions are not completely filled out are not counted in the aggregate scores.
There is no compensation per se for completing the survey, though an incentive prize will be offered as thanks to participants. At the end of the web-based survey, respondents may elect to include an email address, which will enter them in a drawing for a [prize] valued at [$]. The award winner will be announced in [date], via [announcement mechanism].
4. What Web browsers are supported for the survey?
The LibQUAL+ survey has been developed to work in many settings, including public libraries and community colleges. It does not rely on erratically supported browser features such as Java or cookies. Any reasonably current browser should work.
Respondents not able to complete an online questionnaire may obtain a paper copy of the survey by:
Yes, the survey is compatible with the JAWS screen reader software.
The survey is administered through the Association of Research Libraries and Texas A&M University, and the survey and data are housed on secure Texas A&M servers.
Yes, you can get another copy of the web link. Contact [local LibQUAL Project Coordinator].
If you have trouble opening the URL from within your email message, you can copy the URL and paste it into your Web browser. If you still cannot access the survey, contact [local LibQUAL Project Coordinator].
Only those returns from the random sample will be forwarded to LibQUAL+™ for multi-institutional analysis and comparison. However, the University Library continuously seeks feedback from library users to help establish priorities and improve collections and services. If you are not part of the present random sample but would like to complete a survey, you may download the print version from the library's web site: [local], or contact [local LibQUAL Project Coordinator]. The University Library welcomes completed surveys from other interested users, and will review these returns along with those gathered from the random sample.
The survey examines a variety of dimensions of library services, each represented by a number of questions. Repetition or redundancy in questions allows the survey designers to analyze the validity of each service quality dimension through statistical methods.
Due to security and confidentiality features, everyone surveyed will receive reminders, even those who have already responded. When submitted, survey responses and identifying information are immediately separated, so we have no way of knowing who has already responded. Reminders, therefore, are distributed to everyone in the survey group.
Reminders are also sent because research indicates that the single highest predictor of response rates in web-based surveys is the number of contacts made, including reminders. (See: Cook, Heath, and Thompson, "A meta-analysis of response rates in web- or internet-based surveys", Educational and Psychological Measurement, v. 60, 2000, p.821-836.)
Because this is a multi-institution survey, discipline categories have been standardized for ease of comparison. This will assist with future benchmarking activities. If you are in an interdisciplinary field or in doubt as to what discipline you should select from the drop-down list on the survey, select "Interdisciplinary" or "Other". Staff not engaged in discipline-based research may choose "Other".
Service quality has always been a focus of libraries; LibQUAL+ is intended to provide a measure of library service quality across multiple academic and research libraries. The current LibQUAL+ instrument measures library users’ perceptions of their libraries’ service quality and identifies gaps between minimum, desired, and perceived levels of service.
LibQUAL+ is a suite of services that libraries use to solicit, track, understand, and act upon users’ opinions of service quality. These services are offered to the library community by the Association of Research Libraries (ARL). The program’s centerpiece is a rigorously tested Web-based survey bundled with training that helps libraries assess and improve library services, change organizational culture, and market the library. As of spring 2006, more than 400 institutions have participated in LibQUAL+, including colleges and universities, community colleges, health sciences libraries, law libraries, and public libraries—some through various consortia, others as independent participants. LibQUAL+ has also expanded internationally, with participating institutions in Canada, the UK, and Europe. The growing community of participants and its extensive dataset are rich resources for improving library services.
The instrument addresses three service quality dimensions that have been found to be valid in previous assessments of library services: Affect of Service, Library as Place, and Information Control. Each question has three parts that ask respondents to indicate (1) the minimum service level they will accept, (2) the desired service level they expect, and (3) the perceived level of service currently provided. This design permits analysis of the gaps between minimum, desired, and perceived levels of service.
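The gap analysis described above reduces to simple arithmetic on each respondent's three ratings for an item. The following sketch is illustrative only; the function name is hypothetical and not part of the official LibQUAL+ tooling, and it assumes ratings on the survey's numeric scale:

```python
def gap_scores(minimum: int, desired: int, perceived: int) -> dict:
    """Compute the two gap measures for a single survey item.

    - adequacy gap: perceived minus minimum (negative means the service
      falls below the minimum level the respondent will accept)
    - superiority gap: perceived minus desired (negative means the service
      falls short of the respondent's desired level)
    """
    return {
        "adequacy_gap": perceived - minimum,
        "superiority_gap": perceived - desired,
    }

# Example: a respondent rates minimum 5, desired 8, perceived 6 on one item.
scores = gap_scores(minimum=5, desired=8, perceived=6)
print(scores)  # {'adequacy_gap': 1, 'superiority_gap': -2}
```

In aggregate reporting, such per-item gaps are averaged across respondents, which is why surveys with incomplete core questions are excluded from the aggregate scores.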
As individual libraries receive information about areas needing improvement, this project will allow libraries to compare their service quality with other peer institutions, to develop benchmarks, and to surface best practices across institutions. By using the LibQUAL+ instrument and initiating action based on the results of this survey, the Libraries can be more responsive to users’ needs and provide services that are better aligned to users’ expectations.
4. How and when is the [institution's name] survey being conducted?
A random sample of email addresses has been drawn from the Library’s patron database, representing [number] undergraduate students, [number] graduate students, [number] staff and [number] faculty members. On [date], these individuals will receive a pre-survey email message from [name], University Librarian, advising them that they will soon receive a web-based "Library Service Quality Survey" and encouraging them to complete it. Five days later, on [date], these individuals will receive another email from the University Librarian, with an embedded URL for the actual survey. Automatic reminder notices will be sent on [specify other dates, if any].
The data for all participating institutions will be collected on secure servers located in the Texas A&M University Library. Each response will be stored separately as it reaches the server, and survey results will ultimately be reported back to the participating institutions as aggregate mean score data.
[institution's name] will receive initial results in May, and will share final results with the campus community by the fall.
Yes. The LibQUAL+ approach to confidentiality is guided by the ethical standards of the American Psychological Association (see http://www.apa.org/ethics/code.html, section 5). Although some information is captured from respondents, such as network and email address, privacy is protected in two ways. First, only very indirect information is captured, which would be difficult to trace back to an individual. Second, everything possible is done to separate personal information from survey responses. Email addresses are stored separately from responses, so once responses are saved there is no way to link an individual's responses to their email address, ensuring confidentiality for those entering the incentive drawing. After the drawing, the email addresses are discarded.
The LibQUAL+ survey evolved from a conceptual model based on the SERVQUAL instrument, a popular tool for assessing service quality in the private sector grounded in the "Gap Theory of Service Quality". SERVQUAL was developed by Leonard L. Berry (Distinguished Professor, Texas A&M University), A. Parasuraman, and Valarie A. Zeithaml. The Texas A&M University Libraries and other libraries used modified SERVQUAL instruments for several years; those applications revealed the need for a newly adapted tool that would serve the particular requirements of libraries. Beginning in 1999, ARL, representing the largest research libraries in North America, partnered with Texas A&M University Libraries to develop, test, and refine LibQUAL+. This effort was supported in part by a three-year grant from the U.S. Department of Education’s Fund for the Improvement of Post-Secondary Education (FIPSE).
The questionnaire is straightforward and involves no deception or coercion. Potential respondents may elect not to proceed with the survey after reading the guarantees of confidentiality and privacy.
All [number] libraries participating in the LibQUAL+ survey will use the same 22 core questions and demographic questions. In addition, each Library may select 5 questions from a list of 122 optional questions. [institution's name] has selected optional questions covering [list of topics].
Survey data are transmitted directly from the LibQUAL+ server to a database. The data are then analyzed, and reports that describe how users perceive the quality of library services are generated for individual libraries. Participating institutions will have access to summary results for each institution, allowing for comparisons among peer institutions and all participating academic institutions. This will aid in developing benchmarks and understanding best practices across institutions, and will help [institution's name] to align services with user expectations.
Only summary statistics are shared with other institutions. The survey summary results will be made available to participants on a password-protected Web site. Users' comments (from the comments section) will be made available only to the users' own institution.
Survey results will include aggregate summaries, demographics by library, item summaries, dimension summaries, and dimensions measured for survey implementation.
Results will be compiled in a report that will be posted at this Web site: [local site], announced in [institution's media], and summarized on this site: [local site].