We are pleased to announce the results from the first phase of the "ARL E-Metrics Project: Developing Statistics and Performance Measures to Describe Electronic Information Services and Resources for ARL Libraries," which began in May 2000.1 Overall, Phase I finds that a number of ARL libraries participating in the project have created practicable strategies and approaches for developing statistics and performance measures to describe use, users, and uses of electronic and networked information services and resources. Despite these strategies, it appears to be too early to offer "best practices" in developing and using such statistics and performance measures. The study also identified a number of key issues that will require additional attention as the project continues into Phase II.
The three primary goals of this project are to:
- develop, test, and refine selected statistics and performance measures to describe electronic services and resources in ARL libraries;
- engage in a collaborative effort with selected database vendors to establish an ongoing means to produce selected descriptive statistics on database use, users, and services; and
- develop a proposal for external funding to maintain the development and refinement of networked statistics and performance measures.
The two objectives of this initial phase were to (1) identify and describe the current state of the art of statistics and performance measures for networked services and resources in ARL libraries and (2) organize an ARL Working Group on Database Vendor Statistics to begin discussions with database vendors.
Phase I relied on the following types of data collection methods:
- survey questionnaires;
- site visits to selected libraries;
- sample vendor reports supplied by members of the Vendor Statistics Working Group;
- sample library-generated reports obtained from project participants; and
- follow-up interviews as necessary.
These efforts produced a number of findings and identified key issues and recommendations that are summarized in this report. However, it is important to stress that the findings and recommendations are based on data from 24 participating libraries and may not be generalizable to the larger group of ARL libraries.
A summary of the key findings from Phase I of the study follows.
Findings from the Survey
Analysis of the E-Metrics survey responses reveals a wide range of data collection and use activities among the 24 project participants. It appears that measures related to patron-accessible resources and costs are collected more consistently and systematically than measures related to electronic resource use or users of those resources. Due to the often inconsistent and non-comparable nature of vendor-supplied statistics, libraries have considerable difficulty in tracking overall electronic database usage and use patterns.
The collected data seem to be shared widely among library staff and with parent institutions. However, the manner in which the information is communicated and the nature of the reporting process appear to be limited. Data are most often used to make purchasing decisions for licensed vendor materials. Respondents also indicated various uses of the data for internal and external reporting and for service assessment and evaluation.
Regarding the most important issues related to performance measurement of networked resources and services, the majority of respondents cite the lack of consistent and comparable statistics from database vendors as the most serious problem. Relatively few respondents recognized or identified problems associated with the library's inability to process and utilize collected data.
Findings from Vendor Reports
Analysis of usage statistics from 12 major database vendors reveals a wide range of different practices and shows that progress is necessary in several areas, including standardization of core statistics, report delivery method, and the provision of definitions of reported statistics. There are some signs in the way vendors report data that indicate increased cooperation between libraries and vendors.
Findings from Site Visits
Libraries reside in different operating environments and have very different needs in terms of data to describe electronic services and resources. The environment differs because of the institution's involvement with the library operation, the library's top management attitude toward evaluation efforts, and the library's data-related needs. To analyze to what extent these differences may affect efforts to find a common set of e-metric measures for research libraries, four libraries were visited (Virginia Tech, University of Pennsylvania, Yale University, and New York Public Library). The site visits proved to be very useful for documenting current practices and elaborating on some of the results of the survey. For example, libraries have a serious problem managing information describing the use of electronic resources and services. This is particularly the case with regard to licensed vendor materials primarily because descriptive data often reside under vendor control. Libraries often have to manage different interfaces to obtain different types of resources, and, accordingly, usage statistics typically are distributed among several dozen database vendors and consortia. Due to a lack of standardized reporting practices, usage reports are difficult to consolidate, or it takes an enormous amount of effort to collect such data. Non-vendor-based data collection efforts to describe electronic services and resources appear to have received less attention than vendor database statistics efforts.
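The consolidation problem described above can be illustrated with a small sketch. The vendor names, column mappings, and file layouts here are hypothetical, assuming only that each vendor delivers a usage report as a CSV with its own column names for the same core counts (sessions, searches, items examined):

```python
import csv
import io

# Hypothetical per-vendor column mappings: each vendor reports the same
# core counts under different names, so each needs its own mapping onto
# a common schema before reports can be consolidated.
VENDOR_FIELD_MAP = {
    "vendor_a": {"Logins": "sessions", "Queries": "searches",
                 "FullTextViews": "items_examined"},
    "vendor_b": {"Sessions": "sessions", "Searches": "searches",
                 "Downloads": "items_examined"},
}


def normalize_report(vendor, csv_text):
    """Map one vendor's CSV usage report onto the common schema."""
    mapping = VENDOR_FIELD_MAP[vendor]
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        normalized = {"vendor": vendor, "database": row.get("Database", "")}
        for src, dst in mapping.items():
            normalized[dst] = int(row.get(src, 0))
        rows.append(normalized)
    return rows


def consolidate(reports):
    """Sum normalized counts across all vendors' reports."""
    totals = {"sessions": 0, "searches": 0, "items_examined": 0}
    for vendor, csv_text in reports.items():
        for row in normalize_report(vendor, csv_text):
            for key in totals:
                totals[key] += row[key]
    return totals
```

The mapping table is the piece each library currently has to maintain by hand for every vendor; standardized reporting would make it unnecessary.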
Phase I also identified a number of issues that will require additional discussion and resolution:
- Complexity of the topic: participating libraries, vendors, the study team, and users may not all have a full and shared understanding of the complexity of developing statistics and performance measures for electronic services and resources.
- Diverse context for developing statistics and performance measures: each ARL library operates in a unique setting that affects the development and use of specific statistics and measures.
- ARL library responsibilities and level of effort: there are a range of internal factors that affect the degree to which the library can provide resources and an adequate level of effort to collect data.
- Focus on non-vendor-based data sources: there are a number of statistics and measures to develop that do not depend on the database vendors.
- Coordination among libraries and library organizations: numerous libraries and organizations, such as the National Information Standards Organization, National Commission on Library and Information Science, International Coalition of Library Consortia, and Digital Library Federation, are interested in developing standards for measuring electronic and networked services and resources. Information sharing and coordination of efforts will maximize the usefulness of each initiative for all libraries.
The full report on Phase I discusses in greater detail these issues, which will be important areas for attention in Phase II of the study.
Although findings from Phase I of the study did not identify a set of "best practices" for developing electronic and networked statistics and performance measures, the study team can recommend a number of very specific strategies that can help participating libraries better prepare for data collection to produce such statistics. These strategies include creating a culture of evaluation; stressing the use and development of statistics and measures in strategic planning documents; reorganizing the library for assessment, data collection, and reporting; and developing a data advocate within the library.
The next steps to be taken in Phase II include:
- developing and field-testing possible statistics and performance measures to describe services and resources in the electronic environment (see accompanying proposal);
- addressing the key issues outlined;
- convening the Vendor Statistics Working Group and meeting with selected vendors; and
- conducting or participating in a number of meetings to coordinate the library community's efforts to develop such statistics and measures.

Phase II will be completed in June 2001 and will result in a short manual that proposes statistics and measures that libraries can use to describe and assess electronic services and resources.
The complete Phase I Project Report is available at <http://www.arl.org/stats/newmeas/emetrics/phaseone.pdf>.
1. A group of 24 ARL member libraries funded the study and are participating in it; this project is under contract with Florida State University's Information Use Management and Policy Institute and is directed by Charles R. McClure, Wonsik "Jeff" Shim, and John Carlo Bertot under the leadership of project co-chairs, Sherrie Schmidt, Dean of University Libraries, Arizona State University Library, and Rush Miller, University Librarian and Director, University of Pittsburgh.
Proposal for Phase II Field Tests
During Phase II of the ARL E-Metrics Project, the focus shifts to the identification and field testing of a preliminary set of statistics and measures. This is an essential step toward uniform reporting practice across ARL libraries. Following is a set of preliminary data elements or statistics that are under consideration for field testing at select ARL libraries.
Electronic Resources and Services:
- number of electronic full-text journals (hosted by library);
- number of librarians providing electronic reference;
- virtual visits to networked library resources;
- electronic reference transactions;
- number of public-access workstations;
- number of electronic full-text journals (through subscription);
- logins (sessions)*;
- queries (searches)*;
- items examined (viewed, downloaded, emailed, printed)*;
- turn-aways (requests exceed simultaneous user limit)*;
- total user connection time to vendor databases; and
- number of people who participated in user instruction on electronic resources.
Cost of Electronic Databases and Services:
- cost of electronic database subscriptions;
- cost of internal digital collection construction; and
- cost per items examined (subscribed databases).
- % electronic reference transactions of total reference;
- % electronic materials use of total library materials use;
- % remote library visits of all library visits; and
- ratio of public access workstations to university population (number of faculty, staff, and students).
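Once the raw counts above are collected, the composite measures (cost per item examined, the percentage measures, and the workstation ratio) are simple derivations from them. The function names and example figures in this sketch are illustrative only, not part of the proposal:

```python
def cost_per_item_examined(subscription_cost, items_examined):
    """Cost of subscribed databases divided by items examined in them."""
    if items_examined <= 0:
        raise ValueError("items_examined must be positive")
    return subscription_cost / items_examined


def pct_electronic_reference(electronic_ref, total_ref):
    """Electronic reference transactions as a share of all reference
    transactions, in percent."""
    return 100.0 * electronic_ref / total_ref


def workstation_ratio(workstations, university_population):
    """Public-access workstations per member of the university
    population (faculty, staff, and students)."""
    return workstations / university_population
```

For example, a library spending $50,000 on subscribed databases whose contents were examined 200,000 times would report a cost of $0.25 per item examined.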
The study team emphasizes the preliminary and experimental nature of these proposed statistics and measures.
* From the November 1998 ICOLC (International Coalition of Library Consortia) Guidelines for Statistical Measures of Usage of Web-Based Resources <http://www.library.yale.edu/consortia/webstats.html>.