It's remarkable to me how much public computing has changed since I began working for the Vanderbilt University library system in the mid-1980s. We, like most libraries, have followed a progression of technologies that began with dumb terminals connected to a mainframe, moved to PCs with access to the online catalog and CD-ROM-based information products, and eventually developed into our current environment that relies on the Web to deliver information resources to our users both locally and remotely. A review of the historical path that public access library computing has taken reveals the significant limitations of earlier times, illustrates the continual expansion of information that libraries have been able to offer to their users, and sheds light on the opportunities and challenges we will face in the future. Recalling the limitations of the not-too-distant past helps us to put today's information overload problems into perspective.
At each phase of library computing history, it has seemed as if the prevailing computing power and communications speed far exceeded previous capabilities, delivering more capacity than the applications of the day could effectively use. In retrospect, the earlier computing platforms have seemed hopelessly inadequate, and we've chuckled at the enormous physical size of the previous generation of computing equipment relative to its paltry capabilities.
While the advancements in computing power and communications speed are impressive, it's even more amazing to consider the enormous expansion of information that we make available to our users relative to earlier times. Each generation of technology has offered quantum-level advancements in the types and quantity of information available to library users.
Early Automation Nostalgia
Text-Based Electronic Card Catalogs: The Vanderbilt Libraries began their initial automation effort in the mid-1980s. Before about 1985, the card catalog provided access to the library's collection of books and journals. Printed indexes and abstracts facilitated the process of finding relevant articles in our journals and periodicals. But essentially, library users' search and information retrieval processes were unautomated, and performing library research was a tedious and time-consuming activity.
With the implementation of the NOTIS integrated library system in about 1985, users could search and browse the library's collection through a text-only interface. This system offered an enormous improvement over the previous manual process, but it was essentially an electronic version of the card catalog: it made finding items in the library's physical collection far easier, yet offered no new types of information.
The earliest public computers in our library were large Telex display terminals that provided access to our NOTIS ILS. One of my earliest jobs in the library systems office was maintaining the network of display terminals attached to the IBM 4361 mainframe computer that ran our implementation of NOTIS. These terminals had no computing capabilities of their own, and the mainframe to which they connected had far less computing power than a single PC of today. The terminal network used a bisynchronous protocol that operated at a data rate of 9.6 Kbps, roughly ten thousand times slower than the 100 Mbps of our current Ethernet network. I remember well the crankiness and fragility of that network, in which even the smallest power fluctuations or cable irregularities would bring whole banks of terminals to a dead halt.
Locally Mounted Periodical Databases: Although automated access to the library's holdings through the ILS was an enormous improvement, it did nothing to help researchers to find information in journal articles. The paper periodical indexes prevailed as the tool for locating articles in journals and periodicals by topic.
At Vanderbilt, our first foray into providing wide access to article-level information involved loading periodical indexes into our mainframe-based library management system. Using special software created by NOTIS, we were able to load and index large sets of citations provided by H.W. Wilson and other companies that specialized in abstracting-and-indexing databases. By the late 1980s, this approach had greatly increased the amount of electronic data we offered, enabling users to search by keyword to find the journal articles in our collection relevant to their research topics. Although it offered significant benefits to library users, the process of loading new updates and rebuilding the indexes required considerable staff time.
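The basic mechanism behind those keyword searches, stripped of everything that made NOTIS complicated, is an inverted index: a table mapping each word to the citations that contain it. Here is a minimal sketch in Python, with made-up citation data; it illustrates the concept only, not the actual NOTIS software:

```python
from collections import defaultdict

# Hypothetical article citations of the kind supplied by A&I vendors.
citations = [
    {"id": 1, "title": "Acid Rain and Forest Decline", "journal": "Nature"},
    {"id": 2, "title": "Forest Canopy Ecology", "journal": "Ecology"},
]

# Build the inverted index: each word in a title points to the ids of the
# citations containing it.
index = defaultdict(set)
for cite in citations:
    for word in cite["title"].lower().split():
        index[word].add(cite["id"])

def search(keyword: str) -> list[int]:
    """Return the ids of citations whose titles contain the keyword."""
    return sorted(index.get(keyword.lower(), set()))

print(search("forest"))  # -> [1, 2]
```

Rebuilding an index like this from scratch each time a vendor shipped an update is exactly the kind of batch work that consumed so much staff time.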
From Dumb Terminals to PCs: Our interest in using personal computers for our public workstations began shortly after our initial installation of the NOTIS system and its network of dumb terminals. Early on, we worked to use PCs-which we considered far more versatile-in place of at least some of the bulky Telex terminals for access to the NOTIS mainframe.
Initially, the PCs ran terminal emulation software. Running the MS-DOS operating system, they were inherently limited to performing one task at a time. Yet we saw great potential in moving from dumb terminals to microprocessor-based PCs so that we could expand the information resources available. Though the computing power of those early 8088-based PCs looks ridiculously small by today's standards, they seemed quite powerful at the time, and they gave us enough of a taste of the capabilities of microcomputers to expand the services we offered library users. It was possible, for example, to download information from the online catalog to floppy disks on these microcomputers, a small feat not at all possible on display terminals.
At first, the library's PCs connected to the same communications network as the dumb terminals. While this network provided essential access to our online catalog, it lacked capabilities for communicating with other systems or sharing resources. Though the immediate benefits were limited, we were anxious to begin the transition away from dumb terminals to start creating an infrastructure of computing devices that we believed would ultimately deliver a broader universe of information to our users. Distributed computing models, microprocessor-based personal computers, and high-speed networks held great promise relative to the centralized mainframe computing model.
Our technical services staff members were some of the early beneficiaries of this effort, as we devised an approach to allow access to both our NOTIS cataloging module and the OCLC bibliographic utility on a single system. Catalogers and other technical services employees, rather than having two bulky terminals, could do their work from a single PC, popping instantly between these two systems and easily transferring MARC records into NOTIS from OCLC.
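The MARC records we moved between those systems follow a fixed structural convention that is still in use today: a 24-character leader, a directory of 12-byte entries, and the field data itself. The sketch below illustrates that standard MARC 21 layout; it is not the transfer software we actually used:

```python
FIELD_TERMINATOR = b"\x1e"  # ends the directory and each variable field

def parse_marc(record: bytes) -> dict[str, list[str]]:
    """Split one MARC 21 record into a mapping of field tags to field data."""
    leader = record[:24]
    base = int(leader[12:17])  # base address where the field data begins
    directory = record[24:record.index(FIELD_TERMINATOR)]
    fields: dict[str, list[str]] = {}
    # Each directory entry is 12 bytes: a 3-digit tag, a 4-digit length,
    # and a 5-digit starting position relative to the base address.
    for i in range(0, len(directory), 12):
        entry = directory[i:i + 12]
        tag = entry[:3].decode("ascii")
        length = int(entry[3:7])
        start = int(entry[7:12])
        data = record[base + start:base + start + length]
        fields.setdefault(tag, []).append(
            data.rstrip(FIELD_TERMINATOR).decode("utf-8", "replace"))
    return fields
```

A parser this simple ignores indicators and subfield delimiters, but it shows why records could be copied between systems so readily: the record structure is entirely self-describing.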
CD-ROM Information Products: For public computers, the next advancement came with the introduction of CD-ROM-based information products. Supplementing what was available through our mainframe, these CD-ROM products allowed us to provide electronic periodical indexes spanning a broad array of disciplines. It's important to note that these products provided no full-text information; rather, they offered a way to find relevant articles in our collection of printed journals and periodicals. Initially, we implemented these products by loading each CD-ROM into a drive attached to an individual PC and installing the search software on that machine. In most cases, a PC would be dedicated to providing access to a single product.
LANs Expand Options: While individual PCs outstripped terminals, connecting them together in a high-performance local area network (LAN) unleashed their full potential. The LAN made it possible to expand the number of information resources offered, and to make them available at all the computers. The early LANs operated at 10 Mbps, a speed quite capable of supporting access to multiple text-based information resources offered on servers located elsewhere on the network. A "3270 gateway" allowed us to break away from the constraints of the terminal network, while continuing to provide access to the online catalog on the mainframe. The PCs on the network loaded both their operating systems and any other software they needed from file servers instead of local disk drives.
Networked CD-ROMs: One of the early benefits of the Ethernet LANs was providing access at each public workstation in the library to our entire collection of CD-ROM applications. In the prior arrangement, each CD-ROM was available only on designated computers. Using specialized CD networking applications and towers of multiple CD-ROM drives, we were able to configure our public workstations to access a whole menu of information resources.
Although the CD-ROM networks made it possible for library users to access a large array of information resources, they demanded a high level of effort for maintenance and administration. The environment for the users was less than ideal, in that each of the products offered its own search and retrieval interface, often with cryptic syntax for search commands. Even when the products evolved graphical interfaces under Microsoft Windows, there was little consistency in search techniques.
The Internet Provides Global Connectivity: The emergence of the Internet in the early 1990s marked a new era for public library workstations-one with almost unlimited potential for providing access to information resources. With our Ethernet LANs already in place, each public library workstation could be configured for direct access to the Internet. In the earlier days of the Internet, public library computers gained access to resources, such as the online catalogs of other libraries and other text-based information, through Telnet. The Internet Gopher protocol was an early system for providing access to full-text information on distant servers distributed throughout the Internet.
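Part of Gopher's appeal was its extreme simplicity as a protocol: a client opens a TCP connection to port 70, sends a selector string (empty for the top-level menu) followed by CRLF, and reads whatever the server returns. A minimal client sketch in Python, with a hypothetical host name:

```python
import socket

def gopher_fetch(host: str, selector: str = "", port: int = 70) -> bytes:
    """Request one Gopher item: send the selector, then read the full reply."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# An empty selector asks for the server's main menu; each menu line is a
# tab-delimited record of item type and display text, selector, host, port.
# print(gopher_fetch("gopher.example.edu").decode("ascii", "replace"))
```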
The Web Changes Everything: The Internet offered a major advancement in the information resources that could be made available on library workstations, but the emergence of the Web enabled the most radical and transformative change of all. The Web provided a common user interface for information resources as well as an international fabric of interconnected resources and delivery systems.
Prior to the emergence of the Web, most of the resources that could be made available on library workstations were simply pointers to the places where the information actually resided: bibliographic records and citations representing text located elsewhere. The Web succeeded as a medium capable of delivering the information itself. No longer was it adequate to simply deliver a citation. Especially in the realm of journal articles, users quickly came to expect to view the full text online. Electronic journals, abstracting-and-indexing services, and aggregations of full-text resources proliferated at an explosive rate. The Web quickly took off as the medium for publishing information of all sorts. Many individuals and organizations were anxious to create Web sites chock full of information related to their respective areas of interest, much of it available without cost or restriction. The quality of this free Web content varies enormously, though some of it is authoritative and informative.
At the same time, as we struggle to meet the expectation of users to deliver more full text online, we also see strong interest in other formats of information-images, sound, and video. As the infrastructure of the Internet has evolved, the delivery of rich media formats has become much less of a technical problem. In my work with the Vanderbilt Television News Archive, I'm glad for that.
The Web challenges the fundamental concept of the library public workstation. Prior to the Web's emergence, the in-house public workstation served as the main vehicle through which the library made its information available to its users. With the ubiquitous Web, resources provided by the library can be accessed easily from computers in the dorms, homes, and offices of its users. While libraries still need to provide public computing stations in their buildings, these serve mostly as platforms for Web browsers, which provide access to the Web site or portal that the library has designed for both in-house and remote access to its resources.
New Problems and Issues
Twenty years ago, we struggled to provide a basic online catalog to our users. Today, our key automation issues revolve around dealing with information overload and providing users with tools to effectively search a vast array of information resources.
We need ways to help users identify which among a wide range of available information products will most likely hold the information they seek, and to create search results that are small and focused enough to be useful. Advanced search engines, OpenURL-based linking technologies, and metasearch environments are but some of the developments that have emerged to help us deal with the overabundance of information our patrons face. Relative to an earlier era of information paucity, these are good problems to have.
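OpenURL linking, to take one example, works by packing a citation's metadata into the query string of a URL aimed at the library's link resolver, which then decides where the user can obtain the full text. Here is a sketch of constructing an OpenURL 1.0 query in its key/encoded-value (KEV) form; the resolver address and the citation are invented for illustration:

```python
from urllib.parse import urlencode

# Hypothetical link-resolver base URL for a library.
RESOLVER = "https://resolver.example.edu/openurl"

# Standard OpenURL 1.0 KEV keys (ANSI/NISO Z39.88-2004) describing a
# made-up journal-article citation.
citation = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.atitle": "Public Workstations in Transition",
    "rft.jtitle": "Journal of Library Automation",
    "rft.issn": "1234-5678",
    "rft.date": "2004",
    "rft.volume": "24",
    "rft.spage": "21",
}

print(RESOLVER + "?" + urlencode(citation))
```

Because any vendor database can generate such a URL from a citation, the resolver gives the library a single point of control over where those links lead.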