Introduction
The presentation outlines my view of current trends in the realm of library management systems and the impact that cloud computing has made in that arena. This rapidly changing domain finds itself in challenging times, as library missions evolve in response to changing demands related to collections and patron expectations. Cloud-based technologies and new-model automation systems have emerged to help libraries meet these new challenges. This presentation covers these movements along three different tracks: 1) the realm of discovery systems and services that libraries offer to provide access to their resources and services; 2) resource sharing arrangements that give libraries access to content beyond their own local collections; and 3) resource management systems. New developments, many relating to cloud computing technologies, have transpired in each of these areas.
Technology Adoption Patterns in Libraries
Libraries have distinctive patterns in implementing cloud technologies compared to other sectors. As a whole, libraries tend to move to new technologies at a relatively slow pace. Fortunately, a minority of libraries engage as early adopters, willing to test new technologies as they become available. Other presentations in the conference gave a more theoretical view of cloud technologies and information management. Considerable distance, however, lies between the theoretical work done in computer engineering and the products and services developed and implemented by libraries to provide access to content and services and to automate operations. This gap between the state of the art and practical products
creates a delay relative to the potential impact that these technologies might have on libraries if they could be delivered more rapidly. Rapid adoption of a technology comes with considerable elements of risk, which may not always be tolerable to libraries that prefer to work within a set of well-proven technologies. Trends relating to the adoption of automation products are documented in the annual “Automation Marketplace” industry report [1].
Align Infrastructure with Strategic Mission
To function optimally, libraries must have an automation infrastructure capable of supporting their strategic mission and operational objectives. Technology naturally does not exist as an end in itself, but rather as a set of tools to support the work of a library. Great technology operates relatively transparently while enabling the library to excel in its ability to serve its clients. A mismatch between what a library aspires to accomplish in its critical tasks and activities and the capabilities of its automation systems can hinder library success. One of the most glaring issues today relates to library automation systems tightly bound to the model of print borrowing and collections in an era when electronic and digital collections dominate.
Each sector of libraries sees a different set of trends relative to the shape of their collections and services. Public libraries, for example, continue to experience vigorous and growing circulation of their print materials. E-book lending has entered as a vital element of service for public libraries, but has not necessarily diminished interest in the print collections and physical spaces. Academic and research libraries, in contrast, generally have experienced more dramatic decreases in the circulation of their print collections as electronic scholarly resources take center stage. Going forward, I hope to see a realignment of technology so that it proportionally meets the objectives of libraries relative to print and electronic collections. Over the course of the last two decades, libraries have seen a fundamental shift toward increasingly dominant involvement with electronic materials, and it is time for their technology infrastructure to catch up with this reality.
Transitioning from Print to Digital
Academic libraries have seen incredible transformation in recent years. In the 1980s, library collections, especially in the sciences and technical disciplines, were dominated by print serials, with hundreds of ranges of shelving holding bound periodicals that ranked among the most actively used materials. In stark contrast, the print serials collection in recent years sees minimal use, since these materials are much more conveniently available through subscriptions to electronic journals and aggregated resources of scholarly content. Many libraries have either discarded the vast majority of their print serials collections or placed them in remote storage, making way for collaborative learning spaces or other programs that more directly engage library patrons.
The same kind of shift now is taking place in academic libraries in monographs. Many research libraries continue to have very large legacy print collections. But most academic libraries have reported that they have vastly curtailed current acquisitions of print monographs in favor of e-book collections, often purchased through demand-driven acquisitions.
This transition from print to digital library collections represents a major change, raising the question of whether the automation systems in place today can continue to handle the new workflows and business processes optimally. The overwhelming trend favors ever higher proportions of electronic and digital materials, with lower spending on print. That said, library collections will likely not reach a point in the next decade or two where they consist entirely of digital materials. Some amount of print and other physical materials will persist for the long-term future. The proportions will continue to shift over time in favor of the digital over the print. The future of library collections will become increasingly multi-faceted rather than quickly evolving into purely digital formats.
If that assumption proves true, libraries will increasingly require automation systems designed to handle complex collections composed of multiple formats: systems oriented primarily toward managing electronic and digital resources, but also able to manage print and physical inventory efficiently. The current legacy systems, however, were originally designed during the era when print dominated. These systems, developed for print, were later updated with an ability to manage electronic resources. In many cases libraries have implemented separate applications or utilities to manage their electronic resources. In this phase, when academic libraries spend most of their collection funds on electronic resources, they need management and discovery tools appropriate to that reality.
Transitions in Metadata
Much has also changed in the area of metadata used to describe library collections. The MARC formats that have been employed in library automation systems for the last 30 years are poised for change. For the last few years, many libraries have been busy implementing new cataloging rules. RDA, Resource Description and Access, is currently beginning to replace AACR2 as the principal cataloging rules used by many national and academic libraries. This transition has been quite time-consuming and expensive for technical services departments, with only incremental benefits in how these records can be used in management and discovery systems. The implementation of RDA has been especially painful given that many of these technical services departments are already under tremendous pressure to operate more efficiently and with fewer personnel.
The next change in library metadata will be even more drastic. The Bibliographic Framework Initiative underway at the Library of Congress has produced a proposed new format that brings bibliographic description of content items into the realm of linked data. This new BIBFRAME structure (see bibframe.org) represents a mapping of the MARC formats into RDF triples and the conceptual arena of linked data. The conversations regarding BIBFRAME are still underway, and the model has not been operationalized in any library management system, but it warrants close attention given the intense interest in bringing the growing universe of linked data into library information infrastructures.
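The core idea behind this mapping can be sketched in a few lines: a record of tagged fields becomes a set of subject-predicate-object statements about a resource. The sketch below uses plain tuples rather than an RDF library, and the URIs and field-to-predicate mapping are purely illustrative, not the actual BIBFRAME vocabulary.

```python
# Sketch: expressing a MARC-like record as linked-data triples.
# The example.org URIs and the field mapping are illustrative only;
# BIBFRAME defines its own classes and properties.

marc_like_record = {
    "245": "Cloud Computing for Libraries",   # title field
    "100": "Breeding, Marshall",              # author field
    "260": "2012",                            # publication date field
}

FIELD_TO_PREDICATE = {
    "245": "http://example.org/vocab/title",
    "100": "http://example.org/vocab/creator",
    "260": "http://example.org/vocab/date",
}

def record_to_triples(subject_uri, record):
    """Map tagged fields to (subject, predicate, object) triples."""
    return [
        (subject_uri, FIELD_TO_PREDICATE[tag], value)
        for tag, value in sorted(record.items())
        if tag in FIELD_TO_PREDICATE
    ]

triples = record_to_triples("http://example.org/work/1", marc_like_record)
for s, p, o in triples:
    print(f'<{s}> <{p}> "{o}" .')   # N-Triples-like serialization
```

Once bibliographic data takes this shape, any statement can link to resources described elsewhere on the Web, which is precisely the appeal of linked data over self-contained MARC records.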
As libraries become involved with other types of collections, other XML-oriented metadata formats such as Dublin Core, VRA, MODS, METS, and EAD have seen increasing use. These materials are often managed through separate platforms that employ these specialized formats. The vision of many of the new library services platforms includes a more comprehensive approach to managing library resources. To achieve this ambition, these platforms must be able to work with many forms of metadata. The flexibility in metadata management must also include the ability to accommodate new formats that may evolve in future years. Hard-coding any specific metadata format into the systems will ensure that they will eventually become obsolete.
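The requirement not to hard-code any single format can be illustrated with a small example: extracting fields from a Dublin Core record without assuming in advance which elements it contains. The Dublin Core element namespace below is the real one; the surrounding record structure is a simplified illustration.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"   # Dublin Core elements namespace

# A simplified Dublin Core record for illustration.
record_xml = """
<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Cloud Computing for Libraries</dc:title>
  <dc:creator>Breeding, Marshall</dc:creator>
  <dc:date>2012</dc:date>
</record>
"""

def extract_fields(xml_text):
    """Collect Dublin Core elements into a tag -> values dictionary,
    without hard-coding which tags must appear."""
    root = ET.fromstring(xml_text)
    fields = {}
    for el in root:
        if el.tag.startswith("{" + DC_NS + "}"):
            tag = el.tag.split("}", 1)[1]      # strip the namespace prefix
            fields.setdefault(tag, []).append(el.text)
    return fields

fields = extract_fields(record_xml)
print(fields["title"])   # ['Cloud Computing for Libraries']
```

A platform built this way treats each metadata scheme as data driving a generic pipeline, so accommodating a new format means adding a mapping rather than rewriting the system.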
The concept of open linked data stands poised to effect major changes in a future wave of library technologies. The move from AACR2 to RDA has been a very expensive and laborious transition, with a narrower set of tangible benefits in the way that library collections are managed and presented to library users. As changes of greater magnitude loom, I hope that libraries are able to navigate the transition expeditiously and in ways that will achieve more transformational results.
Cycles of Technology Culminate in Cloud Computing
Libraries should also be ready for a paradigm shift in the way that they deploy their computing and information infrastructure, away from local servers and storage toward cloud-based technologies. Cloud computing and applications based on service-oriented architecture are being adopted increasingly in many different kinds of organizations and ICT sectors.
There are many different flavors of cloud computing from which any organization can choose depending on its business needs, security concerns, and policy or legal requirements. Private, public, and local clouds offer different models of resource deployment, data segregation, and hosting locations able to meet these varying requirements [2].
Libraries can achieve many tangible benefits as they move to cloud computing. In contrast to the incumbent model that requires locally installed desktop software, cloud computing generally delivers software through Web-based interfaces and eliminates the need for local servers. Moving to cloud computing enables greatly simplified administration of library automation systems. A library automation system based on client/server architecture, for example, involves an onerous process of installing updates, where new client software may need to be deployed on hundreds of workstations. This labor-intensive task consumes considerable time for the library's technical personnel that could otherwise be spent on more worthwhile activities.
The transition to Web-based interfaces provides many other benefits and flexibility in the way that library personnel and patrons make use of technology-based services. Through concepts such as Responsive Web Design, applications can be used easily across many different types of devices, including smart phones and tablets in addition to full-sized laptop and desktop computers. Given that the adoption of mobile computing continues to rise dramatically, it is essential for libraries to implement interfaces friendly to these devices quickly. Libraries that lack fully mobile-enabled interfaces for patron-facing services risk losing an increasing portion of their patrons each year. This accelerated trend toward mobile adoption in the consumer sector should prompt libraries to be very aggressive in deploying services that work across all categories of devices. The sluggish pace at which libraries have previously moved to new technologies must be accelerated to maintain relevancy and to meet patron expectations through this current phase of change.
The current change resembles previous phases in the history of computing. The earliest phase of library automation took place during the time of mainframe computers. The mainframe-based ILS products relied on very expensive central computers, with character-based interfaces accessed through networks of display terminals with no computational capabilities of their own. These mainframes had very limited processing and storage capabilities by today's standards, were very expensive, and required highly technical software and hardware engineers to maintain. A new generation of computing infrastructure in libraries based on client/server architectures displaced the mainframes beginning in the mid to late 1980s. These client/server systems took advantage of the desktop computers that were beginning to proliferate in libraries, in conjunction with more affordable mid-range servers. This generation of library automation systems offered graphical user interfaces for staff and patrons designed to be more intuitive to use than the character-based interfaces of the previous era, which operated through cryptic commands or textual menus.
Once the era of client/server computing was in full force, software development had to adjust accordingly. As organizations decommissioned their mainframes, developers began porting or developing software designed for the operating systems, distributed computing models, and graphical environments consistent with the client server architecture.
We see the same kind of fundamental shift in computing architectures playing out in recent years as the era of client/server gives way to cloud computing. In this transition between preferred technology architectures we see two threads among those who develop major library systems. One approach works to reshape existing platforms incrementally toward Web-based interfaces and service-oriented architecture. This evolutionary method can deliver a more gradual transition toward systems that are technically viable by today's standards. It requires considerable effort in re-engineering products, but is generally able to reuse some of the code base and preserve functionality that may have matured over time. Alternately, this transition also provides the opportunity to build entirely new products specifically designed to be deployed through modern multi-tenant platforms and with a fresh look at functionality. The evolutionary approach can be seen in integrated library systems that have been substantially reworked to encompass a more inclusive set of resource management capabilities and to implement Web-based interfaces for staff gradually. The new genre of library services platforms includes many examples of revolutionary development: entirely new products with entirely new codebases, written through current programming methods and software architectures, with functionality designed without the baggage of existing systems, and able to be deployed through multi-tenant software-as-a-service.
Beware of Marketing Hype
Cloud computing today finds a high level of acceptance in most libraries. In the early phase of this technology cycle many organizations worked on educating libraries regarding the virtues of cloud computing, giving reassurance to its ability to meet the needs of libraries in a reliable and secure way.
As cloud computing has become popular, some organizations have begun to employ the term as they market their products. The term “cloud computing” tends to be applied both to scenarios where a vendor hosts the server portion of a client/server application and to those deployed through true Web-based software-as-a-service. Since “in the cloud” has become more of a marketing term than a technical designation, libraries need to be quite careful to understand the architecture and deployment options of the systems under consideration. While hosted applications generally represent a positive arrangement for libraries, they do not necessarily offer the transformational potential possible with more full-fledged implementations of multi-tenant software-as-a-service. The term “cloud washing” describes the marketing hype that applies the label of cloud computing without necessarily delivering technologies consistent with the established architectures.
Even a hosted service that may not meet the modern understanding of software-as-a-service can result in benefits to libraries. The efforts of a library's technical personnel need to be targeted strategically. Taking care of local servers requires considerable time and attention. Implementing the layers of security and data protection needed to manage local computing and storage infrastructure responsibly requires considerable technical expertise and may not play to the core strengths of a library. Large-scale data centers associated with cloud infrastructure providers can employ teams of specialists for each aspect of infrastructure. Relying on externally hosted systems or subscribing to applications through software-as-a-service can free up a library's technical personnel to focus on activities that have a more direct impact on end-user services.
Paying for Cloud Computing
Cloud-based services may be priced through a utility model where computing cycles, storage, and bandwidth consumed are metered and charged according to the amount used. Amazon Web Services, for example, employs metered pricing. Customers pay more during peak periods and can scale back to save costs during periods of less intense activity. Utility pricing for cloud-based infrastructure can especially be attractive for software development projects where the use levels remain quite low, and can even remain in the tier of free services. Once the service is ready to be put into production, the resources expected to support higher levels of use are deployed, including redundant components and other configurations needed to provide adequate performance, reliability, and security.
Alternatively some services are priced through fixed monthly or annual subscription fees. This subscription model of pricing prevails in library software, where the company negotiates the amount of the annual fee with the library according to factors such as the components of the system employed, the size of collections, the number of users served, and other factors that represent the scale and complexity of the implementation.
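The trade-off between the two pricing models comes down to simple arithmetic over expected usage. The sketch below compares a metered year against a flat subscription; all rates and usage figures are hypothetical.

```python
# Comparing the two pricing models described above.
# Rates and usage numbers are hypothetical illustrations.

def metered_cost(monthly_usage_hours, rate_per_hour):
    """Utility pricing: pay for what is consumed each month."""
    return sum(hours * rate_per_hour for hours in monthly_usage_hours)

def subscription_cost(months, monthly_fee):
    """Subscription pricing: a fixed fee regardless of usage."""
    return months * monthly_fee

# Light usage during development, heavier once in production.
usage = [20, 20, 30, 150, 400, 420, 410, 400, 390, 400, 410, 420]

print(metered_cost(usage, rate_per_hour=0.10))   # yearly metered total
print(subscription_cost(12, monthly_fee=35.0))   # yearly flat total
```

Under this hypothetical profile the metered model is cheaper while the project idles in development and only approaches the subscription total once production traffic ramps up, which is why utility pricing particularly suits development projects, as noted above.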
Software-as-a-Service
The most common form of cloud computing today involves deploying applications through software-as-a-service. Characteristics of this model include interfaces delivered entirely through a Web browser and no requirement for local servers. The service will consolidate or segregate users and data as needed so that individuals or organizations gain access only to their own data, with safeguards in place to prevent unauthorized access. For a mail application such as Gmail, for example, individual accounts can operate both privately and within organizational structures, with the appropriate domain name, user authorizations, branding, and other parameters. Each individual user can see only their own messages, unless messages are explicitly shared within organizational folders. Data architectures have been well established for partitioning multi-tenant software-as-a-service so that each user of a system can access the appropriate data.
Software developers benefit from multi-tenant applications through the ability to deploy a single code base that serves all users of the system with appropriate branding, configuration, and data segregation. New features, security patches, or bug fixes can be deployed once for all users of the system rather than having to install updates on many different server installations and workstation clients.
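The essence of this partitioning can be reduced to a very small sketch: one shared data store, with every query filtered by the tenant making the request. The data and names below are illustrative; a real platform enforces this at the data layer rather than in application code.

```python
# Sketch of multi-tenant data segregation: a single shared store,
# with each tenant's queries scoped to its own rows. Illustrative only.

records = [
    {"tenant": "library-a", "title": "Local History Quarterly"},
    {"tenant": "library-b", "title": "Engineering Abstracts"},
    {"tenant": "library-a", "title": "Music Review"},
]

def query(tenant_id, store):
    """Return only the rows belonging to the requesting tenant.
    In production this filter lives in the platform's data layer,
    never left to each application to remember."""
    return [row for row in store if row["tenant"] == tenant_id]

print([r["title"] for r in query("library-a", records)])
```

Because every tenant runs against the same code base with this scoping applied uniformly, a single deployment of a fix or feature reaches all of them at once, which is the developer benefit described above.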
Many applications deployed through software-as-a-service give users control over the way that new features are deployed. An administrative console gives organizational administrators the ability to manage the configuration and behavior of the system. When new features become available, they may be suppressed initially so that they can be tested and users can be notified or trained as needed before they are activated. Existing features can be improved through incremental changes that do not disrupt the productivity of users, as might be the case when major changes happen abruptly.
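This suppress-then-activate pattern is commonly implemented with feature flags. The minimal sketch below, with entirely hypothetical names, shows the behavior described: a new feature ships disabled and becomes visible only after an administrator turns it on.

```python
# Sketch of feature activation through an administrative console.
# Class and feature names are hypothetical illustrations.

class AdminConsole:
    def __init__(self):
        self._flags = {}                  # feature name -> enabled?

    def register(self, feature, enabled=False):
        """New features arrive in the deployment disabled by default."""
        self._flags[feature] = enabled

    def enable(self, feature):
        """Activated by an administrator after testing and training."""
        self._flags[feature] = True

    def is_enabled(self, feature):
        return self._flags.get(feature, False)

console = AdminConsole()
console.register("new-search-ui")         # deployed, but suppressed
assert not console.is_enabled("new-search-ui")
console.enable("new-search-ui")           # switched on when staff are ready
assert console.is_enabled("new-search-ui")
```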
Efficiency and Collaboration
Cloud computing not only enables more efficient and convenient use of applications, but it also brings forward some opportunities that can be transformative to libraries. While multi-tenant applications must have the ability to limit and segregate access to data, they also come with the ability to share resources very broadly.
While some types of data must be confined within an individual organization, there are many areas where information can be shared with the broader community to great mutual benefit. Such multi-tenant, or Web-scale, infrastructure allows libraries to collaboratively build and share critical resources such as bibliographic services, knowledge bases of e-resource coverage and holdings, or centralized article-level discovery indexes.
These highly shared models of automation present many advantages over those based on isolated local implementations of integrated library systems that build individual silos of content. Cloud computing enables workflows that leverage the cumulative efforts of librarians across many different organizations—or even regions of the world—to collaboratively create resources with enormous mutual benefit. These large, collaboratively created resources not only allow libraries to operate more efficiently; they also provide ever larger pools of information resources to library patrons and a foundation for resource sharing [3]. Local computing, in contrast, tends to reinforce patterns where each library recreates data and transactions redundantly, in isolation from its peers.
Reshaping Library Organizations and Software Design
This new phase of technology provides the opportunity to develop new library management applications, and to fundamentally re-think their organization and design. The incumbent slate of integrated library systems (ILS) was designed when libraries were involved almost exclusively with print collections.
The classic model of the ILS divides functionality into a standard set of modules including circulation, cataloging, public catalog, serials management, acquisitions, and authority control. Optional modules or add-ons may support reserve reading collections or inter-library loans. Many libraries have structured their organizations in a similar pattern. The transformation of libraries into organizations primarily involved with electronic and digital materials brings the opportunity to reshape both their technical and organizational infrastructure.
In the current model, libraries offer a fairly standard set of services through desks or offices dedicated to specific activities, most of which are oriented to physical materials. A typical library operates a circulation desk for standard loans and returns of books available in the library, a reserve desk for short-term loans of materials set aside for use in a specific course, and an inter-library loan office to request items not owned by the library. The legacy concepts of circulation, reserves, inter-library loan, branch transfers, and related activities may be better conceptualized today as resource fulfillment.
The traditional ILS modules and service points organized around them in the physical library can be reconsidered in favor of alternatives that provide a more flexible service to library patrons. Automation systems likewise can be redesigned to manage and provide access to library resources through workflows optimized for modern multi-faceted collections and not constrained by the increasingly obsolete structure of ILS modules. The transformed nature of multi-faceted library collections that favor electronic materials and new capabilities of resource management systems
present an opportunity to reconsider whether the traditional models of service make sense in today's circumstances [4].
Open Systems
Libraries today demand more open systems that provide access to data and functionality outside of the user interfaces that come with the system; they have little tolerance for a closed proprietary system that restricts or completely disables access to its underlying data. Many libraries need to extend the functionality of the system to meet specific local needs. They need to connect systems together to exchange data efficiently. Library automation systems operate within an ecosystem of data that spans many areas of the campus enterprise. The university's student management system definitively manages the accounts of registered students, and this data must be well synchronized with the ILS. The business transactions related to the acquisition of library materials need to be reconciled with the enterprise resource planning or accounting systems of the university. Campus-wide authentication systems should enable all the patron-facing services of the library to operate through a single sign-on mechanism.
In order to meet modern expectations of interoperability and extensibility, library automation systems must be more open than ever before. The primary vehicle for delivering this openness comes through application programming interfaces, or APIs, that allow library programmers to access the data and functionality of the system through a set of well-documented requests and responses that can be invoked through scripts or software programs.
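The request-and-response pattern such APIs follow can be sketched briefly. The endpoint path, JSON fields, and stub server below are entirely hypothetical; a real library services platform documents its own routes and schemas.

```python
import json

# Sketch of the API request/response pattern described above.
# The route and payload fields are hypothetical illustrations.

def fetch_item(item_id, transport):
    """Issue a documented request and parse the JSON response.
    `transport` stands in for a real HTTP client."""
    raw = transport(f"/api/items/{item_id}")
    return json.loads(raw)

def fake_transport(path):
    """Stub standing in for the server, so the sketch is runnable."""
    return json.dumps({
        "id": path.rsplit("/", 1)[-1],
        "title": "Cloud Computing for Libraries",
        "status": "available",
    })

item = fetch_item("b1234", fake_transport)
print(item["status"])   # prints: available
```

The value of the approach is that a local script like this depends only on the documented request and response shapes, not on the vendor's internal code, which is what makes API-based extension sustainable across releases.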
In past generations, libraries needing local changes would hope for the ability to customize the internal coding of a system. This model of customization is not sustainable, since the changes made for one organization may not work well for the general release. Also, any local changes would need to be re-implemented with every new release of the software. Local customizations tend to be fragile, with the possibility that any new version of the software may change the underpinnings on which they depend. Rather than expecting to meet local needs by changing the internal coding, modern systems offer a richer set of configuration profiles that meet the needs of most organizations that implement the system, and provide APIs to create functionality for local requirements through a more sustainable method.
Shared Infrastructure
A modern application, such as a library services platform, provides a base level of functionality through its default user interfaces, but also allows each organization that implements it to create utilities or widgets that extend its capabilities. Many of these local needs may also be useful to other implementers of the system, providing the opportunity for communities of developers surrounding any of these systems to share their code creations and expertise.
Another important consideration relates to how libraries organize themselves relative to their automation environments and the opportunities for large-scale implementations to transform the way they provide access to their collections to patrons. The traditional model of library automation targets providing service to a finite number of facilities organized within a system. A system may be comprised of multiple branches within a municipal library service or a central library and departmental or faculty libraries within a university. Multiple library systems may collaborate to share a library automation system.
The current phase of library automation, with the support of cloud computing technologies, supports ever more expansive implementations of platforms that enable libraries to automate collaboratively in ever larger numbers. While libraries have shared consortial systems from the earliest phases of automation, their size has been constrained by the limitations of computing resources. In today's era of cloud computing, the limits of scale seem almost boundless. One of the important trends in recent years includes the consolidation of libraries into shared automation infrastructure, often at the regional, state, or national level [5]. These consolidated implementations allow libraries to automate at a lower cost relative to operating their own local systems and provide their users the benefit of access to massive collections. Some examples of these large shared-infrastructure implementations include the State of South Australia, where all the public libraries share a single SirsiDynix Symphony system; Chile, which provides shared automation based on Ex Libris Aleph coupled with a VuFind interface; and the Illinois Heartland Library System, based on a Polaris ILS shared by over 450 libraries in the largest consortium in the United States. Denmark has recently launched a project to automate all of the public libraries in the country in a shared system.
Cloud computing stands to support important advancements in library automation, enabling libraries to have a greater impact on the communities they serve. In these times when libraries have fewer resources at their disposal, yet face ever increasing expectations regarding the information needs of their clientele, technologies based on cloud computing have considerable potential. A new generation of library services platforms has emerged in recent years that aims to manage resources more comprehensively, to leverage shared knowledge bases and bibliographic services, and to provide open platforms for extensibility and interoperability. Web-scale index-based discovery services provide library users instant access to library collections, spanning all types of resources. This new phase of library automation, built on the foundation of cloud technologies, offers libraries willing to break free from traditional models of automation based on local resources the means to collaborate on a global scale to meet the needs of their communities.
References
1. Breeding, M.: Automation Marketplace: The Rush To Innovate. Library Journal 137(6) (2013), http://www.thedigitalshift.com/2013/04/ils/automation-marketplace-2013-the-rush-to-innovate/
2. Breeding, M.: Cloud Computing for Libraries. ALA TechSource (2012)
3. Breeding, M.: Next-Gen Library Catalogs. Neal-Schuman Publishers, Inc. (2010)
4. Breeding, M.: The Library Information Landscape Approaching the Year 2050. Information Services and Use 32(3), 105--106 (2012)
5. Breeding, M.: Library Discovery Services: From the Ground to the Cloud. In: Getting Started with Cloud Computing: A LITA Guide. Neal-Schuman (2011)