Any complex organization today depends on computers to operate efficiently and to support the products and services it exists to provide. Whether the organization operates commercially or as a nonprofit, mostly online or in person, it requires a well-designed, well-maintained technology infrastructure to carry out its mission effectively and efficiently. In the library context, we depend on sophisticated business applications specifically designed to support our work. This infrastructure consists of such components as integrated library systems, their associated online catalogs or discovery services, and self-check equipment, as well as a website and the various online tools and services needed to manage and provide access to library resources. These systems work together to support behind-the-scenes work, in-person services for patrons, and virtual, mobile, and web-based services.
It's not just that all this technology makes the work of the library easier; without it, much of what we do cannot happen at all. This mission-critical infrastructure plays a key role in how the library keeps pace with its day-to-day activities and whether it achieves its strategic goals. I see a library's technical infrastructure as something that requires constant attention and occasional overhaul. Under-investing in technology can weaken the performance of the organization. While overall budgets may be shrinking, a solid technical infrastructure can help the library do more with fewer personnel resources and reap the best advantage from its print and electronic collection materials.
Just as a library's physical facilities require ongoing maintenance, repair, and occasional renovation projects to rework spaces in response to changing use patterns, its technical infrastructure likewise demands constant attention and periodic reevaluation. In this month's column, we explore some of the layers of attention that need to be in place to ensure that technology contributes its full potential to the success of the organization. Some of these layers fall into the area of routine, but often deferred, maintenance, as well as larger-scale renovation or rebuilding projects.
Performing Without a Net
In most cases, libraries operate without a safety net of fallback procedures for their critical systems. It's not practical, for example, for a library to maintain a physical card catalog just in case its online catalog becomes unavailable; almost all libraries find that the cost of producing and filing cards greatly exceeds the cost of the hopefully rare episodes of downtime of the online catalog. Some libraries do, however, maintain backup circulation systems that can capture checkout transactions when the integrated library system is down so that they can be uploaded later. Such an offline system has inherent limitations, such as not being able to accurately calculate loan periods.
One of my early library programming projects, incidentally, was to write a backup circulation system for the NOTIS LMS that was in use at Vanderbilt University in the mid-1980s. I developed it in Turbo Pascal to operate on PCs with dual 5 1/4" disk drives that ran MS-DOS, the typical desktop computer of the time. In those days, the computing equipment and networks were relatively fragile, and libraries had to plan for at least some episodes of downtime. Today, offline circulation is one of the very few areas in which a backup scenario is even possible. In practical terms, the key strategy for mission-critical applications lies more in shoring up reliability than investing in alternative processes to be used during episodes of downtime.
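The essential idea behind an offline circulation backup is simple: capture each checkout locally while the ILS is down and replay the queue once it comes back. The sketch below is a minimal illustration of that pattern, not any vendor's actual offline module; the class name and file format are hypothetical:

```python
import json
import time
from pathlib import Path

class OfflineCirculation:
    """Capture checkout transactions locally while the ILS is down,
    then read them back for upload once the system is available again."""

    def __init__(self, queue_path):
        self.queue_path = Path(queue_path)

    def record_checkout(self, patron_barcode, item_barcode):
        # Append one JSON record per line so a crash loses at most one entry.
        entry = {
            "patron": patron_barcode,
            "item": item_barcode,
            "timestamp": time.time(),
        }
        with self.queue_path.open("a") as f:
            f.write(json.dumps(entry) + "\n")

    def pending_transactions(self):
        # Read the queued transactions back for later upload to the ILS.
        if not self.queue_path.exists():
            return []
        with self.queue_path.open() as f:
            return [json.loads(line) for line in f if line.strip()]
```

Note that the upload step would use the ILS's own interface, and due-date calculation is deferred until then, which mirrors the loan-period limitation mentioned above.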
Libraries will need to assess their areas of vulnerability and plan accordingly regarding any fallback or redundant systems they may want to put in place. If internet access has historically had frequent outages, then it may be time to invest in a redundant connection through a separate provider. Large-scale automation implementations, such as those for a consortium, may find it worthwhile to operate redundant data centers with failover capability or that work in parallel as a cluster. Few libraries, unfortunately, have the resources to maintain redundancy for their technical infrastructure, and as libraries increasingly rely on electronic and digital content, analog fallbacks may not even exist. Fortunately, the means to achieve very high levels of reliability lie within closer reach.
Ensuring Near-Perfect Reliability
Libraries really depend on their automation systems. Downtime causes incredible disruption throughout the organization. Patrons cannot search the online catalog or check out materials, and library personnel cannot perform their work. As service-oriented organizations, libraries aim to provide high customer satisfaction, which is extremely challenging to accomplish if the computing tools on which the library has come to rely go down.
Fortunately, computer systems today can have exceptional reliability. Over the past 3 decades, most automation environments in libraries have become progressively more dependable. Networks are more stable, hardware is less subject to failure, operating systems are more fault-tolerant, and software applications have fewer bugs. All that said, libraries still need to take appropriate measures to ensure the best reliability from their mission-critical technical infrastructure.
Running the most recent stable versions of software will generally result in greater stability, security, and functionality. Operating systems, such as Linux or Windows, should routinely receive all available updates. While it's best to run the latest version of an operating system, even if you run an older version, it's essential to apply system patches as they become available. Once patches are no longer issued, it's time to upgrade to a more current version of the operating system family. Running older versions of operating systems may also have implications for higher-level applications. Libraries using Microsoft Windows-based systems will want to consult the product life-cycle guidelines (http://support.microsoft.com/lifecycle).
For both Linux and Windows, the application of patches can be automated. While some very complex environments may require that patches be routinely tested prior to being applied to production servers, in most cases, automatic updates can be safely activated. Given the pervasive onslaught of bots and malware probing for known vulnerabilities, failing to apply security patches can be catastrophic.
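On Debian or Ubuntu servers, for instance, automatic patching is typically enabled through the unattended-upgrades package; a minimal configuration fragment looks like the following (Red Hat-family systems offer the comparable dnf-automatic service):

```
// /etc/apt/apt.conf.d/20auto-upgrades
// "1" enables the daily package-list refresh and the
// automatic installation of pending security updates.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which package sources are eligible for automatic installation (security-only versus all updates) is tunable in the package's companion configuration file, so a cautious library can limit automation to security patches.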
Some libraries may erroneously believe that "leaving well-enough alone" will result in better stability, pointing to servers that have run for several years without having been touched. Running unpatched servers, however, is extremely risky since they are much more vulnerable to security compromises that can not only cause downtime but also risk corruption or total loss of data.
I've also seen many libraries operate on very old versions of their integrated library system software. Again, they may be leery of anything that might go wrong during the update process. Once a library misses a few consecutive version updates, getting to the latest version can be a difficult, multistage process. I would almost always recommend that libraries move forward with updates as they become available. These routine updates are covered by the software maintenance fees, so libraries save no money by postponing them. Running older versions of the application means that the library is not benefiting from any new functionality that may have been added or from bug fixes that have been implemented. On multiple occasions, I have come across libraries that express low satisfaction with the functionality or reliability of their ILS, only to find out that they operate a version many years out of date and that some of their problems have been addressed in more recent versions. It's not really fair to assess the capability and reliability of an ILS unless you are working with the current version.
Regularly updating software across all the types of computers used in a library is consistent with the proactive strategy demanded for mission-critical technical infrastructure.
Hardware and Hosting Issues
Running hardware platforms past their reasonable life expectancy can also diminish reliability. Libraries should follow a reasonable replacement cycle for all types of computer hardware, including not only servers but also all types of public and staff-use personal computers. I consider a 3-year cycle ideal, though a 5-year cycle is more consistent with the budgetary limitations of most libraries. With personal computers, most libraries follow a hand-me-down approach, where more demanding users receive the newer machines, shifting their 3-year-old systems to more routine use and taking the oldest ones out of service. Don't be tempted to keep too many outdated systems in production use since a high proportion of obsolete machines will increase the cost of support and will diminish overall levels of reliability. Some parsimonious libraries avoid getting rid of anything of value, such as old computers. It's important to realize, however, that keeping obsolete equipment in use past its reasonable life expectancy will incur costs beyond any residual practical value.
As the servers on which critical library applications operate become obsolete, options for replacement include shifting to a third-party hosting arrangement as well as an in-place hardware replacement. Most ILS vendors, for example, offer hosting services and take full responsibility for hardware, operating system, and application maintenance. The hosting fees should generally prove less expensive than the cluster of costs associated with local servers, including direct costs such as purchased hardware and associated service contracts, operating system software, and database licenses, as well as indirect costs such as data center infrastructure, electrical utility costs, and allocated personnel for systems and network administration. Hosted services may also be more secure and reliable than locally hosted equipment when hosted in industrial-strength data centers with multiple layers of firewall protection, redundant power sources, and multiple internet pathways. These facilities would typically include proactive monitoring to detect problems before they result in system failures. Few libraries have the capacity to provide state-of-the-art infrastructure support. While there may be important issues to be addressed, such as privacy of patron data, taking advantage of third-party hosting services can be a reasonable strategy for the operation of at least some components of a library's critical infrastructure. When implementing hosting services, be sure that the contract includes quality-of-service language that stipulates near-perfect levels of availability and performance, with specific penalties for service interruptions.
Scale for Optimal Performance
A library's critical infrastructure should be designed to scale in proportion to anticipated levels of use. In these times of extremely powerful servers, libraries can easily architect their systems to sustain excellent performance even during periods of peak activity. Rather than rely on a single server, many applications can be distributed across multiple servers to achieve higher performance and reliability. Depending on the clustering configuration, an application can continue to operate even when one or more units in a cluster experience a failure.
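A common way to distribute a web-facing application across multiple servers is to place a reverse proxy in front of them. The fragment below is a hypothetical nginx configuration (the hostnames are invented for illustration); by default, nginx spreads requests across the listed servers and routes around one that stops responding:

```
# Hypothetical front end distributing catalog traffic
# across two application servers. If one server fails its
# health checks, requests are sent to the remaining one.
upstream catalog_backend {
    server app1.library.example:8080;
    server app2.library.example:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://catalog_backend;
    }
}
```

The same pattern extends to three or more servers, and adding capacity becomes a matter of adding a line to the upstream block rather than replacing hardware.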
Data management also benefits from redundancy. Disk storage continues to see diminished cost per gigabyte, making it much more affordable to implement redundant storage. Disk-to-disk data backup has become quite common, avoiding many of the bottlenecks and complications involved in tape-based operations. Placing copies of critical data on one or more cloud-based storage services can also provide additional layers of safety.
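The core of a disk-to-disk backup routine is just a dated snapshot copy from the primary disk to a second one. The sketch below shows that pattern in outline, assuming the source and destination paths are local mount points; production backup tools add incremental copying, retention policies, and integrity checks on top of it:

```python
import shutil
import time
from pathlib import Path

def disk_to_disk_backup(source_dir, backup_root):
    """Copy a critical data directory to a timestamped snapshot
    directory on a second disk, leaving earlier snapshots in
    place as additional redundancy."""
    source = Path(source_dir)
    # Each run gets its own dated directory, e.g. 2013-05-01_020000.
    snapshot = Path(backup_root) / time.strftime("%Y-%m-%d_%H%M%S")
    shutil.copytree(source, snapshot)
    return snapshot
```

A nightly scheduled task calling a routine like this, with the snapshot directory synchronized to a cloud storage service, covers both the disk-to-disk and off-site layers mentioned above.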
Mission-Critical Functionality
Without going into great detail regarding the relative merits of the various automation products, it's essential that libraries have technology tools well-suited to their operational needs and service objectives. If the capabilities of the library's technical infrastructure require constant work-arounds or have gaps that leave important areas of activity to manual routines or informal computerized management, then it may be time to consider shifting to a new set of automation tools. Although changing systems will be costly in terms of financial resources, the time needed to migrate data, and the time needed to retrain personnel, a system with ill-suited functionality can be even more costly, given the time lost every day to a system that takes a constant toll on operational efficiency.
Libraries tend toward a state of inertia with respect to their automation systems, even in the face of increasing frustration with capabilities that lag behind their current requirements. Most libraries have seen enormous changes in the nature of their strategic missions in recent years. New technologies reshape library collections and the work surrounding their management and fulfillment. Even when maintaining the same essential content, new technologies and media have transformed the way that libraries acquire, manage, store, and provide access to their collections. Journal articles once accessed in print are now delivered in electronic full text through subscriptions that libraries maintain with publishers and aggregators. Ebooks and audiobooks have entered the library scene in a big way.
The service models for libraries have also shifted. Checkouts of materials have moved from staff-mediated transactions across a circulation desk to patron self-service kiosks. Factual reference questions have largely evaporated as individuals routinely turn to their personal computers or smartphones for instant answers to routine research questions. This change isn't necessarily bad news for those involved at the reference and circulation desks since it allows more time for in-depth research consultations and other kinds of more meaningful service interactions.
The library's web-based services also need to be in tune with strategic priorities. The library's technical infrastructure must include the right tools for connecting its patrons with its collections and services. The library's web presence needs to include a set of discovery and fulfillment tools that can successfully facilitate patron access to all components of its collections.
Providing the right functionality from the library's technical infrastructure requires a periodic reassessment. It's important to ensure that the automation environment fits the strategic activities of the library. Does it focus in the right areas? Does it leave out important areas of activities that then require manual processes or inefficient workarounds?
If the library identifies a gap between desired functionality and what is delivered by the current technical infrastructure, a process needs to be set in motion to address any missing capabilities. Such an assessment might be similar to dealing with an aging library building. Are the problems cosmetic or structural? Can they be addressed with a renovation of the existing facility or a new extension, or is it time for entirely new construction? Some gaps in technical functionality might be resolved through the addition of a new product, such as a discovery service, or through the implementation of RFID-based equipment. In other cases, the concerns may be more systemic, prompting a process to investigate migrating to an entirely new automation environment.
A stable, reliable, and well-designed technical infrastructure doesn't happen by itself. It requires that the library address the layers of routine maintenance, incremental improvements, and periodic reinvestment appropriate for such mission-critical assets.