The Editors have recently seen a number of RFPs for automated library systems that call for live, online, full-load benchmark tests. While such tests provide the most accurate measure of a system's capabilities, they are very expensive ($10,000 or more) and quite unnecessary when systems of the size and complexity required by a library are already operational at other institutions. The purpose of a benchmark is to determine whether a vendor's proposed system can meet a library's future requirements. If a library can match its five-year data to an actual installation of its vendor's system, that installation can be used to establish the system's capabilities at significantly lower cost. The library can protect itself legally by including a contract clause that obligates the vendor to upgrade the system at the vendor's expense if the system subsequently fails to meet the requirements.
When a vendor does not have a comparable installation that uses the proposed hardware, a benchmark is warranted. The test can be conducted on a machine at the vendor's offices or on a machine conditionally installed in the library; the former is almost always less expensive. A library should prepare carefully for such a benchmark by identifying all of the pieces of equipment to be used in the test, the name and version of the operating system, the release number of the applications software, the size of the database, the number of indices, all transaction types, the transaction mix, and the number of transactions per minute.
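The checklist above amounts to a written specification that both the library and the vendor can sign off on before the test. As a minimal sketch, the items could be recorded in a structure like the following; all field names and sample values are illustrative assumptions, not drawn from any actual RFP.

```python
from dataclasses import dataclass

# Hypothetical benchmark specification record. Every field name and the
# sample values below are illustrative; a real RFP would define its own.
@dataclass
class BenchmarkSpec:
    equipment: list[str]               # every piece of hardware in the test
    operating_system: str              # name and version
    software_release: str              # applications software release number
    database_size_records: int         # size of the database
    index_count: int                   # number of indices
    transaction_mix: dict[str, float]  # transaction type -> share of load
    transactions_per_minute: int       # required throughput

    def validate(self) -> None:
        # The shares of all transaction types should account for the
        # whole load, so the mix must sum to 1.0.
        total = sum(self.transaction_mix.values())
        if abs(total - 1.0) > 1e-9:
            raise ValueError(f"transaction mix sums to {total}, expected 1.0")

spec = BenchmarkSpec(
    equipment=["central processor", "disk drives", "terminals"],
    operating_system="VendorOS 3.2",
    software_release="Release 4.1",
    database_size_records=500_000,
    index_count=6,
    transaction_mix={"checkout": 0.40, "checkin": 0.35, "search": 0.25},
    transactions_per_minute=120,
)
spec.validate()
```

Writing the specification down this explicitly makes it straightforward to verify, after the fact, that the configuration actually tested matches the configuration proposed.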
Libraries that want to conduct tests prior to purchase, but at lower cost than a benchmark, should consider a simulation test. A simulation uses a mock-up of real activity. The most common approach is a predefined simulation run on a single computer, with the transaction file read from a tape device. Less common is a predefined simulation with two computers, one of them a control computer that feeds in the transactions. Simulations are less reliable than benchmarks, but they cost only about half as much.
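The predefined simulation described above can be sketched as a simple replay loop: a prerecorded transaction file stands in for live terminal operators and is fed to the system under test at the required rate. This is only an assumed illustration; `apply_transaction` is a hypothetical stand-in for whatever interface the tested system actually exposes.

```python
import time

def apply_transaction(txn: str) -> None:
    # Hypothetical stand-in: in a real test this would submit the
    # transaction to the system under test and record the response time.
    pass

def replay(transactions: list[str], per_minute: int) -> int:
    """Feed a prerecorded transaction file at a fixed rate.

    Returns the number of transactions submitted.
    """
    interval = 60.0 / per_minute  # seconds between transactions
    for txn in transactions:
        apply_transaction(txn)
        time.sleep(interval)
    return len(transactions)
```

In the two-computer variant, this loop would run on the control computer, which submits each transaction to the second machine over a terminal line rather than calling it directly.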