Since client/server is a hot topic in computing, one would think that there would be a simple definition which could be quoted to those investigating their options for purchasing an automated system. There isn't. True, there is general agreement that client/server architecture is a “software design that divides functions into client (requestor) and server (provider) subsystems that use a standardized method of intercommunication.” But where is the dividing line between the two components? An obvious choice is to move user interface functions from the central system to client PCs. But there are those who would argue that this is nothing more than putting a GUI (graphical user interface) such as Windows on the desktop, with all of the other applications on the central system. It follows that the server would be the same size machine with virtually the same functions as in a traditional host computer environment.
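The definition quoted above — client as requestor, server as provider, joined by a standardized method of intercommunication — can be made concrete with a minimal sketch. Everything here is invented for illustration (the catalog entry, the one-line "LOOKUP" protocol); it is not any vendor's actual interface, only the shape of the division:

```python
# Minimal client/server sketch: the server ("provider") holds the shared data
# and answers lookups; the client ("requestor") sends a request over a
# standardized method of intercommunication -- here, newline-delimited text
# over a TCP socket. The catalog entry and protocol are hypothetical.
import socket
import threading

CATALOG = {"QA76.9": "Client/Server Computing"}  # stand-in for the server-side database

def serve_one(server_sock):
    # Provider side: accept one connection, answer one request.
    conn, _ = server_sock.accept()
    with conn:
        request = conn.makefile("r").readline().strip()   # e.g. "LOOKUP QA76.9"
        _, call_number = request.split(" ", 1)
        answer = CATALOG.get(call_number, "NOT FOUND")
        conn.sendall((answer + "\n").encode())

def lookup(port, call_number):
    # Requestor side: open a connection, send a request, read the reply.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(f"LOOKUP {call_number}\n".encode())
        return sock.makefile("r").readline().strip()

# Start a provider on an ephemeral local port, then act as the requestor.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_one, args=(server,), daemon=True).start()

reply = lookup(port, "QA76.9")
print(reply)  # -> Client/Server Computing
```

Where the dividing line falls — whether the client does no more than render this reply, or also executes application logic — is exactly the design question the definition leaves open.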
One of the difficulties in establishing the division is determining which information is to be shared and which is to be held separately. In a library environment, almost everything in a typical library's database is of interest to everyone, or at least to most of the staff: the online catalog, on-order files, serials inventory and check-in activity, etc. Therefore, while this data could be moved off the server and distributed among clients, doing so would immediately introduce the problem of keeping data in sync throughout the system. One of the advantages of an integrated system is that activity in one module automatically updates files in all modules. Furthermore, limitations in desktop backup and recovery procedures can result in loss or corruption of data.
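The advantage of a single server-side copy can be sketched in a few lines. The module and field names below are invented for illustration; the point is only that when the serials module and the online catalog read and write one shared store, an update in one module is automatically current in the other, with no client-by-client synchronization:

```python
# Sketch of why an integrated system keeps shared data on the server:
# one update (a serials check-in, say) is immediately visible to every
# module. Record and field names are hypothetical.
shared_db = {"serial-123": {"last_issue": "v.11 no.4"}}

def check_in(record_id, issue):
    # The serials module writes to the single server-side copy...
    shared_db[record_id]["last_issue"] = issue

def opac_display(record_id):
    # ...and the online catalog module reads that same copy, so it is
    # automatically up to date.
    return shared_db[record_id]["last_issue"]

check_in("serial-123", "v.12 no.1")
print(opac_display("serial-123"))  # -> v.12 no.1
```

Distribute that record across clients instead, and every check-in would have to be propagated to every copy — the synchronization problem described above.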
In the case of turnkey systems, the dividing line between client and server is established by the vendor as part of the initial system design. Many client/server implementations have experienced poor performance because initial design assumptions were subsequently changed. For example, when less data is placed on the clients than the initial design envisioned, network performance can deteriorate as clients compete for access to the server.
In addition to the location of data, developers must decide where application functions are executed. Putting a great deal of procedural code on clients might give individual users better response times, but it may be too complex from a software distribution and maintenance standpoint.
It is our view that an RFP should not dictate client/server architecture, but should spell out functionality, user interface requirements, expected response times, file security concerns, and the other issues that have been addressed in RFPs for the past several years. It ought not to matter whether the configuration proposed by a vendor includes a RISC-based supermicro at the central site and low-end PCs and “dumb” terminals at the desktops (the traditional host-based computing approach) or a high-end 486 as a server and high-end 486s at each of the desktops (the aggressively client-oriented approach), so long as the RFP requirements are met and the price is attractive. In our experience, vendors that offer client/server are not providing less expensive solutions (except in the case of vendors that are moving away from mainframe environments), nor functionally richer ones. While they usually offer attractive graphical user interfaces, a system does not have to be based on client/server in order to do so.
Our point is not that client/server should be avoided, but that client/server should not be mandated in an RFP. There are many more important criteria that should be used in the selection of a system.