Welcome to the Official Internet World World Wide Web Yellow Pages, a resource that aims to help you find information on the World Wide Web relevant to your needs and interests. The World Wide Web, or more simply, the Web, is a large and complex matrix of information servers on the Internet that offers an incredible amount of information. There are tens of thousands of servers on the Web, and each server can offer a vast amount of information. The Web is very loosely organized, and there is no one index or starting point that provides structure. The size and complexity of the Web can definitely intimidate newcomers to the Internet. We believe that this directory will be an important resource for newcomers to the Internet and will also assist more experienced users. While not a comprehensive directory of the Web, we believe that the 4,000 entries included are some of the most interesting and useful sites. To provide some context for the directory, a brief discussion of the Web itself is in order.
World Wide Web and the Internet
The Internet stands as one of the most important communications media ever. No matter what your walk of life may be, you cannot escape some exposure to the Internet--even if you don't use a computer. The Internet is a vast and complex entity. One of its components, the Web, has gained phenomenal popularity and experienced explosive growth. The Web has transformed the Internet completely. The Web makes traversing the Internet easy and fun--no special technical knowledge is required.
The Internet is a massive network that interconnects thousands of computers all over the world. A network is simply a set of components that help computers exchange information with one another. The Internet is a collection of interconnected networks that spans practically the whole globe.
On a technical level, the Internet is a collection of hardware and software. Computers of all sorts connect to the Internet--supercomputers, mainframes, workstations, and microcomputers, all running different operating systems and software applications. All these diverse systems communicate through a common networking protocol, called TCP/IP. With a physical connection to the Internet and TCP/IP networking software, any computer can be part of the Internet.
But the Internet is much more than a technical entity; it is a worldwide community of information providers and consumers. The underlying hardware, software, and data communications links exist for the purpose of sharing information. Individuals and organizations establish servers on the Internet to make information available to others. The goal of the Web and other systems on the Internet lies in providing access to information content.
The Web: Concepts and Protocols
The Web is the information delivery system that has come to dominate the Internet. The Web gives the Internet a user-friendly interface. Through a piece of software called a Web browser, one can easily navigate the Internet to retrieve information, download software, or just generally explore. But underlying the view of the Internet presented by the Web browser is a set of design concepts and protocols that make this user-friendly approach to the Internet possible:
One of the characteristics of the Web lies in its client/server design. Client/Server computing divides computing tasks among many different computers. Servers handle the part that involves the storage, indexing, and searching of information. Clients provide a user interface and specialize in the presentation of information. Client/Server systems depend on a network to complete the process. For the Web, the network involved is the global Internet.
Clients communicate with servers, and vice versa, through standard protocols that operate on the network. The protocol designed for the Web is called HTTP, or Hypertext Transfer Protocol. One of the main characteristics of HTTP is that it is stateless, meaning that no permanent session is established between a Web browser and a server. Only a single request is processed at a time. HTTP, as network protocols go, is fast, simple, and efficient.
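To make the shape of an HTTP exchange concrete, here is a minimal sketch in Python (a modern language, used here purely for illustration; the host name and canned reply are hypothetical). The browser sends a short text request consisting of a request line, headers, and a blank line; the server answers with a status line, headers, and the document; and the connection closes with no session state retained.

```python
def build_request(host, path):
    """Compose a minimal HTTP/1.0 GET request as a text string."""
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

def parse_status_line(response):
    """Extract the protocol version, numeric status code, and reason phrase."""
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

request = build_request("www.iw.com", "/")
print(request.splitlines()[0])          # GET / HTTP/1.0

canned_reply = "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<HTML>...</HTML>"
print(parse_status_line(canned_reply))  # ('HTTP/1.0', 200, 'OK')
```

Because each request is a self-contained exchange like this one, the server needs no memory of previous requests--which is exactly what "stateless" means.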
The software that allows a computer to be a Web server is called an HTTP daemon, or HTTPD. Although most of the computers that act as Web servers run sophisticated multitasking operating systems such as Unix or Windows NT, there are HTTPD applications for smaller systems such as the Macintosh and the PC as well. Both CERN and NCSA have developed HTTPD software and make it freely available. While the free Web server software provides basic capabilities, commercial versions also abound. The commercial versions not only offer technical support, but also include enhanced security and other extra features.
HTML. In order for information to be available on the Web, it must be formatted according to a set of rules called Hypertext Markup Language, or HTML. HTML is a simple coding system that is used within documents to format headings, define links, incorporate images, and otherwise define the presentation of information. HTML is an evolving standard, always expanding to incorporate more sophisticated formatting commands. The current version, HTML 2.0, supports such things as fill-in forms in addition to the standard document formatting commands. Version 3.0 adds features such as wrap-around text, backgrounds, tables, and other more sophisticated formats.
While most word processors follow a what-you-see-is-what-you-get (WYSIWYG) approach, HTML uses commands embedded in text to specify how the text will be presented. Just to illustrate what HTML coding looks like, here is an example of a very simple HTML document:
<HTML>
<HEAD>
<TITLE>A Sample HTML Document</TITLE>
</HEAD>
<BODY>
<H1>Section Title</H1>
<P>A paragraph of sample text would go here. The text in an HTML document can contain links: <A HREF="http://www.iw.com/">Mecklerweb's iWORLD</A></P>
<P>Inline images can also be included: <IMG SRC="sample.gif"></P>
<HR>
Comments to <A HREF="mailto:firstname.lastname@example.org">Marshall Breeding</A>
</BODY>
</HTML>
Information on the Web is identified through Uniform Resource Locators, or URLs. The URL identifies the Web server that hosts the information, and specifies its location within that server. Since URLs are used throughout this directory, let's take a closer look. URLs consist of three components:
The first specifies the type of the resource. This component will be http:// for native Web documents, gopher:// for gopher-based resources, ftp:// for files from anonymous FTP servers, and telnet:// for Telnet-accessible systems.
The second component specifies the Internet address of the server, such as www.library.vanderbilt.edu. This address must be a valid host name that can be resolved through the Domain Name System, or DNS. Occasionally, you will see the Internet address specified in its dotted decimal form, such as 184.108.40.206. This form should be avoided when possible.
The third component of the URL describes the location of a document on a Web server. Its simplest form is a single slash, indicating the default document configured for that server. In most cases, Web servers are configured to display a home page or index page as the default. Often, however, the URL specifies a document embedded somewhere in the server's file system. These file names will often contain special symbols such as slashes, periods, and the tilde. Remember that file names are generally case sensitive, meaning that you must use upper- and lowercase letters exactly as specified. Also be careful not to confuse the forward slash with the backslash. If any of these details is wrong, the server will return a message indicating that the requested URL cannot be found. An example of the document location component of the URL is /user/breeding/home.html.
Putting these three components together, a sample URL might look something like:
http://www.library.vanderbilt.edu/user/breeding/home.html
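A URL such as http://www.library.vanderbilt.edu/user/breeding/home.html can be split back into the three components described above. This short sketch uses Python's standard urllib.parse module (a modern tool, shown here only to illustrate the structure of a URL):

```python
from urllib.parse import urlparse

url = "http://www.library.vanderbilt.edu/user/breeding/home.html"
parts = urlparse(url)

print(parts.scheme)   # http -> the type of the resource
print(parts.netloc)   # www.library.vanderbilt.edu -> the server's Internet address
print(parts.path)     # /user/breeding/home.html -> the document's location on the server
```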
Another fundamental characteristic of the Web involves hypertext. Through hypertext, information need not be organized sequentially or hierarchically. Rather, documents can include links that allow one to navigate through related concepts, and to easily return to previous points. While the Web did not originate the concept of hypertext, it thoroughly embraces it. The Web incorporates hypertext through links. Links are words within a document that have additional information available. If the reader wants more information, clicking on the word will invoke the document specified by that link. Most Web browsers have a button that jumps back to the previous document, allowing the reader to easily return from visiting a link. Links are generally presented in a different color from the surrounding text and may be underlined.
The Web was designed to allow information to be distributed on many different servers and to allow one to move easily from one server to another. Links can point to other documents within the same server, or on a different one. As one follows links, the information may originate from many different servers all in different parts of the world.
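The way a browser discovers the links in a page can be sketched briefly: scan the HTML for anchor tags and collect the target URLs from their HREF attributes. This example uses Python's standard html.parser module, and the sample page is invented for illustration:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the HREF targets of every <A> (anchor) tag in a document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                      # anchor tags carry the hypertext links
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = ('<P>See <A HREF="http://www.iw.com/">iWORLD</A> and '
        '<A HREF="http://www.yahoo.com/">Yahoo</A>.</P>')
collector = LinkCollector()
collector.feed(page)
print(collector.links)   # ['http://www.iw.com/', 'http://www.yahoo.com/']
```

Each collected URL may point at the same server or at one on the other side of the world, which is what lets a reader hop freely among servers.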
The Emergence and Development of the Web
Originally designed to share information among a relatively small group of researchers, the World Wide Web has since evolved into a global information system. The Web was conceived and first implemented by Tim Berners-Lee at CERN, the European Laboratory for Particle Physics near Geneva, Switzerland. Berners-Lee was interested in developing a system for disseminating information for this organization and its researchers on the Internet. Berners-Lee had been involved with hypertext information systems for a number of years. He developed planning documents and began implementing a system based on these concepts in about 1990. These ideas came to fruition in May of 1991 when the Web was born at CERN.
The Web's first year or so saw only moderate growth and acceptance. The content available on the Web was limited and highly specialized. The tools available to access the Web were also in short supply. Initially the only browsers available were for the NeXT operating system, since that system was the development platform used by Berners-Lee at CERN, and a primitive line-mode browser. By about 1993, the Web consisted of about 350 servers.
At about the same time that the World Wide Web was being developed at CERN, the Internet Gopher was attracting significant interest. This information system was developed by the University of Minnesota in about 1990. Prior to the availability of Gopher servers and clients, the tools available for obtaining information on the Internet were not generally palatable to non-technical users. Finding information and retrieving it through tools such as FTP and Telnet required significant training or practice. Gopher provided a much more user-friendly approach to accessing information on the Internet and allowed even non-technical users to take advantage of these resources. Gopherspace, as it came to be called, was basically hierarchical and menu-oriented. During the early 1990s the Internet Gopher gained significant momentum and for a while was the dominant information delivery system. A very large number of campus, research, and business information systems were based on Gopher protocols.
Mosaic changed everything. This Web browser, developed at the National Center for Supercomputing Applications (NCSA), gave the Web an attractive, easy-to-use interface. Marc Andreessen, then a student at the University of Illinois at Urbana-Champaign, was a key designer and developer of Mosaic. Mosaic was originally developed for X Windows on Unix systems; Macintosh and Windows versions were released a few months later. NCSA had earned a reputation for creating useful network software and providing it free of charge to the Internet community. NCSA's Telnet software and other TCP/IP utilities were already popular among PC and Macintosh users of the Internet.
Beginning in about 1993, Mosaic stormed the Internet. This Web browser proved to be a seminal application. Its popularity drove the Web as it became the dominant information delivery system on the Internet, quickly surpassing gopher. NCSA Mosaic also helped drive the growth of the Internet itself. Many individuals and organizations who previously had not taken interest in the Internet, now saw it in a new light. No longer was the Internet perceived as solely the domain of techies, but was now seen as having the potential to reach a broad audience of information consumers.
Mosaic's popularity was such that its name became synonymous with the Web itself. Especially among those new to the Internet, the term Mosaic was commonly used to describe all aspects of the Web.
In 1994 NCSA licensed Mosaic to Spyglass Communications, a commercial software developer. Spyglass, starting with the original NCSA programming code, refined this Web browser into a commercially viable product. Spyglass does not sell its Enhanced Mosaic directly to individuals, but licenses it to other software vendors who integrate it into their products.
While Mosaic gave the Web a major jump start, Netscape currently rules the Web. Jim Clark, the founder of Silicon Graphics, started a new company, now known as Netscape Communications, to develop software for the World Wide Web. Marc Andreessen, a key developer of Mosaic, joined this company and led the development of the Netscape Navigator, a new Web browser. While Netscape bears some resemblance to Mosaic, it was created completely anew. While Mosaic was developed as a general-purpose Web browser, primarily in an academic environment, Netscape was designed from the ground up as one that would support commercial uses as well. The security and privacy of information as it traverses the Web was a key consideration in the development of Netscape.
Netscape is now the dominant Web browser on the Internet. This software was formerly distributed freely on the Internet to non-commercial users, and was available to commercial organizations at a modest cost. With the upcoming versions, Netscape plans to begin charging for its software. Netscape has a reputation as having a slightly more intuitive interface than Mosaic, and is faster to incorporate support for newly emerging display and security features. A large number of Web sites now advertise themselves as “enhanced for Netscape” and cannot be properly viewed with other browsers. Netscape's security features are required for many sites that offer financial transactions.
The current character of the Web
Just as there are many components of society at large, the Internet also represents many different communities and interests. Some of the communities that can be found on the Web include educational and research organizations, government agencies, non-profit organizations, and commercial businesses.
The Internet began as a network of research and educational institutions. Great emphasis was placed on freely sharing information. Encryption and security were given due consideration, but were not driving forces. Universities, libraries, and other research centers were the dominant players. Significant government subsidies helped to support its infrastructure.
The flavor of the Web has changed significantly in the last few years. Commercial interests now dominate. As government contributions to the support of the Internet have waned, commercial organizations have taken a stronger role in sustaining the Internet. Security, reliability, and adequate bandwidth are major considerations.
Commerce on the Web
Two layers of commerce pervade the Web. The one that is currently more developed involves business-to-business interactions. A large number of servers on the Internet represent businesses that market their products and services primarily to other businesses. These Web sites generally specialize in providing pre-sales information and post-sales support.
A commercial Web site can play any of a number of roles, including that of marketing, support, and sales. As a marketing tool, a company can use the Web to promote its products and services. One commonly finds detailed descriptions of a company's products on the Web. All the information traditionally delivered through printed brochures, catalogs, and sales agents can be efficiently presented on a corporate home page. One can expect to see not only descriptions of products, but pictures and detailed technical specifications. The Web allows a company to promote its products to a worldwide audience.
An example of a site that specializes in business-to-business communications would be Novell Corporation's site. It includes complete information about all its products to its potential customers. This site also offers a vast amount of information related to the technical support of its products, including a massive database of technical information, product updates, and drivers. You cannot initiate a purchase and make payments on this site, however.
More and more businesses use the Web to gather information about products they intend to purchase. Depending on the nature of the product, one can generally do considerable comparative research through the Web. One of the fields where this is most mature is, not surprisingly, the computer industry. As one who must constantly make purchasing decisions for computer equipment, I find that I can almost always obtain enough information to make a well-informed purchase decision solely from information available on the Web. Most companies provide detailed descriptions of their entire product line, including photographs of the equipment, technical specifications, and price lists. Each site will also provide contact information for obtaining additional information or making a purchase.
The other layer of commercial activity on the Web targets retail consumers. The business-to-consumer aspect of the Web is still in its infancy. Several factors relate to the potential development of this sector of the Web. One concerns the recent upsurge in the numbers of individual consumers that access the Web. Not only have an increasing number of individuals and households elected to purchase Internet connections from Internet Service Providers, all of the major online services--each with hundreds of thousands of customers--now offer access to the Web. The introduction of this critical mass of consumers to the Web offers a tremendous market opportunity to businesses that offer their products and services on the Web.
Another critical factor in the development of commerce on the Web directed at retail consumers involves the ability to process payments. The convenience of purchasing products on the Web is negated if some other medium has to be used to process payments. But the mechanisms available for businesses to use to receive payments are still evolving, and consumer confidence in their security is still somewhat low.
Commercial sites are beginning to emerge that offer retail consumers an opportunity for completing a purchase through the Web. Following the same model as home shopping television networks, many see the Internet as a vehicle for selling goods and services directly to consumers. A number of Web sites have emerged, including sites sponsored by individual companies as well as “cybermalls” where one can shop among a whole variety of electronic stores.
For companies to sell their goods on the Internet, there must be trusted methods for exchanging money. The Internet has never had a great reputation as a highly secure network. Quite the contrary. Stories of the hackers, computer break-ins, and security compromises that abound on the Internet have been reported in the popular media as well as in the computer trade press. In order for individuals to make payments on the Internet, they must trust that their financial transactions are secure. No one will type in their credit card number on their computer if they fear they will be vulnerable to fraud.
Several methods have emerged to support safe financial transactions. For a transaction to be safe, several criteria have to be met. One basic requirement is that the information be completely private between the buyer and the seller. If a credit card number, bank account, or other personal number is involved, it must be well-protected on the network. Given that the structure of the Internet allows one to easily monitor network traffic and eavesdrop on others, it is vital that these numbers be encrypted and decipherable only by their intended recipient. Two standards are available to support this level of security for financial transactions: Secure HTTP (S-HTTP) and Netscape's Secure Sockets Layer (SSL). Netscape includes SSL, which adds this level of security to Web-based financial transactions. Both the Web server and Web client must have support for this protocol to assure that the transaction is secure. Netscape displays an indicator that shows whether the server to which it is connected is secure or not.
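The two requirements described above--encrypting the traffic and verifying that the server is who it claims to be--survive in the modern descendant of SSL, which ships with Python as the standard ssl module. A minimal sketch (shown only to illustrate the concepts; the module itself postdates the period described here):

```python
import ssl

# A default client context enforces both safety criteria discussed above:
# traffic is encrypted, and the server must present a valid certificate
# whose name matches the host before any data (such as a credit card
# number) is transmitted.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: server must prove its identity
print(context.check_hostname)                    # True: certificate must match the host name
```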
Searching and navigating the Web
As the Web expands, indexes, directories, and other tools must emerge to assist people in finding information. The Web lacks formal structure or organization. A number of guides and indexes have been developed to help users find information on the Web. While most of these finding tools began in educational institutions, almost all of them have migrated to the commercial sector. Few charge directly for searching the Web, but obtain revenue from sponsors that advertise on these heavily visited sites. Some offer limited searching for free, but charge for more sophisticated services.
The tools available for finding information on the Web follow several different approaches. Some rely on developers of Web servers to submit information about their sites, some go out and manually search for new sites, while others create indexes of the Web through robotic spiders that traverse all known sites.
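The index a Web-traversing robot builds can be sketched as a simple word-to-pages mapping. This toy Python example (the page texts and URLs are invented for illustration) shows the core idea behind such indexes: record, for every word, which pages contain it, so a search can answer from the index without revisiting the Web.

```python
def build_index(pages):
    """pages: {url: page text}. Returns {word: sorted list of urls containing it}."""
    index = {}
    for url, text in pages.items():
        # set() ensures each page is listed once per word, even if the word repeats
        for word in set(text.lower().split()):
            index.setdefault(word, []).append(url)
    return {word: sorted(urls) for word, urls in index.items()}

pages = {
    "http://www.example.com/a": "particle physics research",
    "http://www.example.com/b": "physics software archive",
}
index = build_index(pages)
print(index["physics"])   # ['http://www.example.com/a', 'http://www.example.com/b']
print(index["archive"])   # ['http://www.example.com/b']
```

A real robot does the same thing at vastly larger scale, fetching each page over HTTP and following its links to discover new sites.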
A handful of Web sites have earned a reputation as being premier tools for finding information on the Internet. Some of the best include:
- Yahoo. (http://www.yahoo.com/) This index to the Web was originally conceived and implemented by two graduate students at Stanford University, David Filo and Jerry Yang. Yahoo is a hierarchical index, organized by general subjects. Under each general subject area is a list of narrower topics, which in turn have entries for individual Web sites.
- Webcrawler. (http://www.webcrawler.com/) Now owned and operated by America Online, Inc, this search engine depends on a robot that traverses the Web and builds its indexes. The Webcrawler was originally developed by Brian Pinkerton at the University of Washington.
- Lycos. (http://www.lycos.com/) Developed at Carnegie Mellon University and now owned by CMG@Ventures, Lycos is another massive index of the Web. Lycos has earned a reputation as one of the most complete of the Web indexes.
- The WWW Virtual Library. (http://www.w3.org/hypertext/DataSources/bySubject/Overview.html) This resource is a catalog of subject areas that is distributed on many sites on the Web. The WWW Virtual Library consists of Internet resource guides for each major academic research area and discipline. Specialists in each discipline maintain their respective parts of the catalog.
- net.Happenings. This site provides information about new sites on the Web. When one creates a new Web site, information about that site needs to be included in the various Web finding tools. Almost all creators of new Web sites will announce it to net.Happenings. If you are looking for what's brand new on the Web, check here first.
What you need to access the World Wide Web
This directory highlights the vast array of information available on the Web. If you don't already have access to the Web, you may be asking yourself how to get connected. Here's what you need.
The first requirement for accessing the Web involves a connection to the Internet. You may be lucky enough to work for an organization that provides access to the Internet. If not, there are lots of other options for getting connected. Access to the Internet is available through companies known as Internet Service Providers or ISPs. These companies provide the data communications link and any hardware and software required to connect your computer or group of computers to the Internet.
There are many ways to connect to the Internet. If you are planning to connect your organization's network to the Internet, then you will likely want to consider a high-speed dedicated link to the Internet. With this type of connection, all of the computers on your company's network access the Internet, and you can establish servers that can be accessed by others out on the Internet. You will need to have equipment such as a router, your computers will need to be connected together through a local area network, and this network will need to support TCP/IP protocols.
If you need to connect a single computer to the Internet there are simpler options. You can use a modem and a regular telephone line to connect your personal computer to the Internet. Internet Service Providers offer dial-up connectivity for individuals and businesses. The particular services available vary among the different ISPs, but you can generally expect to receive a certain number of hours of connect time, electronic mail, and access to Internet services such as the Web and newsgroups. The ISP will generally provide the software that you need to connect to the Internet through their facilities, including the login scripts and procedures required to get connected. You will need to have a high-speed modem installed in your computer. These days you'll want to purchase a modem that communicates at 28.8 kilobits per second or faster. The costs for accessing the Internet vary among the different ISPs. Expect to pay a set amount per month plus hourly connect charges.
Once you have your computer physically connected to the Internet, you will need software to allow it to communicate. In some cases the ISP will provide this software, but this is not always the case. The Internet uses a network protocol called TCP/IP, and you will need software for your computers that implements this protocol. There are both shareware and commercial products available for most types of computers that provide the required TCP/IP support. If you are a PC user running Microsoft Windows, there are a number of packages available. Windows for Workgroups, Windows 95, and Windows NT all have TCP/IP support built in. Macintosh users will need to have MacTCP or equivalent software. All versions of Unix come with TCP/IP support.
Another software component helps your computer communicate with your network card or modem. When you connect a network to the Internet, you will need to have software drivers for the network cards in the computers. If you are connecting to the Internet with a modem, you will need software that implements a protocol called SLIP (Serial Line Internet Protocol) or PPP (Point to Point Protocol). This software allows your modem to communicate using the Internet's TCP/IP protocol. This software will also include the ability to control your modem to dial your ISP's telephone number, establish the connection, and follow any required login sequence. Most ISP's will provide you with a script that automates this process.
Once you have your connection to the Internet in place and have installed your TCP/IP software, you will then need a piece of software called a Web browser to access the Web. The Web browser is your interface to the World Wide Web. Most browsers are graphical applications that follow a point-and-click approach to navigating through the Web.
Many different Web browsers are available for each of the major computer platforms, especially for Windows, Macintosh, and Unix systems. The two major Web browsers in use today are Netscape and Mosaic. The Netscape Navigator is a commercially produced Web browser from Netscape Communications Corp. Netscape is by far the most widely used browser. For further information about Netscape and to order the latest copy, see http://home.mcom.com/. Previous versions of Netscape were available without cost for non-commercial use. Another popular Web browser is NCSA Mosaic. This browser can be obtained from http://www.ncsa.uiuc.edu/SDB/Software/Mosaic/.
While all these components are necessary to get connected to the Internet and surf the Web, several products are available that combine all the services and software into an integrated package. Many of the Internet Service Providers will provide all the software required, completely pre-configured to dial into their system. Some software packages contain all the required components, and include an application that automatically configures the software for any of the major Internet Service Providers. An example of this genre of software is Netmanage's Internet Chameleon.
About these Yellow Pages
This directory was created to help users of the Web find resources related to their interests. The Web is complex and ever-growing. While many resources exist on the Web itself for finding resources, it is vital to have a directory of important resources in print form for those new to the Web who haven't mastered all its tricks. This Web directory also has a human touch. Every Web site listed here was visited by a live person, unlike many of the online indexes that are generated by computers.
The Web has grown to be so large that no directory like this one can claim to be comprehensive. We aim, rather, to provide listings for the most interesting and important resources on the Web. Web sites come and go. Given that a printed directory is a static entity, it at best represents a snapshot of the types of resources that are available at any given time. Check resources such as net.Happenings (http://www.mid.net/NET/) to find recent additions to the World Wide Web.
Each listing in this directory includes several different pieces of information.
Title. The title of the site is generally what the site calls itself. The title generally reflects the name of the organization sponsoring the site and may or may not indicate the type of information available.
URL. The URL, or Uniform Resource Locator, describes the location of this resource on the Internet. With most Web browsers, you can simply type in the URL to visit a particular site. All the URLs in this directory will begin with “http://” indicating that they are Web resources. When entering the URL into your browser, remember to pay attention to upper and lower case.
Sponsor. The sponsor of the site is the organization or individual that creates and maintains its content or that provides other resources to support the site.
Subject. We have assigned subject headings to each of the directory entries and have organized the directory according to these subject categories.
Description. A descriptive paragraph is provided for each entry. Each description in the directory was composed by an individual who visited the site and reviewed its pages. These descriptions are designed to provide information about the kinds of information that can be found on each site. The descriptions are intended to be brief and concise, yet informative.
Webmaster. When available, we include the name and e-mail address of the Webmaster for each site listed. This is the individual responsible for maintaining the site. One can contact a Webmaster to obtain additional information related to the site, to report errors or problems, or to provide feedback to the site's developers.
Recommended. We list some of the sites in this directory as “recommended.” The quality and usefulness of Web sites varies tremendously. To help our readers focus on the best sites, we have flagged no more than five percent of the entries as recommended. We judge these sites as providing especially important information, or as presenting their information in a unique or particularly effective way.
A number of individuals contributed to this directory. Most of the reviewers for the project were recruited from among my colleagues at Vanderbilt University. All are experienced and knowledgeable users of the Internet. Each reviewer was assigned a general set of topics. Zora Oatley was charged with assigning and revising the subject headings assigned to each entry. The reviewers and their areas of specialization are as follows:
Jody Combs: Philosophy, Religion, Libraries, Universities
David Carpenter: Economics, Sociology, Archaeology
Cindy Boin: Business, Finance
Carlin Sappenfield: Mathematics, Science, Engineering
Rick Stringer-Hye: Science, Engineering
Nancy Godleski: Arts, Entertainment, Leisure
Zora Oatley: Miscellaneous topics
The creation of this directory was a very labor-intensive process. My heartfelt thanks go to all these individuals who contributed to this project. I also appreciate the hard work of the staff at Mecklermedia who made this project possible, especially Carol Davidson, the Managing Editor, and Tony Abbott, Senior Vice President of Mecklermedia, for giving me this opportunity to help create an important Internet resource. I also want to thank my family and friends for their support.
Marshall Breeding, Vanderbilt University