Telecommunications Technology and Infrastructure

Marvin A. Sirbu

Carnegie Mellon University

1. Introduction

It is popular today to talk about "information infrastructure" in the same way we talk about our transportation system, or electricity distribution.[1,2,3] Yet if we look closely at our transportation system, we see that the broad term "infrastructure" covers a dazzling variety of technologies serving very different needs. From cow paths to eight-lane expressways, from cars to trucks to barges to supersonic transports, our transportation infrastructure means many different technologies carrying many types of traffic at widely varying speeds.

Similarly, when we look at the technology issues behind the phrase "information infrastructure," many techniques are used to meet an ever wider range of service demands. The National Telecommunications and Information Administration has defined the information infrastructure as "all of the facilities and instrumentalities engaged in delivering and disseminating information throughout the nation," including not only the telecommunications (e.g. telephony) industry but also the mass media (broadcasting and cable television service), the Postal Service, publishing, printing, and the production and distribution systems of the motion picture industry.[4] It would take an encyclopedia to review all of the technologies used in providing our communications infrastructure. In this paper, we have a much more modest agenda. First, we shall attempt to provide a framework within which to situate a discussion of communications technologies. Second, we will identify some of the key technical developments that have occurred in the last decade and the critical future developments that are expected to substantially alter the nature of our communications infrastructure. Third, we examine the current political debate over communications infrastructure and discuss how technology developments are both framing and likely to alter the terms of the debate in coming years.


1.1. A Generic Telecommunications Network

The economic imperatives of providing telecommunications services lead virtually all telecommunications networks to have a very similar generic architecture, illustrated in Figure 1. The user interacts with the network through some sort of user terminal. It may be a telegraph key, a telephone handset, a computer or a TV camera; its function is to capture information from the user and to convert it into a form suitable for electronic transmission. Next there is a telecommunications link from the user's terminal to some type of switching node. This first access link has a unique characteristic: the link must be capable of carrying the maximum instantaneous traffic capacity the user will ever need; but since most users do not spend all their time communicating, the link is frequently inactive and its capacity unused.

Figure 1.

Communications Network Architecture

The third element is the first level switching or concentration point, sometimes referred to as an end office. This first level of switching may be a telephone company switching office providing residential service and business Centrex, or it may be a corporate owned Private Branch Exchange (PBX). Such a node serves two important functions. First, it may make connections between two users' access lines, allowing them to communicate with each other. Second, it can concentrate traffic from multiple users onto a single high capacity communications link which carries traffic to a higher level switching node. Because high capacity communications links exhibit substantial economies of scale, network providers can realize great savings in transmission as a result of this concentration function. Moreover, the same inter-switch link can be shared sequentially by many different callers, as the conversations of first one subscriber and then another are carried across the trunk.

Higher level switching nodes, interconnected by high capacity links, provide long distance transport between regions served by end office switching nodes.

Finally, there is the logical element of the network indicated by the box labeled control. The control function provides the intelligence for setting up the switching paths needed to interconnect two parties that wish to communicate. While Figure 1 shows it as a centralized abstraction, control can be distributed either to each interface or to each switching node.[5]

From this simple abstraction, a number of profound observations can immediately be made. First, communications network costs per user are typically dominated by the investment in the "last mile" -- the link between the user's terminal and the first concentration point. Capital in the last mile is dedicated to a single user, and not shared, as contrasted with switching nodes and inter-office trunks whose costs are shared by the traffic of many callers. The capital investment of the nation's local exchange carriers in what is referred to as the "local loop" is more than the investment in all other parts of the network combined.

Second, high capacity communications links will make their first appearance in the long distance trunking part of the network. Even if the traffic of each individual user is small, the aggregation of traffic from many simultaneous users can require large capacity transmission lines. Moreover, the high costs associated with high speed transmission can be justified in the trunking plant, where costs are shared over many users; only when these costs are greatly reduced can we expect to see high capacity links migrate out towards the user from the carrier's end offices.

Third, as we imagine new end-to-end services such as digital data transmission or video dial tone,[6] changes are required throughout the network: in terminal equipment, loop plant, switching, control, and inter-office trunking. The pace of service introduction is determined by the last of these items to be upgraded. Because of the capital invested in the local loop, there is a great incentive to find ways to use existing local loops to deliver new services. Conversely, would-be competitors with the existing local exchange carriers must focus their attention on reducing costs for the last mile--if not the last few hundred meters--if they hope to be successful.

1.2. From Wires to Services

The previous discussion focused on the geographic elements of communications infrastructure. An alternative decomposition looks at the different levels of added value. At the lowest level, communication requires transmission channels. These channels may consist of copper or optical fiber, radio waves from terrestrial or satellite transmitters, or free space lasers.

While there are many companies today that sell only point to point communications channels, a communications infrastructure generally means a network. A network implies a set of channels linked by some form of switching that enables any two parties connected to the network to send signals between them.

Finally, at the highest level, there are complete services, such as Plain Old Telephone Service (POTS), electronic mail, video telephony, or enhanced services involving protocol conversion and interaction with stored data. Complete services require many ancillary functions ranging from sophisticated billing and reporting to directories or complex data processing.

Charles Jonscher has observed[7] that the businesses of providing each of these different levels of added value are very different indeed. The transmission business consists of delivering a highly standardized commodity--each bit transmitted is the same as every other bit. Success in a commodity business requires low cost production. That in turn requires investment in state of the art technology for production--i.e., transmission facilities. The successful vendor of transmission focuses on process innovation as opposed to product innovation. Commodity businesses are also characterized by capital intensity and large economies of scale.

At the other extreme is the business of providing services, particularly enhanced services such as electronic mail, protocol conversion, and information services. These services are characterized by a high degree of customization for each end user or vertical market segment. Successful participants in this end of the business will focus on product, not process, innovation. Skilled system developers, not capital, are the scarce resource, and comparative advantage requires a focus on the customer and his needs as much as on the processes of production.[8]

The networking part of the business is intermediate between these two. While not as much a commodity business as transmission, there is much more standardization than in enhanced services. The cost of switching nodes is increasingly dominated by design and software costs, which exhibit significant economies of scale. At the same time, various traffic types require different switching technologies; thus there is significant variation among networks optimized for different traffic types or different peak channel speeds.

We will return to these distinctions as we examine more carefully the major trends in the underlying technologies of communications and their implications for information infrastructure.

2. Technology Trends

2.1. The New Traffic

For years the nation's telecommunications infrastructure was optimized to carry voice telephone calls. Analog signals of 3 kHz bandwidth were the dominant form of information carried. Today, data traffic is steadily increasing its share. Unlike voice traffic with its well-defined characteristics, data traffic requirements vary from a few hundred bits per second for telemetry data to billions of bits per second for supercomputer interconnection.

The new data traffic differs from voice traffic in two fundamental ways. First, it is largely bursty traffic. That is, unlike voice traffic, where an open network connection will typically be used almost continuously by somebody speaking, data traffic flows in fits and starts. A user types a few characters at a terminal, receives a screenful of data in response, and then may pause for many seconds while he or she studies the information received. Second, whereas a digital channel with a peak speed of 64 kbps can carry a 3 kHz voice channel, many data applications require much higher peak speeds to provide the desired quality of service. Consider a doctor examining an image sent electronically from a Magnetic Resonance Imaging (MRI) scanner. A single image might consist of 2000 by 2000 picture elements, each requiring 24 bits to encode a full range of color, for a total of 12 megabytes of data. To transmit that data in a time comparable to the rate at which a doctor can flip through a series of film images requires a peak transmission rate of several hundred megabits per second.

A simple way to understand this demand for higher speed networks is to look at both the typical size of a "chunk" of information needed by an application, and the elapsed time the user is willing to wait for it--known as the latency. When the user is a person, acceptable latencies are measured in seconds; when the user is a computer, however, waiting one millisecond might mean 50,000 wasted instruction cycles for a typical workstation. This concept is illustrated in Figure 2, which shows chunk sizes and latencies for a variety of applications. The axes are drawn to a logarithmic scale, covering many orders of magnitude. The diagonal lines correspond to transmission speeds of 64 kbps--one voice channel--and 45 megabits per second. They illustrate how the combination of low latencies and larger chunk sizes leads to demands for networks where a single user can consume hundreds of megabits per second, even if only for a brief burst of traffic.

Figure 2.

Selected Applications
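The arithmetic behind Figure 2 is straightforward: the peak rate an application demands is simply its chunk size divided by the latency its user will tolerate. The short sketch below illustrates the calculation. The MRI figures are those given above (with a half-second flip time assumed); the other entries are illustrative assumptions in the spirit of Figure 2.

    # Peak transmission rate implied by an application's "chunk" size and
    # acceptable latency: rate = chunk / latency. The MRI figures come from
    # the text; the other entries are illustrative assumptions.

    MEGABIT = 1_000_000

    applications = {
        # name: (chunk size in bits, acceptable latency in seconds)
        "terminal screenful (2 KB, human user)": (2_000 * 8, 1.0),
        "MRI image (2000 x 2000 x 24 bits)":     (2000 * 2000 * 24, 0.5),
        "memory page to a workstation (8 KB)":   (8_192 * 8, 0.001),
    }

    for name, (bits, latency) in applications.items():
        rate_mbps = bits / latency / MEGABIT
        print(f"{name}: {rate_mbps:,.2f} Mbps peak")

The MRI row lands near 200 Mbps, consistent with the several hundred megabits per second cited above.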

While the peak capacity demanded may be high, the average usage may still be very low. Figure 3 illustrates the average to peak ratio for a number of different applications.

Figure 3.

Traffic Characteristics

The preceding charts paint a picture of network requirements far different from those of voice. The most critical differences are in the access link and in switching; in the trunking plant, by contrast, the need to carry a large number of phone calls between major switching centers means that the peak capacity of some interoffice trunks is already more than a gigabit per second.

2.2. Evolution of the local loop

The 64 kbps needed to carry a speech signal is well within the capability of the majority of existing copper loop plant. However, as demand for data and video traffic expands, the limits of the existing copper plant are being severely tested. In 1987, industry prognosticators, such as Richard Snelling of BellSouth, were predicting that by 1990 all new loop construction would use fiber so that the network could handle video services to the home.[9] The economic problems of fiber, particularly fiber networks capable of providing switched video services, have proved far more difficult than the early optimists imagined. These problems include installation costs, electro-optics costs, and powering issues. More recent analyses suggest that it will be the end of this decade, if then, before fiber loop networks capable of carrying video on demand are economical[10,11] even in new construction situations, let alone as an upgrade technology. Significantly, adding the capability to carry video on demand nearly doubles the cost of Fiber in the Loop (FITL) compared to narrowband-only systems.

In the meantime, significant progress has been made on two fronts which will prolong the viability of the existing copper plant. First, advances in image compression technology make it possible to encode VCR quality video at bit rates as low as 1.5 Mbps, and broadcast quality at rates below 5 Mbps. Second, advances in digital signal processing have raised the ceiling on what can be transmitted over a copper loop with acceptable error rates. AT&T and other manufacturers have announced a technology known as Asymmetric Digital Subscriber Loop (ADSL), which, by installing appropriate interfaces at the subscriber's premises and in the central office, would allow a majority of existing copper plant to carry 3-4 Mbps downstream--enough for a single broadcast quality video signal or two VCR quality signals--while carrying a lower speed voice/signaling channel upstream.[12,13]

These developments suggest that the inevitability of fiber to the home and the optimal time schedule for deployment are still quite uncertain. Moreover, we may well see a mixed scenario in which fiber is used out to a neighborhood concentration point and copper pairs carry the last few hundred meters. Shortening the copper link allows it to support higher data rates.

For medium to large business customers, however, fiber has clearly proved its worth. Large business users, because they concentrate traffic from many offices, can make use of high capacity links from their premises to the carrier's central office. While technology similar to ADSL can provide up to 24 channels on one or two copper pairs,[14] business users increasingly need the capacity of fiber. Fiber not only provides more capacity than copper today, but also promises easy expansion of capacity simply by installing more capable electronics in the future. The local exchange carriers are rapidly installing fiber rings in major metropolitan areas with alternate path routing in the event of a cable break.

The demand by businesses for fiber-based access has induced a number of new companies to construct metropolitan fiber networks to compete with the local exchange carriers. By providing quick service, competitive prices, and an alternate path for reliability, these Competitive Access Providers (CAPs) have gained many satisfied customers. The CAPs may eventually become full-fledged competitors to the local exchange carriers; to date, however, they have been limited--by regulation as well as by strategic choice--to bypassing the LECs and providing leased access to the interexchange carriers. The FCC has recently issued a tentative decision and notice of proposed rule making which would allow the CAPs to begin to compete in the provision of switched access to the interexchange carriers.

While the telephone companies wrestle with their copper vs. fiber dilemma, two other approaches to providing the access link to the subscriber continue to garner attention: the use of coaxial cable of the type already installed in some 60% of U.S. homes for the carriage of cable television, and radio technology.

Earlier generations of cable television networks used a tree and branch architecture (Figure 4) which distributes a common set of video channels to all households in a franchise area. The constant branching of the cables, coupled with normal attenuation with distance, requires the installation of numerous amplifiers in the network to boost the signal power to adequate levels. When 30 or more of these amplifiers are cascaded together, they introduce distortion in the signal, especially at higher frequencies, thus limiting the capacity of the network. Older networks may support as few as 12 channels, though more recent systems go up to 450 or 500 MHz, or about 70 video channels.

Figure 4.

Cable Television Networks

As the cable franchises come up for renewal, and the operators are asked to upgrade their networks, many are installing optical fiber backbones from the headend to an optical network interface (ONI) point much closer to the subscriber (Figure 5). This design drastically reduces the number of amplifiers needed, thus increasing network capacity to as much as 1 GHz, or about 160 6-MHz video channels. At the same time, it allows different combinations of channels to be sent on a backbone to any particular neighborhood.

Figure 5.

Fiber Backbone Cable Networks
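The channel counts cited above follow directly from the 6 MHz bandwidth of a standard analog video channel. A back-of-the-envelope sketch (the 50 MHz assumed unusable at the low end of the spectrum is an illustrative figure; real plants vary):

    # Rough video channel capacity of a coaxial plant, assuming standard
    # 6 MHz video channels and (as an illustrative figure) 50 MHz of
    # spectrum at the low end unavailable for downstream video.

    CHANNEL_MHZ = 6
    RESERVED_LOW_MHZ = 50  # assumption: spectrum below the video band

    for top_mhz in (450, 500, 1000):
        channels = (top_mhz - RESERVED_LOW_MHZ) // CHANNEL_MHZ
        print(f"{top_mhz} MHz plant: roughly {channels} video channels")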

In the limit, as the number of homes served by an ONI is decreased, and the number of video channels is increased--for example by the use of digital compression--a cable operator could have enough channels available to send a unique video signal to every household. The cable operator could then provide video dialtone service at a cost lower than what it would cost a telephone company to build an integrated voice/data/video all fiber network from scratch.[15] Indeed, the plans of cable operators for installing fiber backbones typically call for six fibers to every ONI: planning documents label two of the fibers for cable TV, two for resale to a competitive local access provider, and two for resale or use in linking radio base stations for cellular telephony.[16] The wide bandwidth of the coaxial cable access link provides sufficient capacity that a portion can easily be allocated to offer high bit rate data services, or even conventional telephone service, along with video delivery. In the U.K., several joint ventures between U.S. local exchange carriers and cable television operators are installing fiber backbone/coaxial cable networks to provide both cable television and telephone access line service.[17] Recently, U.S. West announced that it was exploring the combination of coaxial cable plus fiber for future residential telephone network expansion.[18]

Besides telephone company provided fiber, or cable TV company fiber/coax hybrids, a third alternative for the local access link is the radio spectrum. Already some 7 million subscribers make use of wireless telephone service provided by one of the two franchised cellular telephone operators in each metropolitan area, or about 5% of the total number of wired loops. In a cellular telephone system, subscriber terminals are linked to a base station by radio signals rather than wires. These base stations are then linked by wire or microwave to a switching node. As mobile subscribers move from the area or "cell" served by one base station to a cell served by another, the wireless access link is automatically switched to the new base station (Figure 6). Increasingly, cellular operators are finding that some subscribers use their cellular service as their primary telephone line, abandoning wireline service altogether.[19] While existing cellular service was designed to support automobile-based telephones, work is rapidly progressing on technology and standards for so-called Personal Communications Services (PCS) which will support personal telephones the size of a cigarette pack. The FCC has recently issued a Notice of Proposed Rule Making (NPRM) asking for comment on whether it should grant PCS licenses for individual cities, as it did with cellular, for regions as large as LATAs, or grant one or more nationwide licenses of spectrum to PCS providers.[20]

Figure 6.

Cellular Telephone Network

As new technology developments increase the capacity of wireless systems and reduce their cost, it is likely that existing communications networks based on fixed wires for the access link will find themselves in direct competition with new technology based on wireless local access. Already GTE is using "wireless local loops" based on radio technology to provide basic telephone service to a new subdivision in Texas. In Eastern Europe, small businesses cannot wait until their entire neighborhood is wired in order to have telephone service; using wireless access, these important customers can be served quickly. In some urban areas in the U.S., carriers are looking at wireless loops as less susceptible to vandalism. The ultimate radio access link could be satellite based. Proposals such as Motorola's Iridium would link user telephones via a network of low earth orbiting satellites.

In order to conserve spectrum, most proposals for new radio access technology assume the use of speech compression to reduce the channel rate to 16 or 32 kbps, which limits the usefulness of these access links for data by comparison with 64 kbps wireline channels.

Radio spectrum is basically an access technology; someone must still supply the switching. Treating wireless only as a loop substitute, the switching could be handled by the existing LECs; that is, the wireless operator would simply hand over the traffic collected at the base stations to the existing carriers. Alternatively, the cellular operator could install its own switching center to serve customers directly. A long distance company could choose to compete for the new radio licenses, build a radio access network, and handle all switching in existing or expanded toll switching offices. Finally, there is potentially great synergy between cable television companies and wireless telephony providers: excess fiber in the cable operator's plant can be used to interconnect local radio base stations to a central switching center.[21,22]

To date, wireless local access has focused on voice communications. However, as companies like AT&T, Apple and Sharp introduce their "Personal Digital Assistants"--handheld computers designed to link their owners via wireless access to an enormous web of information and computational power--the demand for wireless data services seems poised to explode.[23] Limitations in available spectrum at the frequencies currently used for cellular service constrain data rates to 32 kbps and below. However, new research on wireless access at frequencies around 30 GHz may make possible wideband wireless access links. These higher frequencies are currently more costly to exploit, however, and more susceptible to interference from inclement weather.

The significance of this litany of local loop developments is that the simple notion of five years ago--that a single integrated broadband network based on fiber access was the "obvious" path for future telecommunications network development--is today not nearly so obvious.

2.2.1. Inside Wiring

Prior to 1980, most companies as well as most homes were wired for communications only with twisted copper pairs. If a computer terminal was needed in the office, special wiring would be pulled on an ad hoc basis. Between 1980 and 1990, many companies went from less than 20% of white collar workers with terminals to more than 70%. As a consequence, they began to think of data communications wiring as part of the building infrastructure, to be installed once and managed as such. Accordingly, the industry developed a number of standard data wiring schemes based on coaxial cable, shielded twisted pair cable or unshielded data grade twisted pair, with some use of fiber as well. Today most new office buildings are provided with a data wiring infrastructure just as they include electric wiring. With advances in technology, copper pair wires have been made to support data rates up to 100 Mbps for short distances from the desk to a wiring closet. At the wiring closet, sophisticated wiring "hubs" provide the first level of traffic concentration, justifying the use of more expensive optical fiber to link these concentration points in campus-wide networks. The ability to carry up to 100 Mbps on copper seems likely to forestall widespread introduction of fiber to the desktop for another 5 to 10 years.

2.3. Switched Networks

The function of switching is to allow the same transmission link to be used to support communications between different users. With circuit switching, a chunk of capacity--for example between two switches--will be dedicated to two users for the duration of a call. When voice was the primary form of traffic, telephone networks were optimized to use circuit switching in units of 64 kbps--the size of a single voice channel. Circuit switching is inappropriate for bursty traffic, since the dedicated capacity sits idle between bursts.

Packet switching was invented in the 1960s to respond to the traffic requirements of data. Instead of reserving a dedicated circuit for the length of a call, a packet switching network allows many users to share transmission capacity by breaking up information into small chunks, called packets, and then using a transmission line to alternately send packets from several different users. The concept was used first in private networks built by end users. Until recently, users had to build their own data networks by leasing full period channels from the telephone carriers, and adding their own premises-based data switching equipment. Leasing full period channels is costly, however, when traffic has a high peak requirement but relatively low average throughput. This created demand for switched data services from the carriers, who can take advantage of traffic statistics to provide high peak capacity to each user while charging only for the average throughput actually consumed.
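The traffic statistics the carriers can exploit are easy to see in a toy simulation: if each of many users is bursting only a small fraction of the time, a shared link sized for typical aggregate demand is far smaller than the sum of dedicated circuits. All parameters below are invented for illustration.

    # Toy illustration of statistical multiplexing: bursty users sharing
    # one link. Parameters (number of users, burst probability, burst
    # rate) are invented for illustration only.
    import random

    random.seed(1)
    USERS = 50
    P_BURSTING = 0.05      # fraction of time each user is actually sending
    BURST_RATE_KBPS = 256  # peak rate demanded during a burst

    # A dedicated circuit per user must be sized for every user's peak.
    circuit_capacity = USERS * BURST_RATE_KBPS

    # A shared packet link is sized for the observed aggregate demand.
    samples = []
    for _ in range(10_000):
        active = sum(random.random() < P_BURSTING for _ in range(USERS))
        samples.append(active * BURST_RATE_KBPS)

    print(f"dedicated circuits require: {circuit_capacity} kbps")
    print(f"average aggregate demand:   {sum(samples) / len(samples):.0f} kbps")
    print(f"worst sampled instant:      {max(samples)} kbps")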

The first carrier services meeting these objectives were public switched data networks (PSDNs), which carried data in packets of 128 characters at speeds up to 48 kbps. These served primarily terminal-to-mainframe computer traffic.

The introduction of desktop computers created a demand for high speed switching of bursty traffic between machines. The idea of distributing the switching function among all the attached machines, rather than having a central switch, led to the development of Local Area Network (LAN) technology. In early LANs, the transmission medium is configured as a bus or ring and its capacity is shared by all of the users as they transmit their data in high speed bursts.[24] Each node on the LAN is responsible for assuring that its transmissions do not interfere with any others. Very quickly many companies found themselves with campus-wide LANs capable of efficiently handling bursty traffic at speeds of 4 Mbps or more. By comparison, the data services offered by the public carriers were slow and not well suited to computer-to-computer--as opposed to terminal-to-computer--traffic. Corporate users again had to rely on leased lines and premises-based switches ("routers") to link LANs at their various locations (Figure 7).

Figure 7.

Corporate Data Network Using Leased Lines and Premises Switching

In the early 1990s, carriers began to introduce new higher speed switched services suitable for linking the high speed LANs. Frame Relay is a stripped-down version of traditional packet switching. Relying on the lower error rates of fiber-based transmission and the increasing intelligence of data terminating equipment, frame relay networks dispense with the error correction service offered by traditional PSDNs and in return realize higher speeds, lower delay, and lower costs. Responsibility for error correction is left with the equipment at each end, much as is done in LANs. Figure 8 illustrates the range of new higher speed and switched services being rolled out by the carriers.

Figure 8.

New Network Services
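The division of labor that frame relay assumes can be sketched as follows: the network simply discards damaged frames rather than correcting them, and the sending endpoint retransmits whatever is not acknowledged. The frame format and loss model here are invented for illustration.

    # Sketch of end-to-end error recovery as assumed by frame relay: the
    # network silently discards bad frames; the sending endpoint
    # retransmits anything not acknowledged. Loss model is illustrative.
    import random

    random.seed(2)
    LOSS_RATE = 0.01  # probability the network discards a damaged frame

    def network_deliver(frame):
        """The 'network': relays frames, discarding rather than fixing bad ones."""
        return frame if random.random() > LOSS_RATE else None

    def send_reliably(frames):
        """The endpoints: detect missing frames and retransmit them."""
        delivered = []
        for seq, payload in enumerate(frames):
            while True:  # retransmit until the receiver acknowledges
                received = network_deliver((seq, payload))
                if received is not None:
                    delivered.append(received)
                    break
        return delivered

    frames = [f"frame-{i}" for i in range(1000)]
    print(f"delivered {len(send_reliably(frames))} frames despite losses")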

From a carrier perspective, deploying and maintaining multiple switching nodes for circuit and packet, and maintaining separate access facilities for each, is an operations nightmare. The holy grail for a carrier is a single switching technology capable of handling the full range of traffic characteristics. Circuit switching works well for constant periodic (isochronous) traffic like voice and video, but is ill suited for data. Packet switching works well for data but introduces too much delay for use by voice and video. The solution is a new kind of fast packet switching known as cell relay. In a cell relay network, all traffic is broken up into fixed length cells which are switched by parallel hardware elements. The hardware parallelism allows cell relay switches to be scaled up to handle thousands of links each operating at 150 Mbps or more, for aggregate throughputs measured in terabits (10^12 bits) per second. Because the cells are short, and link speeds are large, delays are minimized so that continuous bit rate services like voice and video can be carried. At the same time, bursts of cells generated by data traffic can also be carried.
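The segmentation step at the edge of a cell relay network can be sketched as follows: variable-length traffic is chopped into fixed-length cells, each tagged with an identifier for its connection. The 48-byte payload below matches the cell standardized for ATM, discussed next; the tag is a simplified stand-in for ATM's 5-byte header.

    # Segmentation of a variable-length message into fixed-length cells,
    # as in cell relay. The 48-byte payload matches ATM; the tag here is a
    # simplified stand-in for the real 5-byte ATM header.

    CELL_PAYLOAD = 48  # bytes of user data per cell

    def segment(connection_id, message):
        """Chop a variable-length message into fixed-length, tagged cells."""
        cells = []
        for offset in range(0, len(message), CELL_PAYLOAD):
            chunk = message[offset:offset + CELL_PAYLOAD]
            cells.append((connection_id, chunk.ljust(CELL_PAYLOAD, b"\x00")))
        return cells

    def reassemble(cells, length):
        """Strip the tags and concatenate payloads at the far end."""
        return b"".join(payload for _, payload in cells)[:length]

    msg = b"bursty data and continuous voice share one cell stream " * 10
    cells = segment(7, msg)
    assert reassemble(cells, len(msg)) == msg
    print(f"{len(msg)} bytes carried in {len(cells)} fixed-length cells")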

The CCITT standard version of cell relay is called Asynchronous Transfer Mode (ATM) and is the basis of proposed Broadband Integrated Services Digital Network (BISDN) standards at 155 Mbps. Interexchange carriers are expected to offer ATM switching services beginning in 1994. By 1992, several LAN vendors had begun offering ATM switches as successors to LAN hubs for linking increasingly powerful workstations at speeds above 100 Mbps.[25] An interim step to customer use of cell relay is Switched Multimegabit Data Service (SMDS), which provides a LAN-like packet switching interface familiar to end users on top of a cell relay infrastructure.

While there are some who still question whether ATM switching will prove to be the optimum solution for integrated broadband networks,[26] the concept has such momentum within the international telecommunications community that it is virtually certain to be widely deployed. There are also some who question whether a single integrated network will actually prove to be as cost effective as several networks each specialized for various traffic types. ATM switching, if successful, will provide carriers with the ability to offer "bandwidth on demand"--i.e. to carry all types of traffic over a common transmission and switching infrastructure.

2.4. Control

The flexibility of service offerings over the network is greatly affected by the mechanisms implemented for control of the switching functions. These include rules for routing information, load sharing, simplified addressing, and sophisticated accounting and billing. In 1979 AT&T began to introduce a technology based on Common Channel Signalling--the use of a separate control network for communications between switching nodes--to provide much greater sophistication in the control of its network. For example, when an 800 number call is dialed, the area code does not indicate where the call should be routed. Using common channel signalling, a switching node can retrieve call routing instructions for 800 number calls from a centralized database. The same technology is used by MCI and Sprint as well as AT&T to offer virtual private network service with customized call addressing and call screening features for large corporate and government users.
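In essence, 800 number routing is a database lookup performed over the signalling network before the voice path is set up, as the minimal sketch below suggests. The numbers and the routing table are invented.

    # Sketch of 800-number translation via common channel signalling: the
    # originating switch queries a centralized database to translate a
    # dialed 800 number into a routing destination. All entries invented.

    ROUTING_DB = {
        # dialed 800 number -> (carrier, destination switch)
        "800-555-0100": ("carrier-A", "dallas-toll-switch"),
        "800-555-0199": ("carrier-B", "newark-toll-switch"),
    }

    def route_call(dialed):
        """What an originating switch does before completing an 800 call."""
        if dialed.startswith("800"):
            # The query travels over the signalling network, not the voice path.
            carrier, destination = ROUTING_DB[dialed]
            return f"route via {carrier} to {destination}"
        return "route by area code as usual"

    print(route_call("800-555-0100"))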

The local exchange carriers have begun to introduce Common Channel Signalling in their own networks to provide such services as Call Return, Repeat Call and Caller ID. Common Channel Signalling is also a prerequisite for full deployment of Integrated Services Digital Networks (ISDN), which bring the common channel signalling right to the end user's terminal. However, deployment has been slow, partly due to disputes between the RHCs and Judge Greene over whether they can transport CCS data across Local Access and Transport Area (LATA) boundaries or must interconnect with the Inter Exchange Carriers (IECs) in each LATA.[27]

Further innovations in the control of switching are being developed by the local exchange carriers under the heading "Advanced Intelligent Network" (AIN). The goal of AIN is to make it easier for carriers to offer advanced call control features on a customized basis.

Common channel signalling is also central to Personal Communications Services. With PCS, a user would have a single telephone number for his portable terminal which would never change, no matter where the customer traveled. Using a sophisticated control system based on CCS, calls to an individual's number would always be routed to the radio base station nearest his portable handset anywhere in the country, and eventually throughout the world.
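The location tracking PCS requires can be pictured as a registry, maintained over the signalling network, that maps each personal number to the base station currently serving the handset. A minimal sketch, with invented names throughout:

    # Sketch of PCS mobility management over common channel signalling: a
    # location registry maps each personal number to the base station
    # currently serving the subscriber's handset. All names invented.

    location_registry = {}

    def register(personal_number, base_station):
        """Invoked as the handset moves; updates the registry over CCS."""
        location_registry[personal_number] = base_station

    def deliver_call(personal_number):
        """Route an incoming call to wherever the subscriber currently is."""
        station = location_registry.get(personal_number)
        return f"ring via {station}" if station else "subscriber unreachable"

    register("412-555-0123", "pittsburgh-base-17")
    print(deliver_call("412-555-0123"))         # ring via pittsburgh-base-17
    register("412-555-0123", "chicago-base-3")  # subscriber has traveled
    print(deliver_call("412-555-0123"))         # ring via chicago-base-3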

As call control becomes more sophisticated, however, the problems posed by multiple providers sharing a geographic area become more complex. For example, for PCS to function properly, information on my whereabouts may need to be shared among multiple service providers if I am to receive calls in any jurisdiction. It is quite likely that the diffusion of competition in the local loop will be paced by the difficulties in resolving control issues, not merely problems of the interconnection of transport facilities. It is perhaps interesting to note that the National Science Foundation in its recent solicitation for proposals to provide services in support of the Interagency Interim National Research and Education Network (IINREN) has advocated the separation of the "Routing Authority" from the provider of transport. One might envision operation of the software systems for the intelligent network eventually being separated from the competing service providers and operated either by a cooperative association, or by an independent party.

2.5. Services

In contrast to the rapid progress of recent years in developing and deploying new transmission technologies and new switching techniques, we are just beginning to comprehend what a services infrastructure might consist of, and what will be required to put it into place. The leading edge in developing such an infrastructure exists within the education and research communities linked together via a network of networks known collectively as the Internet. With roots going back some 25 years to an experimental network supported by the Defense Advanced Research Projects Agency (DARPA), the Internet today consists of some 6000 interlinked networks, both public and private, extending around the globe and hosting more than 400,000 computers. In that crucible, an image of what a services infrastructure might really look like is beginning to take shape.

Given connectivity among millions of students and researchers who clearly have a need for information sharing, how far have we come in making information sharing simple and available to the average user? The answer is: not very far at all. Mail and bulletin boards are the most developed applications, accounting for some 15% of total network traffic.[28] Bulletin boards on selected topics, many having to do with computers, provide a way for information in a particular field to circulate rapidly among interested researchers. Yet the usefulness of mail is limited by the total absence of a directory system for finding out someone's electronic mail address.

Many universities, and even individuals, have taken to making information available to others via anonymous retrieval using the File Transfer Protocol (FTP). FTP accounts for some 50% of all traffic on the NSFNET. While the amount of information thus available is enormous, the tools for finding out what information is available are still quite primitive.

A number of separate projects undertaken at different universities are beginning to provide models for how overall structuring of information access might be accomplished. Each of these projects incorporates a client-server model in which software on a user's machine (the "client") talks to software on one or more "server" machines connected to the Internet. These servers may be repositories of both indexing information and of data, or they may be organized in a hierarchy in which the records at one server contain index information about the data indexed and stored at yet another server. The Wide Area Information Servers (WAIS) system, developed at Thinking Machines, uses a notion of indexes and documents at each server. A client searches across one or more indexes and identifies documents of interest, which can then be retrieved. A "document of interest" might be a reference to another server with its own index to be searched. WAIS uses sophisticated weighted searching techniques to support searching one or more indexes for articles similar to a previously retrieved article of interest. The Gopher system, developed at the University of Minnesota, uses a menu interface to allow users to search across many different servers for documents of interest. Each server can be an entry point into the entire space of documents, since menus are organized as an unrooted graph, not as a tree. The Mercury project at Carnegie Mellon University has developed a client-server system for retrieving citations to the journal literature and then fetching facsimile images of journal pages.
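Common to these systems is a client-server retrieval pattern in which the result of an index search may be either a document or a pointer to another server's index. The sketch below captures only that shape; the servers and contents are invented, and no particular system's protocol is implied.

    # Client-server retrieval in the style of WAIS or Gopher: a query
    # against one server's index may return a document or a pointer to
    # another server's index. Servers and contents are invented.

    SERVERS = {
        "index.cmu.example": {
            "cell relay": ("doc", "Survey of fast packet switching..."),
            "networking": ("index", "index.umn.example"),  # pointer onward
        },
        "index.umn.example": {
            "networking": ("doc", "Campus network design notes..."),
        },
    }

    def search(server, query, depth=5):
        """Follow index pointers from server to server until a document is found."""
        kind, value = SERVERS[server].get(query, (None, None))
        if kind == "doc":
            return value
        if kind == "index" and depth > 0:
            return search(value, query, depth - 1)
        return None

    print(search("index.cmu.example", "networking"))  # found via a second server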

Another form of information sharing, somewhat more transparent than FTP (which involves explicitly copying a file from one machine to another), is provided by the Andrew File System, marketed by Transarc, Inc. The Andrew File System allows file servers at multiple institutions to share a single hierarchical name space from which users can read and write copies of files no matter where in the world they are located. Thus, two co-authors collaborating on a paper can always have access to the most current version, rather than sending copies to each other via mail or FTP.

Each of these systems gives a hint as to the kind of robust information sharing that might be possible as we develop a true information--as opposed to communications--infrastructure, but there are many unresolved problems. Among them:

* Development of standardized methods for information finding: White Pages directories, Yellow Pages, information indexes

* Development of widely standardized methods for retrieving information which may be scattered across hundreds of different hosts.

* Mechanisms for security and authentication.

* Development of billing and accounting systems which can track the transfer of intellectual property and provide a mechanism for compensating authors and maintainers.

* Development of standard document representation formats which go beyond ASCII text and allow sharing of graphics, images, voice annotation, animated sequences and video.

Research and demonstration prototypes of systems solving these problems have been called for in the Information Infrastructure and Technology Act of 1992, introduced by Senator Gore as a follow-on to the High Performance Computing and Communications Act of 1991, which has funded network infrastructure.

3. Technology and the Politics of Infrastructure

Over the last 30 years, the telecommunications policy authorities in the United States have moved steadily to open more and more elements of the industry to competition. Hush-A-Phone[29] and later Carterfone[30] led to vigorous competition in Customer Premises Equipment. Specialized Common Carrier[31] and Execunet[32] brought competition in inter-exchange services. Six years ago, in an article entitled "Back to the Future,"[33] a former head of the Federal Communications Commission envisioned a future of vigorous competition in local exchange communications, harking back to the proliferation of local exchange companies that followed the expiration of the original Bell patents at the turn of the century.

The proponents of competition have generally argued in favor of competition in all segments of the communications network: in the local loop as well as in the switching and long haul portions of the network; in transport, and networks, as well as in enhanced services.

The view that vigorous competition is the desirable state of affairs for the nation's telecommunications was at first fiercely resisted by the existing local exchange carriers. More recently, they have come to accept the idea of competition as long as there is a "level playing field." In casting about for a political argument that could help to swing the debate away from competition, the local exchange carriers have recently focused on the code word "infrastructure." By refocusing the debate on universal service, and a "public good" model of networks, the existing carriers have highlighted the risks of competition, and created a climate in which they may be better able to expend ratepayer funds to invest in networks for the future. The fact that such investments tend to raise barriers to entry is merely a side benefit.

The heart of the LEC argument is based on the notion that a single integrated broadband network based on fiber optic transmission facilities and Asynchronous Transfer Mode switching will be able to meet all the varying needs for communications services, from simple POTS to the most exotic supercomputer interconnection. The implicit assumption behind such arguments is that integrated broadband networks have such economies of scale and scope that there can be little room for multiple providers each offering differentiated services oriented towards specific market niches. Further, they argue, any restrictions on the type of services that telephone companies can carry -- information services, video services--would inhibit the realization of these scale and scope economies to the detriment of the consumer. In its most extreme form, the argument takes the position that the scale and scope economies extend beyond transport and networking to the provision of information itself. Thus carriers must also be free to originate content on their integrated broadband networks if the full benefits are to be realized.

This argument is reflected in legislation such as the Burns-Dole bill (S.1200) or the recent telecommunications policy legislation in New Jersey and appears frequently in articles written in telephony trade magazines.[34]

In return for permission to provide the full range of networking services--voice, data and video--the telephone companies would continue to operate as common carriers, providing non-discriminatory access to their network to all users. This is the bargain inherent in the FCC's recent video dialtone decision, which authorizes telephone company entry into the delivery of video services, but only on a common carrier basis.[35] The common carrier model is seen as the best approach to fostering competition among information providers. Further, a single integrated common carrier would be in a better position to insure universal service by rate averaging among its customers.

The adherents to the competition paradigm envision a much different future. They question the extent of economies of scale and scope in the communications infrastructure. If economies of scale and scope are limited, then there is room for multiple providers of competing communications networks. Moreover, in a period of great technological ferment, competition is seen as more likely to insure that technological opportunities are exploited. Thus, the way to insure that our communications infrastructure is based on the most cost effective technologies is to encourage competition at all levels.

If the communications infrastructure is provided on a competitive basis, then the job of regulators is greatly simplified. Regulators no longer have to monitor costs in detail to determine if prices provide only "reasonable" profits: the pressures of competition can be relied on to reduce prices to cost-based levels. If the communications infrastructure is provided on a monopoly basis, great care must be taken to insure that monopolization of transmission or networking does not lead to monopolization of enhanced or information services, for, as we have discussed above, these are likely to be better provided by many smaller firms which are more flexible and more customer oriented. In a competitive environment, there would be less need to be wary of carrier participation in the preparation of content, since no one carrier would control the only avenue for information dissemination.

To what extent does our review of technology shed light on the above debate? First, the case for a single, fiber optic-based integrated broadband network is still economically uncertain at the present time. Moreover, cable companies are sufficiently well positioned that they may well be able to evolve their networks to carry integrated broadband traffic more cost effectively than can the telephone companies.[36] Indeed, U.S. West recently issued a Request for Information (RFI) to the traditional cable TV vendors for equipment and architectures that would allow the carrier to upgrade its loops using a mix of fiber and coaxial cable.[37]

Second, Jim Utterback has observed that whenever a radically new technology appears that threatens to displace an entrenched alternative, it often stimulates rapid productivity improvement in the older technology which staves off its demise.[38] In the years immediately following the invention of the electric lightbulb, gas lamp manufacturers realized a sixfold improvement in light output through research on better wicks and other improvements. In much the same way, we are seeing rapid improvements in the carrying capacity of copper which may well enable the lead broadband product--entertainment video--to be delivered to the home without the need to install fiber.

Third, the most difficult problems of multiple networks are likely to be in interworking the control systems, particularly as these become more and more sophisticated. Little thought has been given to how AIN services might be delivered in a competitive local environment.[39]

Fourth, we have paid far more attention to date to the development of the transport and networking layers of our information infrastructure than to the services layer. There are numerous unresolved issues at the services layer that must be addressed if information is to be readily available and shareable throughout the society. Developing simple user metaphors which allow information to be found may not appear as exciting as work on gigabit networks, but it will probably have far more impact on the efficiency and competitiveness of U.S. firms and educational institutions.