3.5 The Network Unit

MTBF of 100 hours, an MTTR of 1 hour, and an availability of 99%. There is a 1% chance that when it is being used this first unit will fail. We arrange that if the unit that is being used fails, the failure is detected by the system, and the work is automatically transferred to the other unit. This second unit has a 99% probability of working, and the overall probability of both units having failed is 1% of 1%, or 1 in 10,000.

It is of course also possible to design a system in such a way that reliable units are combined to give reduced overall availability. This will arise if the user requires both units to be working at the same time. In the case of the two units above, each has a 99% probability of working, but if they must both be working the overall availability falls to 98%.
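The arithmetic in the two paragraphs above can be checked in a few lines. This is a minimal sketch using the text's own illustrative figures; the standard formula availability = MTBF / (MTBF + MTTR) is assumed, and the 99% figure is the text's rounding of 100/101.

```python
# Availability arithmetic for the two-unit examples in the text.
MTBF = 100.0  # mean time between failures, hours
MTTR = 1.0    # mean time to repair, hours

# A single unit is available for the fraction of time it is not under repair.
availability = MTBF / (MTBF + MTTR)  # 100/101 ~= 0.990; the text rounds to 99%
A = 0.99                             # use the rounded figure from the text

# Standby pair: the system fails only if BOTH units have failed.
p_both_failed = (1 - A) ** 2         # 1% of 1% = 0.0001, i.e. 1 in 10,000
A_standby = 1 - p_both_failed        # 0.9999

# Both units required at once: individual failure probabilities compound.
A_both_required = A * A              # 0.9801; the text rounds to 98%

print(f"single unit   : {A:.4f}")
print(f"standby pair  : {A_standby:.4f}")
print(f"both required : {A_both_required:.4f}")
```

The same two expressions generalise to n units: 1 - (1 - A)**n for a standby arrangement, and A**n when all n must work simultaneously.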

A large network contains switches used to route traffic, and lines to carry the traffic between switches. We can connect each site (or switch) to more than one switch; this will allow the site (or switch) to continue working if one of the lines, or one of the switches, fails. There are of course costs. The obvious one is the cost of the extra lines and switches. Less obvious is the cost of rerouting traffic in the event of a failure, since this requires software that is capable of detecting the failure, and then automatically carrying out the rerouting. Connectionless protocols have this property by default, as each packet is routed afresh.
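The per-packet rerouting described above can be sketched as a shortest-path search that is simply rerun against whatever links currently exist. The topology and names here are hypothetical, and breadth-first search stands in for whatever routing algorithm a real switch would use:

```python
from collections import deque

def shortest_path(links, src, dst):
    """BFS shortest path over an undirected set of links; None if unreachable."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []                      # walk predecessors back to src
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# Hypothetical four-switch topology: two independent routes from A to D.
links = {("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")}

print(shortest_path(links, "A", "D"))                 # a 3-hop path, e.g. via B
print(shortest_path(links - {("A", "B")}, "A", "D"))  # ['A', 'C', 'D']: rerouted via C
```

Because each call recomputes the route from the current link set, a failed line is bypassed automatically, which is the property the text attributes to connectionless protocols; a connection-oriented network must instead detect the failure and re-establish its circuits.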

It is important when discussing system reliability to realize that the perception of users may be very different from that of those running the system. Users are, quite rightly, concerned with their own work. If the part(s) of the system that they are using keeps breaking down they will view it as unreliable. If the breakdowns that occur affect only a small subset of users, those running the system, who think in terms of overall reliability, may well assert that the system is reliable. From their limited perspectives, both are right.

the Network Executive. The Executive would be primarily concerned with the installation and operation of a national network to serve the entire university and research council community, while the JNT would continue to act as an advisory body, continuing the work on network architecture and protocols.

There was a general expectation that the backbone of any such network would be provided and operated by the GPO. There was no agreement as to how to fund such a national network, although there was strong pressure from some quarters in favour of meeting the costs by charging users.

3.5.1 The Manning Unjamming

Between 1979 and 1982 the user community was demanding more and more network provision, but still with no agreement on funding. As the 1975 working party had believed, the issue was not one of technology, but of funding and control. The situation was finally unjammed by a bold move on the part of Dr Geoff Manning, of the Science and Engineering Research Council (SERC), the successor body to the SRC. He proposed that the entire costs, capital and recurrent, of any national network should be met by the Computer Board, and that the JNT/NE should own and operate the network on behalf of the universities and research councils. At the same time each of the research councils would separately transfer any networking equipment which was providing inter-site connections to the Network Executive. At first sight it might appear that the research councils had the better of this deal, but in fact they would hand over a considerable investment in equipment, and give up their control over what was by now an essential part of their work, and there was a good deal of resistance to these proposals among research council funded workers. It says much for Geoff Manning's powers of persuasion that he was able to persuade the Computer Board to pay the whole costs, while also persuading the research councils to give up direct control of "their" networks. In the event, the SERC and NERC participated wholeheartedly. The AFRC remained rather independent, and continued for some time to operate separately. The MRC, most of whose workers were located in university departments, had always relied on their host universities for network provision.

The staff of the JNT and the NE could not be direct employees of the Board. Curiously, although the Board commanded a budget of millions of pounds, it was not a legal person, and could not make a contract. It was decided that the two units would continue to be employees of SRC, and would remain based at the ATLAS centre. This in itself gave rise to suspicions in some (university) quarters that there was a bias towards the research councils.

Early in 1982 I took on the role of Director of what would become the national academic network. One of my first tasks was to reach a final clarification of the role of the GPO, with which the JNT had been in discussion for many months.

The GPO had recently established its own packet switching service, PSS, and it was suggested that this should form the backbone of the academic network. The GPO was approached with a proposal that the academic community might become a bulk customer of PSS, in exchange for a discounted price. The response from the GPO was disappointing. The GPO claimed that as the monopoly supplier of the service, it could not offer different prices to different classes of customer, and the Board would have to pay at the standard rates. A back of the envelope calculation suggested that from the outset the annual cost of implementing the backbone in this way would consume all the Board's yearly budget for networking. This raised a problem. If the GPO could not meet the community's needs, then the community would have perforce to build and operate its own network. The legal position of the research council networks was tolerably secure, as each research council could argue that it was a single organization, and that any of its grant holders in university sites who had access to their wide area network was in some sense a member of that organization. The Computer Board itself was not a legal person, and so it was not possible to use the Board as an overarching body. There was no way in which separate universities would be able to argue that they were part of a single organization, and thus any national network would infringe the GPO monopoly. The GPO was in fact extremely helpful over this issue, and I was able to have several useful discussions with some of its officers who dealt with regulatory issues. The GPO's main concern was to have no infringement of their monopoly position in respect of voice traffic in the UK, not least because voice traffic at that time produced far more revenue than data traffic. We were able to reach an informal agreement, that provided there was no attempt to divert voice traffic on to the academic network, then the GPO would not initiate any form of action. It was also agreed that the Network Executive might be able to claim that it was acting as a Crown Agent in offering data services to third parties.
The only ways by which a person can formally become a Crown Agent are either by an Order in Council, or by the person successfully using the claim that he is acting as a Crown Agent as a defence in a court action. Fortunately, no action was ever initiated, and the subsequent privatization of the GPO data transmission services that followed the creation of British Telecom meant that no one from the Network Executive has ever faced the courts.

3.5.2 Coloured Books

From its inception the Network Unit and later the JNT had been active in the development of protocols. This work had been based on the CCITT X.25 protocol, a connection-oriented protocol, which was favoured by CCITT because it was thought that such protocols made charging for network services easier. There was also pressure from UK government to develop home-grown networking products. By the early 1980s the UK academic community had developed a complete suite of connection-oriented protocols. Each protocol was defined by a document with a different coloured cover, and eventually the complete set of protocols became known as the "coloured books". The Board used its control over the university sector's capital expenditure on computing equipment as a means of forcing manufacturers to provide implementations of these protocols. In practice, the manufacturers often met this requirement by placing a contract for the development of any necessary protocols with the university to which it was selling a computer system. These protocols were to be freely available to the academic community.

This situation should be compared with the availability of the connectionless protocols which were favoured by the ARPA community. Any US computer manufacturer wishing to sell to an ARPA funded project was required to provide implementations of these protocols. A protocol is like any other piece of software in the sense that once it has been written and tested, the marginal cost of making another copy is effectively zero. As a result, most US suppliers made the complete protocol stack available at no cost to customers. Naturally at that price, they found plenty of willing buyers.