In the introduction
to this chapter, we noted that there are two types of network links: point-to-point
links, and broadcast links. A point-to-point link consists of a
single sender on one end of the link, and a single receiver at the other
end of the link. Many link-layer protocols have been designed for point-to-point
links; PPP (the point-to-point protocol) and HDLC are two such protocols
that we'll cover later in this chapter. The second type of link, a broadcast
link, can have multiple sending and receiving nodes all connected to
the same, single, shared broadcast channel. The term broadcast is
used here because when any one node transmits a frame, the channel broadcasts
the frame and each of the other nodes receives a copy. Ethernet is probably
the most widely deployed broadcast link technology; we'll cover Ethernet
in detail in Section 5.5. In this section we'll take a step back from specific
link-layer protocols and first examine a problem of central importance
to the data-link layer: how to coordinate the access of multiple sending
and receiving nodes to a shared broadcast channel--the so-called multiple
access problem. Broadcast channels are often used in local area
networks (LANs), networks that are geographically concentrated in a
single building (or on a corporate or university campus). Thus, we'll also
look at how multiple access channels are used in LANs at the end of this
section.
We are all familiar
with the notion of broadcasting, as television has been using it since
its invention. But traditional television is a one-way broadcast (that
is, one fixed node transmitting to many receiving nodes), while nodes on
a computer network broadcast channel can both send and receive. Perhaps
a more apt human analogy for a broadcast channel is a cocktail party, where
many people gather together in a large room (the air providing the broadcast
medium) to talk and listen. A second good analogy is something many readers
will be familiar with--a classroom--where teacher(s) and student(s) similarly
share the same, single, broadcast medium. A central problem in both scenarios
is that of determining who gets to talk (that is, transmit into the channel),
and when. As humans, we've evolved an elaborate set of protocols for sharing
the broadcast channel:
"Give
everyone a chance to speak."
"Don't speak
until you are spoken to."
"Don't monopolize
the conversation."
"Raise your
hand if you have a question."
"Don't interrupt
when someone is speaking."
"Don't fall
asleep when someone else is talking."
Computer networks
similarly have protocols--so-called multiple access protocols--by
which nodes regulate their transmission onto the shared broadcast channel.
As shown in Figure 5.9, multiple access protocols are needed in a wide
variety of network settings, including both wired and wireless local area
networks, and satellite networks. Figure 5.10 takes a more abstract view
of the broadcast channel and of the nodes sharing that channel. Although
technically each node accesses the broadcast channel through its adapter,
in this section we will refer to the node as the sending and receiving
device. In practice, hundreds or even thousands of nodes can directly communicate
over a broadcast channel.
Figure 5.9:
Various multiple access channels
Figure 5.10:
A broadcast channel interconnecting four nodes
Because all nodes are capable of transmitting frames, two or more nodes can transmit
frames at the same time. When this happens, all of the nodes receive multiple
frames at the same time, that is, the transmitted frames collide
at all of the receivers. Typically, when there is a collision, none of
the receiving nodes can make any sense of any of the frames that were transmitted;
in a sense, the signals of the colliding frames become inextricably tangled
together. Thus, all the frames involved in the collision are lost, and
the broadcast channel is wasted during the collision interval. Clearly,
if many nodes want to frequently transmit frames, many transmissions will
result in collisions, and much of the bandwidth of the broadcast channel
will be wasted.
In order to
ensure that the broadcast channel performs useful work when multiple nodes
are active, it is necessary to somehow coordinate the transmissions of
the active nodes. This coordination job is the responsibility of the multiple
access protocol. Over the past thirty years, thousands of papers and hundreds
of Ph.D. dissertations have been written on multiple access protocols;
a comprehensive survey of this body of work is [Rom
1990]. Furthermore, dozens of different protocols have been implemented
in a variety of link-layer technologies. Nevertheless, we can classify
just about any multiple access protocol as belonging to one of three categories:
channel partitioning protocols, random access protocols, and taking-turns
protocols. We'll cover these categories of multiple access protocols
in the following three subsections. Let us conclude this overview by noting
that ideally, a multiple access protocol for a broadcast channel of rate
R bits per second should have the following desirable characteristics:
- When only one node has data to send, that node has a throughput of R bps.
- When M nodes have data to send, each of these nodes has a throughput of R/M bps. This need not necessarily imply that each of the M nodes always has an instantaneous rate of R/M, but rather that each node should have an average transmission rate of R/M over some suitably defined interval of time.
- The protocol is decentralized; that is, there are no master nodes that can fail and bring down the entire system.
- The protocol is simple, so that it is inexpensive to implement.
5.3.1: Channel
Partitioning Protocols
Recall from our
early discussion back in Section 1.4 that time division multiplexing (TDM)
and frequency division multiplexing (FDM) are two techniques that can be
used to partition a broadcast channel's bandwidth among all nodes sharing
that channel. As an example, suppose the channel supports N nodes
and that the transmission rate of the channel is R bps. TDM divides
time into time frames (not to be confused with the unit of data,
the frame, at the data-link layer) and further divides each time frame
into N time slots. Each time slot is then assigned to one
of the N nodes. Whenever a node has a frame to send, it transmits
the frame's bits during its assigned time slot in the revolving TDM frame.
Typically, frame sizes are chosen so that a single frame can be transmitted
during a slot time. Figure 5.11 shows a simple four-node TDM example. Returning
to our cocktail party analogy, a TDM-regulated cocktail party would allow
one partygoer to speak for a fixed period of time, and then allow another
partygoer to speak for the same amount of time, and so on. Once everyone
has had their chance to talk, the pattern repeats.
Figure 5.11:
A four-node TDM and FDM example
TDM is appealing
as it eliminates collisions and is perfectly fair: each node gets a dedicated
transmission rate of R/N bps during each frame time. However,
it has two major drawbacks. First, a node is limited to an average rate
of R/N bps even when it is the only node with frames to send.
A second drawback is that a node must always wait for its turn in the transmission
sequence--again, even when it is the only node with a frame to send. Imagine
the partygoer who is the only one with anything to say (and imagine that
this is the even rarer circumstance where everyone at the party wants to
hear what that one person has to say). Clearly, TDM would be a poor choice
for a multiple access protocol for this particular party.
While TDM shares
the broadcast channel in time, FDM divides the R bps channel into
different frequencies (each with a bandwidth of R/N) and
assigns each frequency to one of the N nodes. FDM thus creates N
smaller channels of R/N bps out of the single, larger R
bps channel. FDM shares both the advantages and drawbacks of TDM. It avoids
collisions and divides the bandwidth fairly among the N nodes. However,
FDM also shares a principal disadvantage with TDM--a node is limited to
a bandwidth of R/N, even when it is the only node with frames
to send.
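To make the R/N limitation concrete, here is a small Python sketch (an illustrative simulation with invented parameters, not part of any standard) that assigns revolving TDM slots to N nodes and measures each node's throughput; a lone sender is still held to R/N.

# Illustrative TDM model: slots rotate among N nodes; a node may send only in its own slot.
# R, N, and FRAME_BITS are invented example values, not taken from the text.
R = 10_000_000               # channel rate, bits per second
N = 4                        # nodes sharing the channel
FRAME_BITS = 1_000           # one link-layer frame
SLOT_TIME = FRAME_BITS / R   # a slot carries exactly one frame

def tdm_throughput(active_nodes, num_slots=10_000):
    """Average throughput (bps) of each active node over num_slots slots."""
    sent = {n: 0 for n in active_nodes}
    for slot in range(num_slots):
        owner = slot % N                 # round-robin slot assignment
        if owner in active_nodes:        # only the slot's owner may transmit in it
            sent[owner] += FRAME_BITS
    elapsed = num_slots * SLOT_TIME
    return {n: bits / elapsed for n, bits in sent.items()}

print(tdm_throughput({0}))               # lone sender: still only R/N = 2.5 Mbps
print(tdm_throughput({0, 1, 2, 3}))      # four senders: each gets R/N = 2.5 Mbps

An FDM version behaves the same way in this respect: each node would own a fixed R/N bps sub-channel whether or not the other nodes are active.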
A third channel
partitioning protocol is code division multiple access (CDMA). While
TDM and FDM assign time slots and frequencies, respectively, to the nodes,
CDMA assigns a different code to each node. Each node then uses
its unique code to encode the data bits it sends, as discussed below. We'll
see that CDMA allows different nodes to transmit simultaneously
and yet have their respective receivers correctly receive a sender's encoded
data bits (assuming the receiver knows the sender's code) in spite of interfering
transmissions by other nodes. CDMA has been used in military systems for
some time (due to its antijamming properties) and is now beginning to find
widespread civilian use, particularly for use in wireless multiple access
channels.
In a CDMA protocol,
each bit being sent by the sender is encoded by multiplying the bit by
a signal (the code) that changes at a much faster rate (known as the chipping
rate) than the original sequence of data bits. Figure 5.12 shows a
simple, idealized CDMA encoding/decoding scenario. Suppose that the rate
at which original data bits reach the CDMA encoder defines the unit of
time; that is, each original data bit to be transmitted requires one bit-slot
time. Let di be the value of the data bit for the ith
bit slot. For mathematical convenience, we represent a data bit with a
0 value as -1. Each bit slot is further subdivided into M minislots;
in Figure 5.12, M = 8, although in practice M is much larger.
The CDMA code used by the sender consists of a sequence of M values,
cm, m = 1, . . . , M, each taking a +1
or -1 value. In the example in Figure 5.12, the M-bit CDMA code
being used by the sender is (1, 1, 1, -1, 1, -1, -1, -1).
Figure 5.12:
A simple CDMA example: Sender encoding, receiver decoding
To illustrate
how CDMA works, let us focus on the ith data bit, di.
For the mth minislot of the bit-transmission time of di,
the output of the CDMA encoder, Zi,m, is the value of
di multiplied by the mth bit in the assigned CDMA
code, cm:
Zi,m = di • cm    (5.1)
In a simple world, with no interfering senders, the receiver would receive the encoded bits, Zi,m, and recover the original data bit, di, by computing:
di = (1/M) Σm=1..M Zi,m • cm    (5.2)
The reader might
want to work through the details of the example in Figure 5.12 to see that
the original data bits are indeed correctly recovered at the receiver using
Equation 5.2.
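As a sanity check on Equations 5.1 and 5.2, the short Python sketch below encodes and decodes a couple of data bits with the eight-chip code of Figure 5.12. It models only the idealized arithmetic of a single, interference-free sender; it is not a real CDMA implementation.

# Idealized single-sender CDMA arithmetic (Equations 5.1 and 5.2).
code = [1, 1, 1, -1, 1, -1, -1, -1]      # the sender's code from Figure 5.12, M = 8
M = len(code)

def encode(data_bits):
    """Equation 5.1: Zi,m = di * cm for each bit slot i and minislot m."""
    return [[d * c for c in code] for d in data_bits]

def decode(received):
    """Equation 5.2: di = (1/M) * sum over m of Zi,m * cm."""
    return [round(sum(z * c for z, c in zip(slot, code)) / M) for slot in received]

data = [1, -1]                           # a 0 data bit is represented as -1
chips = encode(data)
print(chips)                             # the minislot values actually transmitted
print(decode(chips))                     # -> [1, -1]: the original bits are recovered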
The world is
far from ideal, however, and as noted above, CDMA must work in the presence
of interfering senders that are encoding and transmitting their data using
a different assigned code. But how can a CDMA receiver recover a sender's
original data bits when those data bits are being tangled with bits being
transmitted by other senders? CDMA works under the assumption that the
interfering transmitted bit signals are additive, for example, that if
three senders send a 1 value, and a fourth sender sends a -1 value during
the same minislot, then the received signal at all receivers during that
minislot is a 2 (since 1 + 1 + 1 - 1 = 2). In the presence of multiple senders, sender s computes its encoded transmissions, Zsi,m, in exactly the same manner as in Equation 5.1. The value received at a receiver during the mth minislot of the ith bit slot, however, is now the sum of the transmitted bits from all N senders during that minislot:
Z*i,m = Σs=1..N Zsi,m
Amazingly, if the senders' codes are chosen carefully, each receiver can recover the data sent by a given sender out of the aggregate signal simply by using the sender's code in exactly the same manner as in Equation 5.2:
di = (1/M) Σm=1..M Z*i,m • cm    (5.3)
Figure 5.13
illustrates a two-sender CDMA example. The M-bit CDMA code being
used by the upper sender is (1, 1, 1, -1, 1, -1, -1, -1), while the CDMA
code being used by the lower sender is (1, -1, 1, 1, 1,-1, 1, 1). Figure
5.13 illustrates a receiver recovering the original data bits from the
upper sender. Note that the receiver is able to extract the data from sender
1 in spite of the interfering transmission from sender 2.
Figure 5.13:
A two-sender CDMA example
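The same arithmetic can be pushed through the two-sender scenario of Figure 5.13. In the sketch below (again only an idealized model), the two chip streams add in the channel, yet decoding with either sender's code recovers that sender's bits; the trick is that the two codes above are orthogonal--their chip-by-chip products sum to zero--which is one way of "choosing the codes carefully."

# Two-sender CDMA sketch: the channel adds the interfering chip signals, and each
# receiver decodes with its sender's code, as in Equation 5.3.
code1 = [1, 1, 1, -1, 1, -1, -1, -1]     # sender 1's code (Figure 5.13)
code2 = [1, -1, 1, 1, 1, -1, 1, 1]       # sender 2's code (Figure 5.13)
M = len(code1)

def encode(data_bits, code):
    return [[d * c for c in code] for d in data_bits]

def decode(received, code):
    return [round(sum(z * c for z, c in zip(slot, code)) / M) for slot in received]

d1, d2 = [1, -1], [1, 1]                 # each sender's data bits (0 is sent as -1)
# Additive channel assumption: the value in each minislot is the sum of both chips.
received = [[a + b for a, b in zip(s1, s2)]
            for s1, s2 in zip(encode(d1, code1), encode(d2, code2))]
print(decode(received, code1))           # -> [1, -1]: sender 1's bits, despite sender 2
print(decode(received, code2))           # -> [1, 1]: sender 2's bits, despite sender 1

If the codes were not orthogonal, or if the received signal strengths differed, this clean separation would degrade; these are exactly the practical issues noted at the end of this subsection.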
Returning to
our cocktail party analogy, a CDMA protocol is similar to having partygoers
speaking in multiple languages; in such circumstances humans are actually
quite good at locking into the conversation in the language they understand,
while filtering out the remaining conversations. We see here that CDMA
is a partitioning protocol in that it partitions the codespace (as opposed
to time or frequency) and assigns each node a dedicated piece of the codespace.
Our discussion
here of CDMA is necessarily brief and a number of difficult issues must
be addressed in practice. First, in order for the CDMA receivers to be
able to extract out a particular sender's signal, the CDMA codes must be
carefully chosen. Secondly, our discussion has assumed that the received
signal strengths from various senders at a receiver are the same; this
can be difficult to achieve in practice. There is a considerable body of
literature addressing these and other issues related to CDMA; see [Pickholtz
1982; Viterbi
1995] for details.
5.3.2: Random Access
Protocols
The second broad class of multiple access protocols consists of the so-called random access protocols.
In a random access protocol, a transmitting node always transmits at the
full rate of the channel, namely, R bps. When there is a collision,
each node involved in the collision repeatedly retransmits its frame until
the frame gets through without a collision. But when a node experiences
a collision, it doesn't necessarily retransmit the frame right away. Instead
it waits a random delay before retransmitting the frame. Each node
involved in a collision chooses independent random delays. Because after
a collision the random delays are independently chosen, it is possible
that one of the nodes will pick a delay that is sufficiently less than
the delays of the other colliding nodes and will therefore be able to sneak
its frame into the channel without a collision.
There are dozens
if not hundreds of random access protocols described in the literature
[Rom
1990; Bertsekas
1991]. In this section we'll describe a few of the most commonly used
random access protocols--the ALOHA protocols [Abramson
1970; Abramson
1985] and the carrier sense multiple access (CSMA) protocols [Kleinrock
1975b]. Later, in Section 5.5, we'll cover the details of Ethernet
[Metcalfe
1976], a popular and widely deployed CSMA protocol.
Slotted ALOHA
Let's begin our study of random access protocols with one of the simplest, the so-called slotted ALOHA protocol. In our description of slotted ALOHA, we assume the following:
- All frames consist of exactly L bits.
- Time is divided into slots of size L/R seconds (that is, a slot equals the time to transmit one frame).
- Nodes start to transmit frames only at the beginnings of slots.
- The nodes are synchronized so that each node knows when the slots begin.
- If two or more frames collide in a slot, then all the nodes detect the collision event before the slot ends.
Let p be
a probability, that is, a number between 0 and 1. The operation of slotted
ALOHA in each node is simple:
- When the node has a fresh frame to send, it waits until the beginning of the next slot and transmits the entire frame in the slot.
- If there isn't a collision, the node has successfully transmitted its frame and thus need not consider retransmitting the frame. (The node can prepare a new frame for transmission, if it has one.)
- If there is a collision, the node detects the collision before the end of the slot. The node retransmits its frame in each subsequent slot with probability p until the frame is transmitted without a collision.
By retransmitting
with probability p, we mean that the node effectively tosses a biased
coin; the event heads corresponds to retransmit, which occurs with probability
p. The event tails corresponds to "skip the slot and toss the coin
again in the next slot"; this occurs with probability (1 - p). Each of the nodes involved in the collision tosses its coin independently.
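The coin-tossing rule is easy to simulate. The toy Python sketch below adopts the simplification used in the efficiency analysis that follows--every node always has a frame to send and transmits in each slot with probability p--and tallies how many slots end up successful, empty, or wasted by collisions. The parameters are arbitrary illustrations.

import random

# Toy slotted ALOHA simulation: every node always has a frame and transmits in each
# slot with probability p (the simplification used in the efficiency derivation below).
def slotted_aloha(num_nodes=10, p=0.1, num_slots=100_000, seed=0):
    random.seed(seed)
    counts = {"successful": 0, "empty": 0, "collision": 0}
    for _ in range(num_slots):
        transmitters = sum(random.random() < p for _ in range(num_nodes))
        if transmitters == 1:
            counts["successful"] += 1    # exactly one sender: the frame gets through
        elif transmitters == 0:
            counts["empty"] += 1         # every node skipped the slot
        else:
            counts["collision"] += 1     # two or more senders: all frames are lost
    return {k: v / num_slots for k, v in counts.items()}

print(slotted_aloha())                   # with N = 10 and p = 1/N, about 0.39 of slots succeed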
Slotted ALOHA
would appear to have many advantages. Unlike channel partitioning, slotted
ALOHA allows a single active node (that is, a node with a frame to send)
to continuously transmit frames at the full rate of the channel. Slotted
ALOHA is also highly decentralized, because each node detects collisions
and independently decides when to retransmit. (Slotted ALOHA does, however,
require the slots to be synchronized in the nodes; we'll shortly discuss
an unslotted version of the ALOHA protocol, as well as CSMA protocols,
none of which require such synchronization and are therefore fully decentralized.)
Slotted ALOHA is also an extremely simple protocol.
Slotted ALOHA
works well when there is only one active node, but how efficient is it
when there are multiple active nodes? There are two possible efficiency
concerns here. First, as shown in Figure 5.14, when there are multiple
active nodes, a certain fraction of the slots will have collisions
and will therefore be "wasted." The second concern is that another fraction
of the slots will be empty because all active nodes refrain from
transmitting as a result of the probabilistic transmission policy. The
only "unwasted" slots will be those in which exactly one node transmits.
A slot in which exactly one node transmits is said to be a successful
slot. The efficiency of a slotted multiple access protocol is
defined to be the long-run fraction of successful slots in the case when
there are a large number of active nodes, each always having a large number
of frames to send. Note that if no form of access control were used, and
each node were to immediately retransmit after each collision, the efficiency
would be zero. Slotted ALOHA clearly increases the efficiency beyond zero,
but by how much?
Figure 5.14:
Nodes 1, 2, and 3 collide in the first slot. Node 2 finally succeeds in
the fourth slot, node 1 in the eighth slot, and node 3 in the ninth slot.
The labels C, E, and S represent "collision slot," "empty slot," and "successful slot," respectively.
We now proceed
to outline the derivation of the maximum efficiency of slotted ALOHA. To
keep this derivation simple, let's modify the protocol a little and assume
that each node attempts to transmit a frame in each slot with probability
p. (That is, we assume that each node always has a frame to send
and that the node transmits with probability p for a fresh frame
as well as for a frame that has already suffered a collision.) Suppose
first there are N nodes. Then the probability that a given slot
is a successful slot is the probability that one of the nodes transmits
and that the remaining N - 1 nodes do not transmit. The probability
that a given node transmits is p; the probability that the remaining
nodes do not transmit is (1 - p)^(N-1). Therefore the probability a given node has a success is p(1 - p)^(N-1). Because there are N nodes, any one of which could be the single transmitter, the probability that exactly one node has a success (that is, that the slot is a successful slot) is Np(1 - p)^(N-1).
Thus, when there are N active nodes, the efficiency of slotted ALOHA is Np(1 - p)^(N-1). To obtain the maximum efficiency for N active nodes, we have to find the p* that maximizes this expression. (See the homework problems for a general outline of this derivation.) And to obtain the maximum efficiency for a large number of active nodes, we take the limit of Np*(1 - p*)^(N-1) as N approaches infinity. (Again, see the homework problems.) After performing these calculations, we'll find that the maximum efficiency of the protocol is given by 1/e ≈ 0.37. That is, when a large number
of nodes have many frames to transmit, then (at best) only 37% of the slots
do useful work. Thus the effective transmission rate of the channel is
not R bps but only 0.37 R bps! A similar analysis also shows
that 37% of the slots go empty and 26% of slots have collisions. Imagine
the poor network administrator who has purchased a 100 Mbps slotted ALOHA
system, expecting to be able to use the network to transmit data among
a large number of users at an aggregate rate of, say, 80 Mbps! Although
the channel is capable of transmitting a given frame at the full channel
rate of 100 Mbps, in the long term, the successful throughput of this channel
will be less than 37 Mbps.
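These limiting values are easy to check numerically. The following sketch evaluates Np(1 - p)^(N-1) at p* = 1/N (the maximizing value that drops out of the homework derivation referred to above) and shows the efficiency settling toward 1/e as N grows.

import math

# Numeric check of the slotted ALOHA efficiency Np(1 - p)^(N-1) at its maximizing
# probability p* = 1/N, which follows from setting the derivative with respect to p to zero.
def max_slotted_efficiency(N):
    p_star = 1 / N
    return N * p_star * (1 - p_star) ** (N - 1)

for N in (2, 5, 10, 100, 1000):
    print(N, round(max_slotted_efficiency(N), 4))

print("1/e =", round(1 / math.e, 4))     # the large-N limit, about 0.368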
ALOHA
The slotted
ALOHA protocol required that all nodes synchronize their transmissions
to start at the beginning of a slot. The first ALOHA protocol [Abramson
1970] was actually an unslotted, fully decentralized protocol. In so-called
pure ALOHA, when a frame first arrives (that is, a network-layer datagram
is passed down from the network layer at the sending node), the node immediately
transmits the frame in its entirety into the broadcast channel. If a transmitted
frame experiences a collision with one or more other transmissions, the
node will then immediately (after completely transmitting its collided
frame) retransmit the frame with probability p. Otherwise, the node
waits for a frame transmission time. After this wait, it then transmits
the frame with probability p, or waits (remaining idle) for another
frame time with probability 1 - p.
To determine
the maximum efficiency of pure ALOHA, we focus on an individual node. We'll
make the same assumptions as in our slotted ALOHA analysis and take the
frame transmission time to be the unit of time. At any given time, the
probability that a node is transmitting a frame is p. Suppose this
frame begins transmission at time t0. As shown in Figure
5.15, in order for this frame to be successfully transmitted, no other
nodes can begin their transmission in the interval of time [t0
- 1, t0]. Such a transmission would overlap with the
beginning of the transmission of node i's frame. The probability
that all other nodes do not begin a transmission in this interval is (1 - p)^(N-1). Similarly, no other node can begin a transmission while node i is transmitting, as such a transmission would overlap with the latter part of node i's transmission. The probability that all other nodes do not begin a transmission in this interval is also (1 - p)^(N-1). Thus, the probability that a given node has a successful transmission is p(1 - p)^(2(N-1)).
By taking limits as in the slotted ALOHA case, we find that the maximum
efficiency of the pure ALOHA protocol is only 1/(2e)--exactly half
that of slotted ALOHA. This then is the price to be paid for a fully decentralized
ALOHA protocol.
Figure 5.15:
Interfering transmissions in pure ALOHA
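A parallel check can be run for pure ALOHA. The sketch below evaluates Np(1 - p)^(2(N-1)) at its maximizing probability p* = 1/(2N - 1) (again the result of the derivative calculation sketched in the homework problems) alongside the slotted ALOHA maximum, making the factor-of-two gap visible.

import math

# Maximum efficiency of slotted vs. pure ALOHA, using the maximizing probabilities
# p* = 1/N and p* = 1/(2N - 1) respectively.
def max_slotted(N):
    p = 1 / N
    return N * p * (1 - p) ** (N - 1)

def max_pure(N):
    p = 1 / (2 * N - 1)
    return N * p * (1 - p) ** (2 * (N - 1))

for N in (10, 100, 1000):
    print(N, round(max_slotted(N), 4), round(max_pure(N), 4))

# Large-N limits: 1/e is about 0.368 and 1/(2e) is about 0.184.
print(round(1 / math.e, 4), round(1 / (2 * math.e), 4))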
Norm Abramson and Alohanet
Norm Abramson, a Ph.D. engineer, had a passion for surfing
and an interest in packet switching. This combination of interests brought
him to the University of Hawaii in 1969. Hawaii consists of many mountainous
islands, making it difficult to install and operate land-based networks.
When not surfing, Abramson thought about how to design a network that does
packet switching over radio. The network he designed had one central host
and several satellite nodes scattered over the Hawaiian islands. The network
had two channels, each using a different frequency band. The downlink channel broadcast packets from the central host to the satellite hosts, and the upstream channel sent packets from the satellite hosts to the central host.
In addition to sending informational packets, the central host also sent
on the downstream channel an acknowledgement for each packet successfully
received from the satellite hosts.
Because the satellite hosts transmitted packets in a decentralized
fashion, collisions on the upstream channel inevitably occurred. This observation
led Abramson to devise the pure Aloha protocol, as described in this chapter.
In 1970, with continued funding from ARPA, Abramson connected his Alohanet
to the ARPAnet. Abramson's work is important not only because it was the
first example of a radio packet network, but also because it inspired Bob
Metcalfe. A few years after Abramson's invention, Metcalfe modified the
Aloha protocol to create the CSMA/CD protocol and the Ethernet LAN.
CSMA--Carrier
Sense Multiple Access
In both slotted
and pure ALOHA, a node's decision to transmit is made independently of
the activity of the other nodes attached to the broadcast channel. In particular,
a node neither pays attention to whether another node happens to be transmitting
when it begins to transmit, nor stops transmitting if another node begins
to interfere with its transmission. In our cocktail party analogy, ALOHA
protocols are quite like a boorish partygoer who continues to chatter away
regardless of whether other people are talking. As humans, we have human
protocols that allow us to not only behave with more civility, but also
to decrease the amount of time spent "colliding" with each other in conversation
and consequently increase the amount of data we exchange in our conversations.
Specifically, there are two important rules for polite human conversation:
- Listen before speaking. If someone else is speaking, wait until they are done. In the networking world, this is called carrier sensing--a node listens to the channel before transmitting. If a frame from another node is currently being transmitted into the channel, a node then waits ("backs off") a random amount of time and then again senses the channel. If the channel is sensed to be idle, the node then begins frame transmission. Otherwise, the node waits another random amount of time and repeats this process.
- If someone else begins talking at the same time, stop talking. In the networking world, this is called collision detection--a transmitting node listens to the channel while it is transmitting. If it detects that another node is transmitting an interfering frame, it stops transmitting and uses some protocol to determine when it should next attempt to transmit.
These two rules
are embodied in the family of CSMA (carrier sense multiple access)
and CSMA/CD (CSMA with collision detection) protocols [Kleinrock
1975b; Metcalfe
1976; Lam
1980; Rom
1990]. Many variations on CSMA and CSMA/CD have been proposed, with
the differences being primarily in the manner in which nodes perform backoff.
The reader can consult these references for the details of these protocols.
We'll study the CSMA/CD scheme used in Ethernet in detail in Section 5.5.
Here, we'll consider a few of the most important, and fundamental, characteristics
of CSMA and CSMA/CD.
The first question that one might ask about CSMA is this: if all nodes perform carrier sensing, why do collisions occur in the first place? After all, a node will refrain
from transmitting whenever it senses that another node is transmitting.
The answer to the question can best be illustrated using space-time diagrams
[Molle
1987]. Figure 5.16 shows a space-time diagram of four nodes (A, B,
C, D) attached to a linear broadcast bus. The horizontal axis shows the
position of each node in space; the vertical axis represents time.
Figure 5.16:
Space-time diagram of two CSMA nodes with colliding transmissions
At time t0,
node B senses the channel is idle, as no other nodes are currently transmitting.
Node B thus begins transmitting, with its bits propagating in both directions
along the broadcast medium. The downward propagation of B's bits in Figure
5.16 with increasing time indicates that a nonzero amount of time is needed
for B's bits to actually propagate (albeit at near the speed-of-light)
along the broadcast medium. At time t1 (t1
> t0), node D has a frame to send. Although node B is
currently transmitting at time t1, the bits being transmitted
by B have yet to reach D, and thus D senses the channel idle at t1.
In accordance with the CSMA protocol, D thus begins transmitting its frame.
A short time later, B's transmission begins to interfere with D's transmission
at D. From Figure 5.16, it is evident that the end-to-end channel propagation delay of a broadcast channel--the time it takes for a signal to propagate from one end of the channel to the other--will play a crucial role in determining its performance. The longer this propagation delay, the larger the chance
that a carrier-sensing node is not yet able to sense a transmission that
has already begun at another node in the network.
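The effect of propagation delay can be quantified with a one-line test: node D collides with node B only if D is ready to send before B's first bit has had time to reach it. The sketch below uses invented positions and an assumed propagation speed of 2 x 10^8 m/s (a common rule of thumb for copper and fiber) to replay the Figure 5.16 scenario.

# Does a second CSMA node start transmitting inside the first node's "vulnerability
# window"? Positions, times, and the 2e8 m/s propagation speed are example assumptions.
PROP_SPEED = 2e8                     # meters per second

def collides(pos_b, pos_d, t0, t1):
    """True if D, ready to send at t1 >= t0, still senses the channel idle because
    B's signal (started at t0) has not yet propagated to D's position."""
    prop_delay = abs(pos_d - pos_b) / PROP_SPEED
    return t1 < t0 + prop_delay

# B at 0 m, D at 1,000 m on the same bus; the propagation delay between them is 5 microseconds.
print(collides(0, 1000, t0=0.0, t1=2e-6))    # True: D starts at 2 us and collides
print(collides(0, 1000, t0=0.0, t1=10e-6))   # False: at 10 us D hears B's carrier and defers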
In Figure 5.16,
nodes do not perform collision detection; both B and D continue to transmit
their frames in their entirety even though a collision has occurred. When
a node performs collision detection, it will cease transmission as soon
as it detects a collision. Figure 5.17 shows the same scenario as in Figure
5.16, except that the two nodes each abort their transmission a short time
after detecting a collision. Clearly, adding collision detection to a multiple
access protocol will help protocol performance by not transmitting a useless,
damaged (by interference with a frame from another node) frame in its entirety.
The Ethernet protocol we will study in Section 5.5 is a CSMA protocol that
uses collision detection.
Figure 5.17:
CSMA with collision detection
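Putting the two rules together, a CSMA/CD node's behavior is essentially a loop: sense, transmit while listening, abort and back off on a collision. The Python sketch below is only a schematic of that control flow; the channel object and its methods are hypothetical placeholders, and the generic random backoff merely stands in for whatever rule a particular protocol (such as Ethernet's, described in Section 5.5) actually uses.

import random

class ToyChannel:
    """Hypothetical stand-in for a broadcast channel so that the sketch runs on its
    own; busy/collision outcomes here are random rather than driven by real nodes."""
    def is_busy(self):                    return random.random() < 0.3
    def wait(self, seconds):              pass
    def start_transmission(self, frame):  pass
    def collision_detected(self):         return random.random() < 0.2
    def abort(self):                      pass
    def finish(self):                     pass

def csma_cd_send(frame, channel, max_attempts=16):
    """Schematic CSMA/CD control flow: carrier sense, then collision detection."""
    for attempt in range(max_attempts):
        while channel.is_busy():                       # listen before speaking
            channel.wait(random.uniform(0, 1e-3))
        channel.start_transmission(frame)
        if channel.collision_detected():               # listen while speaking
            channel.abort()                            # stop rather than finish a damaged frame
            channel.wait(random.uniform(0, 1e-3) * (attempt + 1))   # generic random backoff
            continue
        channel.finish()
        return True                                    # transmitted without a collision
    return False                                       # give up after repeated collisions

print(csma_cd_send(b"frame", ToyChannel()))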
5.3.3: Taking-Turns
Protocols
Recall that two
desirable properties of a multiple access protocol are (1) when only one
node is active, the active node has a throughput of R bps, and (2)
when M nodes are active, then each active node has a throughput
of nearly R/M bps. The ALOHA and CSMA protocols have this
first property but not the second. This has motivated researchers to create
another class of protocols--the taking-turns protocols. As with
random-access protocols, there are dozens of taking-turns protocols, and
each one of these protocols has many variations. We'll discuss two of the
more important protocols here. The first one is the polling protocol.
The polling protocol requires one of the nodes to be designated as a master
node. The master node polls each of the nodes in a round-robin fashion.
In particular, the master node first sends a message to node 1, saying
that it can transmit up to some maximum number of frames. After node 1
transmits some frames, the master node tells node 2 it can transmit up
to the maximum number of frames. (The master node can determine when a
node has finished sending its frames by observing the lack of a signal
on the channel.) The procedure continues in this manner, with the master
node polling each of the nodes in a cyclic manner.
The polling
protocol eliminates the collisions and the empty slots that plague the
random access protocols. This allows it to have a much higher efficiency.
But it also has a few drawbacks. The first drawback is that the protocol
introduces a polling delay--the amount of time required to notify a node
that it can transmit. If, for example, only one node is active, then the
node will transmit at a rate less than R bps, as the master node
must poll each of the inactive nodes in turn, each time the active node
has sent its maximum number of frames. The second drawback, which is potentially
more serious, is that if the master node fails, the entire channel becomes
inoperative.
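The master node's round-robin behavior, and the polling-delay drawback, can be seen in a few lines of Python. The per-poll overhead below is an invented illustration, not a parameter of any particular standard.

from collections import deque

POLL_COST = 1               # invented overhead (in slot times) to notify a node it may send
MAX_FRAMES_PER_POLL = 3     # maximum frames a node may send per poll

def run_polling(queues, rounds=1000):
    """queues: one deque of pending frames per node. Returns (frames sent, time used)."""
    sent = elapsed = 0
    for _ in range(rounds):
        for q in queues:                          # master polls every node in turn
            elapsed += POLL_COST                  # polling delay is paid even for idle nodes
            for _ in range(min(len(q), MAX_FRAMES_PER_POLL)):
                q.popleft()
                sent += 1
                elapsed += 1                      # one slot time per transmitted frame
    return sent, elapsed

# Only node 0 is active: its useful throughput is diluted by polls of the three idle nodes.
queues = [deque(range(10_000)), deque(), deque(), deque()]
sent, elapsed = run_polling(queues)
print(sent, elapsed, round(sent / elapsed, 2))    # fraction of channel time doing useful work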
The second taking-turns protocol is the token-passing protocol. In this protocol there is no master node. A small, special-purpose frame known as a token is exchanged among the nodes in some fixed order. For example, node 1 might always send the token to node 2, node 2 might always send the token to node 3, and node N might always send the token to node 1. When a node
receives a token, it holds onto the token only if it has some frames to
transmit; otherwise, it immediately forwards the token to the next node.
If a node does have frames to transmit when it receives the token, it sends
up to a maximum number of frames and then forwards the token to the next
node. Token passing is decentralized and has a high efficiency. But it
has its problems as well. For example, the failure of one node can crash
the entire channel. Or if a node accidentally neglects to release the token,
then some recovery procedure must be invoked to get the token back in circulation.
Over the years many token-passing products have been developed, and each
one had to address these as well as other sticky issues.
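A token-passing round can be modeled in the same spirit. In the toy sketch below, the token visits nodes in a fixed order; a node with queued frames sends up to a maximum number and then forwards the token, while an idle node forwards it immediately. (Failure handling, lost-token recovery, and the other sticky issues mentioned above are deliberately left out.)

from collections import deque

MAX_FRAMES_PER_TOKEN = 3    # maximum frames a node may send while holding the token

def run_token_passing(queues, token_passes=1000):
    """queues: one deque of pending frames per node; the token circulates in index order."""
    sent, holder = 0, 0
    for _ in range(token_passes):
        q = queues[holder]
        for _ in range(min(len(q), MAX_FRAMES_PER_TOKEN)):   # send up to the limit, if any
            q.popleft()
            sent += 1
        holder = (holder + 1) % len(queues)                  # forward the token to the next node
    return sent

queues = [deque(range(100)), deque(), deque(range(5)), deque()]
print(run_token_passing(queues))    # -> 105: all queued frames eventually get through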
5.3.4: Local Area
Networks (LANs)
Multiple access
protocols are used in conjunction with many different types of broadcast
channels. They have been used for satellite and wireless channels, whose
nodes transmit over a common frequency spectrum. They are currently used
in the upstream channel for cable access to the Internet (see Section 1.5),
and they are extensively used in local area networks (LANs).
Recall that
a LAN is a computer network that is concentrated in a geographical
area, such as in a building or on a university campus. When a user accesses
the Internet from a university or corporate campus, the access is almost
always by way of a LAN. For this type of Internet access, the user's host
is a node on the LAN, and the LAN provides access to the Internet through
a router, as shown in Figure 5.18. The LAN is a single "link" between each
user host and the router; it therefore uses a link-layer protocol, part
of which is a multiple access protocol. The transmission rate, R,
of most LANs is very high. Even in the early 1980s, 10 Mbps LANs were common;
today, 100 Mbps LANs are common, and 1 Gbps LANs are available.
Figure 5.18:
User hosts access an Internet Web server through a LAN. The broadcast channel
between a user host and the router consists of one "link."
In the 1980s
and the early 1990s, two classes of LAN technologies were popular in the
workplace. The first class consists of the Ethernet LANs (also known as
802.3 LANs [IEEE
802.3 1998; Spurgeon
1999]), which are random-access-based. The second class of LAN technologies consists of token-passing technologies, including token ring (also known as IEEE
802.5 [IEEE
802.5 1998]) and FDDI (also known as fiber distributed data
interface [Jain
1994]). Because we shall explore the Ethernet technologies in some detail in Section 5.5, we focus our discussion here on the token-passing
LANs. Our discussion on token-passing technologies is intentionally brief,
since these technologies have become relatively minor players in the face
of relentless Ethernet competition. Nevertheless, in order to provide examples
of token-passing technology and to give a little historical perspective,
it is useful to say a few words about token rings.
In a token ring
LAN, the N nodes of the LAN (hosts and routers) are connected in
a ring by direct links. The topology of the token ring defines the token-passing
order. When a node obtains the token and sends a frame, the frame propagates
around the entire ring, thereby creating a virtual broadcast channel. The
node that sends the frame has the responsibility of removing the frame
from the ring. FDDI was designed for geographically larger LANs, including
so-called metropolitan area networks (MANs). For geographically large LANs
(spread out over several kilometers) it is inefficient to let a frame propagate
back to the sending node once the frame has passed the destination node.
FDDI has the destination node remove the frame from the ring. (Strictly
speaking, FDDI is not a pure broadcast channel, as not every node receives every transmitted frame.) You can learn more about token ring and
FDDI by visiting the 3Com adapter page [3Com
1999].