In the previous
sections, we identified both the principles and the mechanisms used to
provide quality of service in the Internet. In this section, we consider
how these ideas are exploited in a particular architecture for providing
quality of service in the Internet--the so-called Intserv (Integrated Services)
Internet architecture. Intserv is a framework developed within the IETF
to provide individualized quality-of-service guarantees to individual application
sessions. Two key features lie at the heart of the Intserv architecture:
- Reserved resources. A router is required to know what amounts of its resources (buffers, link bandwidth) are already reserved for ongoing sessions.
- Call setup. A session requiring QoS guarantees must first be able to reserve sufficient resources at each network router on its source-to-destination path to ensure that its end-to-end QoS requirement is met. This call setup (also known as call admission) process requires the participation of each router on the path. Each router must determine the local resources required by the session, consider the amounts of its resources that are already committed to other ongoing sessions, and determine whether it has sufficient resources to satisfy the per-hop QoS requirement of the session at this router without violating local QoS guarantees made to an already-admitted session.
Figure 6.31 depicts
the call setup process.
Figure 6.31:
The call setup process
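The hop-by-hop nature of this call setup process can be sketched in code. The sketch below is illustrative only: the `Router` class, its `can_admit` method, and the specific resource fields are assumptions for exposition, not part of any Intserv specification.

```python
# Sketch of Intserv-style call setup along a source-to-destination path.
# Each router checks whether it can commit the requested resources without
# violating guarantees to already-admitted sessions; the call is admitted
# only if every router on the path accepts it.

class Router:
    def __init__(self, name, link_bw_bps, buffer_bytes):
        self.name = name
        self.free_bw = link_bw_bps      # unreserved link bandwidth
        self.free_buf = buffer_bytes    # unreserved buffer space
        self.sessions = []

    def can_admit(self, rate_bps, buf_bytes):
        # Local decision: would granting this reservation still leave the
        # resources already committed to ongoing sessions intact?
        return rate_bps <= self.free_bw and buf_bytes <= self.free_buf

    def reserve(self, session, rate_bps, buf_bytes):
        self.free_bw -= rate_bps
        self.free_buf -= buf_bytes
        self.sessions.append(session)

def call_setup(path, session, rate_bps, buf_bytes):
    # First pass: every router on the path must accept the request.
    if not all(r.can_admit(rate_bps, buf_bytes) for r in path):
        return False                    # call blocked: insufficient resources
    # Second pass: commit the reservation at each router.
    for r in path:
        r.reserve(session, rate_bps, buf_bytes)
    return True

path = [Router("R1", 10_000_000, 64_000), Router("R2", 5_000_000, 32_000)]
print(call_setup(path, "audio-1", 1_000_000, 8_000))   # True: admitted everywhere
print(call_setup(path, "video-1", 6_000_000, 8_000))   # False: R2 lacks bandwidth
```

The second call is blocked because a single router (R2) cannot commit the requested bandwidth, even though R1 could; this reflects the requirement that every router on the path participate in the decision.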
Let us now consider
the steps involved in call admission in more detail:
- Traffic characterization and specification of the desired QoS. In order for a router to determine whether or not its resources are sufficient to meet the QoS requirements of a session, that session must first declare its QoS requirement, as well as characterize the traffic that it will be sending into the network, and for which it requires a QoS guarantee. In the Intserv architecture, the so-called Rspec (R for reserved) defines the specific QoS being requested by a connection; the so-called Tspec (T for traffic) characterizes the traffic the sender will be sending into the network, or that the receiver will be receiving from the network. The specific form of the Rspec and Tspec will vary, depending on the service requested, as discussed below. The Tspec and Rspec are defined in part in RFC 2210 and RFC 2215.
- Signaling for call setup. A session's Tspec and Rspec must be carried to the routers at which resources will be reserved for the session. In the Internet, the RSVP protocol, which is discussed in detail in the next section, is currently the signaling protocol of choice. RFC 2210 describes the use of the RSVP resource reservation protocol with the Intserv architecture.
- Per-element call admission. Once a router receives the Tspec and Rspec for a session requesting a QoS guarantee, it can determine whether or not it can admit the call. This call admission decision will depend on the traffic specification, the requested type of service, and the existing resource commitments already made by the router to ongoing sessions. Per-element call admission is shown in Figure 6.32.
Figure 6.32:
Per-element call behavior
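A per-element admission decision of this kind might be sketched as follows. The `Tspec` and `Rspec` fields shown are a deliberate simplification of the actual RFC 2210/2215 parameter formats, and the admission rule itself is an illustrative assumption, not the rule any particular router uses.

```python
from dataclasses import dataclass

@dataclass
class Tspec:
    # Simplified leaky-bucket traffic characterization (see Section 6.6).
    r: float   # token (average) rate, bits/sec
    b: float   # bucket depth, bits

@dataclass
class Rspec:
    # Simplified reservation request.
    R: float   # requested service rate, bits/sec

def admit(tspec, rspec, link_bw, reserved_bw):
    """Per-element decision: admit only if the requested rate R fits within
    the link's unreserved bandwidth, and R exceeds the session's average
    rate r (otherwise the session's queue could grow without bound)."""
    if rspec.R <= tspec.r:
        return False
    return reserved_bw + rspec.R <= link_bw

# Example: a 10 Mbps link with 7 Mbps already committed to other sessions.
t = Tspec(r=1_000_000, b=100_000)
print(admit(t, Rspec(R=2_000_000), 10_000_000, 7_000_000))  # True: fits
print(admit(t, Rspec(R=4_000_000), 10_000_000, 7_000_000))  # False: over-commits
```

Note how the decision combines all three inputs listed above: the traffic specification (Tspec), the requested service (Rspec), and the router's existing commitments (the reserved bandwidth).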
The Intserv
architecture defines two major classes of service: guaranteed service and
controlled-load service. We will see shortly that each provides a very
different form of quality-of-service guarantee.
6.7.1: Guaranteed Quality of Service
The guaranteed
service specification, defined in RFC 2212, provides firm (mathematically
provable) bounds on the queuing delays that a packet will experience in
a router. While the details behind guaranteed service are rather complicated,
the basic idea is really quite simple. To a first approximation, a source's
traffic characterization is given by a leaky bucket (see Section 6.6) with
parameters (r,b) and the requested service is characterized
by a transmission rate, R, at which packets will be transmitted.
In essence, a session requesting guaranteed service is requiring that the
bits in its packets be guaranteed a forwarding rate of R bits/sec.
Given that traffic is specified using a leaky bucket characterization,
and a guaranteed rate of R is being requested, it is also possible
to bound the maximum queuing delay at the router. Recall that with a leaky
bucket traffic characterization, the amount of traffic (in bits) generated
over any interval of length t is bounded by rt + b.
Recall also from Section 6.6 that when a leaky bucket source is fed into
a queue that guarantees that queued traffic will be serviced at least at
a rate of R bits per second, the maximum queuing delay experienced
by any packet will be bounded by b/R, as long as R
is greater than r. The actual delay bound guaranteed under the guaranteed
service definition is slightly more complicated, due to packetization effects
(the simple b/R bound assumes that data is in the form of
a fluid-like flow rather than discrete packets), the fact that the traffic
arrival process is subject to the peak rate limitation of the input link
(the simple b/R bound assumes that a burst of b bits
can arrive in zero time), and possible additional variations in a packet's
transmission time.
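The fluid-approximation bound above can be checked with a small numerical example. The specific parameter values below are illustrative assumptions.

```python
# Fluid-approximation delay bound for guaranteed service:
# a leaky-bucket (r, b) source served at a guaranteed rate R > r
# experiences a maximum queuing delay of b/R seconds (Section 6.6).

def max_traffic(r, b, t):
    # Leaky-bucket bound on traffic (bits) generated in any interval of length t.
    return r * t + b

def delay_bound(r, b, R):
    if R <= r:
        raise ValueError("R must exceed r for the queue to remain bounded")
    return b / R

r = 1_000_000     # average rate: 1 Mbps
b = 200_000       # bucket depth: 200 kbits
R = 2_000_000     # guaranteed service rate: 2 Mbps

print(max_traffic(r, b, 0.5))   # at most 700,000 bits in any 0.5 s interval
print(delay_bound(r, b, R))     # 0.1 s maximum queuing delay
```

Doubling the guaranteed rate R halves the worst-case delay b/R; as the text notes, the bound defined in RFC 2212 adds further terms for packetization and link peak-rate effects that this fluid sketch omits.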
6.7.2: Controlled-Load Network Service
A session receiving
controlled-load service will receive "a quality of service closely approximating
the QoS that same flow would receive from an unloaded network element"
[RFC
2211]. In other words, the session may assume that a "very high percentage"
of its packets will successfully pass through the router without being
dropped and will experience a queuing delay in the router that is close
to zero. Interestingly, controlled-load service makes no quantitative guarantees
about performance--it does not specify what constitutes a "very high percentage"
of packets nor what quality of service closely approximates that of an
unloaded network element.
The controlled-load
service targets real-time multimedia applications that have been developed
for today's Internet. As we have seen, these applications perform quite
well when the network is unloaded, but rapidly degrade in performance as
the network becomes more loaded.