ITPRC News - September 2002

A Very Brief Intro to QoS
By Irwin Lazar

The reason this month’s column is called “A Very Brief Intro to QoS” is that QoS is a topic worthy of a book (and there are already lots of them).  In fact, defining the term QoS itself could consume a book.  Those from the ATM/telco world tend to define QoS as an absolute guarantee, as provided by services such as ATM constant bit rate (CBR) or time-division multiplexing (TDM).  Those from an IP-centric background tend to think of QoS as something that simply provides prioritization for different classes of traffic, such as voice and video (often leading to confusion between QoS and the term “class of service” (CoS)).  Since I don’t have the time to write a book on the subject, I instead want to take a few minutes to provide some brief definitions of popular approaches to prioritizing IP traffic flows (QoS for IP, if you will).

The Whys
Why prioritize?  The initial design of IP assumed all traffic was of equal importance: there was no need to send one type of traffic ahead of another, and in times of congestion, packets were simply dropped.  In fact, dropped packets aren’t a drawback of IP networks; they are a feature.  This approach worked fine in the early days of IP communications, when the bulk of traffic was messaging and file transfer, but dropping packets during congestion isn’t a workable strategy for applications such as voice and video, in which packet loss can have a severely adverse impact on performance.  Thus, we need to take action to prevent packet loss for these types of applications.

The Hows
There are two approaches to minimizing packet loss: over-provisioning and the use of IP QoS protocols.  Over-provisioning simply says “let’s throw more bandwidth at the problem”: if links get congested, we increase their capacity.  Over-provisioning works well where bandwidth is cheap (e.g., the LAN), but it can be cost-prohibitive where bandwidth is expensive (e.g., the WAN), so we need something else that will not only reduce the number of dropped packets but also make sure that the right packets get through in times of congestion.

Reducing Packet Loss
Reducing packet loss without harming drop-sensitive traffic can be accomplished in two ways: dropping, or slowing down the transmission of, packets that are tolerant of loss.  Approaches such as Weighted Random Early Detection (WRED) identify packets that can tolerate being dropped, such as FTP, e-mail, or other bulk data transfers, and drop those packets during congestion to ensure that there is enough bandwidth for the drop-sensitive traffic to get through.  Another approach, called TCP rate shaping and pioneered by companies such as Packeteer, Sitara, and NetReality, intervenes in the TCP windowing process to slow down the transmission of drop-tolerant traffic, freeing up the line for other applications.  Check out their web sites for more info on each vendor’s “secret sauce.”
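To make the WRED idea above concrete, here is a minimal Python sketch (the function name and the threshold numbers are illustrative assumptions, not any vendor’s actual implementation): below a per-class minimum queue depth nothing is dropped, between the minimum and maximum the drop probability rises linearly, and above the maximum everything is dropped.

```python
import random

def wred_drop(avg_queue_depth, min_th, max_th, max_p):
    """Decide whether to drop a packet under a (W)RED profile.

    Below min_th no packets are dropped; between min_th and max_th the
    drop probability rises linearly up to max_p; at or above max_th all
    packets are dropped.  In WRED, each traffic class gets its own
    (min_th, max_th, max_p) profile, so bulk traffic can be dropped
    aggressively while drop-sensitive traffic is spared.
    """
    if avg_queue_depth < min_th:
        return False
    if avg_queue_depth >= max_th:
        return True
    p = max_p * (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < p

# Hypothetical per-class profiles: bulk (FTP/e-mail) starts dropping
# early and often; voice starts dropping late and rarely.
profiles = {"bulk": (10, 30, 0.5), "voice": (25, 40, 0.1)}
```

The point of the per-class profiles is exactly what the column describes: during congestion the router sheds loss-tolerant bulk traffic first, leaving room for the drop-sensitive flows.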

Making Sure The Right Packets Get Through
In addition to reducing packet loss, protecting sensitive traffic means that there needs to be a way of ensuring that sensitive packets get through in times of network congestion.  There are multiple approaches to accomplishing this goal, each of which identifies which traffic flows need to get through, and processes those packets based on a queuing strategy.  A few of the more popular techniques include:

  • Priority Queuing – routers process packets in strict priority order, with priority defined by packet size, DiffServ code point marking, port number, or destination IP address.  This approach is difficult to configure and manage, and it can starve low-priority packets entirely, since high-priority packets are always served as they arrive.

  • Weighted Fair Queuing – This approach uses a round-robin distribution that gives priority to smaller packets, which are likely to be latency-sensitive (applications like FTP and e-mail tend to use larger packet sizes).  Since a round-robin approach is used, even the large packets get sent from time to time, so no flow is choked off.

  • Class-Based Weighted Fair Queuing – Essentially the same principle as WFQ, but queues are serviced based on DiffServ code point markings, giving network managers additional control over how queues are managed.

  • Low Latency Queuing – A Cisco technique that combines priority queuing with CBWFQ to give absolute priority to certain traffic in times of severe congestion.  This approach ensures that applications such as voice can always get through.
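The scheduling strategies above can be sketched in a few lines of Python.  This toy scheduler combines a strict-priority queue (the LLQ idea) with a weighted round-robin pass over the remaining classes (CBWFQ-style); the queue names and weights are made up for illustration, and real LLQ also polices the priority queue so it cannot monopolize the link, a safeguard omitted here for brevity.

```python
from collections import deque

def llq_schedule(priority_q, class_queues, weights):
    """Yield packets in transmission order.

    The strict-priority queue is always drained first (so voice/video
    never waits behind bulk traffic); when it is empty, each class is
    served round-robin, up to its weight in packets per pass, so no
    class is starved.
    """
    while priority_q or any(class_queues.values()):
        if priority_q:                       # strict priority: send first
            yield priority_q.popleft()
            continue
        for cls, q in class_queues.items():  # weighted round-robin pass
            for _ in range(weights[cls]):
                if q:
                    yield q.popleft()

# Hypothetical queues: voice gets strict priority; "gold" data is
# weighted 2:1 over "bronze" data.
voice = deque(["v1", "v2"])
data = {"gold": deque(["g1", "g2", "g3"]), "bronze": deque(["b1", "b2"])}
order = list(llq_schedule(voice, data, {"gold": 2, "bronze": 1}))
```

Running the sketch sends both voice packets first, then interleaves the data classes in proportion to their weights, which is the behavior the bullet list describes.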

It should be noted that this is a partial list; there are many other alternatives, and we aren’t even getting into the issues of management and configuration (that is another month’s column).

IP QoS methods allow prioritization of latency- and drop-sensitive applications such as voice and video.  Many alternatives exist, and network managers must carefully understand the options offered by their vendors and where each is most effective.

For more information on IP QoS, check out the ITPRC's "QoS" page at 
Irwin Lazar is a Practice Manager for Burton Group where he focuses on strategic planning and network architecture for Fortune 500 enterprises as well as large service providers. He is the conference director for MPLScon and runs The MPLS Resource Center and The Information Technology Professional's Resource Center -

Please send any comments about this article to

All Content Of This Site Is Copyright 2000-2004 - ITPRC.COM
