
CISCO QOS PDF

Saturday, June 1, 2019


This guide describes the benefits you can gain from implementing Cisco IOS QoS in your network, and then focuses on the QoS configuration approach for each technology.



Use the commands in this chapter to configure quality of service (QoS); for additional QoS configuration information and examples, refer to the Cisco IOS Quality of Service Solutions Configuration Guide. With best effort, no QoS policies are implemented. With Integrated Services (IntServ), the Resource Reservation Protocol (RSVP) is used to reserve bandwidth per flow. The Cisco IOS QoS software enables complex networks to control and predictably service a variety of applications, so that mission-critical applications can coexist with other traffic.

Introduction to QoS (Quality of Service)

Delay is the time it takes for a packet to get from the source to a destination; this is called the one-way delay. The time it takes to get from the source to the destination and back is called the round-trip delay.

There are different types of delay; without going into too much detail, let me give you a quick overview:

Processing delay: the time it takes for a device to perform all the tasks required to forward the packet. The router model, CPU, and switching method all affect the processing delay.

Queuing delay: the amount of time a packet waits in a queue. When an interface is congested, the packet has to wait in the queue before it is transmitted.

Serialization delay: the time it takes to clock all the bits of a frame onto the physical interface for transmission.

Propagation delay: the time it takes for the bits to cross the physical medium.
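As a quick worked example of serialization delay (the link speeds and frame size here are just illustrative): a 1500-byte frame is 12,000 bits, so clocking it onto a 10 Mbps interface takes 12,000 / 10,000,000 = 1.2 ms, while the same frame on a 1 Gbps interface takes only about 12 microseconds.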

For propagation delay, for example, the time it takes for bits to travel through a 10-mile fiber optic link is much lower than the time it takes for bits to travel over a satellite link. What we can do with QoS, however, is influence the queuing delay.

For example, you could create a priority queue that is always served before other queues.

Jitter is the variation of one-way delay in a stream of packets. Because of congestion in the network, some packets are delayed more than others: the delay between packets 1 and 2 might be 20 ms, between packets 2 and 3 it might be 40 ms, between packets 3 and 4 only 5 ms, and so on.

Packet loss is always possible. For example, when there is congestion, packets will be queued, but once the queue is full, packets will be dropped.

With QoS, we can at least decide which packets get dropped when this happens.

Traffic Types

With QoS, we can change our network so that certain traffic is preferred over other traffic when it comes to bandwidth, delay, jitter, and loss.

What you need to configure, however, really depends on the applications that you use. Take a large file download as an example: bandwidth is nice to have, since it makes the difference between waiting a few seconds, a few minutes, or a few days for a file like this.


What about delay?

There is a one-way delay to get the data from the server to your computer. When you click on the download link, it might take a short while before the download starts.

What about packet loss? File transfers like these use TCP, and when some packets are lost, TCP will retransmit your data, making sure the download makes it to your computer completely. An application like your web browser that downloads a file is a non-interactive application, often called a batch application or batch transfer. Bandwidth is nice to have since it reduces the time you wait for the download to complete. With QoS, we can assign enough bandwidth to applications like these to ensure downloads complete in time, and keep packet loss to a minimum to prevent retransmissions.

Interactive Applications

Another type of application is the interactive application. Since you are typing commands and waiting for a response (think of a telnet or SSH session to a router), a high delay can be annoying to work with. Satellite links can have a one-way delay of several hundred milliseconds, which means that when you type a few characters, there will be a short pause before you see the characters appear on your console. With QoS, we can ensure that in case of congestion, interactive applications are served before bandwidth-hungry batch applications.
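To make this more concrete, here is a minimal Modular QoS CLI (MQC) sketch on a Cisco IOS router that prefers interactive traffic and still guarantees bandwidth to batch transfers; the class names, match criteria, interface, and percentages are illustrative assumptions rather than values taken from this document:

class-map match-any INTERACTIVE
 match protocol telnet
 match protocol ssh
class-map match-any BULK-DATA
 match protocol ftp
!
policy-map WAN-EDGE
 ! interactive traffic goes into a priority queue, capped at 10 percent of the link
 class INTERACTIVE
  priority percent 10
 ! batch transfers get a 30 percent bandwidth guarantee but may use more when the link is idle
 class BULK-DATA
  bandwidth percent 30
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-EDGE

With a policy along these lines, interactive packets are serviced ahead of the batch traffic during congestion, which addresses the delay concern described above.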

First, let me give you a quick overview of how VoIP works. Imagine a user speaking into an IP phone: with VoIP, we use a codec that processes the analog sound into a digital signal.

The analog sound is digitized in samples of a certain time period, usually 20 ms, so for one second of audio the phone creates 50 IP packets. The G.711 codec, for example, places 160 bytes of voice payload in each of those packets. If you are speaking with someone on the phone, you expect the conversation to be real-time, so voice traffic is sensitive to delay, jitter, and packet loss.
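As a quick back-of-the-envelope calculation (the 40-byte header figure is the usual IP/UDP/RTP overhead and is an assumption here, not a number from this document): 160 bytes of voice payload + 40 bytes of headers = 200 bytes per packet, which is 1,600 bits; at 50 packets per second that is 1,600 x 50 = 80 kbps per call in each direction, before Layer 2 overhead.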

This mix of real-time and bulk traffic eventually lands in switch queues, where Weighted Tail Drop (WTD) decides what to discard when a queue fills. Consider an example of WTD operating on a queue with three drop percentages configured: 40 percent, 60 percent, and 100 percent. Traffic mapped to the 40 percent threshold can only fill the queue to 40 percent of its capacity before it is dropped, traffic mapped to the 60 percent threshold can fill it to 60 percent, and traffic mapped to the 100 percent threshold can fill the queue completely.

In this example, CoS values 6 and 7 have a greater importance than the other CoS values, so they are assigned to the 100 percent drop threshold (the queue-full state). CoS values 4 and 5 are assigned to the 60 percent threshold, and CoS values 0 to 3 are assigned to the 40 percent threshold.
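Using the mls qos command syntax quoted later in this chapter, a CoS-to-threshold mapping along these lines might be configured as follows on a Catalyst switch (the queue number is an illustrative assumption):

! set the two explicit WTD thresholds for ingress queue 1 to 40 and 60 percent
mls qos srr-queue input threshold 1 40 60
! CoS 0-3 map to threshold 1 (40 percent), CoS 4-5 to threshold 2 (60 percent)
mls qos srr-queue input cos-map queue 1 threshold 1 0 1 2 3
mls qos srr-queue input cos-map queue 1 threshold 2 4 5
! CoS 6-7 map to threshold 3, the nonconfigurable queue-full (100 percent) threshold
mls qos srr-queue input cos-map queue 1 threshold 3 6 7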

Suppose the queue has already filled up to its 60 percent threshold and a new frame with CoS value 4 or 5 arrives. Because that frame is subject to the 60 percent threshold, the switch drops it, while a frame with CoS 6 or 7 could still be queued until the queue is completely full.

On the ingress queues, shaped round robin (SRR) sends packets to the stack ring.

On the egress queues, SRR sends packets to the egress port. You can configure SRR on egress queues for sharing or for shaping. However, for ingress queues, sharing is the default mode, and it is the only mode supported.

In shaped mode, the egress queues are guaranteed a percentage of the bandwidth, and they are rate-limited to that amount. Shaped traffic does not use more than the allocated bandwidth even if the link is idle.

Shaping provides a more even flow of traffic over time and reduces the peaks and valleys of bursty traffic. With shaping, the absolute value of each weight is used to compute the bandwidth available for the queues. In shared mode, the queues share the bandwidth among them according to the configured weights. The bandwidth is guaranteed at this level but not limited to it.

For example, if a queue is empty and no longer requires a share of the link, the remaining queues can expand into the unused bandwidth and share it among them. With sharing, the ratio of the weights controls the frequency of dequeuing; the absolute values are meaningless.
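To make shared mode concrete with assumed numbers: if three queues are configured with shared weights of 10, 20, and 70, a fully congested link is divided roughly 10/20/70 percent between them; if the weight-70 queue goes idle, the other two queues split the entire link in a 1:2 ratio, because only the ratio between the weights matters.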

The ingress queueing and scheduling flow works like this: the switch determines the ingress queue number, buffer allocation, and WTD thresholds for the packet; if the thresholds are being exceeded, the packet is dropped; otherwise the packet is queued, the queue is serviced according to the SRR weights, and the packet is sent to the stack ring. Note that SRR services the priority queue for its configured share before servicing the other queue. The switch supports two configurable ingress queues, which are serviced by SRR in shared mode only. You can configure three different thresholds to differentiate among the flows.

You can use the mls qos srr-queue input threshold, the mls qos srr-queue input dscp-map, and the mls qos srr-queue input cos-map global configuration commands to set the thresholds and to map traffic to them. One of the two ingress queues is the priority queue, intended for high-priority user traffic such as Differentiated Services expedited forwarding (EF) or voice traffic.


You can configure the bandwidth required for this traffic as a percentage of the total stack traffic by using the mls qos srr-queue input priority-queue global configuration command. The expedite queue has guaranteed bandwidth.

The switch uses two nonconfigurable queues for traffic that is essential for proper network and stack operation. You assign each packet that flows through the switch to a queue and to a threshold.

Each queue has three drop thresholds: two configurable explicit WTD thresholds and one nonconfigurable implicit threshold preset to the queue-full state. You assign the two explicit WTD threshold percentages for threshold ID 1 and ID 2 to the ingress queues by using the mls qos srr-queue input threshold queue-id threshold-percentage1 threshold-percentage2 global configuration command. Each threshold value is a percentage of the total number of allocated buffers for the queue.

The drop threshold for threshold ID 3 is preset to the queue-full state, and you cannot modify it. For more information about how WTD works, see the Weighted Tail Drop discussion earlier in this chapter. You define the ratio (that is, you allocate the amount of space) with which to divide the ingress buffers between the two queues by using the mls qos srr-queue input buffers percentage1 percentage2 global configuration command.
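A minimal sketch of the two commands just described, with assumed percentages:

! divide the ingress buffer space 60/40 between ingress queues 1 and 2
mls qos srr-queue input buffers 60 40
! set the explicit WTD thresholds for ingress queue 2 to 50 and 70 percent of its buffers
mls qos srr-queue input threshold 2 50 70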

The buffer allocation together with the bandwidth allocation control how much data can be buffered and sent before packets are dropped. You allocate bandwidth as a percentage by using the mls qos srr-queue input bandwidth weight1 weight2 global configuration command.

The ratio of the weights is the ratio of the frequency in which the SRR scheduler sends packets from each queue. You can configure one ingress queue as the priority queue by using the mls qos srr-queue input priority-queue queue-id bandwidth weight global configuration command. The priority queue should be used for traffic such as voice that requires guaranteed delivery because this queue is guaranteed part of the bandwidth regardless of the load on the stack ring. SRR services the priority queue for its configured weight as specified by the bandwidth keyword in the mls qos srr-queue input priority-queue queue-id bandwidth weight global configuration command.

Then, SRR shares the remaining bandwidth with both ingress queues and services them as specified by the weights configured with the mls qos srr-queue input bandwidth weight1 weight2 global configuration command. You can combine the commands described in this section to prioritize traffic by placing packets with particular DSCPs or CoSs into certain queues, by allocating a large queue size or by servicing the queue more frequently, and by adjusting queue thresholds so that packets with lower priorities are dropped.
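Putting the bandwidth and priority-queue commands together, a sketch with assumed queue numbers and weights might look like this:

! make ingress queue 2 the priority queue and guarantee it 10 percent of the stack bandwidth
mls qos srr-queue input priority-queue 2 bandwidth 10
! share the remaining bandwidth between queues 1 and 2 in a 70:30 ratio
mls qos srr-queue input bandwidth 70 30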

For configuration information, see the Configuring Ingress Queue Characteristics section of the configuration guide.

Queueing and Scheduling on Egress Queues

The queueing and scheduling flow for egress ports is similar to the ingress flow. Note that if the expedite queue is enabled, SRR services it until it is empty before servicing the other three queues.


The switch determines the egress queue number and threshold based on the QoS label, queues the packet, and then sends it out the port. Each port supports four egress queues, one of which (queue 1) can be the egress expedite queue.

These queues are assigned to a queue-set. All traffic exiting the switch flows through one of these four queues and is subjected to a threshold based on the QoS label assigned to the packet.

The buffer space is divided between the common pool and the reserved pool. The switch uses a buffer allocation scheme to reserve a minimum amount of buffers for each egress queue, to prevent any queue or port from consuming all the buffers and depriving other queues, and to control whether to grant buffer space to a requesting queue. The switch detects whether the target queue has not consumed more buffers than its reserved amount (under-limit), whether it has consumed all of its maximum buffers (over-limit), and whether the common pool is empty (no free buffers) or not empty (free buffers).

If the queue is not over-limit, the switch can allocate buffer space from the reserved pool or from the common pool if it is not empty. If there are no free buffers in the common pool or if the queue is over-limit, the switch drops the frame.


(Figure: Egress Queue Buffer Allocation, showing each port's four egress queues drawing from a reserved pool and from a shared common pool.)

Buffer and Memory Allocation and WTD Thresholds

You guarantee the availability of buffers, set drop thresholds, and configure the maximum memory allocation for a queue-set by using the mls qos queue-set output qset-id threshold queue-id drop-threshold1 drop-threshold2 reserved-threshold maximum-threshold global configuration command.

Each threshold value is a percentage of the queue's allocated memory, which you specify by using the mls qos queue-set output qset-id buffers allocation1 allocation2 allocation3 allocation4 global configuration command. The sum of all the allocated buffers represents the reserved pool, and the remaining buffers are part of the common pool. Through buffer allocation, you can ensure that high-priority traffic is buffered. For example, if the buffer space is 400, you can allocate 70 percent of it to queue 1 and 10 percent to each of queues 2 through 4.

Queue 1 then has 280 buffers allocated to it, and queues 2 through 4 each have 40 buffers allocated to them. You can guarantee that the allocated buffers are reserved for a specific queue in a queue-set. For example, if a queue has 100 buffers, you can reserve 50 percent (50 buffers); the switch returns the remaining 50 buffers to the common pool. You also can enable a queue in the full condition to obtain more buffers than are reserved for it by setting a maximum threshold.

The switch can allocate the needed buffers from the common pool if the common pool is not empty. You can assign each packet that flows through the switch to a queue and to a threshold. The queues use WTD to support distinct drop percentages for different traffic classes.
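As a sketch, the egress queue-set commands described above could be combined like this, reusing the 70/10/10/10 buffer split from the example (the threshold, reserved, and maximum values are assumptions):

! allocate 70 percent of the buffer space to queue 1 and 10 percent each to queues 2-4
mls qos queue-set output 1 buffers 70 10 10 10
! queue 2: WTD drop thresholds at 40 and 60 percent, reserve 50 percent of its buffers,
! and allow it to borrow from the common pool up to 200 percent of its allocation
mls qos queue-set output 1 threshold 2 40 60 50 200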

You map a port to a queue-set by using the queue-set qset-id interface configuration command. You assign shared or shaped weights to the port by using the srr-queue bandwidth share weight1 weight2 weight3 weight4 or the srr-queue bandwidth shape weight1 weight2 weight3 weight4 interface configuration command.
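At the interface level, a sketch of these commands might look as follows (the interface name, queue-set number, and weights are assumptions):

interface GigabitEthernet1/0/1
 ! use the buffer and threshold settings of queue-set 1 on this port
 queue-set 1
 ! shape egress queue 1 to 1/25 of the port bandwidth; a weight of 0 leaves a queue in shared mode
 srr-queue bandwidth shape 25 0 0 0
 ! share the remaining bandwidth between the other queues in a 30:30:40 ratio
 ! (queue 1's share weight is ignored while it is shaped)
 srr-queue bandwidth share 1 30 30 40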

For an explanation of the differences between shaping and sharing, see the SRR Shaping and Sharing discussion earlier in this chapter. The buffer allocation together with the SRR weight ratios control how much data can be buffered and sent before packets are dropped.

The weight ratio is the ratio of the frequency in which the SRR scheduler sends packets from each queue. For more information, see the Policing and Marking section of the configuration guide. On the ingress side, queueing evaluates the QoS label and the corresponding DSCP or CoS value to select which of the two ingress queues to place a packet into.

A few closing notes: a policy might contain multiple classes, with actions specified for each one of them. It is not possible to classify traffic based on the markings performed by an input QoS policy. Unless otherwise noted, the term switch refers to a standalone switch and a switch stack.

