Saturday, March 24, 2012

Cisco CCNA: Bridges and Switches


Unit 1. The Spanning Tree



In another course in this series, you were introduced to bridges and switches. In this course, you will expand your knowledge about the different types of bridges and switches. You will also be introduced to the features of Cisco Systems' Catalyst 1900 and 2820 switches.
In this unit, you will learn about the Spanning Tree Protocol, the protocol responsible for removing routing loops from a network. You will see how the Spanning Tree Protocol performs this function.

After completing this unit, you should be able to:
  • Recognize routing loops in a network

  • List the benefits of the Spanning Tree Protocol

  • Identify the steps in removing loops with the spanning tree algorithm

  • List the five port states in the Spanning Tree Protocol


This unit provides information that is relevant to the following CCNA exam objective:
  • Describe the operation of the Spanning Tree Protocol and its benefits

Topic 1.1: Routing Loops

*Routing Loops
An earlier course in this series covered bridges and how data travels through them in a small network. In large networks with multiple bridges, there may be more than one path for data to travel from a source node to a destination node. These redundant paths provide backup pathways for data transmission when other pathways fail. However, they also create loops, called routing loops, that can cause problems with data transmission.

*Erroneous Tables
Bridges use constantly updated tables to keep track of which nodes are on the various segments of a network.
When routing loops occur in a network, erroneous data may be entered in these tables. When this happens, problems such as loopback and broadcast proliferation may occur.

*Loopbacks
Loopbacks are infinite loops of data transmission caused by more than one bridge linking two segments. In the graphic below, the two bridges do not know about node Y. So when X transmits a frame to Y, the bridges forward the frame to the next segment. Then each bridge receives the other's frame, updates its routing table to reflect that X is on the other segment, and forwards the frame to the original segment. This process is repeated until Y transmits a message and the bridges update their tables.

*Broadcast Proliferation
Broadcast proliferation is an infinite loop of data transmission that involves broadcasts. Broadcasts are always forwarded by bridges. When a routing loop exists, broadcasts will be received and transmitted continually by the bridges in a routing loop.

*Solution: STP
Loopbacks and broadcast proliferation are huge potential problems when routing loops are present in a network, but routing loops are sometimes unavoidable.
The Spanning Tree Protocol (STP) was developed to overcome the problems presented by routing loops while maintaining the benefit of redundant paths.

Question 1

Topic 1.2: Spanning Tree Protocol

*Functions of STP
STP is part of the IEEE 802.1d standard. This protocol was designed to perform the following functions:
  • Detect and remove routing loops

  • Activate an alternative path if a bridge fails

  • Automatically adapt to a new network configuration

  • Establish individual port states


*Seven-Hop Limit
The algorithm used by the Spanning Tree Protocol imposes a seven-hop limitation. This means that a frame traveling from source node to destination node cannot pass more than seven bridges. Thus, the overall size of the network is limited by the seven-hop rule.

*BPDUs for STP
STP uses configuration messages called BPDUs (Bridge Protocol Data Units). These BPDUs contain information about the individual bridges in a network, and they are used by the spanning tree algorithm to determine the logical network topology and individual port states.

Topic 1.2.1: Removing Loops

*Algorithm for STP
The spanning tree algorithm uses the information in the BPDUs to perform the following functions:
  • Elect a single root bridge
  • Calculate the shortest distance to the root from each bridge
  • Identify a designated bridge for each segment to forward data to the root
  • Identify the port on each bridge with the best path to the root
  • Select ports to be included in the spanning tree


*Electing the Root Bridge
At network startup, each bridge assumes that it is the root bridge, which is the bridge from which all distances are calculated. Each bridge then sends out BPDUs and receives BPDUs from other bridges.
All bridges compare the MAC addresses contained in the BPDUs, and the bridge with the lowest MAC address becomes the root. This process can be overridden if the network administrator assigns a lower ID value to another bridge.
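The election above can be sketched as a simple comparison. This is an illustrative simplification, not Cisco's implementation: bridge names, ID values, and MAC addresses below are hypothetical, and each bridge is compared as an (ID value, MAC address) pair so that a lower administrator-assigned ID wins before the MAC comparison.

```python
# Hypothetical bridges: name -> (ID value, MAC address).
bridges = {
    "A": (32768, "00-10-0B-00-00-02"),
    "B": (32768, "00-10-0B-00-00-01"),  # lowest MAC among equal IDs
    "C": (32768, "00-10-0B-00-00-03"),
}

def elect_root(bridges):
    """Return the name of the bridge whose (ID, MAC) pair compares lowest."""
    return min(bridges, key=lambda name: bridges[name])

print(elect_root(bridges))              # B: lowest MAC wins by default
bridges["C"] = (4096, bridges["C"][1])  # administrator lowers C's ID value
print(elect_root(bridges))              # now C wins despite its higher MAC
```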

*Calculating Root Cost
After the root bridge is determined, each bridge calculates the shortest distance from itself to the root. This distance is measured in hops, and each bridge between the bridge making the calculation and the root bridge constitutes one hop. The root bridge is also considered a hop. The result of the calculation is called the root cost. The root cost is then used to determine designated and standby bridges.
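The hop-count calculation can be sketched with a breadth-first search over a hypothetical topology, where each intermediate bridge and the root itself count as one hop, as described above.

```python
from collections import deque

# Hypothetical topology: each bridge lists its neighboring bridges.
links = {
    "root": ["B1", "B2"],
    "B1": ["root", "B3"],
    "B2": ["root"],
    "B3": ["B1"],
}

def root_cost(links, bridge, root="root"):
    """Breadth-first search for the fewest hops from a bridge to the root."""
    seen, queue = {bridge}, deque([(bridge, 0)])
    while queue:
        node, cost = queue.popleft()
        if node == root:
            return cost
        for neighbor in links[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, cost + 1))

print(root_cost(links, "B1"))  # 1: only the root itself is a hop
print(root_cost(links, "B3"))  # 2: B1 plus the root
```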

*Selecting Designated Bridges
The bridges on each segment compare root costs using BPDUs. The bridge with the lowest root cost in each segment becomes the designated bridge for that segment. All network traffic from a segment going to the root will go through the designated bridge. In case of a tie, the bridge with the lowest ID value becomes the designated bridge. All other bridges on that segment become standby bridges.
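The selection rule above, lowest root cost with ties broken by the lower ID value, can be sketched for one segment. The candidate bridges and their values are hypothetical.

```python
# Hypothetical candidates on one segment: name -> (root cost, bridge ID).
segment = {
    "B4": (2, 40),
    "B5": (2, 25),   # same root cost as B4, but a lower ID
    "B6": (3, 10),   # lowest ID, but a higher root cost
}

def designated_bridge(candidates):
    """Compare (root cost, ID) pairs; Python orders tuples lexicographically."""
    return min(candidates, key=lambda name: candidates[name])

print(designated_bridge(segment))  # B5; B4 and B6 become standby bridges
```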

*Root Cost of Ports
All bridges then undergo a root cost analysis for each port. The port with the shortest path to the root bridge becomes the root port, and all traffic going to the root will go through this port. In case of a tie, the port with the lowest ID becomes the root port. Other ports that connect to bridges farther from the root become designated ports and provide a connection to the root for those bridges. By this definition, all ports on the root bridge are designated ports. Any port that is not selected as a root port or a designated port becomes a standby, or blocking, port and does not accept or transmit frames.

*Spanning Tree Ports
All root ports and designated ports are included in the spanning tree. Frames are only forwarded between root ports and designated ports, and loops are thereby removed.
A consequence of this is that frames cannot be sent directly between bridges that are not on the same shortest pathway to the root.

Question 2

Question 3

Topic 1.2.2: Adapting to New Topologies

*Automatic Reconfiguration
Whenever there is a change in the network topology, the Spanning Tree Protocol adapts to the new configuration.
For instance, if a node or link fails or if a new node or link is added to the network, the network undergoes an automatic reconfiguration.

*BPDU Age Limit
To allow for reconfiguration, the BPDUs used by the Spanning Tree Protocol are assigned an age limit. Once the age of the BPDUs stored by a bridge exceeds that limit, the BPDUs are discarded and the configuration is recalculated.
If a bridge or port fails in the time between these recalculations, or if a new bridge is added to the network, the recalculation will take them into consideration for the new configuration.
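The aging behavior can be sketched as a pruning step: stored BPDUs whose age exceeds the limit are discarded, which is what triggers recalculation. The ages and limit below are hypothetical values, not 802.1d defaults.

```python
def prune_bpdus(stored, age_limit):
    """Split stored BPDUs into those still young enough to keep and those discarded."""
    keep = {bpdu: age for bpdu, age in stored.items() if age <= age_limit}
    discard = [bpdu for bpdu in stored if bpdu not in keep]
    return keep, discard

stored = {"from_B1": 5, "from_B2": 25}    # hypothetical BPDU ages in seconds
keep, discard = prune_bpdus(stored, age_limit=20)
print(discard)  # stale entry removed, forcing a recalculation
```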

*BPDUs of Age Zero
IEEE 802.1d requires the root bridge to send out BPDUs with an age of zero every 1-10 seconds. This causes all other bridges to send their own BPDUs with an age of zero. The root bridge is the only device that can initiate this process, and the time between repetitions of this process is called the hello time. After receiving BPDUs, designated and standby bridges reevaluate root cost and reconfigure if necessary.

Topic 1.2.3: Port States

*The Five Port States
In order to establish a spanning tree, the Spanning Tree Protocol sets all ports to one of five different states: blocking, listening, learning, forwarding, and disabled. Upon system initialization, ports progress through these states in the following manner:
Examine the following table
Initialization to blocking
Blocking to listening (or disabled)
Listening to learning (or disabled)
Learning to forwarding (or disabled)
Forwarding to disabled
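The progression above can be sketched as a small state table. This is an illustrative simplification (the real protocol only advances a port that is chosen for the spanning tree); a port may be sent to disabled from any state.

```python
# Transitions from the progression listed above.
NEXT_STATE = {
    "initialization": "blocking",
    "blocking": "listening",
    "listening": "learning",
    "learning": "forwarding",
}

def next_state(state, disable=False):
    """Advance a port one step, or force it to the disabled state."""
    if disable:
        return "disabled"
    return NEXT_STATE.get(state, state)  # forwarding has no further step

state = "initialization"
for _ in range(4):
    state = next_state(state)
print(state)                                  # forwarding
print(next_state("learning", disable=True))   # disabled
```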


*The Blocking State
After initialization, all ports enter the blocking state.
While in this state, the port receives a BPDU from its own bridge and responds to any network management messages.
Normal data is not received or transmitted while a port is in the blocking state.

*The Listening State
In the listening state, a port receives BPDUs from its own bridge as well as other bridges and responds to network management messages. The BPDUs are then used to determine if the port should participate in the spanning tree.
If the port is chosen to participate in the spanning tree, it proceeds to the learning state. If not, it is disabled and then set to the blocking state.

*The Learning State
A port in the learning state starts building a database of addresses which will be used in frame forwarding. The port sends, receives, and processes all BPDUs and network management messages, but normal data is not received or transmitted.
The port will stay in the learning state for a time period predefined by the vendor or administrator, and then enter the forwarding state. The total time a port spends in the listening and learning states is called the forward delay.

*The Forwarding State
In the forwarding state, a port is fully functional, and will send, receive, and process all frames, BPDUs, and network management messages.
Most bridges allow a port to be set to the forwarding state immediately after initialization, bypassing the earlier states and avoiding the spanning tree calculations. This feature should only be used for ports connected to a single node.

*The Disabled State
A port may be directed to the disabled state from any other state.
In the disabled state, a port is nearly nonfunctional, but will still receive and respond to network management messages.
When a disabled port is re-enabled, it goes to the blocking state.

Question 4

Question 5

Question 6


* Exercise 1
Try creating a network diagram to illustrate the Spanning Tree Protocol.


Examine the following table
Step Action
1 Begin by drawing nine bridges at various locations on a piece of paper.
2 Randomly label each bridge with a unique number.
3 Draw lines from each bridge to each of its neighbors.
4 Pick the bridge with the lowest number. This will be the root bridge.
5 Determine the shortest path to the root for each bridge along the lines you have drawn. This path will include the fewest possible intermediate bridges. If there is a tie, the path going through the bridge with the lowest number wins.
6 Cross out any connections that are not included in any of the shortest paths.
7 The remaining paths, and the bridges they connect, make up the spanning tree for your network.


Topic 1.3: Unit 1 Summary

In this unit, you learned about routing loops and how they are removed with the Spanning Tree Protocol. You also saw how the Spanning Tree Protocol places individual ports into various port states to establish the spanning tree.
In the next unit, you will learn more about bridges.

Unit 2. Bridges



In this unit, you will learn more about bridges. You will be introduced to two very important types of bridges: transparent bridges and source-route bridges. You will see how these types of bridges differ from one another and how each type makes forwarding decisions.
You will also be introduced to mixed-media bridging, which tries to combine transparent and source-route bridging.

After completing this unit, you should be able to:
  • Identify the different types of bridges

  • Recognize transparent bridges

  • Recognize source-route bridges

  • List the differences between transparent and source-route bridging

  • List two types of mixed-media bridges


This unit provides information that is relevant to the following CCNA exam objectives:
  • Describe LAN segmentation using bridges
  • Describe the benefits of network segmentation with bridges


Topic 2.1: Review of Bridging

*Bridges Filter Network Traffic
Bridges are network devices used to connect network segments. These devices operate on the MAC sublayer of the Data Link layer of the OSI model. Bridges act as a filter between segments by either forwarding or not forwarding the frames they receive from one port to other ports.

*Traffic Boundaries
Bridges are mainly used to separate high-traffic areas of a LAN. These areas should have well-defined traffic boundaries, because bridges lose their effectiveness in networks with changing traffic patterns.

*Bridge Operation
Under normal operation, a bridge receives a frame on one of its ports. This frame is stored in a buffer. The bridge then uses its software to determine if and where the frame should be forwarded. If the destination is on the same segment as the source, the bridge will discard the frame. Forwarding does not begin until the entire frame is received and processed by the software. This method is called store-and-forward.

*Pros and Cons of Store-and-Forward
The advantage of the store-and-forward method is that nodes on either side of the bridge may transmit simultaneously without causing collisions. The disadvantage of store-and-forward is that it adds about 20-30% to the overall latency of the transmission, which is the time it takes for a frame to get from source to destination.

*Broadcasting
Bridges rely heavily on broadcasting. If a bridge decides that a frame should be forwarded, but does not know where to send it, the frame will be broadcast to all ports on the bridge except the port that originated the frame. This is inefficient and can lead to broadcast storms. Also, because bridges cannot read Network layer addresses, packets addressed at that layer are automatically broadcast.

*Local and Remote Bridges
Bridges may be either local or remote. Local bridges are in the same area as the LAN segments they connect. Remote bridges connect LANs over a wider area using long-distance connections such as telephone lines or satellite dishes.

Question 7

Topic 2.2: Types of Bridges

*Making the Forwarding Decision
There are two methods used by bridges to determine whether a frame should be forwarded. The first method consists of comparing the destination of a frame with a forwarding table within the bridge. The second method consists of reading routing information included in the frame. Bridges that use the first method are called transparent bridges and bridges that use the second method are called source-route bridges.

Topic 2.2.1: Transparent Bridges

*Address Learning
Transparent bridges are used mainly in Ethernet implementations. These bridges are also called learning bridges, because they study the source address of each frame they receive and then build a table that associates addresses with ports.

*Ports and Forwarding
When a transparent bridge receives a frame, it compares the destination address on the frame with the table of addresses in its database. If the bridge finds an associated port for the destination address, it will forward the frame to that port. However, if the associated port is the same port from which the frame was received (i.e., source and destination on same segment), the bridge will discard the frame.

*Destination Unknown
Sometimes, a transparent bridge will not have a table entry for a destination address. This may be because the destination node is new or because the bridge has never received a frame from the destination node with which to update its table. In this case, the bridge will broadcast the frame to all ports except the originating port.
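The learn, filter, forward, and flood behavior described above can be sketched in a few lines. Port numbers and node names are hypothetical.

```python
class TransparentBridge:
    """Minimal sketch of a transparent (learning) bridge."""

    def __init__(self, ports):
        self.ports = ports
        self.table = {}  # learned associations: node address -> port

    def receive(self, src, dst, in_port):
        self.table[src] = in_port                # learn the source address
        out = self.table.get(dst)
        if out == in_port:
            return []                            # same segment: discard
        if out is not None:
            return [out]                         # known: forward to one port
        return [p for p in self.ports if p != in_port]  # unknown: flood

bridge = TransparentBridge(ports=[1, 2, 3])
print(bridge.receive("X", "Y", 1))  # [2, 3]: Y unknown, broadcast
bridge.receive("Y", "X", 2)         # the bridge learns Y from this frame
print(bridge.receive("X", "Y", 1))  # [2]: Y now has a table entry
```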

*Limited Knowledge of Route
Transparent bridges do not need to know the entire route that a frame needs to take to reach a destination. They are only concerned with associating a frame with a port. In a large network, a frame may need to go through several bridges to reach its destination, and each of these bridges is considered a hop.

*Advantages of Transparent Bridging
The advantages of transparent bridges are that they can isolate high-traffic segments, they learn the topology of the network on their own, they build their database tables automatically, and they require no reconfiguration or interaction.

*Disadvantage of Transparent Bridging
The main disadvantage of transparent bridges is that their forwarding tables are cleared every time the spanning tree reconfigures. This triggers a broadcast storm as the tables are reconstructed. Thus, transparent bridges are not a good choice in networks where the topology is constantly changing.

Question 8

Question 9

Topic 2.2.2: Source-Route Bridges

*SRB in Token Rings
SRB (Source-Route Bridging) is used in Token Ring implementations. Source-route bridges are simpler than transparent bridges, because they do not build any address tables. Instead, each node is responsible for placing a field on each frame that contains the entire route the frame will travel on the network.

*Using Explorer Packets
In order for a node to place routing information on a frame, it must first know the route the frame needs to take. This knowledge is gained by broadcasting special explorer packets through the entire network. While the explorer packet is traveling the network, each bridge adds its routing information to the packet. When the destination node receives the explorer packet, it adds its own information to the packet and sends the packet back to the originating node. The originating node then places the routing information on the frame and sends the frame to its destination.

*Types of Explorer Packets
There are three types of explorer packets: local, spanning tree, and all routes. Local explorer packets are only used in the local domain. Spanning tree explorer packets disregard bridges that are not part of the spanning tree. All routes explorer packets travel the entire SRB network, going through all ports on all bridges.

*Frame Forwarding
When a source-route bridge receives a frame, it reads the routing information contained in the frame. The bridge then uses the routing information to determine which port should receive the frame and forwards the frame to that port. This process is repeated for each bridge along the route.

*Routing Information Field
The RIF (Routing Information Field) on each frame in an SRB domain contains a list of bridges a frame should travel through to reach its destination. A RIF contains two types of fields: one Routing Control field and a number of Route Designator fields.

*Routing Control Field
The routing control field has four subfields: Type, Length, D, and Largest. Type specifies whether the frame is specifically routed, exploring all paths, or exploring the spanning tree; Length indicates the total size of the RIF; D indicates whether the RIF should be read forwards or backwards; and Largest indicates the maximum frame size that the route can handle.

*Route Designator Field
The Route Designator field contains two subfields: Ring Number and Bridge Number. Ring Number indicates a specific Token Ring to which the frame should be forwarded. Bridge Number indicates a unique bridge on the network. The RIF will contain one Route Designator field for every hop along the route.
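A RIF can be modeled as a Routing Control part plus one (Ring Number, Bridge Number) designator per hop. This is an illustrative data model, not the on-the-wire bit layout, and the ring and bridge numbers are hypothetical.

```python
# Hypothetical RIF: control info plus (Ring, Bridge) Route Designators.
rif = {
    "control": {"type": "specifically routed", "direction": "forward"},
    "designators": [(100, 5), (200, 7), (300, 0)],
}

def next_hop(rif, current_ring):
    """Which bridge forwards the frame off current_ring, and onto which ring."""
    rings = [ring for ring, _ in rif["designators"]]
    i = rings.index(current_ring)
    if i + 1 == len(rings):
        return None  # final ring: the destination node is here
    bridge = rif["designators"][i][1]
    return (bridge, rings[i + 1])

print(next_hop(rif, 100))  # (5, 200): bridge 5 forwards onto ring 200
print(next_hop(rif, 300))  # None: the frame has arrived
```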

*Hop Limit for SRB
Source-route bridging is useful for segmenting high-traffic areas, and is just about the only bridging choice available for Token Ring networks. Older source-route bridges allow a maximum of 7 hops from source to destination, but recent software releases combined with new LAN adapters increase the maximum number of hops to 13.

*SRB Disadvantages
There are a few disadvantages to source-route bridging. SRB is very broadcast intensive, because each node must broadcast to find routes. Each node must also have the software necessary to manage RIF fields and remember multiple routes. Also, most RIF software does not have a method for retiring old paths, which means that individual nodes usually need to be rebooted after a bridge failure.

Question 10

Question 11

Question 12

Topic 2.2.3: Mixed-Media Bridges

*Combined Bridging Needed
Source-route (Token Ring) and transparent (Ethernet) domains are different in many ways. They each treat MAC addresses differently, have completely different frame formats, different maximum frame sizes, and use different spanning tree algorithms. Yet, many large networks contain both domain types within the network. This has led to a need for bridges that can handle both types of bridging.

*Solution: Mixed-Media Bridges
Mixed-media bridges were developed to handle the bridging problems associated with networks containing both Ethernets and Token Rings. Like other bridges, mixed-media bridges operate on the Data Link layer, so packets are treated as broadcasts. The different frame types must be translated, however, and the two most popular methods for dealing with frames using mixed-media bridges are translational bridging and source-route transparent bridging.

*Translating Token Ring to Ethernet
Translational bridges work by reformatting a frame to conform to the requirements for the next hop. In bridging a frame from a Token Ring to an Ethernet, translational bridges reorder MAC addresses, limit the maximum size of Token Ring frames to the Ethernet maximum, and strip the frame of bits that are used for Token Ring functions. Any RIFs that are stripped from the Token Ring frames are stored in a cache for future use.

*Translating Ethernet to Token Ring
In bridging a frame from Ethernet to Token Ring, a translational bridge reformats the frame to the Token Ring specifications and looks at its cache of RIFs to see if it has a RIF for the frame's destination node. If the bridge has the required RIF, it will attach it to the frame and forward the frame to the appropriate port. If no RIF is available, the frame is broadcast onto the SRB domain as a spanning tree explorer packet.

*Translational Pros and Cons
Translational bridges are not very efficient, but they do allow nodes from source-route and transparent domains to communicate with each other. Translational bridging provides a viable solution to connecting the two domains, and is generally less expensive and easier to manage than routing.

*SRT and RII
SRT (Source-Route Transparent) bridges are different from translational bridges because they do not add or strip RIFs from any frames. Instead, SRT bridges look for an RII (Routing Information Identifier) in the highest-order bit of the source address. If this bit is a 1, then the frame has a RIF, and the bridge treats it as a source-route frame. If the bit is a 0, then the frame does not have a RIF, and the bridge treats it as a transparent frame.

*Limitation of SRT Bridging
SRT bridges are able to send both transparent and source-route frames across a network. However, they do not provide a method for nodes in a source-route domain to communicate with nodes in a transparent domain. This is because nodes that do not understand RIFs cannot read frames that contain RIFs.

Question 13

Question 14


* Exercise 1
Try using the World Wide Web to find more information on bridging topics.

Examine the following table
Step Action
1 Use your browser to navigate to your favorite search engine (Yahoo!, LookSmart, Excite, and so on).
2 Perform a search on topics such as transparent bridging, source-route bridging, translational bridging, and source-route transparent bridging. You may need to use the advanced search features of the search engine you chose.
3 Follow the links to find more information on the bridging topics you select.


Topic 2.3: Unit 2 Summary

In this unit, you learned about transparent and source-route bridges. You also saw how mixed-media bridges use aspects of both transparent and source-route bridges.
In the next unit of this course, you will learn more about switches.

Unit 3. Switches



In this unit, you will learn more about switches. You will see how switches make forwarding decisions and the different methods they use to forward frames. You will learn how switches can emulate network separation by using virtual LANs.
You will also be introduced to two important switch variants: tag switching and Data Link switching.

After completing this unit, you should be able to:
  • List the benefits of using switches in a network

  • Name two switching methods

  • Identify the benefits of virtual LANs

  • List the differences between Data Link switches and multilayer switches

  • List two different types of multilayer switches


This unit provides information that is relevant to the following CCNA exam objectives:
  • Name and describe two switching methods
  • Describe the benefits of network segmentation with switches
  • Distinguish between cut-through and store-and-forward LAN switching
  • Describe the benefits of virtual LANs


Topic 3.1: Switching

*Review of Basics
A switch is a network device that operates on the Data Link layer of the OSI model. Switches are very similar to bridges, but are faster and provide a greater range of capabilities.
Like bridges, switches learn the network topology and calculate the spanning tree by studying the source addresses of the frames they receive. The information gathered is stored in an address table that associates an address with a port.

*Switches Use Tables
When a switch receives a frame, it reads the destination address of the frame. The switch then looks up the destination address in its address table to see which port is associated with the address. If the port from the table is the same port from which the frame originated, the switch filters (discards) the frame. Otherwise, the frame is forwarded to the port listed in the table.

*Switches Are Speedy
Switches are faster than bridges when it comes to filtering and forwarding frames. One reason for this is that switches are not as software-dependent as bridges, and much of the switching process takes place in the hardware. Switches also have the capability of using different switching methods.

Topic 3.1.1: Switching Methods

*Store-and-Forward
Like bridges, switches can use the store-and-forward method of forwarding data. In this method, the switch stores the entire frame in a buffer and performs error-handling routines before it retransmits the signal. The error-handling routines check for problems such as frame check sequence (FCS) or alignment errors, so the store-and-forward method is recommended in a network experiencing these types of problems.

*Cut-Through
Switches also provide another method of forwarding besides store-and-forward. This method is known as cut-through switching. Switches employing the cut-through method read only a short portion of the frame before forwarding. Cisco Systems employs two modes of cut-through switching: FastForward and FragmentFree.

*FastForward
In the FastForward mode, a switch reads a frame only until the destination address has been received. The switch then looks up the appropriate port for the destination in its address table and immediately forwards the frame to that port. When using this mode, switches are very fast and introduce very little latency. However, no error checking is performed.

*FragmentFree
In the FragmentFree mode, a switch reads the first 64 bytes of a frame. If a frame has an error, it almost always occurs within the first 64 bytes because this is the length of the collision window. If the frame does not contain at least 64 bytes, the switch filters (discards) the frame, ensuring that collision fragments are not forwarded. Otherwise, the switch looks up the destination address in the address table and forwards the frame to the appropriate port.
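The FragmentFree decision can be sketched as follows, assuming a hypothetical address table: frames shorter than the 64-byte collision window are filtered, and anything else is forwarded (or flooded when the destination is unknown).

```python
COLLISION_WINDOW = 64  # bytes; errors almost always occur within this window

def fragment_free(frame, table, dst):
    """Filter runt frames, otherwise forward by table lookup (or flood)."""
    if len(frame) < COLLISION_WINDOW:
        return "filter"             # likely a collision fragment
    return table.get(dst, "flood")  # port from the address table, or flood

table = {"00-AA": 3}                # hypothetical address -> port entry
print(fragment_free(b"\x00" * 72, table, "00-AA"))  # 3
print(fragment_free(b"\x00" * 30, table, "00-AA"))  # filter
```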

*Latency Measurement
The latency introduced by a switching method is measured in two ways. Store-and-forward switching is measured as LIFO (Last-In-First-Out), which only measures the processing time between the receipt of the entire frame and the start of forwarding, not the amount of time taken by the entire forwarding process. Cut-through switching is measured as FIFO (First-In-First-Out), which does measure the entire forwarding process. This can be confusing, because cut-through switching is much faster than store-and-forward switching, but the documented latency for store-and-forward switching is lower than that for cut-through switching.

Question 15

Question 16

Question 17

Question 18

Topic 3.1.2: Switch Operation

*Switches Are Versatile
Like bridges, switches provide connectivity between LAN segments that contain any combination of workstations, servers, printers, etc. However, switches also can provide dedicated, full-duplex communication between devices; provide media-rate adaption; and allow multiple simultaneous conversations.

*Microsegmentation
Switches provide dedicated communication between devices in a process called microsegmentation. In this process, each node is connected to its own port on the switch. This means that each node is on a private segment and has its own collision domain. Each node may then use the full bandwidth of the segment when communicating with other nodes, and this bandwidth is effectively doubled when the nodes and the switch are operating in full-duplex mode.

*Asymmetric Switching
Media-rate adaption, or asymmetric switching, is the process of connecting devices that operate at different speeds. For example, a 10-Mbps node cannot communicate with a 100-Mbps node if the nodes are connected by a bridge. A switch allows communication between these nodes by forwarding frames to each node at the rate at which each node operates. This process is very convenient when multiple 10-Mbps clients access the same server. Asymmetric switching allows the server to operate at 100 Mbps while the clients still operate at 10 Mbps, thus providing more efficient use of the bandwidth.

*One Mode for Asymmetric Switching
If a switch connects devices that operate at different speeds, the store-and-forward switching mode is used. The switching mode applies to the entire switch. If just one segment connected to the switch operates at 10 Mbps and all others operate at 100 Mbps, communication between the 100-Mbps nodes will still go through the store-and-forward process.

*Multiple Simultaneous Conversations
Switches can also allow multiple simultaneous conversations between devices. This is possible because switches make circuit connections in the hardware. For instance, if a device on port 1 needs to communicate with a device on port 3, and a device on port 2 needs to communicate with a device on port 4, the switch makes the circuit connections necessary for these devices to communicate simultaneously.

Topic 3.1.3: Switch Security

*Secured Ports
Switches offer an extra security precaution with secured ports. This feature allows a network administrator to restrict the use of a port to a defined group of nodes. If a frame arrives from a node not included in this group, the switch will not forward the frame. If only one node is assigned to a secured port, then that node is guaranteed the full bandwidth of the port. If a workgroup is assigned to a secured port, the switch will not forward packets from addresses outside the workgroup. Secured ports are used to restrict network access from a populated segment and to prevent entry by unauthorized users.

*Source Port Filtering
A network administrator may also configure a port to forward frames from other ports only if the frames are received from a static group of other ports. This is called source port filtering. This process provides extra security and can be combined with secured ports to make a network even more secure. Source port filtering also provides load balancing, since the same multicast address can be designated for streams sent from servers on different ports.
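The two features above can be sketched together, with hypothetical groups: a secured port admits frames only from its defined node group, and source port filtering admits frames only from a static set of source ports.

```python
secured = {5: {"00-AA-01", "00-AA-02"}}   # port -> allowed node addresses
allowed_sources = {9: {5, 6}}             # port -> ports it accepts frames from

def admit(out_port, src_addr, src_port):
    """Apply the secured-port check, then the source-port filter."""
    node_ok = src_addr in secured.get(src_port, {src_addr})
    port_ok = src_port in allowed_sources.get(out_port, {src_port})
    return node_ok and port_ok

print(admit(9, "00-AA-01", 5))   # True: allowed node, allowed source port
print(admit(9, "00-FF-99", 5))   # False: node outside the secured group
print(admit(9, "00-AA-01", 7))   # False: port 7 is not a permitted source
```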

Question 19

Question 20

Question 21

Topic 3.1.4: Virtual LANs

*Logical Grouping of Nodes
Switches can simulate the breakup of a broadcast domain by defining a group of nodes to be a VLAN (virtual LAN). A VLAN is a logical grouping of network nodes that may be on different LAN segments, but can communicate as if they were on the same segment. Broadcasts and multicasts in a VLAN are only forwarded to nodes within the same VLAN as the originating node.
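The broadcast-containment rule can be sketched as follows. The port-to-VLAN assignments here are hypothetical; the point is only that a broadcast is flooded exclusively to other ports in the originating node's VLAN:

```python
# Minimal VLAN flooding sketch (hypothetical port/VLAN assignments):
# a broadcast entering on one port is flooded only within that port's VLAN.

port_vlan = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}  # port -> VLAN id

def flood_broadcast(in_port: int) -> list[int]:
    """Return the ports a broadcast received on in_port is forwarded to."""
    vlan = port_vlan[in_port]
    return [p for p, v in port_vlan.items() if v == vlan and p != in_port]

print(flood_broadcast(1))  # VLAN 10 members only: [2, 5]
print(flood_broadcast(3))  # VLAN 20 members only: [4]
```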

*Assigning Nodes to VLANs
VLAN nodes may be assigned to a VLAN based on switch port numbers, MAC addresses, logical addresses, or the protocols used by the nodes. The choice of which method to use depends largely on implementation needs and vendor capabilities. Nodes in a VLAN may be on the same segment, different segments, different floors, or even different buildings. Regardless of their location, VLAN nodes share a single broadcast domain.

*One Node, Two VLANs
Sometimes, it is necessary for an individual node to be a member of two different VLANs. This is accomplished by using overlapping ports and configuring the node to be a part of each VLAN. For instance, a Sales department and an Accounting department on separate VLANs might need access to the same server. By setting the server on an overlapped port and configuring it to be a node in both VLANs, each department remains isolated and has access to the server.

*Separate Spanning Trees
Since each VLAN simulates a separate LAN, each VLAN needs to calculate a separate spanning tree to prevent bridging loops.
A spanning tree establishes a root node and ensures there is only one path to any destination.
Network devices exchange information so that loops can be removed and, if the root path fails, a new network topology can be built from the redundant paths.

*VLAN Benefits
There are many benefits associated with VLANs.
Users can be assigned to virtual workgroups regardless of location. When users move to different departments, they can be switched to another VLAN instead of moving offices.
Also, VLANs reduce the need for routers since they break up broadcast domains and have the ability to create firewalls.

*VLAN Drawbacks
VLANs also have a few drawbacks.
Many VLAN solutions are proprietary, which may lead to problems if network devices come from multiple vendors.
VLANs add administrative costs to the network.
VLANs are also a fairly new concept, and many network administrators are unfamiliar with VLAN issues.

Question 22

Topic 3.2: Multilayer Switches

*Switching and Routing
Switches operate on the Data Link layer, but some switches have the capability of using Network layer features.
These switches are called multilayer or IP switches and include devices such as tag switches and data-link switches.
Multilayer switches have the capability to make dynamic decisions on whether to switch frames or to route them.

*Functions of Multilayer Switches
Multilayer switches can perform many of the functions normally associated with routers.
These functions include control of broadcast and multicast traffic, security through access lists (covered in another course in this series), and IP fragmentation (the process of breaking up packets into smaller packets suitable for transmission).

Topic 3.2.1: Tag Switching

*Tagging a Packet
A tag switch is a multilayer switch useful in IP switching.
Tag switches place tags on packets for routing.
These tags act as an index to a tag information base (TIB) that contains outgoing tags, outgoing interface, and outgoing link-level information.

*Matching Tags with the TIB
When a tag switch receives a packet, it reads the tag on the packet and maps the tag to a corresponding entry in the TIB. If a match is found in the TIB, the switch replaces the tag on the packet with a new tag containing the outgoing information. The switch then forwards the packet to the port designated as the outgoing interface. If no match is found in the TIB, the packet is filtered.
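The lookup-and-swap step above amounts to a table lookup keyed by the incoming tag. A minimal sketch, with hypothetical tag values and port names:

```python
# Hypothetical TIB sketch: incoming tag -> (outgoing tag, outgoing interface).
TIB = {
    17: (42, "port3"),
    23: (8,  "port1"),
}

def switch_packet(tag: int):
    """Swap the tag per the TIB entry, or filter the packet if no entry exists."""
    entry = TIB.get(tag)
    if entry is None:
        return None  # no match in the TIB: the packet is filtered
    new_tag, out_interface = entry
    return new_tag, out_interface

print(switch_packet(17))  # match -> tag rewritten to 42, sent out port3
print(switch_packet(99))  # no match -> None (packet filtered)
```

Because the forwarding decision is a single exact-match lookup rather than a longest-prefix route computation, it can run at very high speed.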

*Tag-Mapping Methods
The TIB maps tag values to specific destinations, the ports through which they can be reached, and the appropriate Data Link layer information. The TIB can map tags using several methods. Destination-prefix mapping applies tags based on destination prefixes in the routing table, matched against the packet's Network layer address. In traffic-engineering mapping, a tag edge router tags packets so they flow along specified routes. In application-flow mapping, an edge router applies tags based on both the source and destination addresses, as well as other information found in the packet.

*Building the TIB
Tag switching devices use the TDP (Tag Distribution Protocol) to build and maintain the TIB. This is done by associating a tag with a route in a process called binding. TDP also distributes, requests, and releases TIB information for multiple Network layer protocols. TDP does not replace routing protocols, but it uses information learned from routing protocols to create tag bindings.

*Tag Switching Performance
Tag switching is faster than routing because the switch reads only the tag on the packet and does not need to run routing algorithms.
Tag switches may be implemented over any media type and are able to work with different Network-layer protocols.

Topic 3.2.2: Data Link Switching

*Token Ring Switches
Data Link switching (DLSw) is used in Token Ring environments.
DLSw nodes are switches or routers that perform the functions of source-route bridges (SRBs) and appear as SRBs in Token Ring environments.
DLSw nodes also have the capability to transport IBM's Systems Network Architecture (SNA) packets and network basic input/output system (NetBIOS) packets over an IP network.

*The Data Link Control Layer
DLSw nodes use SSP (Switch-to-Switch Protocol) to establish connections with each other, locate the resources of the network, forward data, and handle flow control and error recovery. SSP is not a routing protocol like RIP, IGRP, and OSPF, because it works on the Data Link Control layer of the SNA model, which roughly corresponds to the Data Link layer of the OSI reference model. SSP uses TCP to transmit data between DLSw nodes.

*Communicating Capabilities
DLSw nodes operate by establishing a TCP connection with each other. DLSw nodes then communicate their capabilities to each other and negotiate a common set of transmission parameters. These parameters include DLSw version number, initial pacing window, NetBIOS support, SAPs, number of TCP sessions supported, MAC address lists, NetBIOS name lists, and search frames support.

*Establishing a Circuit
The DLSw nodes then establish a circuit containing themselves and the two end nodes that need to communicate with each other. The end nodes each have a DLC (Data-Link Control) connection with their local DLSw nodes, while the DLSw nodes maintain the TCP connection between themselves. Data transmission may begin after the circuit is established.

*Flow Control
The transmission is controlled with DLSw flow control. The flow control between DLSw devices uses a process called adaptive pacing. Adaptive pacing involves a windowing mechanism that dynamically adapts to buffer availability. This allows the DLSw devices to control the transmission and to ensure the delivery and integrity of the transmission.

*DLSw and SRB
DLSw overcomes many of the limitations of SRB. DLSw removes the seven-hop limit, has better control of broadcast traffic, reduces unnecessary traffic, and has flow control and priority mechanisms that SRB does not have. DLSw also provides WAN connectivity that is not available with SRB alone.

Question 23


* Exercise 1
Try using the World Wide Web to find more information on switching.

Examine the following table
Step Action
1 Use your browser to navigate to your favorite search engine (Yahoo!, LookSmart, Excite, and so on).
2 Perform a search for switching terms such as microsegmentation, virtual LAN, and multilayer switches. You may need to use the advanced search features of the search engine you chose.
3 Follow the links to find more information on the topics you select.


Topic 3.3: Unit 3 Summary

In this unit, you increased your knowledge of switches. You learned about the benefits that switches provide to a network as well as the store-and-forward and cut-through switching methods. You also saw how virtual LANs are used to cut down network traffic, and you were introduced to multilayer switches.
In the next unit of this course, you will learn about ATM switching.

Unit 4. ATM Switching



In this unit, you will be introduced to ATM switching. You will see how ATM switches, which operate differently from other switches, perform their switching functions.
You will also see the different layers and planes of the ATM reference model, and learn how ATM switches provide Quality of Service and LAN emulation.

After completing this unit, you should be able to:
  • Identify the components of ATM cells

  • Define a virtual channel

  • List the planes and layers of the ATM reference model

  • List the benefits and drawbacks of ATM switching


This unit does not address any specific Cisco objectives. However, it does provide background information that is essential for the CCNA exam.
In the course index, questions about background information are indicated with the abbreviation BCK and a short description of the question subject matter.

Topic 4.1: Overview of ATM

*Defining Asynchronous
Asynchronous data refers to data that is not transmitted at regular intervals. For example, a phone conversation is asynchronous since there are many pauses between spoken words. Since there are many types of data transmission that are asynchronous, an efficient method of transmission is needed.

*ATM Networks
ATM (Asynchronous Transfer Mode) switching was developed as a high-speed option for transferring asynchronous data such as voice, video, and data over public networks.
An ATM network consists of ATM switches and ATM endpoints (nodes) which transport data in fixed-length cells.

*International Standard
ATM switching is emerging as an international standard. There are a number of reasons for this. There is an international consensus about ATM technology and standardization. ATM technology can be used in both LANs and WANs. ATM networks can carry voice, data, and video, which are usually carried on separate networks. Also, ATM networks can operate at various speeds ranging from Mbps to Gbps.

Topic 4.1.1: ATM Cells

*Fixed-Length Cells
ATM switches transport data in cells, which are 53 bytes long.
The first 5 bytes of a cell comprise the header and the rest of the cell (48 bytes) is called the payload. The payload contains the data that needs to be transported and the header contains the necessary routing information.

*NNI and UNI
A cell header can be in Network-to-Network Interface (NNI) format or in User-Network Interface (UNI) format. The NNI format is used when a cell is traveling between ATM switches, and the UNI format is used when a cell is traveling between an ATM switch and an endpoint.

*Connecting NNIs and UNIs
NNI and UNI can be either private or public. Private NNIs connect ATM switches within the same private organization, while public NNIs connect ATM switches within the same public organization (such as a telephone company). Private UNIs connect ATM endpoints to ATM switches within the same private organization, while public UNIs connect private ATM endpoints or switches to public ATM switches.

*Cell Payload
The payload of the cell consists of data that is encapsulated at an endpoint. When an endpoint has data to transmit, it fragments the data into 48-byte pieces and adds the header to create the cell. When the cell reaches its destination, the data is reassembled in the proper sequence.
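The fragmentation step can be sketched as slicing the data into 48-byte payloads and prepending a 5-byte header to each. This is a simplification; in a real ATM network the AAL handles segmentation, padding, and reassembly, and the header contents change hop by hop:

```python
# Sketch of cell creation at an endpoint: 48-byte payloads plus 5-byte header.
HEADER_LEN, PAYLOAD_LEN = 5, 48  # an ATM cell is 53 bytes in total

def to_cells(data: bytes, header: bytes) -> list[bytes]:
    """Fragment data into 48-byte payloads (padding the last) and add headers."""
    assert len(header) == HEADER_LEN
    cells = []
    for i in range(0, len(data), PAYLOAD_LEN):
        # Pad the final fragment so every cell is exactly 53 bytes.
        payload = data[i:i + PAYLOAD_LEN].ljust(PAYLOAD_LEN, b"\x00")
        cells.append(header + payload)
    return cells

cells = to_cells(b"x" * 100, b"HDR00")   # hypothetical 5-byte header
print(len(cells), len(cells[0]))         # 100 bytes -> 3 cells of 53 bytes each
```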

*Predictability
Fixed-length cells allow the network to accurately predict transmission times.
This predictability allows different traffic types (such as voice, data, and video) on the same network, since all cells behave the same regardless of the type of traffic they contain.
This is also the reason ATM networks have fewer delays compared to networks that send larger data packets.

Question 24

Question 25

Topic 4.1.2: Virtual Channels

*VCs and VPs
The concept of the virtual channel (VC) is central to ATM operation. A VC behaves as a permanent transmission channel, but in reality uses whatever channel is available at the time of transmission. A virtual path (VP) contains a bundle of VCs and a transmission path contains a bundle of VPs.

*VCCs and VCLs
ATM networks operate by creating virtual channel connections (VCCs), which are logical circuits between two endpoints in an ATM network. Each VCC is made up of one or more virtual channel links (VCLs), which are the connections between ATM devices. For instance, the VCC for a channel between endpoint A going through switch X and switch Z to endpoint B would contain three VCLs (A-X, X-Z, Z-B).

*VPI/VCI
Each VCL is defined by a virtual path identifier/virtual channel identifier (VPI/VCI) pair, which identifies the VP and VC used in the VCL. The VPI/VCI pair is placed in the header of cells before transmission. Since this pair is specific for each VCL, and therefore only of local significance, it must be replaced in the header of the cell at each device along the VCC.

*ATM Forwarding Process
When an ATM switch receives a cell, it looks up the VPI/VCI value in a translation table. This table contains the outgoing port for the VPI/VCI value and a new VPI/VCI value for the next VCL. The switch then replaces the old VPI/VCI value in the cell with the new VPI/VCI value and retransmits the cell to the outgoing port.
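This lookup-and-rewrite step can be modeled as a translation table keyed by the incoming port and VPI/VCI pair. The values below are hypothetical, chosen only to show that the VPI/VCI is locally significant and is replaced at every hop:

```python
# Hypothetical ATM translation table: (in_port, vpi, vci) -> (out_port, vpi, vci).
translation = {
    (1, 0, 32): (3, 1, 44),
    (2, 0, 33): (4, 2, 50),
}

def forward_cell(in_port: int, vpi: int, vci: int):
    """Look up the incoming VPI/VCI, return the outgoing port and new VPI/VCI."""
    out_port, new_vpi, new_vci = translation[(in_port, vpi, vci)]
    return out_port, new_vpi, new_vci

print(forward_cell(1, 0, 32))  # cell leaves port 3 carrying VPI/VCI 1/44
```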

Question 26

Question 27

Topic 4.2: ATM Reference Model

*A Logical Model
The ATM reference model is a logical model to describe the functions of ATM switching.
This model consists of three planes (Control, User, and Management) and several layers.
The planes span all layers of the ATM reference model.

*Planes of the ATM Model
The Control plane is where signaling requests are generated and managed.
The User plane is where data transfer is managed. The Management plane is divided into Layer Management and Plane Management. Layer Management is responsible for layer-specific management functions, while Plane Management is responsible for system-wide management functions.

*Comparing ATM and OSI Models
The defined layers of the ATM reference model have corresponding layers in the OSI reference model. The Physical layer is similar to the Physical layer of the OSI model. The ATM and ATM Adaptation layers together correspond to the Data Link layer of the OSI model. The higher layers of the ATM reference model have not been completely defined, so there is no definite correspondence with the OSI reference model.

*The Physical Layer
The ATM Physical layer is responsible for the physical media and media standards as well as transmission synchronization and convergence.
It is this layer that converts cells into bits for transmission (and bits back into cells on receipt), controls transmission and receipt of cells, tracks cell boundaries, and packages cells into frames.

*The ATM Layer
The ATM layer is responsible for establishing connections across the ATM network.
The ATM layer is also responsible for passing cells through the network once a connection has been established.
This is accomplished by using the information in the cell header.

*ATM Adaptation Layers
The ATM Adaptation layers (AALs) adapt different classes of application data to the ATM layer. There are four AALs. AAL1 handles circuit emulation and is used for connection-oriented, delay-sensitive traffic requiring constant bit rates. AAL2 is used mainly for voice and video traffic. AAL3/4 is used for both connectionless and connection-oriented variable bit rate traffic. AAL5 is used for connection-oriented variable bit rate traffic, but is simpler than AAL3/4 in order to provide smaller bandwidth overhead.

*Higher Layers
The higher layers of the ATM reference model are responsible for accepting user data, arranging the data into packets, and transmitting the formatted data to the appropriate AAL.

Topic 4.3: Quality of Service

*Negotiating QoS
ATM Quality of Service (QoS) is a measure of transmission quality and service availability on the ATM network. When a VCC is established, ATM endpoints negotiate the QoS required for the transmission. The ATM network will only route the transmission if all nodes in the VCC have the resources to deliver the requested QoS through the entire ATM network.

*Traffic Contract
ATM QoS is supported by mechanisms that include the traffic contract, traffic shaping, and traffic policing. The traffic contract specifies an envelope that describes the intended traffic flow. This envelope specifies traffic flow values such as peak bandwidth, average sustained bandwidth, and burst size.

*Traffic Shaping
Traffic shaping ensures that ATM devices adhere to the traffic contract. It does this by using queues to constrain data bursts, limiting the peak data rate, and smoothing jitter so that the traffic fits within the promised envelope.

*Traffic Policing
To enforce the traffic contract, ATM switches can use traffic policing. Traffic policing measures the actual traffic flow and compares it against the agreed-upon traffic envelope. If the parameters of a cell are outside of this envelope, the switch can set the cell-loss priority of the cell to allow switches handling the cell to drop the cell during periods of congestion.
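The policing decision can be sketched as a rate comparison against the contracted envelope. The peak-rate figure and cell representation below are hypothetical; real switches measure conformance with algorithms such as the leaky bucket:

```python
# Traffic-policing sketch (hypothetical contract values): cells arriving while
# the flow exceeds its contracted peak rate get their cell-loss priority (CLP)
# bit set, marking them eligible for discard during congestion.

PEAK_RATE = 1000  # contracted peak cell rate in cells/second (assumed)

def police(cell: dict, measured_rate: int) -> dict:
    """Return the cell, with CLP set to 1 if the flow is out of contract."""
    if measured_rate > PEAK_RATE:
        cell = {**cell, "clp": 1}
    return cell

print(police({"vpi": 0, "vci": 32, "clp": 0}, 1200))  # over contract: clp -> 1
print(police({"vpi": 0, "vci": 32, "clp": 0}, 800))   # in contract: clp stays 0
```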

Question 28

Topic 4.4: LAN Emulation

*Legacy LANs and ATM
Legacy LANs, such as Ethernet and Token Ring, rely on broadcast and connectionless functions, while ATM is a point-to-point connection-oriented technology. LAN Emulation (LANE) bridges the gap between these technologies to allow legacy LANs to overlay an ATM network.

*Added Capabilities for ATM
The LANE protocol allows ATM stations to have the same capabilities they would have in a legacy LAN (such as being in a workgroup), while retaining the high speed of the ATM network. This is done by encapsulating data in the MAC format of the LAN being emulated.

*Media Access with LANE
LANE does not emulate the media access methods of the LAN being emulated (CSMA/CD for Ethernet or token passing for Token Ring). Data is still transmitted via ATM standards.

*Maturing Specification
The LANE specification is still fairly new. However, this specification has paved the way for integrating ATM with legacy LANs. LANE is expected to provide ATM networks with greater flexibility and interoperability as the specification matures.

Topic 4.5: ATM Benefits and Drawbacks

*ATM Benefits
There are many benefits of ATM switching. ATM is a high-speed connection-oriented technology that gives high bandwidth utilization with flexible bandwidth allocation. Data transmission predictability allows for QoS guarantees. ATM allows for the integration of networks and traffic types, is compatible with currently deployed physical networks, and allows incremental migration from an existing network to an ATM network. Also, using the same technology for all levels of the network simplifies network management.

*ATM Drawbacks
There are also a couple of drawbacks with ATM switching. The cell header comprises almost 10 percent of the total cell size, which is a large overhead for a data packet. The mechanisms for achieving QoS are sometimes very complex. Also, heavy network congestion may cause many cells to be dropped.

*Developing Specifications
The benefits of ATM switching far outweigh the drawbacks. For this reason, ATM has become a widely used technology. ATM specifications continue to be developed, further refining and extending the technology.

Question 29


* Exercise 1
Try drawing a diagram of an ATM network.

Examine the following table
Step Action
1 In the center of the paper, draw four public ATM switches and connect them together.
2 On one side of the paper draw two ATM switches, a regular switch, and a router and connect these devices together.
3 Now repeat step 2 for the other side of the paper.
4 Connect the ATM switches from steps 2 and 3 to an ATM switch from step 1.
5 Now label all the connections Public NNI, Private NNI, Public UNI, or Private UNI.


Topic 4.6: Unit 4 Summary

In this unit, you learned about ATM switching. You saw how ATM switches use small, fixed-length cells to transfer data over virtual channels. You were also introduced to the different layers and planes of the ATM reference model, and how ATM switches provide Quality of Service and LAN emulation.
In the next unit of this course, you will learn about Cisco Catalyst switches.

Unit 5. Cisco Catalyst 1900/2820 Switches



In this unit, you will be introduced to Cisco Systems' Catalyst 1900 and 2820 switches. You will see some of the features of these switches, such as shared memory buffering, LED indicators, connectivity, and protocols.
You will also see the differences between Catalyst 1900 and 2820 models, such as the number of ports available and the expansion capabilities.

After completing this unit, you should be able to:
  • List important features of Catalyst 1900 and 2820 switches

  • List important protocols for Catalyst operation and management

  • Identify the meaning of the LEDs

  • Recognize the differences between Catalyst 1900 and 2820 switches


This unit does not address any specific Cisco objectives. However, it does provide background information that is essential for the CCNA exam.
In the course index, questions about background information are indicated with the abbreviation BCK and a short description of the question subject matter.

Topic 5.1: Overview

*ClearChannel Architecture
The Cisco Catalyst 1900 and 2820 series of Ethernet switches were designed for use in bandwidth-intensive networks.
These switches use Cisco Systems' ClearChannel architecture, which provides wire-speed bridging on all ports.

*Shared Memory Buffering
Catalyst 1900 and 2820 switches use 3 MB of shared memory buffers, which is a memory pool shared by all ports. This allows a switch to dynamically allocate buffers as they are needed. To prevent an individual port from monopolizing the shared buffers, a port can use a maximum of 1.5 MB of buffer memory. Shared memory buffering avoids packet duplication and dropped packets, which makes the switch more efficient.

*Switching Modes
Catalyst 1900 and 2820 switches use Store-and-Forward, FastForward, and FragmentFree switching modes. FastForward is the default mode for these switches, and FragmentFree can be implemented when there is a high number of collisions in the network. The Store-and-Forward mode should be used in networks experiencing a large number of packet errors.

*Multicast and Broadcast Control
Multicast traffic in the Catalyst 1900 and 2820 switches can be controlled with intelligent filtering, a process where the administrator restricts multicast traffic to appropriate ports. Broadcast control is accomplished by having an administratively assigned broadcast threshold value for each port. If a port exceeds its threshold, the switch will stop forwarding broadcast packets from that port. Broadcast control does not affect unicast and multicast traffic.
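The broadcast-threshold mechanism can be sketched as a per-port counter checked against an administratively assigned limit. The threshold values below are hypothetical:

```python
# Broadcast-control sketch (hypothetical thresholds): once a port exceeds its
# assigned broadcast threshold, further broadcasts from that port are dropped.

thresholds = {1: 500, 2: 500}  # broadcasts permitted per interval (assumed)
counts = {1: 0, 2: 0}          # broadcasts seen so far this interval

def accept_broadcast(port: int) -> bool:
    """Count a broadcast from port; forward it only while under threshold."""
    counts[port] += 1
    return counts[port] <= thresholds[port]

counts[1] = 500                 # simulate port 1 already at its threshold
print(accept_broadcast(1))      # over threshold -> broadcast dropped (False)
print(accept_broadcast(2))      # under threshold -> broadcast forwarded (True)
```

Note that, as the text states, only broadcasts are affected; unicast and multicast traffic from the port would still be forwarded.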

*Virtual LAN Support
Cisco Systems supports up to four port-based VLANs in each Catalyst 1900 and 2820 switch. Each of these VLANs requires a unique spanning tree. Each VLAN also has a separate MIB to keep track of addresses and ports.

*Full-Duplex Capability
Each 10Base-T and 100Base-T port is capable of full-duplex operation. The 100Base-T ports are also capable of autonegotiation to automatically select half-duplex or full-duplex operation.

Topic 5.1.1: Protocols and Management

*STP and CDP
Catalyst 1900 and 2820 switches use STP (Spanning Tree Protocol) and CDP (Cisco Discovery Protocol).
STP is used to remove redundant paths from the network in order to prevent routing loops.
CDP is used to allow automatic discovery of network devices.

*SNMP and RMON
Network management in Catalyst 1900 and 2820 switches is accomplished with the use of SNMP.
These switches also provide RMON support. The RMON groups supported are Statistics, History, Event, and Alarm.

*Web-Based Management
Network management can be monitored and controlled with the web-based management tool that comes with Catalyst 1900 and 2820 switches. This tool can be accessed from any Internet client by using a web browser.

Question 30

Question 31

Question 32

Topic 5.1.2: Front Panel LEDs

*LED Types
The Catalyst 1900 and Catalyst 2820 both have LEDs for system status, RPS status, port mode, and port status. With the exception of the Port Status LED, these LEDs behave the same for both product series. The Catalyst 2820 also has LEDs to show the status of expansion modules.

*System Status LED
The System Status LED indicates whether the system is powered and operational.
This LED will be off if the switch is off, solid green if the switch is operating normally, and solid amber if the switch has power but is not operating properly.

*RPS LED
The RPS LED indicates the status of the redundant power supply. Either the internal power supply of the switch or the RPS may be used for power, but not both.
The RPS LED will be off if there are no problems with the switch's internal power supply, solid green if the RPS is powered and operational, solid amber if the RPS is not operational or not connected correctly, and flashing green if both the RPS and the switch's internal power supply are on (indicating that one should be turned off).

*Port Mode LED
The Mode button is used to select the mode of the Port Status LEDs and the Port Mode LED indicates the currently selected mode.
The available modes are STAT (port status), UTL (bandwidth utilization), and FDUP (full-duplex).

*STAT Mode
Port Status LEDs in the STAT mode indicate the current status of each port.
These LEDs are off if there is no link, solid green if there is a link but no activity, flashing green if there is a link with activity, alternating green and amber if the link is faulty, and solid amber if the port is not forwarding data.

*UTL Mode
Port Status LEDs in the UTL mode indicate both the current and peak bandwidth utilization of the switch. Current bandwidth is indicated by a blinking LED. Peak bandwidth for a defined interval is indicated by the solid green LED farthest to the right. For the 12-port switches, LEDs 1-4 indicate activity between 0.1 and 1.5 Mbps, LEDs 5-8 between 1.5 and 20 Mbps, and LEDs 9-12 between 20 and 140 Mbps. In the 24-port switches, LEDs 1-8 indicate activity between 0.1 and 6 Mbps, LEDs 9-16 between 6 and 120 Mbps, and LEDs 17-24 between 120 and 280 Mbps.

*FDUP Mode
Port Status LEDs in the FDUP mode indicate whether a port is operating at half-duplex or full-duplex.
If the LED is off, the port is operating in half-duplex mode.
If the LED is solid green, the port is operating in full-duplex mode.

*Expansion Module LEDs
Catalyst 2820 switches also have two LEDs (A and B) to show expansion slot status.
For each LED, off indicates there is no expansion module for that slot, solid green indicates the module is operational, flashing green indicates the module is running a power-on self-test, and solid amber indicates that the module failed the self-test and is not operational.

Topic 5.1.3: Rear Panel

*Rear Panel Connections
The rear panel of Catalyst 1900 and 2820 switches is where fans and connectors are located. There are connections for a power supply and a redundant power supply, as well as an EIA/TIA-232 serial port for a modem and an AUI connector that can connect to a transceiver for legacy 10Base-2 or 10Base-5 segments. The Reset button on the rear panel acts as an off/on switch. Pressing the Reset button will drop all connections, and this should only be done if the switch does not respond to network management or if packet forwarding has stopped.

Topic 5.2: Ports and Expansion Modules

*Differences
Even though Catalyst 1900 and 2820 switches have many things in common, there are also a few differences.
These differences are mainly the number and types of ports available on base models as well as the expansion capabilities of the two types of switches.

Topic 5.2.1: Catalyst 1900

*Catalyst 1912 and 1912C
There are four models of Catalyst 1900 switches and all models have an address cache that can store 1024 MAC addresses. Models 1912 and 1912C have 12 switched 10Base-T ports. In addition, the 1912 contains two 100Base-TX ports and the 1912C contains one 100Base-TX port and one 100Base-FX port.

*Catalyst 1924 and 1924C
Models 1924 and 1924C have 24 switched 10Base-T ports. Model 1924 also has two 100Base-TX ports, while Model 1924C has one 100Base-TX port and one 100Base-FX port.

*Best Use for Catalyst 1900
Catalyst 1900 switches are best utilized in connecting several 10-Mbps workgroups with 100-Mbps servers and/or a Fast Ethernet backbone.

Topic 5.2.2: Catalyst 2820

*Catalyst 2822 and 2828
There are two models of Catalyst 2820 switches. Both models have 24 switched 10Base-T ports and two high-speed expansion slots. The difference in models is that the 2822 has an address cache that can hold up to 2048 MAC addresses and the 2828 has an address cache that can hold up to 8192 MAC addresses.

*Catalyst 2820 Expansion Modules
There are eleven expansion modules that can be used by Catalyst 2820 switches for high-speed connections. For 100Base-TX connections, there is either a 1-port or an 8-port module. For 100Base-FX, there is either a 1-port or a 4-port module. The three modules for FDDI connections are for a fiber SAS, fiber DAS, or UTP SAS. There are also four 1-port ATM modules that differ by the cable type used.

*Best Use for Catalyst 2820
The high-speed expansion modules available for Catalyst 2820 switches make them ideal for customizing high-speed connectivity solutions.

Question 33

Question 34


* Exercise 1
Try using the World Wide Web to find more information on Catalyst switches.

Examine the following table
Step Action
1 Use your browser to navigate to Cisco Systems' home page.
2 Perform searches for terms such as Catalyst 1900, Catalyst 2820, Catalyst expansion modules, etc.
3 Follow the links to find more information on Catalyst 1900 and 2820 switches.


Topic 5.3: Unit 5 Summary

In this unit, you were introduced to Catalyst 1900 and 2820 switches. You saw the features they have in common, such as shared memory buffering, VLAN support, switching modes, management protocols, and LED indicators. You also became familiar with the different models and expansion modules of these switches.
In this course, you learned about bridging and switching. You saw how bridges work and how STP is used to remove routing loops. You also found out about various types of switching, such as Data Link switches, multilayer switches, and ATM switches.
