Sunday, January 2, 2011

Creating a VLAN and Assigning Ports

VLANs must be created before they can be used. Creating VLANs is easy: in global configuration mode, just identify the VLAN number and, optionally, name it.

(config)# vlan 12
(config-vlan)# name MYVLAN


Delete a VLAN by using the same command with no in front of it. There is no need to include the name when deleting.
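
For example, the following removes the VLAN created above:

(config)# no vlan 12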

When statically assigning ports to VLANs, first make the interface an access port, and then assign the port to a VLAN. At the interface configuration prompt:

(config-if)# switchport mode access
(config-if)# switchport access vlan 12


Verifying VLAN Configuration

To see a list of all the VLANs and the ports assigned to them, use the command show vlan. To narrow down the information displayed, you can use these keywords after the command: brief, id vlan-number, or name vlan-name.
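
For example, using the VLAN created earlier in this section:

ASW# show vlan brief
ASW# show vlan id 12
ASW# show vlan name MYVLAN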

Other verification commands include:

show running-config interface interface-no: Use the following to verify the VLAN membership of the port:

ASW# show run interface fa0/5
Building configuration...
Current configuration : 64 bytes
interface FastEthernet0/5
switchport access vlan 20
switchport mode access

show mac address-table interface interface-no vlan vlan-number: Use the following to view MAC addresses learned through that port for the specified VLAN:
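
A minimal example, using the port and VLAN from the running-config output above:

ASW# show mac address-table interface fa0/5 vlan 20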


show interfaces interface-no switchport: Use the following to see detailed information about the port configuration, such as entries in the Administrative Mode and Access Mode VLAN fields:

ASW# show interfaces fa0/1 switchport
Name: Fa0/1
Switchport: Enabled
Administrative Mode: dynamic desirable
Operational Mode: static access
Administrative Trunking Encapsulation: negotiate
Operational Trunking Encapsulation: native
Negotiation of Trunking: On
Access Mode VLAN: 1 (default)
Trunking Native Mode VLAN: 1 (default)
Trunking VLANs Enabled: ALL
Pruning VLANs Enabled: 2-1001
Protected: false
Unknown unicast blocked: false
Unknown multicast blocked: false
Broadcast Suppression Level: 100
Multicast Suppression Level: 100
Unicast Suppression Level: 100

Wednesday, October 27, 2010

VLAN Implementation

VLANs are used to break large campus networks into smaller pieces. The benefit of this is to minimize the amount of broadcast traffic on a logical segment.


VLAN Overview

A virtual LAN (VLAN) is a logical LAN, or a logical subnet. It defines a broadcast domain. A physical subnet is a group of devices that shares the same physical wire. A logical subnet is a group of switch ports assigned to the same VLAN, regardless of their physical location in a switched network. VLAN membership can be assigned either statically by port, or dynamically by MAC address or username.

Two types of VLANs are:
  • End-to-end VLAN: VLAN members reside on different switches throughout the network. They are used when hosts are assigned to VLANs for policy reasons, rather than physical location. This provides users a consistent policy and access to resources regardless of their location. It also makes troubleshooting more complex because so many switches can carry traffic for a specific VLAN, and broadcasts can traverse many switches. Figure 2-1 shows end-to-end VLANs.
  • Local VLAN: Hosts are assigned to VLANs based on their location, such as a floor in a building. This design is more scalable and easier to troubleshoot because the traffic flow is more deterministic. It enables more redundancy and minimizes failure domains. It does require a routing function to share resources between VLANs. Figure 2-2 shows an example of local VLANs.

When planning a VLAN structure, consider traffic flows and link sizing. Take into account the entire traffic pattern of applications found in your network. For instance, IP voice media traffic travels directly between phones, but signaling traffic must pass to the Unified Communications Manager. Multicast traffic must communicate back to the routing process and possibly call upon a Rendezvous Point. Various user applications, such as email and Citrix, place different demands on the network.

Application flow influences link bandwidth. Remember that uplink ports need to handle all hosts communicating concurrently, and although VLANs logically separate traffic, traffic in different VLANs still travels over the same trunk link. Benchmark throughput for critical applications and user data during peak hours; then analyze the results for any bottlenecks throughout the layered design.

User access ports are typically Fast Ethernet or faster. Access switches must have the necessary port density and can be either Layer 2 or Layer 3. Links from the access layer to the distribution layer should be Gigabit Ethernet or better, with an oversubscription ratio of no more than 20:1. Distribution switches should be multilayer or Layer 3 switches. Links from the distribution layer to the core should be Gigabit EtherChannel or 10-Gigabit Ethernet, with an oversubscription ratio of no more than 4:1.
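
As an illustrative sizing check (the numbers are hypothetical): a 48-port Gigabit access switch could, in theory, generate 48 Gbps of user traffic. A single Gigabit uplink to the distribution layer would be oversubscribed 48:1, well beyond the 20:1 guideline, whereas a four-port Gigabit EtherChannel uplink brings the ratio down to 12:1.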


VLAN Planning

Before beginning a VLAN implementation, you need to determine the following information:
  • VLAN numbering, naming, and IP addressing scheme
  • VLAN placement: local to one switch or spanning multiple switches
  • Whether any trunks are necessary, and where
  • VTP parameters
  • Test and verification plan

Wednesday, June 9, 2010

Service-Oriented Network Architecture

Service-Oriented Network Architecture (SONA) attempts to provide a design framework for a network that can deliver the services and applications businesses need. It acknowledges that the network connects all components of the business and is critical to them. The SONA model integrates network and application functionality cooperatively and enables the network to be smart about how it handles traffic to minimize the footprint of applications.

Figure 1-3 shows how SONA breaks down this functionality into three layers:
  • Network Infrastructure: Campus, data center, branch, and so on. Networks and their attached end systems (resources such as servers, clients, and storage). These can be connected anywhere within the network. The goal is to provide anytime/any place connectivity.
  • Interactive Services: Resources allocated to applications, using the network infrastructure. These include:
      • Management
      • Infrastructure services such as security, mobility, voice, compute, storage, and identity
      • Application delivery
      • Virtualization of services and network infrastructure
  • Applications: Includes business policy and logic. Leverages the interactive services layer to meet business needs. Has two sublayers:
      • Application layer, which defines business applications
      • Collaboration layer, which defines applications such as unified messaging, conferencing, IP telephony, video, instant messaging, and contact centers


Planning a Network Implementation

It is important to use a structured approach to planning and implementing any network changes or new network components. A comprehensive life-cycle approach lowers the total cost of ownership, increases network availability, increases business agility, and provides faster access to applications and services.

The Prepare, Plan, Design, Implement, Operate, and Optimize (PPDIOO) Lifecycle Approach is one structure that can be used. The components are:
  • Prepare: Organizational requirements gathering, high-level architecture, network strategy, business case strategy
  • Plan: Network requirements gathering, network examination, gap analysis, project plan
  • Design: Comprehensive, detailed design
  • Implement: Detailed implementation plan, and implementation following its steps
  • Operate: Day-to-day network operation and monitoring
  • Optimize: Proactive network management and fault correction

Network engineers at the CCNP level will likely be involved in the implementation and subsequent phases, and they can also participate in the design phase. It is important to create a detailed implementation plan that includes test and verification procedures and a rollback plan. Each step in the implementation plan should include a description, a reference to the design document, detailed implementation and verification instructions, detailed rollback instructions, and the estimated time needed for completion. A complex implementation should be done in sections, with testing at each incremental section.

Campus Network Design

An enterprise campus generally refers to a network in a specific geographic location. It can be within one building or span multiple buildings near each other. A campus network also includes the Ethernet LAN portions of a network outside the data center. Large enterprises have multiple campuses connected by a WAN. Using models to describe the network architecture divides the campus into several internetworking functional areas, thus simplifying design, implementation, and troubleshooting.


The Hierarchical Design Model

Cisco has used the three-level Hierarchical Design Model for years. This model divides a network into three layers:

Access: Provides end-user access to the network. In the LAN, local devices such as phones and computers access the local network. In the WAN, remote users or sites access the corporate network.
  • High availability via hardware such as redundant power supplies and redundant supervisor engines. Software redundancy via access to redundant default gateways using a first hop redundancy protocol (FHRP).
  • Converged network support by providing access to IP phones, computers, and wireless access points. Provides QoS and multicast support.
  • Security through switching tools such as Dynamic ARP Inspection, DHCP snooping, BPDU Guard, port-security, and IP source guard. Controls network access.
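
As a minimal sketch of one of the access layer security tools listed above, port security can limit the MAC addresses allowed on a port (the values shown are illustrative only):

(config-if)# switchport mode access
(config-if)# switchport port-security
(config-if)# switchport port-security maximum 2
(config-if)# switchport port-security violation restrict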

Distribution: Aggregation point for access switches. Provides availability, QoS, fast path recovery, and load balancing.
  • High availability through redundant distribution layer switches providing dual paths to the access switches and to core switches. Use of FHRP protocols to ensure connectivity if one distribution switch is removed.
  • Routing policies applied, such as route selection, filtering, and summarization. Can be default gateway for access devices. QoS and security policies applied.
  • Segmentation and isolation of workgroups and workgroup problems from the core, typically using a combination of Layer 2 and Layer 3 switching.

Core: The backbone that provides a high-speed, Layer 3 path between distribution layers and other network segments. Provides reliability and scalability.
  • Reliability through redundant devices, device components, and paths.
  • Scalability through scalable routing protocols. Having a core layer in general aids network scalability by providing gigabit (and faster) connectivity, data and voice integration, and convergence of the LAN, WAN, and MAN.
  • No policies such as ACLs or filters that would slow traffic down.

A set of distribution devices and their accompanying access layer switches is called a switch block.
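
Both the access and distribution layer descriptions above depend on a first hop redundancy protocol for default gateway redundancy. A minimal sketch using HSRP (one FHRP option) on a distribution switch, with hypothetical VLAN and addressing:

(config)# interface vlan 10
(config-if)# ip address 10.1.10.2 255.255.255.0
(config-if)# standby 10 ip 10.1.10.1
(config-if)# standby 10 priority 110
(config-if)# standby 10 preempt

The peer distribution switch would use its own interface address (for example, 10.1.10.3) and a lower priority, so it takes over the virtual gateway address only if the first switch fails.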


The Core Layer

Is a core layer always needed? Without a core layer, the distribution switches must be fully meshed. This becomes more of a problem as a campus network grows larger. A general rule is to add a core when connecting three or more buildings or four or more pairs of building distribution switches. Some benefits of a campus core are:
  • Adds a hierarchy to distribution switch connectivity
  • Simplifies cabling because a full-mesh between distribution switches is not required
  • Reduces routing complexity by summarizing distribution networks

Small Campus Design

In a small campus, the core and distribution can be combined into one layer. Small is defined as fewer than 200 end devices. In very small networks, one multilayer switch might provide the functions of all three layers. Figure 1-1 shows a sample small network with a collapsed core.


Medium Campus Design

A medium-sized campus, defined as one with between 200 and 1000 end devices, is more likely to have several distribution switches and thus require a core layer. Each building or floor is a campus block with access switches uplinked to redundant multilayer distribution switches. These are then uplinked to redundant core switches, as shown in Figure 1-2.



Data Center Design

The core layer connects end users to the data center devices. The data center segment of a campus can vary in size from a few servers connected to the same switch as users in a small campus, to a separate network with its own three-layer design in a large enterprise. The three layers of a data center model are slightly different:
  • Core layer: Connects to the campus core. Provides fast switching for traffic into and out of the data center.
  • Aggregation layer: Provides services such as server load balancing, content switching, SSL off-load, and security through firewalls and IPS.
  • Access layer: Provides access to the network for servers and storage units. Can be either Layer 2 or Layer 3 switches.

Network Traffic Flow

The need for a core layer and the devices chosen for the core also depend on the type of network traffic and traffic flow patterns. Modern converged networks include different traffic types, each with unique requirements for security, QoS, transmission capacity, and delay. These include:
  • IP telephony signaling and media
  • Core application traffic, such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM)
  • Multicast multimedia
  • Network management
  • Application data traffic, such as web pages, email, file transfer, and database transactions
  • Scavenger class traffic that requires less-than-best-effort treatment

The different types of applications also have different traffic flow patterns. These might include:
  • Peer-to-Peer applications, such as IP phone calls, video conferencing, file sharing, and instant messaging, provide real-time interaction. This traffic might not traverse the core at all if the users are local to each other. Network requirements vary, with voice having strict jitter needs and video conferencing using high bandwidth.
  • Client-Server applications require access to servers such as email, file storage, and database servers. These servers are typically centralized in a data center, and users require fast, reliable access to them. Server farm access must also be securely controlled to deny unauthorized users.
  • Client-Enterprise Edge applications are located on servers at the WAN edge, reachable from outside the company. These can include email and web servers, or e-commerce servers, for example. Access to these servers must be secure and highly available.

Thursday, May 6, 2010

Structured Cabling Systems

Rules

Structured cabling is a systematic approach to cabling. It is a method for creating an organized cabling system that can be easily understood by installers, network administrators, and any other technicians who work with cables. Three rules will help ensure the effectiveness and efficiency of structured cabling design projects.

The first rule is to look for a complete connectivity solution. An optimal solution for network connectivity includes all the systems that are designed to connect, route, manage, and identify cables in structured cabling systems. A standards-based implementation is designed to support both current and future technologies. Following the standards will help ensure the long-term performance and reliability of the project.

The second rule is to plan for future growth. The number of cables installed should also meet future requirements. Category 5e, Category 6, and fiber-optic solutions should be considered to ensure that future needs will be met. The physical layer installation plan should be capable of functioning for ten or more years.

The final rule is to maintain freedom of choice in vendors. Even though a closed and proprietary system may be less expensive initially, this could end up being much more costly over the long term. A non-standard system from a single vendor may make it more difficult to make moves, adds, or changes at a later time.


There are seven subsystems associated with the structured cabling system, as shown in Figure 1. Each subsystem performs certain functions to provide voice and data services throughout the cable plant:
  • Demarcation point (demarc) within the entrance facility (EF) in the equipment room
  • Equipment room (ER)
  • Telecommunications room (TR)
  • Backbone cabling, which is also known as vertical cabling
  • Distribution cabling, which is also known as horizontal cabling
  • Work area (WA)
  • Administration

Scalability

A LAN that can accommodate future growth is referred to as a scalable network. It is important to plan ahead when estimating the number of cable runs and cable drops in a work area. It is better to install extra cables than to not have enough.

In addition to pulling extra cables in the backbone area for future growth, an extra cable is generally pulled to each workstation or desktop. This gives protection against pairs that may fail on voice cables during installation, and it also provides for expansion. It is also a good idea to provide a pull string when installing the cables to make it easier to add cables in the future. Whenever new cables are added, a new pull string should also be added.

When deciding how much extra copper cable to pull, first determine the number of runs that are currently needed and then add approximately 20 percent of extra cable.
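
For example, if a work area currently needs 50 cable runs, plan for roughly 60 runs (50 plus 20 percent) so that spare capacity is available without repulling cable.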

A different way to obtain this reserve capability is to use fiber-optic cabling and equipment in the building backbone. For example, the termination equipment can be updated by inserting faster lasers and drivers to accommodate fiber growth.


Each work area needs one cable for voice and one for data. However, other devices may need a connection to either the voice or the data system. Network printers, fax machines, laptops, and other devices in the work area may all require their own network cable drops.

After the cables are in place, use multiport wall plates over the jacks. There are many possible configurations for modular furniture or partition walls. Color-coded jacks can be used to simplify the identification of circuit types, as shown in Figure 1. Administration standards require that every circuit should be clearly labeled to assist in connections and troubleshooting.

A new technology that is becoming popular is Voice over Internet Protocol (VoIP). This technology allows special telephones to use data networks when placing telephone calls. A significant advantage of this technology is the avoidance of costly long distance charges when VoIP is used over existing network connections. Other devices like printers or computers can be plugged into the IP phone. The IP phone then becomes a hub or switch for the work area. Even if these types of connections are planned, enough cables should be installed to allow for growth. Especially consider that IP telephony and IP video traffic may share the network cables in the future.
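
Because the IP phone effectively acts as a small switch for the work area, the access port it connects to is commonly configured with both a data VLAN and a voice VLAN. A minimal sketch, assuming hypothetical VLAN numbers (10 for data, 110 for voice):

(config-if)# switchport mode access
(config-if)# switchport access vlan 10
(config-if)# switchport voice vlan 110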


Demarcation Point


The demarcation point (demarc), shown in Figure 1, is the point at which outdoor cabling from the service provider connects to the intrabuilding backbone cabling. It represents the boundary between the responsibility of the service provider and the responsibility of the customer. In many buildings, the demarc is near the point of presence (POP) for other utilities such as electricity and water.

The service provider is responsible for everything from the demarc out to the service provider facility. Everything from the demarc into the building is the responsibility of the customer.

The local telephone carrier is typically required to terminate cabling within 15 m (49.2 feet) of the building penetration point and to provide primary voltage protection. The service provider usually installs this termination and protection equipment.