
Additional Reading

The authors tried to keep the size of the book manageable while providing the necessary information for the topics involved.

Some readers may require additional reference material and may find the following books a great supplementary resource for the topics in this book.

Fuller, Ron, David Jansen, and Matthew McPherson. NX-OS and Cisco Nexus Switching. Indianapolis: Cisco Press, 2013.

Edgeworth, Brad, Aaron Foss, and Ramiro Garza Rios. IP Routing on Cisco IOS, IOS XE, and IOS XR. Indianapolis: Cisco Press, 2014.

Krattiger, Lukas, Shyam Kapadia, and David Jansen. Building Data Centers with VXLAN BGP EVPN. Indianapolis: Cisco Press, 2017.

This chapter covers the following topics:

Nexus Platforms

NX-OS Architecture

NX-OS Virtualization Features

Management and Operations Capabilities

At the time of its release in 2008, the Nexus operating system (NX-OS) and the Nexus 7000 platform provided a substantial leap forward in terms of resiliency, extensibility, virtualization, and system architecture compared to other switching products of the time.

Wasteful excess capacity in bare metal server resources had already given way to the efficiency of virtual machines, and now that wave was beginning to wash over to the network as well. Networks were evolving from traditional 3-Tier designs (access layer, distribution layer, core layer) to designs that required additional capacity, scale, and availability. It was no longer acceptable to have links sitting idle due to Spanning Tree Protocol blocking while that capacity could be utilized to increase the availability of the network.

As network topologies evolved, so did the market’s expectation of the network infrastructure devices that connected their hosts and network segments. Network operators were looking for platforms that were more resilient to failures, offered increased switching capacity, and allowed for additional network virtualization in their designs to better utilize physical hardware resources. Better efficiency was also needed in terms of reduced power consumption and cooling requirements as data centers grew larger with increased scale.

The Nexus 7000 series was the first platform in Cisco’s Nexus line of switches created to meet the needs of this changing data center market. NX-OS combines the functionality of Layer 2 switching, Layer 3 routing, and SAN switching into a single operating system.

Introduction to Nexus Operating System (NX-OS)

From the initial release, the operating system has continued to evolve, and the portfolio of Nexus switching products has expanded to include several series of switches that address the needs of a modern network. Throughout this expansion, the following four fundamental pillars of NX-OS have remained unchanged:

Resiliency

Virtualization

Efficiency

Extensibility

This chapter introduces the different types of Nexus platforms along with their placement in the modern network architecture, and the major functional components of NX-OS. In addition, some of the advanced serviceability and usability enhancements are introduced to prepare you for the troubleshooting chapters that follow. This enables you to dive into each of the troubleshooting chapters with a firm understanding of NX-OS and Nexus switching to build upon.

Nexus Platforms Overview

The Cisco Nexus switching portfolio contains the following platforms:

Nexus 2000 Series

Nexus 3000 Series

Nexus 5000 Series

Nexus 6000 Series

Nexus 7000 Series

Nexus 9000 Series

The following sections introduce each Nexus platform and provide a high-level overview of their features and placement depending on common deployment scenarios.

Nexus 2000 Series

The Nexus 2000 series is a group of devices known as fabric extenders (FEX). A FEX essentially acts as a remote line card for its parent switch, extending the parent's fabric into the server access layer.

The FEX architecture provides the following benefits:

Extend the fabric to hosts without the need for spanning tree

Highly scalable architecture that is common regardless of host type

Single point of management from the parent switch

Ability to upgrade parent switch and retain the FEX hardware

The Nexus 2000 FEX products do not function as standalone devices; they require a parent switch to function as a modular system. Several models are available to meet the host port physical connectivity requirements, with various options for 1 GE and 10 GE connectivity as well as Fibre Channel over Ethernet (FCoE). On the fabric side of the FEX, which connects back to the parent switch, different options exist for 1 GE, 10 GE, and 40 GE interfaces. The current FEX models are as follows:

1 GE Fabric Extender Models: (2224TP, 2248TP, 2248TP-E)

10 GBase-T Fabric Extender Models: (2332TQ, 2348TQ, 2348TQ-E, 2232TM-E, 2232TM)

10 G SFP+ Fabric Extender Models: (2348UPQ, 2248PQ, 2232PP)

When deciding on a FEX platform, consider the host connectivity requirements, the parent switch connectivity requirements, and compatibility of the parent switch model.

The expected throughput and performance of the hosts should also be a consideration because the addition of a FEX allows oversubscription of the fabric-side interfaces based on the front panel bandwidth available for hosts.

Nexus 3000 Series

The Nexus 3000 series consists of several models of high-performance, low-latency, fixed-configuration switches. They offer a compact 1 or 2 RU (rack unit) footprint with a high density of front panel ports ranging in speed from 1 GE, 10 GE, and 40 GE to 100 GE.

These switches are not only high performance but also versatile because they support a wide range of Layer 2 features as well as support for Layer 3 routing protocols and IP Multicast. The model number is a combination of the platform series, the number of ports or the total bandwidth of the ports, and the type of interfaces.

The current Nexus 3000 models are as follows:

Nexus 3000 Models: (3064X, 3064-32T, 3064T, 3048)

Nexus 3100 Models: (3132Q/3132Q-X, 3164Q, 3172PQ, 3172TQ, 31128PQ)

Nexus 3100V Models: (31108PC-V, 31108TC-V, 3132Q-V)

Nexus 3200 Models: (3232C, 3264Q)

Nexus 3500 Models: (3524/3524-X, 3548/3548-X)

Nexus 3600 Models: (36180YC-R)

Each of these models has advantages depending on the intended role. For example, the Nexus 3500 series is capable of ultra-low-latency switching (sub-250ns), which makes it popular for high-performance computing as well as high-frequency stock trading environments. The 3100-V is capable of Virtual Extensible Local Area Network (VXLAN) routing, the 3200 offers low latency and larger buffers, while the 3000 and 3100 series are good all-around line rate Top of Rack (ToR) switches.

Note All Nexus 3000 series, with the exception of the Nexus 3500 series, run the same NX-OS software release as the Nexus 9000 series switches.

Nexus 5000 Series

The Nexus 5000 series supports a wide range of Layer 2 and Layer 3 features, which allows versatility depending on the network design requirements. The Nexus 5500 series requires the installation of additional hardware and software licensing for full Layer 3 support, whereas the Nexus 5600 series offers a native Layer 3 routing engine capable of 160 Gbps performance. The Nexus 5600 also supports VXLAN and larger table sizes compared to the 5500 series.

The current Nexus 5000 models are as follows:

Nexus 5500 Models: (5548UP, 5596UP, 5596T)

Nexus 5600 Models: (5672UP, 5672UP-16G, 56128P, 5624Q, 5648Q, 5696Q)

The Nexus 5000 series is well suited as a Top of Rack (ToR) or End of Row (EoR) switch for high-density and high-scale environments. These switches support 1 GE, 10 GE, and 40 GE connectivity for Ethernet and FCoE. Superior port densities are achieved when used as a parent switch for FEX aggregation, and the 5696Q supports 100 GE uplinks with the addition of expansion modules. The platform naming convention is the model family followed by the supported number of 10 GE or 40 GE ports, depending on the model. A Nexus 5672 is a 5600 platform that supports 72 ports of 10 GE Ethernet, and the UP characters indicate unified ports, which can operate as Ethernet or Fibre Channel.

The support for Layer 3 features combined with a large number of ports, FEX aggregation, and the flexibility of supporting Ethernet, FCoE, and Fibre Channel in a single platform make the Nexus 5000 series a very attractive ToR or EoR option for many environments.

Nexus 6000 Series

The Nexus 6001 and Nexus 6004 switches are suited for ToR and EoR placement in high-density data center networks. The 6001 is a 1RU chassis that supports connectivity for 1 GE and 10 GE servers, and the 6004 is a 4RU chassis suited for 10 GE and 40 GE server connectivity or FCoE. FEX aggregation is also a popular application of the Nexus 6000 series. The Nexus 6000 series offers large buffers and low-latency switching to meet the needs of high-performance computing environments. These switches support robust Layer 2, Layer 3, and storage feature sets with the appropriate feature license installed.

The Nexus 6000 series has reached end of sale in its product life cycle as of April 30, 2017. The Nexus 5600 platform is designated as the replacement platform because it offers similar benefits, density, and placement in the data center.

Nexus 7000 Series

The Nexus 7000 series first shipped nearly 10 years ago, and it continues to be a very popular option for enterprise, data center, and service provider networks around the world. There are many reasons for its success. It is a truly modular platform based on a fully distributed crossbar fabric architecture that provides a large number of features. The Nexus 7000 series is categorized into two chassis families: the 7000 and the 7700. The 7000 series chassis are available in the following configurations, where the last two digits of the platform name represent the number of slots in the chassis:

Nexus 7000 Models: (7004, 7009, 7010, 7018)

Nexus 7700 Models: (7702, 7706, 7710, 7718)

The different chassis configurations allow for optimal sizing in any environment. The 7000 series has five fabric module slots, whereas the 7700 has six fabric module slots.

The 7004 and the 7702 do not use separate fabric modules because the crossbar fabric on the Input/Output (I/O) modules is sufficient for handling the platform’s requirements.

Access to the fabric is controlled by a central arbiter on the supervisor. This grants access to the fabric for ingress modules to send packets toward egress modules. Virtual output queues (VOQ) are implemented on the ingress I/O modules that represent the fabric capacity of the egress I/O module. These VOQs minimize head-of-line blocking that could occur waiting for an egress card to accept packets during congestion.
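The VOQ behavior can be sketched in a few lines of Python. This is a toy model for illustration only: the class, its methods, and the simple grant list are invented here and do not represent the actual Nexus 7000 arbitration hardware.

```python
from collections import deque

class VoqIngressModule:
    """Toy model (not Nexus hardware) of an ingress I/O module that keeps
    one virtual output queue (VOQ) per egress module."""

    def __init__(self):
        self.voqs = {}  # egress module id -> deque of queued packets

    def enqueue(self, egress_id, packet):
        # Traffic is queued per destination egress module, not in one FIFO.
        self.voqs.setdefault(egress_id, deque()).append(packet)

    def drain(self, grants):
        """Send one packet across the fabric for each egress module the
        central arbiter has granted access to."""
        sent = []
        for egress_id in grants:
            q = self.voqs.get(egress_id)
            if q:
                sent.append((egress_id, q.popleft()))
        return sent

ingress = VoqIngressModule()
ingress.enqueue(1, "pkt-a")  # egress module 1 is congested (no grant)
ingress.enqueue(2, "pkt-b")  # egress module 2 has capacity

# Only egress 2 is granted; pkt-b is not stuck behind pkt-a, so there is
# no head-of-line blocking across egress modules.
print(ingress.drain(grants=[2]))  # [(2, 'pkt-b')]
```

With a single shared FIFO, pkt-b would have waited behind pkt-a until egress module 1 could accept traffic; the per-egress queues are what remove that dependency.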

The Nexus 7000 and 7700 utilize a supervisor module that is responsible for running the management and control plane of the platform as well as overseeing the platform health.

The supervisor modules have increased in CPU power, memory capacity, and switching performance, with each generation starting with the Supervisor 1, then the Supervisor 2, and then the current Supervisor 2E.

Because the Nexus 7000 is a distributed system, the I/O modules run their own software, and they are responsible for handling all the data plane traffic. All Nexus 7000 I/O modules fall into one of two families of forwarding engines: M Series or F Series. Both families of line cards have port configurations that range in speed from 1 GE, 10 GE, and 40 GE to 100 GE. They are commonly referred to by their forwarding engine generation (M1, M2, and M3 and F1, F2, and F3), with each generation offering improvements in forwarding capacity and features over the previous. The M series generally has larger forwarding table capacity and larger packet buffers. Previously the M series also supported more Layer 3 features than the F series, but with the release of the F3 cards, the feature gap has closed with support for features like Locator/ID Separation Protocol (LISP) and MPLS. Figure 1-1 explains the I/O module naming convention for the Nexus 7000 series.

Figure 1-1 breaks down the example module name N77-F348XP-23 as follows: N77 (Nexus 7700 I/O module), F3 (F3 forwarding engine), 48 (number of interfaces), XP (SFP/SFP+ optics), 2 (module hardware revision), and 3 (requires at least 3 fabric modules).

Figure 1-1 Nexus 7000 Series I/O Module Naming Convention

The Nexus 7000 is typically deployed in an aggregation or core role; however, using FEXs with the Nexus 7000 provides high-density access connectivity for hosts. The Nexus 7000 is also a popular choice for overlay technologies like MPLS, LISP, Overlay Transport Virtualization (OTV), and VXLAN due to its wide range of feature availability and performance.

Nexus 9000 Series

The Nexus 9000 Series was added to the lineup in late 2013. The Nexus 9500 is a modular switch and was the first model to ship with several innovative features. The modular chassis was designed to minimize the number of components so it does not have a mid-plane. The line-card modules interface directly to the fabric modules in the rear of the chassis. The switching capacity of the chassis is determined by adding up to six fabric modules that are designed to be full line rate, nonblocking to all ports.

Recently the R-Series line cards and fabric modules were released, which feature deep buffer capabilities and increased forwarding table sizes for demanding environments.

The Nexus 9500 is a modular switching platform and therefore has supervisor modules, fabric modules, and various line-card options. Two supervisor modules exist for the Nexus 9500:

Supervisor A with a 4 core 1.8 GHz CPU, 16 GB of RAM, and 64 GB of SSD storage

Supervisor B with a 6 core 2.2 GHz CPU, 24 GB of RAM, and 256 GB of SSD storage

The Nexus 9000 series uses a mix of commodity merchant switching application-specific integrated circuits (ASIC) as well as Cisco-developed ASICs to reduce cost where appropriate. The Nexus 9500 was followed by the Nexus 9300 and Nexus 9200 series. Interface speeds of 1 GE, 10 GE, 25 GE, 40 GE, and 100 GE are possible, depending on the model, and FCoE and FEX aggregation are also supported on select models.

The 9500 is flexible and modular, and it could serve as a leaf/aggregation or core/spine layer switch, depending on the size of the environment.

The 9300 and 9200 function well as high-performance ToR/EoR/leaf switches. The Nexus 9000 series varies in size from 1RU to 21RU with various module and connectivity options that match nearly any connectivity and performance requirement. The available models are as follows:

Nexus 9500 Models: (9504, 9508, 9516)

Nexus 9300 100M/1G Base-T Models: (9348GC-FXP)

Nexus 9300 10 GBaseT Models: (9372TX, 9396TX, 93108TC-FX, 93120TX, 93128TX, 93108TC-EX)

Nexus 9300 10/25 GE Fiber Models: (9372PX, 9396PX, 93180YC-FX, 93180YC-EX)

Nexus 9300 40 GE Models: (9332PQ, 9336PQ, 9364C, 93180LC-EX)

Nexus 9200 Models: (92160YC-X, 9272Q, 92304QC, 9236C, 92300YC)

The Nexus 9000 platform naming convention is explained in Figure 1-2.

Figure 1-2 breaks down the example model N9K-C93180YC-EX as follows: N9K (Nexus 9000), C (chassis/ToR; X indicates a line card), 93 (92–93 indicates a 9200 or 9300 platform; 94–97 indicates a 9500 line card ASIC type), and 180 (number of ports if they are all the same speed, or total bandwidth in 10s of Gb). Port-type letters are P (10G SFP+), T (10G copper), Y (25G SFP+), Q (40G QSFP+), C (100G QSFP28), and U (unified ports). Suffix letters are E (Enhanced), X (Analytics/NetFlow), F (MACsec), S (merchant silicon 100G), and R (deep buffers).

Figure 1-2 Nexus 9000 Series Naming Convention
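As an illustration of the naming rules in Figure 1-2, the following Python sketch splits a model string into its fields. The function, its regular expression, and the returned dictionary are hypothetical; this is not a Cisco utility, just a worked example of the convention.

```python
import re

# Field meanings come from Figure 1-2; everything else here is invented
# for illustration.
PORT_TYPES = {
    "P": "10G SFP+", "T": "10G copper", "Y": "25G SFP+",
    "Q": "40G QSFP+", "C": "100G QSFP28", "U": "unified ports",
    "G": "1G copper",
}
SUFFIX_FLAGS = {
    "E": "enhanced", "X": "analytics (NetFlow)", "R": "deep buffers",
    "S": "merchant silicon 100G", "F": "MACsec",
}

def decode_n9k(model):
    """Split an N9K model string into the fields shown in Figure 1-2."""
    m = re.match(r"N9K-([CX])(\d{2})(\d+)([A-Z]+)(?:-([A-Z]+))?$", model)
    if not m:
        raise ValueError("unrecognized model string: " + model)
    form, series, size, ports, suffix = m.groups()
    return {
        "form": "chassis/ToR" if form == "C" else "line card",
        "platform": ("9500 line card" if 94 <= int(series) <= 97
                     else "9%s00 series" % series[1]),
        "ports_or_bandwidth": int(size),  # port count, or tens of Gb
        "port_types": [PORT_TYPES.get(c, c) for c in ports],
        "flags": [SUFFIX_FLAGS.get(c, c) for c in (suffix or "")],
    }

# A 9300-series ToR with 25G SFP+ downlinks and 100G QSFP28 uplinks:
print(decode_n9k("N9K-C93180YC-EX"))
```

Running the decoder on N9K-C93180YC-EX identifies a 9300-series chassis/ToR with 180 Y/C (25G SFP+ and 100G QSFP28) ports and the enhanced, analytics-capable ASIC.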

The Nexus 9000 series is popular in a variety of network deployments because of its speed, broad feature sets, and versatility. The series is used in high-frequency trading, high-performance computing, large-scale leaf/spine architectures, and it is the most popular Cisco Nexus platform for VXLAN implementations.

Note The Nexus 9000 series operates in standalone NX-OS mode or in application-centric infrastructure (ACI) mode, depending on what software and license is installed. This book covers only Nexus standalone configurations and troubleshooting.

The portfolio of Nexus switching products is always evolving. Check the product data sheets and documentation available on www.cisco.com for the latest information about each product.

NX-OS Architecture

Since its inception, the four fundamental pillars of NX-OS have been resiliency, virtualization, efficiency, and extensibility. The designers also wanted to provide a user interface that had an IOS-like look and feel so that customers migrating to NX-OS from legacy products feel comfortable deploying and operating them. The greatest improvements to the core operating system over IOS were in the following areas:

Process scheduling

Memory management

Process isolation

Management of feature processes

In NX-OS, feature processes are not started until they are configured by the user. This saves system resources and allows for greater scalability and efficiency. The features use their own memory and system resources, which adds stability to the operating system.
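This conditional-start behavior can be modeled as lazy instantiation. The Python sketch below is a toy illustration, not NX-OS internals; the class and method names are invented, but it shows the idea that a feature consumes no resources until it is configured.

```python
# Toy illustration (not NX-OS code) of conditional feature startup: a
# feature process is created, and its resources allocated, only when the
# operator issues the equivalent of the CLI command "feature <name>".
class FeatureManager:
    def __init__(self):
        self._registry = {}  # feature name -> factory that starts it
        self._running = {}   # feature name -> running instance

    def register(self, name, factory):
        self._registry[name] = factory

    def configure(self, name):
        # Start the process on first configuration; later calls are no-ops.
        if name not in self._running:
            self._running[name] = self._registry[name]()
        return self._running[name]

    def is_running(self, name):
        return name in self._running

mgr = FeatureManager()
mgr.register("ospf", lambda: {"proto": "ospf", "state": "started"})

print(mgr.is_running("ospf"))  # False: known, but consuming no resources
mgr.configure("ospf")          # equivalent of "feature ospf"
print(mgr.is_running("ospf"))  # True: process started on demand
```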

Although similar in look and feel, under the hood, the NX-OS operating system has improved in many areas over Cisco’s IOS operating system.

The NX-OS modular architecture is depicted in Figure 1-3.

As shown in the figure, NX-OS is built on a Linux kernel. Above the kernel sit core infrastructure services such as the system manager (SYSMGR), persistent storage service (PSS), and message and transaction service (MTS); hardware drivers and the Netstack IP stack; Layer 2 protocol services (STP, VLAN manager, CDP, UDLD, LACP, IGMP, CTS, 802.1x); Layer 3 protocol services (OSPF, BGP, EIGRP, PIM, RIB, VRRP, HSRP, GLBP); the HA infrastructure; and the management infrastructure (CLI, NX-API, Python, SNMP, NETCONF).

Figure 1-3 NX-OS Modular Architecture

Note The next section covers some of the fundamental NX-OS components that are of the most interest. Additional NX-OS services and components are explained in the context of specific examples throughout the remainder of this book.

The Kernel

The primary responsibility of the kernel is to manage the resources of the system and interface with the system hardware components. The NX-OS operating system uses a Linux kernel to provide key benefits, such as support for symmetric multiprocessors (SMPs) and preemptive multitasking. Multithreaded processes can be scheduled and distributed across multiple processors for improved scalability.

Each component process of the OS was designed to be modular, self-contained, and memory protected from other component processes. This approach results in a highly resilient system where process faults are isolated and therefore easier to recover from when failure occurs. This self-contained, self-healing approach means that recovery from such a condition is possible with no or minimal interruption because individual processes are restarted and the system self-heals without requiring a reload.
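This restart-and-isolate behavior can be sketched as a toy supervisor loop. The model below is deliberately simplified and deterministic; real NX-OS recovery (stateful restarts, supervisor switchovers) is far more involved, and the class names here are invented for illustration.

```python
# Deterministic toy model of fault isolation (a sketch, not NX-OS
# internals): a fault in one service is recovered by restarting only
# that service, leaving every other service untouched.
class Service:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.restart_count = 0

class Supervisor:
    def __init__(self, names):
        self.services = {n: Service(n) for n in names}

    def report_fault(self, name):
        self.services[name].healthy = False

    def heal(self):
        """Restart faulted services only; return the names restarted."""
        restarted = []
        for svc in self.services.values():
            if not svc.healthy:
                svc.healthy = True       # modeled as a stateless restart
                svc.restart_count += 1
                restarted.append(svc.name)
        return restarted

sup = Supervisor(["netstack", "ospf", "bgp"])
sup.report_fault("ospf")
print(sup.heal())  # ['ospf']: the other services never stopped running
```

The point of the sketch is the isolation boundary: a fault in one service never touches the state or uptime of its neighbors, which is why recovery can happen without a system reload.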

Note Historically, access to the Linux portion of NX-OS required the installation of a "debug plugin" by Cisco support personnel. However, on some platforms NX-OS now offers a feature named bash-shell that allows users to access the underlying Linux portion of NX-OS.

System Manager (sysmgr)

The system manager is the NX-OS component that is responsible for the processes running on the system. That means that the system manager starts the processes and then monitors their health to ensure they are always functional. If a process fails, the system manager takes action to recover. Depending on the nature of the process, this action could be restarting the process in a stateful or stateless manner, or even initiating a system switchover (failover to the redundant supervisor) to recover the system if needed.

Processes in NX-OS are identified by a Universally Unique Identifier (UUID) that represents the NX-OS service. NX-OS tracks services by UUID because a process ID (PID) may change across restarts, whereas the UUID remains consistent.

The command show system internal sysmgr service all displays all the services, their UUID, and PID as shown in Example 1-1. Notice that the Netstack service has a PID of 6427 and a UUID of 0x00000221.

Example 1-1 show system internal sysmgr service all Command

NX-1# show system internal sysmgr service all

! Output omitted for brevity

Name               UUID        PID    SAP   state  Start count  Tag   Plugin ID
aaa                0x000000B5  6227   111   s0009  1            N/A   0
ospf               0x41000119  13198  320   s0009  2            32    1
psshelper_gsvc     0x0000021A  6147   398   s0009  1            N/A   0
platform           0x00000018  5817   39    s0009  1            N/A   0
radius             0x000000B7  6455   113   s0009  1            N/A   0
securityd          0x0000002A  6225   55    s0009  1            N/A   0
tacacs             0x000000B6  6509   112   s0009  1            N/A   0
eigrp              0x41000130  [NA]   [NA]  s0075  1            N/A   1
mpls               0x00000115  6936   274   s0009  1            N/A   1
mpls_oam           0x000002EF  6935   226   s0009  1            N/A   1
mpls_te            0x00000120  6934   289   s0009  1            N/A   1
mrib               0x00000113  6825   255   s0009  1            N/A   1
netstack           0x00000221  6427   262   s0009  1            N/A   0
nfm                0x00000195  6824   306   s0009  1            N/A   1
ntp                0x00000047  6462   72    s0009  1            N/A   0
obfl               0x0000012A  6228   1018  s0009  1            N/A   0
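When this output is captured from the CLI (for example, over SSH), it can be post-processed off-box. The following Python sketch is an assumption-laden example, not a Cisco tool: it parses rows like those in Example 1-1 into a service-to-UUID/PID mapping, relying only on the column layout shown in the example.

```python
# A sketch for off-box post-processing of captured CLI output. The column
# layout is assumed from Example 1-1; the function itself is hypothetical.
def parse_sysmgr_services(output):
    """Map service name -> {'uuid': ..., 'pid': ...} from captured
    'show system internal sysmgr service all' output."""
    services = {}
    for line in output.splitlines():
        fields = line.split()
        # Data rows carry a hex UUID in the second column; headers do not.
        if len(fields) >= 3 and fields[1].startswith("0x"):
            pid = None if fields[2] == "[NA]" else int(fields[2])
            services[fields[0]] = {"uuid": fields[1], "pid": pid}
    return services

sample = """\
Name      UUID        PID    SAP   state  Start count  Tag  Plugin ID
netstack  0x00000221  6427   262   s0009  1            N/A  0
eigrp     0x41000130  [NA]   [NA]  s0075  1            N/A  1
"""
print(parse_sysmgr_services(sample)["netstack"])
# {'uuid': '0x00000221', 'pid': 6427}
```

Note how the `[NA]` placeholder for a service that is not running (eigrp above) is mapped to `None` rather than an integer PID.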

Additional details about a service, such as its current state, how many times it has restarted, and how many times it has crashed, are viewed by using the UUID obtained in the output of the previous command. The syntax for the command is show system internal sysmgr service uuid uuid, as demonstrated in Example 1-2.

Example 1-2 show system internal sysmgr service Command

NX-1# show system internal sysmgr service uuid 0x00000221
UUID = 0x221.

Service "netstack" ("netstack", 182):

UUID = 0x221, PID = 6427, SAP = 262

State: SRV_STATE_HANDSHAKED (entered at time Fri Feb 17 23:56:39 2017).

Restart count: 1

Time of last restart: Fri Feb 17 23:56:39 2017.

The service never crashed since the last reboot.

Tag = N/A
Plugin ID: 0

Note If a service has crashed, the process name, PID, and date/time of the event is found in the output of show cores.
