Building Architectures to Solve Business Problems

About the Authors

Tim Cerling, Technical Marketing Engineer, Cisco

Tim Cerling is a Technical Marketing Engineer with Cisco's Datacenter Group, focusing on delivering customer-driven solutions on Microsoft Hyper-V and System Center products. Tim has been in the IT business since 1979. He started working with Windows NT 3.5 on the DEC Alpha product line during his 19-year tenure with DEC, and he has continued working with Windows Server technologies since then with Compaq, Microsoft, and now Cisco. During his twelve years as a Windows Server specialist at Microsoft, he co-authored a book on Microsoft virtualization technologies, Mastering Microsoft Virtualization. Tim holds a BA in Computer Science from the University of Iowa.

Prashanto Kochavara, Solutions Engineer, EMC

Prashanto has been working for the EMC solutions group for over 3 years. Prashanto is an SME on EMC storage and virtualization technologies, including VMware and Hyper-V. He has a vast amount of experience in end-to-end solution planning and deployments of VSPEX architectures. Prior to joining the solutions group at EMC, Prashanto interned as a Systems Engineer (EMC) and Software Developer (MBMS Inc.).

Prashanto holds a Bachelor's degree in Computer Engineering from SUNY Buffalo and will be graduating with a Master's Degree in Computer Science from North Carolina State University in December 2013.

Acknowledgments

For their support and contribution to the design, validation, and creation of this Cisco Validated Design, we would like to thank:
• Mike Mankovsky—Technical Lead, Cisco
• Mehul Bhatt—Technical Lead, Cisco

About the Cisco Validated Design (CVD) Program

The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, visit:

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, 'DESIGNS') IN THIS MANUAL ARE PRESENTED 'AS IS,' WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS.

RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

Technology Overview

Cisco Unified Computing System

The Cisco Unified Computing System is a next-generation data center platform that unites compute, network, and storage access. The platform, optimized for virtual environments, is designed using open industry-standard technologies and aims to reduce total cost of ownership (TCO) and increase business agility.

The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. It is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain.

The main components of Cisco Unified Computing System are: • Computing - the system is based on an entirely new class of computing system that incorporates blade servers based on Intel Xeon E5-2600 Series Processors. • Network - the system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks which historically have been separate networks. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements. • Virtualization - the system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.

• Storage access - the system provides consolidated access to both SAN storage and Network Attached Storage (NAS) over the unified fabric. By unifying the storage access, the Cisco Unified Computing System can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), iSCSI, and SMB 3.0.

This provides customers with choice for storage access and investment protection. In addition, the server administrators can pre-assign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management for increased productivity. • Management - the system uniquely integrates all system components to enable the entire solution to be managed as a single entity by the Cisco UCS Manager. The Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a powerful scripting library module for Microsoft PowerShell built on a robust application programming interface (API) to manage all system configuration and operations. The Cisco Unified Computing System is designed to deliver: • A reduced Total Cost of Ownership and increased business agility. • Increased IT staff productivity through just-in-time provisioning and mobility support.

• A cohesive, integrated system which unifies the technology in the data center. The system is managed, serviced, and tested as a whole. • Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand.

• Industry standards supported by a partner ecosystem of industry leaders.

Cisco UCS Manager

Cisco UCS Manager provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System through an intuitive GUI, a command-line interface (CLI), a Microsoft PowerShell module, or an XML API. Cisco UCS Manager provides a unified management domain with centralized management capabilities and controls multiple chassis and thousands of virtual machines.

Cisco UCS Fabric Interconnect

The Cisco UCS 6200 Series Fabric Interconnect is a core part of the Cisco Unified Computing System, providing both network connectivity and management capabilities for the system. The Cisco UCS 6200 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. The Cisco UCS 6200 Series provides the management and communication backbone for the Cisco UCS B-Series Blade Servers and Cisco UCS 5100 Series Blade Server Chassis. All chassis, and therefore all blades, attached to the Cisco UCS 6200 Series Fabric Interconnects become part of a single, highly available management domain.

In addition, by supporting unified fabric, the Cisco UCS 6200 Series provides both the LAN and SAN connectivity for all blades within its domain. From a networking perspective, the Cisco UCS 6200 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports, 1 Tb switching capacity, 160 Gbps bandwidth per chassis, independent of packet size and enabled services.

The product family supports Cisco low-latency, lossless 10 Gigabit Ethernet unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The Fabric Interconnect supports multiple traffic classes over a lossless Ethernet fabric from a blade server through an interconnect. Significant TCO savings come from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.

The Cisco UCS 6248UP 48-Port Fabric Interconnect is a one-rack-unit (1RU) 10 Gigabit Ethernet, FCoE and Fibre Channel switch offering up to 960-Gbps throughput and up to 48 ports. The switch has 32 1/10-Gbps fixed Ethernet, FCoE, and FC ports and one expansion slot.

Figure 1 Cisco UCS 6248UP Fabric Interconnect

Cisco UCS Fabric Extenders

The Cisco UCS 2200 Series Fabric Extenders multiplex and forward all traffic from blade servers in a chassis to a parent Cisco UCS fabric interconnect over 10-Gbps unified fabric links. All traffic, even traffic between blades on the same chassis or virtual machines on the same blade, is forwarded to the parent interconnect, where network profiles are managed efficiently and effectively by the fabric interconnect. At the core of the Cisco UCS fabric extender are application-specific integrated circuit (ASIC) processors developed by Cisco that multiplex all traffic. The Cisco UCS 2204XP Fabric Extender has four 10 Gigabit Ethernet, FCoE-capable SFP+ ports that connect the blade chassis to the fabric interconnect.

Each Cisco UCS 2204XP has sixteen 10 Gigabit Ethernet ports connected through the midplane to the half-width slots in the chassis. Typically configured in pairs for redundancy, two fabric extenders provide up to 80 Gbps of I/O to the chassis.

Figure 2 Cisco UCS 2204XP Fabric Extender

Cisco UCS Blade Chassis

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis.

The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors. Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid-redundant configurations.

The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS 2204XP Fabric Extenders. A passive midplane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O bandwidth for two slots. The chassis is capable of supporting future 40 Gigabit Ethernet standards. The Cisco UCS Blade Server Chassis is shown in Figure 3.

Figure 3 Cisco UCS 5108 Blade Server Chassis (back and front)

Cisco UCS Blade Servers

Delivering performance, versatility, and density without compromise, the Cisco UCS B200 M3 Blade Server addresses the broadest set of workloads, from IT and web infrastructure through distributed databases to virtualization. Building on the success of the Cisco UCS B200 M2 blade server, the enterprise-class Cisco UCS B200 M3 server further extends the capabilities of Cisco's Unified Computing System portfolio in a half-width blade form factor.

The Cisco UCS B200 M3 server harnesses the power and efficiency of the Intel Xeon E5-2600 processor product family, up to 768 GB of RAM, two drives or SSDs, and up to 2 x 20 Gigabit Ethernet connectivity to deliver exceptional levels of performance, memory expandability, and I/O throughput for nearly all applications. In addition, the Cisco UCS B200 M3 blade server offers a modern design that removes the need for redundant switching components in every chassis in favor of a simplified top-of-rack design, allowing more space for server resources and providing a density, power, and performance advantage over previous-generation servers. The Cisco UCS B200 M3 server is shown in Figure 4.

Figure 4 Cisco UCS B200 M3 Blade Server

Cisco I/O Adapters

Cisco UCS Blade Servers support various Converged Network Adapter (CNA) options.

Cisco UCS Virtual Interface Card (VIC) 1240 is used in this EMC VSPEX solution. The Cisco UCS Virtual Interface Card 1240 is a 4-port 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional Port Expander, the Cisco UCS VIC 1240 capabilities can be expanded to eight ports of 10 Gigabit Ethernet.

The Cisco UCS VIC 1240 enables a policy-based, stateless, agile server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1240 supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment.

Figure 6 Cisco UCS VIC 1240

Cisco UCS Differentiators

Cisco's Unified Computing System is revolutionizing the way servers are managed in data centers.

Following are the unique differentiators of Cisco UCS and Cisco UCS Manager.
• Embedded management—In the Cisco Unified Computing System, the servers are managed by the embedded firmware in the Fabric Interconnects, eliminating the need for any external physical or virtual devices to manage the servers. Also, a pair of FIs can manage up to 40 chassis, each containing up to 8 blade servers, to a total of 160 servers with fully redundant connectivity. This gives enormous scaling on the management plane.
• Unified fabric—In the Cisco Unified Computing System, from the blade server chassis or rack server fabric extender to the FI, there is a single Ethernet cable used for LAN, SAN, and management traffic.

This converged I/O results in reduced cables, SFPs, and adapters, reducing the capital and operational expenses of the overall solution.
• Auto discovery—By simply inserting a blade server in the chassis, discovery and inventory of the compute resource occur automatically without any management intervention. The combination of unified fabric and auto-discovery enables the wire-once architecture of the Cisco Unified Computing System, where the compute capability of the system can be extended easily while keeping the existing external connectivity to LAN, SAN, and management networks.

• Policy-based resource classification—When a compute resource is discovered by Cisco UCS Manager, it can be automatically classified into a given resource pool based on defined policies. This capability is useful in multi-tenant cloud computing.
• Combined rack and blade server management—Cisco UCS Manager can manage Cisco UCS B-Series blade servers and Cisco UCS C-Series rack servers under the same Cisco UCS domain.

This feature, along with stateless computing, makes compute resources truly hardware form factor agnostic.
• Model-based management architecture—The Cisco UCS Manager architecture and management database is model based and data driven. An open, standards-based XML API is provided to operate on the management model.

This enables easy and scalable integration of Cisco UCS Manager with other management systems, such as VMware vCloud Director, Microsoft System Center, and Citrix CloudPlatform.
• Policies, pools, templates—The management approach in Cisco UCS Manager is based on defining policies, pools, and templates, instead of cluttered configuration, which enables a simple, loosely coupled, data-driven approach to managing compute, network, and storage resources.
• Loose referential integrity—In Cisco UCS Manager, a service profile, port profile, or policy can refer to other policies or logical resources with loose referential integrity. A referred policy does not have to exist at the time of authoring the referring policy, and a referred policy can be deleted even though other policies refer to it. This allows different subject matter experts to work independently from each other, providing great flexibility where experts from different domains, such as network, storage, security, server, and virtualization, work together to accomplish a complex task.
• Policy resolution—In Cisco UCS Manager, a tree structure of organizational unit hierarchy can be created that mimics real-life tenant and/or organizational relationships.

Various policies, pools, and templates can be defined at different levels of the organization hierarchy. A policy referring to another policy by name is resolved in the organization hierarchy with the closest policy match. If no policy with the specific name is found in the hierarchy up to the root organization, then the special policy named 'default' is searched for. This policy resolution practice enables automation-friendly management APIs and provides great flexibility to owners of different organizations.
• Service profiles and stateless computing—A service profile is a logical representation of a server, carrying its various identities and policies.

This logical server can be assigned to any physical compute resource as long as it meets the resource requirements. Stateless computing enables procurement of a server within minutes, which used to take days with legacy server management systems.

• Built-in multi-tenancy support—The combination of policies, pools, templates, loose referential integrity, policy resolution in the organization hierarchy, and a service-profile-based approach to compute resources makes Cisco UCS Manager inherently friendly to the multi-tenant environments typically observed in private and public clouds.
• Extended Memory—The extended memory architecture of Cisco Unified Computing System servers allows up to 760 GB RAM per server, allowing the huge VM-to-physical-server ratios required in many deployments, or the large memory operations required by certain architectures such as big data.
• Virtualization-aware network—VM-FEX technology makes the access layer of the network aware of host virtualization. This prevents pollution of the compute and network domains with virtualization when the virtual network is managed by port profiles defined by the network administrators' team. VM-FEX also offloads the hypervisor CPU by performing switching in hardware, allowing the hypervisor CPU to do more virtualization-related tasks. VM-FEX technology is well integrated with VMware vCenter, Linux KVM, and Hyper-V SR-IOV to simplify cloud management.

• Simplified QoS—When Fibre Channel and Ethernet are converged in the Cisco Unified Computing System fabric, built-in support for QoS and lossless Ethernet makes the convergence seamless. Network Quality of Service (QoS) is simplified in Cisco UCS Manager by representing all system classes in one GUI panel.

Microsoft Hyper-V 2012 R2

Microsoft Hyper-V 2012 R2 is a next-generation virtualization solution from Microsoft that builds upon previous releases and provides greater levels of scalability, security, and availability to virtualized environments.

Hyper-V 2012 R2 offers improvements in performance and utilization of CPU, memory, and I/O. It also offers users the option to assign up to 64 virtual CPUs to a virtual machine—giving system administrators more flexibility in their virtual server farms as processor-intensive workloads continue to increase. Table 2 illustrates the increase in scale from the previous major release.

Table 2 Hyper-V Scale

Maximum (by system resource)                 Windows Server 2008 R2    Windows Server 2012 R2
Host: Logical processors on hardware         64                        320
Host: Physical memory                        1 TB                      4 TB
Host: Virtual processors per host            512                       1,024
Virtual machine: Virtual processors per VM   4                         64
Virtual machine: Memory per VM               64 GB                     1 TB
Virtual machine: Active virtual machines     384                       1,024
Virtual machine: Virtual disk size           2 TB                      64 TB
Cluster: Nodes                               16                        64
Cluster: Virtual machines                    1,000                     8,000

Microsoft provides System Center 2012 R2 for additional management capabilities in a virtualized and physical environment. This design document assumes the existence of a System Center Virtual Machine Manager server, but Hyper-V provides significant management capabilities without the addition of System Center. Included in the base capabilities of Hyper-V are:
• High availability—up to 64 Hyper-V hosts can be formed into a single cluster hosting up to 8,000 virtual machines.
• Disaster recovery—virtual machine replicas can be made to other locations for rapid recovery and restart in case of a disaster.
• Live migration—virtual machines can be live migrated (moved from one host to another with no service downtime) between any two Hyper-V hosts, whether they are clustered or not, without the need for any shared storage.

• Live storage migration—as with live machine migration, storage for a virtual machine can be migrated without any service downtime.
• Dynamic memory—virtual machines defined with dynamic memory can release unused memory for use by other virtual machines that require it.

• Cluster Shared Volumes—storage volumes in a failover cluster environment allow any virtual machine executing on any cluster host full read/write access to its virtual hard drives from any node in the cluster. This also provides additional high availability by ensuring access to the volume and uninterrupted virtual machine execution even if the Hyper-V host loses physical connection to the volume.
• Clustering of virtual machines—virtual machine clusters can use storage from many locations: iSCSI targets, virtual HBAs to directly access Fibre Channel storage, SMB 3.0 file shares, or virtual hard drives residing on Cluster Shared Volumes.
• PowerShell—a complete PowerShell module enables management of virtual machines and their resources via scripting so repetitive tasks can be easily executed.
• NIC teaming—teaming at the host level enables up to 32 physical NICs to form a single team. Within a virtual machine, two virtual NICs can form a team.
• Data deduplication—automatically deduplicates common data on the disk, including operating system files on virtual hard drives.
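As a brief illustration of the PowerShell and NIC teaming capabilities listed above, the following is a minimal sketch using the in-box NetLbfo and Hyper-V PowerShell modules. The team name, physical adapter names, and switch name are illustrative assumptions, not values from this design, and the Cisco VIC fabric failover described later can remove the need for host-level teaming altogether.

# Minimal sketch (assumed names): create a host-level NIC team from two
# physical adapters, then bind an external Hyper-V virtual switch to it.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "Ethernet 3","Ethernet 4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# External switch dedicated to virtual machine traffic; not shared with the host.
New-VMSwitch -Name "VMAccess" -NetAdapterName "VMTeam" -AllowManagementOS $false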

EMC Storage Technology and Benefits

The VNX storage series provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation.

VNX storage includes the following components, sized for the stated reference architecture workload: • Host adapter ports (For block)—Provide host connectivity through fabric to the array • Storage processors—The compute components of the storage array, which are used for all aspects of data moving into, out of, and between arrays • Disk drives—Disk spindles and solid state drives (SSDs) that contain the host or application data and their enclosures • Data Movers (For file)—Front-end appliances that provide file services to hosts (optional if CIFS services are provided). Note The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and I/O ports. It enables Common Internet File System (CIFS-SMB) and Network File System (NFS) protocols on the VNX. The Microsoft Hyper-V private cloud solutions for 300, 600, and 1,000 virtual machines described in this document are based on the EMC VNX5400, EMC VNX5600, and the EMC VNX5800 storage arrays, respectively. The VNX5400 array can support a maximum of 250 drives, the VNX5600 can host up to 500 drives, and the VNX5800 can host up to 750 drives. The VNX series supports a wide range of business-class features that are ideal for the private cloud environment, including: • EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP™) • EMC FAST Cache • File-level data deduplication and compression • Block deduplication • Thin provisioning • Replication • Snapshots or checkpoints • File-level retention • Quota management Features and Enhancements The EMC VNX flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution.

Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments. VNX includes many features and enhancements designed and built upon the first generation's success. These features and enhancements include: • More capacity with multicore optimization with Multicore Cache, Multicore RAID, and Multicore FAST Cache (MCx) • Greater efficiency with a flash-optimized hybrid array • Better protection by increasing application availability with active/active storage processors • Easier administration and deployment by increasing productivity with a new Unisphere Management Suite VSPEX is built with the next generation of VNX to deliver even greater efficiency, performance, and scale than ever before. Flash-Optimized Hybrid Array VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS.

A flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and boosts the most active data to the cache, ensuring that customers never have to make concessions for cost or performance. FAST VP dynamically absorbs unpredicted spikes in system workloads. As that data ages and becomes less active over time, FAST VP tiers the data from high-performance to high-capacity drives automatically, based on customer-defined policies.

This functionality has been enhanced with four times better granularity and with new FAST VP solid-state disks (SSDs) based on enterprise multi-level cell (eMLC) technology to lower the cost per gigabyte. All VSPEX use cases benefit from the increased efficiency. VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With VNX, customers can realize an even greater return on their investment. VNX provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier. VNX Intel MCx Code Path Optimization The advent of flash technology has been a catalyst in totally changing the requirements of midrange storage systems.

EMC redesigned the midrange storage platform to efficiently optimize multicore CPUs to provide the highest performing storage system at the lowest cost in the market. MCx distributes all VNX data services across all cores—up to 32, as shown in Figure 7. The VNX series with MCx has dramatically improved the file performance for transactional applications like databases or virtual machines over network-attached storage (NAS).

Figure 7 Next-Generation VNX with Multicore Optimization

Multicore Cache

The cache is the most valuable asset in the storage subsystem; its efficient use is key to the overall efficiency of the platform in handling variable and changing workloads. The cache engine has been modularized to take advantage of all the cores available in the system.

Multicore RAID

Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage: hard disk drives (HDDs) and SSDs. Greatly increased performance improvements in VNX come from the modularization of the back-end data management processing, which enables MCx to seamlessly scale across all processors.

VNX Performance

VNX storage, enabled with the MCx architecture, is optimized for flash first and provides unprecedented overall performance, optimizing for transaction performance (cost per IOPS) and bandwidth performance (cost per GB/s) with low latency, and providing optimal capacity efficiency (cost per GB).

Virtualization Management

EMC Storage Integrator

EMC Storage Integrator (ESI) is targeted towards the Windows and application administrator.

ESI is easy to use, delivers end-to-end monitoring, and is hypervisor agnostic. Administrators can provision in both virtual and physical environments for a Windows platform, and troubleshoot by viewing the topology of an application from the underlying hypervisor to the storage.

Microsoft Hyper-V

With Windows Server 2012, Microsoft provides Hyper-V 3.0, an enhanced hypervisor for the private cloud that can run on NAS protocols for simplified connectivity.

Offloaded Data Transfer

The Offloaded Data Transfer (ODX) feature of Microsoft Hyper-V enables data transfers during copy operations to be offloaded to the storage array, freeing up host cycles. For example, using ODX for a live migration of a SQL Server virtual machine doubled performance, decreased migration time by 50 percent, reduced CPU utilization on the Hyper-V server by 20 percent, and eliminated network traffic.
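ODX is enabled by default on Windows Server 2012 and later hosts. As a quick, hedged check from PowerShell (the registry value below is the documented switch for disabling ODX; verify against current Microsoft guidance before relying on it):

# 0 = ODX enabled (default), 1 = ODX disabled. The value may be absent if the
# default has never been changed.
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode" -ErrorAction SilentlyContinue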

EMC Avamar

EMC's Avamar data deduplication technology seamlessly integrates into virtual environments, providing rapid backup and restoration capabilities. Avamar's deduplication results in vastly less data traversing the network and greatly reduces the amount of data being backed up and stored, resulting in storage, bandwidth, and operational savings. The following are the two most common recovery requests used in backup and recovery:
• File-level recovery—Object-level recoveries account for the vast majority of user support requests. Common actions requiring file-level recovery are individual users deleting files, applications requiring recoveries, and batch process-related erasures.

• System recovery—Although complete system recovery requests are less frequent than those for file-level recovery, this bare metal restore capability is vital to the enterprise. Some of the common root causes for full system recovery requests are viral infestation, registry corruption, or unidentifiable unrecoverable issues. The Avamar System State protection functionality adds backup and recovery capabilities in both of these scenarios.

Architectural Overview

This Cisco design discusses the deployment model of the Microsoft Hyper-V 2012 R2 solution for up to 300 virtual machines. Understanding that virtual machine workloads vary from customer to customer, Cisco server components can be easily added in single-server increments to address the exact workload of the customer. Table 3 lists the hardware requirements of the designed solution.

Table 3 Hardware Requirements

Component               Hardware required
Servers                 Six Cisco UCS B200 M3 servers with 192 GB of memory
Adapters                Six Cisco VIC 1240 adapters; one per server
Chassis                 One Cisco UCS 5108 Blade Server Chassis
Fabric extenders        Two Cisco UCS 2204XP Fabric Extenders; two per chassis
Fabric interconnects    Two Cisco UCS 6248UP Fabric Interconnects
Network switches        Two Cisco Nexus 5548UP Switches
Storage                 One EMC VNX5400 storage array

Table 4 lists the server configuration and the firmware and software components for this VSPEX design.

Table 4 Firmware and Software Components

Component       Capacity
Memory (RAM)    192 GB (12 x 16 GB)
Processor       2 x Intel Xeon E5-2650 CPUs, 2.6 GHz, 8 cores, 16 threads
Adapters        Cisco VIC 1240

This architecture assumes there is an existing infrastructure/management network available in which a virtual machine hosting the Microsoft System Center Virtual Machine Manager 2012 R2 server and a Windows Active Directory/DNS/DHCP server are present. Figure 10 illustrates the high-level Cisco solution for the EMC VSPEX Microsoft Hyper-V architecture for up to 300 virtual machines.

Figure 10 Reference Architecture for up to 300 Virtual Machines. The following are the high-level design points of the architecture: • Only Ethernet is used as network layer 2 media to access Cisco UCS 6248UP from the Cisco UCS B200 M3 blade servers. • Infrastructure network is on a separate 1GE network.

• Network redundancy is built in by providing two switches, two storage controllers and redundant connectivity for data, storage, and infrastructure networking. This design does not recommend or require any specific layout of infrastructure network. The Virtual Machine Manager server and AD/DNS/DHCP virtual machines are hosted on the infrastructure network.

However, the design does require that certain VLANs be accessible from the infrastructure network to reach the servers. Hyper-V 2012 R2 is used as the hypervisor on each server and is installed on the Fibre Channel SAN.

The defined load is 60 virtual machines per Cisco UCS B200 M3 server blade.

Memory Configuration Guidelines

This section provides guidelines for allocating memory to the virtual machines.

The guidelines outlined here take into account Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V Memory Management Concepts

Microsoft Hyper-V has a number of advanced features to maximize performance and overall resource utilization. The most important features relate to memory management. This section describes some of these features and the items to consider when using them in the VSPEX environment.

Dynamic Memory

Dynamic Memory was introduced in Windows Server 2008 R2 SP1 to increase physical memory efficiency by treating memory as a shared resource and dynamically allocating it to virtual machines. The amount of memory used by each virtual machine is adjustable at any time. Dynamic Memory reclaims unused memory from idle virtual machines, which allows more virtual machines to run at any given time.

In Windows Server 2012, Dynamic Memory enables administrators to dynamically increase the maximum memory available to virtual machines. Dynamic memory pools all the physical memory available on a physical host and dynamically distributes it to virtual machines running on that host as the virtual machines need it. As workloads change, virtual machines will be able to dynamically ask for memory if it is needed or dynamically release memory if it is no longer needed without service interruptions. Dynamic Memory requires that a virtual machine have a minimum and maximum size of virtual memory assigned.

The virtual machine will never have less memory assigned to it than what is specified by the minimum and never ask for more than what is specified as the maximum. When a virtual machine is initialized, it is given the minimum amount specified. As processes are loaded, they create a demand for a specific amount of memory. If the virtual machine does not yet have enough physical memory assigned to it based on its demand, the hypervisor and SLAT will allocate more physical memory to ensure optimal performance. If the virtual machine is no longer using memory, it uses a ballooning technique to free up unused physical memory to return to Hyper-V to be allocated to other virtual machines that need the memory. One of the things that often happens is that when a machine, physical or virtual, is first starting, it may require more physical memory than it would require while it is running its normal tasks. This happens due to the many startup processes that are run to get the system running but then stop once they have performed their function.

Therefore, Hyper-V also has a startup RAM setting that can be set larger than the minimum, thereby ensuring more memory at startup for a faster startup time. After the virtual machine has started and has its normal processes running, the ballooning technology will free any excess memory for use by other virtual machines. In addition to the startup, minimum, and maximum settings offered by Hyper-V, it also allows two other settings to optimize memory usage. One setting is to specify the percentage of memory that Hyper-V should try to reserve as a buffer for the virtual machine. Then, when the virtual machine has a demand for more physical memory, it can draw from this buffer instead of going through the more intensive route of asking for new physical memory. This ensures a quicker response to memory demands within the virtual machine. The second setting is a memory weight that specifies how to prioritize memory demands for one virtual machine in relationship to the demands of other virtual machines. This allows high-priority virtual machines to have their memory demands satisfied before lower-priority virtual machines.
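These startup, minimum, maximum, buffer, and weight settings map directly onto parameters of the Hyper-V PowerShell module. A minimal sketch follows; the VM name and the specific values are illustrative assumptions only.

# Minimal sketch (assumed VM name and sizes): enable Dynamic Memory with
# explicit startup, minimum, and maximum values, a 20% buffer, and a weight.
Set-VMMemory -VMName "RefVM01" -DynamicMemoryEnabled $true -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 4GB -Buffer 20 -Priority 80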

Smart Paging

Even with Dynamic Memory, Hyper-V allows more virtual machines than the available physical memory can support. In most cases, there is a memory gap between minimum memory and startup memory.

Smart Paging is a memory management technique that uses disk resources as a temporary memory replacement. It swaps out less-used memory to disk storage and swaps it back in when needed. Performance degradation is a potential drawback of Smart Paging. Hyper-V continues to use guest paging when host memory is oversubscribed because it is more efficient than Smart Paging.

Non-Uniform Memory Access

Non-Uniform Memory Access (NUMA) is a multi-node computer technology that enables a CPU to access remote-node memory. This type of memory access degrades performance, so Windows Server 2012 employs a process known as processor affinity, which pins threads to a single CPU to avoid remote-node memory access. In previous versions of Windows, this feature was available only to the host.

Windows Server 2012 extends this functionality to the virtual machines, which provides improved performance in symmetrical multiprocessor (SMP) environments.

Allocating Memory to Virtual Machines

Memory sizing for a virtual machine in VSPEX architectures is based on many factors.

With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments, as discussed later in this paper. Table 6 outlines the resources used by a single virtual machine.

Table 6 Resources for a Single Virtual Machine

Characteristic                               Value
Virtual processors per VM (vCPU)             1
RAM per VM                                   2 GB
Available storage capacity per VM            100 GB
I/O operations per second (IOPS) per VM      25
I/O pattern                                  Random
I/O read/write ratio                         2:1

The following are some recommended best practices for memory allocation:
• Account for memory overhead - Virtual machines require memory beyond the amount allocated, and this memory overhead is per virtual machine.

Memory overhead includes space reserved for integration services and other virtualization-related processes. The amount of overhead is somewhat trivial, but it still needs to be factored in. VMs with 1 GB or less of RAM only use about 32 MB of memory for virtualization-related overhead. You should add 8 MB for every gigabyte of additional RAM. For example, a VM with 2 GB of RAM would use 40 MB (32 MB plus 8 MB) of memory for virtualization-related overhead.

Likewise, a VM with 4 GB of memory would have 56 MB of memory overhead. Each running virtual machine also has an associated virtual machine worker process to coordinate management tasks for the virtual machine.

This process uses a little less than 7 MB of memory. This memory and process overhead is in addition to the memory allocated to the virtual machine and must be available on the Hyper-V host.
• 'Right-size' memory allocations - Over-allocating memory to virtual machines can waste memory unnecessarily, and it can also increase the amount of memory overhead required to run the virtual machine, thus reducing the overall memory available for other virtual machines. Fine-tuning the memory for a virtual machine is done easily and quickly by adjusting the virtual machine properties. Using Dynamic Memory helps ease the administration of this right-sizing.
• Do not overcommit - As the term 'overcommit' implies, this is trying to use more than is available. Just as you cannot pump 10 gigabits of data through a 1 Gbps network in one second, you cannot use more memory than what is physically available.

Again, Dynamic Memory helps ease the administration of memory on a physical system that is running close to its memory capacity.
• Monitor usage - Hyper-V provides performance monitoring statistics for resources used by virtual machines. These statistics can be monitored from the host environment without the need to go into every virtual machine individually. Perfmon, a performance monitoring utility that is part of the operating system, provides these statistics in counters prefixed with the string 'Hyper-V'. For more information, see Microsoft's documentation on monitoring Hyper-V performance.
In addition to accounting for the memory used by the virtual machines, you should also allow the host at least 2 GB for the parent partition. This partition includes processes for monitoring and managing the environment and features such as the built-in failover clustering capability. Additionally, control information for the running virtual machines is also stored in this memory.
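The rules of thumb above can be combined into a rough host-memory estimate. The following sketch simply encodes the figures from this section (32 MB base overhead, 8 MB per additional GB of VM RAM, roughly 7 MB per worker process, and a 2 GB parent-partition reserve); it is an approximation, not a Microsoft-published formula.

# Rough per-VM overhead per the rules of thumb in this section.
function Get-EstimatedVmOverheadMB {
    param([int]$VmRamGB)
    return 32 + (8 * [math]::Max($VmRamGB - 1, 0)) + 7
}

# Example: 60 reference VMs with 2 GB RAM each, plus a 2 GB parent partition.
$vmCount = 60; $vmRamGB = 2
$perVmMB = ($vmRamGB * 1024) + (Get-EstimatedVmOverheadMB -VmRamGB $vmRamGB)
$totalGB = ($vmCount * $perVmMB) / 1024 + 2
"{0:N1} GB of host memory estimated for {1} VMs" -f $totalGB, $vmCount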

Storage Guidelines

The VSPEX architecture for Microsoft Hyper-V for up to 300 VMs uses Fibre Channel to access the storage arrays. The Hyper-V hosts boot from SAN storage, ensuring a stateless configuration for the individual Hyper-V hosts. If one of the Cisco UCS B200 M3 servers becomes unavailable for any reason, the service profile defining that server's configuration can be associated with another Cisco UCS B200 M3 server, which can then boot from the same operating system image on the SAN without any reconfiguration to bring it into service. The shared storage for storing VMs and their data uses the same storage array and protocol, minimizing the management overhead associated with managing VM storage.

VMs can optionally access Fibre Channel LUNs directly using virtual HBAs in the VMs. Highly available failover clusters built with VMs can use either the virtual HBAs or simply shared virtual hard disks (VHDX) stored on the array.

Virtual Server Configuration

Storage for Hyper-V can be categorized into three layers of storage technology:
• The storage array is the bottom layer, consisting of physical disk spindles. These spindles are aggregated into RAID sets, and LUNs are then defined on the RAID sets to present to the Hyper-V hosts.
• Storage array LUNs presented to the Hyper-V hosts are used for two purposes:

– Boot volumes - the operating system image used for booting the Hyper-V host
– Cluster Shared Volumes - shared storage that is read/write accessible by all Hyper-V hosts configured in a Microsoft Failover Cluster
• Virtual disk files (VHDX) are created on the Cluster Shared Volumes and are used for multiple purposes:
– Boot volumes - the guest operating system image used for booting a virtual machine
– Data volumes - data volumes required by applications
– Shared storage - shared volumes used by virtual machine guest clusters
Figure 11 illustrates these layers.

Figure 11 Hyper-V Storage Virtualization Stack

Storage Protocol Capabilities

The EMC VNX5400 provides Hyper-V and storage administrators with the flexibility to use the storage protocol that meets the requirements or standards of the business. This can be a single protocol datacenter-wide or multiple protocols for tiered scenarios. The EMC VNX5400 can support the Fibre Channel, FCoE, iSCSI, and SMB protocols. The Cisco solution for EMC VSPEX with Microsoft Hyper-V recommends a single protocol, Fibre Channel, throughout in order to simplify the design.
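The virtual disk layer described above is typically provisioned with the Hyper-V PowerShell module. A minimal sketch follows, assuming a Cluster Shared Volume mounted under the default C:\ClusterStorage path; the folder, file name, size, and VM name are illustrative only.

# Minimal sketch (assumed paths and names): create a dynamically expanding VHDX
# on a Cluster Shared Volume and attach it to an existing virtual machine.
$csvPath = "C:\ClusterStorage\Volume1\RefVM01"
New-VHD -Path "$csvPath\RefVM01-data.vhdx" -SizeBytes 100GB -Dynamic
Add-VMHardDiskDrive -VMName "RefVM01" -Path "$csvPath\RefVM01-data.vhdx"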

Storage Best Practices

It is recommended that storage administrators become familiar with Microsoft's published suggestions for performance tuning.
• Multi-path—having a redundant set of paths to the EMC VNX is critical to protecting the availability of the environment as well as ensuring the best performance. Microsoft provides built-in multi-path I/O support as part of its operating system.

EMC builds on top of this built-in capability to provide added functions with its PowerPath software.
• Partition alignment—in the past, it was often recommended to manually align partitions on disk volumes to ensure optimal performance.

Since Windows Server 2008, Microsoft has taken the guess-work out of partition alignment. Microsoft Windows Server 2012 and later natively support the 4K-sector disks that are commonly available today and automatically align partitions for optimal performance.
• Shared storage—Cluster Shared Volumes provide a high level of availability to the virtual machine environment.

VHDX files (virtual hard disks) can be accessed from any node of the cluster (up to 64 nodes) with full read/write capability. If a node loses its physical connection to a CSV, it can still access the VHDX files via the network, ensuring uninterrupted service for the VMs owning those VHDX files.

• Calculate total VM storage requirements—each virtual machine may require more space than the total of its VHDX files. One of the settings for a virtual machine is the automatic stop action; that is, the action to be performed if the host machine is gracefully shut down.

One of the automatic stop actions often used is to save the virtual machine state. In order to ensure that enough space is available on disk when an automatic stop action is initiated, Hyper-V will create a file on disk with enough space to save the memory contents of the virtual machine. For example, assume that you have a virtual machine with 8 GB of RAM allocated, a 40 GB VHDX system volume, and a 50 GB VHDX data volume. If the automatic stop action for that virtual machine is to save the virtual machine state (the default setting), you will need to ensure that you have at least 98 GB of disk space available for this VM.
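The stop-action setting and the space arithmetic above can be expressed directly in PowerShell. This is a sketch with an assumed VM name; the Save action shown is the Hyper-V default.

# Sketch (assumed VM name): the default automatic stop action saves VM state,
# which reserves on-disk space roughly equal to the VM's RAM.
Set-VM -Name "RefVM01" -AutomaticStopAction Save

# Space estimate from the example above: 8 GB RAM + 40 GB + 50 GB VHDX = 98 GB.
$ramGB = 8; $vhdxGB = 40 + 50
"{0} GB minimum disk space for this VM" -f ($ramGB + $vhdxGB)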

• Understand I/O requirements—under-provisioned storage can significantly slow responsiveness and performance for applications. In a multi-tier application, you can expect each tier of the application to have different I/O requirements. As a general recommendation, pay close attention to the number of virtual machine disk files hosted on a single CSV. Over-subscription of the I/O resources can go unnoticed at first and slowly begin to degrade performance if not monitored proactively.

Storage Layout

Figure 12 shows the physical disk layout.

Disk provisioning on the VNX5400 storage array is simplified through the use of wizards, so that administrators do not choose which disks belong to a given storage pool. The wizard may choose any available disk of the proper type, regardless of where the disk physically resides in the array. Figure 12 Storage Architecture for up to 300 Virtual Machines.

Note: System drives are specifically excluded from the pools and are not used for additional storage.
• Six 200 GB flash drives are configured in the array for FAST VP.
• A single 200 GB flash drive is configured as a hot spare.
• Three pools are configured to map to the three SAS disk pools.
The VNX family storage array is designed for five-nines availability (99.999% uptime) by using redundant components throughout the array.

All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array provides protection against data loss due to individual disk failures, and the available hot spare drives can be dynamically allocated to replace a failing disk.

Storage Building Blocks

VSPEX uses a building block approach to reduce complexity. A building block is a set of disk spindles that can support a certain number of virtual servers in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the private cloud environment. Each building block storage pool, regardless of size, contains two flash drives with FAST VP storage tiering to enhance metadata operations and performance. VSPEX solutions have been engineered to provide a variety of sizing configurations which afford flexibility when designing the solution. Customers can start out by deploying smaller configurations and scale up as their needs grow. At the same time, customers can avoid over-purchasing by choosing a configuration that closely meets their needs.

To accomplish this, VSPEX solutions can be deployed using one or both of the scale-points below to obtain the ideal configuration while guaranteeing a given performance level.

Building Block for 13 Virtual Servers

The first building block can contain up to 13 virtual servers. It has two flash drives and five SAS drives in a storage pool.

Figure 13 13 Virtual Server Building Block

This is the smallest building block qualified for the VSPEX architecture. This building block can be expanded by adding five SAS drives and allowing the pool to restripe to add support for 13 more virtual servers.

Building Block for 125 Virtual Servers

The second building block can contain up to 125 virtual servers. It contains two flash drives and 45 SAS drives. The preceding sections outline an approach to grow from 13 virtual machines in a pool to 125 virtual machines in a pool. However, after reaching 125 virtual machines in a pool, do not expand the pool to 138; create a new pool and start the scaling sequence again.

Figure 14 125 Virtual Server Building Block

Networking Guidelines

The following are some recommended best practices for networking with Hyper-V:
• Fabric failover—always use the fabric failover feature of Cisco UCS VIC adapters for high availability of network access.

• Separate virtual machine and host traffic—keep virtual machine and host traffic separate. This can be accomplished physically, using separate virtual switches for VM networks defined on separate physical NICs, or virtually, using VLAN segmentation.

Recommended NICs

The recommended networking for a Microsoft Hyper-V solution has the following NICs:
• Host (or infrastructure) management—used by the Hyper-V host for management functions. If using the Nexus 1000V (optional), it is recommended that its Virtual Supervisor Module network run on this management network.
• Live Migration—used by the Hyper-V host for live migrating virtual machines. This is not a required network, but it is highly recommended to separate this traffic.
• Cluster Shared Volume—used by the Hyper-V host for managing the cluster shared volumes. This is not a required network, but it is highly recommended.

– Normal operations—this network sees very little traffic. Each node in the cluster is directly accessing the CSV to read/write to the VHDX files in use by the virtual machines running on that node. The only traffic that passes over this network is what is known as 'metadata updates'. Metadata updates comprise file and directory creation/deletion/extension at the Hyper-V level. Most I/O is directly to the contents of the VHDX files in use by the VMs, and none of that is considered a metadata update. Metadata updates are handled by the node in the cluster that owns the LUN presented as a CSV. – Redirected mode—should a node of the cluster lose physical connectivity to a CSV, the CSV is set in redirected mode for that host.

This means that read/write operations that would normally go directly to the CSV from the VM are redirected over the network designated as the CSV network to the node in the cluster that owns the LUN. This mode also may be initiated by backup programs. • Virtual machine access—this network should be a separate network for accessing the resources of the virtual machines running on Hyper-V host. It is recommended that this network be defined as not available for use by the host.

There should be a minimum of one virtual machine access network. Depending on your needs, you may require more.

Optional NICs

Depending on your business needs, you might require additional networks as follows:
• Virtual Ethernet Module—when using the Nexus 1000V (optional), it is necessary to define a network for each subnet you want managed by the Nexus 1000V.
• iSCSI—if using iSCSI storage, it is recommended to deploy two networks on different subnets.
• SMB—if using SMB storage, it is recommended to deploy two networks on different subnets.

Quality of Service

It is recommended to define a quality of service for the Live Migration network to ensure optimal performance. If iSCSI and/or SMB networks are defined, it is recommended to define a quality of service for their use. In general, the QoS defined for storage will be different from the QoS defined for live migration.
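On the Hyper-V side, one common way to express these QoS recommendations is with relative bandwidth weights on a converged virtual switch. The sketch below uses assumed adapter, switch, and weight values chosen only for illustration; in this design, QoS can also be enforced in the Cisco UCS fabric system classes rather than on the host.

# Minimal sketch (assumed names and weights): a virtual switch using relative
# bandwidth weights, with host virtual NICs for live migration and CSV traffic.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Ethernet 2" -MinimumBandwidthMode Weight

Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40

Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 10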

Solution for up to 300 Hyper-V Virtual Machines

The key aspects of the Cisco UCS with EMC VNX5400 solution for up to 300 Hyper-V virtual machines are as follows:
• The solution is built with redundancy at every level—compute, networking, and storage.
• Six Cisco UCS B200 M3 servers are configured into a failover cluster. An average of 60 VMs per server allows for a single server to be available as a spare for failure or maintenance.
• Each Cisco UCS B200 M3 server is configured with 192 GB of RAM.

This is slightly more RAM than is needed for the 60 reference VMs, ensuring that slight changes in the customer's configuration can be accommodated. • Windows Server Hyper-V 2012 R2 is booted from SAN disk. FCoE is used from the servers to the fabric interconnects.

Native FC is used between the Cisco Nexus 5548UP switches and the VNX5400.
• SAN boot and Cisco UCS Manager service profiles provide a stateless computing architecture. A Cisco UCS B200 M3 server can be replaced with very little, if any, downtime.
• The entire solution is built using a building block approach so the configuration can easily grow as needs increase beyond the initial 300 virtual machines.
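The six-node failover cluster itself is built with the in-box FailoverClusters PowerShell module. A minimal sketch follows; the node names, cluster name, IP address, and clustered disk name are illustrative assumptions, and cluster validation should always be run and reviewed first.

# Minimal sketch (assumed names and IP): validate the candidate nodes, create
# the cluster, then expose a clustered disk as a Cluster Shared Volume.
$nodes = "HVHOST1","HVHOST2","HVHOST3","HVHOST4","HVHOST5","HVHOST6"
Test-Cluster -Node $nodes
New-Cluster -Name "VSPEXHVCL01" -Node $nodes -StaticAddress 10.0.0.50
Add-ClusterSharedVolume -Name "Cluster Disk 2"    # assumed clustered disk name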

Stateless Computing

Cisco UCS Manager (UCSM) provides the concept of a service profile for a server running on physical hardware. A service profile is a logical entity associated with a physical server.

Among other things, a service profile includes various identities of the server or server components, such as:
• BIOS UUID
• MAC addresses of the virtual NICs of the server
• Node WWN (WWNN) for Fibre Channel SAN access
• Port WWNs (WWPN) of the virtual HBAs of the server
• IQN ID, if the iSCSI protocol is used for storage access
• Management IP address for KVM access
All of these identities can be assigned to any physical server managed by Cisco UCS Manager. All other configuration of the service profile is based on templates, pools, and policies, providing immense flexibility to the administrator. This includes the firmware and BIOS versions required by the server. These concepts enable Cisco UCS Manager to provide stateless computing across the entire Cisco UCS Manager-managed compute hardware. If remote storage is used to boot the operating system of the server (such as SAN boot, PXE boot, or iSCSI boot), a given service profile can be associated with any physical server hardware, and the downtime for migrating such a server can be reduced to a few minutes.
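Those identities can also be inspected programmatically through the Cisco UCS PowerTool PowerShell module referenced earlier in this document. The following is a hedged sketch, assuming PowerTool is installed on a management host; the module name varies by PowerTool release, and the fabric interconnect address is illustrative.

# Hedged sketch (assumed management IP): connect to Cisco UCS Manager and list
# service profiles together with the physical blades they are associated to.
Import-Module Cisco.UCSManager            # module name depends on PowerTool release
Connect-Ucs -Name "10.0.0.10" -Credential (Get-Credential)

Get-UcsServiceProfile -Type instance | Select-Object Name, AssocState, PnDn
Get-UcsBlade | Select-Object Dn, Model, Serial, OperState

Disconnect-Ucs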

The solution presented in this CVD makes use of identity pools and SAN storage to simplify server procurement and provide stateless computing capability.

Sizing Guidelines

In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Defining the Reference Workload

To simplify the discussion, a representative customer reference workload is defined as a virtual machine with specific characteristics. By comparing the actual customer usage to this reference workload, one can extrapolate which reference architecture to choose. For the VSPEX solutions, the reference workload was defined as a single virtual machine.

This virtual machine has the following characteristics (Table 8).

Table 8 Reference Virtual Machine Workload

Characteristic                               Value
Virtual machine operating system             Windows Server 2012 R2
Virtual processors per VM (vCPU)             1
RAM per VM                                   2 GB
Available storage capacity per VM            100 GB
I/O operations per second (IOPS) per VM      25
I/O pattern                                  Random
I/O read/write ratio                         2:1
Logical CPU to virtual CPU ratio             Up to 4:1

This specification for a virtual machine is not intended to represent any specific application.

Rather, it represents a single common point of reference to measure other virtual machines.

Applying the Reference Workload

When considering an existing server to move into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system. The reference architecture creates a pool of resources sufficient to host a target number of reference virtual machines, as described above. It is entirely probable that customer virtual machines will not exactly match the specifications above. In that case, you can say that a single specific customer virtual machine is the equivalent of some number of reference virtual machines and assume that that number of reference virtual machines has been consumed from the pool.

You can continue to provision virtual machines from the pool of resources until it is exhausted. Consider these examples: Example 1 - Customer Built Application A small custom-built application server will move into this virtual infrastructure. The physical hardware supporting the application is not being fully utilized at present. A careful analysis of the existing application reveals the application uses only one processor and needs 3 GB of memory to run efficiently.

The I/O workload ranges from 4 IOPS at idle to 15 IOPS when busy. The entire application uses only about 30 GB of local hard drive storage. The following resources are needed from the resource pool to virtualize this application: • CPU resources for 1 VM • Memory resources for 2 VMs • Storage capacity for 1 VM • IOPS for 1 VM In this example, a single virtual machine uses the resources of two reference virtual machines. When this VM is deployed, the solution's new capability would be 298 VMs. Example 2 - Point of Sale System The database server for a customer's point-of-sale system will move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle.

The following resources are needed from the resource pool to virtualize this application: • CPUs of 4 reference VMs • Memory of 8 reference VMs • Storage of 2 reference VMs • IOPS of 8 reference VMs In this example, the one virtual machine uses the resources of eight reference virtual machines. Once this VM is deployed, the solution's new capability would be 292 VMs. Example 3 - Web Server The customer's web server will move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle. The following resources are needed from the resource pool to virtualize this application: • CPUs of 2 reference VMs • Memory of 4 reference VMs • Storage of 1 reference VM • IOPS of 2 reference VMs In this example the virtual machine would use the resources of four reference virtual machines.

Once this VM is deployed, the solution's new capability would be 296 VMs. Example 4 - Decision Support Database The database server for a customer's decision support system will move into this virtual infrastructure. It is currently running on a physical system with 10 CPUs and 48 GB of memory.

It uses 5 TB of storage and generates 700 IOPS during an average busy cycle. The following resources are needed from the resource pool to virtualize this application: • CPUs of ten reference VMs • Memory of 24 reference VMs • Storage of 50 reference VMs • IOPS of 28 reference VMs In this example the one virtual machine uses the resources of fifty reference virtual machines.

When this VM is deployed, the solution's new capability would be 250 VMs. Summary of Examples The four examples show the flexibility of the resource pool model. In all four cases the workloads simply reduce the number of available resources in the pool. If all four examples were implemented on the same virtual infrastructure, with an initial capacity of up to 300 virtual machines, they would leave a capacity for 236 reference virtual machines in the resource pool. In more advanced cases, there may be trade-offs between memory and I/O or other relationships where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex and are out of the scope of this document.
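The arithmetic used in the four examples can be summarized in a small PowerShell helper: the number of reference virtual machines a workload consumes is the largest of its four per-resource ratios, each rounded up. This is only an illustrative sketch (the function name is not part of the VSPEX tooling, and 5 TB is treated as 5,000 GB to match Example 4).
function Get-ReferenceVmCount {
    param([int]$vCpu, [double]$RamGB, [double]$StorageGB, [double]$Iops)
    # Reference VM definition from Table 8: 1 vCPU, 2 GB RAM, 100 GB storage, 25 IOPS
    $ratios = @(
        [math]::Ceiling($vCpu / 1),
        [math]::Ceiling($RamGB / 2),
        [math]::Ceiling($StorageGB / 100),
        [math]::Ceiling($Iops / 25)
    )
    ($ratios | Measure-Object -Maximum).Maximum
}
# Example 4 (decision support database): returns 50, leaving capacity for 250 reference VMs
Get-ReferenceVmCount -vCpu 10 -RamGB 48 -StorageGB 5000 -Iops 700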

However, when a change in the resource balance is observed and the new level of requirements is known, these virtual machines can be added to the infrastructure using the method described in the above examples. If the customer does not have a thorough understanding of the resource needs of their particular environment, Microsoft provides a free tool, the Microsoft Assessment and Planning Toolkit, that can be run against the customer environment to capture the actual characteristics. This tool can be downloaded from Microsoft. Configuration Guidelines This section provides the procedure to deploy the Cisco solution for EMC VSPEX Hyper-V architecture.

Follow these steps to configure the Cisco solution for EMC VSPEX Hyper-V architectures: • Pre-deployment tasks • Physical setup • Cable connectivity • Configure Cisco Nexus switches • Configure Cisco Unified Computing System using Cisco UCS Manager • Prepare and configure storage array • Install Initial Microsoft Windows Server 2012 R2 • Install Additional Microsoft Windows Server 2012 R2 and Failover Cluster • Test the installation These steps are described in detail in the following sections. Included in this document are sample PowerShell scripts that can be used to more quickly build out this VSPEX environment. Though the scripts have been tested, no warranty is implied or granted that they do not contain errors. The use of these scripts assumes a 'green field' environment in which nothing else has been previously installed.

In any case, each script should be examined before execution in the customer environment. Several of the scripts may have IP addresses hard coded in them that need to be changed to reflect the customer environment. It is assumed that a person familiar with Microsoft's PowerShell scripting language is available to review these scripts for the customer before execution in the customer environment. In particular, the UcsConfig.xml file contains many variables that should be reviewed with the customer to ensure they reflect the customer environment and naming conventions. Pre-deployment Tasks Pre-deployment tasks include procedures that do not directly relate to environment installation and configuration, but whose results will be needed at the time of installation.

Examples of pre-deployment tasks are collection of hostnames, IP addresses, VLAN IDs, license keys, installation media, and so on. These tasks should be performed before the customer visit to decrease the time required onsite. • Gather documents—Gather the vendor product installation documents.

These are used throughout the text of this document to provide detail on setup procedures and deployment best practices for the various components of the solution. • Gather tools—Gather the required and optional tools for the deployment. Use Table 9 to confirm that all equipment, software, and appropriate licenses are available before the deployment process. • Gather data—Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer Configuration Worksheets found later in this document for reference during the deployment process.

Table 9 Requisite Components. To complete software setup of the VNX array, it will be necessary to configure system connectivity including the creation of an Administrative user for the VNX array. The following worksheets (also found in the Installation documentation) list all required information, and can be used to facilitate the initial installation. VNX Worksheets With your network administrator, determine the IP addresses and network parameters you plan to use with the storage system, and record the information on the following worksheet. You must have this information to set up and initialize the system. The VNX5400 array is managed through a dedicated LAN port on the Control Station and each storage processor.

These ports must share a subnet with the host you use to initialize the system. After initialization, any host on the same network and with a supported browser can manage the system through the management ports. This information can be recorded in Table 10. Table 10 IPv4 Management Port Information (the default administrative account is sysadmin). It is also necessary at this time to install the NaviSecCli command line interface from a supported Windows client environment. The client should have network access to the VNX5400 array for both HTTP/HTTPS access and for remote NaviSecCli command execution. Installation media for the NaviSecCli utility, as well as ESI, are available by download.

The current version of the media should always be utilized. Installation of the utility is implemented through the typical application installation process for Windows-based systems. After array installation, it will also be possible to connect to the VNX5400 array via the Unisphere graphical user interface at the IP address assigned to either SP-A or SP-B, or the control station in the event that a Unified version of the VNX is being implemented. The following configuration also assumes that the array has been configured with: • DAE - BUS 0 / Enclosure 1: 45 drives • DAE - BUS 0 / Enclosure 2: 45 drives • DAE - BUS 1 / Enclosure 0: 20 drives In the event that the physical configuration of the system differs with regard to the DAE placements, the Bus/Enclosure naming used subsequently will need to be adjusted accordingly.

Creating Storage Pools—Unisphere 1. Log into Unisphere and browse to the Storage Pools page: Storage > Storage Configuration > Storage Pools. 2. Click Create.

Under Disks, manually select 45 600 GB SAS drives from 3 DAEs and 2 EFDs. Name the pool in Storage Pool Name. This name is used in the sample PowerShell scripts. Creating Support for Clone Private LUNs In the previous step storage pools were defined on the VNX array based on disks within the chassis. Additional RAID Group based LUNs are required to support clone private LUNs in the system. As part of the automation of Virtual Machine deployments, SnapView Clones can be utilized both through scripting and also through the SMI-S integration of System Center Virtual Machine Manager. The use of clones requires the existence of at least one RAID group on the VNX5400 for storage of the clone private LUNs used in the cloning process.

To help ensure proper performance for the number of virtual machines, all available free disks are configured into storage pools. Therefore, it will be necessary to create a RAID group using the system drives and configure just the clone private LUNs, and no others, on this RAID group. Navigate to Storage >Storage Configuration >Storage Pools.

Select the RAID Groups tab. Click Create. Record WWNN and WWPN Values The Fibre Channel protocol uses Worldwide Node Names (WWNN) and Worldwide Port Names (WWPN) as addresses to uniquely identify endpoints in the Fibre Channel communication. These values must be recorded to be used in other steps in this configuration. For example, the WWPN values need to be entered into the UcsConfig.xml file used to configure UCS.

Obtain the WWPN information from the EMC VNX5400 by using the NaviSecCli utility installed on your Windows management system, and record it in the WWNN/WWPN worksheet found in the Customer Configuration Worksheet section. Following is an example of obtaining the WWPNs from the connections to the VNX5400. It may be necessary to provide additional parameters for login, password, and scope options. The example below returns configuration information for all ports configured within the array. The WWPN for any given Fibre Channel port is derived from the last half of the SP UID entry; the first half of the SP UID is the WWNN entry. As an example, the WWNN of the array is 50:06:01:60:88:60:06:A1 and the WWPN of SP-A Port 4 is 50:06:01:64:08:60:06:A1.
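A representative command is shown below; the management IP address is a placeholder, and depending on how security is configured on the array, the -User, -Password, and -Scope options may be required or may be omitted.
# Placeholder IP and credentials; run from the Windows management system where NaviSecCli is installed
naviseccli -h 192.168.10.40 -User sysadmin -Password <password> -Scope 0 port -list
# The SP UID field in the output carries the WWNN (first half) and the WWPN (second half) for each port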

Cable Connectivity The Customer Configuration Worksheet section of this document contains tables showing the cabling that was used to validate this configuration for up to 300 virtual machines. If a different configuration is implemented, update the worksheets for reference documentation.

Configure Cisco Nexus Switches The following sections provide a detailed procedure for configuring the Cisco Nexus 5548 switches for use in this solution. Follow these steps precisely because failure to do so could result in an improper configuration. Make use of information captured in the Customer Configuration Worksheets to complete these steps. In the Sample PowerShell Scripts section is a sample script, UcsConfig.ps1, for configuring UCS to reflect the cabling configuration shown in the Customer Configuration Worksheets. UcsConfig.ps1 reads the information from the UcsConfig.xml file, also contained in the Sample PowerShell Scripts section.

If changes are made to cabling configuration, the XML file will need to be edited to reflect those changes. Set Up Initial Cisco Nexus 5548 Switch These steps provide details for the initial Cisco Nexus 5548 Switch setup. Cisco Nexus 5548 A On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start.

Enter yes to enforce secure password standards. Enter the password for the admin user. Enter the password a second time to commit the password. Enter yes to enter the basic configuration dialog. Create another login account (yes/no) [n]: Enter. Configure read-only SNMP community string (yes/no) [n]: Enter. Configure read-write SNMP community string (yes/no) [n]: Enter.

Enter the switch name: Enter. Continue with out-of-band (mgmt0) management configuration? (yes/no) [y]: Enter. Mgmt0 IPv4 address: Enter. Mgmt0 IPv4 netmask: Enter.

Configure the default gateway? (yes/no) [y]: Enter. IPv4 address of the default gateway: Enter. Enable the telnet service? (yes/no) [n]: Enter.

Enable the ssh service? (yes/no) [y]: Enter. Type of ssh key you would like to generate (dsa/rsa):rsa. Number of key bits:1024 Enter.

Configure the ntp server? (yes/no) [y]: Enter.

NTP server IPv4 address: Enter. Enter basic FC configurations (yes/no) [n]: Enter. Would you like to edit the configuration? (yes/no) [n]: Enter. Be sure to review the configuration summary before enabling it. Use this configuration and save it?

(yes/no) [y]: Enter. Configuration may be continued from the console or by using SSH. To use SSH, connect to the mgmt0 address of Nexus A. Log in as user admin with the password previously entered. Cisco Nexus 5548 B On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start. Enter yes to enforce secure password standards.

Enter the password for the admin user. Enter the password a second time to commit the password. Enter yes to enter the basic configuration dialog.

Create another login account (yes/no) [n]: Enter. Configure read-only SNMP community string (yes/no) [n]: Enter.

Configure read-write SNMP community string (yes/no) [n]: Enter. Enter the switch name: Enter. Continue with out-of-band (mgmt0) management configuration? (yes/no) [y]: Enter.

Mgmt0 IPv4 address: Enter. Mgmt0 IPv4 netmask: Enter. Configure the default gateway? (yes/no) [y]: Enter. IPv4 address of the default gateway: Enter. Enable the telnet service?

(yes/no) [n]: Enter. Enable the ssh service? (yes/no) [y]: Enter. Type of ssh key you would like to generate (dsa/rsa): rsa. Number of key bits: 1024 Enter. Configure the ntp server?

(yes/no) [y]: Enter. NTP server IPv4 address: Enter. Enter basic FC configurations (yes/no) [n]: Enter. Would you like to edit the configuration?

(yes/no) [n]: Enter. Be sure to review the configuration summary before enabling it. Use this configuration and save it? (yes/no) [y]: Enter. Configuration may be continued from the console or by using SSH. To use SSH, connect to the mgmt0 address of Nexus B. Log in as user admin with the password previously entered.

Enable Appropriate Cisco Nexus Features These commands enable the appropriate Cisco Nexus features. Nexus A and Nexus B. The Nexus switch will reboot. This will take several minutes.

Create Necessary VLANs These steps provide details for creating the necessary VLANs. Note that the SMB (or iSCSI) VLANs are not created on the Nexus switches. The SMB (or iSCSI) connections are made directly from the Fabric Interconnects to the EMC VNX array.

The Nexus switches do not see this SMB (or iSCSI)-related traffic. Nexus A and Nexus B Following the switch reloads, log in with user admin and the password previously entered. These commands define the minimum VLANs used in this configuration. Save the configuration with copy run start. Link Into Existing Network Infrastructure Depending on the available network infrastructure, several methods and features can be used to uplink the private cloud environment. If an existing Cisco Nexus environment is present, Cisco recommends using vPCs to uplink the Cisco Nexus 5548 switches included in the private cloud environment into the infrastructure. The previously described procedures can be used to create an uplink vPC to the existing environment.

Initial Configuration of Cisco Unified Computing System The following information provides a detailed procedure for the initial configuration of the Cisco UCS Manager. These steps should be followed precisely because a failure to do so could result in an improper configuration. You will need information captured on the customer configuration worksheets.

Cisco UCS 6248 A 1. Connect to the console port on the first Cisco UCS 6248 fabric interconnect. At the prompt to enter the configuration method, enter console to continue. If asked to either do a new setup or restore from backup, enter setup to continue. Enter y to continue to set up a new fabric interconnect. Enter y to enforce strong passwords. Enter the password for the admin user.

Enter the same password again to confirm the password for the admin user. When asked if this fabric interconnect is part of a cluster, answer y to continue. Enter A for the switch fabric. Enter the cluster name for the system name. Enter the Mgmt0 IPv4 address. Enter the Mgmt0 IPv4 netmask. Enter the IPv4 address of the default gateway.

Enter the cluster IPv4 address. To configure DNS, answer y. Enter the DNS IPv4 address.

Answer y to set up the default domain name. Enter the default domain name.

Review the settings that were printed to the console, and if they are correct, answer yes to save the configuration. Wait for the login prompt to make sure the configuration has been saved. Cisco UCS 6248 B 1. Connect to the console port on the second Cisco UCS 6248 fabric interconnect. When prompted to enter the configuration method, enter console to continue.

The installer detects the presence of the partner fabric interconnect and adds this fabric interconnect to the cluster. Enter y to continue the installation. Enter the admin password for the first fabric interconnect. Enter the Mgmt0 IPv4 address. Answer yes to save the configuration.

Wait for the login prompt to confirm that the configuration has been saved. Logging Into Cisco UCS Manager These steps provide details for logging into the Cisco UCS environment: 1.

Open a Web browser and navigate to the Cisco UCS 6248 fabric interconnect cluster address. A warning displays about the security certificate. Click Continue. Create the VSPEX Environment with Cisco UCS PowerTool Contained in the Sample PowerShell Scripts of this document are the following two files: • UcsConfig.ps1—this script reads UcsConfig.xml, using the customer information entered in it to define the configuration of UCS cabling and various pools, policies, and templates.

• UcsConfig.xml—this file contains customer provided information for values and names to assign to pools, policies, and templates. When the Customer Configuration Worksheets are completed, the information should be transferred to the appropriate locations within this XML file.

Care should be used when editing this file to ensure that the defined XML structure is not altered. If altered, it will likely cause the UcsConfig.ps1 script to fail. It is recommended that you review and validate these files before running them. The script takes about 30 seconds to run and configures the Cisco Unified Computing System according to the VSPEX validated design. This automation can save a significant amount of time compared to manual configuration. Create the VSPEX Environment with Cisco UCS Manager The following steps detail the basics for configuring the VSPEX environment by using Cisco UCS Manager instead of the UcsConfig.ps1 PowerShell script. Some basic knowledge of Cisco UCS Manager is assumed.
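One simple way to review UcsConfig.xml before running the script is to load it with PowerShell's built-in XML support; a malformed edit fails immediately on the cast. A minimal sketch, assuming the file is in the current directory:
# Quick structural check of UcsConfig.xml before running UcsConfig.ps1
[xml]$ucsConfig = Get-Content -Path .\UcsConfig.xml -Raw
# List the top-level sections so they can be compared against the expected layout
$ucsConfig.DocumentElement.ChildNodes | Select-Object Name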

For example, when creating VLANs, it shows the basic procedure to create a VLAN, but it does not step through creating every VLAN. Synchronize Cisco UCS to NTP These steps provide details for synchronizing the Cisco UCS environment to the NTP server: 1. Select the Admin tab at the top of the left window.

Select All >Timezone Management. In the right pane, select the appropriate timezone in the Timezone drop-down menu. Click Add NTP Server. The ext-mgmt pool is the default management pool. By default, IP addresses are assigned to the physical servers as they are recognized.

It is also possible to create a separate pool and then assign it to Service Profiles (as the Management IP Address) when they are assigned to a physical server. This is important if you plan on using SMI-S for management of the servers, because the IP address then follows the service profile instead of the physical server. Edit the Chassis Discovery Policy These steps provide details for modifying the chassis discovery policy, as the base architecture includes two uplinks from each fabric extender installed in the Cisco UCS chassis: 1.

Navigate to the Equipment tab in the left pane. In the right pane, click the Policies tab.

Under Global Policies, change the Chassis Discovery Policy to 2-link. Select the Port Channel radio button for the Link Grouping Preference. Select the desired settings for the Rack Server Discovery Policy, Power Policy, and MAC Address Table Aging. Click Save Changes. Create a Scrub Policy Scrub policies define what is erased when a service profile is disassociated from a server.

If the disk field is set to Yes, when a service profile containing this scrub policy is disassociated from a server, all data on the server local drives is completely erased. If this field is set to No, the data on the local drives is preserved, including all local storage configuration. Select the Servers tab at the top left of the window. Go to Policies >root. Right-click Scrub Policies. Select Create Scrub Policy.

Create Maintenance Policy When a service profile is associated with a server, or when changes are made to a service profile that is already associated with a server, the server needs to be rebooted to complete the process. The Reboot Policy field of the maintenance policy determines when the reboot occurs for servers associated with any service profiles that include this maintenance policy. Select the Servers tab at the top left of the window. Go to Policies >root.

Right-click Maintenance Policies. Select Create Maintenance Policy. Note This policy is recommended for virtualization servers even if they do have local disks. Flexibility is a key component of virtualization, so it is best to have configurations as loosely tied to physical hardware as possible.

By not provisioning local disks and instead booting from SAN, you ensure that moving the profile to another server will not leave anything behind as it moves. Select the Servers tab on the left of the window. Go to Policies >root.

Right-click Local Disk Config Policies. Select Create Local Disk Configuration Policy. Enable Fabric Interconnect Port Definitions These steps provide details for enabling Fibre Channel, server, and uplinks ports: 1. Select the Equipment tab on the top left of the window.

Select Equipment >Fabric Interconnects >Fabric Interconnect A (primary) >Fixed Module. Expand the Unconfigured Ethernet Ports section. Select the ports that are connected to the Cisco UCS chassis (2 per chassis). Click Reconfigure, then select Configure as Server Port from the drop-down menu. A prompt displays asking if this is what you want to do. Click Yes, then OK to continue. Repeat for other ports, selecting the appropriate configuration from the Customer Configuration Worksheets.

Repeat for Fabric Interconnect B. Create vNIC Templates Right-click vNIC Templates. Select Create vNIC Template.

Enter as the vNIC template Name. Check Fabric B. Ensure the Enable Failover box is cleared. Under target, unselect the VM box.

Select Updating Template as the Template Type. Under VLANs, select.

Set Native VLAN. Under MTU, set to 9000. Under MAC Pool, select. For QoS Policy, select. Click OK to complete creating the vNIC template.

Repeat for each vNIC template. Create Service Profile Templates and Service Profiles Two Service Profile Templates can now be created. One should be created to use the Fabric A boot policy and the other should be created with the Fabric B boot policy. Otherwise, all the other pool, template, and policy selections should be the same. Use the Create Service Profile (expert) option and select the pools, templates, and policies just created. When the two Service Profile Templates have been created, create a Service Profile for each server. Assign half the service profiles to the first service profile template and the other half of the service profiles to the other service profile template.

Use the Create Service Profiles From Template option. Create EMC VNX5400 LUNs for VSPEX The VSPEX cloud environment implements a boot from SAN environment using the concept of a Master Boot LUN. The Master Boot LUN is a storage area that will be used to maintain an image of a Windows Server 2012 R2 image to be used as a Clone source.

This image should be configured as a base image to be used for subsequent installations, so all patching and custom configuration steps should be taken. For example, a desired configuration setting might be to ensure that all physical servers can be remotely managed.

When the image is configured according to customer policy, the Microsoft sysprep utility can be run against this image to prepare it for use as a Clone. The steps to configure this Master Boot LUN image are in the following section.

Clones created from the Master Boot LUN will be presented to the physical servers defined by Service Profiles in the UCS environment. This style of deployment allows Service Profiles to be fully transportable between different physical blades as the boot device is external to the chassis, and also allows for multiple Master Boot images to be implemented providing support for different operating system versions or configurations which may need to be implemented over time.

Management of the boot LUN requires special consideration: the LUN ID presented to the host must be set to 0 (zero). The ESI (EMC Storage Integrator) PowerShell commands do not allow manipulation of the LUN ID for devices presented to servers; they simply default to the sequential allocation of LUN IDs as implemented by the VNX array. As a result of this behavior, the boot LUN must be the first device that is mapped to the server (Cisco UCS service profile).

If this is incorrectly implemented, then the wrong target will be selected for Windows boot operations on server power-up. As described, the ESI PowerShell commands or Unisphere are utilized for provisioning of the LUNs required within the environment, and assume that the storage pool creation outlined in the previous section has been completed. For the procedure to set up a Master Boot LUN, a single LUN is created and is used to install a Windows Server 2012 R2 instance. This server instance will subsequently be processed with Windows sysprep and then removed from the server. All compute nodes will then use a clone of the sysprep image to be customized as individual server instances. Creation of all necessary LUNs within the Private Cloud environment can be executed with the PowerShell script ProcessStorageRequests.ps1 provided in Appendix B. The defined XML configuration file is read by the PowerShell script.

This XML configuration file contains five parameters. There are two classes that can be repeated multiple times: one class can be repeated to define multiple LUNs for a server, and the other class can be repeated to create multiple server records. For the purpose of defining and creating the Master Boot LUN, it is recommended to create a unique XML configuration file that defines only this specific device.

Later, the format of the XML configuration file can be followed for creating multiple LUNs. The five repeatable parameters are: • The name that will be assigned to the LUN that is created • The storage pool from which the LUN will be created • The size of the LUN (in GB) to be created • The name of the server that will be assigned the LUN, which must match the Service Profile name in UCS Manager, including case; this name is also used for management purposes on the VNX array • The management IP address of the server A sample XML file illustrating the content for creating the Master Boot LUN is shown below; it will need to be modified to reflect the customer environment. In addition to the five parameters listed above that can be repeated, there are two other parameters that are defined only once: the name of the VNX array and the IP address for accessing the Cisco UCS management console.
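The element names shown in the sketch below are placeholders chosen only for illustration; use the Storage_Luns.xml file in the Sample PowerShell Scripts section as the authoritative template for the real element names and structure.
<!-- Illustrative sketch only; element names are placeholders, not the actual schema -->
<StorageConfig>
  <ArrayName>VNX5400-01</ArrayName>
  <UcsManagerIP>192.168.1.10</UcsManagerIP>
  <Server>
    <ServerName>VSPEX-Host-01</ServerName>
    <ManagementIP>192.168.10.11</ManagementIP>
    <Lun>
      <LunName>Master_Boot</LunName>
      <StoragePool>Pool_0</StoragePool>
      <SizeGB>50</SizeGB>
    </Lun>
  </Server>
</StorageConfig>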

An example of the contents of a configuration is found in the Sample PowerShell Scripts section in a configuration file called Storage_Luns.xml. This configuration file is used by three different sample scripts. • PrepMasterBoot-AddViaWWPN.ps1 • ProcessStorageRequests.ps1 • PostClone_AddViaWWPN.ps1 As an alternative to using the provided PowerShell scripts, EMC Unisphere can be used to create the LUNs for the boot from SAN deployment. To create EMC VNX5400 LUNs for VSPEX, do the following: 1. From the Storage >LUNs menu, select Create and create the LUN. Select the RAID Group radio button.

Specify a User Capacity of 50 GB (or whatever your standard size is). Provide a meaningful Name for the master boot LUN. After creation of the required LUN, it is necessary to create a storage group containing the LUN and present the storage group to the Service Profile. The example PowerShell script found in the Sample PowerShell Scripts section, PrepMasterBoot_AddViaWWPN.ps1, utilizes both EMC Storage Integrator and the Cisco UCS PowerTool, and expects that both have been successfully installed. After presentation of the storage group to the WWPNs defined within the Service Profile, it will be possible to proceed with Windows Server installation.

An alternative to using ESI PowerShell would be to manually present storage using Unisphere as in the following steps: 1. Open a browser. Enter the IP address of your EMC VNX5400 SAN with an https:// prefix. Click Continue. Configure Zoning on Cisco Nexus 5548 Switches The following steps detail the procedure for configuring the Cisco UCS environment to boot the blade servers from the EMC VNX5400 SAN. Gather Necessary Information After the Cisco UCS service profiles have been created (see the earlier section), each infrastructure management blade has a unique configuration.

To proceed with the deployment, specific information must be gathered from each Cisco UCS blade to enable SAN booting. Insert the required information in the WWNN/WWPN table for Hyper-V servers in the Customer Configuration Worksheets sections. Both WWNN and WWPN from the Cisco UCS service profiles are needed for masking the LUNs on the VNX5400 SAN.

To gather the information for the Cisco UCS servers, launch the Cisco UCS Manager GUI as follows: 1. Select the Servers tab. Expand Servers >Service Profiles >root. Click each service profile and then click the Storage tab on the right. Record the WWPN for Fabric A and Fabric B for each service profile.

Record the WWNN for each service profile. Create Device Aliases and Create Zone for First Server These steps provide details for configuring device aliases for all devices on both Nexus A and Nexus B.

It also creates a zone for the primary boot path for the first server that will be installed and used for creating a 'gold image' or Master Boot image. The initial zoning provides a single path to the SAN. If more than one path is defined to the boot volume, and there is no multipath software available, as is the case for an initial installation of Windows Server 2012 R2, data corruption can occur on the disk.

After the operating system is installed and configured for MPIO, the secondary boot path can be defined. This configuration assumes the use of the default VSAN 1. Cisco Nexus 5548 A.

Copy run start Install Initial Microsoft Windows Server 2012 R2 The following steps provide the details necessary to prepare the host for the installation of Windows Server 2012 R2 Datacenter Edition. It assumes that the SAN has been zoned and the EMC VNX5400 has masked the LUN so only a single path to the server is available. To speed the process of installing Windows Server 2012 R2 across all the physical hosts, a multiple-step process is employed: • Install Windows Server 2012 R2 on a single physical server with the boot volume on the EMC VNX5400. • Perform some initial configuration tasks that are common for all servers used.

• Update the installation with the latest patches from Microsoft Update. • Present the boot LUN to both vHBAs and configure MPIO. • Sysprep the image.

• Remove the boot volume from the server on which it was installed. • Make clones of the sysprepped volume within the EMC VNX5400 so each physical server will have its own clone to boot from. • Configure zoning and masking for other servers. • Start each host and complete the mini-setup to tailor each node with things like name, IP addressing (if fixed IP addresses are used), and join to the domain. (It is possible to configure this sort of information with unattend command files. That is beyond the scope of this document, and many shops already have such procedures in place.). Note In order for the Windows Installer to recognize the Fibre Channel SAN boot disk for the initial server, the Cisco UCS fnic (storage) driver must be loaded into the Windows installer during installation.

Please download the latest Cisco Unified Computing System (UCS) drivers from www.cisco.com under Cisco UCS B-Series Blade Server Software and place the ISO on the same machine with the Windows Server 2012 R2 DVD ISO. Open a browser.

Enter the IP address of your fabric interconnect cluster with an https:// prefix. Click Continue. Sysprep the Initial Image Before you run the sysprep utility against this newly created image, it is a good idea to tailor the image to specific customer needs. This includes adding and/or configuring software that will be common for all systems built from this image. In addition to the tasks listed here, the customer may have their own management software to install. Remember that not all software can be installed before an image is sysprepped, so check with the software vendor before installing. At this point, if you have a DHCP server installed on your Management Network, the Management Network Interface should come up with an IP address.

If you do not have DHCP, use the following steps to determine which Network Interface is on the Management VLAN and configure it with a static IP with connection to the outside world. Initial Network Configuration The following sample screen shots may vary significantly from the actual customer environment. This is due to the fact that there are many variables in the potential customer network, and all variations are not covered in these samples. These samples assume that there is no DHCP server (which would make this a little easier, but is beyond the scope of this document). By assuming there is no DHCP server, all NICs will initially be configured with 169.254/16 APIPA addresses. These steps will assign fixed IP addresses to all the NICs. The first step that is necessary is to find the NIC through which host management is performed.

This is not the out-of-band NIC used by UCSM, but the NIC dedicated to host management. Log into the server. Enter the following PowerShell command.
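One way to produce such a table with cmdlets available in Windows Server 2012 R2 is shown below; the exact command used in the validated environment may have differed slightly.
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, MacAddress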

This returns a table of the network names and their associated MAC addresses. Go to the Servers tab in UCSM.

Select Servers >Service Profiles >root and the service profile for the machine you are working on. Expand the Service Profile. Click on vNICs. This enables you to see the MAC addresses for the Mgmt vNIC (in this example, Mgmt is the NIC used for host management). Find the MAC address in the table displayed in the previous step, and take note of the assigned name. For example purposes, assume it is 'Ethernet'.
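Once the management adapter has been identified ('Ethernet' in this example), it can be configured with a static address. The addresses below are placeholders; substitute the values recorded in the Customer Configuration Worksheets.
# Placeholder addresses; use the values from the Customer Configuration Worksheets
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.10.11 -PrefixLength 24 -DefaultGateway 192.168.10.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.10.5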

Common Configuration Tasks Some tasks must be performed to ensure that the hosts can be remotely managed for the remainder of these instructions. In an existing customer environment, the customer may handle some of these tasks via Active Directory group policy objects.

Setting up these tasks to be handled by group policies is beyond the scope of this document, so they should be reviewed with the customer. The Sample PowerShell Scripts section contains a sample PowerShell script, Set-UcsHyperVRemoteMgmt.ps1, that sets a number of firewall rules to enable remote management, enables some services to start automatically, and enables Remote Desktop. Run this script from a PowerShell command window. .NET Framework 3.5 Feature (optional) Depending upon the management tools that are deployed in the customer environment, it might make sense to load the .NET Framework 3.5 feature.

Doing it once on this image that will be sysprepped will save time in later deployments. This is not necessary in every data center. Ensure the KVM has the Windows Server 2012 R2 installation media mounted. Assuming the Windows Server installation media is mounted on drive E:, issue the following PowerShell command to add the .NET Framework 3.5 feature.

Install-WindowsFeature -Name NET-Framework-Core -Source E:\sources\sxs Run Windows Update It is highly recommended to fully patch the server at this time from Windows Update. Depending on the patches, it might be necessary to reboot and check for updates multiple times before the server is completely patched. Install Windows Roles and Features The Sample PowerShell Scripts section contains a sample PowerShell script, Add-UcsHyperVFeatures.ps1, which installs the MPIO and Failover Cluster features and the Hyper-V role. Run this script from a PowerShell command window. Installation of the Hyper-V role causes a reboot. Configure Paging File By default, Windows allocates and manages a portion of the system disk to be used as a paging file based on the amount of physical memory on a server. Since the workload on Hyper-V servers actually runs inside the VMs, the majority of paging occurs within the VMs, minimizing the need for a large page file on the physical server.

Therefore, it makes sense to minimize the size of the paging file of the Hyper-V host to minimize the amount of storage on the boot volume that is reserved for the paging file. In Server Manager, click on the Computer Name to bring up the System Properties window. Click on the Advanced tab. Configure MPIO After the server has been configured with the MPIO feature, it is necessary to present the additional paths to the boot LUN and configure MPIO. The goal is to sysprep this operating system image and then clone the LUN for use by all other physical servers; this means MPIO has to be configured only once. Then, since the operating system image that will be used for booting the additional blades will already have MPIO configured, it is possible to configure paths through both Nexus switches for initial boot of the sysprepped image. The Add-UcsHyperVFeatures.ps1 PowerShell script performed the installation and some configuration of Microsoft's MPIO.
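If further verification or tuning is desired, the MPIO PowerShell module installed with the feature exposes the default load-balance policy. A small sketch follows; these cmdlets are available in Windows Server 2012 R2, and changing the default policy is optional and not required by this procedure.
# Review (and optionally set) the default load-balance policy applied to newly claimed devices
Get-MSDSMGlobalDefaultLoadBalancePolicy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR   # optional: round robin as the default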

From an elevated command prompt issue the command mpclaim -s -d 0 to validate MPIO is configured on your system. Present Boot LUN to Additional Paths When the zones and zonesets have been updated to reflect the multiple paths to the LUN, it is necessary to configure the EMC VNX5400 SAN to present the boot LUN to the additional paths. Power off the server before starting. In Unisphere, go to Hosts >Initiators and click the Create button to add a new initiator.

The goal is to create an initiator to each port on the VNX5400. You will have two initiator records for each WWNN and WWPN combination for the server that match your zone entries on the Nexus switches. Be sure to select the appropriate SP-Port. Also select Existing Host and select the proper host.

Cloning the Sysprepped Image Removal of the Source Master Image After installation of the Windows Server instance and execution of the sysprep process, it is necessary to remove the source LUN from the Service Profile that was used to build the image. To remove the LUN from the Service Profile, use Unisphere to remove the Master_Boot LUN and the assigned host from its storage group. You may also want to remove the host initiator entries for the Master_Boot if you are using a different naming convention for production servers. With the base sysprep image created and the LUN containing the sysprepped image removed from the service profile, clones can be taken in order to replicate the contents of the master LUN for other servers in the environment. Prior to copying the data, target devices need to be created to be associated with the planned clone sessions. The clones can be created with ESI or through Unisphere.

Create Target LUNs with ESI Run the ProcessStorageRequests.ps1 script to create the appropriate clone target devices. This script uses the Storage_Luns.xml found in the Sample PowerShell Scripts. It must be modified to reflect the number of hosts for which boot LUNs will be created and the naming conventions used by the customer. Create Target LUNs with Unisphere Alternatively, EMC Unisphere can be used to create the target LUNs.

Create Clones with ESI Now that the clone target LUNs are created, the clone process can be run. To automate the clone process, modify the ProcessClones.xml configuration file contents to reflect the customer environment. Run the ProcessClones.ps1 script found in Sample PowerShell Scripts which reads the XML configuration file using ESI and naviseccli. The script will create concurrent clone copies and wait for 100% synchronization. When the copies are complete, the script will delete the clone relationship and the target LUNs can be used for deployment.

Create Clones with EMC Unisphere Alternatively, the following process can be executed from Unisphere to create the clone relationships and copy the data from the master LUN to the boot target LUNs. In Unisphere, go to Data Protection >Clones. Select the Create Clone Group link from the protection side-bar. Boot from Sysprepped LUNs After a clone of the sysprepped LUN has been created for each physical server to be built, you need to zone and mask the LUN before completing a build from the sysprepped image presented. Zone the Network Presenting the LUNs to the various hosts is a combination of configuring the zones and zonesets on the Cisco Nexus 5548 switches and masking the LUNs through Unisphere or naviseccli. The detailed steps for this were shown previously, so they will be summarized here. Zoning • Create the device alias for each service profile with the value of the fabric A WWPN defined on the A Nexus, and the value of the fabric B WWPN defined on the B Nexus.

• Create a zone for each service profile on each Nexus containing the device alias for appropriate server WWPN and both WWPNs of the associated EMC interfaces. • Add the created zones to the zoneset and activate it.

Figure 18 shows the end result of this step and provides a listing of the zoneset. (WWPN values will differ for each environment.)

Figure 18 Example Zoneset for Cisco Nexus 5548 A. Mask Boot LUNs to Service Profiles Following the cloning and zoning processes, the boot LUNs can be presented to their respective service profiles. The same XML configuration file used to create the boot LUNs can be used in conjunction with the PostClone_AddViaWWPN.ps1 script to present the boot LUNs to the servers.

The script will also register the appropriate initiators with the storage array and create the necessary storage groups along with presenting the LUNs to the appropriate servers. An alternative to using the script would be to use the Unisphere management GUI as outlined previously in the 'Mask Boot LUN with EMC Unisphere' section. Following the masking operations, start each host and complete the mini-setup to tailor each node with things like name, IP addressing (if fixed IP addresses are used), and join to the domain.

Complete Image Builds from Sysprepped Images When the sysprep image has been cloned and the LUNs are properly masked so the boot volumes only appear to the owning host, every server must complete its installation. Booting from a sysprep image runs what is referred to as a 'mini-setup'. Note This document does not describe the use of an unattend file. If your organization makes use of unattended installations of sysprep images, that can be used to replace these steps. Open Cisco UCS Manager. Select the Servers tab.

Open Service Profiles. Click on KVM Console to open a window from which you can manage the mini-setup. The association between the service profile and the blade should have taken effect when you created the service profile, so you should see the first screen of the Windows Server mini-setup. If it is still booting when you connect to it, you may see a series of progress messages display as the system completes the initial setup. Enter a complex password.

The password must contain characters from at least three of the following categories and be at least eight characters in length. • Upper case character • Lower case character • Digit • Special character 2. Re-enter the same password.

Click Finish. At this point, you will have a complete base image. This means you will need to activate Windows, change the name of the system, join to the domain, configure your network settings, and complete any other tailoring required to meet your company requirements for Windows Server installation.

Configure Networks It is recommended that you rename the network adapters from the Windows default values of 'Local Area Connection #x' to reflect the actual network from the Cisco UCS Service Profile. You can use the manual procedure defined earlier in the document, or you can use the sample PowerShell script, Set-UcsHyperVAdapters.ps1, found in Sample PowerShell Scripts. This script requires that the machine is domain-joined and that the script is run from a workstation that has the Cisco UCS PowerTool installed.
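For a single adapter, the manual rename amounts to matching the MAC address reported by Windows against the MAC of the corresponding vNIC in the service profile. A minimal sketch of that idea, assuming the PowerTool is available on the host itself (the module name, profile name, vNIC name, and management IP are placeholders; the provided script handles the remote case):
# Sketch: rename a Windows NIC to match its UCS vNIC, matched on MAC address
Import-Module CiscoUcsPS                                   # module name varies by PowerTool release
Connect-Ucs -Name 192.168.1.10                             # UCS Manager cluster IP (placeholder)
$vnic = Get-UcsServiceProfile -Name VSPEX-Host-01 | Get-UcsVnic -Name Mgmt
$mac  = $vnic.Addr -replace ':', '-'                       # Windows reports MACs with dashes
Get-NetAdapter | Where-Object MacAddress -eq $mac | Rename-NetAdapter -NewName $vnic.Name
Disconnect-Ucs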

The Set-UcsHyperVAdapters.ps1 script will assign a fixed IP address to each NIC based on a 192.168.xx.yy notation, where xx is the VLAN read from Cisco UCS and yy is a specific value assigned so that the last octet of each address is the same for each server. It also sets each adapter, except the excluded (generally the management) adapter, so that it does not register itself in DNS. It is best to have only the primary (management) address register in DNS. Depending upon your configuration, you might have DHCP set up for every network.

In that case, it is not recommended to use this script without first modifying it so that it does not alter IP addresses. Configure Hyper-V Virtual Switches To configure Hyper-V virtual switches, do the following: 1.

From Server Manager >Tools, select Hyper-V Manager. The Unisphere Host Agent will bind to the first NIC within the binding order on the host. This needs to be a NIC that can communicate with the VNX SP IP addresses. If this ends up being the incorrect NIC, use the agentID.txt file to set the correct interface. In the installation directory for the Unisphere Host Agent (default = C:\Program Files (x86)\EMC\Unisphere Host Agent), create a file called agentID.txt. Within the file, place the server name on the first line, press Enter, and then place the IP address of the desired management interface on the second line. Create Hyper-V Failover Cluster When you have completed the build of the servers to SAN boot in a multipath I/O environment, have all the network adapters configured the same, and the hosts joined to the Active Directory domain, you will create the failover cluster on which all the virtual machines will be deployed. This cluster can be expanded up to a total of 64 hosts for running VMs within the Microsoft virtualized environment.
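Once the witness and CSV LUNs described in the next section have been formatted, validation and creation of the cluster can be done with the Failover Clustering PowerShell cmdlets. This is only a sketch; the node names and static address are placeholders, and the cluster name matches the one used later in this document.
# Sketch: validate and create the Hyper-V failover cluster (placeholder node names and IP)
$nodes = "VSPEX-Host-01","VSPEX-Host-02","VSPEX-Host-03","VSPEX-Host-04","VSPEX-Host-05","VSPEX-Host-06"
Test-Cluster -Node $nodes                                   # review the validation report before continuing
New-Cluster -Name VSPEX-Clus01 -Node $nodes -StaticAddress 192.168.10.50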

Create Shared Storage Microsoft Failover Clusters use shared storage for storing the VMs. For a 300 VM deployment, 3 storage pools are created, with 45 SAS drives and 2 EFDs in two of the pools and 20 SAS drives and 2 EFDs in the third pool. Two 7 TB LUNs will be provisioned from each 45-drive pool and two 3 TB LUNs will be provisioned from the 20-drive pool. A small LUN of 1 GB in size must be created on any of the 3 pools to act as the witness disk in the cluster. There will be 7 LUNs created in total. • Witness Disk - 1 GB • Cluster Shared Volume 1 - 7 TB • Cluster Shared Volume 2 - 7 TB • Cluster Shared Volume 3 - 7 TB • Cluster Shared Volume 4 - 7 TB • Cluster Shared Volume 5 - 3 TB • Cluster Shared Volume 6 - 3 TB When these LUNs are created, they need to be added to each of the storage groups associated with each node in the Hyper-V cluster. When the same LUN is added to multiple storage groups, the VNX will display an error message cautioning about the possibility of corrupting data.

The clustering software controls access to the LUNs, so that is acceptable. It is recommended that at least one CSV is created for each node in the cluster. A new feature of Windows Server 2012 R2 is to distribute the CSV ownership across the cluster nodes to ensure that management functions are not concentrated on any single node. However, to speed the time it takes to run the Cluster Validation Wizard, you should not present these volumes to the cluster until after the cluster is formed. The Cluster Validation Wizard will create multiple combinations of failover scenarios to be tested. The more disks in the test, the longer it takes to complete the validation.

Since the storage array will be tested with the initial two disks, subsequent disks can be added later with the knowledge that the initial disks were configured correctly. You may also want to create larger or smaller LUNs depending upon the practices in place in the customer environment. The VSPEX reference VM allocates 100 GB per VM, though a typical boot VM is only 50 GB.

During normal operations, Hyper-V reserves enough space on disk to capture the memory of the running VM in case of certain types of failures. This space is in addition to the space assigned to the virtual hard drive files. To run 300 VMs on the six nodes of a cluster assumes that 60 reference VMs will run on each of five nodes, with a single node for backup when one of the other nodes needs to be taken out of service.

In order for a single CSV to contain all 60 VMs, it is recommended to create the CSV to be 7 TB to allow for a little overage. Before you can test and form the cluster, it is necessary to format the shared LUNs as NTFS volumes. Perform the following steps on only one node of the cluster to format the drives.
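As an alternative to the Computer Management steps that follow, the disks can also be brought online, initialized, and formatted with PowerShell on that one node. This is a sketch only; the disk numbers and volume labels are placeholders and will differ per environment.
# Sketch: online, initialize, and format a newly presented LUN (repeat per disk; numbers are placeholders)
Set-Disk -Number 2 -IsOffline $false
Set-Disk -Number 2 -IsReadOnly $false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV1" -Confirm:$false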

1. From Server Manager on one of the hosts to which the storage has been presented, select Tools > Computer Management. (Alternatively, type compmgmt.msc into a command or PowerShell window.) 2. Right-click on the area under the disk number designation and select Online to bring the volume online. Repeat for each new LUN.
(Get-ClusterNetwork -Cluster VSPEX-Clus01 -Name LiveMigration).Role = 0
The cluster is complete.

By default, Hyper-V will store the virtual hard drives for created virtual machines on the system drive. It is easy to set up Hyper-V to default to the Cluster Shared Volumes for storage. This is not an absolute requirement, but it does make management easier. A good practice is to have the same number of Cluster Shared Volumes as you have nodes in the Hyper-V cluster. Each node in the cluster would have a default storage location on one of the Cluster Shared Volumes. Within the Hyper-V Management console, select Hyper-V Settings from the Actions pane.

Sample PowerShell Scripts These sample PowerShell scripts and input files are provided as examples to assist in the rapid deployment of this VSPEX environment. They should be reviewed for compliance with customer policies and naming conventions.

They were tested within the lab environment where this system was configured. Security policies in your environment may not allow these scripts to run. No warranty or support is implied by their inclusion within this document.

They were included to provide you with a starting point if you want to automate some steps. Cisco Scripts UcsConfig.ps1.

Cisco Nexus 1000V Installation Cisco Nexus 1000V Series Switches provide a comprehensive and extensible architectural platform for virtual machine and cloud networking. The switches are designed to accelerate server virtualization and multi-tenant cloud deployments in a secure and operationally transparent manner for environments like Microsoft's Private Cloud. Download the distribution software from the location specified in the Software Revision table at the beginning of this document and expand it into a temporary directory. Create Two Virtual Supervisor Module Virtual Machines The Nexus 1000V runs as a pair of virtual machines for high availability purposes. The Nexus 1000V distribution contains an ISO file (nexus-1000v.5.2.1.SM1.5.1.iso) that is used in the creation of the virtual machines that will run the Nexus 1000V software. Copy it to the Virtual Machine Manager library.

(The VMM library is a standard Windows share, so normal procedures for putting simple files into the share work.) Refresh the library location after the copy is completed. From an elevated PowerShell window on a Virtual Machine Manager machine, navigate to the directory containing the extracted contents of the Nexus 1000V distribution. Find the Register-Nexus1000VVSMTemplate.ps1 script and execute it.