Capacity-on-Demand (Part 2 of 2)

It’s important to acknowledge that not all servers in a data centre run at full capacity. Peak loads are rarely sustained, and the extra headroom that system administrators provision to preempt server crashes during load spikes sits idle most of the time. What if we could harness and repurpose 20% of such idle capacity from a 1,000-server farm while enhancing service levels and adding value?
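As a back-of-the-envelope illustration of the claim above (the figures are the article's own, the function is hypothetical):

```python
def reclaimable_servers(farm_size: int, idle_fraction: float) -> int:
    """Whole servers' worth of idle capacity that could be repurposed."""
    return int(farm_size * idle_fraction)

# 20% of a 1,000-server farm is 200 servers' worth of capacity,
# without buying a single new machine.
print(reclaimable_servers(1000, 0.20))  # 200
```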

Server Virtualization

Many daily activities in a data centre involve moving servers, adding servers, and changing server configurations, commonly known as MAC (Move, Add, Change) operations. These seemingly routine tasks become increasingly prevalent and complex in many large enterprises with a growing array of operating systems, databases, web and application services, and geographically dispersed data centres.

Virtualization shifts the work from hardware setup to software configuration: it slices physical hardware into multiple programmable servers, each with its own CPU, memory, and I/O. Once automated, this software work incurs little marginal labour cost, allowing MAC activities to scale swiftly, cost-effectively, with precise accuracy, and across geographically dispersed sites.
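A minimal sketch of what this "slicing" looks like as software, with hypothetical names; real hypervisors such as KVM, ESXi, or Hyper-V do far more, but the Add and Change of MAC reduce to calls like these:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """One physical host carved into virtual servers."""
    cpus: int
    mem_gb: int
    vms: dict = field(default_factory=dict)

    def add(self, name: str, cpus: int, mem_gb: int) -> None:
        """The 'Add' in MAC: allocate a slice of the host to a new VM."""
        used_cpu = sum(v[0] for v in self.vms.values())
        used_mem = sum(v[1] for v in self.vms.values())
        if used_cpu + cpus > self.cpus or used_mem + mem_gb > self.mem_gb:
            raise ValueError("insufficient capacity on host")
        self.vms[name] = (cpus, mem_gb)

    def change(self, name: str, cpus: int, mem_gb: int) -> None:
        """The 'Change' in MAC: resize an existing VM in place."""
        del self.vms[name]
        self.add(name, cpus, mem_gb)

host = Host(cpus=64, mem_gb=512)
host.add("web01", cpus=8, mem_gb=32)
host.change("web01", cpus=16, mem_gb=64)  # dynamic resize, no hardware work
```

Because the operation is a function call rather than a trip to a rack, it can be scripted, audited, and repeated identically across a thousand hosts.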

Virtualization underpins a significant shift in data centre operations:

Firstly, we no longer need to oversize servers, knowing that CPU, memory, and storage resources can be dynamically adjusted. This, however, doesn’t diminish the importance of proper capacity sizing, but it eliminates the psychological “more is better” effect.

Secondly, we no longer need to panic when a server suffers from the infamous “crash of unknown cause.” A hot or cold standby server, utilizing harvested resources, can quickly minimize user impact.

Thirdly, cloning a server becomes effortless, which makes it easy to enforce the same security settings across all servers and minimizes human error.
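The cloning point above can be sketched as stamping one hardened "golden" configuration onto every new server so that security settings never drift; the settings and function names here are illustrative, not a real API:

```python
import copy

# Hypothetical hardened baseline that every clone inherits.
GOLDEN_IMAGE = {
    "ssh_root_login": False,
    "firewall": "default-deny",
    "patch_level": "2024-06",
}

def clone_server(name: str, overrides=None) -> dict:
    """Clone the golden image, allowing only per-server tweaks (e.g. hostname)."""
    server = copy.deepcopy(GOLDEN_IMAGE)
    server["name"] = name
    server.update(overrides or {})
    return server

fleet = [clone_server(f"app{i:02d}") for i in range(1, 4)]
# Every clone carries the same hardened baseline:
assert all(s["firewall"] == "default-deny" for s in fleet)
```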

Fourthly, virtualization provides a kill switch during a suspected cyberattack: take a snapshot of the server and its memory map for forensic purposes, then shut the server down to contain the exposure.

Workstation Enablement

High-end workstations are typically reserved for power users who work with large datasets in tasks like data modelling, analytics, simulation, and gaming. Thanks to significant advancements in chip technology, virtualization has gained substantial traction in high-performance computing (HPC). It allows more desktop users to access workstation-class capabilities and provides ready-to-use specialized HPC software, such as MATLAB, SPSS, and AutoCAD, maintained centrally without the hassle of per-unit installation. Both CPU- and GPU-intensive workloads are processed in the data centre, with only the screen changes, for example, transmitted back to the user on a lightweight desktop computer. Decent performance largely depends on sufficient desktop bandwidth (a minimum of 1 Gbit/s, based on my experience), assuming the enterprise has ample bandwidth within the data centre.
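A rough, illustrative calculation of why that desktop bandwidth matters: even a single uncompressed 1080p stream is on the order of the 1 Gbit/s figure above, which is why remoting protocols compress and send only screen changes rather than full frames.

```python
def raw_stream_gbps(width: int, height: int, bytes_per_pixel: int, fps: int) -> float:
    """Bandwidth of an uncompressed video stream in gigabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e9

# 1080p, 24-bit colour, 30 frames per second, no compression:
print(round(raw_stream_gbps(1920, 1080, 3, 30), 2))  # 1.49 Gbit/s
```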

Network Virtualization

Computer networking primarily involves switching and routing data packets from source to destination. It seems simple, except when addressing MAC activities such as firewalling a group of servers at dispersed locations for a business unit dealing with sensitive data or filtering malicious traffic among desktops. The proliferation of IoT devices and surveillance cameras with delayed security patches only exacerbates the situation.

By creating logical boundaries at layer two for data switching or layer three for data routing among the servers in the data centre, users’ desktops, or specialized devices, one can easily insert either a physical or software-based firewall into the data path to protect workloads.
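A minimal sketch of what a software firewall inserted at such a layer-three boundary does; the subnets, rules, and function are hypothetical, and production deployments would use a real filter such as nftables or a virtual appliance:

```python
import ipaddress

# Hypothetical fenced-off segment for the business unit with sensitive data,
# and the only source range permitted to reach it.
SENSITIVE_SUBNET = ipaddress.ip_network("10.20.0.0/24")
ALLOWED_SOURCES = [ipaddress.ip_network("10.10.5.0/28")]

def permit(src: str, dst: str) -> bool:
    """Default-deny filter guarding traffic into the sensitive subnet."""
    dst_ip = ipaddress.ip_address(dst)
    if dst_ip not in SENSITIVE_SUBNET:
        return True  # traffic elsewhere is not this firewall's concern
    src_ip = ipaddress.ip_address(src)
    return any(src_ip in net for net in ALLOWED_SOURCES)

print(permit("10.10.5.3", "10.20.0.7"))    # True:  authorised business unit
print(permit("192.168.1.9", "10.20.0.7"))  # False: blocked at the boundary
```

Because the boundary is logical, the same rule protects servers in the data centre, users' desktops, or specialized devices regardless of where they physically sit.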

Crucial Requirement

While both cloud and virtualization offer similar agility in modern IT, staff expertise in network and system architecture remains the most crucial requirement for successful implementation and realization of the benefits. It is timely for enterprises to incorporate Generative AI into their technology workforce, allowing staff and tools to learn and grow together, promoting knowledge retention and transfer.