
Hardware and Infrastructure

Technical Basics of Cloud and Scalable Computing 57

Traditional data center hardware is not that different from cloud data center hardware except for how it is configured by the underlying software and firmware. Non-cloud, or traditional, data centers often had multiple software platforms to support a diversified software ecosystem, which also affected the kind of hardware used. Cloud data centers are homogeneous in both software and hardware, so optimization is simpler. Cloud data centers are designed and configured to cater to a virtually infinite number of users, while traditional data centers are meant to serve only the capacity of the organization and perhaps some third parties such as partners and customers.

Since data centers are enormous investments for a company, meant to provide a significant return on investment (ROI) in the long term, great planning is required.

According to the white paper produced by Network System Architects (www.nsai.net/White_Paper-Planning_A_Data_Center.pdf), the following are key points to consider in the planning stages for a data center:

Floor space

Power requirements and conditioning, including backup and uninterruptible power supplies

Heating, Ventilating, Air Conditioning (HVAC) and fire prevention systems

Network connectivity and security

Support and maintenance strategy

Business continuity and disaster recovery

Logical security

Floor Space Consideration

One important consideration is the building that the data center will be located in, especially the floor space. You must consider the overall weight of the data center, including all the equipment and even the flooring itself, especially if it is to be located on floors higher than the ground floor. The capacity rating of the building must be considered when the center needs to be placed above ground level, which is often the case in areas that are prone to flooding. The floor and floor tiles require special consideration because potentially heavy loads from equipment are concentrated in small floor areas. We are, of course, referring to the racks themselves. Each rack has a different rating according to its function, so let's consider one rated for 2,000 pounds. That means each foot (caster) of the rack carries a point load of 500 pounds, which must be borne by one or two square inches of floor space, and if tiles are used, we have to consider that one or two rack casters can occupy the same tile. So we must make sure that each tile is able to handle the weight placed upon it.
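The point-load arithmetic above can be sketched as a small helper. This is a minimal illustration of the calculation, not a structural-engineering tool; the 2,000-pound rating is the example figure from the text.

```python
# Point-load check for rack casters on raised-floor tiles.
# Figures are illustrative, taken from the example in the text.

def caster_point_load(rack_weight_lbs: float, casters: int = 4) -> float:
    """Weight concentrated on each caster (foot) of a rack."""
    return rack_weight_lbs / casters

def tile_load(rack_weight_lbs: float, casters_on_tile: int) -> float:
    """Total load on one floor tile carrying the given number of casters."""
    return caster_point_load(rack_weight_lbs) * casters_on_tile

# A rack rated for 2,000 pounds puts 500 pounds on each caster;
# two casters sharing one tile load it with 1,000 pounds.
assert caster_point_load(2000) == 500
assert tile_load(2000, casters_on_tile=2) == 1000
```

A real assessment would compare `tile_load` against the tile manufacturer's concentrated-load rating, not just the uniform-load rating.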

Raised flooring is often the choice of many designers because it gives the best flexibility in terms of network and electrical cabling and offers a cheaper alternative to open-air cooling because you would need to cool only the air beneath the floor and within the racks.

Power Considerations

Data centers by virtue of their purpose require a lot of power, which makes this an important consideration that must be addressed very early on. Power coming from the main electrical grid is not often stable and will fluctuate depending on the time of the day as the overall load

changes dynamically. It is important to have power conditioners, which will ensure that the power supply is within an acceptable level.

Since uptime is important, uninterruptible power supplies (UPSs) and backup power generators are a must. UPSs are only there to sustain power before backup generators can go online, but they should still be able to provide sustained power in case the backups are running a little late. We do not need to stress how important backup generators are; they are a necessity for sustained uptime and disaster preparedness, so they must be able to provide more than enough power and provide it for extended periods. A good way to ensure uptime is through the use of a redundant power infrastructure, that is, to have two separate power infrastructures working together or alternately to power the data center. Figure 3.2 shows two separate power lines being connected to a cloud data center; this ensures that when one power provider goes down, the other can take over in powering the center.

Figure 3.2: Redundant power infrastructure. Two separate feeds, each with its own diesel generator and UPS/conditioner, power the cloud data center.

Heating, Ventilating, Air Conditioning (HVAC) and Fire Prevention Systems

Data centers are like raging fire pits. If you own a laptop, you know how hot it can get, and that machine has only one processor module. Now imagine that a single rack can contain 10 or more blade servers a little bit wider than a laptop, thrice as thick, and containing somewhere between 2 and 32 processors (stacking 16 server boards that have 2 processor sockets each). The heat generated would be tremendous, and so it follows that the cooling systems for the whole floor area and for each rack would have to be on par to keep the temperature at manageable levels, preventing overheating even in times of extreme processing load.

The ad hoc industry standard for calculating cooling requirements is to provide one ton of cooling for every three kilovolt-amps (kVA) of power required. Cooling capacity is usually described in tons, with one ton equal to 12,000 British thermal units (BTUs) per hour.
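As a rough sketch, the rule of thumb converts a facility's power draw directly into a cooling requirement. The one-ton-per-three-kVA ratio is the ad hoc figure from the text, not an engineering specification.

```python
# Rough cooling estimate: one ton of cooling (12,000 BTU/hr)
# per three kVA of power draw, per the rule of thumb above.

BTU_PER_TON = 12_000  # BTU per hour, per ton of cooling

def cooling_tons(power_kva: float, kva_per_ton: float = 3.0) -> float:
    """Tons of cooling needed for the given power draw."""
    return power_kva / kva_per_ton

def cooling_btu_per_hour(power_kva: float) -> float:
    """Same estimate expressed in BTUs per hour."""
    return cooling_tons(power_kva) * BTU_PER_TON

# A 300 kVA server floor needs about 100 tons (1,200,000 BTU/hr) of cooling.
assert cooling_tons(300) == 100
assert cooling_btu_per_hour(300) == 1_200_000
```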

Another consideration is the equilibrium of the cooling for HVAC systems. The atmosphere within the center must maintain a balance of temperature and humidity, keeping in mind equipment operating temperature as well as operator comfort. Nicely cooled equipment is good, but the environment must also be comfortable for the operators who work within the premises. Humidity levels must be monitored as well: high levels cause condensation and equipment corrosion, while low levels facilitate electrostatic discharge (ESD).

Network Connectivity and Security

All that super computing power would go to waste if you cannot direct it to your users.

The main objective of a data center is to house computer equipment, and it needs network connectivity to have any purpose at all. The network must be well designed with future growth in mind, so an upgrade path must be considered in the design.

Data cables would consist of a mixture of fiber optics and Cat 5e or Cat 6 rated cables, but quality of connectivity will depend on the type of equipment being used, which also depends on the applications that the center needs to offer. The position and distribution of network equipment needs to be well thought out, especially in large data centers where the distances can easily go beyond recommended cable lengths for device interconnectivity. The data center’s orientation to the building’s telecommunications equipment should be considered, and any equipment meant for external communication should be placed as close to the Telco’s closest ingress into the data center as possible to reduce latency.

But the main issues in planning network connectivity involve capacity and redundancy.

Capacity is the total bandwidth in both directions of your data center, while redundancy relates to keeping the connection alive by selecting external connectivity providers that offer high redundancy or by employing multiple service providers. Table 3.1 shows different leased lines that are usually offered by network providers.

Table 3.1: Types of leased lines

Leased Line    Capacity
T1/DS-1        1.544 Mbps
T3/DS-3        44.736 Mbps
OC-3           155.52 Mbps
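Capacity planning with these lines is mostly addition; the sketch below sums line rates for the redundancy schemes discussed next. The standard DS-3 rate of 44.736 Mbps is used; the helper and its combinations are illustrative.

```python
# Aggregate bandwidth for combinations of the leased lines in Table 3.1.
# Rates in Mbps (standard line rates).

LINE_MBPS = {
    "T1/DS-1": 1.544,
    "T3/DS-3": 44.736,
    "OC-3": 155.52,
}

def total_capacity(lines) -> float:
    """Total bandwidth of a set of leased lines, in Mbps."""
    return sum(LINE_MBPS[name] for name in lines)

# Symmetrical redundancy: two T3 lines from separate providers.
symmetric = total_capacity(["T3/DS-3", "T3/DS-3"])
# Asymmetrical redundancy: a T3 main line plus a T1 from another provider.
asymmetric = total_capacity(["T3/DS-3", "T1/DS-1"])

assert round(symmetric, 3) == 89.472
assert round(asymmetric, 3) == 46.28
```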

There are two ways to plan for redundancy, symmetrical and asymmetrical:

Symmetrical Symmetrical network redundancy can be achieved by leasing, for example, two T3 lines from two separate providers.

Asymmetrical Asymmetrical redundancy is achieved by leasing a T3 line from one provider and a T1 line from another provider.

Network redundancy can be either active-active or active-passive.

Active-Active Active-active means that both lines are working together simultaneously; this type of redundancy provides the greatest bandwidth because both lines carry traffic at the same time.

Active-Passive Active-passive redundancy, however, is what we consider backup redundancy. Only one line is really active at a time: the main line, which has the bigger capacity. The other, lower-capacity line is used as a backup in case the main line becomes unavailable. This scheme provides redundancy at a lower cost, at the expense of bandwidth.

Having an active-active network is beneficial for organizations that deal with high volumes of traffic from their customers. It not only offers high bandwidth; it is also a good way to maintain high availability. However, a cost-benefit analysis must be performed to determine whether this costly method is suitable.
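The trade-off between the two modes can be made concrete by comparing usable bandwidth before and after a line failure. This is a toy model under the assumptions above (a T3 main and either a T3 or T1 secondary), not a network simulator.

```python
# Usable bandwidth (Mbps) under active-active vs. active-passive
# redundancy, with and without a failure of the main line.

def usable_bandwidth(main: float, secondary: float,
                     mode: str, main_up: bool = True) -> float:
    if mode == "active-active":
        # Both lines carry traffic simultaneously.
        return (main if main_up else 0.0) + secondary
    if mode == "active-passive":
        # Only one line is live: the main, or the backup if the main fails.
        return main if main_up else secondary
    raise ValueError(f"unknown mode: {mode}")

# Two T3 lines, active-active: 89.472 Mbps normally, 44.736 on failure.
assert usable_bandwidth(44.736, 44.736, "active-active") == 89.472
assert usable_bandwidth(44.736, 44.736, "active-active", main_up=False) == 44.736
# T3 main with T1 backup, active-passive: 44.736 normally, 1.544 on failure.
assert usable_bandwidth(44.736, 1.544, "active-passive") == 44.736
assert usable_bandwidth(44.736, 1.544, "active-passive", main_up=False) == 1.544
```

Note how active-passive degrades to the backup line's full capacity, while active-active loses only the failed line's share.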

Another redundancy measure aside from a redundant wide area network (WAN) is to be redundant at the edge of your network. It is always better to have two Internet routers meshed to your local area network (LAN) so that when one fails, there is still network connectivity between your data center and the outside.

Support and Maintenance Strategy

A high-tech installation such as a data center should be well maintained to keep it running in top condition even after many years of service. Our worst enemy is software degradation: the eventual failure of the various applications and OSs running on your servers. Due to processing, constant file transfers, data traffic, and other normal day-to-day operations, settings and file systems can become corrupted and cluttered. This leads to eventual slowdowns and even failures such as crashes and hangs. This can happen even when your hardware is kept in the best condition possible, so it is essential to perform regular server maintenance (either automated or manual). This involves backing up critical data and cleaning or resetting various parameters and settings to make sure everything works as expected.

Eventually something will break down, so there have to be proper procedures to follow when it does. Backup systems have to be put in place to take over for whatever system broke down. A proper upgrade path also needs to be laid out: which parts of the system will eventually be upgraded or replaced with newer versions or standards has to be figured out beforehand. Upgrading has to be made easy; the center has to be laid out in a way that does not require a lot of teardown.


Business Continuity and Disaster Recovery

Disasters happen. They are out of our control, but their impact can be minimized through careful planning and preparation. This doesn't mean you have to have a bombproof building or one that could survive a catastrophic event; what it does mean is that the most important commodity housed in your data center should be well protected in the form of backups.

Backing Up on Tape Tape backups are the cheapest and most common backup method used today. Since tape backups do not have the advantage of capacity, they are usually performed daily and then sent to a secure location offsite. This ensures that there are physical backups to use in case of disaster or accidental corruption or erasure.

Backing Up by Data Vaulting Another method for performing backups is data vaulting. This involves a WAN link to a remote backup facility, which backs up data as frequently as desired, even up to the minute. This ensures almost real-time backup but is obviously more costly because the backup facility itself is a data center. There are, however, some third-party providers of backup services.

Physical Security

The network operations center (NOC) is your command center when it comes to the day-to-day operations of your data center. It can serve as the single point of monitoring, supervision, and maintenance for your network, software distributions and updates, and other essential processes. The NOC includes climate control systems and power and generator monitoring and can stand as the security control center for the data center facility.

The NOC will be the office of the operations staff, so make sure there are ample facilities and space for the consoles and monitoring equipment to be placed there. When interconnecting different data centers from different time zones, it is good practice to allow NOCs to be able to take control of data centers other than the ones they are originally set up for. This helps you to tap into your pool of technical experts from different locations to solve local problems. It also minimizes late-night shifts because NOCs from other time zones can take control during the late hours.

Here are a few things to keep in mind when designing security for the data center:

All access points into the data center must be controlled by security access devices such as card readers, cipher locks, and biometric scanners. All access attempts must be logged, no exceptions, so this must be done automatically. If possible, a short video clip is automatically recorded with each access attempt and stored for a certain duration pending review.

Avoid using exterior walls for the data center. The center must be a room within a room to prevent external accidents or intrusion. Windows especially should be avoided because they interfere with cooling.

Video cameras and motion sensors must be installed and monitored around the facility, including on the raised flooring and dropped ceilings.

Air ducts must be made inaccessible to humans to prevent physical intrusions.

Logical Security

Physical security is actually the least of your worries. Very few bad elements would go so far as to try to enter a data center because of the high risk of getting caught. The most valuable commodity is data, and it is not physical. System intrusions are common, and it is estimated that data center systems are besieged at least a hundred times a day. Most or all are unsuccessful attempts that the system handles by itself.

As for administering devices, multilevel authorization should exist, and engineers and operators should be given only minimal access: only that which is needed to complete their tasks. Access to the server console should be through a separate network that is available only via the local NOC. If NOCs from other time zones are given control at times, network connectivity should run to those specific locations only and run parallel to the organization's internal backbone. Strong encryption is required at the very least.
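The least-privilege idea above can be sketched as a simple role-to-permission lookup. The role names and permissions here are illustrative assumptions, not a prescribed scheme.

```python
# Minimal-access sketch: each role is granted only the permissions it
# needs, and anything not explicitly granted is denied.

ROLE_PERMISSIONS = {
    "operator": {"view_console", "restart_service"},
    "engineer": {"view_console", "restart_service", "change_config"},
    "admin":    {"view_console", "restart_service", "change_config",
                 "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default; grant only explicitly assigned permissions."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("operator", "view_console")
assert not is_allowed("operator", "change_config")  # least privilege
assert not is_allowed("visitor", "view_console")    # unknown role: denied
```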

Because the most logical point of entry of any attacker is the network, this is where security has to be at its finest. It should be applied in a tiered fashion:

Tier 1 This is your edge protection, the first line of defense, using hardware and software firewalls specifically calibrated for the center's needs. Bastion hosts belong here; a bastion host is a special-purpose computer designed and configured to withstand attacks on the network.

Tier 2 This is the next layer, which separates publicly accessible devices such as DNS and web servers from the internal network. Typically, the devices used here are still firewalls, and in some cases both tier 1 and 2 layers reside in the same physical device. VPN tunneling for passing confidential data can be set up parallel to the firewall.

Tier 3 This is an additional layer that can be implemented when you need additional separation from the overall network for environments that store highly critical information, such as a database of classified files or bank records.
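A toy model of the tiered approach: a packet must clear each tier in order before reaching deeper resources. The tier rules, host names, and packet fields below are illustrative assumptions, not real firewall configuration.

```python
# Tiered filtering sketch: evaluate tiers in order and report the first
# tier that rejects the packet, or None if it is admitted.

TIERS = [
    # Tier 1: edge firewall / bastion host blocks known-bad sources.
    ("tier1_edge",     lambda pkt: pkt["src"] != "blocked.example"),
    # Tier 2: only published services are reachable from outside.
    ("tier2_dmz",      lambda pkt: pkt["dest"] in {"dns", "web", "db"}),
    # Tier 3: critical stores (here, "db") require extra authorization.
    ("tier3_critical", lambda pkt: pkt["dest"] != "db" or pkt["auth"]),
]

def admit(pkt):
    """Return the name of the tier that rejected the packet, else None."""
    for name, rule in TIERS:
        if not rule(pkt):
            return name
    return None

assert admit({"src": "ok.example", "dest": "web", "auth": False}) is None
assert admit({"src": "ok.example", "dest": "db", "auth": False}) == "tier3_critical"
assert admit({"src": "blocked.example", "dest": "web", "auth": False}) == "tier1_edge"
```

The point of the ordering is that an attacker must defeat every tier, not just one, before reaching the most critical data.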