Deploying and Managing a Cloud Infrastructure


Some material included in standard print versions of this book may not be included in e-books or print-on-demand. I would like to thank my family for giving me the time and space needed to complete the chapters of this book.

CompTIA Cloud+ certification denotes an experienced IT professional equipped to provide secure technical solutions to meet business requirements in the cloud. Not only has the cloud ushered in a new era for business IT, it has become the enabler technology for today's internet startups.

Table of Exercises

Who Should Read This Book

The company then had to construct a purpose-built data center to house the components and then configure, support, and manage it. Even for large companies that had their own massive data centers to distribute business applications to workers and store business data, just managing the data center was a hassle, driving up costs.

How This Book is Organized

At the end of the chapter is a discussion of the security and functionality aspects of these models. Deployment-related aspects such as HA, multipathing and load balancing are discussed at the end of the chapter.

1 Understanding Cloud Characteristics

TOPICS COVERED IN THIS CHAPTER INCLUDE

When you think about it, most of the world's critical business infrastructure relies on a handful of huge - really huge - data centers scattered around the world. This chapter begins with an overview of some of the most important concepts in cloud computing.

Elasticity

Towards the end of the chapter, there is a special section on elastic object-based storage and how it has enabled enterprises to store and process big data in the cloud. If Amazon were smart, they would put 5 extra (or maybe 10) servers inside their data center in anticipation of the holiday season.

On-Demand Self-service/JIT

During the holiday season, there is an influx of 1,000 users, which is double the capacity of what the current implementation can handle. This would mean physically provisioning 5 or 10 machines, configuring them and connecting them to the current deployment of 5 servers.

Templating

First you'll need to specify the Amazon Machine Image (AMI), which is Amazon's version of a template used to spin up a preconfigured custom server in the cloud. However, it is recommended not to run mission-critical applications on top of shared community AMIs until you are confident about the security practices in place.
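
As a rough sketch of how an AMI serves as a template, the snippet below launches a single instance from an AMI with the boto3 library; the AMI ID, region, and key pair name are placeholders, not values from the book.

# Minimal sketch: launching an EC2 instance from an AMI template with boto3.
# The AMI ID, region, and key pair name below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # the AMI acting as the template
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id} from the AMI template")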

Pay as You Grow

In theory, pay-as-you-grow would mean that cost per user is treated as constant, and scale-out would mean a linear increase in cloud infrastructure and resource usage bills. The cloud engineer or team at awesome-product.com calculates the cost incurred per user to be $1.

Chargeback

If you measure costs linearly, you simply multiply $1 per user by the user count, arriving at an estimate of $10,000 per month for the cloud infrastructure under a pay-as-you-go model. This model would calculate the cost per user not based on the total cost of the cloud infrastructure in the early stages, but on the optimal number of users per server and whether the network would optimally handle spikes in usage (something the cloud provider will have to determine).
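
As a toy illustration of the two approaches, the sketch below contrasts the naive linear estimate with a capacity-based estimate; the users-per-server and per-server cost figures are invented for the example.

# Hypothetical chargeback sketch: a naive linear estimate vs. a
# server-capacity-based estimate. All figures are illustrative.
cost_per_user = 1.00          # $1 per user per month (from the example)
active_users = 10_000         # implied by the $10,000 linear estimate

# Linear pay-as-you-grow estimate: cost scales directly with users.
linear_monthly_cost = cost_per_user * active_users

# Capacity-based estimate: bill by the servers actually provisioned.
users_per_server = 500        # assumed optimal users per server
cost_per_server = 400.00      # assumed monthly cost of one server
servers_needed = -(-active_users // users_per_server)   # ceiling division
capacity_monthly_cost = servers_needed * cost_per_server

print(f"Linear estimate:   ${linear_monthly_cost:,.2f}/month")
print(f"Capacity estimate: ${capacity_monthly_cost:,.2f}/month "
      f"({servers_needed} servers)")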

Implementing Chargeback

Ubiquitous Access

Security: Cloud tenants usually contact the cloud provider for resource management over the network. This does not mean that the cloud provider fails to offer ubiquitous or pseudo-ubiquitous access.

Metering Resource Pooling

Access to the cloud platform for tenants should be abstracted from the underlying details of how network requests would be routed to the appropriate data center. The cloud vendor must have built-in metering so that it can calculate the usage of pooled resources at the atomic level.

Multitenancy

Although virtualization providers such as VMware do provide mechanisms to allocate resources on a server (compute, storage, and network) at a granular level, there will be overcommitment based on the number of VMs running on a physical server at any given moment. However, when the server becomes saturated with the maximum number of VMs that can be spun up, the resources will be strictly rationed.

Single-Instance Model

If a dual Intel Xeon server with IB ports and SSD storage only has a single VM running, the application running on the VM would certainly give its best performance. This resource measurement would have to be implemented on top of the VM resource monitoring and not at the bare metal level.

Customized Configurations

For example, AWS offers instances that specify the number of cores you will get, but not the type of CPU. Rather, it offers x number of cores as the billable compute unit, because the virtualization layer running on top of the actual physical resource divides the CPU among a given number of virtual machines, each of which will have access to the same CPU but not be able to share the same computing load.

Single-Tenant Cloud

Multi-tenant clouds typically provide units of resources that "look like" actual physical units, but are actually a part of the actual physical resource. Your application's backend database may reside on the same physical storage as your competitor's, but the data cannot seep through and cross the boundaries set by the VM.

Cost Optimization

The Amazon Elastic Compute Cloud (EC2) microinstance has an older generation Xeon-based server, but Amazon does not offer the actual Xeon CPU as a resource unit. Let's take a quick look at the much smaller niche segment, single-tenant clouds, and some of the aspects that need to be properly analyzed when deploying either a single-tenant or multitenant cloud configuration.

Security

Cloud Bursting

Transparent Switching

Load Balancing

Rapid Deployment

When you launch a new EC2 compute instance on AWS, you can choose either a vanilla operating system to run on top of the instance or one of the many preconfigured instances for your specific use case. This is not a common use case for most tenants, and there was no public Amazon Machine Image (AMI) available on the market, so Salman built one of his own.

Application-Specific Rapid Deployment Strategy

A use case that author Salman Ul Haq generally comes across is having an instance with computer vision libraries, the MEAN stack (MongoDB, ExpressJS, AngularJS, and Node.js), nginx, and a number of other tools, databases, and frameworks preinstalled. This is not fully automated, because he still needs to configure other tools and frameworks on top of it to make his application work.
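
A sketch of that remaining configuration step: launching a custom image and handing it a user-data script for the final, application-specific setup. The AMI ID, repository URL, and the commands themselves are placeholders, not the author's actual setup.

# Sketch: launch a preconfigured custom AMI and finish the
# application-specific setup with a user-data script. The AMI ID and
# every command in the script are placeholders.
import boto3

user_data = """#!/bin/bash
# Illustrative post-boot configuration
git clone https://example.com/my-app.git /opt/my-app
cd /opt/my-app && npm install
systemctl restart nginx
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0fedcba9876543210",   # custom AMI with the stack baked in
    InstanceType="c5.xlarge",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)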

Resource Provisioning

Installation and Configuration

Integration

File-Based Data Storage

Despite these and many more advantages that have made the hierarchical file systems so popular over the past decades, they fall acutely short when it comes to the scale of data storage and read/write operations with which the cloud operates. Read/write latency, lack of APIs, and scalability are among the challenges we would have if we stuck with legacy hierarchical file systems.

Read/Write Latency

Defining Structure: It is easier to express the structure of both structured and unstructured data when it is organized at multiple levels. You can have a top-level folder called "Family Photos" with subfolders either for each family member or based on location or event.

Lack of APIs

Often, these organizations must write gateway components to translate between internal data access protocols and external cloud object-based storage.

Scalability

Object Storage

Structured vs. Unstructured Data

The RDBMS will manage the data stored across a networked storage system, and consumers of that data will be able to access it through SQL or SQL-like queries. A large part of the data generated every day consists of the abundance of visual and audio files, the billions of web pages on the Internet, and the multitude of sensors in wearable devices and similar sources that generate data at high frequency.

REST APIs

Relational database management systems (RDBMS) primarily dominated storage, read/write access, and transactions over structured data. We need a better way to organize this data and enable fast access (read/write) to it.

Object ID

These document-based "databases" implement the core features of object-oriented storage and can be deployed on your custom cloud instance or used as a hosted solution offered by several startups. It is perhaps one of the most popular and earliest implementations of a public cloud object storage service.

Object Life Cycle on Amazon S3

Metadata

For example, if you run a video site, a simple way to maintain information about the media files you've stored as objects on S3 would be to create key-value pairs for each piece of data you'd like to associate with the media file, which may include the title, genre, length in seconds, and a brief description of the media file, as shown in Table 1.2. This way you would just keep the object ID of that media file in another database, and whenever you access the media file, you simply fetch its metadata to fill in the details.
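
A minimal boto3 sketch of that pattern, storing a media object with custom metadata and then reading the metadata back without downloading the blob; the bucket, key, file name, and metadata values are placeholders mirroring the video-site example.

# Sketch: attach custom metadata to a media object on S3, then read it
# back. Bucket name, key, file name, and metadata values are placeholders.
import boto3

s3 = boto3.client("s3")
bucket, key = "my-video-bucket", "videos/episode-01.mp4"

with open("episode-01.mp4", "rb") as media:
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=media,
        Metadata={                      # stored as x-amz-meta-* headers
            "title": "Episode 1",
            "genre": "documentary",
            "length-seconds": "1800",
        },
    )

# Fetch only the metadata, without transferring the blob itself.
head = s3.head_object(Bucket=bucket, Key=key)
print(head["Metadata"])                 # {'title': 'Episode 1', ...}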

Data/Blob

Legacy file storage systems do not expose the metadata of a file beyond what the operating system exposes. The concept of defining custom metadata for a file is also foreign to legacy file systems.

Extended Metadata

When you create an object and push it to the public cloud for storage, you don't need to specify the location where you want the object to be stored. Likewise, when you want to retrieve that object and consume the data, simply make a GET call to the object store's REST API and provide the unique ID.

Policies

In the case of Amazon S3, this will be the object ID with which you can query the object to read/modify/delete. This extended metadata also becomes part of the system-generated metadata as the unique identifier of the object is generated by the system to ensure its uniqueness and coherence with the ID generation scheme.

Replicas

The concept of object storage arose from the need to store, query, and serve enormous amounts of data stored by millions of tenants on the public clouds. Flat object storage: In contrast to the hierarchical address space of traditional file systems, the object storage concept of the cloud consolidates all available storage and places data on a flat address space.

2 To Grasp the Cloud—Fundamental Concepts

That's why we often refer to the modern version of cloud computing as "the second birth of the cloud". In their first life they were known as mainframes, computers that took up huge spaces. This limitation disappeared with the advent of personal computers, marking the end of the first wave of the cloud.

Elastic

Contrary to popular belief, the modern generation of IBM mainframes is still very much in use in a significant number of large organizations that run critical applications, store sensitive data and perform many millions of transactions every single day. The scale and spread of IBM's mainframes was revealed in an antitrust investigation into IBM's mainframe business line initiated by the European Commission.

Massive

Although the first generation of mainframe computers does not compare to today's cloud, they share the mechanism of access and use to some extent: hosted in dedicated rooms (now modern data centers) and served through terminals that connect them to multiple users (now scaled to hundreds of millions) who can place computational work on them.

On Demand

Compute resources in the cloud can be set up under on-demand (highest priced), spot (bidding based on the available compute resource pool and other bidders), and reserved (a commitment to continue using the virtual server for a specified period of time) pricing models.
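
The sketch below gives a rough comparison of what one instance might cost for a month under each model; the hourly rates are invented and real prices vary by instance type and region.

# Illustrative comparison of the three pricing models for one instance
# running a full month. All hourly rates are made up for the example.
HOURS_PER_MONTH = 730

pricing = {
    "on-demand": 0.10,   # $/hour, highest priced, no commitment
    "reserved": 0.06,    # $/hour effective, term commitment
    "spot": 0.03,        # $/hour, bid-based and interruptible
}

for model, rate in pricing.items():
    print(f"{model:>9}: ${rate * HOURS_PER_MONTH:7.2f}/month")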

Virtualized

Secure

Always Available

This is not insignificant for an online business or social application that serves hundreds of millions of users around the world 24 hours a day, especially transaction and payment gateway systems. Therefore, modern clouds must have failover support and mechanisms to ensure the highest possible availability. This confirms that full reliance on cloud providers for seamless auto-failover support is not recommended.

The True Definer of Cloud Computing

Serving the Whole World

Virtualization allows multiple OS instances to be packaged on the same hardware, running independently with different software stacks.

Use Cases and Examples

Benefits of Hypervisors

Hypervisor Security Concerns

Proprietary vs. Open Source

Moore’s Law, Increasing Performance, and Decreasing Enterprise Usage

These hypervisors must map to every element of the hardware they run on. In the following sections, we provide a quick overview of some of the popular proprietary and open source hypervisors available.

Xen Cloud Platform (Open Source)

The job of a hypervisor would be to transparently virtualize all these resources so that the end user of the operating system can use the operating system without having to make any modifications.

Operating System Agnostic

Device Driver Isolation

Minimal Footprint

PV Support

KVM (Open Source)

OpenVZ (Open Source)

VirtualBox (Open Source)

Citrix XenServer (Proprietary)

VMware vSphere/ESXi (Proprietary)

Memory Overcommitment

Microsoft Windows Server 2012 Hyper-V

Consumer vs. Enterprise Use

Hypervisors for the Mobile Devices

For a quick refresher, BYOD refers to the use of personal mobile devices for work. The Xen Project has also started supporting ARM virtualization extensions, meaning that Xen-based hypervisors can also enable Type 1 hypervisors for mobile devices.

Hypervisors for Enterprise

Workstation vs. Infrastructure

Workstation as a Service

The entire process of starting and stopping OS instances takes less than a minute and can be repeated for every workstation in the company. With the VDI model, a single operating system image can be deployed on thousands of virtual machines and run on every single workstation in the enterprise.

Infrastructure as a Service

On-premises data center deployments within an enterprise can be implemented as a private cloud offering for hosting applications consumed across the enterprise. Scalability: Instances can be easily activated and decommissioned based on demand, and because there is no need for per-instance configuration in most cases, the process can be easily replicated as more users connect to the company's apps and demand for resources increases.

Shared Resources

Most of the benefits of implementing hypervisor-based infrastructure/data center virtualization and workstation virtualization have already been discussed in detail in the previous sections.

Time to Service/Mean Time to Implement

Resource Pooling

Scalable

Available

Portable

Network and Application Isolation

Infrastructure

Platform

All major public cloud providers offer multiplatform support that includes support for all major operating systems.

Applications

Enabling Services

There are a multitude of open source and proprietary hypervisors on the market, available for both consumer and enterprise use.

3 Within the Cloud: Technical Concepts of Cloud Computing

The discussion of the technical concepts of cloud computing includes explanations of the hardware technology involved in the field, something that few cloud technicians take for granted. The truth is that cloud computing is just one application in the broader field of scalable computing.

Defining a Data Center

It is not always an option to just throw away your old data centers and move to the cloud. Many of the applications running in data centers are specifically designed to cater to a relatively small number of employees.

Hardware and Infrastructure

An important consideration is the building in which the data center will be located, especially the floor space. All data center entry points should be controlled by security access devices such as card readers, digit locks and biometric scanners.

Traditional vs. Cloud Hardware

System break-ins are common and it is estimated that data center systems are besieged at least a hundred times a day. Cloud data centers are designed with future upgrades in mind, simply because demand will only grow and the data center will need to scale for that.

Determining Cloud Data Center Hardware and Infrastructure

Traditional data centers support multiple management tools, require many application updates and patches, and support a wide variety of software and hardware, whereas cloud data centers standardize on a few management tools and require minimal patching and updates. This means that it is easier to put in more of the same, and with multitenancy, many people will share in the expenses, which will make everything cheaper.

Designing for Cloud Functionality

The word scalability is usually used to define cloud computing, and it is the single most important aspect around which cloud data centers are designed and built. Cloud computing advocates will argue that this is not true, that security measures in terms of software and hardware are the same for both on-premises and cloud data centers.

Cloud Data Center Construction

Traditional data centers housed a multitude of applications from as many platforms, each with a different requirement, such as the OS and type of processor used. When you're building cloud data centers to serve multiple customers, your goal is to achieve economies of scale.

Optimization and the Bottom Line

The more customers you think you will have, the cheaper it is to build a data center; because all these customers are bearing and sharing the costs, the ROI would be faster. IT is critical to business growth because it provides scalability and the ability to manage the increasing complexity in an organization as well as its business model and processes.

The Cost of Data Center Downtime

The study mentions that the most recent downtime events of the 41 participating data centers totaled. In fact, about 39 percent of the time the power infrastructure will be the cause of a costly downtime event.

Data Center Monitoring and Maintenance

With a mix of systems and tools, you may not be able to tell what resources you have in your data center. It can provide clear visibility of all data center resources along with their relationships and even physical connectivity in order to support monitoring and reporting of the entire relevant data center infrastructure.

The Value of Energy Efficiency

In the planning stage, engineers typically look at the energy efficiency of every piece of IT equipment that goes into the data center. When you pair this with high-efficiency UPSs and servers, you can be sure that your data center is optimized for power efficiency.

Open Source

Although cloud computing data center hardware is homogeneous and used in a single environment, the virtual environment layered on top adds its own complexity. Users only need to pay for what they provision; this is the utility computing aspect of cloud computing.

Open Compute Project

Server Rack (Open Rack): Open Rack is a server rack design standard that integrates the rack into the data center design. Data Center: The design of Open Compute data centers is intended to leverage and maximize the mechanical performance and the thermal and electrical efficiency of its other technologies.

OpenStack

Block Storage (Cinder): Cinder is the module that allows OpenStack to provide block-level persistent storage devices that can be used with OpenStack compute instances. Image Service (Glance): The server and disk image capture and delivery module, which is part of the OpenStack backup system.
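
A minimal sketch of both modules from Python using the openstacksdk library, assuming credentials live in a clouds.yaml entry named mycloud; the volume name and size are arbitrary.

# Sketch using openstacksdk (assumes a clouds.yaml entry named "mycloud"):
# create a Cinder volume and list the images Glance can deliver.
import openstack

conn = openstack.connect(cloud="mycloud")

# Cinder: request a 10 GB persistent block device for a compute instance.
volume = conn.block_storage.create_volume(name="data-vol", size=10)
print(f"Created volume {volume.id} ({volume.size} GB)")

# Glance: enumerate the server/disk images available for provisioning.
for image in conn.image.images():
    print(image.name)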

Proprietary

So the final factor in the battle between open source and proprietary is simply the needs and wants of the organization. Cloud technology is growing at a high speed, and many organizations are diving into cloud computing because of the benefits.

What It Means for Service Providers

Planning Your Cloud

It's a straightforward question that every business asks itself, but it's the answer that will truly set the business apart. But so many planned goals often fall short because of the disconnect between business and IT, even though the business in this case is IT itself.

Cloud Service Solution Planning Workshop

However, the general length of the workshop for smaller projects with daily meetings will be two to three weeks. The workshop covers an overview of the organization's current state of IT hosting as well as current and planned cloud initiatives.

Workshop Attendees

Some enterprises want to start fresh with new software and hardware, and in some cases, even an entirely new data center. Then there are enterprises that want to transform existing IT assets into a cloud environment.

Building Your Cloud

If you are starting over with new hardware and software or a new data center, this should already have been taken care of in the planning process during the discovery of existing IT assets.

Running Your Cloud

Although the technical details may vary depending on the current infrastructure, the process can be divided into several steps.

What This Means for Customers

However, this creates fragmentation, and it is the job of the cloud management platform to tie all this together seamlessly. On the other side of the world, in the United States, policies and procedures can be completely different.

Planning the Documentation of the Network and IP

In the IT environment, the methodologies are the activities that are performed based on the defined policies. For example, a Japanese company may value cleanliness and orderliness in the workplace, so they make it a policy to keep the workplace clean, tidy and free of distractions.

Application-Optimized Traffic Flows

Edge Network: It provides connectivity to external users over different network types, mostly the Internet. Wide area networks (WANs), backbones, and virtual private networks (VPNs) are other networks that can connect to the edge network.

Simplified Network Infrastructure

And even if the nodes that were not replaced scale very well or use the same hardware as the new infrastructure (that is, you are dealing with a uniform and simple network infrastructure), these nodes are still older than the rest of the network. All servers, network nodes, network-attached storage (NAS), automation devices, and the rest of the essential hardware must be configured in exactly the same way—the same operating system, software, and firmware.

Implementing Change Management Best Practices

Regardless of the origin of the change, change management is the important process of taking a structured and planned approach to align the organization with whatever change needs to occur. Planning: Develop a plan and documentation that clearly define the change objectives to be achieved and the means to achieve them.

Request for Change

Well-Informed Stakeholders: Encourage stakeholder participation and engagement in the change plan, and facilitate open and consultative communication to promote awareness and full understanding of the necessary changes permeating the organization. Change management also serves as the governing mechanism for configuration management by ensuring that all configuration changes in the IT environment are documented and updated so that they are reflected in the configuration management system and the change documentation.

Change Proposals

Change Type

Change Manager

Change Advisory Board

Review and Closure

Documentation

Configuration Control

All configuration changes must first be evaluated before being documented, tested, and then implemented. The change manager should be responsible for documenting any changes that need to be made.

Asset Accountability

The change manager must then ensure that all configuration items have been successfully updated and that the specified objectives have been met.

Managing the Configuration

Even at the beginning of the implementation process, configuration management is already present, defining, documenting, and tracking the IT assets that will be managed as configuration items (CIs). The essence of configuration management is the trust it places in the CMS and the CI owners.

Configuration Management Database (CMDB)

That trust can also be a disadvantage, but it can be used as an indicator for the success of the configuration management process. The following sections describe some key concepts that you need to understand to fully understand configuration management.

Managing Capacity

The capacity manager or actor responsible for capacity management should be the most critical member of the change advisory board (CAB). Capacity management must be closely related to configuration management, because the capacity and performance of any IT asset largely depends on its configuration.

Managing the Systems Life Cycle

It exists throughout the service lifecycle and interacts with the other four phases to tailor the available services to the organization's business needs (if internal) or the customer's business needs. The phases of the ITIL framework are interrelated with continuous improvement serving as a regulator of quality, which becomes apparent after multiple journeys through a service's lifecycle.

Scheduling Maintenance Windows

Further improvements to processes and resources will enable the organization to implement its service offerings effectively, which in turn provides value to customers and the organization itself. The Microsoft Operations Framework for lifecycle management attempts to simplify this process into four phases and can be applied even to organizations that are not service-oriented.

Server Upgrades and Patches

Managing Workloads Right on the Cloud

To serve such situations, your cloud infrastructure must be built for fast and predictive provisioning. The traffic level would reach the threshold and pass it quickly, and your system would not be able to provision fast enough, causing some users to lose their temper and leave.

Managing Risk

That's not to say it's not a requirement; especially if the organization has not yet embraced the cloud but needs to support mobile users. It is now clear that we are not in favor of MDM and EMM simply because they are not needed in cloud computing.

Virtualizing the Desktop

It's nothing new, but it's revolutionary, and before cloud computing technology got to where it is now, it wasn't a perfect solution. For example, how do we know that the person logging into the virtual machine is an actual employee and not their child who happened to be playing with the machine, or worse, a thief?

Enterprise Cloud Solution

These policies and procedures will guide most employee and management actions with respect to the cloud system. A configuration management database is maintained to monitor all approved and unapproved changes, as well as the previous state of the system prior to the change.

5 Diagnosis and Performance

Anomalies can be detected or, even better, prevented through the use of best practices and the performance concepts of cloud computing. The following are some performance concepts and indicators that we use in the field to quantify the performance of our cloud computing systems.

Input/Output Operations per Second (IOPS)

There are two measurements for sequential operations: sequential read IOPS and sequential write IOPS, which are the average number of sequential read and write I/O operations per second, respectively. The two measurements for random operations are random read IOPS and random write IOPS, which indicate the average number of random read and write I/O operations per second, respectively.

Read vs. Write Files

Sequential Operations: Sequential read or write operations access storage locations on a device in a contiguous fashion and usually occur when large transfer sizes are involved. Random Operations: Random read or write operations access storage locations on a device in a random or noncontiguous manner and are associated with operations on small data sizes.

File System Performance

I/O Size (bytes/kilobytes): It is best if this value matches or is close to the block size of the file system. Bandwidth measures help in planning storage capacity as well as in determining the appropriate memory and storage characteristics of the file system.
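
A back-of-the-envelope illustration of how IOPS and I/O size combine into bandwidth; both numbers are invented.

# Rough relationship between IOPS, I/O size, and bandwidth,
# using illustrative numbers.
iops = 4_000                 # measured I/O operations per second
io_size_kib = 64             # I/O size, ideally close to the FS block size

bandwidth_mib_per_s = iops * io_size_kib / 1024
print(f"{iops} IOPS at {io_size_kib} KiB per I/O is roughly "
      f"{bandwidth_mib_per_s:.1f} MiB/s of sustained bandwidth")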

Metadata Performance

The file system will require more specific information about the file, which it will not display to the user. File metadata is a type of file in itself, and the file system stores metadata with the file it points to, or in separate folders or even media.

Synchronous Metadata Update

And because of the growing gap between processing performance and disk access time, there is an obvious performance bottleneck. This growing disparity between processing and mechanical performance, coupled with increasing primary storage capacity, really means only one thing: high-performance file systems, such as those used in cloud computing systems, must use special memory and storage algorithms or techniques to cover the performance gap created by disk access latencies.

Soft Updates

While updating metadata, the in-memory copy of a particular block is normally modified, and then the corresponding dependency information is updated appropriately. So whenever these dirty in-memory blocks are flushed or saved to disk, the dependency information is accessed.

Caching

For this, the soft update engine creates and maintains dependency information to keep up with sequencing requirements. This is a simple approach that allows any active client to use the memory of another, inactive client as backup storage.

Bandwidth

The only explanation is that the problem lies in the WAN or the external Internet connection at the user's end. And between which ends (the user's or the provider's) do we have to guarantee the performance of the network and the bandwidth?

Throughput: Bandwidth Aggregation

As we turn our attention back to things under our control, what steps should we take to ensure network availability and bandwidth on our end? Ultimately, the decision to deliver bandwidth performance rests with the provider, and the provider can only guarantee this performance in and around its own jurisdiction, that is, its own networks and those between its data centers and availability zones.

Bonding

If you have more cars, it will take longer for them all to arrive. Now if you have four lanes, all four cars would arrive at exactly the same time, in exactly ten minutes, saving you two minutes.

Teaming

As an analogy, think of it as a freeway: if you have four cars going to the same destination, about 10 minutes away, and they are traveling at the same speed on a single lane road, they would arrive one after the other, with the fourth car possibly arriving at the 12-minute mark. This can be done even if the available links do not use the same bandwidth or connection speed, and it dramatically increases throughput and provides good redundancy and failover protection between the connected points.
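
The same idea in network terms, as a rough calculation; the file size and link speed are invented and protocol overhead is ignored.

# Rough illustration of bandwidth aggregation: moving one file over a
# single link vs. four bonded/teamed links (overhead ignored).
file_size_gb = 4
link_mbps = 1_000            # one 1 Gbps link

single_link_s = file_size_gb * 8_000 / link_mbps
four_links_s = file_size_gb * 8_000 / (4 * link_mbps)

print(f"One link:   {single_link_s:.0f} s")
print(f"Four links: {four_links_s:.0f} s")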

Jumbo Frames

In the case of teaming, your three friends each carry one load based on their strengths; they don't necessarily combine into the Incredible Hulk who can carry all your stuff at once. The savings in bandwidth and CPU time can result in significant increases in network throughput and performance.

Network Latency

Jitter: Jitter is the variation in packet throughput or transmission delay caused by queuing; it is the effect of contention and serialization (where data packets can only travel one after the other) that occurs anywhere along the network path, between and within networks. Depending on the time these servers are running, the network congestion in their areas can be good or bad, as Internet traffic rises and falls and competition for bandwidth and infrastructure takes place.
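
A tiny sketch that estimates jitter as the average deviation from the mean latency, over a handful of invented round-trip samples.

# Estimate jitter from round-trip latency samples (values are invented).
from statistics import mean

latencies_ms = [42.1, 45.8, 41.9, 55.3, 43.0, 47.6]

avg = mean(latencies_ms)
jitter_ms = mean(abs(sample - avg) for sample in latencies_ms)
print(f"Average latency: {avg:.1f} ms, jitter: {jitter_ms:.1f} ms")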

Hop Counts

Back in the days of enterprise data centers and networks, the hop count could be easily determined. The hop count is related to how many network nodes (such as routers) the data passes through to reach its destination, but the shortest hop does not often correspond to a geographically shortest path.

Quality of Service (QoS)

For example, the shortest route from London to Paris might go through Quebec, which doesn't make sense geographically, but because of the way web networks intertwine, it might only make sense in the number of hops. Hop counts can be measured using various network tools and applications, the most common example being traceroute.
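
A quick sketch that wraps the traceroute command from Python to approximate the hop count; the hostname is a placeholder and the output parsing is deliberately rough.

# Approximate the hop count to a host by wrapping the traceroute CLI
# (assumes traceroute is installed; the hostname is a placeholder).
import subprocess

result = subprocess.run(
    ["traceroute", "example.com"],
    capture_output=True,
    text=True,
    check=False,
)

# Every numbered line after the header corresponds to one hop.
hops = [line for line in result.stdout.splitlines()[1:] if line.strip()]
print(f"Approximate hop count: {len(hops)}")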

Multipathing

It makes sense because it's easier to scale and expand in a virtual environment when the number of servers you can deploy is no longer limited by their size. If increasing the processing power of the servers becomes cost prohibitive and the performance benefits are no longer attractive enough to justify the costs, you can increase the number of servers.

Access Time

All these individual measurable variables are added together to come up with a single value that evaluates the performance of the drive in terms of access time. This measure varies greatly from manufacturer to manufacturer and by model; even drives of the same class and model may not perform at exactly the same level.

Seek Time

Solid-state drives don't rely on mechanical parts, so they don't have the same limitations, and access times are fairly short and consistent.

Rotational Latency

Data Transfer Rate

Amazon categorizes its EC2 instance types (http://aws.amazon.com/ec2/instance-types/) into various general-purpose, compute-optimized, memory-optimized, and storage-optimized offerings. SSDs will be very important for web servers that serve millions of page clicks per day.

Disk Tuning

An HDD will generally be suitable for batch processing systems or systems that are not to be used in real time. Depending on the disk usage, it should be done once a month or even once a week.

Swap Disk Space

I/O Tuning

For example, in a 10-disk system, a single disk is experiencing 25 percent load instead of only 10 percent, which is a good indicator that the disk is hot.
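
A small sketch of that check; the per-disk load percentages are invented, and any disk carrying well over its fair share is flagged as hot.

# Flag a "hot" disk: in an evenly balanced 10-disk system each disk
# should carry roughly 10 percent of the I/O load. Figures are invented.
disk_load_pct = {
    "disk0": 25, "disk1": 8, "disk2": 9, "disk3": 7, "disk4": 9,
    "disk5": 8, "disk6": 9, "disk7": 8, "disk8": 9, "disk9": 8,
}

expected_share = 100 / len(disk_load_pct)   # ~10% per disk
for disk, load in disk_load_pct.items():
    if load > 2 * expected_share:
        print(f"{disk} is hot: {load}% of I/O vs ~{expected_share:.0f}% expected")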

Analyzing I/O Requirements

If you already have a system in place and it's not performing as expected, follow these steps to determine if the system is configured with sufficient hardware resources first before moving on to the next tuning methods. And for the system to analyze such large sets of data, they would have to be quickly written to disk so that they could be processed and analyzed.

Performance Management and Monitoring Tools

This constant reading and writing will add latency to analysis programs, which can negatively impact the ability to reach timely decisions because the system is taking a long time to process analysis reports. For example, the administrator can schedule the system to create a new web server whenever the running one is experiencing 80 percent load traffic, giving it enough capacity and buffer time to allow the new server to be fully configured.
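
A minimal sketch of that 80 percent threshold rule; get_average_load and provision_web_server are hypothetical hooks into whatever monitoring and provisioning APIs are in use.

# Sketch of the 80 percent scale-out rule. The two callables are
# hypothetical hooks into the monitoring and provisioning systems.
import time

LOAD_THRESHOLD = 80.0        # percent

def autoscale_loop(get_average_load, provision_web_server, interval_s=60):
    """Poll load and add a web server whenever the threshold is crossed."""
    while True:
        load = get_average_load()
        if load >= LOAD_THRESHOLD:
            # Start the new server early enough that it is configured
            # and in rotation before the running ones saturate.
            provision_web_server()
        time.sleep(interval_s)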

Figure 5.7 and Figure 5.8 show the CopperEgg server monitoring tool. Figure 5.9 shows the metrics available with AWS CloudWatch.

Hypervisor Configuration Best Practices

Your cloud system has the ability to automatically scale with traffic and performance requirements based on its usage, the extent of which is set by the administrator according to certain thresholds. When the new virtual server is ready to receive traffic, the system can automatically balance the load between all the running servers so that no server is more burdened than the others.

Memory Ballooning

So, for example, if the guest operating system is allocated 4 GB of memory, after it has used most of it, it marks 2 GB as free. Since it does not know about the memory mapping to the host's physical memory, the free memory remains in the guest operating system.

I/O Throttling

The host memory is finite, so it eventually runs out, but the guest OS has no idea about this, so the hypervisor has to reclaim the memory through the balloon driver, which collects free memory from the guest OS. And as mentioned, if there is not enough free memory to give to the driver, the guest operating system will decide which areas of memory need to be paged out so that it can free memory to give to the balloon driver for remapping for other purposes.

CPU Wait Time

The most prominent of these parts is the disk drive, which incidentally is also the biggest bottleneck of the system. Disk Performance: The disk is kind of like the bread and butter of the enterprise.

6 Cloud Delivery and Hosting Models

The private cloud deployment model is primarily the provision of private and unbundled IT infrastructure and resources. If an organization already has some kind of data center set up in house, this model will make a lot of sense because there is no additional capital expenditure to be incurred and no more installation is required, just reconfiguration of the existing infrastructure.

Full Private Cloud Deployment Model

Examples include banks that have an on-premises cloud computing infrastructure providing software, computing, storage, security, and backup, and hospitals and healthcare providers that have strict regulatory requirements and implement private cloud and virtualization solutions within their IT facilities.

Semi-private Cloud Deployment Model

The public cloud deployment model is the most widely used and popular deployment model for cloud solutions. Examples include general businesses (small and medium-sized businesses, or SMBs) that use a public cloud to host their applications, back up data and files, and more.

On-Premises Hosting

But this time, cloud computing seems to be a better option than just utility computing. The barrier to entry is very low, allowing small businesses to play in the same field as the big boys.

Off-Premises Hosting

It creates opportunities for businesses and enables smaller organizations to use the same technological power as their larger counterparts. However, this is not quite the same as when utility computing was clearly the only choice and local computing became the only choice after a while.

Miscellaneous Factors to Consider When Choosing between On- or Off-Premises Hosting

Power/Electricity Costs

Bandwidth Costs and Limitations

Even if the technology your cloud provider uses to solve bandwidth problems has no problem handling large files, the final bottleneck would be the end user's Internet connection. Again, this isn't much of a problem if your workload doesn't involve moving large chunks of data in and out of your cloud environment.

Hardware Replacements and Upgrades

This problem won't matter much when all processes and data transfers occur within your virtualized environment; problems arise when your current workflow requires moving large data files to and from your cloud system. This means that in addition to your constant battles with your ISP for bandwidth on your end, you also have to contend with limitations from your cloud provider.

Over- and Underutilization of Resources

And organizations that choose on-premises solutions are usually not the ones that directly benefit from their cloud solution. This is one of the biggest dilemmas after a few years of operating on-premises solutions.

Comparing Total Cost of Ownership

In addition, 60 percent of respondents said that cloud computing has reduced the need for their IT team to maintain infrastructure, giving them more time to focus on strategy and innovation. And indeed, 62 percent of companies that have saved money are reinvesting those savings back into the business to increase headcount, increase wages and drive product innovation.

Private Cloud Accountability

It is a paradigm shift that brings various complications and raises questions of liability and responsibility in the event of outages. The individual hardware and software distributors for the various parts used in the system take responsibility and liability by issuing regular software maintenance and updates, because technically customers still pay license fees regularly in some cases.

Public Cloud Accountability

The best example of this would be website owners renting computing resources from cloud service providers to create virtual web servers that in turn serve the end users visiting the website. End Users As the name suggests, end users are at the end point of the business hierarchy, the consumers.

Responsibility for Service Impairments

They are also known as dealers; they purchase services from cloud service providers through service models such as IaaS and SaaS and then push them to the end user in various variants. Well-known companies like Pinterest, Instagram and Netflix rely on cloud service providers to keep their services running, especially AWS.

Accountability Categories

Cloud service provider The cloud service provider provides the cloud computing infrastructure and facilities and is therefore responsible for the robust and reliable delivery of cloud services. They interface directly with the cloud service providers using the infrastructure, so they are responsible for the proper provisioning, configuration and operation of the cloud applications they push to end users.

Multitenancy Issues

Infrastructure Providers These are providers of computing, networking, storage, and other hardware and platform elements that will be used by the cloud service provider to form its infrastructure. For example, a telecom giant like AT&T is simultaneously a network service provider, a cloud service provider, and a cloud consumer simply because it chooses to do business in all three categories.
