Managing resources in the cloud
Cloud computing is now an established industry standard that is growing extremely fast, and it relies on large-scale virtualized data centers to provide rapid, cost-effective computing services. To manage such a large volume of resources efficiently, cloud computing leans heavily on automation and dynamic resource management.
With a wide variety of private, hybrid, and public cloud-based systems and infrastructure already in use, companies need to consider resource management in their cloud computing strategy. However, resource management for a system as complex as cloud computing requires different ways of measuring and allocating resources.
The resource management strategy
Resource management is a core function of any cloud system. Inefficient resource management has a direct negative effect on performance and cost, and it can also indirectly affect system functionality when the system becomes too expensive or too ineffective due to poor performance.
The strategies for cloud resource management differ across the three cloud delivery models – IaaS, PaaS, and SaaS. In some cases, when cloud service providers can predict a spike, they can provision resources in advance (e.g., seasonal web services).
However, an unplanned spike is more complicated. You can use Auto Scaling for unplanned spike loads, but to do that you need a pool of resources you can allocate or release on demand, plus a monitoring system that lets you decide in real time when to reallocate resources. Keep in mind that Auto Scaling is well supported by PaaS services but is more difficult for IaaS due to the lack of standards.
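The monitoring-and-reallocation loop described above can be sketched as a simple threshold rule. This is a minimal illustration, not a real provider API: the utilization thresholds and pool limits are assumptions chosen for the example.

```python
# Minimal sketch of threshold-based Auto Scaling: decide how many nodes
# the pool should have, given the latest average CPU utilization.
# All thresholds and limits here are illustrative assumptions.

def desired_capacity(current_nodes, cpu_utilization,
                     scale_up_at=0.80, scale_down_at=0.30,
                     min_nodes=2, max_nodes=20):
    """Return the target pool size for one monitoring interval."""
    if cpu_utilization > scale_up_at:
        target = current_nodes + 1    # allocate one node from the pool
    elif cpu_utilization < scale_down_at:
        target = current_nodes - 1    # release one node back to the pool
    else:
        target = current_nodes        # load is within the comfort band
    # Never shrink below the floor or grow past the pool's capacity.
    return max(min_nodes, min(max_nodes, target))
```

A real system would call a decision function like this on every monitoring tick, then ask the provider to add or remove instances to match the returned target.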
Technically speaking, a cloud is a portion of cluster resources that can grow and shrink to accommodate load changes. Cloud resources are controlled on three independent levels:
Cluster level – Power management at the cluster level is handled by a cluster resource manager (CRM), a software complex that manages the resources and tasks in a cluster to maintain its efficiency. Basically, the CRM is responsible for the creation and deletion of clouds.
Node level – Node-level power management is done by the operating system (OS), which controls the high-level state of the equipment. For instance, to save energy the OS can put a processor (CPU) into a sleep state or spin down disks.
Hardware level – Modern CPUs consist of many modules, not all of which are involved in every operation, so unused modules can be switched off. This is done by a special circuit responsible for the CPU's internal power management, meaning it all happens at the hardware level without involving the OS.
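To make the cluster level concrete, here is a sketch of one decision a CRM might take: finding nodes that host no tasks, which are then candidates for node-level power saving. The node and task structures are assumptions invented for this example, not any particular CRM's data model.

```python
# Illustrative cluster-level decision: a resource manager scans the
# task-to-node assignments and reports nodes with nothing running on
# them, so node-level power management can put them to sleep.

def idle_nodes(nodes, assignments):
    """Return the nodes that host no tasks.

    nodes:        iterable of node names in the cluster
    assignments:  mapping of task id -> node name it runs on
    """
    busy = set(assignments.values())
    return [node for node in nodes if node not in busy]
```

For example, with nodes `["n1", "n2", "n3"]` and a single task assigned to `"n1"`, the function reports `["n2", "n3"]` as idle.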
Controlling the cloud
Allocation techniques in computer clouds must be based on a disciplined approach rather than ad hoc methods. Here are the four basic mechanisms for implementing resource management policies in cloud computing:
Control theory – Control theory uses feedback to guarantee system stability and predict transient behavior, but it can only predict local behavior.
Machine learning – A major advantage of machine-learning techniques is that they don’t need a performance model of the system. You could apply this technique to coordinating several autonomic system managers.
Utility-based – Utility-based approaches require a performance model and a mechanism to correlate user-level performance with cost.
Market-oriented – Market-oriented mechanisms, such as auctions for bundles of resources, don't require a system model.
A cloud computing infrastructure is a complex system with a large number of shared resources. These are subject to unpredictable requests and can be affected by external events beyond your control. Cloud resource management requires complex policies and decisions for multi-objective optimization. This is why planning ahead for how you are going to manage these resources will help ensure a smooth transition to working with the cloud.
Photo credit: https://www.flickr.com/photos/[email protected]/2712986388/