Therefore, this technique needs to be modified to ensure that all the constraints of the DRE system are met. This chapter provides the motivation for auto-deployment derivation techniques for determining valid DRE system deployments.
Satisfying Rate-monotonic Scheduling Constraints Efficiently
A variety of algorithms, such as variations of bin packing, have been created to solve the multiprocessor scheduling problem. We describe how ScatterD ensures schedulability in Section IV while allowing for complex objective functions, such as network bandwidth reduction.
Reducing the Complexity of Memory, Cost, and Other Resource Constraints
A major limitation of applying these algorithms to optimize deployments is that bin packing does not allow developers to specify which deployment characteristics to optimize. For example, bin packing does not allow developers to specify an objective function based on the total network bandwidth consumed by a deployment.
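For instance, a first-fit-decreasing bin-packing sketch (a standard heuristic, not ScatterD's algorithm; task utilizations are illustrative) shows that the algorithm's only implicit objective is the number of processors used; there is no hook for an objective such as network bandwidth:

```python
# First-fit-decreasing bin packing of task CPU utilizations onto
# processors. The heuristic minimizes processor count only; nothing in
# the loop can account for, e.g., network bandwidth between tasks.
def first_fit_decreasing(utilizations, capacity=1.0):
    processors = []  # each entry is the remaining capacity of one processor
    for u in sorted(utilizations, reverse=True):
        for i, free in enumerate(processors):
            if u <= free:
                processors[i] = free - u  # task fits on an open processor
                break
        else:
            processors.append(capacity - u)  # open a new processor
    return len(processors)

print(first_fit_decreasing([0.5, 0.4, 0.3, 0.3, 0.2]))  # 2 processors
```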
Satisfying Complex Dynamic Network Resource and Topology Constraints
Thus, the number of processors used and the network bandwidth requirements will not change from one execution of the bin-packing algorithm to another. Additional limitations may also be present based on the type and application of the DRE system being configured.
Resource Interdependencies
A DRE system configuration consists of a valid hardware configuration and valid software configuration in which the computing resource needs of the software configuration are met by the computing resources produced by the hardware configuration. If the resource requirements of the software configuration exceed the resource production of the hardware configuration, a DRE system will not function correctly and will therefore be invalid.
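This validity condition can be sketched directly (a minimal illustration; the resource names and the `is_valid` helper are hypothetical):

```python
# A configuration is valid only if every resource demanded by the
# software configuration is produced by the hardware configuration in
# sufficient quantity.
def is_valid(hardware_production, software_demand):
    # every demanded resource must exist on the hardware side...
    if any(r not in hardware_production for r in software_demand):
        return False
    # ...and production must cover demand for each resource
    return all(software_demand.get(r, 0) <= amount
               for r, amount in hardware_production.items())

hw = {"ram_mb": 1024, "cpu_mips": 800}
sw = {"ram_mb": 900, "cpu_mips": 850}
print(is_valid(hw, sw))  # False: CPU demand exceeds production
```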
Component Resource Requirements Differ
If RAM or CPU resources are scarce, the medium- or low-resolution option should be selected. If resources are abundant, choosing the high-resolution component yields the system with the best performance.
Selecting Between Differing Levels of Service
For example, the satellite system shown in Figure V.1 has three options for the image resolution software component, each offering a different level of performance. While the performance of the low-resolution component is less than that of the high-resolution component, it requires a fraction of the computing resources.
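This trade-off can be sketched as a simple resource-aware selection (the component values and resource requirements below are illustrative, not those of the actual satellite system in Figure V.1):

```python
# Pick the highest-value image-resolution component whose resource
# needs fit within the available RAM and CPU. Values are made up.
components = [
    {"name": "high",   "value": 10, "ram": 512, "cpu": 400},
    {"name": "medium", "value": 6,  "ram": 256, "cpu": 200},
    {"name": "low",    "value": 3,  "ram": 64,  "cpu": 50},
]

def select_resolution(avail_ram, avail_cpu):
    feasible = [c for c in components
                if c["ram"] <= avail_ram and c["cpu"] <= avail_cpu]
    return max(feasible, key=lambda c: c["value"])["name"] if feasible else None

print(select_resolution(300, 250))  # "medium": high resolution does not fit
```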
Configuration Cannot Exceed Project Budget
Exponential Configuration Space
For example, the relationship between hardware nodes and software components, in which the software components consume resources of the hardware nodes, must be defined. AMP provides a visual representation of the hardware and software components that makes it significantly easier to understand the problem, especially for users with limited experience in DRE system configuration. When ASCENT runs, it returns the best DRE system configuration it finds, as well as the cost and value of that configuration.
Once the value is set, the resource consumption of each option within each software component option element can be set in the same manner as described for the MMKP hardware problem. Further inspection shows that the selected hardware components can support both software components. In this chapter, a version of the knapsack problem is used to demonstrate how a DRE system configuration can be derived.
Evolving Hardware to Meet New Software Resource Demands
To derive a configuration for the entire electronic system, another 46 software components and 20 other hardware components must be examined. For example, developers must determine (1) which software and hardware components to purchase and/or build to implement the new feature, (2) how much of the total budget to allocate to software and hardware respectively, and (3) whether the selected hardware components provide sufficient resources for the chosen software components. These issues are related: for example, developers can choose the software and hardware components to dictate the allocation of budget to software and hardware, or the budget distribution can be fixed first and the components chosen afterward.
In addition, developers can choose the hardware components and then select software features that match the resources provided by the hardware, or the software can be chosen to determine what resource requirements the hardware should meet. The difficulty of this scenario can be demonstrated by assuming that there are 10 different hardware components that can be developed, resulting in 10 points of hardware variability. Even after each configuration is assembled, developers must determine whether the hardware components provide sufficient resources to support the chosen software configuration.
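The combinatorial growth described above can be illustrated with a short sketch (the 10 variability points come from the text; the binary include/exclude choice per point is an assumption for illustration):

```python
import itertools

# With 10 hardware variability points, each either included or excluded,
# the search space alone holds 2**10 candidate configurations, and every
# assembled candidate still needs a feasibility check against the
# software configuration's resource demands.
points = 10
candidates = list(itertools.product([0, 1], repeat=points))
print(len(candidates))  # 1024
```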
Evolving Software to Increase Overall System Value
Each replaceable hardware component has 5 deployment options from which the individual upgrade can be selected, thereby creating 5 options for each variability point. To determine which set of hardware components provides the optimal value (i.e., the highest expected return on investment) or the minimum cost (i.e., the smallest financial budget required to engineer the system), the configurations of component implementations must be investigated. The section titled "Mapping Hardware Evolution Problems to MMKP" describes how SEAR solves this challenge by using predefined software components and interchangeable hardware components to form a single MMKP evolution problem.
Since no new hardware is purchased, the entire budget can be allocated to the purchase of software. As long as the resource consumption of the selected configuration of the software components does not exceed the resource production of the existing hardware components, the configuration can be considered valid. This section describes how SEAR addresses this challenge by using predefined hardware components and evolvable software components to create a single MMKP evolution problem.
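A brute-force sketch of this MMKP-style selection, assuming two software variability points, a fixed budget, and the RAM produced by the existing hardware (all numbers are illustrative, not SEAR's actual inputs):

```python
import itertools

# One list of (value, cost, ram) implementation options per software
# variability point; exactly one option must be chosen per point.
options = [
    [(5, 30, 200), (3, 15, 100)],
    [(8, 50, 300), (4, 20, 150)],
]
budget, ram_available = 70, 450

def best_configuration():
    best = (None, -1)
    for combo in itertools.product(*options):
        value = sum(o[0] for o in combo)
        cost = sum(o[1] for o in combo)
        ram = sum(o[2] for o in combo)
        # valid only if cost fits the budget and RAM fits the hardware
        if cost <= budget and ram <= ram_available and value > best[1]:
            best = (combo, value)
    return best

combo, value = best_configuration()
print(value)  # 11: the cheap option at point 1 frees budget for point 2
```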
Unrestricted Upgrades of Software and Hardware in Tandem
In Figure VI.2, the software does not have any variability points suitable for development. V is the total value of the hardware and software components that make up the final system configuration. HC is a set of hardware components that make up the system hardware.
HH defines the maximum amount of heat that can be generated by the hardware H of the system. V(SC) is the total value of the software components SC that make up the final system configuration. The resource production of the hardware components HC must exceed the resource consumption of the software components SC, as given by HRSConf(F).
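These definitions can be summarized as a small optimization sketch, reconstructed from the prose above (the symbols Heat, RsProd, and RsCons are illustrative names, not the chapter's formal notation):

```latex
\begin{align}
\text{maximize } \; & V = V(HC) + V(SC) \\
\text{subject to } \; & \mathit{Heat}(HC) \le HH \\
& \mathit{RsProd}_r(HC) \ge \mathit{RsCons}_r(SC) \quad \text{for each resource } r
\end{align}
```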
Capturing VM Configuration Options and Constraints
An MDE technique for transforming feature model representations of cloud VM configuration options into constraint satisfaction problems (CSPs), where a set of variables and a set of constraints determine the allowed values of the variables. An MDE technique for analyzing application configuration requirements, VM energy consumption, and operating costs to determine which VM instance configurations an auto-scaling queue should include to meet auto-scaling response time guarantees while minimizing energy consumption. Empirical results from a case study using Amazon's EC2 cloud computing infrastructure (aws.amazon.com/ec2) that show how SCORCH minimizes power consumption and operating costs while ensuring that auto-scaling response time requirements are met.
By applying auto-scaling queues to reduce unnecessarily idle system resources, power consumption and the resulting CO2 emissions can potentially be reduced significantly. This section describes three key challenges of capturing VM configuration options and using this configuration information to optimize the setup of an auto-scaling queue to minimize power consumption. The EC2 configuration options cannot be chosen arbitrarily and must comply with numerous configuration rules.
Selecting VM Configurations to Guarantee Auto-scaling Speed Requirements
For example, the Amazon EC2 cloud infrastructure supports 5 different processor types, 6 different memory configuration options, and more than 9 different OS types, as well as multiple versions of each OS type [47]. The sections titled "SCORCH Cloud Configuration Models" and "SCORCH Configuration Request Models" describe how SCORCH uses feature models to alleviate the complexity of capturing and reasoning about configuration rules for VM instances. For one set of applications, the best strategy may be to populate the queue with a common generic configuration that can be quickly customized to meet the requirements of each application.
For another set of applications, it may be faster to populate the queue with the virtual machine configurations that take the longest to provision from scratch. Numerous strategies and combinations of strategies are possible, making it difficult to choose configurations to populate the queue that will meet auto-scaling response time requirements. The section titled "Runtime Model Transformation to CSP and Optimization" shows how SCORCH captures cloud configuration options and requirements as cloud configuration feature models, transforms these models into a CSP, and creates constraints to ensure that the maximum auto-scaling response time is met.
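A minimal sketch of the feature-model-to-CSP idea, assuming boolean feature variables, one exclusive-OS rule, and a provisioning-time bound (the feature names, times, and the 60-second guarantee are illustrative, not SCORCH's actual transformation):

```python
import itertools

# Each feature is a boolean variable; configuration rules become
# constraints; a response-time bound prunes configurations whose
# provisioning time exceeds the auto-scaling guarantee.
features = ["linux", "windows", "large_mem"]
provision_time = {"linux": 30, "windows": 90, "large_mem": 20}
MAX_RESPONSE = 60  # seconds

def solve():
    solutions = []
    for bits in itertools.product([False, True], repeat=len(features)):
        cfg = dict(zip(features, bits))
        # Constraint: exactly one OS feature selected
        if cfg["linux"] == cfg["windows"]:
            continue
        # Constraint: total provisioning time within the guarantee
        t = sum(provision_time[f] for f, on in cfg.items() if on)
        if t <= MAX_RESPONSE:
            solutions.append(cfg)
    return solutions

print(len(solve()))  # 2: only the Linux-based configurations satisfy the bound
```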
Optimizing Queue Size and Configurations to Minimize Energy Consumption and Operating Cost
Developers provide a cost model that specifies the cost of running a VM configuration with each feature present in the SCORCH cloud configuration model. The energy model and cost model are also captured using attributes in the SCORCH cloud configuration model. S is the autoscaling queue size, which represents the number of prestarted VM instances available in the queue.
Q is a set of tuples describing the selection status of each VM instance configuration in the queue. E is the energy model that specifies the energy consumption resulting from including a feature in a running VM instance configuration in the auto-scaling queue. L is the cost model that specifies the cost of including a feature in a running VM instance configuration in the auto-scaling queue.
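Given these definitions, the quantities being minimized can be sketched as sums over the features of the queued configurations (the feature names, per-feature energy values, and costs below are illustrative, not SCORCH's actual models):

```python
# Per-feature energy model E and cost model L (illustrative values).
E = {"large_mem": 12.0, "fast_cpu": 20.0, "base": 5.0}   # energy per feature
L = {"large_mem": 0.08, "fast_cpu": 0.15, "base": 0.02}  # cost per feature

# Queue of S = 2 prestarted VM instance configurations, each a feature set.
queue = [["base", "fast_cpu"], ["base", "large_mem"]]

# Total energy and operating cost are sums over every feature of every
# selected configuration in the queue.
energy = sum(E[f] for cfg in queue for f in cfg)
cost = sum(L[f] for cfg in queue for f in cfg)
print(energy, round(cost, 2))  # 42.0 0.27
```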
Existing Software/Hardware-Specific Optimization Techniques Require System Modifications That May Invalidate Safety Certification
Critical systems, however, are often subject to additional design constraints, such as security requirements and real-time deadlines, which can limit the optimizations that can be applied. In the case of the system described in the previous section, several factors, such as system recertification, unknown data-sharing characteristics, and strict scheduling requirements, made it difficult to develop optimization techniques for integrated systems. To ensure that such disasters do not occur, the software and hardware components of safety-critical integrated avionics systems must be certified.
This certification guarantees that as long as the software and hardware are not modified, the system will operate in a safe, predictable manner. Existing cache optimization techniques such as loop fusion and data padding require modifications to the components to increase cache utilization and performance [58, 87]. Therefore, techniques must be developed that modify the system to optimize a predictive performance metric while leaving the hardware and software of the system unchanged.
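As a concrete illustration of why such optimizations require code modification, here is loop fusion sketched in Python (the arrays and computations are illustrative):

```python
# Loop fusion, one of the cache optimizations cited above [58, 87]:
# two passes over the same array are fused into one, so each element is
# processed while still cache-resident. Applying it means rewriting the
# component's code, which would invalidate its certification.
data = list(range(8))

# Unfused: two traversals; data may be evicted between them
a = [x * 2 for x in data]
b = [x + 1 for x in data]

# Fused: one traversal computes both results per element
fused = [(x * 2, x + 1) for x in data]
print(fused[0])  # (0, 1)
```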
Data Sharing Characteristics of Software Components May Be Unknown
Each node in the system described in the case study consists of multiple partitions of executing applications. These cache hits can provide significant reductions in the required execution time for the partition. As described in the case study, the physical structure of the system consists of several separate nodes.
In this case, two tasks of the same application must be executed consecutively to produce cache hits. This sum reflects the total probabilistic number of partition cache hits executed on the node. Data shared between applications and shared between tasks of the same application can greatly affect the effectiveness of a system's cache.
The execution schedule of system software tasks can potentially affect system performance. Again, the best execution order, the one with the most overlaps, resulted in the fewest L1 cache misses for all software systems.
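The overlap-counting idea described above can be sketched as follows (the task and application names are made up, and a real scheduler would weigh many more factors than adjacency):

```python
import itertools

# Count, for a given execution order, how many adjacent task pairs belong
# to the same application: such "overlaps" are the consecutive executions
# that can produce cache hits. Keep the order with the most overlaps.
tasks = {"t1": "appA", "t2": "appA", "t3": "appB", "t4": "appB"}

def overlaps(order):
    return sum(tasks[a] == tasks[b] for a, b in zip(order, order[1:]))

best = max(itertools.permutations(tasks), key=overlaps)
print(overlaps(best))  # 2: both applications run their tasks back-to-back
```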