
6.2 Model validation challenge problems: structural dynamics application

6.2.3 Model assessment

Subsystem model assessment

The confidence assessment of the subsystem model is an important step because all of the modeling error and uncertainty, even for the system configurations, can be attributed to the subsystem model (Red-Horse and Paez, 2008). To assess the subsystem model, the analyst is given both the linear model with which to make predictions and the results of various experiments.

Since the assessment will be based on comparing the dynamic time-history responses predicted by the linear model to those of the experimental results, the first step is to decide on a method for comparing the time histories. Directly comparing two time histories tends to be of little practical use, and the preferred method is to make the comparison based on one or more response features (Therrien, 1989; Jain and Zongker, 1997). Recall that the objective of model validation is to assess the quality of the model with regard to its intended use. Fortunately, for the challenge problem, the intended use of the model is clearly specified, as discussed in Section 6.2.4. Based on the model's intended use, the most natural comparison feature to work with is the maximum absolute acceleration of mass 3, which will be denoted here by $\tilde{a}$. This is also a very convenient feature to work with because it is a scalar quantity.
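For concreteness, extracting this feature from a sampled acceleration record is straightforward; the sketch below is illustrative only, with the signal `a_m3` a hypothetical placeholder for the recorded acceleration of mass 3.

```python
import numpy as np

def max_abs_acceleration(accel_history: np.ndarray) -> float:
    """Scalar response feature: the maximum absolute value of an
    acceleration time history (here, the acceleration of mass 3)."""
    return float(np.max(np.abs(accel_history)))

# Hypothetical usage: a_m3 stands in for the measured (or predicted)
# acceleration record of mass 3 at uniform time steps (e.g., in/sec^2).
a_m3 = 3.2e4 * np.sin(np.linspace(0.0, 10.0, 1001))  # placeholder signal
a_tilde = max_abs_acceleration(a_m3)
```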

There are data available from a total of 120 tests on the subsystem with which to assess the quality of the given linear model. Each of these experiments corresponds to a different subsystem that is randomly selected from a population that contains variability. Sixty of these experiments subjected the subsystem to random vibration excitation (these are referred to as the "subsystem calibration" experiments by Red-Horse and Paez (2008)), and sixty subjected the subsystem to shock excitations (referred to as the "subsystem validation" experiments). Again, in view of the intended use of the model, the validation assessment of the subsystem model will be based only on those experiments which used shock excitations, because this is the excitation which corresponds to the target application.

These sixty experiments are further divided into three categories based on the nominal excitation level: low, medium, and high. As discussed below, each of these groups of experiments is treated as a separate "population," and the groups are compared separately with the predictions made by the linear model.

As mentioned above, the experimental tests on the subsystem can be classified as "fully characterized," in that any model input parameters which must be supplied in order to obtain corresponding predictions are known for each of the experiments. The inputs to the linear model consist of (a) the excitation waveform and (b) the modal parameters for the subsystem.

The excitation waveform is known for each experiment. In addition, the modal parameters of each subsystem tested are also known because they can be back-calculated from the experimental data.

Since the experiments are fully characterized, there is a single model prediction corresponding to each. A direct comparison, based on the specified response feature, allows one to compute a scalar prediction error associated with each experiment. Let this error be defined as:

$$e = \tilde{a}_{\mathrm{obs}} - \tilde{a}_{\mathrm{pred}}, \qquad (6.16)$$

where $\tilde{a}_{\mathrm{pred}}$ and $\tilde{a}_{\mathrm{obs}}$ are the predicted and observed values of the response feature, respectively. In addition, for each of the three nominal excitation levels, twenty randomly chosen specimens are tested. The proposed approach is to divide the data based on nominal excitation level and characterize the distribution of the error within each group.

The results are shown in Fig. 6.13, which gives histograms and non-parametric kernel density estimates for the error, as a percentage of $\tilde{a}_{\mathrm{obs}}$, for each of the three excitation levels.

Figure 6.13: Histogram and density estimate for the prediction error (Error (%) vs. Frequency) at three excitation levels: (a) low, (b) medium, (c) high.
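As an illustration of how such an error distribution might be estimated, the following sketch computes the percent errors for one excitation-level group and overlays a histogram with a Gaussian kernel density estimate. The data arrays are hypothetical placeholders, not the challenge-problem data.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

def percent_errors(a_obs, a_pred):
    """Prediction error as a percentage of the observed feature:
    e = (a_obs - a_pred) / a_obs * 100."""
    a_obs = np.asarray(a_obs, dtype=float)
    a_pred = np.asarray(a_pred, dtype=float)
    return (a_obs - a_pred) / a_obs * 100.0

# Hypothetical placeholder data: 20 (observed, predicted) feature pairs
# for one nominal excitation level.
rng = np.random.default_rng(0)
a_obs = 3.0e4 * (1.0 + 0.05 * rng.standard_normal(20))
a_pred = a_obs * (1.0 - 0.01 * (2.0 + rng.standard_normal(20)))

err = percent_errors(a_obs, a_pred)

# Histogram plus a non-parametric (Gaussian) kernel density estimate.
kde = stats.gaussian_kde(err)
x = np.linspace(err.min() - 1.0, err.max() + 1.0, 200)
plt.hist(err, bins=8, density=True, alpha=0.5, label="histogram")
plt.plot(x, kde(x), label="KDE")
plt.xlabel("Error (%)")
plt.ylabel("Density")
plt.legend()
plt.show()
```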

The estimated distribution of the prediction error can now be used to make inferences about the quality of the linear model. First, note that based on the available data, virtually all of the prediction errors are positive, indicating that the model has a strong tendency to under-predict the response, $\tilde{a}$. Further, these results also suggest that, on a percentage basis, the distribution of the error does not appear to depend on the nominal excitation level. Finally, the magnitude of the error is generally observed to be in the range of 0 to 4 percent.

System model assessment

In addition to experiments conducted on the subsystem, a small amount of experimental data is also available for the response corresponding to the “accreditation system” configuration. For this configuration, one test each has been done at low, medium, and high excitation levels.

The given model for predicting the behavior of this system takes as inputs an excitation waveform and a set of modal parameters describing the particular subsystem attached to the beam. With regard to validation, the fundamental difference from the case discussed above is that the modal parameters for the subsystem cannot be derived from the response of the system.

As a result, the modal parameters governing the subsystem, which are needed as inputs to the system model, are unknown. Thus, this validation analysis falls into the second category discussed at the beginning of Section 6.2.3: partially characterized experiments.

For the first case it was possible to make one-to-one comparisons between the predictions and observations because all of the model inputs corresponding to each experiment were known. However, for this case, the subsystem modal parameters associated with each experiment are unknown, but they are still needed as inputs to the model. Thus, for the purpose of model assessment, the following approach is adopted to deal with the case of partially characterized experiments:

1. Characterize the variability associated with the subsystem modal parameters.

2. Corresponding to the excitation of each experiment, propagate the subsystem variability through the system model using Monte Carlo simulation to obtain the predicted distribution of the response (a sketch of this step is given after the list).

3. Compare the observed response with the predicted distribution obtained from the model.
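A minimal sketch of step 2 follows; `system_model` and `sample_modal_params` are hypothetical interfaces standing in for the given system model (or, in practice, its response surface approximation discussed below) and for the modal-parameter distribution characterized in step 1.

```python
import numpy as np

def propagate_subsystem_variability(system_model, excitation,
                                    sample_modal_params,
                                    n_samples=10_000, seed=0):
    """Monte Carlo propagation of subsystem modal-parameter variability
    through a system model, for one fixed excitation waveform.

    system_model(excitation, theta) -> scalar response feature prediction
    sample_modal_params(rng)        -> one random draw of modal parameters
    """
    rng = np.random.default_rng(seed)
    draws = (sample_modal_params(rng) for _ in range(n_samples))
    return np.array([system_model(excitation, theta) for theta in draws])

# The resulting sample approximates the predicted distribution of the
# response feature, against which the single observed value is compared.
```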

Thus, for each of the three experiments, one observed value of the response is compared with an entire probability distribution associated with the model predictions. Clearly, this type of comparison makes for a much weaker assessment of the model’s predictive capability than that of the first case. This analysis will not provide sufficient information to characterize the magnitude of the modeling error. In fact, the only conclusion that can be drawn is whether or not the experimental results are strongly inconsistent with the model predictions. Although the resulting inference about model quality is not as strong, it is the best that can be done given that the experiments are not fully characterized.

The first step, characterizing the probability distribution for the subsystem modal parameters, is discussed in Section 6.2.2. The second step is to use Monte Carlo simulation to propagate this variability through the given system models. As discussed in Section 6.2.1, using the system models directly inside a Monte Carlo simulation is computationally prohibitive.

The approach taken here is to use the results from a reasonable number of runs of the given models to develop Gaussian process response surface approximations. The Gaussian process model is a powerful and flexible tool capable of representing a wide variety of functional forms, and its use is discussed in detail in Chapter III.

As with the first case, the validation comparisons are based on the scalar response feature $\tilde{a}$ only, so the response surface approximations are likewise constructed based on this feature only. Further, a separate Gaussian process model is constructed for predicting the response corresponding to each of the three excitations used in the accreditation experiments. A quadratic mean (a.k.a. trend) function is also used for each response surface approximation.

For this work, the response approximations were found to give excellent fits using 150 training points. To assess the quality of the fits, the response approximations are used to predict a set of 50 held-back data points. The mean absolute values of the errors were 265, 426, and 249 for the models corresponding to the first, second, and third accreditation experiments, respectively. Relative to the magnitude of the response, these approximation errors are acceptably small (they correspond to 0.6%, 1.6%, and 0.6% of the experimentally observed response values, respectively).
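The following sketch illustrates one way such a surrogate could be constructed: the quadratic trend is fit by least squares and a zero-mean Gaussian process models the residuals. This is an assumed implementation (the original work's code is not given), and the training data shown are synthetic placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import PolynomialFeatures

class QuadraticTrendGP:
    """Gaussian process surrogate with an explicit quadratic mean (trend):
    the trend is fit by least squares, and a zero-mean GP models the
    residuals between the trend and the training responses."""

    def __init__(self):
        self.poly = PolynomialFeatures(degree=2)
        self.trend = LinearRegression()
        kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
        self.gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

    def fit(self, X, y):
        P = self.poly.fit_transform(X)
        self.trend.fit(P, y)
        self.gp.fit(X, y - self.trend.predict(P))
        return self

    def predict(self, X):
        return self.trend.predict(self.poly.transform(X)) + self.gp.predict(X)

# Hypothetical data: 150 model runs (modal parameters -> response feature)
# for training, with 50 held-back points to check the fit via the mean
# absolute error, mirroring the assessment described above.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 4))               # placeholder modal parameters
y = 3.0e4 + 5.0e3 * np.sin(X @ np.ones(4))   # placeholder response feature
surrogate = QuadraticTrendGP().fit(X[:150], y[:150])
mae = mean_absolute_error(y[150:], surrogate.predict(X[150:]))
```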

The results of the Monte Carlo simulations for each of the three accreditation force levels are given in Figs. 6.14, 6.15, and 6.16. To assist with the visualization, the 95% highest density region (HDR; Lee, 2004) is shaded for each output distribution. The HDR is the smallest region containing 95% of the predicted probability mass, i.e., the most likely region for the responses based on the model predictions. Similarly, the experimentally observed response is also plotted in each figure as a vertical line, to show where it lies in relation to the predicted output distribution.
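A 95% HDR can be approximated directly from the Monte Carlo samples; the sketch below uses a grid-based super-level set of a Gaussian kernel density estimate, which naturally yields a union of disjoint intervals when the density is multimodal. The function name and placeholder data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def hdr_mask(samples, coverage=0.95, n_grid=2000):
    """Approximate the highest density region (HDR) of a 1-D sample.

    Returns the evaluation grid, the KDE density on it, and a boolean
    mask marking grid points inside the HDR (possibly a union of
    disjoint intervals when the density is multimodal)."""
    samples = np.asarray(samples, dtype=float)
    kde = stats.gaussian_kde(samples)
    pad = 0.1 * (samples.max() - samples.min())
    grid = np.linspace(samples.min() - pad, samples.max() + pad, n_grid)
    dens = kde(grid)
    # Sort density values high-to-low and find the smallest density level
    # whose super-level set captures the requested probability mass.
    order = np.argsort(dens)[::-1]
    cell = grid[1] - grid[0]
    cum = np.cumsum(dens[order]) * cell
    idx = min(np.searchsorted(cum, coverage), len(cum) - 1)
    level = dens[order][idx]
    return grid, dens, dens >= level

# Example with a hypothetical bimodal sample of the response feature.
rng = np.random.default_rng(2)
samples = np.concatenate([rng.normal(2.8e4, 1.5e3, 6000),
                          rng.normal(3.6e4, 1.0e3, 4000)])
grid, dens, inside = hdr_mask(samples)
# Gaps in `inside` correspond to the separations between regions of
# high probability seen in the figures below.
```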

Based on the results of Figs. 6.14, 6.15, and 6.16, the evidence does not suggest that the system-level model predictions are overly inconsistent with the experimental data. In all three cases, the experimental results lie within the 95% HDRs corresponding to the predicted response (although for the third case, the experimental response lies just at the upper bound of the predicted HDR). Further, for excitations 1 and 2, the experimentally observed response is near the mode (or most likely value) of the predicted response distribution. The gaps in the HDRs of Figs. 6.14 and 6.16 are the result of multimodality in the probability densities, which in these cases causes a small separation between two regions of high probability.

Figure 6.14: Predicted distribution for $\tilde{a}$ (in/sec$^2$) corresponding to excitation 1 for the accreditation configuration. The 95% highest density region is shaded, and the experimentally observed response is plotted as a vertical line.

Figure 6.15: Predicted distribution for $\tilde{a}$ (in/sec$^2$) corresponding to excitation 2 for the accreditation configuration. The 95% HDR is shaded, and the experimentally observed response is plotted as a vertical line.

Figure 6.16: Predicted distribution for $\tilde{a}$ (in/sec$^2$) corresponding to excitation 3 for the accreditation configuration. The 95% HDR is shaded, and the experimentally observed response is plotted as a vertical line.

Finally, note that, as with the subsystem model, the system-level model tends to under-predict the magnitude of the response, $\tilde{a}$; this is particularly evident for excitations 1 and 3.