


2.4 Using Graphs to Display Data

Figure 2.3 is a frequency polygon drawn from the grouped frequency distribution in Table 2.4. Note that in Figure 2.3 (as well as in Figure 2.4) the first and last points of the graph do not meet the horizontal axis. Whether or not to draw the graph so that the end points meet the X axis is a matter of personal preference.

As we view Figure 2.3, note that the X axis marks the midpoints of the class intervals. The Y axis is labeled "Frequency" and presents equally spaced numbers that specify the frequency of scores. A single point on the graph indicates the midpoint of a class interval represented on the X axis, and the number of scores found in that interval is indicated on the Y axis.
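As a concrete illustration, here is a minimal sketch of a frequency polygon using Python's matplotlib. The midpoints and frequencies are hypothetical values standing in for Table 2.4, which is not reproduced here.

```python
import matplotlib.pyplot as plt

# Hypothetical class-interval midpoints (interval width 3) and the
# frequency of scores in each interval, in the spirit of Table 2.4.
midpoints = [1, 4, 7, 10, 13, 16, 19, 22, 25, 28, 31, 34]
freqs = [2, 5, 9, 14, 20, 24, 22, 16, 11, 7, 4, 2]

fig, ax = plt.subplots()
ax.plot(midpoints, freqs, marker="o")  # one point above each midpoint
ax.set_xticks(midpoints)               # the X axis marks the midpoints
ax.set_xlabel("Scores")
ax.set_ylabel("Frequency")
plt.show()
```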

The intersection of the X and Y axes usually represents the 0 point for each of the variables. However, sometimes the first number of a class interval is some distance from 0, or the first frequency count of an interval is much greater than 0. Should this situation arise, the X and/or Y axes can be truncated (i.e. shortened where not needed) with broken lines. Figure 2.4 shows a frequency polygon in which the first midpoint of the lowest class interval is 20 and the first frequency count is 100. Note the truncation marks at the base of the axes. The truncation technique will be further discussed later in the chapter.
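As a rough sketch of truncation, the axis limits can simply be started near the lowest data values rather than at 0. Matplotlib has no single built-in break mark, so the slashes are typically drawn by hand or noted in the caption. All values below are hypothetical, chosen to match the ranges described for Figure 2.4.

```python
import matplotlib.pyplot as plt

# Hypothetical data matching Figure 2.4's ranges: lowest midpoint 20,
# lowest frequency 100.
midpoints = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70]
freqs = [100, 115, 135, 160, 185, 190, 175, 150, 130, 115, 105]

fig, ax = plt.subplots()
ax.plot(midpoints, freqs, marker="o")
ax.set_ylim(95, 195)  # truncated: the Y axis no longer starts at 0
ax.set_xlim(17, 73)   # truncated: the X axis no longer starts at 0
ax.set_xlabel("Scores")
ax.set_ylabel("Frequency")
plt.show()
```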

The frequency polygon is a useful graphic for depicting the overall concentration of numbers. It is easy to construct, and it is possible to compare two or more distributions on the same graph (see Box 2.2).

Figure 2.3 A frequency polygon of the data in Table 2.4. Points are plotted above each interval's midpoint. (X axis: Scores, midpoints 1 to 37; Y axis: Frequency, 2 to 26.)


However, many suggest that frequencies are easier to read in a different type of graphic display: the histogram.

The Histogram

The histogram is a graph of vertical bars with shared borders in which the height of each bar corresponds to the frequency of scores for a given class interval (see Figure 2.5). The width of each bar spans the width of the class interval, including the real limits. This is why there are no spaces between the bars. The bars of a histogram are typically colored in to contrast with the background.

The frequency polygon and the histogram are related. If we were to place a point at the midpoint of the top of each bar of the histogram, erase the bars, and connect the data points, we would have a frequency polygon. A frequency polygon has been superimposed on the histogram depicted in Figure 2.5 so that we can directly compare these two ways to display data graphically.
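To see the relationship concretely, the following sketch draws a histogram and superimposes a frequency polygon on it, in the spirit of Figure 2.5. The midpoints and frequencies are the same hypothetical stand-ins for Table 2.4 used above; matplotlib is assumed as the plotting library.

```python
import matplotlib.pyplot as plt

# Hypothetical grouped data: midpoints of class intervals of width 3.
midpoints = [1, 4, 7, 10, 13, 16, 19, 22, 25, 28, 31, 34]
freqs = [2, 5, 9, 14, 20, 24, 22, 16, 11, 7, 4, 2]

fig, ax = plt.subplots()
# Histogram: each bar is as wide as its class interval, so adjacent
# bars share borders (no gaps).
ax.bar(midpoints, freqs, width=3, edgecolor="black")
# Frequency polygon: a point at the midpoint of the top of each bar,
# with the points connected.
ax.plot(midpoints, freqs, marker="o", color="black")
ax.set_xlabel("Scores")
ax.set_ylabel("Frequency")
plt.show()
```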

The Bar Graph

A bar graph is used to represent the frequency of scores associated with categories. A bar graph looks like a histogram except that the bars do not share a common border. Since the categories represented on the X axis are discrete in nature, they do not have real limits. Gaps between the bars clearly communicate this. For example, in Figure 2.6, the scale used on the X axis is nominal.

Figure 2.4 The X and Y axes are broken between 0 and the lowest scores of each axis. (X axis: 20 to 70; Y axis: 100 to 190.)

Figure 2.5 A frequency polygon superimposed onto a histogram based on the data in Table 2.4. (X axis: Scores, midpoints 1 to 34; Y axis: Frequency, 2 to 26.)

Figure 2.6 The number of undergraduates majoring in psychology (A), sociology (B), history (C), biology (D), and business (E). (X axis: major; Y axis: Frequency, 100 to 1100.)


Psychology, history, and so on are names of different majors. Figure 2.6 shows hypothetical data depicting the number of undergraduate majors in each of several university programs.
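A bar graph along the lines of Figure 2.6 can be sketched as follows. The five counts are hypothetical; a bar width less than 1 leaves the gaps that signal a nominal scale.

```python
import matplotlib.pyplot as plt

# Hypothetical counts for the five majors shown in Figure 2.6.
majors = ["Psychology", "Sociology", "History", "Biology", "Business"]
counts = [1050, 400, 550, 800, 950]

fig, ax = plt.subplots()
# width < 1 leaves visible gaps between the bars, since nominal
# categories have no real limits to share.
ax.bar(majors, counts, width=0.6, edgecolor="black")
ax.set_ylabel("Frequency")
plt.show()
```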

Box 2.2 Using a Graph to Provide a Visual Display of Data

Over the past several years, social scientists have been asking Americans how much confidence they have in specific public institutions. Some interesting trends have been noted. The recent results of a few of these surveys are summarized in the table below (Confidence in Institutions: Trends in Americans' Attitudes toward Government, Media, and Business, 2016).

The table below is a useful summary of confidence measurements for three public institutions. However, determining whether there is a change in confidence over a recent span of years for any one institution requires careful examination of each row of the table by scanning back and forth between the columns.

Nonetheless, one can see that the public has much more confidence in medicine than in education, and more confidence in both of those institutions than in Congress. Representing these findings on a graph, however, provides a visual display that allows one to observe these differences across time more quickly.

Percent of the Public Expressing a Great Deal of Confidence in Three Public Institutions: Medicine, Education, and Congress

           2006  2007  2008  2009  2010  2011  2012  2013  2014
Medicine     40    40    39    40    41    40    39    39    38
Education    28    28    29    29    29    28    27    26    26
Congress     13    11    10    10    10    10     8     7     6

The abscissa of Figure 2.7 presents the years. This can be understood as a discrete variable representing consecutive categories of time. As a result, a bar graph could have been used. However, by using a line graph for each institution, trends in the data can be more easily observed.

Examine the relative heights of the lines to compare the differences among institutions in terms of the percentage of people expressing confidence. Finally, do not forget to examine the span of percentages along the Y axis. In this case, the relative heights of the lines reveal meaningful differences among the three institutions. However, if there were only trivial differences among the three institutions, it would be possible to adjust the scale on the Y axis by truncating it to highlight these minor differences. Whether or not this maneuver would lead to a misrepresentation of the findings is debatable.
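Using the values from the table above, a rough version of Figure 2.7 could be drawn like this (matplotlib assumed):

```python
import matplotlib.pyplot as plt

# Values taken from the confidence table above.
years = list(range(2006, 2015))
medicine  = [40, 40, 39, 40, 41, 40, 39, 39, 38]
education = [28, 28, 29, 29, 29, 28, 27, 26, 26]
congress  = [13, 11, 10, 10, 10, 10, 8, 7, 6]

fig, ax = plt.subplots()
for label, series in [("Medicine", medicine),
                      ("Education", education),
                      ("Congress", congress)]:
    ax.plot(years, series, marker="o", label=label)
ax.set_xlabel("Years")
ax.set_ylabel("Percent")
ax.set_ylim(0, 45)  # full scale starting at 0, as in Figure 2.7
ax.legend()
plt.show()
```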

Graphs Can Be Misleading

Suppose a researcher compares two different methods for enhancing learning. As it turns out, Method B produces a relative gain of three points, whereas Method A does not have any effect on learning. However, let us suppose that the difference between no change and a three-point change is actually very modest. Figure 2.8 labels pretest and posttest scores on the X axis. The data points above the pretest indicate the average number of correct responses for each method before the participants are administered any training. The data points above the posttest represent the average number of correct responses for each method after training.

Note that the line for Method A is parallel with the X axis, indicating no change in performance as a result of training. The line for Method B rises slightly, reflecting the modest increase in performance. Since the lines show little divergence, it appears that the two methods are very similar in their effects on performance.

Now examine Figure 2.9. The same data are graphed, but now it looks like Method B is vastly superior to Method A. Why? Notice the Y axis. The scale of measurement has been altered so that an increase of three points spans a much greater distance along the Y axis. The two graphs are both technically accurate. However, the second graph is very misleading. In this example, the Y axis has been truncated, but without including the broken axis line. Furthermore, the highest value on the Y axis is now 14 instead of 30.

Figure 2.7 Data from Confidence in Institutions: Trends in Americans' Attitudes toward Government, Media, and Business, 2016, presented in graphical form. (X axis: Years, 2006 to 2014; Y axis: Percent, 0 to 45; lines: Medicine, Education, Congress.)


Figure 2.8 A graph that shows the relative effects of two training methods on performance. The lines on the graph indicate that there is little difference between the two methods. Note the scaling of the Y axis; it appears to start at zero. (X axis: Pretest, Posttest; Y axis: Correct responses, 5 to 25; lines: Method A, Method B.)

Figure 2.9 The data points of Figure 2.8 are redrawn to create the impression of a vast difference between the two training methods. Altering the numbers on the Y axis, especially without signifying that it has been truncated by including a broken line, can give a misleading picture of the results of the study. (X axis: Pretest, Posttest; Y axis: Correct responses, 8 to 14; lines: Method A, Method B.)

This leaves the impression on the viewer that participants using Method B virtually topped out in terms of performance. Together, these techniques serve to amplify the differences between the data lines. However, this amplification seems to misrepresent the actual degree of difference between the two methods. Viewers should always pay close attention not only to the data lines but also to the scaling of the axes.
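The contrast between Figures 2.8 and 2.9 can be reproduced in a few lines. The pretest and posttest means below are hypothetical, and the only difference between the two panels is the Y axis limits.

```python
import matplotlib.pyplot as plt

# Hypothetical means consistent with the text: Method A shows no
# change, Method B gains three points.
x = ["Pretest", "Posttest"]
method_a = [10, 10]
method_b = [10, 13]

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))
for ax, ylim, title in [(honest, (0, 25), "Y axis starts at 0"),
                        (misleading, (8, 14), "Truncated Y axis")]:
    ax.plot(x, method_a, marker="o", label="Method A")
    ax.plot(x, method_b, marker="o", label="Method B")
    ax.set_ylim(*ylim)   # identical data; only the scale changes
    ax.set_ylabel("Correct responses")
    ax.set_title(title)
    ax.legend()
plt.tight_layout()
plt.show()
```

Run side by side, the left panel shows two nearly parallel lines, while the right panel makes the same three-point gain look dramatic.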

Box 2.3 Is the Scientific Method Broken? The Misrepresentation of Data/Findings

In Box 1.1 we started a series asking whether the scientific method is broken.

Public polling suggests most Americans do not possess a "great deal of confidence" in the scientific community (Confidence in Institutions: Trends in Americans' Attitudes toward Government, Media, and Business, 2016). Part of the problem might be the misrepresentation of scientific data and findings.

Data misrepresentation can occur in a number of different ways. One way concerns how science writers interpret scientific findings for the general public.

Since most people get their scientific information from the media, those who interpret scientific findings for the general public bear a tremendous responsibility to convey accurately the findings of scientific investigators. However, many science writers are not sufficiently familiar with the scientific process or the subtleties of doing and interpreting research. Furthermore, there is no getting around the fact that there is a financial incentive behind eye-catching headlines. This situation can often lead to oversimplified descriptions of findings for the general public. A recent example concerns a team of psychologists who, in 2013, reported no cognitive improvement for preschoolers briefly exposed to a music enrichment experience (Mehr, Schachner, Katz, & Spelke, 2013). It was a limited study designed only to see if effects could be found in young children after just an initial transient exposure to music. The authors went to great lengths to clarify the limits of the study. Nonetheless, headlines soon appeared like this one from the Times of London, "Academic benefits of music a myth" (Devlin, 2013), clearly overstating the study's modest conclusions, not to mention bucking most people's strong intuitions to the contrary. Indeed, other research performed just a year later suggests children from disadvantaged backgrounds show improved neuroplasticity and language development with exposure to community music classes (Kraus, Hornickel, Strait, Slater, & Thompson, 2014). Some of the public's distrust of science results from the careless way in which many popular interpreters of science report findings, "findings" oftentimes shown to have been stated in far too simplistic terms.

Another form of data misrepresentation concerns the researchers themselves, either through data collection or interpretation.


Assuming, for the moment, the purest of motives, researchers can unintentionally bias participant responses through the ordering of questions (which question comes first, then second, and so on), the limited number of response options available, or even the specific wording of the questions. For example, a 2005 Pew Research survey (Pew Research Center, n.d.) found that the 51% of respondents who favored "making it legal for doctors to give terminally ill patients the means to end their lives" dropped to 44% when asked if they favored "making it legal for doctors to assist terminally ill patients in committing suicide." Phrases that may seem identical to the researcher may be interpreted differently by respondents. In addition, there are hard-to-answer questions regarding how to treat data that do not fit and seem as if they may have been gathered incorrectly, so-called "outliers." (Should they be discarded? What if they really are good data?) Some researchers also selectively report findings, publishing only the relationships that stand out even though numerous relationships were compared.

Sometimes a proper understanding of a finding can be reached only when it is placed in a broader context, a context some researchers choose to leave out of their report. For instance, would we be impressed by someone who said they have such mastery over coin flipping that they can control which side of a coin comes up? What if they said they once got a coin to land on "heads" nine times in a row? Seems impressive, does it not? However, our amazement might be dulled a bit if we found out their string of nine heads in a row was dug out of the middle of a series of 4000 coin flips. Context matters. (This topic will be explored more in Box 8.1.) Unfortunately, several scientific articles, many of which misrepresented findings unintentionally, are retracted by academic journals every year. Retractionwatch.com is an example of one website that monitors these retractions.

Finally, there is the issue of academic fraud (e.g. Carey, 2016). Science, we must remember, is not practiced by purely objective robots or angels, but rather by people, people possessing the frailties, temptations, and pressures common to us all. Science is also a cultural enterprise, with its own hierarchy of authority, internal reward structure, and value system, a value system that places a premium on new findings, new ideas, and numerous publications.

Researchers who do not make original discoveries, propose interesting innovative theories, or generate numerous publications often find themselves out of a job. Given this reality, we should not be surprised to learn that, just as professional sports, financial investment, politics, and virtually all other human enterprises deal with cheating scandals, cheating can and does take place within the world of scientific investigation. Thankfully, just as in these other professions, there are correcting mechanisms in science, mechanisms designed to ferret out falsehoods and eventually get to the truth.

Nonetheless, when the public finds out that a headline may be incorrect, a journal article needs to be retracted, the journal itself is fake, or a scientist is found to be fraudulent, we should not be surprised to learn that to some people it feels as if "science" is broken.