EXPLICIT AND IMPLICIT QUESTIONS

There are several possible questions that have slightly different wording but suggest different data collection strategies:

• If the question is "What is the quality of the road?" the transportation department might want to bring in some engineering experts to determine the number and size of potholes in the roads.

• If the question is "How frequently is the road used?" the department may want to use a method to count the traffic.

• If the question is "How satisfied are citizens with the road?" then the department might want to gather data directly from citizens.

• If the question is "Has business at the retail center increased as a result of the roads?" then the department might want to collect sales data over time or ask the owners whether they think business has improved (Johnson, 2002, p. 28).

To help a group generate a list of questions, Michael Patton suggests a practical exercise. Meet with a group of key stakeholders and ask each of them to fill in the blank in the following sentence ten times:

"I would like to know _____ about (name of program or object of study)." Then divide them into groups of three or four people each and ask them to combine their lists into a single list of ten things each group wants to know. Finally, reconvene and generate a single list of ten basic things the group does not have information on but would like to know, things that could make a difference in what they are doing. Follow-up sessions should be held to refine and prioritize these questions (Patton, 1986, pp. 75–77).

This is an important issue because slightly altering a research question may change the focus of a project so much that different data are needed to answer it, and those data may have to be collected and analyzed in entirely different ways. Such changes may also alter the time and cost required to carry out a project.

Another practice that researchers use is making statements that tell the reader explicitly what they are doing and why. Such statements guide the researcher's activities and manuscript preparation, and they orient the reader.

Whether a researcher prepares a manuscript using explicit questions, or instead uses statements indicating what he or she is doing, seems to this writer to be a matter of personal preference and writing style. Given that substantive content is what matters most, these practices do not necessarily add to or detract from the value of any researcher's work.

Nevertheless, the author prefers starting with a question because it makes the research process easier for him and makes working with coinvestigators easier as well.

From a reader's perspective, when explicit questions are lacking but appropriate informative content is present in the text, the reader can develop statements of the research questions investigated on their own. Examples of these practices are described in the following paragraphs.

A recent article by De Vita and Twombly (2005) illustrates the development of subquestions. In fact, the first part of the title of this article contains an important subquestion: "Who gains from charitable tax credit programs? The Arizona model." This subquestion is intended to catch the reader's eye and draw them into the article. The article explores a new public finance or tax incentive phenomenon known as charitable tax credit programs at the state level. The authors explicitly identify and systematically address the following questions in their work:

How will program eligibility be defined?

Will taxpayers respond to the incentive?

Which organizations will benefit?

What are the implications for nonprofit fundraising and program accountability?

Unstated questions exist in this manuscript as well. One is "Why did states begin to offer charitable tax credit programs?" Another is "How many or which states use charitable tax credit programs?" These are factual questions that are easy to answer. The answers provide contextual background for the current study, but neither is a major research question, so they were left unstated.

Researchers might use both explicitly stated questions and specific statements in their research and presentation efforts. For example, Lewis and Brooks (2005) use both explicit questions and statements in their study entitled "A question of morality: Artists' values and public funding for the arts." Explicit questions include:

How did public funding for the arts briefly generate the kind of controversy typical of issues such as abortion, gay rights, and capital punishment?

Why was the NEA susceptible to having its existence framed as a legal sanction of right and wrong?

These are factual questions, and they are worthy research questions. Funding of the arts was, and to some extent remains, a highly contentious issue. Public perceptions and debate of this issue contain a tremendous amount of incomplete information, rhetoric, and bombast. The answers to Lewis and Brooks's questions involve specialized information most people are unaware of.

Answers are located in public records that these researchers examined in detail to find the facts.

To address these issues the investigators explored the history of federal funding for the arts and the details of the controversies that led Congress to repeatedly revisit public support for the arts, government actions, and court decisions. They state, "Despite congressional action, the National Endowment for the Arts (NEA) continued to generate controversy." That statement implies the question "Why did controversy continue after Congress acted?"

The authors provide a multipart answer to this statement and its implied question. They focus on lawsuits, the objectionable activities of artists presenting in venues operated by fewer than a handful of organizations receiving NEA funding, NEA operations, local government decisions in response to public furor, federal court decisions addressing issues of censorship, and the issue of values shared within the artist community.

The second half of Lewis and Brooks's study begins with the statement, "But the controversy also continued because of artists' values." This statement presents a conclusion that might seem debatable were it not based on the findings Lewis and Brooks examine two paragraphs later.

They review some of the literature addressing the issue of values affecting decisions, a phenomenon found in the representative bureaucracy literature. Finally, they examine literature both suggesting and denying that artists, the art community, and consumers of art have values that differ from those of the general population. They do not state a specific research question, nor do they need to, because it is clear what they are discussing. Nevertheless, the reader can create one. Thus:

Do the values of artists, members of the art community, and consumers of art differ significantly from the values of the general population?

Other researchers may not make much use of explicitly stated research questions in presenting their results. For example, Norris and Moon wrote an article whose title, "Advancing e-government at the grassroots: Tortoise or hare?" (2005), contains one of their research questions. To be more specific, one of the issues they investigate is how rapidly e-government has been adopted by local governments. However, they investigate more than this. Instead of single questions, Norris and Moon developed a guiding framework for their study consisting of three dimensions. The "input dimension" consisted of organizational and environmental factors affecting the adoption of information technology. An "impacts" dimension focused on internal organizational processes, and another "impacts" dimension consisted of organizational outputs and outcomes. They use this framework to review the literature, and it provides the structure for the presentation of their findings.

In addition, Norris and Moon provide more detailed guiding statements throughout the findings section of their article. For example, "The input dimension includes local government adoption of e-government, the age of local government Web sites, and the development or evolution of local e-government measured by the transactional capabilities of local Web sites." Each of these specific statements could just as easily have been written as a question, for example, "To what extent have local governments adopted e-government?" Similarly, Norris and Moon state, "Here, we have examined the perceived impacts of e-government on several aspects of local government administration." An alternative question might have read, "What are the perceived impacts of e-government on local government?"

Important questions sometimes arise as discoveries are made. For example, in addressing the issue of barriers to e-government, Norris and Moon develop a question that evolves naturally from their findings: "These findings also show that . . . it [e-government] has produced relatively few impacts, and not all of them are in the positive direction indicated by the hype surrounding this new technology. The obvious question is, why?" They use this question to guide the remainder of their investigation. In this example, the simple direct question "why?" would be meaningless without the specific details provided by the sentence preceding it.

No doubt a variety of other formally stated questions could have been developed and addressed in Norris and Moon's study. Would they have improved it? Who knows? On the other hand, the author wonders whether such questions did in fact exist at the beginning of this project and whether what appears in the article is simply a matter of writing style. It seems that the following questions could easily have driven this research project even if they do not explicitly appear in the text: "What is the extent of e-government use among local governments?" "How sophisticated are these e-government Web sites?" "What factors foster or inhibit use of e-government?" or "What factors foster or inhibit use of sophisticated aspects of e-government?"

Another source of research questions consists of the expectations that authors may include in their conclusions. For example, Norris and Moon's study of e-government (2005) contains several expectations about future trends in local e-government development. Two of those statements illustrate the point: "For the next few years at least, most local government Web sites can be expected to remain mainly informational with limited transactional capabilities." And, "as has been the case with IT in government in general, payoffs will lag adoption."

The following questions are suggested by Norris and Moon's expectations: "How rapidly do local governments adopt e-government applications that have demonstrated payoffs?" "What are the payoffs that local governments realize from implementing e-government?" and "Do local governments adopt applications that have demonstrated payoffs more quickly than other types of applications?"