Wolfram (2002) presents a comprehensive new paradigm for science in general as well as for its many disciplines, fields, and subfields in particular.

Our purpose here is not to review his work in its entirety, but to provide an overview of key points that justify and demonstrate both Wolfram’s position and our own: that simple ideas are valuable, are to be preferred, and explain a great deal of both simple and complex behavior across all of science, including OB.

Simple Rules

Wolfram (2002) uses the notion of rules – ideas or theories – to describe and explain behavior, and states that ‘‘in reality even systems with extremely simple rules can give rise to behavior of great complexity’’ (p. 110). Moreover, he notes that ‘‘the simpler a structure is, the more likely it is that it will show up in a wide diversity of different places. And, this means that by studying systems with the simplest possible structure, one will tend to get results that have the broadest and most fundamental significance . . . looking at systems with simpler underlying structures gives us a better chance of being able to tell what is really responsible for any phenomena one sees – for there are fewer features that have been put into the system and that would lead one astray’’ (p. 109). This is a goal we hope to accomplish with the four simple theories for OB proposed here (see Table 1).

Table 1. Four Theories for a New Kind of OB.

Theory                                            | Entities/Levels of Analysis | Variables/Relationship
Theory 1: individual behavior and decision making | Whole persons               | Option cutting → (+) commitment
Theory 2: interpersonal relations and leadership  | Whole dyads                 | Investments → (+) returns
Theory 3: group dynamics/team processes and norms | Whole groups                | Interdependence → (+) cohesion
Theory 4: collectivized processes and roles       | Whole collectives           | Titles → (+) expectations

Note: Theory 1 is level-specific at the person level; Theory 2 is level-specific and emergent at the dyad level; Theory 3 is level-specific and emergent at the group level; Theory 4 is cross-level to the collective level.

This approach, according to Wolfram, leads to an interesting possibility:

‘‘to consider completely random initial conditions . . . one might think that starting from such randomness no order would ever emerge. But in fact . . . many systems spontaneously tend to organize themselves, so that even with completely random initial conditions, they end up producing behavior that has many features that are not random at all’’ (p. 223).
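A minimal sketch of this self-organization, using a system of our own choosing rather than an example drawn from Wolfram’s text: the elementary cellular automaton rule 232 simply replaces each cell with the majority value of its three-cell neighborhood, yet starting from a completely random row it settles within a few steps into stable, clearly non-random blocks.

```python
import random

def step(cells, rule):
    """Apply an elementary cellular automaton rule once (periodic boundaries)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

random.seed(0)
width, steps = 64, 8
row = [random.randint(0, 1) for _ in range(width)]  # completely random initial conditions

for t in range(steps + 1):
    print("".join(".#"[c] for c in row))
    row = step(row, 232)  # rule 232: each cell takes the majority value of its neighborhood
```

Order (solid blocks of like cells) emerges even though nothing about the initial row was ordered.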

When starting with a fixed or given state, there are three basic types of behavior/patterns identified by Wolfram: simple or repeating; nested or fractal; and complex or random. In terms of randomness, ‘‘something should be considered random if none of our standard methods of perception and analysis succeed in detecting any regularities in it’’ (p. 556). Regularities are repetitions or nesting patterns. More formally, ‘‘something should be considered random whenever there is essentially no simple program [rule, theory, model] that can succeed in detecting regularities in it’’ (p. 556).
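The three classes can be made concrete with elementary rules 250, 90, and 30, which are standard examples of repeating, nested, and apparently random behavior. The short sketch below (our own code) evolves each rule from a single black cell and prints the corresponding pattern.

```python
def evolve(rule, width=63, steps=31):
    """Evolve an elementary cellular automaton from a single black cell and print it."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join(" #"[c] for c in cells))
        cells = [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % width])) & 1
                 for i in range(width)]

for rule in (250, 90, 30):  # simple/repeating, nested/fractal, complex/random
    print(f"--- rule {rule} ---")
    evolve(rule)
```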

In terms of complexity, ‘‘when we say that something seems complex what we typically mean is that we have not managed to find any simple description of it – or at least those features of it in which we happen to be interested’’ (p. 557). The simplest descriptions are repetition and nesting: If we don’t ‘‘see’’ these, then we consider things to be complex.
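One ordinary way to make ‘‘finding a simple description’’ operational is data compression: repetitive and nested sequences admit short descriptions, while a sequence with no detectable regularities does not. The sketch below is our own illustration, using Python’s standard zlib compressor as a stand-in for the ‘‘simple program’’ that looks for regularities.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

n = 4096
repetitive = b"AB" * (n // 2)                                   # pure repetition
nested = bytes(b"AB"[bin(i).count("1") % 2] for i in range(n))  # a substitution-system (Thue-Morse) pattern
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(n))          # no detectable regularities

for name, data in [("repetitive", repetitive), ("nested", nested), ("random", noise)]:
    print(f"{name:10s}: {len(data)} bytes -> {compressed_size(data)} bytes compressed")
```

The repetitive and nested strings shrink to a small fraction of their length; the random bytes barely compress at all, which matches the operational criterion for randomness quoted above.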

Relevant for our work here in OB, our four proposed theories are simple ones that can detect, predict, and help understand regularities in behavior.

Other, more complex and complicated theories in OB appear to do no better in detecting regularities, as evidenced by the frequent nonreplicability of studies and/or findings in the field, and so may be merely tapping randomness. In Wolfram’s words, ‘‘From the intuition of traditional science we might think that if the behavior of a system is complex, then any model for the system must also somehow be correspondingly complex. But . . . this is not in fact the case, and . . . even models that are based on extremely simple underlying rules can yield behavior of great complexity’’ (2002, p. 364). Our contention is that the four simple theories proposed here meet these conditions – that is, beyond being ‘‘simple,’’ they can account for or explain complex behavior and actions of various entities.

Wolfram goes on to state, ‘‘Typically it is not a good sign if the model ends up being almost as complicated as the phenomena it purports to describe. And it is an even worse sign if when new observations are made the model constantly needs to be patched in order to account for them’’ (2002, p. 365).

Unfortunately, this pattern is often observed in OB, where new variables are constantly added to models to account for unexplained behavior, steadily increasing their complexity. In contrast, ‘‘[it] is usually a good sign . . . if a model is simple, yet still manages to reproduce, even quite roughly, a large number of features of a particular system. And it is an even better sign if a fair fraction of these features are ones that were not known, or at least not explicitly considered, when the model was first constructed’’ (Wolfram, 2002, p. 365). This goal, in fact, underlies our proposal of four simple theories to account for OB at the person (individual), dyad (interpersonal), group (team), and organizational (collective) levels of analysis.
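Wolfram’s warning has a familiar statistical analogue, which we sketch below purely as an illustration (synthetic numbers, not OB data): when behavior generated by a simple rule plus noise is fitted with an increasingly elaborate model, the elaborate model typically describes the original observations better but predicts new observations worse.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n_old, n_new = 200, 12, 50
errors = {1: [0.0, 0.0], 9: [0.0, 0.0]}  # polynomial degree -> [error on old data, error on new data]

for _ in range(trials):
    x_old = np.linspace(0, 1, n_old)
    y_old = 2.0 * x_old + 1.0 + rng.normal(scale=0.5, size=n_old)  # simple rule plus noise
    x_new = rng.uniform(0, 1, n_new)
    y_new = 2.0 * x_new + 1.0 + rng.normal(scale=0.5, size=n_new)
    for degree in errors:
        coeffs = np.polyfit(x_old, y_old, degree)
        errors[degree][0] += np.mean((np.polyval(coeffs, x_old) - y_old) ** 2) / trials
        errors[degree][1] += np.mean((np.polyval(coeffs, x_new) - y_new) ** 2) / trials

for degree, (old_err, new_err) in errors.items():
    print(f"degree {degree}: mean error on original data {old_err:.3f}, on new observations {new_err:.3f}")
```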

Computations and Rules

Wolfram (2002) creates simple models for explaining complex ‘‘everyday systems’’ such as the growth of crystals, the breaking of materials, the flow of fluids (e.g., air and water), fundamental issues in biology (e.g., molecular structure and natural selection), the growth of plants and animals, biological pigmentation patterns, and financial systems. He also demonstrates how simple models can explain complex phenomena in fundamental physics (e.g., conservation of energy, equivalence of direction in space, models of the universe, space–time and relativity, elementary particles, gravity, and quantum phenomena) and in processes of perception and analysis (e.g., randomness, complexity, data compression, visual and auditory perception, statistical analysis, cryptography and cryptanalysis, mathematical formulas, and human thinking).

Wolfram accomplishes this by focusing on the notion of computation. For him, systems can be viewed as simple computer programs or in terms of the computations they can perform. The initial conditions are the input; the state of the system after some number of steps corresponds to the output. Also, different systems may have very different internal workings but the computations the systems perform may be very similar, such that ‘‘any system whatsoever can be viewed as performing a computation that determines what its future behavior will be’’ (p. 641).
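To make ‘‘viewing a system as a computation’’ concrete, the sketch below (our own illustration) gives two systems with very different internal workings the same input, a single black cell, and shows that they carry out the same computation step for step: elementary cellular automaton rule 90 on the one hand, and ordinary binomial-coefficient arithmetic (Pascal’s triangle modulo 2) on the other.

```python
from math import comb

def rule90_row(t, width):
    """System 1: evolve elementary cellular automaton rule 90 for t steps from one black cell."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(t):
        cells = [cells[i - 1] ^ cells[(i + 1) % width] for i in range(width)]
    return cells

def pascal_mod2_row(t, width):
    """System 2: place binomial coefficients C(t, k) mod 2 at the cells the automaton reaches."""
    cells = [0] * width
    for k in range(t + 1):
        cells[width // 2 - t + 2 * k] = comb(t, k) % 2
    return cells

width = 41
for t in range(width // 2):  # stay inside the boundary of the finite row
    assert rule90_row(t, width) == pascal_mod2_row(t, width)
    print("".join(" #"[c] for c in rule90_row(t, width)))
print("Two very different systems performed the same computation on the same input.")
```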

A further notion Wolfram discusses is universality: ‘‘if a system is universal, then it must effectively be capable of emulating any other system, and as a result it must be able to reproduce behavior that is as complex as the behavior of any other system’’ (p. 643). He notes that cellular automata, Turing machines, substitution systems, and register machines are examples of systems that, despite the great differences in underlying structures, can be made to emulate each other – that is, ‘‘universals.’’ Also, ‘‘any system whose behavior is not somehow fundamentally repetitive or nesting will in the end turn out to be universal’’ (p. 698) and ‘‘universality is in a sense just associated with general complex behavior’’ (p. 713). This behavior results from simple rules and from altering the initial conditions.
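As a concrete instance of one of these simple classes of systems, the sketch below is a minimal Turing-machine interpreter of our own; the particular machine shown merely increments a binary number and is a toy, not itself a demonstration of universality.

```python
def run_turing_machine(rules, tape, state, head=0, blank="_", max_steps=1000):
    """Run a Turing machine given as a dict: (state, symbol) -> (new_state, write, move)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = rules[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

increment = {
    # Scan right past the binary number to its least significant bit ...
    ("scan", "0"): ("scan", "0", "R"),
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("carry", "_", "L"),
    # ... then add one, propagating any carry to the left.
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}

print(run_turing_machine(increment, "1011", state="scan"))  # 1011 (11) -> 1100 (12)
```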

More specifically, the general underlying hypothesis for Wolfram’s whole paradigm is the principle of computational equivalence (PCE). It applies to any kind of process, whether natural or artificial. The key underlying idea that leads to PCE is the notion that ‘‘all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations’’ (p. 715). PCE asserts that ‘‘when viewed in computational terms there is a fundamental equivalence between many different kinds of processes . . . almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication’’ (pp. 716–717) and that ‘‘even extremely simple rules can be universal’’ (p. 718). While we are not dealing with computations per se here, our four proposed theories of OB are simple rules that are universal in Wolfram’s sense.

Moreover, PCE ‘‘introduces a new law of nature to the effect that no system can ever carry out explicit computations that are more sophisticated than those carried out by systems like cellular automata and Turing machines’’ (Wolfram, 2002, p. 720). PCE suggests ‘‘that beyond systems with obvious regularities like repetition and nesting most systems are universal, and are equivalent in their computational sophistication’’ (p. 735). Again, this idea also applies to the four theories of OB presented in this chapter.

Lastly, ‘‘even though a system may follow definite underlying laws its overall behavior can still have aspects that fundamentally cannot be described by reasonable laws’’ (Wolfram, 2002, p. 750). According to Wolfram, this idea explains the phenomenon of free will. In other words, PCE explains and helps us understand why persons, dyads, groups, and collectives can follow option cutting/commitment, investments/returns, interdependence/cohesion, and titles/expectations, respectively, yet still show variation in behavior that is not accounted for by the theories (rules) per se.

In short, systems/entities have ‘‘free will’’; thus, in OB as in science in general, despite our simple theories’ attempts to account for a variety of complex behavior, we can still expect some variability in the behavior of persons, dyads, groups, and collectives.