
Technical Section

DESIGN AND SIMULATION OF INTERACTIVE 3D COMPUTER GAMES

KAMEN KANEV1,† and TOMOYUKI SUGIYAMA2

1 Visual Science Laboratory, Inc., Ochanomizu-Kyoun Building, 2-2 Kanda-Surugadai, Chiyoda-ku, Tokyo, 101 Japan, e-mail: kanev@vsl.co.jp

2 Digital Hollywood Corp., Ochanomizu-Kyoun Building, 2-2 Kanda-Surugadai, Chiyoda-ku, Tokyo, 101 Japan, e-mail: tomo@dhw.co.jp

† Corresponding author.

Abstract: Design and development of attractive and competitive computer games is no longer a one-man task, but a complex multistage process with many participants. Discovering new game ideas and developing them further, designing and modeling game worlds and characters, evaluating and testing games: all of these tasks are conducted by specialists teamed to work together. In this paper we discuss tools and facilities supporting the collaborative game design and development process through rapid prototyping and simulation of 3D game worlds, characters, behaviors and other game functionality. Single player and multi-player games are addressed in the context of different hardware platforms and software approaches. We report our experience in building a Game Design and Simulation testbed environment (GDS) and its usage in location-based entertainment projects. Work on GDS has been carried out in the scope of the VirtuaFly project and during the development of the physical motion based commercial game VirtuaFly2. © 1998 Elsevier Science Ltd. All rights reserved.

Key words: computer game simulations, virtual reality (VR) games, 3D shared game worlds, networked virtual environments (VE), distributed interactive simulation (DIS).

1. INTRODUCTION

Dedicated game hardware, being crafted for playing games and not for software development, offers very little to facilitate game prototyping, presentation, redesign, testing and gathering of experimental data. On the other hand, computer game developers have always been striving to push the available game hardware to the limits of its sustainable performance. Specialized game development toolkits and dedicated software and hardware environments have been utilized for achieving this goal. Most such facilities are platform dependent and, while very useful at the game implementation stage, usually offer little help at the game design and prototyping stages. In contrast, higher level tools which are good for game design and prototyping tend to be more platform independent, but with limited real-time performance.

In this work we discuss a game design and simulation testbed environment (GDS) for supporting 3D game design, prototyping and evaluation. It can be used to prototype games and to evaluate design ideas for a wide range of computer games, including single player, multiplayer and networked games. We aim to facilitate game design and evaluation not only for dedicated platforms such as game consoles but also for personal computers and other general purpose computer systems. To ensure adequate game simulation and real-time performance over a wide range of platforms, we need scalable software which allows us to bring in as much computing and visualization power as needed. Design and simulation of games with low computing and/or graphics demands should be possible on affordable, low range general purpose computer systems. More demanding game simulations would require more powerful computer systems, such as those with multiple CPUs and graphics engines. The game simulation software should be capable of making efficient use of both the limited resources of low end computer systems and the full computing and graphics power of high end multiprocessor systems.

Another important and highly desirable feature is efficient handling and simulation of multiplayer networked games. To some extent, multiplayer networked games could be simulated on a single, sufficiently powerful general purpose computer graphics system. Such an approach, though, has many limitations and could hardly compete with the networked distributed simulations widely available nowadays. Therefore, distributed networking capabilities should be incorporated into the game design and simulation testbed environment. This makes it possible to distribute the simulation computation and visualization tasks over multiple networked computers whenever desirable.



2. GAME DESIGN AND SIMULATION STAGES

The game design and simulation environment is intended to be used throughout the entire game development process: general game design, game world and character modeling, game functionality implementation, game evaluation and gathering of experimental information, and final game implementation. Specialists with different profiles are involved in each of the above game development stages and GDS should provide appropriate, distinct services to all of them.

2.1. General game design

When new games are conceived, designers need to evaluate their ideas. Problems of novelty, originality, public acceptance, feasibility, implementability, performance, etc. require careful consideration. The creative process could greatly benefit if game design ideas are shared and widely discussed. Unfortunately, new game concepts are very difficult to communicate. Writing, talking, using drawings and even animation help, but do not completely overcome the communication gaps. The most that we could hope to convey by such traditional means would be a bleak impression of the newly conceived game. Moreover, it is quite impossible to experience the excitement that the game would bring without adequate game simulation facilities. Game designers would like to be able to see and feel their ideas working at a very early stage, before the actual game implementation has even begun. At the game design stage, the way the game feels is much more important than the specific details of the underlying graphics or game character behaviors. The latter two could be simulated in a quite general way while still conveying the genuine feeling of the game.

2.2. Game world and character modeling

In contrast to game designers, graphics designers are much more concerned with the models of the game world and the game characters than with the way the game feels when played. Nowadays game world models are significantly large, and adequate facilities for model partitioning and for concurrent modeling and design are essential. Often, many graphics designers contribute to the same game, designing different parts of the game model, sometimes taking over and continuing each other's work. When graphics designers are working on partitions of a given game world, they would like to be able to see how their work would integrate with that of their colleagues. This means that the game world should be properly structured for easy integration and interchange of partitions. Such a structuring would also facilitate reusability of models and partitions. In fact, the structuring of the game world is more a game design decision than a graphics design one. Therefore, an appropriate game world structuring scheme should be adopted at the game design stage and then refined during the graphics design stage. This will enable graphics designers to plug in and see refined partitions in the context of the general game world testbed model whenever desired.

2.3. Game functionality implementation

Another important computer game component is the story and all functionality associated with it. This includes game characters and their behaviors, game rules and objectives, etc. There are generic types of functionality, such as Newtonian objects, point awarding facilities, character controls, etc., which could be used in many different game simulations. Other, specific functionality might be simulated through some of the available generic types, while more peculiar functionality might need dedicated implementation. We would like to keep the character behaviors separate from their graphics representations whenever possible. That would give us more freedom to manipulate character appearances and their behaviors independently, and eventually to build up new characters on the fly. Game developers could implement specific game functionality in the context of the generic game design model, which could be upgraded with refined game partitions and character models as they become available. Game functionality and game character behaviors could be expressed in terms of actions, simple responses to stimuli and more complex behaviors. While dealing with such functionality, general facilities such as multichannel record and play, interpolation and extrapolation, etc. should also be provided.

2.4. Game evaluation and gathering experimental information

Adequate game simulation is important for gathering experimental information and successful evaluation of game ideas. The previous stages as discussed would help set up an appropriate environment for this simulation stage. We are dealing here with an environment approximating and simulating the real game appearance, performance, etc. The main objective is to let third parties, including potential customers, experience the new game ideas in conditions close to a real game play, so that we could gather extensive feedback information. During the simulation, facilities to simulate different conditions by changing parts of the game world, replacing game characters and modifying their behaviors, revoking and introducing new game rules, etc. will be necessary, along with extensive logging and analysis options.

2.5. Platform specific game implementation


We would like to secure a high level of re-use of these simulation components in the later platform specific game implementation.

First comes the re-use of the world model and character geometry data. To provide efficient use of the resources of the target game hardware, geometry data would need to be converted to appropriate platform related formats. Stand-alone tools should be used for such conversions.

Second comes the re-use of character behaviors and game functionality. Most of the simulated behaviors and other functionality could be implemented as scripts, with the most complex ones eventually directly coded. Since scripts are generally platform independent, they might be interpreted on the target game platform too. Directly coded functionality would certainly need some platform specific rewriting and adjustments.

In any case, while direct re-use of code might be limited, modeling and behavior data should be freely accessible and reusable.

3. SIMULATION COMPUTER PLATFORM AND ITS IMPLICATIONS

We would like to achieve real-time performance of our interactive 3D game simulations. It should be comparable to what we would get from the real game, say, running on a dedicated game console with optimized software and a well-tuned geometry database. Yet we would like to postpone developing target platform specific software and model tuning until after the game simulation and evaluation is complete. To achieve this we would most probably need to bring in more computing and visualization power than that of the target implementation platform. Recent models of the Sony PlayStation, Sega and Nintendo64 are delivering a level of performance at which no PC-based real-time simulation seems to be feasible. Therefore, considering these game consoles as potential targets, we elected to run our simulations on sufficiently powerful workstations with adequate graphics capabilities.

While high grade workstations with advanced graphics capabilities are nowadays available from many different vendors, we have chosen the Silicon Graphics, Inc. line of products, mainly for reasons of previous in-house experience. The base of SGI machines currently installed at our sites is quite extensive and immediately available. There are also several classrooms equipped with networked SGI workstations that could be used for multiplayer game simulations and evaluations. Apart from this, the SGI line of products offers a range of specific features that are highly desirable for our GDS.

SGI offers low range systems like the Indy and O2, going through the mid-range Indigo Impact and Octane and extending to the high range Onyx2 and Origin product line. Recent models are based on the Unified Memory Architecture (UMA) and the Scalable Shared-memory Multi-Processing (S2MP) architecture, thus overcoming performance bottlenecks and opening new dimensions for scalability. With the combined strength of the Cray and Silicon Graphics technologies, the new line of products demonstrates unsurpassed performance. SGI systems are executable-compatible, thus ensuring easy software migration. The SGI IRIX operating system comes with many additional features as compared to other UNIX distributions. Networked SGI workstations support multicast as a standard feature and provide a very good environment for distributed VR applications. Convenient graphically-oriented tools and APIs are available, including IRIS Performer, which is a vehicle to extract maximum performance from the SGI graphics hardware at all levels.

We are also considering bringing systems from other vendors within the scope of our simulations. Different approaches for platform independent networked simulation and visualization are addressed later in the text.

4. VIRTUAL ENVIRONMENTS AND GAME SIMULATIONS

The notion of virtual environment (VE) is often used to denote the specific software architecture and the underlying data models used in virtual reality applications [1, 2]. In computer game simulations, VE refers to the game world and character models, and the game simulation software architecture. Networked game simulations incorporate additional communication model components of the VE.

A presentation and discussion of different VE models and their components follow. In this discussion we will pay special attention to the communication components, since they often play a crucial role in shaping the entire simulation environment.

4.1. VE data models

Appropriate structuring of the game VE has always played an important role in the game design and development process. One classic way of imposing a structure over a particular game is to divide it into stages. This provides a means to organize the game world, game characters, their behaviors and other functionality in separate groups associated with each stage and to treat them more or less independently. Unfortunately, if no spatial relations exist between the game stages, the feeling of game continuity is easily lost in the game stage transitions.


Typically, static images or intermediate screens are presented during the stage changes to hide the loading and initialization delays.

For more powerful game hardware, the general tendency is for a rather complex graphical environment to be selected and then loaded before the game begins. Then, even if game stages are present, the game would still be played in that preloaded, and thus predetermined, environment. Nevertheless, depending on the player's actions, some games may load different graphical models during the play and switch between, say, an exterior and interior world, an underwater world, etc. In most cases that could be done in the background while the game play still continues. Spatial relationships also play a more important role on advanced game platforms than on low grade ones.

Another level of complexity arises in the networked multiplayer games where players share a simulated virtual game world by using many networked computers. Most game implementations assume that each player has a copy of the complete game database on his own computer. In the course of the game, state changes of different entities are communicated between the players' computers in order to keep the game VE synchronized.

As envisaged, our game simulations might be performed either on a single computer system or on several networked computer systems. We would like to adopt a VE structuring scheme which would be equally applicable in both cases. In a broader context, different structuring approaches and data models pertinent to VE have been explored recently. The most prevalent models could be classified as replicated homogeneous, shared centralized and shared distributed peer-to-peer or client-server [2].

The large scale, mainly military related simulations have been adhering to the replicated homogeneous model [2, 3]. Providing a local copy of a large homogeneous virtual world database to all the participants before the simulation starts saves a lot of network traffic later, since during the simulation only changes in object states need to be communicated [1]. Nevertheless, as the complexity and size of virtual worlds grow, it becomes next to impossible to maintain local copies at every participating host. Attempts to decrease the traffic through grouping of entities and mobile agents have been reported in [4-6]. Further complications arise with the increase of discrepancies between the local world representations over the simulation time, incurred by loss of messages. This happens because replicated homogeneous models are usually based on best-effort, non-reliable message delivery protocols. While ensuring better scalability in comparison to other, reliable network protocols [7-9], there is a clear tradeoff with regard to replicated database synchronization.

Shared centralized models rely on a specialized server computer which is solely responsible for maintaining the entire world state and communicating it to all of the participants as needed. The model is simple and easy to implement and maintain, but has some important limitations. First, it does not scale well, since all the traffic goes through a single server node [1, 10]. A second problem is the additional delay that rerouting through a server incurs as compared to peer-to-peer multicast and broadcast [3].

Despite their limitations, shared centralized models have been widely used in the gaming community. Apart from the many MUDs, MOOs, enhanced chatrooms, etc., many highly specialized game servers are currently in operation. The shared centralized model evolves to new realms as more computing and graphics power becomes widely accessible with the recent Pentium based PC models. For example, Ultima Online, although making use of an underlying centralized data model and a specialized game server, adopted distribution of the initial generic world and character game database through a retail channel on a CD-ROM.

Recently, more and more attention has been paid to distributed models and new mixed approaches. The central problem pertinent to the distributed models is ensuring database consistency and synchronization. Attempts have been made to address the problem by using reliable message delivery protocols [11]. Unfortunately, maintaining reliability and consistency incurred significant communication costs and the DIVE system [11] could support only a very limited number of simultaneous users. The Virtual Society Project [9, 10, 12] adopts some ideas from DIVE and attacks the scaling problems by reducing the level of data sharing so that consistency and synchronization protocols do not need to work across the entire system. In the VS project, the notion of aura [13] is used, which represents a dynamic portion of the virtual space, say a region of interest. Objects can register their auras with an aura manager in order to be notified when other objects enter their region of interest. The aura manager tracks database partitions, controls spatial interactions and maintains different levels of consistency. Its functions are based on the caching of static and some dynamic data, combined with non-reliable, locally ordered and globally ordered reliable message delivery mechanisms. A further drop in the network bandwidth is achieved through generalized dead-reckoning techniques, called movement behavior [9].
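To make the dead-reckoning idea concrete, the following C++ sketch shows first-order extrapolation of a remote entity's position and the owner-side test that decides when a fresh update must be sent. The structures, threshold test and names are illustrative assumptions, not the protocol of any of the cited systems.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(const Vec3& v, double s)    { return {v.x * s, v.y * s, v.z * s}; }
static double dist(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Last state broadcast for an entity; every host keeps one per remote entity.
struct EntityState {
    Vec3   position;
    Vec3   velocity;
    double timestamp;   // simulation time of the last update, in seconds
};

// Remote hosts extrapolate instead of waiting for the next packet.
Vec3 extrapolate(const EntityState& s, double now) {
    return add(s.position, scale(s.velocity, now - s.timestamp));
}

// The owning host compares its true state against what everyone else is
// extrapolating and sends a new update only when the error grows too large.
bool needsUpdate(const EntityState& lastSent, const Vec3& truePosition,
                 double now, double threshold) {
    return dist(extrapolate(lastSent, now), truePosition) > threshold;
}
```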


When activated, such an object would carry out the transition from the current server to the new one. A similar idea for transition between different worlds is exploited in MASSIVE (Model Architecture and System for Spatial Interaction in Virtual Environments) [16]. MASSIVE also uses auras, much like the Virtual Society project [9, 10, 12].

At the Stanford Distributed Systems Group, the PARADISE project is under development. Within its scope, work is done on reliable logging and a multicast channel directory service [8], advanced entity aggregation dead reckoning [5, 17] and object-oriented RPC [18]. Another large scale project, GreenSpace [19], is engaged in developing a new global communications and information environment for the 21st century. The prototype GSnet supports networked communications and a shared database among distributed applications. The GreenSpace world consists of internal and external parts, the latter managed by external video, audio and reliable multicast protocols. The internal parts are managed by a special application called ``Mr.N'', which is responsible for the networked database synchronization. The GreenSpace world database is based on groups which are represented as collections of chunks.

All the approaches that we have discussed so far establish some database model which is subsequently used for producing views into the virtual world. An approach which first assumes a view and then generates a model sufficient only for that view is described in [20, 21]. This approach uses Entity State Estimator and Network Link processes and potentially reduces the network load while providing different levels of resolution.

4.2. VE software architecture

Bringing more computing power into the simulation by assigning part of the computations to some other hosts on the network is obviously less costly than upgrading to a single, more powerful computer. To facilitate such load distribution, we design our GDS as a set of separate, concurrently executable tasks. The intertask communications can go through network channels, thus allowing the tasks to be spread over different workstations connected to a high-speed LAN.

Another level of complexity lies in using a heterogeneous, multiplatform computing environment. A widely exploited approach enabling multiplatform hosts to participate in the same simulation is to run dedicated, platform dependent simulation software on each of them. For example, in DIS based simulations [1], each host runs its own variant of the simulation software but with common algorithms. Other promising approaches use platform independent scripting languages such as Telescript, Tcl, Java and JavaScript, etc. Mobile Agents and Smart Networks have been suggested as a method to enhance DIS simulators [4]. The VR-protocol [22] from MAK Technologies provides a platform independent program execution environment and dynamic linking.

A new generation of software technology is emerging with the High Level Architecture (HLA). There is a hope that HLA will help products from different vendors evolve as fully interoperable over the network. In Gustavson [23], Microsoft's multiplayer gaming solution for Windows 95/NT, DirectPlay, is evaluated in the context of HLA. Similarly, features of HLA that support the VR-Protocol, as well as complementary capabilities that the VR-Protocol could provide to HLA, are discussed in Taylor [22].

5. THE GDS AND ITS VE MODEL

5.1. The GDS data model

The target environment for GDS is a high speed LAN where we can expect predictable network performance. This allows us to focus on game simulation problems, rather than deal with the problems of reliable data distribution and synchronization over large WANs. We assume that all the initial geometry data representing the game world and the game characters resides somewhere on the LAN. It can be provided in a number of files on different network nodes which are accessed by the simulation tasks whenever necessary. Standard facilities like NFS, HTTP, etc. could be used to ensure such access over the LAN.

An internal representation of the game world and the game characters is built by each simulation task from the geometric data available on the LAN. This internal representation is later used for visualization of the game world with respect to the different aspects of the current simulation. Many simulation tasks might run in parallel, say, each servicing different participants in the game or providing different views into the game world, etc. Obviously, the internal representations of the game world for all these tasks need not be the same. In fact, as in Michael and Brock [20, 21], if a view into the VR world is assumed first, then we only need to build an internal world representation satisfactory for that particular view. In GDS we adopt a dynamic internal game world data model which supports similar functionality.


Such internal representations are built incrementally, either when geometry data is first brought in or when it is reassembled later.

5.2. The GDS software components

As mentioned before, GDS consists of a number of tasks executed in parallel and communicating through the network. While all of the tasks could be executed on a single, powerful enough computer system, it would be more practical to distribute them over several workstations. The types of tasks currently included in the GDS are game clients, game servers, game interfaces, control interfaces and sound servers. Each such task is a separate application which is replicated and executed on all or some of the networked hosts participating in the game simulation.
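As an illustration of how such replicated tasks could exchange state over the LAN, the C++ sketch below joins an IP multicast group with standard BSD sockets and sends a small state message. The group address, port, message layout and function names are assumptions; the paper does not specify the GDS wire protocol.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

// Open a UDP socket and join the multicast group so that state updates
// from the other GDS tasks on the LAN are received.
int joinGameGroup(const char* groupAddr, unsigned short port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(port);
    bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    ip_mreq mreq{};
    mreq.imr_multiaddr.s_addr = inet_addr(groupAddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
    return fd;
}

// A small fixed-size state message; the layout is purely illustrative.
struct StateMsg {
    int   entityId;
    float position[3];
    float velocity[3];
};

void sendState(int fd, const char* groupAddr, unsigned short port, const StateMsg& msg) {
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_addr.s_addr = inet_addr(groupAddr);
    dest.sin_port = htons(port);
    sendto(fd, &msg, sizeof(msg), 0, reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
}
```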

The game client application (Fig. 1) is responsible for maintaining an internal game world model and visualizing it with respect to specified views. Both independent and player-related views are supported. The client application also maintains local and remote entities.

The game server application (Fig. 2) is responsible for tracking the player-related views and guiding the clients to modify their internal game worlds appropriately. It also functions as a multichannel recorder/player which can record or replay prerecorded sequences on demand.

The game user interface application (Fig. 3) is responsible for collecting and processing the players' motion data, which is then put on the network. Game clients and game servers use the players' motion data provided by this interface task. Alternative input streams from joysticks and game console controllers are also supported.

The sound server application (Fig. 4) plays prerecorded sound files on demand. Its functionality is described in more detail in Section 6.3.

The simulation control interface is used for controlling the entire simulation. It is implemented as a menu script with several underlying executables and other scripts.

6. PILOT IMPLEMENTATION OF THE GDS

6.1. Game world prototyping

6.1.1. Components. At the general game design stage, we are interested in an approximation of the game world as envisaged by its creator. This should be done at the lowest acceptable resolution so that time and effort are saved. As our vision of the game evolves in the course of the game development and simulation, we gradually move to higher resolution models. We would like to convey the right feeling of space and distance through the simplest possible geometry with appropriate texture. For example, at the lowest level of resolution the game world could be represented as a texture mapped extruded shape with the view point placed close to its center line. Appropriate scaling and texturing could make it look either like a narrow tunnel or like a wide open space. For example, in Fig. 10, both the far fog and the sky are represented by textures mapped on the surrounding extruded shape. Acceptable appearance can be maintained as long as the view point is sufficiently far from the texture mapped walls and remains relatively static. The difference between entirely texture mapped and sculptured walls can be seen by comparing the images in Fig. 8 and Fig. 9. The simplicity of the tunnel model in Fig. 8 becomes more apparent when seen in stereo or from a viewpoint moving in the vicinity of the walls. Better appearance is achieved by introducing models at higher levels of resolution. One possible way is by re-texturing low-resolution models and adding more geometry. In terms of the simple textured extruded shape, this means that some of the objects initially painted on the walls would be established as true geometric entities inside it. The resulting higher resolution models approximate the envisaged game world more closely. There are standard ways of dealing with resolution adjustments, for example by using LOD [24]. In our approach, we chose to handle this in a different way in order to support the dynamic assembly of internal world models, rather than just different views into a preloaded database. In the final game, we may seek true realism and thus we may need sophisticated geometry and LOD. For the simulation itself, though, less is sufficient, since we only need to build a convincing impression of the simulated game.

The dynamic world models are based on atomic objects organized in sections (Fig. 5). All simple artefacts which expose no internal structure related to the game simulation should be considered atomic objects. Sections are structural objects which may have geometry and other attributes and can be used as containers for any number of atomic objects or sections. For example, rooms in a building could be represented as separate sections while the furniture in the rooms could be considered as atomic objects. Similarly, simple game worlds could be constructed from sections representing rooms and connecting corridors.

Fig. 2. Internal organization of the game server applications.

By structuring the game world into sections we effectively introduce levels of hierarchy which can be kept separate from the actual geometry. Then, a preliminary culling can be done to identify the sections relevant to a particular view and maintain a database associated with it. It is important to point out that the structuring into sections does not have to be spatially related. Of course, in most practical cases a spatial organization of sections may be a good choice. But many games nowadays, although being played in a 3D world, could be represented by one or two dimensional sequences of sections. We discuss our experiments with such game worlds later in this paper. Nevertheless, the world descriptions that we use are more general and allow us to associate lists of sections with n-dimensional coordinate values. In this way, in the one dimensional case one coordinate is used for positioning in the game world while the others could be treated as describing section properties, etc. Sections may be adjacent so that players could physically move between them. But sections may also overlap or contain each other, for example when representing different levels of resolution.

The internal game world representation is built by selecting appropriate elements from the world description. This process is controlled by a metric in n-dimensional space and may be considered as a combination of priority ordering and culling which takes place in the gameserver task. This gameserver (Fig. 2) tracks a view, refers to the game world description and produces a list of sections (Fig. 5). Then, an internal game world representation is built on the basis of this view by the client application (Fig. 1). Many views can be supported simultaneously.

In our model, the global game world description is kept separate from the actual geometry. Thus the game world could be reshaped by changing the world description files, which have to be kept synchronized at all the simulation hosts. This also makes rapid prototyping and reuse of model data easier, as different world descriptions could refer to the same geometry entities.
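A minimal sketch of such a world description is given below, assuming the one-dimensional case where a single coordinate positions a section along the game world. The class names and the multimap layout are illustrative assumptions, not the actual GDS file format.

```cpp
#include <cmath>
#include <map>
#include <string>
#include <vector>

struct AtomicObject {
    std::string geometryFile;   // reference into shared geometry data on the LAN
};

struct Section {
    std::string name;
    std::vector<AtomicObject> objects;   // atomic objects contained in the section
};

// Lists of sections associated with a coordinate value (one-dimensional case).
using WorldDescription = std::multimap<double, Section>;

// Game-server side culling: pick the sections whose coordinate lies within a
// given radius of the tracked view, in coordinate order.
std::vector<Section> sectionsForView(const WorldDescription& world,
                                     double viewCoord, double radius) {
    std::vector<Section> result;
    for (const auto& [coord, section] : world)
        if (std::fabs(coord - viewCoord) <= radius)
            result.push_back(section);
    return result;
}
```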

6.1.2. Importing. Game worlds are usually constructed using geometric modelers. This is an interactive process in which designers create and modify complex geometric shapes. Unfortunately, the resulting geometry can hardly be modified and restructured outside of the modeling software used to create it. Alternatively, a procedural approach could be used for creating geometry with minimal human assistance. The problem is that arbitrarily shaped objects are difficult to parametrize using such a procedural approach.

In our approach to the construction of game worlds we bring together the interactive and procedural ways of construction: interactive design of components and procedural assembly of these components into sections. In order to facilitate approximate representation of game worlds, we are investigating intermediate levels of complexity. It appears to be advantageous to provide procedural ways of generating such intermediate complexity geometry and still use advanced modelers when needed. Procedural modeling could quickly provide a crude substitute for the simulated game world which could be used as an initial testbed. Then, more refined geometry prepared by human designers is gradually integrated. Another reason for adopting such an approach is the fact that surprisingly few of the current VR modelers offer automatic LOD generation [24]. And when it is offered, general polygon reduction algorithms are usually applied off-line. This means that simpler models are generated on the basis of reducing the geometrical complexity of detailed models. In contrast, we incrementally build more and more detailed world models at run time.

To achieve this, we need full access to the underlying data structures. Since we use SGI Performer for our simulations, we need to access its internal geometry data representations. As in most modern visualization systems, the graphical data in Performer is organized as a tree structure containing graphics state information and geometry information. Specialized node types for grouping and geometry, for transformations, level of detail, animation and morphing, etc. are supported.

The task of importing VR world data into Performer consists of building an appropriate tree structure from what is provided in a given graphics file. This data conversion process is called importing and the software responsible for it is called an importer. Currently, Performer comes with more than 30 standard importers, most of them provided by geometric modeler suppliers. The difficulty which arises is in the combined use of such models and modelers. Although the importers effectively convert different graphics files into a common internal Performer data structure, they do not provide means to integrate and blend such structures. One solution is to do the integration by writing specific application code and including it in the target Performer application. This is obviously not suitable for our simulations, as we would like to minimize the need for customizing the application code. Therefore, we decided to develop some standard integration functions and make them available to all Performer applications through a standard mechanism. We opted to implement these functions as shared objects (DSOs) so that they could be accessed by the applications only when needed at run time, with no overhead if not accessed. Basic or standard integration functions are difficult to identify and in fact would vary depending on the application. That is why we decided to define our basic functions not on the basis of the application needs, but on the basis of the underlying Performer data structures. As all the graphics data is finally converted into a tree built from Performer nodes, we developed a generic way of building such trees on a node by node basis. Building such internal structures in fact becomes part of the support for our dynamic internal data model. Yet the data importing is not part of the game simulation application itself, because it is provided as shared data objects and thus can be dynamically changed during the simulation. Different DSO objects could build internal game world representations at different resolutions while still using the same data files.

The basic importers that we have developed correspond roughly to the types of nodes supported by SGI Performer and exhibit common general functionality. First, when a file name with model data is supplied, it is analyzed and, depending on its extension, an appropriate DSO is loaded and initialized by the operating system. Then, this DSO is executed with the content of the file being supplied as input data. The new importers parse this input for valid tokens and interpret them, while treating everything else as comments or file references in a uniform way. This is a recursive parsing which produces data structures of arbitrary complexity and depth while keeping the parsing and interpretation of individual files quite simple. The basic importers provide for the building of internal game world representations, possibly at different resolutions, from predesigned objects. But to generate actual geometry, other types of nodes are needed. In Performer, this is done by geode nodes and they have to be used when geometry data files are loaded. We would also like to have specialized geometry nodes to represent different types of geometry.
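The extension-driven dispatch could look roughly like the sketch below, which relies on the standard dlopen/dlsym mechanism. The DSO naming convention and the exported entry-point name import_file are assumptions made for illustration; they are not the names used in GDS.

```cpp
#include <dlfcn.h>
#include <string>

// Each importer DSO is assumed to export a function with this signature that
// parses the supplied file and returns an opaque pointer to the sub-tree it built.
using ImportFn = void* (*)(const char* filename);

void* importByExtension(const std::string& filename) {
    std::string::size_type dot = filename.rfind('.');
    if (dot == std::string::npos) return nullptr;
    std::string ext = filename.substr(dot + 1);        // e.g. "tunnel"

    // Load the importer matching the extension, e.g. "import_tunnel.so".
    std::string dso = "import_" + ext + ".so";
    void* handle = dlopen(dso.c_str(), RTLD_NOW);
    if (!handle) return nullptr;

    ImportFn importer = reinterpret_cast<ImportFn>(dlsym(handle, "import_file"));
    if (!importer) return nullptr;

    // The DSO parses the file (or, for names like 3_1_0.tunnel, the parameters
    // encoded in the name itself) and returns the assembled nodes.
    return importer(filename.c_str());
}
```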

We consider conformance with the de facto and emerging standards as one of the priorities of our application. The VRML97 proposal for an ISO standard deserves a special mention in this context. VRML 1.0 emerged from the SGI OpenInventor ASCII format. In VRML 2.0, or VRML97, important new features have been included in order to better support dynamic geometry and interactions. With the MovingWorlds proposal this is carried further toward multi-user networked environments. The nodes that we have discussed so far could be directly handled in a VRML 2.0 compliant application. We also experimented with building structures similar to those supported by the VRML 2.0 extrusion node, which has no analog in Performer.

For example, in our implementation, filenames such as 3_1_0.tunnel are handled by a dedicated DSO importer and tunnel-like extruded geometry is generated as a result. The ability to supply numerical parameters directly in the file name is introduced as a convenience tool for easy generation of regularly shaped tunnels for test purposes. A true file would have to be created and its content provided only if parameters other than those in the file name are required. Similarly to the VRML 2.0 extrusion node, the tunnel can be defined by the following parameters:

. a 2D crossSection piecewise linear curve, described as a series of connected vertices;
. a 3D spine piecewise linear curve, also a series of connected vertices;
. a list of 2D scale parameters;
. a list of 3D orientation parameters.

This definition, however, is rather restrictive. In particular, while the intermediate cross-section orientations and scales can be controlled along the spine, it is not possible to adjust the shape in other ways. To produce smooth-looking geometry, additional parameters and more sophisticated calculations than those described in the VRML 2.0 specification are necessary. We support a number of such additional global and per-vertex parameters; for example, the integer values of the shape per-vertex parameter control the wall normals at a given vertex in the u and v directions. Material handling and texture mapping are also enhanced. In addition to the automatic generation of texture coordinates as for the VRML 2.0 extrusions, other texture mappings can be directly specified. That makes it easier to produce looks such as those in Figs 8, 14 and 15, etc., where the walls, the floor and the ceiling are mapped with different textures.

We also provide a set of fitting algorithms as alternatives to the direct supply of cross-section orientations and scales. With no specific algorithm, however, the selected extrusions are generated much in the same way as prescribed in the VRML 2.0 specification. Other VRML 2.0 compliant node types are also under implementation. In particular, more information about the sound node type is given in Section 6.3.
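For illustration, a stripped-down extrusion sweep is sketched below; it handles only per-point scaling, with the cross-section kept parallel to the XY plane, which is adequate only for spines running roughly along Z. The actual implementation additionally supports per-vertex orientation, normal control and texture parameters as described above.

```cpp
#include <vector>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Returns one ring of vertices per spine point; consecutive rings are later
// stitched into quads (or triangle strips) to form the tunnel walls.
std::vector<std::vector<Vec3>> extrude(const std::vector<Vec2>& crossSection,
                                       const std::vector<Vec3>& spine,
                                       const std::vector<Vec2>& scale) {
    std::vector<std::vector<Vec3>> rings;
    for (std::size_t i = 0; i < spine.size(); ++i) {
        Vec2 s = (i < scale.size()) ? scale[i] : Vec2{1.0f, 1.0f};
        std::vector<Vec3> ring;
        for (const Vec2& p : crossSection)
            ring.push_back({spine[i].x + p.x * s.x,
                            spine[i].y + p.y * s.y,
                            spine[i].z});
        rings.push_back(ring);
    }
    return rings;
}
```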

6.2. Active objects

Object behaviors in our implementation are supported by scripts of actions that must be performed over a given time interval. Although we support features functionally similar to the event-processing mechanisms described in the VRML 2.0 specification, full conformance is beyond the scope of our current implementation.


Ecient description of complex object behaviors might be greatly facilitated if we could identify some simple actions and determine general patterns for building more complex behaviors from them. Basically behaviors of active objects can be divided into two independent categories, namely initial behaviorsandaction behaviors. Initial behavior is an object behavior which takes place at the section initialization stage. More precisely, when a section is added to the current view or, say, is about to be entered by the player, the initial behavior scripts for all objects in that section will be executed. The object action behavior in contrast is triggered on a proximity basis or by sending messages. When sec-tions are dropped from the current view, we might need to execute some maintenance scripts, but at this time we do not treat them as describing object behaviors.

Some examples of initial object behaviors that we believe to be suitable for general game simulations follow. Object behaviors in the gaming world might be independent of or linked to the position of the player. Maybe the simplest case would be a fixed position object that is placed somewhere within the currently activated section. The object would not move, although it may spin, change its shape, color, etc. Depending on the object, the player may have to hit it and get a reward or may have to avoid it, otherwise risking a penalty. A moving object might be initially placed close to the player and then start moving away from him. Such a behavior could prompt the player to chase it in order to get points, etc. (Fig. 14). Alternatively, an object could be initially placed somewhere far from the player and then start approaching (Fig. 13). Depending on the object, this would prompt the player to try to escape a direct hit, to wait for such a hit or to hurry to hit the object as soon as possible. A moving object could also follow a predetermined path which does not fall in the previous two categories. When combined with carefully chosen timing and other parameters, initial object behaviors of the above described types could significantly enhance the gaming experience. For example, active objects may be set up to disappear after a given time interval elapses. This way, delayed player reactions would result in losing chances to collect points, etc. Objects incurring a penalty when being hit may effectively act as obstacles in narrow passages, thus slowing the player down and making him wait until the objects move or disappear (Fig. 15).

Similarly to the initial behaviors, objects can also be assigned action behaviors which take place when an object is activated. In contrast to the single initial behavior, each object could have more than one action behavior associated with it. Here we will discuss some examples of possible action behaviors. Quite often, small prizes like coins, flowers, etc. are scattered around in the game world and the player is awarded points for collecting them. A typical behavior of such objects when picked up or just approached is to disappear. Such a behavior could be implemented by a regular object action script containing an instant object position change, e.g. a jump to a distant place, unreachable by the player. Alternatively, the new object position could be selected to be in the vicinity of the player. If it lies on the player's projected movement path, he will have a chance to collect more and more points. Depending on the specific positioning algorithm, the object might prompt not only forward, but also backward, movements of the player, thus effectively slowing down the pace of the game (Fig. 16). Generic object behaviors could also be based on simple Newtonian object models representing kicking a ball (Fig. 13) or escaping from cannon fire. Object behaviors could be further diversified by introducing pseudo-random parameters that control the ultimate displacements, directions, velocities, etc. This ensures that the player never knows what is to happen the next time he faces the object. It is also a game design choice whether or not objects should interact with the other parts of the model. For example, a ball may or may not be permitted to go through the walls in the model.

The examples above, which describe different initial and action behaviors, are by no means complete. Nevertheless, they represent a minimal set of behaviors that could be used in developing simple test games. All of them are supported by the simple simulation model developed by us, which was used for the experiments and game implementations described in the following sections.
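A minimal sketch of how the two behavior categories could be attached to an active object follows; the callback style and class layout are assumptions made for illustration, since in GDS behaviors are expressed as scripts rather than compiled callbacks.

```cpp
#include <functional>
#include <string>
#include <vector>

struct ActiveObject {
    std::string name;

    // Single initial behavior, run once when the containing section is added
    // to the current view (e.g. place the object and start it moving away).
    std::function<void()> initialBehavior;

    // Any number of action behaviors, triggered on proximity or by messages
    // (e.g. jump to an unreachable position when picked up, award points).
    std::vector<std::function<void(const std::string& trigger)>> actionBehaviors;

    void onSectionActivated() const {
        if (initialBehavior) initialBehavior();
    }

    void onTrigger(const std::string& trigger) const {
        for (const auto& behavior : actionBehaviors) behavior(trigger);
    }
};
```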

6.3. Sound support

The game world as constructed may consist of parts with different sound environments which are relatively isolated from each other. For example, the sound environments in two rooms with a closed door between them might be quite different. On the other hand, when the door opens, noise from the nearby room should penetrate more easily. And this is just one of the difficulties we have had to deal with in attempting to create a convincing sound environment within a given game world with respect to each player.


Sound sources could be fixed, attached to static positions within the game world. They may also be attached to some active, moving objects. For example, flying vehicles like planes, helicopters, etc. may each produce a particular, distinct sound which, if properly implemented, could considerably augment the game experience. And obviously, separate sound channels for each player and observer would be needed.

The sound server application (Fig. 4) is designed to run on INDY workstations. Each INDY workstation has four output sound channels which can be used independently or combined in two stereo pairs. The sound server application maps onto these 4 hardware channels up to 16 software output sound queues which are automatically mixed for each channel. This means that a maximum of 16 simultaneously played sounds may exist. The number of actual sound streams (VirtualSoundChannels) is 256 and they are mapped to the 16 queues following the recommendations of the VRML 2.0 specification. The sound queues are supplied with data from sound files and controlled through the network. The sound servers can be configured to accept commands from different multicast groups, for example corresponding to different rooms in the game world. There is a separate group for the ambient sound, also used for priority voice messages. When moving sound sources enter a room, their sound commands have to be sent to the current multicast group and thus go to the appropriate sound server. A moving listener is represented in much the same way. The difference is that the sound server group channel number is modified, which effectively filters out all sound commands but those of the group representing the listener's close environment.
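The layering of channels described above can be summarized by the following sketch. The modulo mappings and the per-queue gain are assumptions; the paper only gives the channel counts and notes that mixing onto the hardware channels is automatic.

```cpp
#include <array>

constexpr int kVirtualChannels  = 256;  // VirtualSoundChannels exposed to the game
constexpr int kSoftwareQueues   = 16;   // software output sound queues
constexpr int kHardwareChannels = 4;    // INDY hardware output channels

// Which software queue a virtual sound channel feeds (illustrative mapping).
int queueForVirtualChannel(int virtualChannel) {
    return virtualChannel % kSoftwareQueues;
}

// Which hardware output a software queue is mixed into (illustrative mapping).
int hardwareChannelForQueue(int queue) {
    return queue % kHardwareChannels;
}

// Per-queue gain; since the INDY hardware cannot attenuate per channel, a
// distance-dependent volume would have to be applied here in software.
std::array<float, kSoftwareQueues> queueGain{};
```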

One diculty on the way to a more rigorous im-plementation of the guidelines of VRML 2.0 seems to be volume adjustment dependent on the dis-tances between the sound sources and the listeners. The sound support hardware currently available on SGI INDY does not allow independent attenuation on a per channel basis. Doing this in the software, on the other hand, requires signi®cant real time processing. However, our implementation as depicted in Fig. 4 employs a virtual sound channel which can be independently attenuated in principle.

6.4. Game interface

During the game simulations and evaluations, we need two functionally distinct user interfaces: one for controlling the entire simulation environment and another for actually playing the game under evaluation. More than one instance of these interfaces might be needed and simultaneously operated.

6.4.1. Overall simulation control. The control of the simulation environment is a task whose complexity varies depending on the game under simulation. For simple game simulations running on a few workstations close to each other, most of the setup and maintenance can be handled directly. Nevertheless, larger simulations, which involve distant workstations, do require a specialized control interface. Some of the standard functions that should be provided are starting and initialization of the simulation application tasks on all the participating workstations, monitoring of their status, gathering and logging experimental data, etc. Apart from that, the interface should be able to support game specific details which need special handling. In an attempt to satisfy these requirements, we designed our simulation control interface as a general menu script wrapped around a set of executables and control scripts.

When the simulation control interface is first invoked, a menu of selectable options is presented to the simulation manager. The menu consists of a few generic functions pertinent to the menu interface itself and many other options derived on the basis of the content of a directory given as a parameter. Both executables and control scripts may be present there. Most of the functionality is handled through generic record and play tasks, which are supplied with appropriate parameters and executed concurrently. This way, different action sequences could be simultaneously transmitted over the network by simple selection of menu options. Depending on the simulation environment, the actual network distribution would vary, but it remains hidden from the simulation manager since the executables and the simulation specific parameters are supplied automatically. The simulation setups could be organized in hierarchical structures of subdirectories for easy maintenance. The main simulation script is parametrized in such a way that it could be used for controlling various types of simulations without modifications.

6.4.2. Game interfaces. The player interface to the game used in our experiments is depicted in Fig. 3. Along with common input devices such as mice, joysticks, game console controllers, etc., we also support camera based input. Our belief is that game control by natural human motions, not restricted by wiring or any other physical links, could significantly augment the game experience and satisfaction (Fig. 6). One of the technologies that provide for such capturing of unconstrained human motion is the motion tracking from Motion Analysis Corp. A system based on this technology has been in use for several years at our company. It consists of six cameras, fitted with strobe lights and high speed shutters, controlled by a specialized interface computer. The system captures human motions by tracking reflective markers attached to the performer's body. Nevertheless, given its price and the complexity of its operation, the system could hardly be used as a general computer game interface.


A high signal to noise ratio is achieved again by using strobe LED lights and a high speed shuttered video camera. Then, inexpensive sticky reflective tape as well as handheld markers can be used by the player. While the markers themselves do not restrict the player's motions in any way, for practical reasons we still have to limit the motions that we process. Essentially, these should be motions which can be distinguished on the basis of a single camera view. Indeed, as shown in Fig. 3, the video images are processed by the dedicated motion analysis computer system, responsible for the marker tracking. Then a stream of 2D marker coordinates is transmitted over an RS232 link to an INDY workstation. These coordinates are matched to a physical motion model of the player's body that resides on the INDY as part of a separate user interface task. A motion analysis algorithm is used to determine headings, velocities and accelerations which are then communicated to the game simulation applications on the network. While there is much room for experiments with different human motion models, our original idea was to control a flying motion in a way much like a bird. If we imagine a flying human with wings stretched along his arms, then what should be the motions to control the flight? The sample in Fig. 6 shows one possible mapping, in which the plane's inclination and the consecutive turn follow those of the player.
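As an illustration of the bird-like mapping shown in Fig. 6, the sketch below derives a roll angle from the two hand markers and lets the turn rate follow it. The marker assignment and the gain constant are assumptions; the actual motion model and tuning are not given in the paper.

```cpp
#include <cmath>

struct Marker2D { float x, y; };   // 2D image coordinates from the tracker

struct FlightCommand {
    float roll;      // radians, positive = bank right
    float turnRate;  // radians per second
};

FlightCommand mapArmsToFlight(const Marker2D& leftHand, const Marker2D& rightHand) {
    // Inclination of the "wings": angle of the line between the two hands.
    float dy = rightHand.y - leftHand.y;
    float dx = rightHand.x - leftHand.x;
    float roll = std::atan2(dy, dx);

    // The consecutive turn follows the bank, as described for Fig. 6.
    const float kTurnGain = 0.8f;   // assumed tuning constant
    return {roll, kTurnGain * roll};
}
```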

In addition to the simple flight controls, the mapping also provides for game flow control through menu access and selection functions. Experiments with different numbers of markers and other motion detection algorithms are in progress. Options for direct camera connection and image processing on an SGI workstation are also being investigated.

7. EXPERIMENTS WITH THE GDS

The first experiments with physically based flight simulations at VSL were carried out in the scope of the VirtuaFly project. The resulting VirtuaFly demo application proved the feasibility of our ideas and was successfully shown at several events. Unfortunately, as initially implemented, VirtuaFly required significant computing and graphics power, so we needed a multiple CPU Onyx machine to run it. In order to make a commercial product out of it, we needed to optimize its performance for more affordable systems such as the Indigo Impact. Improving the game interface, the game design and the functionality was also planned at the same time.

7.1. VirtuaFly2

The VirtuaFly2 project was launched with the objective of developing a commercial amusement product based on the experience from VirtuaFly.


This project was the first one to take advantage of the game design and simulation testbed environment.

VirtuaFly2 is a multiplayer virtual reality game based on simulated physical motions of the player's body. Player movements are detected by the motion tracking interface subsystem and mapped to the player's avatar in the virtual game world. The player can then control the flight of the avatar by natural body movements. VirtuaFly2 is played in front of a large screen so that the public can observe both the players and their representations in the virtual game world. A stereo image is projected, which enhances the game immersion effect. Since the on-screen stereo images could only be seen through special glasses, in some cases with particularly large audiences we opted for plain non-stereo projections. Even then, players could wear HUDs and still see the game world in stereo.

VF2 is a game which brings a new, exciting, immersive VR experience through a relaxed, casual flight in the world of the future (Fig. 7). This is a truly interactive, real time game and its true sense can only be felt during play. Unfortunately, here we can only provide static images recorded in the course of some real games. The color image pairs in Figs 8-10 are intentionally reduced in size in order to be easily seen in stereo with the naked eye.

Both the stereo and non-stereo versions have been installed and exposed to the public on many occasions throughout Japan, including, but not limited to:

. The Science Museum of Electricity in Nagoya city in July 1996, with more than 4000 visitors;
. The Softpia Japan Opening Event in Oogaki city, Gifu prefecture, in August 1996, with almost 20000 visitors;
. The NTT New Life Fair in Nagoya city in September 1996, with more than 3000 visitors.

VirtuaFly2 has been shown in stereo at the Virtual Reality Society of Japan Annual Conference in October 1996 [25], and in non-stereo at NICOGRAPH'96 and MULTIMEDIA'96 (Tokyo, 20-22 November). There was a stand with VirtuaFly2 at the 78th Annual Convention and Trade Show of the International Association of Amusement Parks and Attractions, IAAPA'96 (New Orleans, 20-23 November). VirtuaFly2 has also been presented in a VisionDome, brought to Japan for the first time in January 1997 by CWC Studios. The VisionDome provides interactive immersive VR without the discomfort of head-mounted displays or special glasses. The non-stereo version of VF2, when played and observed from within the VisionDome, brought an unbelievable feeling of presence and truly 3D immersion, with a stunning effect on most of the visitors.

At most of the events a 250 MHz Maximum Impact with 128 MB RAM was used, which could support up to two players. A sample printout of the screen image for a two-player setup is given in Fig. 11. Alternatively, VirtuaFly2 can run in single player mode when installed on an O2 machine. More players can enjoy the game together when several workstations are connected to a high speed LAN. The local representation of two vehicles engaged in a high speed pursuit and actually simulated on remote hosts is shown in Fig. 12.

Design and development of VirtuaFly2 was conducted in parallel with that of the game simulation environment itself. In this process, general components from GDS were customized and later incorporated in the final version of VF2. Conversely, components originally designed for VF2 were optimized, upgraded with more general functions and established as components of the GDS. For example, some core functionality of the game server in GDS was brought into the final version of VF2 in the form of a component called the action server. We would like to stress again that GDS addresses a wide range of game simulations, including single player and multiplayer games on a LAN with possible extensions to a WAN.

Fig. 8. A tunnel exit to the city. The tunnel has flat, texture mapped walls.

Fig. 9. A tunnel connecting the hangar with the electric room. The walls of the tunnel are non-flat, sculptured surfaces.


In contrast, VirtuaFly2 is a commercial product whose scope is limited mostly to LBE (Location Based Entertainment).
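The interface of the action server mentioned above is not specified in this paper. One plausible shape for such a component, given here purely as an assumption, is a queue of named actions that game objects post and that registered handlers consume once per frame:

    // Hypothetical sketch of an "action server" style component; this is
    // an assumption about the design, not the actual VirtuaFly2/GDS code.
    #include <cstdio>
    #include <functional>
    #include <map>
    #include <queue>
    #include <string>

    struct Action {
        std::string name;   // e.g. "coin_collected"
        int         player; // which player triggered it
    };

    class ActionServer {
    public:
        void Register(const std::string& name,
                      std::function<void(const Action&)> handler) {
            handlers_[name] = handler;
        }
        void Post(const Action& a) { pending_.push(a); }

        // Dispatch everything that accumulated since the last frame.
        void Dispatch() {
            while (!pending_.empty()) {
                Action a = pending_.front();
                pending_.pop();
                auto it = handlers_.find(a.name);
                if (it != handlers_.end()) it->second(a);
            }
        }
    private:
        std::map<std::string, std::function<void(const Action&)>> handlers_;
        std::queue<Action> pending_;
    };

    int main() {
        int score[2] = {0, 0};
        ActionServer server;
        server.Register("coin_collected", [&](const Action& a) {
            score[a.player] += 10;          // a simple point-award rule
        });
        server.Post({"coin_collected", 0});
        server.Post({"coin_collected", 0});
        server.Dispatch();
        std::printf("player 0 score: %d\n", score[0]);
        return 0;
    }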

7.2. Continuing work

Currently we are conducting experiments with game simulations targeting lower end platforms. We believe that games simulated on a network of O2 machines could eventually be hosted on Pentium-based PCs and played over the Internet. This expectation is also in line with the recent developments bringing OpenGL to PC platforms and coupling it with advanced graphics chipsets.

At this stage, the prototype game worlds we experiment with are generated at run time from a database of shapes, paths and textures, following the one-dimensional world descriptions discussed in Section 6.1.
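The exact format of the one-dimensional world descriptions of Section 6.1 is not reproduced here. Purely for illustration, the sketch below assumes a one-character-per-section encoding that is resolved against a small database of section templates and laid out along the path at run time; the encoding, the template names and the lengths are all assumptions:

    // Illustration only: building a corridor-style world from a
    // one-dimensional description string. The encoding ('T' = tunnel,
    // 'H' = hangar, 'D' = automatic door) and the template table are
    // assumptions, not the actual Section 6.1 format.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct SectionTemplate {
        const char* shape;    // name of an extruded cross-section in the database
        const char* texture;  // texture applied to its walls
        float       length;   // how far this section extends along the path
    };

    // Minimal stand-in for the database of shapes, paths and textures.
    SectionTemplate Lookup(char code) {
        switch (code) {
            case 'T': return {"tunnel_round",  "concrete",  30.0f};
            case 'H': return {"hangar_box",    "metal",     60.0f};
            case 'D': return {"door_frame",    "warning",    4.0f};
            default:  return {"corridor_flat", "default",   20.0f};
        }
    }

    int main() {
        std::string world = "HTTDTT";           // 1D description of the level
        std::vector<SectionTemplate> sections;
        float path_position = 0.0f;
        for (char code : world) {
            SectionTemplate s = Lookup(code);
            sections.push_back(s);
            std::printf("at %6.1f m: %-13s (%s)\n",
                        path_position, s.shape, s.texture);
            path_position += s.length;          // sections are laid out along the path
        }
        return 0;
    }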

Fig. 11. Views for the two players as they appear on the SGI Maximum Impact screen. The yellow plane serves as a tour guide for the vehicle following it.


Fig. 13. A section with flat walls that appear convex due to the applied texture patterns. The green object in the center is coming toward the player and will respond to hits in a realistic way.


Fig. 15. A narrow tunnel with sharp edges and an approaching yellow star that will explode if touched.


We are interested in enhancing the game impact by appropriately combining particular geometry, textures and object behaviors. A low resolution game world model is built from sections implemented as extruded textured objects (Figs 13-16). Automatic doors are implemented as active objects with proximity sensors. All the initial and action behaviors discussed in Section 6.2 are supported. Currently, experiments are being carried out with different game strategies, point award and timing systems, etc. We simulate game challenges through generic actions as previously discussed, and create incentives for the players to collect coins, spare vehicles, bonus points, etc., while avoiding conflicts with dangerous objects as well as with furniture, doors and walls. Color, shape and sound clues are brought in to prompt certain user actions, to facilitate object discrimination and to encourage early recognition. Player attitude is investigated with regard to such clues, as well as the response to misleading signals and to delayed or distorted clue patterns.
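As an illustration of such an active object, the sketch below models an automatic door driven by a proximity sensor. The trigger radius, the opening speed and the class layout are assumptions made for the example, not the actual VF2 implementation:

    // Sketch of an automatic door as an active object with a proximity
    // sensor; threshold values and state handling are assumptions.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static float Distance(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    class AutomaticDoor {
    public:
        AutomaticDoor(Vec3 pos, float trigger_radius)
            : pos_(pos), radius_(trigger_radius), open_(0.0f) {}

        // Called once per frame with the player's avatar position.
        void Update(const Vec3& avatar, float dt) {
            bool triggered = Distance(avatar, pos_) < radius_;
            float target = triggered ? 1.0f : 0.0f;       // 1 = fully open
            float speed  = 2.0f;                          // open/close in ~0.5 s
            if (open_ < target) open_ = std::fmin(target, open_ + speed * dt);
            else                open_ = std::fmax(target, open_ - speed * dt);
        }
        float Openness() const { return open_; }          // drives the door geometry
    private:
        Vec3  pos_;
        float radius_;
        float open_;
    };

    int main() {
        AutomaticDoor door(Vec3{0, 0, 0}, 10.0f);
        Vec3 avatar = {30, 0, 0};
        for (int frame = 0; frame < 60; ++frame) {        // one simulated second
            avatar.x -= 0.5f;                             // avatar flies toward the door
            door.Update(avatar, 1.0f / 60.0f);
        }
        std::printf("door openness after 1 s: %.2f\n", door.Openness());
        return 0;
    }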

In the course of the experiments, actions and object behaviors can be derived on a pseudo-random basis. This guarantees a unique experience, since exactly the same game is never played twice. For example, simulations of different traffic conditions can be performed in one and the same database, but with a set of active objects controlled by a stand-alone stochastic model. Different simulations of the same traffic conditions will produce statistically equivalent results but still look different to the user. More precisely, while the general appearance, such as the average number of vehicles, will not vary much, the actual events, their timing and their order will differ. This means that the player will meet different vehicles at different times and thus have a chance to experience a variety of simulated situations.
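A minimal sketch of such a stand-alone stochastic model, assuming exponentially distributed inter-arrival times and a per-run seed (both assumptions made for illustration), shows how different seeds change the event timing while the average traffic density stays the same:

    // Sketch of a stochastic traffic model: vehicles are spawned with
    // exponentially distributed inter-arrival times, so different seeds
    // give different event timings while the average traffic density
    // stays the same. Rate and seed handling are assumptions.
    #include <cstdio>
    #include <random>

    int CountSpawnedVehicles(unsigned seed, float duration_s, float vehicles_per_s) {
        std::mt19937 rng(seed);
        std::exponential_distribution<float> gap(vehicles_per_s);
        float t = 0.0f;
        int spawned = 0;
        while (true) {
            t += gap(rng);              // time until the next vehicle appears
            if (t > duration_s) break;
            ++spawned;                  // here the game would create the vehicle
        }
        return spawned;
    }

    int main() {
        // Two runs with the same traffic conditions but different seeds:
        // the counts are close on average, the actual spawn times differ.
        for (unsigned seed = 1; seed <= 2; ++seed) {
            int n = CountSpawnedVehicles(seed, 120.0f, 0.2f);   // 2 min, ~1 car / 5 s
            std::printf("seed %u: %d vehicles in 2 minutes\n", seed, n);
        }
        return 0;
    }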

8. CONCLUSIONS

The Game Design and Simulation environment discussed in this paper has proved to be a powerful tool that significantly facilitates and enhances the entire game design and development process. Its usage provides for measurable evaluation of game designs and functionality and enforces efficient strategies for building game world and character models. This results in more rigorous planning and implementation with a guaranteed level of performance.

Acknowledgements. We would like to thank Jun Hong for leading the VirtuaFly project, Takayuki Kondo for the creative design, and all other colleagues who supported our research.

REFERENCES

1. Brutzman, D., Macedonia, M. R. and Zyda, M. J., Internetwork Infrastructure Requirements for Virtual Environments. Virtual Reality Modeling Language (VRML) Symposium, San Diego, California, December 13-15, 1995.
2. Macedonia, M. R. and Zyda, M. J., A Taxonomy for Networked Virtual Environments. IEEE Multimedia, 4(1), January-March 1997.
3. Brutzman, D., Graphics Internetworking: Bottlenecks and Breakthroughs. In Digital Illusion: Entertaining the Future with Interactive Technology, Clark Dodsworth ed., Addison-Wesley, Reading, Massachusetts, 1997.
4. Stone, S. et al., Mobile Agents and Smart Networks for Distributed Simulations. Proc. 14th Distributed Simulations Conference, Orlando, FL, March 11-15, 1996.
5. Singhal, S. K. and Cheriton, D. R., Using Projection Aggregations to Support Scalability in Distributed Simulation. Proc. 16th International Conference on Distributed Computing Systems, IEEE Computer Society Press, Hong Kong, May 1996.
6. Pratt, S. et al., Implementation of the IsGroupOf PDU for Network Bandwidth Reduction. Proc. 15th DIS Workshop, Orlando, FL, September 16-20, 1996.
7. Smith, W. G. and Koifman, A., A Distributed Interactive Simulation Intranet Using RAMP, a Reliable Adaptive Multicast Protocol. Proc. Fourteenth Workshop on Standards for the Interoperability of Distributed Simulations, Orlando, Florida, March 1996.
8. Holbrook, H. V., Singhal, S. K. and Cheriton, D. R., Log-based Receiver-Reliable Multicast for Distributed Interactive Simulation. Proc. SIGCOMM '95, ACM Press, Cambridge, MA, August 1995.
9. Lea, R. et al., Issues in the Design of a Scalable Shared Virtual Environment for the Internet. Proc. 30th Hawaii International Conference on System Sciences, January 1997.
10. Lea, R. et al., Technical Issues in the Design of a Scalable Shared Virtual World. Sony Research Forum SRF'95, Tokyo, 1995.
11. Carlsson, C. and Hagsand, O., DIVE – A Multi-User Virtual Reality System. IEEE Virtual Reality Annual International Symposium, 1993, pp. 394-400.
12. Honda, Y. et al., Virtual Society: Extending the WWW to Support a Multi-User Interactive Shared 3D Environment. Proc. VRML '95, San Diego, USA, December 1995.
13. Benford, S. et al., Managing Mutual Awareness in Collaborative Virtual Environments. Proc. ACM SIGCHI Conference on Virtual Reality (VRST '94), August 23-26, 1994, Singapore, ACM Press.
14. Thalmann, D. et al., Sharing VLNET Worlds on the Web. Proc. Compugraphics '96.
15. Capin, T. K. et al., Virtual Human Representation and Communication in VLNET. IEEE Computer Graphics and Applications, 17(2), March-April 1997.
16. Greenhalgh, C. and Benford, S. D., MASSIVE: a virtual reality system for tele-conferencing. ACM Transactions on Computer Human Interfaces (TOCHI), 1995, 2(3), 239-261.
17. Singhal, S. K. and Cheriton, D. R., Exploiting Position History for Efficient Remote Rendering in Networked Virtual Reality. Presence: Teleoperators and Virtual Environments, 4(2), Spring 1995.
18. Zelesko, M. J. and Cheriton, D. R., Specializing Object-Oriented RPC for Functionality and Performance. Proc. 16th International Conference on Distributed Computing Systems, IEEE Computer Society Press, Hong Kong, May 1996.
19. Mandeville, J. et al., GreenSpace: Creating a Distributed Virtual Environment for Global Applications. Proc. IEEE Networked Reality Workshop, Boston, MA, October 26-28, 1995.
20. Michael, D. and Brock, D. L., A Multiresolution ... 15th Workshop on Distributed Interactive Simulation, Orlando, FL, September 1996.
21. Michael, D. and Brock, D. L., A 3D Environment for an Observer Based Multiresolution Architecture. 1997 Spring Simulation Interoperability Workshop, Orlando, FL, March 3-7, 1997.
22. Taylor, D., The VR-Protocol and What it Offers HLA. Orlando, FL, March 3-7, 1997.
23. Gustavson, P., DirectPlay DIS – Another Way to HLA? 1997 Spring Simulation Interoperability Workshop, Orlando, FL, March 3-7, 1997.
24. Reddy, M., A Survey of Level of Detail Support in Current Virtual Reality Solutions. Virtual Reality, 1(2), 1995, Virtual Press.
