So what are rewards in the context of games? Rewards are items, treats or Easter eggs given to the player for successfully performing tasks and completing challenges throughout the game. They come in various forms, and are awarded in many different ways. For some, a reward can be a beautifully edited cut-sequence after killing a boss monster, for example. Or perhaps a reward is receiving a power-up to a weapon. In a racing game it could be receiving the gold cup after winning a championship, or the keys to another, more powerful car. There are small rewards also, like packets of energy, ammo collectables, or in the case of Dungeon Siege, armour, gold, weapons, magic spells, health, mana potions etc. Designers call them different things, such as power-ups or pick-ups. I simply call them rewards because that’s what they are. Whatever the reward, I always feel the player has to earn them; you should never give them away. There was a time when rewards were simply left lying around in the environments for the player to run over and collect, but that’s simply too easy and it doesn’t make sense. I feel that in order to acquire a reward, the player has to successfully complete a challenge or an action of some kind, no matter how big or small. Once the player has performed the action successfully, he is then rewarded, but the reward should be commensurate with the action or challenge performed.
opers with a common language in which to communicate verbally and through code. Simply saying “abstract factory” is easier than explaining what an abstract factory is over and over. Also, when looking at a stranger’s code that implements an abstract factory, you already have a general understanding of what the code is trying to accomplish. MapReduce design patterns fill this same role in a smaller space of problems and solutions. They provide a general framework for solving your data computation issues, without being specific to the problem domain. Experienced MapReduce developers can pass on knowledge of how to solve a general problem to more novice MapReduce developers. This is extremely important because MapReduce is a new technology with a fast adoption rate and there are new developers joining the community every day. MapReduce design patterns also provide a common language for teams working together on MapReduce problems. Suggesting to someone that they should use a “reduce-side join” instead of a “map-side replicated join” is more concise than explaining the low-level mechanics of each.
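To make the vocabulary concrete, here is a minimal, framework-free sketch of what a reduce-side join does, simulated in plain Python. The datasets, the source tags, and the in-memory "shuffle" are illustrative assumptions standing in for Hadoop’s sort-and-shuffle phase; this is not real MapReduce API code.

```python
from collections import defaultdict

# Two datasets sharing a join key (user_id); values are made up for the example.
users = [(1, "alice"), (2, "bob")]              # (user_id, name)
orders = [(1, "book"), (1, "pen"), (2, "mug")]  # (user_id, item)

def map_phase():
    # Map side: emit (key, tagged-record) so the reducer can tell sources apart.
    for user_id, name in users:
        yield user_id, ("U", name)
    for user_id, item in orders:
        yield user_id, ("O", item)

def shuffle(pairs):
    # Stand-in for the framework's sort-and-shuffle: group all values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce side: for each key, pair every user record with every order record.
    for key, values in sorted(grouped.items()):
        names = [v for tag, v in values if tag == "U"]
        items = [v for tag, v in values if tag == "O"]
        for name in names:
            for item in items:
                yield key, name, item

result = list(reduce_phase(shuffle(map_phase())))
# → [(1, 'alice', 'book'), (1, 'alice', 'pen'), (2, 'bob', 'mug')]
```

A map-side replicated join would instead load the smaller dataset into memory on every mapper and join during the map phase, skipping the shuffle entirely; knowing both names lets a team pick between them in one sentence.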
Tabs suggest that the content they link to is all part of the same cohesive whole. The tab metaphor is borrowed from a real-world desk—or, more specifically, the tabbed hanging folders you might find in your desk drawers. Just as we can quickly and easily flip between tabs in a folder or in our operating system, it’s common to see the content of tabs appear without a page reload. Sometimes the transition between content areas is animated in some way, for example, sliding from left to right or right to left. This technique became extremely popular again after Panic, a software company, used the effect on their site, seen in Figure 4.19. Obviously it’s difficult to demonstrate animation in a printed book, so I definitely recommend you visit the Coda site 16 to see it in action.
It would be easy to assume that hitting is simply a brutish act with little finesse. But what’s great about hitting is that it scales with skill levels. While it’s easy to hit something, it’s very hard to hit with skill. Take baseball again. Swinging a Wiffle ball bat and knocking a ball off a tee is not that difficult. With a little coordination and a few swings, we can all master it. Hitting a softball thrown by a pitcher can be more difficult. The ball flies through the air and you must quickly calculate its speed and arc, plus how long it will take you to bring your bat around in time to make contact. All the while you have to keep your eye on the ball if you want to hit it. And if hitting a softball seems hard, hitting a fastball thrown by a professional pitcher is nearly impossible. In his book How We Decide, Jonah Lehrer breaks down the near absurdity of hitting a Major League Baseball pitch. A typical Major League pitch takes 0.35 seconds to fly from the pitcher’s hand to the catcher’s mitt. It takes a batter about 0.25 seconds for his muscles to initiate a swing. It takes a few milliseconds for the visual information of the oncoming pitch to travel from the retina to the visual cortex. This leaves the batter with about five milliseconds to decide if he is going to swing. The problem is people can’t think that fast. It takes the human brain about 20 milliseconds to even react to sensory input.2
Of course, in the way of non-Hârn: Bloodline related news, there was the now legendary Auran vs. BigKid Quake III tournament. This was such a big deal that I actually bothered to come into the office that day. I couldn't help noticing that BigKid posted a story with the headline "BigKid owns Auran in Q3", along with a *ahem* carefully chosen fragcount screenshot. Now that's not _my_ recollection of events. The way I saw it from 5:30 through to 7:30 when I had to go was that the Auran boyz -- especially Blahnana and Who Was That/Campin' Man -- were nestled comfortably in the top spots, with ReemeR and mOenadz (clearly a reference to Leibniz' "Discourse on Metaphysicz") settling, like a school of fish poisoned by industrial effluent in a Tokyo bay, at the bottom. The demos (as recorded by ace reporter sprayNwipe) show yours truly yielding first place only to Handsome, who simply got lucky, let's face it. Of course they *claim* that they started to clock up the frags after 8:00pm -- conveniently after myself and sprayNwipe (ace reporter, remember?) had left.
Even though a review of the international legal framework has shown that a right to act anonymously on the Internet is not yet explicitly included in legal instruments, there is no evidence that such a right should not be part of the widely acknowledged right to keep certain personal data confidential, particularly given the described correlation between anonymity and the fundamental right to privacy. The legally consolidated protection of the private life, home and correspondence of Internet participants pleads for the existence of a right not to be totally monitored; in fact, States have an obligation to create an environment free of surveillance by improving the existing legislative frameworks. However, a right to rely on anonymity cannot be without limits, since State interests do exist that justify governmental intervention into the sphere of individuals. To avoid weakening the individual protection regime, the rules allowing such interventions must be interpreted narrowly.
While the results of this survey clearly indicate certain patterns of tool usage and salary, we should remember some of the limitations of this data. Because respondents were sampled from attendees at two conferences, these results capture a particular category of professionals: those who are heavily involved in big data, or highly motivated to become so, often using the most advanced tools that the industry has to offer. This study shows one perspective of modern data science, but there are others.
A little later (approximately two years ago) people started to really feel the pain of managing their applications at the VM layer. Even under the best circumstances it takes a brand new virtual machine at least a couple of minutes to spin up, get recognized by a load balancer, and begin handling traffic. That’s a lot faster than ordering and installing new hardware, but not quite as fast as we expect our systems to respond.
Wherever you go, whatever you do, anywhere in this world, some “thing” is tracking you. Your laptop and other personal devices, like an iPad, smartphone, or BlackBerry, all play a role, and contribute to building a very detailed dossier of your likes, concerns, preferred airlines, favorite vacation spots, how much money you spend, political affiliations, who you’re friends with, the magazines you subscribe to, the make and model of the car you drive, the kinds of foods you buy; the list goes on. There are now RFID chips in hotel towels and bathrobes to dissuade you from taking them with you, while your in-room minibar collects information about every item you’ve consumed (to ensure that it’s properly stocked for your next visit). That convenient E-ZPass not only makes your commute easier, but it also helps to provide an accurate picture of your whereabouts on any given day, at any given time, as do all the video cameras installed at ATMs, in stores, banks, and gas stations, on highways, and at traffic intersections. Your car collects information about you: your location, speed, steering, brake use, and driving patterns. Although your home may be your castle, it is not, in the world we now live in, impenetrable. Google Maps provides a very accurate and detailed picture of it, and in the course of getting that picture, if you happened to have an unencrypted Wi-Fi network, Google may have scooped up personal data as well. You may be aware of all the digital tracking being done by the Internet giants (Google, Facebook, and the rest), but with almost 40 percent of PCs worldwide infected with some form of malware that can gather information and send it back to its authors, that may be the least of your worries.
Rather than a stream of bytes, Kafka provides a stream of messages, which saves the first step of input parsing (breaking the stream of bytes into a sequence of records). Each message is just an array of bytes, so you can use your favorite serialization format for individual messages: JSON, Avro, Thrift, or Protocol Buffers are all reasonable choices. It’s well worth standardizing on one encoding, and Confluent provides particularly good schema management support for Avro. This allows applications to work with objects that have meaningful field names, and not have to worry about input parsing or output escaping. It also provides good support for schema evolution without breaking compatibility.
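As an illustration of the point that a Kafka message value is just an array of bytes, here is a minimal sketch of the producer-side and consumer-side halves of a JSON encoding. The record fields are invented for the example, and a real deployment would hand `payload` to an actual Kafka client (and likewise get bytes back from one) rather than calling these functions back to back.

```python
import json

def serialize(record: dict) -> bytes:
    # Producer side: object -> bytes, ready to hand to a Kafka producer as the
    # message value. Avro, Thrift, or Protocol Buffers would fill the same role.
    return json.dumps(record).encode("utf-8")

def deserialize(payload: bytes) -> dict:
    # Consumer side: bytes -> object with meaningful field names; no manual
    # input parsing or output escaping needed in application code.
    return json.loads(payload.decode("utf-8"))

event = {"user_id": 42, "action": "page_view"}  # illustrative record
payload = serialize(event)
assert isinstance(payload, bytes)
assert deserialize(payload) == event  # round-trip preserves the record
```

Standardizing on one such encoding across all producers and consumers is what makes the schema-management and schema-evolution benefits described above possible.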
the menu has been invoked it is usually unposted until it is needed again. Menus are posted or unposted by invoking their widget commands, which gives the interface

Figure 15.3. Examples of menus. Figure (a) shows a single menu with three checkbutton entries, three radiobutton entries, and two command entries. The groups of entries are separated by separator entries. Figure (b) shows the menu being used in pull-down fashion with a menu bar and several menubutton widgets. Figure (c) shows a cascaded series of menus; cascade entries in the parent (leftmost) menu display => at their right edges, and the Line Width entry is currently active. Figure (d) contains a menu that supports keyboard traversal and shortcuts. The underlined characters in the menubuttons and menu entries can be used to invoke them from the keyboard, and the key sequences at the right sides of some of the menu entries (such as Ctrl+X) can be used to invoke the same functions as menu entries without even posting the menu.
This book has had a long gestation. It has seen four countries, three of its authors' marriages, and the birth of two (unrelated) offspring. Many people have had a part in its development. Special thanks are due Bruce Anderson, Kent Beck, and André Weinand for their inspiration and advice. We also thank those who reviewed drafts of the manuscript: Roger Bielefeld, Grady Booch, Tom Cargill, Marshall Cline, Ralph Hyre, Brian Kernighan, Thomas Laliberty, Mark Lorenz, Arthur Riel, Doug Schmidt, Clovis Tondo, Steve Vinoski, and Rebecca Wirfs-Brock. We are also grateful to the team at Addison-Wesley for their help and patience: Kate Habib, Tiffany Moore, Lisa Raffaele, Pradeepa Siva, and John Wait. Special thanks to Carl Kessler, Danny Sabbah, and Mark Wegman at IBM Research for their unflagging support of this work.
There is no definitive answer, but whenever we start throwing out relations in an RDBMS (removing constraints and indexes for faster writes, or denormalizing and duplicating data for faster reads), we should start considering alternative solutions. Another indication is data volumes growing so large that they begin to impact query or write throughput SLAs, or the need to purge data just to limit how much the RDBMS has to manage. Repeatedly adding hardware such as RAM or CPU for vertical scalability is another sign that an alternative such as NoSQL might be a better fit. A final indicator is the need to manage rapid schema changes: changing a schema in an RDBMS is nontrivial, since it means managing constraints, relationships, and indexes.
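To make the denormalization trade-off concrete, here is a small sketch using plain Python dictionaries as stand-in tables (the table and field names are invented for illustration): the normalized layout needs an extra lookup, the join, while the denormalized layout answers the same question from a single record at the cost of duplicated data.

```python
# Normalized: each user is stored once; orders reference it by id.
users = {1: {"name": "alice", "city": "Oslo"}}
orders_norm = [{"order_id": 10, "user_id": 1, "item": "book"}]

# Denormalized: user fields are copied into every order, so a single read
# answers "who ordered what, and where do they live". Updating alice's city
# now means rewriting every one of her orders — the write-side cost.
orders_denorm = [
    {"order_id": 10, "item": "book", "user_name": "alice", "user_city": "Oslo"},
]

def order_report_normalized(order):
    user = users[order["user_id"]]  # extra lookup: the "join"
    return order["item"], user["name"], user["city"]

def order_report_denormalized(order):
    # no join: everything needed lives in the one record
    return order["item"], order["user_name"], order["user_city"]

assert order_report_normalized(orders_norm[0]) == \
       order_report_denormalized(orders_denorm[0])
```

The moment a schema starts accumulating copies like `orders_denorm`, the database is being used as a key-value or document store, which is exactly when purpose-built NoSQL solutions deserve a look.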
Plain Old Java Objects can be a little more work to code, but they are easier to deploy, since they don't require an EJB container. Because they don't provide a remote interface, they can be more efficient than EJBs when distributed capabilities aren't required. For the same reason, regular objects lend themselves to a finer-grained object model than EJBs do, since each regular object maps to a single object instance (EJBs require four or more). In many web applications, the same server can run the POJO model and the servlet container, keeping everything in the same JVM. The tradeoff is that you need to support transactions on your own, rather than delegating to the EJB container (we'll look at strategies for dealing with this in Chapter 10). POJO models can also be difficult to cluster effectively, although the greater efficiency of this approach for many applications can reduce the need for clusters in the first place, at least for scalability purposes. Most modern databases, running on appropriate hardware, easily keep up with the load imposed by a high-volume site, assuming the data model was designed properly. Since a POJO model lives in the same JVM as the web tier, it's easy to cluster several web servers and point them to the same database, provided you have the right concurrency design.
As ChatOps continues to evolve, the ability to use natural language processing with the chatbots to make the interactions more seamless and “human-like” will continue to improve. Operators will be able to interact with bots as though they are real-life members of the team. Through the use of natural language, users can begin carrying on conversations with bots rather than simply instructing them. Of course, this brings up the topic of artificial intelligence and what is likely to be the not-so-distant future of bots. We aren’t quite there yet with regard to ChatOps, but the conditions are here to begin exploring ways to leverage NLP to open up even more functionality and benefits. Being able to immediately begin interacting with a chatbot, not knowing anything about the correct syntax, lowers the barrier to entry and provides exciting possibilities for what ChatOps may look like in the coming years.
Let’s take a look at what it means for these experiences to be moving from the physical to the digital. Not too long ago, the primary way you shared photos with someone was physical: you used your camera to take a photo at an event. When your roll was done, you took that film to the local store and dropped it off for processing. A few days or a week later you would pick up your developed photos, and that would be the first time you could evaluate how well the photos you took many days prior actually turned out. Then, maybe when someone was at your house, you’d pull out those photos and narrate what each photo was about. If you were going to really share those photos with someone else, you might order duplicates and put them in an envelope to mail to them, and a few days later your friend would get your photos as well. If you were working at a company like Kodak, which had a vested interest in getting people to use its film, processing paper, or cameras more, then many of the steps and parts of the experience I just described were completely out of your control. You also had almost no way to collect insight into your customers’ behaviors and actions along the way.
The term hacking has a bad reputation in the press. They use it to refer to someone who breaks into systems or wreaks havoc with computers as their weapon. Among people who write code, though, the term hack refers to a "quick-and-dirty" solution to a problem, or a clever way to get something done. And the term hacker is taken very much as a compliment, referring to someone as being creative, having the technical chops to get things done. The Hacks series is an attempt to reclaim the word, document the good ways people are hacking, and pass the hacker ethic of creative participation on to the uninitiated. Seeing how others approach systems and problems is often the quickest way to learn about a new technology.
It was soon after this influx of new participants in the mid-1990s that for-profit companies were born out of this grassroots open source movement, including big names like Red Hat, SuSE, VA Linux, Netscape (soon to be Mozilla), and MySQL AB. Not only were new companies formed, but many large enterprises soon saw the value of open source development models and began participating in open source communities, with salaried employees directed toward full-time “upstream” open source work. IBM was an early adopter of this strategy: in 1998 it created the IBM Linux Technology Center, hiring Linux kernel experts and repurposing internal employees to work on the Linux kernel and other upstream open source projects. The goal was to enable Linux across all of IBM’s hardware platforms and enable Linux
implementations. The RefStack community project and DefCore committee within OpenStack are remedying this situation by providing a test suite and required “core” software code implementations that will need validation if vendors wish to use OpenStack marks and be certified as compatible. While these kinds of bumps in the road can be expected in such a large and diverse community, the OpenStack Foundation governance and meritocratic development models are providing a solid framework for continued collaboration and the growth of the community in positive directions. OpenStack is still young in many ways, but with OpenStack-powered clouds and offerings available from significant players like IBM, HP, Rackspace, Huawei, and Cisco (Piston), among others, the momentum is definitely growing for OpenStack to play a vital role in open cloud collaboration for years to come.