What is UI

In information technology, the user interface (UI) is everything designed into an information device with which a person may interact. This can include display screens, keyboards, a mouse, and the appearance of a desktop. It is also the way through which a user interacts with an application or a website: the means by which the user and a computer system interact, in particular through the use of input devices and software.

An interface is a set of commands or menus through which a user communicates with a program.

Examples of UI:

The computer mouse

Before the mouse, if you wanted to talk to a computer, you had to enter commands through a keyboard.

All that changed in 1964, when engineer and inventor Douglas Engelbart of SRI International pieced together a wooden shell, a circuit board, a couple of metal wheels and some cord to make interacting with a computer as simple as a point and a click.

The remote control

As long ago as 1898, Nikola Tesla demonstrated the world's first radio-controlled boat, presenting a method for controlling vehicles from a distance.

While Tesla accurately predicted his "tele-automation" would be used for war, he didn't predict the role the remote control would play in our lives — nor the unbelievable clunkiness of the average TV remote.

The search engine

Sir Tim Berners-Lee used to index the World Wide Web — by hand. Of course, as it grew to include millions of links, it became clear users would need a better way.

But while early search engines were embedded in crowded "portals" full of news stories and links, Google stripped their search page of everything but the search bar and a couple of buttons.


The ATM

“On Sept. 2nd our bank will open at 9:00 and never close again." – Bank ad announcing the first ATM in 1969.

ATMs gave customers an interface to confirm their identity, interact with the bank's records and then withdraw their own cash. They gave banks the ability to serve their customers out-of-hours – a huge breakthrough in self-service retail.

Electronic Toll Collection (ETC)

Slow-downs, long lines, finding the right change, making sure the driver has the right receipt – paying and collecting highway tolls is, at the most basic level, an interface problem.

ETC (the use of transponders in cars to pay tolls electronically when a car passes tolling booths) dramatically improves the flow of traffic and reduces gas use by limiting the need to stop.

Predictive text

The smaller the phone, the harder it is to type. This was true of chunky old Nokias and it's true of sexy new iPhones.

Predictive text systems like T9 allowed us to spend less time fumbling and more time communicating. Without them, it's hard to imagine mobile computing gaining the kind of traction it has.
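To make the idea concrete, here is a minimal sketch (plain Python, with a made-up five-word dictionary) of how a T9-style system resolves an ambiguous key sequence: each word is reduced to the digit sequence that would type it, and every dictionary word matching the keys pressed becomes a candidate. Real systems additionally rank candidates by word frequency and learn from the user.

```python
# A minimal sketch of T9-style predictive text: each digit maps to several
# letters, and a key sequence is matched against a small dictionary.
# The word list here is a made-up example, not a real T9 database.
T9_KEYS = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

LETTER_TO_DIGIT = {ch: d for d, letters in T9_KEYS.items() for ch in letters}

def word_to_keys(word: str) -> str:
    """Convert a word to the digit sequence that would type it."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.lower())

def predict(keys: str, dictionary: list[str]) -> list[str]:
    """Return dictionary words whose digit sequence matches the keys pressed."""
    return [w for w in dictionary if word_to_keys(w) == keys]

words = ["home", "good", "gone", "hood", "hello"]
print(predict("4663", words))   # 'home', 'good', 'gone' and 'hood' all share keys 4-6-6-3
```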

The speedometer and the iPod wheel

Evolution of UI

The user interface evolved with the introduction of the command line interface, which first appeared as a nearly blank display screen with a line for user input. Users relied on a keyboard and a set of commands to navigate exchanges of information with the computer.


The emerging popularity of mobile applications has also affected UI, leading to something called mobile UI. Mobile UI is specifically concerned with creating usable, interactive interfaces on the smaller screens of smartphones and tablets and improving special features, like touch controls.

UI-PAST AND PRESENT

First interface:

Developed by Steve Russell in 1962, the first computer game, Spacewar!, allowed interaction only through the keyboard: the rockets in the space game moved right and left and fired on command.

Mouse:

Some 40 years ago, Douglas Engelbart introduced the original mouse. Housed in a wooden box twice as high as today's mice and with three buttons on top, it moved with the help of two wheels on its underside rather than a rubber trackball. The wheels—one for the horizontal and another for the vertical—sat at right angles. When the mouse was moved, the vertical wheel rolled along the surface while the horizontal wheel slid sideways.

The name mouse, which originated at the Stanford Research Institute, derives from the resemblance of early models (which had a cord attached to the rear part of the device, suggesting the idea of a tail) to the common mouse.

Command Line Interface

The user provides input by typing a command string with the computer keyboard, and the system provides output by printing text on the computer monitor.

Command line interfaces are the oldest of the interfaces discussed here. They involve the computer responding to commands typed by the operator. This type of interface has the drawback that it requires the operator to remember a range of different commands, and it is not ideal for novice users.
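The read-type-respond loop can be sketched in a few lines. The command names below (date, echo) are invented for illustration; the point is that an unrecognized command simply produces an error, which is exactly the drawback described above: the operator must remember what is valid.

```python
# A minimal sketch of a command line interface: read a typed command,
# look it up in a command table, print text output. The commands here
# (date, echo, help-style errors) are illustrative, not a real shell.
import datetime

def cmd_date(args):
    print(datetime.datetime.now().isoformat(timespec="seconds"))

def cmd_echo(args):
    print(" ".join(args))

COMMANDS = {"date": cmd_date, "echo": cmd_echo}

def repl():
    while True:
        line = input("> ").strip()
        if not line:
            continue
        if line in ("exit", "quit"):
            break
        name, *args = line.split()
        handler = COMMANDS.get(name)
        if handler is None:
            # the burden is on the user to remember which commands are valid
            print(f"unknown command: {name}")
        else:
            handler(args)

if __name__ == "__main__":
    repl()
```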

Graphical User Interface

A graphical user interface (GUI) is one in which the user interacts with on-screen elements. It allows the user to interact with devices through graphical icons and visual indicators such as secondary notation. The term was created in the 1970s to distinguish graphical interfaces from text-based ones, such as command line interfaces. However, today nearly all digital interfaces are GUIs. The first commercially available GUI was developed at Xerox PARC and used by the Xerox 8010 Information System, which was released in 1981. After Steve Jobs saw the interface during a tour at Xerox, he had his team at Apple develop an operating system with a similar design. Apple's GUI-based OS was included with the Macintosh, which was released in 1984. Microsoft released its first GUI-based OS, Windows 1.0, in 1985.

Menu Driven

A menu driven interface is commonly used on cash machines (also known as automated teller machines, or ATMs), ticket machines and information kiosks (for example in a museum). They provide a simple, easy-to-use interface composed of a series of menus and sub-menus which the user accesses by pressing buttons, often on a touch-screen device. Knowledge of UML modelling can help when designing the architecture of such a machine.
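As a rough sketch of the menu/sub-menu pattern (option labels and amounts are invented), the fragment below walks a user through an ATM-style main menu and a withdrawal sub-menu. Every interaction is a choice from a short list, so nothing has to be memorised.

```python
# A minimal sketch of a menu-driven interface in the style of an ATM:
# the user never types free-form commands, only picks numbered options
# from menus and sub-menus. Amounts and menu items are illustrative.
MAIN_MENU = {
    "1": "Withdraw cash",
    "2": "Check balance",
    "3": "Exit",
}
WITHDRAW_MENU = {"1": 20, "2": 50, "3": 100}

def show(menu):
    for key, label in menu.items():
        print(f"  {key}. {label}")

def run_atm(balance=200):
    while True:
        print("Main menu:")
        show(MAIN_MENU)
        choice = input("Select: ").strip()
        if choice == "1":
            print("Withdraw:")
            show(WITHDRAW_MENU)
            amount = WITHDRAW_MENU.get(input("Select: ").strip())
            if amount and amount <= balance:
                balance -= amount
                print(f"Dispensing {amount}. New balance: {balance}")
            else:
                print("Invalid selection or insufficient funds.")
        elif choice == "2":
            print(f"Balance: {balance}")
        elif choice == "3":
            break
        else:
            print("Please choose a listed option.")

if __name__ == "__main__":
    run_atm()
```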

Form Based

A form-based interface is a method of enabling you to interact with an application by entering data into the labelled fields of an on-screen form.
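A minimal sketch of that idea, assuming a console application with three invented fields: the form drives the interaction, prompting for each labelled field and re-asking until the entered value validates.

```python
# A minimal sketch of a form-based interface: the application presents
# labelled fields, collects a value for each, and validates the result.
# Field names and validation rules are illustrative only.
FORM_FIELDS = [
    ("name",  lambda v: len(v) > 0),
    ("email", lambda v: "@" in v),
    ("age",   lambda v: v.isdigit() and 0 < int(v) < 120),
]

def fill_form():
    record = {}
    for field, is_valid in FORM_FIELDS:
        while True:
            value = input(f"{field}: ").strip()
            if is_valid(value):
                record[field] = value
                break
            print(f"  invalid {field}, please try again")
    return record

if __name__ == "__main__":
    print("Submitted:", fill_form())
```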

Touch screen:

A touchscreen is an input device normally layered on the top of an electronic visual display of an information processing system. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus and/or one or more fingers.

1970s: Resistive touchscreens are invented. Although capacitive touchscreens were designed first, they were eclipsed in the early years of touch by resistive touchscreens. American inventor Dr. G. Samuel Hurst developed resistive touchscreens almost accidentally.


A resistive screen consists of a number of layers. When the screen is pressed, the outer layer is pushed onto the next layer — the technology senses that pressure is being applied and registers input. Resistive touchscreens are versatile as they can be operated with a finger, a fingernail, a stylus or any other object.
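That pressure-and-position mechanism can be illustrated with a small sketch. Assuming an invented 12-bit ADC and calibration constants, the code below converts the raw voltage-divider readings of a 4-wire resistive panel into pixel coordinates, and treats readings below a pressure threshold as no touch at all.

```python
# A rough sketch of turning raw readings from a 4-wire resistive panel into
# screen coordinates. When the outer layer is pressed onto the inner layer,
# each axis behaves like a voltage divider; a controller reads that voltage
# with an ADC. The calibration constants below are invented examples.
ADC_MAX = 4095                      # 12-bit ADC
X_MIN_RAW, X_MAX_RAW = 200, 3900    # raw readings at the left/right screen edges
Y_MIN_RAW, Y_MAX_RAW = 180, 3850    # raw readings at the top/bottom screen edges
SCREEN_W, SCREEN_H = 320, 240       # pixels
PRESSURE_THRESHOLD = 100            # minimum reading that counts as a touch

def to_pixel(raw, raw_min, raw_max, size):
    raw = min(max(raw, raw_min), raw_max)               # clamp to calibrated range
    return int((raw - raw_min) * (size - 1) / (raw_max - raw_min))

def decode_touch(raw_x, raw_y, raw_z):
    """Return (x, y) in pixels, or None when nothing presses the layers together."""
    if raw_z < PRESSURE_THRESHOLD:
        return None
    return (to_pixel(raw_x, X_MIN_RAW, X_MAX_RAW, SCREEN_W),
            to_pixel(raw_y, Y_MIN_RAW, Y_MAX_RAW, SCREEN_H))

print(decode_touch(2050, 2015, 900))   # roughly the centre of a 320x240 screen
```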

Natural language

A natural language interface is a spoken interface where the user interacts with the computer by talking to it. Sometimes referred to as a 'conversational interface', this interface simulates having a conversation with a computer. Made famous by science fiction (such as in Star Trek), natural language systems are not yet advanced enough to be in widespread use. They are commonly used by telephone systems as an alternative to pressing numbered buttons: the user can speak their responses instead.

This is the kind of interface used by Siri, the popular iPhone application, and by Cortana in Windows.

UI-Others:

Voice Recognition

Speech recognition has always struggled to shake off a reputation for being sluggish, awkward, and, all too often, inaccurate. The technology has only really taken off in specialist areas where a constrained and narrow subset of language is employed or where users are willing to invest the time needed to train a system to recognize their voice.

This is now changing. As computers become more powerful and parsing algorithms smarter, speech recognition will continue to improve, says Robert Weidmen, VP of marketing for Nuance, the firm that makes Dragon Naturally Speaking.

Last year, Google launched a voice search app for the iPhone, allowing users to search without pressing any buttons. Another iPhone application, called Vlingo, can be used to control the device in other ways: in addition to searching, a user can dictate text messages and e-mails, or update his or her status on Facebook with a few simple commands. In the past, the challenge has been adding enough processing power for a cell phone. Now, however, faster data-transfer speeds mean that it’s possible to use remote servers to seamlessly handle the number crunching required.

Since the 'Put That There' video presentation by Chris Schmandt in 1979, voice recognition has yet to meet with a revolutionary kind of success. The most recent hype over VUI has got to be Siri, a personal assistant application which is incorporated into Apple's iOS. It uses a natural language user interface for its voice recognition function to perform tasks exclusively on Apple devices.

Google's Project Glass, for instance, has no screen that you interact with with your fingers. Instead it clings to you as eyewear and receives your commands via voice control.

The only thing that is lacking now in VUI is the reliability of recognizing what you say. Perfect that and it will be incorporated into the user interfaces of the future. At the rate that smartphone capabilities are expanding and developing now, it's just a matter of time before VUI takes centre stage as the primary form of human-computer interaction for any computing system.

Augmented Reality

An exciting emerging interface is augmented reality, an approach that fuses virtual information with the real world.

The earliest augmented-reality interfaces required complex and bulky motion-sensing and computer-graphics equipment. More recently, cell phones featuring powerful processing chips and sensors have brought the technology within the reach of ordinary users.

Examples of mobile augmented reality include Nokia’s Mobile Augmented Reality Application (MARA) and Wikitude, an application developed for Google’s Android phone operating system. Both allow a user to view the real world through a camera screen with virtual annotations and tags overlaid on top. With MARA, this virtual data is harvested from the points of interest stored in the NavTeq satellite navigation application. Wikitude, as the name implies, gleans its data from Wikipedia.

These applications work by monitoring data from an arsenal of sensors: GPS receivers provide precise positioning information, digital compasses determine which way the device is pointing, and magnetometers or accelerometers calculate its orientation. A project called Nokia Image Space takes this a step further by allowing people to store experiences–images, video, sounds–in a particular place so that other people can retrieve them at the same spot.
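As a simplified sketch of that sensor fusion (the coordinates, the point of interest and the camera's field of view are all invented, and a real app would also correct for device tilt using the accelerometer), the code below computes the compass bearing from the phone's GPS position to a point of interest and checks whether it falls inside the camera's field of view, which is roughly the decision an app like Wikitude must make before drawing a tag.

```python
# A simplified sketch of the sensor fusion described above: given the phone's
# GPS position and compass heading, compute the bearing to a point of interest
# and decide whether it falls inside the camera's field of view, so that a tag
# can be drawn over it. Coordinates and the POI are invented examples.
import math

CAMERA_FOV_DEG = 60   # assumed horizontal field of view

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def poi_visible(device_lat, device_lon, heading_deg, poi_lat, poi_lon):
    b = bearing_deg(device_lat, device_lon, poi_lat, poi_lon)
    offset = (b - heading_deg + 180) % 360 - 180   # signed angle from where the camera points
    return abs(offset) <= CAMERA_FOV_DEG / 2, offset

visible, offset = poi_visible(52.5200, 13.4050, 90.0, 52.5206, 13.4120)  # invented coordinates
print(visible, round(offset, 1))   # True when the POI lies roughly where the camera points
```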

We are already experiencing AR in some of our smartphone apps like Wikitude and Drodishooting, but they are pretty much at their elementary stages of development. AR is getting the biggest boost in awareness via Google's upcoming Project Glass, a pair of wearable eyeglasses that allows one to see virtual extensions of reality that you can interact with. AR can be on anything other than glasses, so long as the device is able to interact with a real-world environment in real time. Picture a see-through device which you can hold over objects, buildings and your surroundings to give you useful information. For example, when you come across a foreign signboard, you can look through the glass device to see it translated for your easy reading.

AR can also make use of your natural environment to create mobile user interfaces that you can interact with, by projecting displays onto walls and even your own hands.


UI-Future:

Gesture Sensing: present

Compact magnetometers, accelerometers, and gyroscopes make it possible to track the movement of a device. Using both Nintendo's Wii controller and the iPhone, users can control games and applications by physically maneuvering each device through the air. Similarly, it's possible to pause and play music on Nokia's 6600 cell phone simply by tapping the device twice.

New mobile applications are also starting to tap into this trend. Shut Up, for example, lets Nokia users silence their phone by simply turning it face down. Another app, called nAlertMe, uses a 3-D gestural passcode to prevent the device from being stolen. The handset will sound a shrill alarm if the user doesn’t move the device in a predefined pattern in midair to switch it on.
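The "turn it face down to silence it" trick reduces to watching one accelerometer axis. The sketch below is a guess at how such a feature might work, not the actual app's logic: the sensor stream and the mute() callback are invented stand-ins for a real platform API, and the phone is treated as face down once the z-axis reads roughly minus one g for a short, stable stretch.

```python
# A minimal sketch of the "turn it face down to silence it" idea: when the
# accelerometer's z-axis reads roughly -1 g for a short, stable interval,
# treat the phone as lying face down and mute the ringer. The sample stream
# and the mute() callback are invented stand-ins for a real platform API.
GRAVITY = 9.81
FACE_DOWN_Z = -0.8 * GRAVITY     # z points out of the screen; face down is about -1 g
STABLE_SAMPLES = 10              # require a short stretch of stable readings

def watch_for_face_down(samples, mute):
    """samples: iterable of (x, y, z) accelerometer readings in m/s^2."""
    run = 0
    for _, _, z in samples:
        run = run + 1 if z < FACE_DOWN_Z else 0
        if run >= STABLE_SAMPLES:
            mute()
            return True
    return False

fake_stream = [(0.1, 0.2, 9.7)] * 5 + [(0.0, 0.1, -9.6)] * 12   # picked up, then placed face down
watch_for_face_down(fake_stream, mute=lambda: print("ringer muted"))
```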

The next step in gesture recognition is to enable computers to better recognize hand and body movements visually. Sony’s Eye showed that simple movements can be recognized relatively easily. Tracking more complicated 3-D movements in irregular lighting is more difficult, however. Startups, including Xtr3D, based in Israel, and Soft Kinetic, based in Belgium, are developing computer vision software that uses infrared for whole-body-sensing gaming applications.

Oblong, a startup based in Los Angeles, has developed a “spatial operating system” that recognizes gestural commands, provided the user wears a pair of special gloves.

1. Gesture Interfaces

The 2002 sci-fi movie Minority Report portrayed a future where interactions with computer systems are primarily through gestures. Wearing a pair of futuristic gloves, Tom Cruise, the protagonist, is seen performing various gestures with his hands to manipulate images, videos and datasheets on his computer system.

A decade ago, it might have seemed a little far-fetched to have such a user interface where spatial motions are detected so seamlessly. Today, with the advent of motion-sensing devices like the Wii Remote in 2006 and Kinect and PlayStation Move in 2010, user interfaces of the future might just be heading in that direction.

In gesture recognition, the input comes in the form of hand or other bodily motion to perform computing tasks that, to date, are still entered via device, touch screen or voice. The addition of the z-axis to our existing two-dimensional UI will undoubtedly improve the human-computer interaction experience. Just imagine how many more functions could be mapped to our body movements.

In one demo, designer John Underkoffler navigates through thousands of photos in a 3D plane using hand gestures and collaborates with fellow 'hand-gesturers' on team tasks. Excited? Underkoffler believes that such a UI will be commercially available within the next five years.

Brain-Computer Interfaces

Perhaps the ultimate computer interface, and one that remains some way off, is mind control.

Surgical implants or electroencephalogram (EEG) sensors can be used to monitor the brain activity of people with severe forms of paralysis. With training, this technology can allow “locked in” patients to control a computer cursor to spell out messages or steer a wheelchair.

Some companies hope to bring the same kind of brain-computer interface (BCI) technology to the mainstream. Last month, Neurosky, based in San Jose, CA, announced the launch of its Bluetooth gaming headset designed to monitor simple EEG activity. The idea is that gamers can gain extra powers depending on how calm they are.

Beyond gaming, BCI technology could perhaps be used to help relieve stress and information overload. A BCI project called the Cognitive Cockpit (CogPit) uses EEG information in an attempt to reduce the information overload experienced by jet pilots.

The project, which was formerly funded by the U.S. government’s Defense Advanced Research Projects Agency (DARPA), is designed to discern when the pilot is being overloaded and manage the way that information is fed to him. For example, if he is already verbally communicating with base, it may be more appropriate to warn him of an incoming threat using visual means rather than through an audible alert. “By estimating their cognitive state from one moment to the next, we should be able to optimize the flow of information to them,” says Blair Dickson, a researcher on the project with U.K. defense-technology company Qinetiq.

2. Brain-Computer Interface

Our brain generates all kinds of electrical signals with our thoughts, so much so that each specific thought has its own brainwave pattern. These unique electrical signals can be mapped to carry out specific commands, so that thinking the thought can actually carry out the set command.
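A deliberately simplified sketch of that mapping follows. It assumes per-band EEG power estimates (alpha, beta) are already available from a headset SDK, and it fires an invented 'select' command when the beta-to-alpha ratio stays high for a few samples, i.e. when the wearer concentrates. Real BCI pipelines train a classifier per user; this only shows the idea of turning a brainwave pattern into a command.

```python
# A deliberately simplified sketch of mapping a brainwave pattern to a command.
# It assumes we already have per-band EEG power estimates (alpha, beta) from a
# headset SDK, and fires an invented "select" command when the ratio of beta to
# alpha power stays above a threshold (i.e. when the wearer concentrates).
# Real BCI pipelines involve training a classifier per user; this is only the idea.
THRESHOLD = 1.5        # concentration ratio that counts as an intentional command
HOLD_SAMPLES = 4       # must be sustained, so stray noise does not trigger it

def decode_commands(band_power_stream, on_select):
    run = 0
    for alpha, beta in band_power_stream:
        ratio = beta / alpha if alpha > 0 else 0.0
        run = run + 1 if ratio > THRESHOLD else 0
        if run == HOLD_SAMPLES:
            on_select()

readings = [(10.0, 8.0), (9.0, 15.0), (8.5, 14.0), (8.0, 13.5), (8.2, 14.1), (9.5, 7.0)]
decode_commands(readings, on_select=lambda: print("command: select"))
```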

With the EPOC neuroheadset created by Tan Le, the co-founder and president of Emotiv Lifescience, users don a futuristic headset that detects the brainwaves generated by their thoughts.


In any case, envision a (distant) future where one could operate computer systems with thoughts alone. From the concept of a 'smart home' where you could turn the lights on or off without having to step out of bed in the morning, to the idea of immersing yourself in an ultimate gaming experience that responds to your mood (via brainwaves), the potential for such an awesome UI is practically limitless.

Flexible OLED Display

If touchscreens on smartphones are too rigid and still not responsive enough to your commands, then you might well be first in line to try out flexible OLED (organic light-emitting diode) displays. The OLED is an organic semiconductor which can still display light even when rolled or stretched. Stick it on a bendable plastic substrate and you have a brand new, less rigid smartphone screen.

Furthermore, these new screens can be twisted, bent or folded to interact with the computing system within. Bend the phone to zoom in and out, twist a corner to turn the volume up, twist the other corner to turn it down, twist both sides to scroll through photos and more.
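Here is a sketch of how those bend-and-twist gestures might be mapped to actions, assuming the flexible screen exposes simple normalised bend and twist sensor values (an assumption: no standard API for this exists today).

```python
# A sketch of mapping bend-and-twist gestures to actions, assuming the flexible
# screen reports simple normalised bend/twist sensor values (an assumption, not
# a real API). Positive bend zooms in, negative zooms out; twisting one corner
# changes the volume, and twisting both sides scrolls through photos.
BEND_THRESHOLD = 0.2      # normalised sensor units, invented
TWIST_THRESHOLD = 0.2

def interpret(bend, twist_left, twist_right):
    """Return a list of UI actions for one frame of sensor readings."""
    actions = []
    if bend > BEND_THRESHOLD:
        actions.append("zoom_in")
    elif bend < -BEND_THRESHOLD:
        actions.append("zoom_out")
    if twist_left > TWIST_THRESHOLD and twist_right > TWIST_THRESHOLD:
        actions.append("scroll_photos")
    elif twist_right > TWIST_THRESHOLD:
        actions.append("volume_up")
    elif twist_left > TWIST_THRESHOLD:
        actions.append("volume_down")
    return actions

print(interpret(bend=0.5, twist_left=0.0, twist_right=0.3))   # ['zoom_in', 'volume_up']
```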

Such a flexible UI enables us to naturally interact with the smartphone even when our hands are too preoccupied to use the touchscreen. This could well be the answer to the sensitivity (or lack thereof) of smartphone screens towards gloved fingers, or when fingers are too big to reach the right buttons. With this UI, all you need to do is squeeze the phone with your palm to pick up a call.

Tangible User Interface (TUI)

Imagine having a computer system that fuses the physical environment with the digital realm to enable the recognition of real world objects. In Microsoft Pixelsense (formerly known as Surface), the interactive computing surface can recognize and identify objects that are placed onto the screen.

In Microsoft Surface 1.0, light from objects is reflected to multiple infrared cameras. This allows the system to capture and react to the items placed on the screen.


In an advanced version of the technology (the Samsung SUR40 with Microsoft PixelSense), the screen includes sensors instead of cameras to detect what touches it. On this surface, you can create digital paintings with paintbrushes, based on the input from the actual brush tip.

The system is also programmed to recognize sizes and shapes and to interact with tagged objects placed on it.

7. Wearable Computer

As the name suggests, wearable computers are electronic devices which you can wear like an accessory or apparel. It can be a pair of gloves, eyeglasses, a watch or even a suit. The key feature of a wearable UI is that it should keep your hands free and not hinder your daily activities. In other words, it serves as a secondary activity, available as and when you wish to access it.


Think of it as having a watch that can work like a smartphone. Sony released an Android-powered SmartWatch earlier this year that can be paired with your Android phone via Bluetooth. It can provide notifications of new emails and tweets. As with smartphones, you can download compatible apps onto the Sony SmartWatch for easy accessibility.

Expect more wearable UIs in the near future as microchips with smart capabilities grow ever smaller and are fitted into everyday wear.

8. Sensor Network User Interface (SNUI)

Here’s an example of a fluid UI where you have multiple compact tiles made up of color LCD screens, in-built accelerometers and IrDA infrared transceivers that are able to interact with one another when placed in close proximity. Let’s make this simple. It’s like Scrabble tiles that have screens which will change to reflect data when placed next to each other.


As you can see in the demo video of Siftables, users can physically interact with the tiles by tilting, shaking, lifting and bumping them against other similar tiles. These tiles can serve as a highly interactive learning tool for young children, who receive immediate reactions to their actions.

SNUI is also great for simple puzzle games where gameplay involves shifting and rotating tiles to win. There's also the ability to sort images physically by grouping the tiles together according to your preferences. It is a more crowd-enabled TUI; instead of one screen, it's made up of several smaller screens that interact with one another.

Organic User Interface

An organic user interface uses a non-flat, flexible display as its means of input and output. I'm referring in this case to bendable screens, and although this technology is very much in its infancy, it offers so many incredible opportunities to interact with and navigate through devices.

Naturally, an organic user interface offers far more possibilities for input (and even output) than the previous interface concepts. At the moment we're used to pointing and clicking, though we're getting more involved with gestures, touch and (thanks to gyroscopes in devices) tilting and rotating, although this is still limited on the web. Eventually, we'll be bending, deforming and manipulating actual physical objects.

If you're interested in these kinds of projects, the Human Media Lab has quite a collection.

Conclusion:

• User interfaces are changing faster and more fundamentally than ever before.

• In the future, users won't even know that UI is a 'thing': it will become invisible.
