Object-Oriented GUIs are the Future

Editor’s note: This article originally appeared on the OpenMCT blog (no longer available) on August 22, 2012. It is re-posted here with permission. The author has also re-posted it on the Presto Innovation blog, and we encourage you to link to that as the original source.

Do you like futuristic prototypes and imaginings of human-computer interfaces such as those sampled on the wonderful site HUDs and GUIs? Be sure to pull down the Categories menu there, and check out the clips from films such as Minority Report.

Two tantalizing aspects of design illustrated in those clips are embedded and ubiquitous computer hardware and software—computers whose functions manifest not in objects specialized to be “computers,” and often not even as “tools,” but in objects that already exist for everyday purposes. A common example is a car whose functions are enhanced with computers. More futuristic examples include a door that guesses when a person wants it to open and close, as shown in Star Trek…. Oh, right, that’s been in my grocery store for decades. But in the future they will be foolproof. More futuristic examples include pieces of paper that enhance, store, and retrieve what I draw. Embedded and ubiquitous computing do not push just the computer software application into the background of the users’ awareness; they push the computer itself into the woodwork.

We need not wait for embedded and ubiquitous computers; we can get a substantial portion of their benefits today by making human-computer interfaces object oriented. “Object oriented” in this sense has nothing to do with whether object-oriented programming is used. Instead, it means that the user interface, as the user perceives it, is oriented to the users’ domain objects rather than to the computer software applications.

Look carefully at those clips, and notice that even when users are interacting with an overt “computer,” rarely are they focusing on “applications” (i.e., “programs”). Watch those clips again, and notice which “things” the users are seeing and manipulating. Those things most often are objects in the users’ domains of work or play, rather than the tools (computer applications) that the users use to deal with those domain objects.

This is most apparent in direct manipulation gestural interfaces. The users are grabbing, rotating, and flinging people, diagrams, buildings, and spacecraft. Sure, when they fling a “person” from one computer screen to another, they are not flinging the actual person, but they are thinking of that person as what they are flinging; they are not thinking of flinging that “window” to another screen. They do not update the last known location of the evildoer by opening a spreadsheet, finding that evildoer’s name in the first column, looking across the rows to find the Location column, then typing to change that row-column intersection’s cell from one building’s name to another. Instead they simply drag whatever representation of the evildoer they happen to have in front of them (list row, window, picture, icon, …) onto whatever representation of the new building is handy (satellite image, schematic, list row, …). All representations of both the evildoer and the building instantly change to reflect that new association of those objects (evildoer to building). The users focus on the objects of their domain—the objects in their mental models while they are doing their tasks to meet their goals given their situation of use.
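
To make that idea concrete with today’s technology, here is a minimal sketch in TypeScript. It is purely illustrative (the DomainObject class, the setLocation method, and the evildoer and building names are invented for this example, not taken from any film or product); it shows how every on-screen representation can observe one shared domain object, so that re-associating the object in any one view updates all the others at once.

// Illustrative only: one shared domain object that every representation observes.
type Listener = () => void;

class DomainObject {
  private listeners: Listener[] = [];
  constructor(public name: string, public location?: DomainObject) {}

  // Any representation (list row, icon, satellite overlay, ...) subscribes here.
  onChange(listener: Listener): void {
    this.listeners.push(listener);
  }

  // Dropping any representation of this object onto any representation of a
  // building calls this once; every subscribed view then redraws itself.
  setLocation(building: DomainObject): void {
    this.location = building;
    this.listeners.forEach((notify) => notify());
  }
}

// Two independent "views" of the same evildoer object.
const evildoer = new DomainObject("Evildoer X");
const oldHq = new DomainObject("Warehouse 12");
const newHq = new DomainObject("Tower 7");
evildoer.location = oldHq;

evildoer.onChange(() => console.log(`List row: ${evildoer.name} @ ${evildoer.location?.name}`));
evildoer.onChange(() => console.log(`Map icon: ${evildoer.name} @ ${evildoer.location?.name}`));

// One drag gesture becomes one association change; both views report "Tower 7".
evildoer.setLocation(newHq);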


Similarly, users simply flip views of the same manifestation of an object, from graphical to alphanumeric to pictorial to audio, rather than searching again for that same object in each of several different applications, each devoted to only one such type of view. The heroine is looking at a live satellite image of the building, then flips that exact same portion of the screen to show a schematic view of the building, as if putting on x-ray glasses. She does not fire up the separate “Schematic Viewer” application and, once there, hunt down that building. She does not even drag that building’s image from the “Satellite Viewer” application to the “Schematic Viewer” application. She merely flips views as if putting on a different pair of virtual eyeglasses, the whole time keeping her vision and therefore her attention fixed on that building in front of her eyes—the domain object of her attention.
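
The same illustrative model covers view flipping (again, the ObjectPanel class and view names below are invented, not from any real product): a screen region keeps hold of the same domain object and merely swaps which renderer draws it, so the user’s attention never leaves the object.

// Illustrative only: flipping views swaps the renderer, never the object.
interface BuildingData {
  name: string;
  satelliteUrl: string;
  schematicUrl: string;
}

type ViewRenderer = (b: BuildingData) => string;

// Invented view registry keyed by view name.
const viewRenderers: Record<string, ViewRenderer> = {
  satellite: (b) => `<img src="${b.satelliteUrl}" alt="Satellite view of ${b.name}">`,
  schematic: (b) => `<img src="${b.schematicUrl}" alt="Schematic of ${b.name}">`,
};

class ObjectPanel {
  constructor(private building: BuildingData, private current = "satellite") {}

  // The panel keeps the same building; only the way it is drawn changes.
  flipTo(viewName: string): string {
    this.current = viewName;
    return viewRenderers[this.current](this.building);
  }
}

const panel = new ObjectPanel({
  name: "Headquarters",
  satelliteUrl: "sat/hq.png",
  schematicUrl: "plans/hq.svg",
});
console.log(panel.flipTo("schematic")); // same building, x-ray glasses on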

In short, all those futuristic scenarios have the users’ mental models of their task domain objects thrown onto the computer displays in front of them. In the futuristic world of embedded and ubiquitous computers, “computers” have no natural place in the users’ mental models of their task domain, so the computers have no place as recognizable, concrete, discrete, distracting entities in the users’ physical environment. Likewise, the computer software applications have no natural place in the users’ mental models of their task domain, so the computer applications have no place as recognizable, concrete, discrete, distracting entities on the displays.

Back here in the present, most users are stuck not only with physically discrete, intrusive, distracting, attention-demanding, time-consuming computers; they are also stuck with similarly disadvantageous discrete, multiple computer software applications. Most graphical user interfaces (GUIs) are application oriented. But users need not be so burdened, even today! We have the technology to drastically reduce, if not eliminate, the notion of the computer application from users’ experience, so that they deal with only a single application and, within it, focus purely on the objects from their mental model of the task domain. One easy way to implement object-oriented GUIs is to use NASA’s Mission Control Technologies (MCT) software platform. Just build plugins to provide additional types of objects, views of objects, and actions on objects; users simply see all of those appear within the same MCT application. It’s general purpose, not restricted to space mission operations, and it is open source!
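
As a rough sketch of that plugin idea (this is not the actual MCT API, just an invented Registry to show the shape of the approach), a plugin contributes object types, views, and actions to one shared registry, and users only ever see the resulting objects, views, and actions inside the single host application:

// Illustrative only, not the real MCT API: plugins extend one shared registry.
interface ObjectType { key: string; name: string }
interface ViewProvider { appliesTo: (typeKey: string) => boolean; render: (obj: unknown) => string }
interface ObjectAction { name: string; perform: (obj: unknown) => void }

class Registry {
  types: ObjectType[] = [];
  views: ViewProvider[] = [];
  actions: ObjectAction[] = [];

  addType(t: ObjectType): void { this.types.push(t); }
  addView(v: ViewProvider): void { this.views.push(v); }
  addAction(a: ObjectAction): void { this.actions.push(a); }
}

// A plugin is just a function handed the registry; users never see the plugin,
// only the new objects, views, and actions that appear in the one application.
type Plugin = (registry: Registry) => void;

const telemetryPlugin: Plugin = (registry) => {
  registry.addType({ key: "telemetry-point", name: "Telemetry Point" });
  registry.addView({
    appliesTo: (key) => key === "telemetry-point",
    render: () => "<canvas><!-- live plot --></canvas>",
  });
};

const registry = new Registry();
telemetryPlugin(registry); // the host application now offers the new type and view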

In a future blog post I will explain a design process for figuring out what the users’ objects, attributes, and actions should be. It’s called The Bridge, and it is described in Chapter 2 of a book.

Feature image: Minority Report OMG. Provided by Youflavio / Flickr, CC BY-SA 2.0

Written by Tom Dayton