ABSTRACTS OF PRESENTED PAPERS
In this paper, we present an approach to improving the usability of software systems based on software architectural decisions. We make specific connections between aspects of usability, such as the ability for a user to undo or cancel an operation, and software architecture. Designers can use this collection both to generate solutions for those aspects of usability they choose to include in their systems and to evaluate their systems for specific aspects of usability. We present the results of applying this approach in an evaluation of an actual system.
Virtual environments lack a standardised interface between the user and the application, which makes it possible for the interface to be highly customised to the demands of individual applications. However, this requires a development process in which the interface can be carefully designed to meet the requirements of an application. In practice, an ad-hoc development process is used that relies heavily on a developer's craft skills. A number of formalisms have been developed to address the problem of establishing an interface's behavioural requirements by supporting its design prior to implementation. We have developed the Marigold toolset, which provides a transition from one such formalism, Flownets, to a prototype implementation. In this paper we demonstrate the use of the Marigold toolset by prototyping a small environment.
This paper describes videoSpace, a software toolkit designed to facilitate the integration of image streams into existing or new applications to support new forms of human-computer interaction and collaborative activities. In this perspective, our primary interest is not in performance or reliability issues, but rather in the ability of the toolkit to support rapid prototyping and incremental development of video applications. VideoSpace is described in extensive detail, through the architecture and functionalities of its API and basic tools. We also present several projects developed with this toolkit that illustrate its potential and the new uses of video it will allow in the future.
Remote pointing is an interaction style for presentation systems, interactive TV, and other systems where the user is positioned at an appreciable distance from the display. A variety of technologies and interaction techniques exist for remote pointing. This paper presents an empirical evaluation and comparison of two remote pointing devices, with a standard mouse as a baseline condition. Using the ISO metric throughput (calculated from users' speed and accuracy in completing tasks) as the criterion, the two remote pointing devices performed poorly, demonstrating 32% and 65% worse performance than the mouse. Qualitatively, users indicated a strong preference for the mouse over the remote pointing devices. Implications for the design of present and future systems for remote pointing are discussed.
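For context, the ISO throughput measure mentioned above combines speed and accuracy into a single bits-per-second figure via an effective index of difficulty. The following is a minimal sketch of the standard ISO 9241-9 / MacKenzie-style computation, not necessarily the authors' exact analysis pipeline; the sample data is invented:

```python
import math
from statistics import mean, stdev

def throughput(distances, movement_times, endpoint_offsets):
    """ISO 9241-9 style throughput in bits/s for one condition.

    distances        -- nominal target distances per trial (e.g. pixels)
    movement_times   -- per-trial movement times in seconds
    endpoint_offsets -- signed endpoint deviations from the target centre,
                        measured along the task axis (same units as distances)
    """
    # Effective target width folds accuracy into the measure: We = 4.133 * SDx
    we = 4.133 * stdev(endpoint_offsets)
    # Effective index of difficulty (Shannon formulation), in bits
    ide = math.log2(mean(distances) / we + 1)
    return ide / mean(movement_times)

# Invented sample data: five trials at a 100-pixel distance, 1 s each
tp = throughput([100] * 5, [1.0] * 5, [-2, 0, 2, -1, 1])
print(round(tp, 2))  # roughly 4 bits/s
```

A device that is slower, less accurate, or both therefore yields a proportionally lower single throughput figure, which is what makes the 32% and 65% deficits directly comparable.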
In this paper, we first define a class of problems that we have dubbed inherently-3D, which we believe should lend themselves to solutions that include user-controlled 3D models and animations. We next give a comparative discussion of two tools that we used to create presentations: CosmoWorlds and Flash. The presentations included text, pictures, and user-controlled 3D models or animations. We evaluated the two tools along two dimensions: 1) how well the tools support presentation development and 2) the effectiveness of the resultant presentations. From the first evaluation, we concluded that Flash in its current form was the more complete development environment; for a developer to integrate VRML into cohesive presentations would require a more comprehensive development environment than is currently available with CosmoWorlds. From the second evaluation, based on our usability study, we draw two conclusions. First, our users were quite successful in completing the inherently-3D construction task, regardless of which presentation (Flash or VRML) they saw. Second, we found that enhancing the VRML models and including multiple perspectives in Flash animations were equally effective at reducing errors as compared to a more primitive VRML presentation. Based on our results we believe that, for tasks of the 3D complexity that we used, Flash is the clear choice: it was easier to use for developing the presentations, and the resulting presentation was as effective as the model that we built with CosmoWorlds and Java. Finally, we postulate a relationship between inherently-3D task complexity and the relative effectiveness of the VRML presentation.
<em>Abstract</em>. QTk is a tool built on top of Tcl/Tk that allows user interface designers to adopt a cost-effective model-based approach for designing executable user interfaces. QTk is based on a descriptive approach that uses a declarative style where appropriate (symbolic records to specify widget types, initial states, and geometry management; procedure values to specify actions), augmented with objects and threads to handle the active part of the interface. QTk offers four original advantages: unicity of language (only one language serves as both modeling and programming language), reduced development cost (the interface model immediately gives rise to an executable user interface), tight integration of tools (specification, construction, and execution tools are all integrated), and improved expressiveness (the interface model is very compact to produce and cheap to manipulate). These advantages are made possible by tight integration with a multiparadigm programming language, Oz, that supports symbolic data structures, a functional programming style, an object programming style, and cheap threads. QTk is a module of the Mozart Programming System, which implements Oz. We show how to port QTk to Java, which allows us to retain some but not all of the tool's advantages.
Despite the increasing availability of groupware, most systems are awkward and not widely used. While there are many reasons for this, a significant problem is that groupware is difficult to evaluate. In particular, there are no discount usability evaluation methodologies that can discover problems specific to teamwork. In this paper, we describe how we adapted Nielsen's heuristic evaluation methodology, designed originally for single-user applications, to help inspectors rapidly, cheaply, and effectively identify usability problems within groupware systems. Specifically, we take the mechanics-of-collaboration framework and restate it as heuristics for the purpose of discovering problems in shared visual work surfaces for distance-separated groups.
Systematic user errors commonly occur in the use of interactive systems. We describe a formal reusable user model implemented in higher-order logic that can be used for machine-assisted reasoning about user errors. The core of this model is a series of non-deterministic guarded temporal rules. We consider how this approach allows errors of various specific kinds to be detected by proving a single theorem about a device. We illustrate the approach using a simple case study.
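To give a concrete flavour of this style of analysis, here is a toy, illustrative sketch of our own (the paper's model is expressed in higher-order logic, not Python): a user is modelled as a set of nondeterministic guarded rules, and exhaustive exploration of all permitted rule firings exposes a classic post-completion error on a vending-machine-like device, where the user may take the goal item but walk away leaving the change:

```python
# Toy guarded-rule user model (illustrative names, not the paper's
# formalisation). Each rule is (name, guard, effect) and fires only
# when its guard holds on the current user/device state.
rules = [
    ("insert-coin",
     lambda s: not s["paid"],
     lambda s: {**s, "paid": True, "choc_out": True, "change_out": True}),
    ("take-chocolate",
     lambda s: s["choc_out"],
     lambda s: {**s, "choc_out": False, "has_choc": True}),
    ("take-change",
     lambda s: s["change_out"],
     lambda s: {**s, "change_out": False}),
]

def explore(rules, init, goal):
    """Explore every nondeterministic firing order; collect the states
    in which the user may plausibly stop (once the main goal is met)."""
    stops, seen, stack = set(), set(), [init]
    while stack:
        s = stack.pop()
        key = tuple(sorted(s.items()))
        if key in seen:
            continue
        seen.add(key)
        if goal(s):            # main goal met: stopping here is plausible
            stops.add(key)
        for _name, guard, effect in rules:
            if guard(s):       # nondeterminism: any enabled rule may fire
                stack.append(effect(s))
    return [dict(k) for k in stops]

init = {"paid": False, "choc_out": False, "change_out": False,
        "has_choc": False}
stops = explore(rules, init, goal=lambda s: s["has_choc"])
# One check over all behaviours: can the user walk away leaving change?
print(any(s["change_out"] for s in stops))  # True: post-completion error
```

The point mirrored here is that a single property checked against the whole model ("no plausible stopping state leaves change behind") detects the error class without enumerating individual mistaken traces by hand.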
Development of application front-ends that are designed for deployment on multiple devices requires facilities for specifying device-independent semantics. This paper focuses on the user-interface requirements for specifying device-independent layout constraints. We describe a device-independent application model, and detail a set of high-level constraints that support automated layout on a wide variety of target platforms. We then focus on the problems that are inherent in any single-view direct-manipulation WYSIWYG interface for specifying such constraints. We propose a two-view interface designed to address those problems, and discuss how this interface effectively meets the requirements of abstract specification for pervasive applications.
In practice, designers often select user interface elements like widgets intuitively. As a result, important design decisions may never become conscious or explicit, and are therefore not traceable. In order to improve this situation, we propose a systematic process for selecting user interface elements (in the form of widgets) in a few explicitly defined steps, starting from usage scenarios. This process provides a seamless way of going from scenarios through (attached) subtask definitions and various task classifications and (de)compositions to widget classes. In this way, it makes an important part of user interface design more systematic and conscious. For an initial evaluation of the usefulness of this approach, we conducted a small experiment that compares the widgets of an industrial GUI, developed as usual by experienced practitioners, with the outcome of an independent execution of the proposed process. Since the results of this experiment are encouraging, we suggest investigating this approach further in real-world practice.
Minna Makarainen, Nokia Mobile Phones
Johanna Tiitola, Nokia Mobile Phones
Katja Konkka, Nokia Mobile Phones
This paper discusses how cultural aspects should be addressed in user interface design. It presents a summary of two case studies, one performed in India and the other in South Africa, in order to identify the needs and requirements for cultural adaptation. The case studies were performed in three phases. First, a pre-study was conducted in Finland. The pre-study included a literature study of the target culture, covering facts about the state, religions practiced in the area, demographics, languages spoken, economics, conflicts between groups, the legal system, the telecommunication infrastructure, and the education system. Second, a field study was done in the target culture. The field study methods used were observations in context, semi-structured interviews in context, and expert interviews. A local subcontractor was used for practical arrangements, such as selecting subjects for the study. The subcontractors also had experience in user interface design, so they could act as experts giving insight into the local culture. Third, the findings were analyzed with the local experts, and the results were compiled into presentations and design guidelines for user interface designers. The results of the case studies indicate that there is a clear need for cultural adaptation of products. Cultural adaptation should cover much more than just the language of the dialogue between the device and the end user. For example, the South Africa study revealed a strong need for a user interface that can be used by uneducated people who are not familiar with technical devices; mobile phone users are no longer only well-educated, technologically oriented people. Translating the language of the dialogue into the local language is not enough if the user cannot read. Another design issue discovered in the study was that people were afraid of using data-intensive applications (such as the phonebook or calendar), because crime rates in South Africa are very high, and the risk of the phone being stolen and the data lost is high. In India, examples of the findings are the long expected lifetimes of products and the importance of religion. India is not a throwaway culture: when a device breaks, it is not replaced with a new one but repaired, so the expected lifetime of a product is long. The importance of religion, and especially of religious icons and rituals, is much more visible in everyday life than in Europe. For example, people carry pictures of gods rather than pictures of family with them. Addressing this in the user interface would give the product added emotional value.
Content creation for computer graphics applications is a very time-consuming process that requires skilled personnel. Many people find the manipulation of 3D objects with 2D input devices non-intuitive and difficult. We present a system that restricts the motion of objects in a 3D scene with constraints. In this paper we discuss an experiment that compares two different 3D manipulation interfaces driven by 2D input devices. The results show clearly that the new constraint-based interface performs significantly better than previous work.
In a world where competitors are just a mouse-click away, human-centered design (HCD) methods change from a last-minute add-on to a vital part of the software development lifecycle. However, case studies indicate that existing process models for HCD are not prepared to cope with the organizational obstacles typically encountered during the introduction and establishment of HCD methods in industrial software development organizations. Knowledge about exactly how to most efficiently and smoothly integrate HCD methods into development processes practiced by software development organizations is still not available. To bridge this gap, we present the experience-based human-centered design lifecycle, an interdisciplinary effort of experts in the fields of software engineering, human-computer interaction, and process improvement. Our approach aims at supporting the introduction, establishment, and continuous improvement of HCD processes in software development organizations. The approach comprises a process model, tools, and organizational measures that promote the utilization of HCD methods in otherwise technology-centered development processes and facilitate organizational learning in HCD. We present promising results of a case study where our approach was successfully applied in a major industrial software development project.
As the body of knowledge on the design of interactive software systems becomes more mature, the need for disseminating the accumulated wisdom of the field becomes more important and critical to the design of useful and usable software systems. Usability guidelines in various forms are one technique that has been designed to convey usability knowledge and ensure a degree of consistency across applications. Another is the emerging discipline of usability patterns, which aims to apply the concepts of pattern languages used in architecture and software design to usability issues. This paper presents an approach that combines these techniques in a case-based architecture and utilizes a process to help an organization capture, adapt, and refine usability resources from project experiences. The approach utilizes a rule-based tool to represent the circumstances under which a given usability resource is applicable. Characteristics of the application under development are then captured and used to match usability resources to the project where they can be used to drive the design process. Design reviews are used to ensure that the repository remains a vital knowledge source for producing useful and usable software systems.
A study was conducted to investigate the effects of auditory, kinesthetic, and force feedback for a "point and select" computing task at two levels of cognitive workload. Participants were assigned to one of three computer-mouse feedback groups (regular mouse, kinesthetic feedback, and kinesthetic and force feedback). Each group received two auditory feedback conditions (sound on, sound off) for each of the two workload conditions (single task or dual task). Even though auditory feedback did not significantly improve task performance, all groups rated the sound-on conditions as requiring less work than the sound-off conditions. Similarly, participants believed that kinesthetic feedback improved their detection of errors, even though mouse feedback did not produce significant differences in performance. Implications for adding multi-modal feedback to computer-based tasks are discussed.
Growing use of computers in safety-critical systems increases the need for Human Computer Interfaces (HCIs) to be both smarter - to detect human errors - and better designed - to reduce likelihood of errors. We are developing methods for determining the likelihood of operator errors which combine current theory on the psychological causes of human errors with formal methods for modelling human-computer interaction. This paper outlines an approach to developing formal methods for evaluating safety of interactive systems, and illustrates the approach on a simplified problem from Air Traffic Control. We outline formal models for three components of an ATC simulator: the underlying computer system, the HCI and the operator.
This paper introduces a new technique for the verification of both safety and usability requirements for critical interactive systems. This technique uses the model-oriented formal method B and implements a hybrid version of the MVC and PAC software architecture models. Our claim is that this technique, which uses proof obligations, can ensure both usability and safety requirements, from the specification step of the development process to the implementation.
The paper focuses on Augmented Reality systems in which interaction with the real world is augmented by the computer, the task being performed in the real world. We first define mobile AR systems, collaborative AR systems, and finally mobile and collaborative AR systems. We then present the augmented stroll and its software design as one example of a mobile and collaborative AR system. The augmented stroll is applied to archaeology in the MAGIC (Mobile Augmented Group Interaction in Context) project.
We present a way of analyzing sensed context information formulated to help in the generation, documentation, and assessment of the designs of context-aware applications. Starting with a model of sensed context that accounts for the particular characteristics of sensing, we develop a method for expressing requirements for sensed context information in terms of relevant quality attributes plus properties of the sensors that supply the information. We demonstrate with an example how this approach permits the systematic exploration of the design space of context sensing along dimensions pertinent to software development. Returning to our model of sensed context, we examine how it can be supported by a modular software architecture for context sensing that promotes separation between context sensing, user interaction, and application concerns.
In recent years, because of advances in computer vision research, free-hand gestures have been explored as a means of human-computer interaction (HCI). Gestures in combination with speech can be an important step toward natural, multimodal HCI. However, the inclusion of non-predefined gestures in a multimodal setting can be a particularly challenging problem. In this paper, we propose a structured approach for studying multimodal language in the context of display control. Our approach is based on semantic phonology to represent the organization of gestures and to draw links between observable gesture primitives and their meaning. An implemented testbed allows us to conduct user studies and address issues toward the understanding of hand gestures in a multimodal computer interface. The proposed semantic classification of gestures for 2D-display control distinguishes two main categories of gesture classes based on their spatio-temporal deixis. The results of these studies point to syntax formation for gesticulation. These findings can help with the interpretation problem for natural gesture-speech interfaces.
The current generation of mobile context-aware applications must respond to a complex collection of changes in the state of the system and in its usage environment. We argue that dynamic links, as used in user interface software for many years, can be extended to support the change-sensitivity necessary for such systems. We describe an implementation of dynamic links in the Paraglide Anaesthetist's Clinical Assistant, a mobile context-aware system to help anaesthetists perform pre- and post-operative patient assessment. In particular, our implementation treats dynamic links as first class objects. They can be stored in XML documents and transmitted around a network. This allows our system to find and understand new sources of data at run-time.
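The idea of a dynamic link as a first-class, serialisable object can be sketched in a few lines. This is a hypothetical miniature of our own devising, not Paraglide's actual implementation; the names are illustrative:

```python
import xml.etree.ElementTree as ET

class DynamicLink:
    """Illustrative first-class dynamic link: watches one key of a
    context dictionary and pushes each *changed* value to a target."""
    def __init__(self, source_key, target):
        self.source_key, self.target = source_key, target
        self._last = object()            # sentinel: no value seen yet

    def notify(self, context):
        value = context.get(self.source_key)
        if value != self._last:          # change-sensitivity: fire on change
            self._last = value
            self.target(value)

    def to_xml(self):
        # Only the declarative side travels as XML; the target callback
        # would be re-bound when the link arrives at another node.
        return ET.tostring(ET.Element("link", source=self.source_key),
                           encoding="unicode")

seen = []
link = DynamicLink("patient_id", seen.append)
link.notify({"patient_id": "p42"})
link.notify({"patient_id": "p42"})   # unchanged value: no update pushed
link.notify({"patient_id": "p43"})
print(seen)           # ['p42', 'p43']
print(link.to_xml())  # <link source="patient_id" />
```

Because the link itself is an ordinary object with an XML form, it can be stored, transmitted, and reconstructed elsewhere, which is what enables discovering new data sources at run-time.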
Nowadays, UML is the most successful model-based approach to supporting software development. However, during the evolution of UML, little attention has been paid to supporting user interface design and development. In the meantime, the user interface has become a crucial part of most software projects, and the use of models to capture requirements and express solutions for its design has become a true necessity. Within the community of researchers investigating model-based approaches for interactive applications, particular attention has been paid to task models. ConcurTaskTrees is one of the most widely used notations for task modelling. This paper discusses why a UML for interactive systems is a desirable goal and presents a solution for obtaining one, based on the integration of the two approaches.
Systems combining the real and the virtual are becoming more and more prevalent. Existing HCI design methods do not currently address the design issues raised by the mix of real and virtual entities. To address this lack of design methods, we present the OP-a-S notation: OP-a-S modeling of a system adopts an interaction-centered point of view and highlights the links between the real world and the virtual world. Based on the characteristics of the OP-a-S components and relations, predictive usability analysis can be performed by considering the ergonomic property of consistency. We illustrate our method on the retro-design of a computer-assisted surgical application, CASPER.
The increasing proliferation of computational devices has introduced the need for applications to run on multiple platforms in different physical environments. Providing a user interface specially crafted for each context of use is extremely costly and may result in inconsistent behavior. User interfaces must now be capable of adapting to multiple sources of variation. This paper presents a unifying framework that structures the development process of plastic user interfaces. A plastic user interface is capable of adapting to variations of the context of use while preserving usability. The reference framework has guided the design of ARTStudio, a model-based tool that supports the plastic development of user interfaces. The framework as well as ARTStudio are illustrated with a common running example: a home heating control system.