The field of Human Computer Interaction has grown rapidly alongside the spread of computers and computer-based systems. According to Lee and Paz (1991), more research is needed in this area because those who design computer interfaces are no longer representative of the people who use them. Johnson, Clegg, and Ravden (1989) also note that the literature on human computer interaction is expanding because of the diverse range of people now interacting with computer systems and the fast pace of technological advancement. Various aspects of human computer interaction have been explored in research studies and literature by Baecker, Grudin, Buxton, and Greenberg (1995), Preece, Rogers, and Sharp (2002), Hall (1997), Dix, Finlay, Abowd, and Beale (1993), Carroll (1997), and Olson and Olson (2003).
Human computer interaction focuses on creating interactive products that support people in their daily and professional lives (Preece et al, 2002). The human computer interface (HCI) is the component of a computer product that interacts with humans. The objective of HCI design is to ensure that the product is user-friendly, efficient, and enjoyable to use (Preece et al, 2002). Poor interface design can lead to increased stress for users, decreased productivity, lower job satisfaction, and misuse or non-use of the computer system (Lee and Paz, 1991; Henderson, Podd, Smith, and Varela-Alvarez, 1995; Johnson et al, 1989). It can also result in more errors, user frustration, poor system performance, employee dissatisfaction, high turnover, absenteeism, and tardiness.
Adhering to HCI research and literature can help reduce the negative effects of poor interface design. Evaluation is an important step in HCI design, allowing designers to see how the product interacts with users. Without evaluation, designers cannot ensure usability and user satisfaction (Preece et al, 2002). Various evaluation methods and techniques for HCIs are discussed in the literature. This report reviews and discusses these evaluation methods, helping potential evaluators understand and make informed decisions about the appropriate techniques for their own evaluations of HCIs.
Evaluation of the human computer interface
The assessment of human computer interfaces (HCIs) enables the evaluator to determine whether a design meets users' needs and how users perceive it. This evaluation should be conducted throughout the entire design and production process, an approach known as iterative design and evaluation. Baecker et al (1995), Preece et al (2002), and Christie, Scane and Collyer (2002) all emphasize the significance of this approach. According to Baecker et al (1995), iterative design and evaluation involves repeating the design and evaluation of an HCI until a satisfactory outcome is achieved. Johnson et al (1989) identified multiple benefits of incorporating evaluation into the design process, including increased user satisfaction, sales, and productivity, as well as reduced development costs, product returns, and training expenses.
Different methods of assessing the human computer interface
Before evaluating an HCI, it is important for the evaluator to consider the evaluation approach to be taken. The approach comprises the overall theory, methodology, or perspective used in the evaluation and acts as a guide, helping the evaluator define their objectives and goals clearly. Approaches also help categorize and define the different tools and techniques employed in evaluating HCIs.
Both Preece et al (2002) and Christie et al (2002) suggest models for approaching the evaluation of HCIs, called paradigms (Preece et al, 2002) and perspectives (Christie et al, 2002) respectively. Despite the differing terminology, both approaches effectively direct evaluators towards the appropriate tools and techniques for evaluation. However, Christie et al (2002) take a more specialized approach because of the audience of their intended publication.
Christie et al (2002) discuss theoretical perspectives in HCI evaluation. These perspectives aim to capture, at a theoretical level, what an evaluation is trying to achieve, and they encourage evaluators to approach evaluations from a psychological standpoint and identify the goals they aim to accomplish. The five perspectives, summarised in the paragraphs that follow, are:
1. The cognitive perspective
2. The social psychological perspective
3. The organisational perspective
4. The psychophysiological perspective
5. The communication perspective
Christie et al (2002) state that the cognitive perspective centres on the similarity between the information processing of human users and that of computer interfaces. Norman (1988) explains this clearly in his mental model diagram, represented in Figure 1. Disparities can arise between the designer's model, the user's model, and the system image. The cognitive perspective evaluates HCIs with the objective of aligning the designer's model, the user's model, and the system image as closely as possible.
Figure 1: Norman's conceptualization of Human-Computer Interaction (HCI)
The social psychological perspective studies the interaction between humans and computer interfaces. The organisational perspective uses an HCI model to analyze information processing and workflow in organizations. The psychophysiological perspective assesses human behaviour through the interaction between psychological processes and the body's physiological responses. The communication perspective explores the impact of constraints on communication between human or electronic partners during task performance (Christie et al, 2002).
Preece et al (2002) describe an alternative approach to evaluating HCIs. Establishing an evaluation paradigm is crucial in determining the right assessment tools and techniques. These paradigms are similar to the theoretical perspectives mentioned above but provide a broader framework for evaluation, and they also help organize and classify the different tools and techniques used to evaluate HCIs.
According to Preece et al (2002), each evaluation paradigm serves a specific purpose, and the evaluator's strengths, weaknesses, beliefs, and evaluation requirements determine which paradigm to use. The paradigms offer a comprehensive framework for evaluations and direct evaluators towards appropriate tools and techniques. Factors such as the level of control, the evaluation location, the data to be collected, the timing in the development process, and the degree of user involvement help determine the most suitable paradigm to employ (Preece et al, 2002). The four paradigms are:
1. Quick and dirty evaluation
2. Usability testing
3. Field studies
4. Predictive evaluation
Quick and dirty evaluation is a fast and inexpensive way to collect feedback from users or professionals. However, it lacks thorough documentation of findings (Preece et al, 2002). Designers need to be aware that this method might overlook important aspects, so they should minimize the potential impact of any errors.
Usability testing has historically been the primary method used to evaluate HCIs (Preece et al, 2002). Extensive research on this method has been conducted by Van den Haak, de Jong, and Schellens (2004) and Henderson et al (1995). According to Preece et al (2002), usability testing encompasses multiple aspects and serves as both an evaluation technique and a paradigm.
Field studies can be conducted to evaluate how users interact with the HCI in their regular environment, providing valuable insight to the evaluator regarding the HCI's performance in its proposed setting.
Predictive evaluation is used to anticipate potential usability issues in HCIs. This approach, carried out by experts, is quick and inexpensive because it does not require user participation. Heuristic evaluation falls into the predictive evaluation category.
Different evaluation paradigms have specific requirements and challenges that make them appropriate for different evaluation scenarios. It is advisable to use triangulation or a combination of multiple paradigms in evaluations. Furthermore, within each paradigm, various evaluation techniques (which will be discussed later) can be used (Preece et al, 2002).
Preece et al (2002) offer less complex evaluation paradigms for inexperienced evaluators, which remain valid but may not be as thorough as the theoretical perspectives discussed by Christie et al (2002). Both the theoretical perspectives and evaluation paradigms encompass various tools and techniques for assessing HCIs.
Techniques for assessing the human computer interface
Evaluation techniques for HCIs are more specific than the approaches described above. A wide range of techniques is available, but no single technique is universally applicable because evaluation scenarios vary. The selection of the most appropriate technique depends on several factors and is guided by the evaluation paradigms or theoretical perspectives discussed previously.
Evaluation techniques are not tied to a single paradigm or perspective; a given technique may be associated with several (Preece et al, 2002; Christie et al, 2002). The choice of technique also depends on who takes part in the evaluation, since experts and users require different techniques. Although excluding users from the evaluation process can speed up the procedure and cut costs, it may also jeopardize the validity and reliability of the evaluation.
This report examines three sets of HCI evaluation techniques, with a focus on those introduced by Preece et al (2002), to provide evaluation guidance for less experienced evaluators.
Christie et al (2002) provide a comprehensive collection of evaluation tools and techniques for HCIs. The authors take an expert evaluation approach, assuming that the reader possesses advanced knowledge and experience in HCI evaluation, and a thorough examination of the relevant literature is needed to make full use of their material. The tools and techniques they list span the theoretical perspectives described earlier, such as the cognitive and social psychological perspectives, and include:
* Expert panel analysis
* Predictive models
* Audits and guidelines
* Objective metrics
* Dialogue error analysis
* Focus groups
* Questionnaires
* Interviews
* Stakeholder analysis
* User observation
* Physiological data
* User walkthroughs
* Controlled experiments
* Field trials
* Critical events
Lee and Paz (1991) also describe various evaluation methods for HCIs. These methods do not carry the now-familiar names such as usability testing or heuristic evaluation; their names are more descriptive but leave more to the evaluator's interpretation. The evaluation methods mentioned by Lee and Paz include:
* Concept test or paper and pencil test
* Friendly users
* Hostile users
* User simulation
* Simulation trials
* Iterative informal laboratory experiments
* Formal laboratory experiments
* Field trials
The techniques described by Preece et al (2002) for evaluating HCIs are divided into five main groups. These techniques offer a more detailed breakdown than those presented by Christie et al (2002) and Lee and Paz (1991), with well-defined options and clear objectives and methods for evaluation. The five categories are:
1. Observing users
2. Asking users their opinions
3. Asking experts their opinions
4. Testing users' performance
5. Modelling users' task performance to predict usability
Observing users is valuable for determining what users actually do, the context in which they work, how well the technology supports them, and whether any additional support is needed. This technique can be applied both in quick usability studies and in field studies. Observations can take place either in a controlled laboratory environment or in the users' natural surroundings. The data collected may consist of notes, photographs, video, or audio recordings, and can range from comprehensive records to brief notes written by the evaluator while watching the user interact with the HCI.
Asking users is an efficient way of obtaining direct and immediate feedback on an HCI. This method enables evaluators to discover what users do, want, prefer, and dislike. It can be used in quick and dirty evaluations, usability testing, and field studies. Users can be asked through interviews, questionnaires, discussions, and focus groups, and the data collected may consist of the evaluator's notes, questionnaire results, or the users' own notes.
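As a purely illustrative sketch (not a procedure described by the authors cited above), the Python snippet below shows one way responses to a simple five-point agreement questionnaire could be tallied; the question texts, the ratings, and the summarise helper are all hypothetical.

```python
from statistics import mean

# Hypothetical five-point Likert items (1 = strongly disagree, 5 = strongly agree).
QUESTIONS = [
    "I found the interface easy to learn.",
    "I could complete my tasks without assistance.",
    "The terminology used on screen was familiar to me.",
]

# Each inner list holds one respondent's ratings, in question order.
responses = [
    [4, 5, 3],
    [3, 4, 2],
    [5, 5, 4],
]

def summarise(all_responses):
    """Return the mean rating for each question across all respondents."""
    per_question = zip(*all_responses)  # regroup ratings by question
    return [round(mean(scores), 2) for scores in per_question]

for question, score in zip(QUESTIONS, summarise(responses)):
    print(f"{score:>5}  {question}")
```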
Consulting experts offers a convenient and cost-effective way to learn more about the HCI being assessed. It includes inspections and walkthrough evaluations, and it can be combined with both the quick and dirty and the predictive evaluation paradigms. The quality of the data gathered depends on the expertise and number of experts participating: a group of experts is likely to identify a higher percentage of an HCI's problems than a single expert (Preece et al, 2002).
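To make the group-versus-single-expert point concrete, the short sketch below uses a widely cited model from the broader usability literature (Nielsen and Landauer's formula for the proportion of problems found by several independent evaluators). The model and the per-evaluator detection probability of 0.31 are assumptions brought in purely for illustration, not figures from the sources reviewed in this report.

```python
# Proportion of usability problems found by i independent evaluators,
# assuming each evaluator finds any given problem with probability p:
#     found(i) = 1 - (1 - p) ** i
# p = 0.31 is an assumed average detection rate used purely for illustration.
p = 0.31
for evaluators in range(1, 6):
    found = 1 - (1 - p) ** evaluators
    print(f"{evaluators} evaluator(s): about {found:.0%} of problems found")
```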
Heuristic evaluation involves experts assessing an HCI against a set of principles called heuristics, such as consistency and standards, error prevention, and visibility of system status and function (Lathan, Sebrechts, Newman and Doarn, 1999). The experts review the HCI, identify usability problems that violate these principles, and provide recommendations for addressing them.
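The minimal sketch below illustrates how the findings of such an expert review might be recorded and prioritised; the Finding structure, the severity scale, and the example problem are hypothetical, and the list of heuristics is limited to those named above.

```python
from dataclasses import dataclass

# Heuristics named above (Lathan et al, 1999); illustrative, not exhaustive.
HEURISTICS = [
    "Consistency and standards",
    "Error prevention",
    "Visibility of system status",
]

@dataclass
class Finding:
    heuristic: str       # which principle the problem violates
    location: str        # where in the interface it was observed
    description: str     # what the expert saw
    severity: int        # hypothetical scale: 1 (cosmetic) to 4 (catastrophic)
    recommendation: str  # suggested improvement

findings = [
    Finding("Visibility of system status", "file upload screen",
            "No progress indicator is shown while a file uploads", 3,
            "Display a progress bar with an estimated time remaining"),
]

# Every recorded finding should reference one of the agreed heuristics.
assert all(f.heuristic in HEURISTICS for f in findings)

# Report the most severe problems first so that designers can prioritise fixes.
for f in sorted(findings, key=lambda x: x.severity, reverse=True):
    print(f"[severity {f.severity}] {f.heuristic}: {f.description} -> {f.recommendation}")
```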
User testing, also known as usability testing, assesses the performance of users in order to evaluate the HCI. This evaluation includes measuring factors such as task completion time, errors made, and keystrokes performed. The data gathered from these tests is analyzed and can be compared to different design options. User tests are conducted in controlled environments with typical users of the HCI performing well-defined tasks, adhering to the usability paradigm.
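As an illustration of the kind of measures such tests produce, the sketch below summarises a small set of hypothetical user-test sessions; the participant data, the field layout, and the chosen metrics are assumptions, not results from any study cited here.

```python
from statistics import mean

# Hypothetical log of user-test sessions for one well-defined task.
# Each record: (participant, task time in seconds, errors, keystrokes, completed?)
sessions = [
    ("P1", 148.0, 2, 96, True),
    ("P2", 201.5, 5, 131, True),
    ("P3", 173.2, 1, 88, True),
    ("P4", 240.0, 7, 150, False),  # gave up before finishing the task
]

completed = [s for s in sessions if s[4]]

print(f"Completion rate : {len(completed) / len(sessions):.0%}")
print(f"Mean task time  : {mean(s[1] for s in completed):.1f} s (completed tasks only)")
print(f"Mean errors     : {mean(s[2] for s in sessions):.1f}")
print(f"Mean keystrokes : {mean(s[3] for s in sessions):.1f}")
```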
Modelling users' task performance with predictive models is a technique used by experts to evaluate HCIs. Because this approach excludes users from the evaluation, it is quicker and more cost-effective. Predictive models aim to predict user performance and efficiency and to expose potential HCI problems. They are typically employed in the early stages of development and belong to the predictive paradigm. An example of a predictive model is the GOMS (goals, operators, methods and selection rules) model (Preece et al, 2002).
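The sketch below illustrates the flavour of such a model using the keystroke-level model, a simplified member of the GOMS family; the operator times are the commonly quoted textbook estimates (treated here as assumptions rather than measurements), and the task breakdown is hypothetical.

```python
# Keystroke-level model (KLM) operators, a simplified member of the GOMS family.
# The times below are commonly quoted textbook estimates (Card, Moran and Newell);
# treat them as assumptions, not measurements of any particular system or user.
OPERATOR_TIMES = {
    "K": 0.20,  # press a key or button (skilled typist)
    "P": 1.10,  # point at a target on screen with a mouse
    "H": 0.40,  # move the hands between keyboard and mouse ("homing")
    "M": 1.35,  # mental preparation before an action
}

def predict_time(operators):
    """Predict execution time for a task expressed as a sequence of KLM operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: save a document under a new name using the menus.
save_as = (["M", "H", "P", "K"]   # think, reach for the mouse, point at the File menu, click
           + ["P", "K"]           # point at "Save As", click
           + ["M"] + ["K"] * 8    # think of a file name, type eight characters
           + ["K"])               # press Enter to confirm

print(f"Predicted task time: {predict_time(save_as):.2f} s")
```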
A method for evaluating the human computer interface
Johnson et al (1989) outlined the essential characteristics of methods for evaluating the usability of HCIs. Such methods should be systematic, use existing criteria, be iterative, be understandable by a wide range of users, be inclusive, be straightforward to use, be valid, reflect real-world system usage, and be thorough. To meet these requirements, evaluators must adhere to a well-defined methodology when assessing an HCI.
The DECIDE framework, introduced by Preece et al (2002), is a checklist that helps evaluators approach, plan, and carry out an HCI evaluation. By employing this framework, evaluators can design evaluations systematically and get the most out of them. The DECIDE framework consists of the following steps:
1. Determine the overall goals of the evaluation.
2. Explore the specific questions to be answered.
3. Choose the evaluation paradigm and techniques to answer those questions.
4. Identify the practical issues, such as selecting participants, and address them.
5. Decide how to deal with the ethical issues.
6. Evaluate, interpret, and present the data.
The DECIDE framework is beneficial for evaluators in selecting the most suitable method and tool for their task.
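As a minimal sketch of how the DECIDE checklist might be captured in practice, the snippet below represents each step as a field of a simple plan object; the field names, the example answers, and the completeness check are hypothetical illustrations rather than part of the framework itself.

```python
from dataclasses import dataclass, fields

@dataclass
class EvaluationPlan:
    """One field per step of the DECIDE framework (Preece et al, 2002)."""
    determine_goals: str
    explore_questions: str
    choose_paradigm_and_techniques: str
    identify_practical_issues: str
    decide_on_ethical_issues: str
    evaluate_and_present_data: str

# Hypothetical plan for a quick evaluation of a library catalogue interface.
plan = EvaluationPlan(
    determine_goals="Check whether first-time users can locate a known book",
    explore_questions="How long does a search take? Where do users get stuck?",
    choose_paradigm_and_techniques="Usability testing: observation plus a short interview",
    identify_practical_issues="Recruit six student participants; book the usability lab",
    decide_on_ethical_issues="Obtain informed consent; anonymise all recordings",
    evaluate_and_present_data="Summarise task times and errors in a short report",
)

# The plan is ready to run only when every DECIDE step has been answered.
ready = all(getattr(plan, f.name).strip() for f in fields(plan))
print("Plan complete" if ready else "Plan incomplete")
```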
Conclusion
When evaluating a human computer interface (HCI), the evaluator has several choices and can employ different approaches and techniques depending on the situation. The decision of which approach to use may be influenced by factors within the evaluator's control or by external factors they must accommodate. For example, with a tight deadline and no budget for evaluation, an evaluator might choose a quick and dirty assessment using heuristics. With more time and a larger budget, an evaluator could select a well-structured predictive evaluation paradigm that incorporates heuristics and expert input to identify, discuss, and report on HCI-related issues.
To gain a thorough understanding of an HCI, it is advisable to employ several evaluation methods, for example combining user testing with gathering users' opinions. Using multiple techniques strengthens the evaluation results and brings more issues to light.
Numerous research articles and literature studies have presented diverse approaches, methods, models, tools, and techniques for evaluating HCIs. While interpretations differ among these studies, there are key theories and techniques that should always be followed, including the predictive evaluation paradigm with its predictive models, usability testing, and heuristic evaluation.
Bibliography
Baecker, R., Grudin, J., Buxton, W. and Greenberg, S. (Eds.) (1995) Readings in Human-Computer Interaction (2nd Ed). Morgan Kaufmann Publishers, San Francisco.
Carroll, J. (1997) Human Computer Interaction: Psychology as a science of design. Annu. Rev. Psych. 48, 61-83.
Christie, B., Scane, R. and Collyer, J. (2002) Evaluation of the human-computer interaction at the user interface of advanced IT systems. In Evaluation of Human Work: A practical ergonomics methodology (2nd Ed). Taylor and Francis.
The book "Human computer interaction" by Dix, A., Finlay, J., Abowd, G. and Beale, R. (1993) includes a chapter on the design process (Chapter 5). This chapter is found on pages 147-190.The citation for a conference proceeding is as follows:
Hall, R. (1997) In Proceedings of the 33rd Annual Conference of the Ergonomics Society of Australia, 25-27 November 1997, Gold Coast, Australia. Ergonomics Society of Australia, Canberra, pp 53-62.
Henderson, Podd, Smith and Varela-Alvarez (1995) An examination of four user-based software evaluation methods. Interacting with Computers.
Johnson, G., Clegg, C. and Ravden, S. (1989) Towards a practical method of user interface evaluation. Applied Ergonomics. 20(4), 255-260.
Lathan, C., Sebrechts, M., Newman, D. and Doarn, C. (1999) Heuristic evaluation of a web-based interface for internet telemedicine. Telemedicine Journal. 5(2), 177-185.
Lee, C., and Paz, N. (1991) Human-computer interfaces: modelling and evaluation. Computers Industrial Engineering. 21, 577-581.
Olson, G. and Olson, J. (2003) Human Computer Interaction: Psychological Aspects of the Human Use of Computing. Annu. Rev. Psych. 54, 491-516.
Preece, J., Rogers, Y. and Sharp, H. (2002) Interaction Design: beyond human-computer interaction. John Wiley & Sons, USA.
Wilson, J. (2001) A framework and a context for ergonomics methodology. In Evaluation of Human Work: A practical ergonomics methodology (2nd Ed). Taylor and Francis.
Van den Haak, M., de Jong, M. and Schellens, P. (2004) A comparison of think-aloud protocols and constructive interaction in the usability testing of online library catalogues. Interacting with Computers. 16, 1153-1170.