Introduction to Human Computer Interaction, IMD07101 & IMD07401
Week 9 – 16th March 2023: Evaluation
Dr Gemma Webster, g.webster@napier.ac.uk

Today's plan
- Qualitative vs quantitative data
- Expert-based evaluation methods
- Participant-based evaluation methods
- Data analytics

QUANT evaluation methods

Quantitative data (QUANT)
Quantitative data is data expressing a certain quantity, amount or range. Usually there are measurement units associated with the data, e.g. metres in the case of a person's height. It makes sense to set boundary limits to such data, and it is also meaningful to apply arithmetic operations to it.
https://stats.oecd.org/glossary/detail.asp?ID=2219

Quantitative data is numerical and is analysed using "objective" statistical methods.
Question: What data-gathering techniques are used to obtain quantitative data?
Answer: questionnaires, structured interviews, secondary data collection, usability studies (measurements of human behaviour) and logs (e.g. web logs and activity reports).

Qualitative data (QUAL)
Qualitative data depends on descriptive words, images and observations. It is defined as "data that approximates or characterizes but does not measure the attributes, characteristics, properties, etc., of a thing or phenomenon." Qualitative data is subjective, exploratory and aimed at increasing understanding of a problem or situation.
https://www.springboard.com/blog/data-analytics/quantitative-data/

Qualitative data represents opinion. It is subjective in nature, open to interpretation and requires human coding.
Question: How is qualitative data obtained in projects?
Answer: open questions in questionnaires, semi-structured and unstructured interviews, focus groups, ethnographic studies, observation of behaviour, video and audio recordings, and secondary data (e.g. content from diaries, websites, blogs, discussion fora, broadcasts and publications).

Any questions?

What is evaluation?
Evaluation is the fourth main process of UX design. By evaluation we mean reviewing, trying out or testing a design idea, a piece of software, a product or a service to discover whether it meets certain criteria. These criteria will often be summed up by the guidelines for good design concerning usability. At times the designer will want to focus on UX and measure users' enjoyment, engagement and aesthetic appreciation. At other times the designer might be more interested in some other characteristic of the design, such as whether a particular web page has been accessed, or whether a particular service moment is causing users to walk away from the interaction.

Why, what, where and when to evaluate?
Why: to check users' requirements and that users can use the product and like it.
What: a conceptual model, early prototypes of a new system and, later, more complete prototypes or finished products (an iterative process).
Where: in natural and laboratory settings.
When: throughout design and on finished products (to inform new products or improve existing ones).

Three main types of evaluation
(1) Expert-based: a usability expert, or a UX designer, reviews some envisioned version of a design. This will often pick up significant usability or UX issues quickly, but experts will sometimes miss detailed issues that real users find difficult.
(2) Participant-based: people are recruited to use an envisioned version of a system; this is also called 'user testing'. It must be used at some point in the development process to get real feedback from users.
Notes: Expert- and participant-based methods can be conducted in a controlled setting such as a usability laboratory, using controlled experiments, or they can be undertaken 'in the wild', where much more realistic interactions will happen. If real users are not easily available for an evaluation, designers can ask people to take on the role of particular types of user described by personas.
(3) Data analytics: gather data on system performance once the system or service is deployed.

Expert-based evaluation methods
- Simple, relatively quick and effective.
- No substitute for real users.
- Effective, particularly during the design process.
- The expert walks through representative tasks or scenarios of use.
Three methods for this lecture:
1. Heuristic evaluation
2. Discount usability engineering
3. Cognitive walkthrough

Heuristic evaluation
Heuristic evaluation refers to a number of methods in which a person trained in HCI or interaction design examines a proposed design to see how it measures up against a list of principles. Ideally, several experts will perform the evaluation. Heuristic evaluation is valuable as formative evaluation, to help the designer improve the interaction at an early stage. It should not be used as a summative assessment, which happens at the end of the design process to make claims about the usability and other characteristics of a finished product.

From your readings: design principles
- Helping people access, learn and remember the system: Visibility, Consistency, Familiarity, Affordance.
- Giving them a sense of control, knowing what to do and how to do it: Navigation, Control, Feedback.
- Safely and securely: Recovery, Constraints.
- In a way that suits them: Flexibility, Style, Conviviality.
(See detail in Chapter 5, pp. 117-118.)

Heuristic evaluation based on design principles
Each expert goes through the design, notes each problem against the relevant heuristic and makes a recommendation for improvement where possible. Severity ratings (e.g. on 5-point scales) can be used to assess how much each problem will impact the design. A functionality briefing is normally needed, as well as the scenarios used in the design process. Issue: experts' disagreements.

Issues with heuristic evaluation
Experts might make certain assumptions, e.g. that users are naive, inexperienced with technology, etc. Heuristics might be misapplied.
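The severity-rating step can be sketched in code. A minimal Python sketch, assuming a 5-point severity scale (0 = not a problem, 4 = usability catastrophe) and three experts rating each problem; all issue names, principles and ratings below are invented for illustration:

```python
from statistics import mean

# Hypothetical heuristic-evaluation findings: each problem is noted
# against a design principle, with one severity rating per expert.
findings = [
    {"issue": "No undo after deleting an item", "principle": "Recovery",
     "severity": [4, 3, 4]},
    {"issue": "Toolbar icons are unlabelled", "principle": "Familiarity",
     "severity": [2, 3, 1]},
    {"issue": "Error message text is vague", "principle": "Feedback",
     "severity": [3, 2, 3]},
]

# Rank issues by mean severity so the worst problems are fixed first;
# a large spread between ratings flags the experts' disagreements.
for f in sorted(findings, key=lambda f: mean(f["severity"]), reverse=True):
    spread = max(f["severity"]) - min(f["severity"])
    print(f"{mean(f['severity']):.1f} (spread {spread})  "
          f"{f['principle']}: {f['issue']}")
```

Aggregating by mean and reporting the spread is one simple convention; teams may instead discuss disagreements and agree a single rating per problem.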
Heuristic evaluation should be used as a formative evaluation to get feedback on initial designs. Cognitive walkthroughs are more robust solutions (next slide!) (Woolrych and Cockton, 2000).

Cognitive walkthrough
A rigorous paper-based technique for checking through the detailed design and logic of steps in an interaction. It is derived from the human information processor view of cognition and is closely related to task analysis (Chapter 11). It works in three steps (to follow...).

Step 1: Inputs to cognitive walkthrough
- An understanding of the people who are expected to use the system.
- A set of concrete scenarios representing both (a) very common and (b) uncommon but critical sequences of activities.
- A complete description of the interface to the system. This should comprise both a representation of how the interface is presented, e.g. screens, and the correct sequence of actions for achieving the scenario tasks, usually as a hierarchical task analysis (HTA).
(Wharton et al., 1994)

Step 2: Questions for cognitive walkthrough
- Will the people using the system try to achieve the right effect?
- Will they notice that the correct action is available?
- Will they associate the correct action with the effect that they are trying to achieve?
- If the correct action is performed, will people see that progress is being made towards the goal of their activity?
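The four questions are asked at every step of the scenario's correct action sequence, and any 'no' answer is recorded as a problem. A minimal Python sketch of one analyst's record, assuming a hypothetical two-step scenario; the step names, answers and note are invented:

```python
# The four cognitive-walkthrough questions (after Wharton et al., 1994),
# asked at every step of the correct action sequence.
QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the desired effect?",
    "If the action is performed, will the user see progress being made?",
]

# One analyst's record for a hypothetical two-step scenario: a yes/no
# (True/False) answer per question, plus a note where a step fails.
walkthrough = [
    {"step": "Open the 'Settings' menu",
     "answers": [True, True, True, True], "notes": []},
    {"step": "Tap the gear icon to save",
     "answers": [True, False, True, True],
     "notes": ["Gear icon is not visibly a 'save' control"]},
]

# Report every step where any question is answered 'no' - each such
# answer is a predicted usability problem to feed back to the designers.
for step in walkthrough:
    for q, ok in zip(QUESTIONS, step["answers"]):
        if not ok:
            print(f"PROBLEM at '{step['step']}': {q} -> no")
```

In practice the record is usually kept on paper or in a spreadsheet; the point is that every step gets all four questions, so problems are tied to a specific step and a specific question.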