Information Visualization Evaluation

From WikiViz
Revision as of 12:28, 16 March 2009 by Mazzar (Talk | contribs)


Evaluation of information visualization is one of the hottest topics in the field. Many visualization tools have been developed recently, but little effort has been made to evaluate their effectiveness and utility.

Evaluation criteria

Some criteria that can be addressed in an evaluation process are the following:

  • Functionality - to what extent does the system provide the functionality required by the users?
  • Effectiveness - does the visualization provide value? Does it provide new insight? How? Why?
  • Efficiency - to what extent does the visualization help users achieve better performance?
  • Usability - how easily can users interact with the system? Is the information presented in a clear and understandable format?
  • Usefulness - is the visualization useful? Who may benefit from it?

Techniques

It is possible to classify evaluation techniques into two main categories, analytic and empirical evaluations:

  • Analytic evaluation, based on formal analysis models and conducted by experts; analytic evaluations can be further divided into the following:
    • Heuristic/expert evaluation, where experts act as less experienced users and describe the potential problems they foresee for such users;
    • Cognitive walkthroughs, where experts walk through a specific task using a prototype.


  • Empirical evaluation, realized through experiments with user tests; these can be further divided into quantitative studies and qualitative studies.
    • Quantitative studies consist of testing specific hypotheses through direct measurements.
      • Controlled experiments (or experimental studies), where an evaluator manipulates a number of factors associated with interface design and studies their effects on various aspects of user performance.
    • Qualitative studies involve the analysis of qualitative data, which may be obtained through questionnaires, interviews, and observation of users using the system, in order to understand and explain social phenomena.
      • Focus groups, group interviews of individuals selected and assembled by researchers to discuss and comment, from personal experience, on the topic that is the subject of the research.
      • Interviews, which can be conducted with users, asking specific questions to elicit users' impressions and general comments.
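The controlled experiments described above typically compare user performance measurements (such as task completion times) across interface conditions and test whether the difference is statistically meaningful. A minimal sketch of such an analysis, using Welch's t statistic and entirely hypothetical timing data (the condition names and values are invented for illustration):

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Hypothetical task-completion times (seconds) under two interface conditions.
condition_a = [41.2, 38.5, 44.0, 39.7, 42.3, 40.1]
condition_b = [48.9, 51.2, 46.5, 50.0, 47.8, 52.1]

t = welch_t(condition_a, condition_b)
print(f"t = {t:.2f}")  # a large |t| suggests a real difference in completion time
```

In a real study the t value would be compared against a t distribution (or computed with a statistics package) to obtain a p-value, and the factors manipulated would follow from the experimental design.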

Some metrics

Classic HCI methods alone are not sufficient for assessing InfoVis systems. Some HCI metrics commonly used are:

  • time required to learn the system
  • time required to achieve a goal
  • error rates
  • retention of interface skills over time
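Metrics such as time to achieve a goal and error rate are usually computed from logged user sessions. A minimal sketch, assuming a hypothetical session log format (the field names and numbers below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One logged user session (all field names are hypothetical)."""
    user: str
    seconds_to_goal: float  # time required to achieve the goal
    actions: int            # total interface actions taken
    errors: int             # actions flagged as mistakes

def error_rate(sessions):
    """Fraction of all actions that were errors, pooled across sessions."""
    total_actions = sum(s.actions for s in sessions)
    total_errors = sum(s.errors for s in sessions)
    return total_errors / total_actions if total_actions else 0.0

def mean_time_to_goal(sessions):
    """Average seconds to reach the goal across sessions."""
    return sum(s.seconds_to_goal for s in sessions) / len(sessions)

logs = [
    Session("u1", 95.0, 40, 3),
    Session("u2", 120.5, 55, 7),
    Session("u3", 88.2, 38, 2),
]
print(f"mean time to goal: {mean_time_to_goal(logs):.1f} s")
print(f"error rate: {error_rate(logs):.2%}")
```

Learning time and retention would be measured the same way, by comparing these metrics across a user's first and later sessions.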

Conferences and publications

Papers

In this section I collect a list of papers focusing on this issue:

  • Chen, C. Top 10 Unsolved Information Visualization Problems. IEEE Computer Graphics and Applications, 25(4), pp. 12-16, 2005.
  • Schaffer, D. et al. Navigating Hierarchically Clustered Networks through Fisheye and Full-Zoom Methods. ACM Transactions on Computer-Human Interaction, Vol. 3, No. 2, June 1996, pp. 162-188.
  • Sedig, K. et al. Application of information visualization techniques to the design of a mathematical mindtool: a usability study. Information Visualization (2003) 2, pp. 142-159.
  • (on the usability of SoftVis) Marcus, A.; Comorski, D.; Sergeyev, A. Supporting the evolution of a software visualization tool through usability studies. Proceedings of the 13th International Workshop on Program Comprehension (IWPC 2005), 15-16 May 2005, pp. 307-316.

People

Evaluation of Visualizations for information retrieval