
dc.contributor.author Cao, Shi
dc.description Cao, S. (2015). Progress towards Automated Human Factors Evaluation. 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, AHFE 2015, 3, 4266–4272. This work is made available through a CC-BY-NC-ND 4.0 license. The licensor is not represented as endorsing the use made of this work.
dc.description.abstract Human factors tests are important components of systems design. Designers need to evaluate users' performance and workload while using a system and compare different design options to determine the optimal design choice. Currently, human factors evaluation and tests rely mainly on empirical user studies, which add a heavy cost to the design process. In addition, it is difficult to conduct comprehensive user tests at early design stages, when no physical interfaces have been implemented. To address these issues, I develop computational human performance modeling techniques that can simulate users' interaction with machine systems. This method uses a general cognitive architecture to computationally represent human cognitive capabilities and constraints. Task-specific models can be built with specifications of user knowledge, user strategies, and user group differences. The simulation results include performance measures, such as task completion time and error rate, as well as workload measures. Completed studies have modeled multitasking scenarios in a wide range of domains, including transportation, healthcare, and human-computer interaction. The success of these studies demonstrates the modeling capabilities of this method. Cognitive-architecture-based models are useful, but building a cognitive model can be difficult to learn and master. It usually requires at least intermediate programming skills to understand and use the language and syntax that specify a task. For example, to build a model that simulates a driving task, a modeler needs to build a driving simulation environment so that the model can interact with the simulated vehicle. To simplify this process, I have conducted preliminary programming work that directly connects the cognitive model to existing task environment simulation programs. The model will be able to obtain perceptual information directly from the task program and send control commands back to it. With cognitive-model-based tools, designers will be able to watch the model perform tasks in real time and obtain a report of the evaluation. Automated human factors evaluation methods have tremendous value in supporting systems design and evaluation.
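The closed-loop interaction the abstract describes (the model obtains perceptual information from a task program and sends control commands back) can be sketched in a few lines. This is a minimal illustrative sketch only: the class and function names (`TaskSimulation`, `CognitiveModel`, `run_evaluation`) and the toy lane-keeping logic are assumptions, not part of the author's actual tooling.

```python
# Hypothetical sketch of the perceive-decide-act loop between a cognitive
# model and an existing task environment simulation. All names here are
# illustrative, not from the paper's implementation.

class TaskSimulation:
    """Stand-in for an existing task environment (e.g. a driving simulator)."""
    def __init__(self, steps):
        self.steps = steps
        self.t = 0

    def percepts(self):
        # Expose the current state the model is allowed to "see".
        return {"time": self.t, "lane_offset": 0.1 * (self.t % 3)}

    def apply(self, command):
        # Accept a control command from the model and advance one step.
        self.t += 1

    def done(self):
        return self.t >= self.steps


class CognitiveModel:
    """Toy model: steer against lane offset and track task time."""
    def __init__(self):
        self.completion_time = 0

    def decide(self, percepts):
        self.completion_time = percepts["time"] + 1
        return {"steer": -percepts["lane_offset"]}


def run_evaluation(model, sim):
    # Closed loop: percepts flow in, commands flow out, and an evaluation
    # report (here just task completion time) is produced at the end.
    while not sim.done():
        sim.apply(model.decide(sim.percepts()))
    return {"task_completion_time": model.completion_time}


report = run_evaluation(CognitiveModel(), TaskSimulation(steps=5))
```

In a real tool, `percepts()` and `apply()` would be bound to the simulation program's API, and the report would include the workload measures the abstract mentions.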
dc.relation.ispartofseries Procedia Manufacturing; 3
dc.rights Attribution-NonCommercial-NoDerivatives 4.0 International
dc.subject Systems design
dc.subject Usability tests
dc.subject Cognitive architecture
dc.subject Human performance modeling
dc.subject Mental workload
dc.title Progress towards Automated Human Factors Evaluation
dc.type Conference Paper
dcterms.bibliographicCitation Cao, S. (2015). Progress towards Automated Human Factors Evaluation. 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, AHFE 2015, 3, 4266–4272.
uws.contributor.affiliation1 Faculty of Engineering
uws.contributor.affiliation2 Systems Design Engineering



