Browsing by Author "Mogavi, Reza Hadi"
Now showing 1 - 2 of 2
Item
Designing Biofeedback Board Games: The Impact of Heart Rate on Player Experience (Association for Computing Machinery, New York, NY, United States, 2025-04-25)
Tu, Joseph; Kukshinov, Eugene; Mogavi, Reza Hadi; Wang, Derrick M.; Nacke, Lennart E.
Biofeedback provides a unique opportunity to intensify tabletop gameplay. It permits new play styles through digital integration while keeping the tactile appeal of physical components. However, integrating biofeedback systems, such as heart rate (HR), into game design remains poorly understood in the literature and underexplored in practice. To bridge this gap, we employed a Research through Design (RtD) approach. This included (1) gathering insights from enthusiast board game designers (n = 10), (2) conducting two participatory design workshops (n = 20), (3) prototyping game mechanics with experts (n = 5), and (4) developing the game prototype artifact One Pulse: Treasure Hunter’s. We identify practical design implementations for incorporating biofeedback, particularly heart rate, into tabletop games. Thus, we contribute to the field by presenting design trade-offs for incorporating HR into board games, offering valuable insights for HCI researchers and game designers.

Item
The great AI witch hunt: Reviewers’ perception and (Mis)conception of generative AI in research writing (Computers in Human Behavior: Artificial Humans, 2024-10-24)
Hadan, Hilda; Wang, Derrick; Mogavi, Reza Hadi; Tu, Joseph; Zhang-Kennedy, Leah; Nacke, Lennart
Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers struggled to distinguish between human and AI-augmented writing, but their judgements remained consistent. They noted the loss of a “human touch” and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluations of submissions, regardless of any personal biases towards GenAI. The quality of the research itself should remain a priority in reviews, regardless of any preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI’s assistance.