Browsing by Author "Fugelsang, Jonathan"
Now showing 1 - 17 of 17
Item A Two-Effects Model of Explanation on Exposing the Illusion of Understanding (University of Waterloo, 2024-12-20) Meyers, Ethan; Koehler, Derek; Fugelsang, Jonathan
People often overestimate their understanding of how things work. For instance, people believe that they can explain even ordinary phenomena such as the operation of zippers and speedometers in greater depth than they really can. This is called the illusion of understanding (originally known as the illusion of explanatory depth). Fortunately, a person can expose the illusion by attempting to generate a causal explanation for how the phenomenon operates (e.g., how a zipper works). This might be because explanation makes salient the gaps in a person’s knowledge of that phenomenon. However, recent evidence suggests that people might be able to expose the illusion by instead explaining a different phenomenon. Across six preregistered experiments and one secondary data analysis, I examined whether explaining one phenomenon (e.g., how a zipper works) leads individuals to lower their self-assessed knowledge of unrelated phenomena (e.g., how snow forms). My findings demonstrated that participants consistently revised their understanding downwards, not only for the item they explained but also for other items they did not explain. For instance, participants reported reduced understanding of speedometers after explaining helicopters or zippers. Contrary to prior research, participants did exhibit the illusion for familiar movie plots (Experiment 4), but consistent with prior research, participants did not exhibit the illusion for common procedures (Experiment 5). Additionally, when common procedures were included in the experimental design used in Experiments 2 and 3, participants showed no illusion whatsoever (Experiment 6).
Finally, an analysis of explanation quality using ChatGPT to code the explanations revealed that the reduction in perceived understanding after explaining (compared to before) correlated with the difference between how well the participant thought they understood the item and how well they actually explained it, but only for explained items. These findings challenge the common framework of how the illusion of understanding operates. Throughout the thesis I evaluate alternative models of the illusion and ultimately find the most support for a two-effects model of explanation, wherein failing to explain a phenomenon temporarily makes people recognize the gaps in their knowledge of the item they explained and makes them feel less knowledgeable about most other things.

Item Beyond Algorithm Aversion: The Impact of Conventionality on Evaluation of Algorithmic and Human-Made Errors (University of Waterloo, 2024-08-23) Tariq, Hamza; Fugelsang, Jonathan; Koehler, Derek
Prior research has found that when an algorithm makes an error, people judge it more severely than when the same mistake is made by a human. This bias, known as algorithm aversion, was investigated across two studies (N = 1199). Specifically, we explored the effect of the status quo on people’s reactions to identical mistakes made by humans and algorithms. We found significant algorithm aversion when participants were informed that the decisions described in the scenarios are conventionally made by humans. However, when participants were told that the same decisions are conventionally made by algorithms, the bias diminished, was eliminated, or even reversed direction. This effect of varying whether the algorithm or the human is described as the convention had a particularly strong influence on recommendations of which decision maker should be used in the future. These findings suggest that the existing status quo has a consequential influence on people’s judgments of mistakes.
Implications for people’s evolving relationship with algorithms and technology are discussed.

Item Bullshit Makes the Art Grow Profounder: Evidence for False Meaning Transfer Across Domains (University of Waterloo, 2018-09-05) Turpin, Martin; Stolz, Jennifer; Fugelsang, Jonathan
The purpose of this thesis was to explore the decision making underlying the perception of meaning in abstract art. In particular, I explore whether features adjacent to the content of the art itself predominantly drive the perception of depth and meaning in abstract art, especially by drawing a connection between “International Art English,” the mode of communication prevalent in the art world, and the concept of Pseudo-Profound Bullshit. Across three studies, 500 participants completed tasks that assessed the degree to which Pseudo-Profound Bullshit can enhance the perceived profoundness of abstract art and examined mechanisms that underlie this enhancement. It was found that pairing abstract art pieces with randomly generated pseudo-profound titles enhanced the perception of profoundness in those art pieces (Exp 1), that being under a verbal working memory load separately enhanced the perception of profoundness of abstract art (Exp 2), but did not interact with the presence of a title, nor did it independently affect bullshit receptivity generally (Exp 3). This ultimately contributes to our understanding of the cognition of art and decision making, especially as it relates to an application of models of cognitive miserliness to the evaluation of abstract art.

Item Dazzled and Confused: Bullshitting as a Strategic Behaviour (University of Waterloo, 2023-12-05) Turpin, Martin Harry; Fugelsang, Jonathan
While much work has focused on receptivity to bullshit as a form of irrational belief which may predict the endorsement of other irrational beliefs, much less has been done examining how bullshit may be used strategically.
For a highly social species such as humans, much can be gained by deploying cognitive and linguistic tricks to impress, confuse, and entice others toward actions favourable to the bullshitter. In the current research, I examine the persuasive power of bullshit in 9 studies. First, I demonstrate how the use of bullshit affects people’s judgments of things unrelated to the content of the bullshit itself, including enhancing the perceived profoundness of abstract art through the inclusion of bullshit titles (Chapter 2) or increasing reported willingness-to-pay for questionable products which are described using bullshit (Chapter 3). Further, I demonstrate that effective bullshitting may confer benefits in terms of how others perceive the bullshitter, including that good bullshitters are judged to be more intelligent. I also demonstrate that this judgement may not be completely unfounded insofar as cognitive ability predicts the ability to bullshit well (Chapter 4). I then propose a potential mechanism for why bullshit carries persuasive power: a unique combination of aesthetic appeal and confusing construction which leaves the target of bullshit baffled but open to being impressed by the odd beauty of flowery nonsense. I ultimately find that the strongest predictor of receptivity to bullshit is how beautiful it is judged to be (Chapter 5). I discuss these results as they contribute to an understanding of bullshitting as a strategic behaviour which affords good bullshitters the opportunity to gain advantages through confusion, superficial impressiveness, and a flexible commitment to truth telling.

Item The Effect of Strategic Language on Perceptions of Actions and Speakers (University of Waterloo, 2023-07-31) Walker, Alexander; Fugelsang, Jonathan; Koehler, Derek
Describing the actions of others (or oneself) necessitates that a speaker make linguistic choices, as multiple terms can often be used to describe the same act.
The present work investigates the consequences of these linguistic choices, assessing the extent to which a self-serving speaker can, through the strategic use of euphemistic (agreeable) and dysphemistic (disagreeable) terms, influence people’s evaluations of actions while avoiding the reputational consequences typically associated with deception. In this dissertation, I aim to better understand the antecedents and consequences of strategic language across different social and information contexts, discussing the theoretical and applied implications of the results of eight experiments (N = 4,828) within the context of prior work on linguistic manipulation and political polarization. First, I demonstrate that participants’ evaluations of actions are made more favourable by language that replaces a disagreeable term (e.g., torture) with a semantically related agreeable term (e.g., enhanced interrogation) in an act’s description. Notably, providing participants with more knowledge about the actions they evaluated reduced (but did not eliminate) the persuasive influence of a speaker’s linguistic choices, suggesting that the persuasive potential of strategic language is greater when the details of an event are lacking. Even though the strategic use of euphemisms and dysphemisms affected action evaluations, participants judged both agreeable and disagreeable action descriptions as largely truthful and distinct from lies. Similarly, they viewed speakers to whom these descriptions were attributed as considerably more trustworthy and moral than liars. Taken together, the present work suggests that a strategic speaker can, through the careful use of language, shape public perception in a preferred direction while avoiding a majority of the reputational costs associated with less subtle forms of linguistic manipulation (e.g., lying). Second, I investigate the impact of strategic language in the context of political partisanship.
Self-serving language is prevalent in the political realm, as liberals and conservatives are motivated to describe political events in a manner that supports group narratives and favourably presents the actions of co-partisans. Using a subset of liberal-biased (e.g., expand voting rights) and conservative-biased (e.g., reduce election security) terms from the aforementioned experiments, I find that partisans view speakers describing politically contentious events using ideologically-congruent language as more trustworthy, moral, and open-minded than speakers describing these same events in a non-partisan way (e.g., “relax voter ID requirements and expand mail-in voting”). Thus, in politically homogenous social networks, individuals (and organizations) may be incentivized to describe reality using ideologically-biased language. While beneficial to individuals in certain social contexts, the prevalence of partisan language may have negative consequences for society-at-large, exacerbating political polarization and hindering compromise across political divides. Support for this claim was found in the present work: When presented to political out-group members, partisan language produced negative evaluations of opposing partisans, with speakers who used out-group language being viewed as untrustworthy, immoral, and closed-minded. Additionally, presenting Democrats and Republicans with ideologically-congruent descriptions of political events enhanced partisan disagreement and increased the ideological extremity of participants’ action evaluations. Therefore, partisan language, while praised by co-partisans, exacerbated political polarization, damaging trust and amplifying disagreements between Democrats and Republicans.

Item Extremely Partisan Samples Impact Perceptions of Political Group Beliefs (University of Waterloo, 2023-12-05) van der Valk, Alexandra; Fugelsang, Jonathan; Koehler, Derek
Accurately inferring the beliefs of a partisan group (e.g.,
Democrats, Republicans) can be challenging when exposed to extremely partisan beliefs from that group. Across two studies (total N = 566), we tested whether people correct these inferences for sample bias when it was explicitly disclosed. Study 2 further assessed how much of this correction is deliberate. Participants read 12 statements that most members of a political party (Democrats or Republicans) generally agree with. They were shown how strongly five party members agreed with each statement. In the biased sample conditions, these five party members were selected from the top 10% most partisan members; this bias was either disclosed or undisclosed. In the unbiased sample condition, the five members were representatively sampled from the entire party. Then, participants estimated on average how much the entire party agreed with each statement, and the likelihood that party members of the same or opposing parties agreed with each other. Participants’ mean estimates in the biased sample conditions were higher than in the unbiased sample condition but lower than the samples viewed, indicating an (insufficient) attempt to correct for sample bias. Corrections were largest when sample bias was disclosed. Overall accuracy was highest when participants viewed unbiased samples, though across conditions there appeared a general tendency to overestimate the strength of partisan beliefs. Parties were perceived as more homogeneous when participants viewed biased samples, regardless of whether bias was disclosed or not. While awareness of hyperpartisan bias helps correct judgments, it may not eliminate overestimation, overconfidence, or inflated perceptions of party homogeneity.

Item Inducing Feelings of Ignorance Makes People More Receptive to Expert (economist) Opinion (University of Waterloo, 2019-08-19) Meyers, Ethan; Fugelsang, Jonathan; Koehler, Derek
While they usually should, people do not revise their beliefs more in response to expert (economist) opinion than to lay opinion.
The present research sought to better understand the factors that make it more likely for an individual to change their mind when faced with the opinions of expert economists versus the general public. Here, across five studies (N = 2,650), I examined the role that overestimation of one’s knowledge plays in this behavior. I replicated the finding that people fail to privilege the opinion of experts over that of the public on two different (Study 1) and five different (Study 5) economic issues. I then found that undermining an illusion of both topic-relevant (Studies 2 - 4) and topic-irrelevant knowledge (Studies 3 & 4) can lead to greater belief revision in response to expert rather than lay opinion. I suggest one reason that people fail to revise their beliefs more to experts is that people tend to think they know more than they really do.

Item Intuitive Confidence Reflects Speed of Initial Responses in Point Spread Predictions (University of Waterloo, 2017-08-24) Walker, Alexander C.; Fugelsang, Jonathan; Koehler, Derek
Previous research has revealed that intuitive confidence is an important predictor of how people choose between intuitive and non-intuitive alternatives. Two studies were conducted to investigate the determinants of intuitive confidence. Across these studies participants predicted the outcomes of several National Basketball Association games, both with and without reference to a point spread. As predicted, after controlling for the variability associated with point spread magnitude, the faster participants were to predict the outright winner of a game (i.e., generate an intuition), the more likely participants were to predict the favourite against the point spread (i.e., endorse the intuition).
Overall, my findings point to the speed of intuition generation as a determinant of intuitive confidence, and thus a predictor of choice in situations featuring intuitive and non-intuitive alternatives.

Item An Investigation into the Self-deployment of Attentional Reminders (University of Waterloo, 2023-10-24) Leatham, Zion; Smilek, Daniel; Fugelsang, Jonathan
In a series of studies, we sought to determine 1) whether people will self-deploy attentional reminders when asked to complete an attentionally demanding task (Experiments 1 & 2), 2) whether people modulate the number of attentional reminders they select depending on the presence or absence of a continuous distraction (Experiments 1 & 2), and 3) if so, whether reminders improve performance on the attentionally demanding task (Experiments 1, 2 & 3). In Experiments 1 and 2, participants completed an attentionally demanding task (2-back; primary task) either on its own (no distraction condition) or while a distracting video played on the computer screen above the 2-back task stimuli (distraction condition). Critically, participants were given a preview of the 2-back task and the video (if present). After this preview, they were asked to set how many (if any) reminders they wanted to receive during the task. We followed this up in Experiment 3, where we removed the choice component. Specifically, in this study, half of the participants received experimenter-set attentional reminders (every 2 minutes) while the other half did not receive any reminders. Findings from Experiments 1 and 2 indicated that people will opt to select attentional reminders when asked to complete an attentionally demanding task; however, they modulated the reminders irrespective of the presence or absence of a distracting video. In addition, the attentional reminders people set did not influence performance on the 2-back task.
Experiment 3 demonstrated that people who received experimenter-set attentional reminders did not perform significantly better on the 2-back task in the presence of a distracting video. These results suggest that attentional reminders may influence performance; however, their influence might depend on the contingent timing of their deployment.

Item Judgments of effort depend on the temporal proximity to the task (University of Waterloo, 2019-08-26) Ashburner, Michelle Roshan Marie; Risko, Evan; Fugelsang, Jonathan
Cognitive effort is a central construct in our lives, yet our understanding of the processes underlying our perception of effort is limited. Performance is typically used as one way to assess effort in cognitive tasks (e.g., tasks that take longer are generally thought to be more effortful); however, Dunn and Risko (2016) reported a recent case where such “objective” measures of effort were dissociated from judgments of effort (i.e., subjective effort). This dissociation occurred when participants either made their judgments of effort after the task (i.e., reading stimuli composed of rotated words) or without ever performing the task. This leaves open the possibility that if participants made their judgments of effort closer in time to the actual experience of performing the task (e.g., right after a given trial), these judgments might better correspond to putatively “objective” measures of effort. To address this question, we conducted two experiments replicating Dunn and Risko (2016) with additional probes for immediate judgments of effort (i.e., a judgment of effort made right after each trial). Results provided some support for the notion that judgments of effort more closely follow reading times when made immediately after reading.
Implications of the present work for our understanding of judgments of effort are discussed.

Item Not so fast: Individual differences in impulsiveness are only a modest predictor of cognitive reflection (Elsevier, 2020) Littrell, Shane; Fugelsang, Jonathan; Risko, Evan F.
The extent to which a person engages in reflective thinking while problem-solving is often measured using the Cognitive Reflection Test (CRT; Frederick, 2005). Some past research has attributed poorer performance on the CRT to impulsiveness, which is consistent with the close conceptual relation between Type I processing and dispositional impulsiveness (and the putative relation between a tendency to engage in Type I processing and poor performance on the CRT). However, existing research has been mixed on whether such a relation exists. To address this ambiguity, we report two large-sample studies examining the relation between impulsiveness and CRT performance. Unlike previous studies, we use a number of different measures of impulsiveness, as well as measures of cognitive ability and analytic thinking style. Overall, impulsiveness is clearly related to CRT performance at the bivariate level. However, once cognitive ability and analytic thinking style are controlled, these relations become small and, in some cases, non-significant. Thus, dispositional impulsiveness, in and of itself, is not a strong predictor of CRT performance.

Item The Numerical Distance Effect and Math Achievement: Assessing the validity of magnitude comparison paradigms (University of Waterloo, 2016-02-09) Rozario, Jordan; Fugelsang, Jonathan
The numerical distance effect (NDE) is the inverse relationship between response times and the distance between two numbers in numerical magnitude comparison tasks. This robust effect has been obtained using multiple magnitude comparison paradigms (MCPs). In addition, the size of an individual’s NDE has been found to predict mathematical achievement.
The present investigation assessed four MCPs (distance- and ratio-controlled simultaneous comparison, and primed and non-primed comparison-to-a-standard) for internal reliability, convergent validity, and their ability to predict mathematical achievement and numeracy. Results demonstrate that performance on MCPs correlated with math ability; however, only the NDE in the simultaneous comparison task was uniquely related to math achievement and numeracy.

Item On Spoken Confidence: Characteristics of Explicit Metacognition in Reasoning (University of Waterloo, 2025-02-20) Stewart, Kaiden; Fugelsang, Jonathan
In this thesis, I assess how explicit, subjective evaluations of confidence influence monitoring and control (i.e., metacognitive) processes in reasoning. Metacognitive processes play a crucial role in modern dual-process theories of reasoning and decision-making, the consequences of which have been implicated in numerous significant real-world decisional outcomes. It is tacitly assumed that monitoring one’s reasoning for the purpose of optimal deployment of controlled, deliberative processing functions similarly to monitoring one’s reasoning for the purpose of providing a judgment of confidence, despite evidence from other domains indicating otherwise. This thesis takes a critical step toward evaluating metacognitive theories of reasoning and their broader application by assessing the degree to which standard approaches represent realistic accounts of metacognitive processes. To aid in interpretation of the work directly testing this possibility, I first present six experiments addressing foundational issues with respect to the operation of metacognition in reasoning. Chapter 2 provides evidence for a causal relationship between confidence judgments and controlled behavior (specifically deliberation), a relationship often assumed in the absence of direct evidence.
I demonstrate across four experiments that processing manipulations affect confidence and influence control behavior, consistent with a causal relationship, but also that it is possible to target control behaviour without mirroring effects on confidence. Chapter 3 develops a simple predictive model of confidence that identifies heretofore unidentified, item-based predictors of confidence. This simple model allows a unique approach to testing the central question in Chapter 4. Chapter 4 investigates whether the relationship between confidence and controlled behavior partly depends on the requirement to make explicit confidence judgments. Using a paradigm adapted from research involving nonhuman primates, I compare implicit and explicit confidence conditions. Results reveal small differences in controlled behavior and substantial differences in monitoring. In the present thesis, I provide evidence of plausibly systematic influences of common measurement approaches on reasoning. To this effect, it is likely that the reasoning processes in which individuals engage in day-to-day life are reliably different from those commonly assessed in the lab. This has practical as well as theoretical implications, which I discuss.

Item Overconfidently Underthinking: Narcissism negatively predicts Cognitive Reflection (Taylor & Francis, 2020) Littrell, Shane; Fugelsang, Jonathan; Risko, Evan F.
There exists a large body of work examining individual differences in the propensity to engage in reflective thinking processes. However, there is a distinct lack of empirical research examining the role of dispositional factors in these differences, and understanding these associations could provide valuable insight into decision-making. Here we examine whether individual differences in cognitive reflection are related to narcissism (excessive self-focused attention) and impulsiveness (trait-based lack of inhibitory control).
Participants across three studies completed measures of narcissism, impulsiveness, and cognitive reflection. Results indicate that grandiose and vulnerable narcissists differ in their performance on problem-solving tasks (i.e., CRT) and preferences for intuitive thinking, as well as the degree to which they reflect on and understand their own thoughts and enjoy cognitively effortful activities. Additionally, though impulsiveness was significantly related to self-report measures of cognitive reflection (i.e., metacognitive reflection, metacognitive insight, and Need for Cognition), it showed no association with a behavioural measure of cognitive reflection (i.e., CRT scores). Our results suggest that certain individual differences in dispositional and personality characteristics may play important roles in the extent to which individuals engage in certain forms of reflective thinking.

Item The psychology of bullshitting: Measurement, correlates, and outcomes of the propensity to mislead others (University of Waterloo, 2021-07-15) Littrell, Shane; Fugelsang, Jonathan; Risko, Evan
Recent psychological research has identified important individual differences associated with receptivity to bullshit, which has greatly enhanced our understanding of the processes behind susceptibility to pseudo‐profound or otherwise misleading information. However, the bulk of this research attention has focused on cognitive and dispositional factors related to bullshit (the product), while largely overlooking the influences behind bullshitting (the act). Here, I present results from nine studies focusing on: 1) the construction and validation of a new, reliable scale measuring the frequency with which individuals engage in two types of bullshitting (persuasive and evasive) in everyday situations; 2) the associations of both types of bullshitting frequency with other relevant constructs; and 3) the extent to which those who produce bullshit are also receptive to various types of bullshit.
Overall, bullshitting frequency was negatively associated with sincerity, honesty, cognitive ability, open‐minded cognition, and self‐regard. Additionally, the Bullshitting Frequency Scale was found to reliably measure constructs that are (1) distinct from lying and (2) significantly related to performance on overclaiming and social decision tasks. Moreover, the frequency with which individuals engage in persuasive bullshitting (i.e., bullshitting intended to impress or persuade others) was found to positively predict susceptibility to various types of misleading information, and this association was robust to individual differences in cognitive ability and analytic cognitive style. These results represent an important step forward in the study of the spread of misinformation by demonstrating the utility of the Bullshitting Frequency Scale as well as highlighting certain individual differences that may play important roles in the extent to which individuals engage in and are receptive to everyday bullshitting.

Item Speed of Response does not Affect Feelings of Rightness in Reasoning (University of Waterloo, 2019-09-18) Stewart, Kaiden; Risko, Evan; Fugelsang, Jonathan
It has been argued (Thompson, Prowse Turner & Pennycook, 2011) that the experience of ease (i.e., the ability to quickly generate an initial response) during processing influences one’s likelihood of engaging reflectively with a problem. Thompson and colleagues argued that the ease with which an answer comes to mind (i.e., answer fluency) is a critical determinant of Feelings of Rightness (FOR), which, in turn, determine one’s likelihood of reflecting. However, given the nature of the evidence for this claim, the possibility remained that the critical determinant of FORs is the speed of the total response. The critical difference between these two accounts is the contribution of factors occurring after the point where an answer comes to mind.
Across two experiments, we manipulated the duration of the physical response in order to identify whether participants’ confidence (FOR) judgments are at least partially based on factors occurring after the initial mental generation of an answer. We found no evidence that either FORs or reflection is influenced by a manipulation of response execution. Broadly, the present investigation provides evidence that the relation between speed of response and FORs is likely due to the speed with which an answer is generated internally. That is, events occurring after the generation of a response, at least as operationalized here, do not influence FORs. This is consistent with Thompson and colleagues’ (2011) suggestion that answer fluency is the critical variable in determining FORs.

Item What Makes us Think? A Three-Stage Dual-Process Model of Analytic Engagement (University of Waterloo, 2016-07-29) Pennycook, Gordon; Koehler, Derek; Fugelsang, Jonathan
The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes (‘Type 1’ processes are fast, autonomous, intuitive, etc., and ‘Type 2’ processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. I address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified).
I tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. My results support two key predictions derived from the model: 1) conflict detection and decoupling are dissociable sources of Type 2 processing and 2) conflict detection sometimes fails. I argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures.