2014 CCN/CSBS Workshop: Social Perception

Organizers: Jon Freeman, Brad Duchaine, M. Ida Gobbini
Co-sponsored by the CCN and the Center for Social Brain Sciences

Dates: July 31 and August 1, 2014

Location: The Hanover Inn, Hanover, NH

WORKSHOP VIDEOS

Speakers: 

Reg Adams, Penn State

Ambiguity and the Temporal Dynamics of Threat-Related Attention
Reginald B. Adams, Jr., The Pennsylvania State University, USA
Kestutis Kveraga, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, USA

Abstract:
In this talk, we will discuss research examining the intersectional impact of compound facial cues on attention. Early on, using fMRI, we found greater amygdala responsivity to ambiguous (e.g., direct gaze/male fear) versus clear (e.g., averted gaze/female fear) combinations of threat cues. This work helped to resolve a long-standing puzzle in the literature as to why amygdala activation was consistently found in response to fear displays, yet not to anger displays, when anger (at least when coupled with direct gaze) is arguably a clearer signal of threat. We have since also found the opposite pattern of results, with greater amygdala activation to clear- versus ambiguous-threat cues. In an effort to address this apparent discrepancy, we examined whether different adaptive attunements across the temporal stream moderate these effects. First, using a dot-probe paradigm, we found greater attentional orienting to rapid presentations of clear combinations of threat cues, and greater sustained attention to ambiguous threat-cue combinations. Paralleling these effects, using fMRI, we also found greater amygdala responses to clear-threat cues when rapidly presented (33 ms and 300 ms), and to ambiguous-threat cues when presented for more sustained durations (1 s, 1.5 s, 2 s). More recently, using MEG, we have begun examining the neurodynamics of threat perception as it unfolds, implicating magnocellular “action-related” vision in the processing of congruous threat cues, and parvocellular “analysis-related” vision in the processing of incongruous cues. We discuss these results within an adaptive dual-process framework that favors quick and efficient attentional orienting toward threat-congruent information and later attentional maintenance required to process threat-ambiguous information. We will conclude by discussing the implications of this work for the growing field of social vision, and specifically for our understanding of the functional nature of compound social cue integration and how cross-cue/channel/modality interactions are likely to impact social and emotion perception.

Pascal Belin, University of Glasgow

A Vocal Brain: Cerebral Processing of Voice Information

Abstract:
The human voice carries speech but also a wealth of socially relevant, speaker-related information. Listeners routinely perceive information about the speaker’s identity (gender, age) and affective state (happy, scared), as well as more subtle cues to perceived personality traits (attractiveness, dominance, etc.), all of which strongly influence social interactions. Using voice psychoacoustics and neuroimaging techniques, we examine the cerebral processing of person-related information in perceptual and neural voice representations. Results indicate a cerebral architecture of voice cognition sharing many similarities with the cerebral organization of face processing, with the main types of information in voices (identity, affect, speech) processed in interacting, but partly dissociable, functional pathways.

Michael Graziano, Princeton University

Consciousness and the Social Brain

Abstract:
What is consciousness and how can a brain, a mere collection of neurons, create it? In my lab we are developing a theoretical and experimental approach to these questions. In our proposal, the “attention schema” theory, awareness is the brain’s sometimes inaccurate representation of attentional state. In this proposal, the relationship between awareness and attention is similar to the relationship between the body schema and the body. The body schema is a model constructed by the brain that roughly, and sometimes inaccurately, describes the state of the body. Just so, we propose that the brain constructs an approximate, but more-or-less useful model of its own process of attention. The quirky, physically incoherent properties that humans typically attribute to awareness are a product of the inaccuracies in that model. In essence, the model describes a state of knowing without providing any information about the physical, mechanistic basis for it. When higher cognitive machinery accesses that internal model, it concludes that it has a non-physical, subjective awareness of things, because that is what its internal information tells it. We also propose that the brain uses a similar process to construct models of other people’s states of attention, in effect attributing awareness to others. The attention schema theory provides a systematic and testable theory of consciousness, tracing the evolution of awareness through steps from the advent of selective signal enhancement about half a billion years ago, to the top-down control of attention, to an internal model of attention — which allows a brain, for the first time, to attribute to itself that it has a mind that is aware of something — to the ability to attribute awareness to other beings, and from there to the human attribution of a rich spirit world surrounding us. Humans have been known to attribute awareness to plants, rocks, rivers, empty space, and the universe as a whole, as a central part of our cultural behavior. Deities, ghosts, souls — the spirit world swirling around us is arguably the exuberant modeling of attentional states.

Kerri Johnson, UCLA

Social Categorizations as Decisions Made Under Uncertainty

Abstract:
Social categorization — the tendency to perceive others in terms of their social category memberships — has well-known impacts on spontaneously formed impressions, expectations, and evaluations of others. Such categorizations occur through the dynamic integration of visual cues in the faces and bodies of the people whom we observe. The outcome of these mechanisms, however, varies dramatically between dimensions of social categorization, revealing notably divergent decision biases across categories. Some social categorizations (e.g., male, black) are biased to be “conservative and quick,” insofar as they favor a minority percept and occur rapidly; other categorizations (e.g., gay, religious minority), in contrast, are biased to be “cautious and contemplative,” insofar as they eschew a minority categorization and occur more deliberatively. In this talk, I characterize social categorizations as heuristic decisions that are made under varying degrees of uncertainty. As such, social categorizations constitute decisions that are corrected for perceived utility. Importantly, some utility concerns have self-relevant implications, but others have considerably more other-relevant implications, each of which is likely to bias social categorizations in a predictable manner. I therefore argue that a decision-making framework informs how motivated utility concerns bias social categorizations, guide social reasoning, and influence downstream evaluations.

Neil Macrae, University of Aberdeen

How Do I See Me? The Power of Imaginary Experiences

Abstract:
A fundamental capacity of the human mind is the ability to transcend the here-and-now (i.e., mental time travel), enabling us to visit distant times and far-away places and to see ourselves from different visual perspectives. Escaping present reality can serve an important function. By drafting imaginary experiences and previewing their potential consequences, we can determine what needs to be done (or indeed not done) to achieve our desired objectives. In the current talk, I will outline how visual perspective shapes both the course and consequences of mental simulation. Behavioural and brain imaging data will be presented, and the goal of the presentation will be to delineate when (and how) simulated experiences impact core aspects of social-cognitive functioning.

Jason Mitchell, Harvard University

Neural Origins of Prosocial Behavior

Olivier Pascalis, Université Pierre Mendès-France

On the Linkage between Face Processing, Language Processing, and Narrowing during Development

Abstract:
Social life requires relationships with other group members, acknowledgment of their status, and communication between individuals. In humans, faces and language are essential for communication. Faces provide an early channel of communication prior to the comprehension of gestural or oral language, but face processing seems to be facilitated by voice processing, even at an early age. I will argue that what drives or motivates the development of both face and language processing is the urge to communicate. I will first review our knowledge of the development of face processing during infancy and childhood. I will then draw a parallel with language development and will try to convince you that narrowing is a mechanism by which infants adapt to their native social group, one that shapes the face processing system.

Stefan Schweinberger, Friedrich Schiller University of Jena

Event-related Brain Potential Correlates of Face Recognition

Abstract:
Face recognition can be conceived as a complex facility, requiring the orchestrated activity of multiple neuro-cognitive subroutines (cf. contributions in Person Perception 25 Years after Bruce and Young (1986), Special Issue, British Journal of Psychology, 2011, 102(4), 695-974). Prior electrophysiological research has seen a strong focus on the N170, at the expense of other face-sensitive ERP components, including those that are now known to relate more specifically to individual face recognition. I will discuss current evidence for multiple face-sensitive ERP components suggesting that (a) the N170 is related to face detection and structural encoding, but not recognition; (b) the occipitotemporal P200 is sensitive to second-order spatial configuration and may index processes related to unfamiliar face learning and population expertise; (c) the posterior temporal N250(r) is sensitive to face familiarity and relates to individual face recognition; and (d) a centroparietal N400 systematically relates to domain-independent access to semantic information about people. Because other aspects of face perception (e.g., age, attractiveness, eye gaze, emotional expression, trustworthiness, etc.) also depend on multiple neuro-cognitive subroutines, further progress in electrophysiological research necessitates appreciating a range of ERP components that relate to different functional components of face perception. Time permitting, I will conclude with a discussion of the broader perspective of person perception.

Patrik Vuilleumier, Université de Genève

Face Recognition: From Visual to Social Processing

Abstract:
Although it is well known that face perception involves a highly specific distributed brain network, much remains unresolved concerning the roles of different brain areas within this network in the processing of faces and face-related information such as emotion expressions. Moreover, parts of this network overlap with systems engaged by affective and social signals conveyed by non-facial stimuli. This presentation will review recent work from our group using fMRI and DTI to investigate the function and structural interconnections of visual and limbic areas implicated in processing faces, facial expressions, and other facial features. It will also illustrate new approaches based on multivoxel pattern analysis of brain activations, allowing us to decode distinct information contents from areas activated during fMRI. The latter approach suggests that different kinds of facial information are represented in different cortical regions within the temporal and frontal lobes that respond to facial expressions, with some of these areas holding higher-level supramodal representations of emotions and mental states.

Leslie Zebrowitz, Brandeis University

Trait Impressions from Faces and Their Accuracy: Origins, Rater Age, and Face Age

Abstract:
Research has documented surprising agreement and accuracy in younger adults’ trait impressions from faces (e.g., Carré, McCormick, & Mondloch, 2009; Zebrowitz, Hall, Murphy, & Rhodes, 2002; Zebrowitz & Rhodes, 2004). Although the accuracy effect sizes have been modest, the limited amount of information that still photographs provide to perceivers makes this a situation in which “small effects are impressive” (Prentice & Miller, 1992). I will discuss: 1) theoretical explanations for first impressions from faces and their accuracy; 2) reasons to expect impressions and accuracy to vary with rater age and face age; 3) reasons to expect level of attractiveness to moderate the accuracy of impressions; 4) research investigating trait impressions from faces and their accuracy as a function of rater age, face age, and level of attractiveness; and 5) facial cues that contribute to accurate first impressions.