The Psychology of Political Behavior Studies (PPBS) are an ongoing series of quota-based national cross-sections fielding the same core questions on social and political attitudes, values, voting, and political participation to more than 20,000 Americans. PPBS aims to inform politico-psychological explanations of political behavior. The studies are broad-ranging and involve researchers from a number of American and European universities, including the University of Jena and the University of Cologne (Germany), the University of Washington (US), Université Grenoble Alpes (France), Long Island University (US), Vrije Universiteit Amsterdam and Maastricht University (The Netherlands), the University of Arkansas (US), and the University of Minho (Portugal).
PPBS comprises 12 datasets collected through professional survey companies, for a combined N of 21,107 interviews. These include four nationally representative cross-sections of the American population (N = 7,259) fielded during the last three US elections (the 2016 and 2020 Presidential elections and the 2018 Midterms), together with their replication/confirmatory analogues (N = 11,582) in pre- and post-election studies. Three of the surveys add repeated measures (N = 2,215) across two time points, allowing for both within- and between-person investigations.
PPBS includes a truly interdisciplinary selection of constructs relevant to both fields comprising Political Psychology: Social Psychology and Political Science. From Political Science, PPBS surveys traditional party and ideological identities, feeling thermometers, political participation, voter registration, and political sophistication. From Psychology, PPBS surveys constructs such as several types of System Justification, Social Dominance Orientation, Needs, Collective Narcissism, and Justice Concerns. PPBS also surveys constructs used in both fields, including some shared outright (e.g., social attitudes and prejudice) and some conceptualized and measured differently across fields (e.g., right-wing authoritarianism in Social Psychology and child-rearing attitudes in Political Science). Taken together, this amounts to more than 60 political and psychological constructs, allowing for the testing of competing explanations of the psychology of political behavior.
PPBS is an attempt to unite theoretical and methodological insights from two fairly independent sciences studying the same phenomenon: the psychology of political behavior. It is well established that scholars from the two main fields composing Political Psychology adopt widely different research practices. Political scientists and psychologists often differ in sample types and sizes, constructs and measures, statistical techniques, choice of control variables, and data collection philosophies and methods. PPBS was conceived to bridge the two major approaches to scientific research in Political Psychology, in both substantive and methodological spheres. On the one hand, PPBS takes insights from Political Science by drawing heavily on the Political Behavior measures of gold-standard national election surveys (e.g., ANES, GSS, VOTER). On the other hand, PPBS draws insights from Psychology by surveying full instruments for a rich variety of psychological constructs thought to underlie and motivate political behavior. The methodological dimension of PPBS sought to combine Political Science's literature on survey methodology with Psychology's psychometrics and measurement. Finally, PPBS sought to incorporate the lessons of the social sciences' "credibility revolutions" (for a review in Political Science, see Wuttke, 2019; for Psychology, see Vazire, 2018) by implementing demographic representation, high N, and an exploratory and replication/confirmatory design in which the same survey instrument is fielded in two separate groups from the same source, increasing robustness and guarding against false positives. Taken together, these characteristics reflect a desire for improved substantive and methodological practices in Political Psychology research.
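The exploratory and replication/confirmatory design can be illustrated as a random partition of a respondent pool from a single source into two groups that receive the same instrument. A minimal sketch, with purely illustrative numbers (this is not PPBS's actual fielding code):

```python
import numpy as np

def split_exploratory_confirmatory(n_respondents, seed=2016):
    """Randomly partition respondent indices into two equal halves:
    one analyzed exploratorily, one held out for confirmatory replication."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(n_respondents)
    half = n_respondents // 2
    return shuffled[:half], shuffled[half:]

# Example: a pool of 2,000 interviews from the same source.
exploratory, confirmatory = split_exploratory_confirmatory(2000)
```

Findings from the exploratory half can then be pre-registered and re-tested on the confirmatory half, which is what protects against false positives.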
Specifically, seven substantive and methodological motivations underlie the creation of PPBS: theory construction; the inter-relations of politico-psychological constructs; sample type and size; the ad-hoc selection of politico-psychological constructs; the reproducibility of findings in political psychology; effect-size benchmarking; and the promotion of a cumulative approach to the psychology of political behavior. These are fleshed out in more detail below. Please note this is a work in progress.
Theory construction. While Political Science and Psychology are broad disciplines, and one should not generalize across their sub-domains, it is fairly accepted that theories about political behavior, and its psychology, lack integration. Several authors across the social sciences have linked this lack of theory integration to the reproducibility crisis (Borsboom, 2013; Klein, 2014; Muthukrishna & Henrich, 2019; Oberauer & Lewandowsky, 2019; Smaldino, 2017). Mischel (2008), speaking about psychology and its sub-disciplines, observed that psychologists tend to treat other scholars' theories like toothbrushes: no self-respecting person wants to use anyone else's. Known as the "toothbrush problem", this suggests that social scientists interested in understanding political behavior may be nurturing practices that are detrimental to the quality of the science produced. These constructs are hardly ever compared and contrasted, a major obstacle to the ideal of an integrative view of the social sciences. For example, while RWA and SDO are usually surveyed in tandem, System Justification, MFQ, Values, Empathy, Collective Narcissism, Justice Concerns, and Emotions (to name a few) tend to be surveyed, and empirically tested, within their own theoretical frameworks. Because theory depends on the iterative movement between theory and data (Borsboom et al., 2020), and because theories based on social constructs can only be understood in relation to their inter-relationships (i.e., their nomological network), the lack of data allowing existing theories to be compared and contrasted via the inter-relations of politico-psychological constructs precludes the building of richer, more integrative theories of the psychology of political behavior.
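Empirically, mapping a nomological network begins with the inter-correlations of construct scores measured on the same respondents. A minimal sketch on simulated (not PPBS) data, with construct names used purely as illustrative labels:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated construct scores (stand-ins only; real scores would come from
# full instruments such as RWA, SDO, or System Justification scales).
rwa = rng.normal(size=n)
sdo = 0.4 * rwa + rng.normal(size=n)        # correlated by construction
sys_just = 0.3 * sdo + rng.normal(size=n)

scores = np.column_stack([rwa, sdo, sys_just])
labels = ["RWA", "SDO", "SystemJustification"]

# Pairwise correlation matrix: the empirical backbone of a nomological network.
corr = np.corrcoef(scores, rowvar=False)
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"r({labels[i]}, {labels[j]}) = {corr[i, j]:+.2f}")
```

With many constructs surveyed on the same sample, as in PPBS, the same matrix lets competing theories be compared and contrasted rather than tested in isolation.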
Sample type and size. Political scientists and psychologists differ substantially in the sample types and sizes of their typical studies. Social psychologists have historically relied on student and snowball convenience samples. The lack of representation of several population strata constrains the extent to which findings generalize to the population, limiting external validity. Furthermore, (social) psychological studies tend to be, on average, severely underpowered (for a review, see Crutzen & Peters, 2017). This is particularly troublesome because the median effect size studied in Psychology is about r = 0.1, and there is a wide discrepancy between effect sizes reported with and without pre-registration (Schäfer & Schwarz, 2019). For these reasons, it is important to test (social) psychological hypotheses with large, representative samples (and, costs permitting, random samples too).
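To make the stakes concrete, a standard Fisher z approximation puts the sample needed to detect the median effect of r = 0.1 (two-tailed α = .05, 80% power) at roughly 780 respondents, far beyond typical convenience samples. A back-of-the-envelope sketch:

```python
import math

def n_for_correlation(r, z_alpha=1.959964, z_beta=0.841621):
    """Approximate N needed to detect a correlation r via the Fisher z
    transformation (defaults: two-tailed alpha = .05, power = .80)."""
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

print(n_for_correlation(0.1))  # ~783 respondents for the median effect
print(n_for_correlation(0.3))  # far fewer for a larger effect
```

The z quantiles are hard-coded here to keep the sketch dependency-free; a full power analysis would use a dedicated tool such as G*Power.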
Ad-hoc selection of politico-psychological constructs. Political scientists typically use items already present in national election studies and cross-national surveys as proxies for their constructs. This is a known questionable measurement practice (Fried & Flake, 2018), and it can bias results for at least two reasons. First, with proxies, short scales, and on-the-fly scales, the quality of measurement suffers. Second, because the choice of questions/items is unconstrained, research based on this approach leads to a garden of forking paths: as substantiated by recent reproducibility projects in both psychology and political science, researchers, often unknowingly, produce false positives/negatives that are highly contingent on their analytic choices.
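The forking-paths problem can be made visible with a small "multiverse" sketch: estimate the same relationship under every defensible combination of proxy item and control set, and inspect how much the estimate moves. Everything below is simulated and purely illustrative:

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)
n = 800

# Simulated respondents: a latent construct, two imperfect proxy items,
# an outcome, and a covariate that a researcher may or may not control for.
latent = rng.normal(size=n)
proxy_a = latent + rng.normal(scale=1.0, size=n)   # noisier proxy
proxy_b = latent + rng.normal(scale=0.5, size=n)   # cleaner proxy
covariate = rng.normal(size=n)
outcome = 0.3 * latent + 0.2 * covariate + rng.normal(size=n)

def estimate(x, y, controls):
    """OLS slope of y on x (plus optional controls) via least squares."""
    design = np.column_stack([np.ones(n), x] + controls)
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

# The 'garden': 2 proxy choices x 2 control choices = 4 possible analyses.
results = {}
for (name, proxy), ctrl in itertools.product(
        [("proxy_a", proxy_a), ("proxy_b", proxy_b)],
        [[], [covariate]]):
    spec = f"{name}, {'with' if ctrl else 'no'} controls"
    results[spec] = estimate(proxy, outcome, ctrl)

for spec, slope in results.items():
    print(f"{spec}: slope = {slope:+.3f}")
```

Because the noisier proxy attenuates the slope, the four specifications yield visibly different estimates of the same underlying effect, which is exactly the contingency on analytic choices described above.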
Work in progress.
Borsboom, D. (2013). Theoretical amnesia. Open Science Collaboration Blog (November 20).
Borsboom, D., van der Maas, H., Dalege, J., Kievit, R., & Haig, B. (2020). Theory Construction Methodology: A practical framework for theory formation in psychology.
Crutzen, R., & Peters, G. J. Y. (2017). Targeting next generations to change the common practice of underpowered research. Frontiers in Psychology, 8, 1184.
Fried, E. I., & Flake, J. K. (2018). Measurement matters. APS Observer, 31(3).
Haste, H. (2012). Where do we go from here in political psychology? An introduction by special issue editor. Political Psychology, 33(1), 1-9.
Huddy, L., Sears, D. O., & Levy, J. S. (2013). Introduction: Theoretical foundations of political psychology.
Klein, S. B. (2014). What can recent replication failures tell us about the theoretical commitments of psychology?. Theory & Psychology, 24, 326-338.
King, G. (1995). Replication, replication. PS: Political Science and Politics, 28(3), 444-452.
Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9(2), 147.
Muthukrishna, M., & Henrich, J. (2019). A problem in theory. Nature Human Behaviour, 3, 221-229.
Mischel, W. (2008). The toothbrush problem. APS Observer, 21, 11. Retrieved from https://www.psychologicalscience.org/observer/the-toothbrush-problem
Oberauer, K., & Lewandowsky, S. (2019). Addressing the theory crisis in psychology. Psychonomic Bulletin & Review, 26, 1596–1618.
Rosenberg, S. W. (2003). Theorizing political psychology: Doing integrative social science under the condition of postmodernity. Journal for the Theory of Social Behaviour, 33(4), 427-459.
Schäfer, T., & Schwarz, M. A. (2019). The meaningfulness of effect sizes in psychological research: Differences between sub-disciplines and the impact of potential biases. Frontiers in Psychology, 10, 813.
Smaldino, P. E. (2017). Models are stupid, and we need more of them. In R. R. Vallacher, A. Nowak, & S. J. Read (Eds.), Computational social psychology (pp. 311-331). Psychology Press.
Vazire, S. (2018). Implications of the Credibility Revolution for Productivity, Creativity, and Progress. Perspectives on Psychological Science, 13(4), 411-417. https://doi.org/10.1177/1745691617751884
Wuttke, A. (2019). Why too many political science findings cannot be trusted and what we can do about it: A review of meta-scientific research and a call for academic reform. Politische Vierteljahresschrift, 60(1), 1-19.