People overestimate the size of every minority group
And underestimate every majority group too
Now, you can quibble that they used poor estimates of immigrant% that only counted first generation, or those with only foreign citizenship, or in some other way underestimated the group you consider immigrants (non-ethnic majority members in a typical ethnostate). However, this kind of underestimation is not really about politics or media representation. It is about uncertainty.
Guay, B., Marghetis, T., Wong, C., & Landy, D. (2025). Quirks of cognition explain why we dramatically overestimate the size of minority groups. Proceedings of the National Academy of Sciences, 122(14), e2413064122. https://www.pnas.org/doi/10.1073/pnas.2413064122
Americans dramatically overestimate the size of African American, Latino, Muslim, Asian, Jewish, immigrant, and LGBTQ populations, leading to concerns about downstream racial attitudes and policy preferences. Such errors are common whenever the public is asked to estimate proportions relevant to political issues, from refugee crises and polarization to climate change and COVID-19. Researchers across the social sciences interpret these errors as evidence of widespread misinformation that is topic-specific and potentially harmful. Here, we show that researchers and journalists have misinterpreted the origins and meaning of these misestimates by overlooking systematic distortions introduced by the domain-general psychological processes involved in estimating proportions under uncertainty. In general, people systematically rescale estimates of proportions toward more central prior expectations, resulting in the consistent overestimation of smaller groups and underestimation of larger groups. We formalize this process and show that it explains much of the systematic error in estimates of demographic groups (N=100,170 estimates from 22 countries). This domain-general account far outperforms longstanding group-specific explanations (e.g., biases toward specific groups). We find, moreover, that people make the same errors when estimating the size of racial, nonracial, and entirely nonpolitical groups, such as the proportion of Americans who have a valid passport or own a washing machine. Our results call for researchers, journalists, and pundits alike to reconsider how to interpret misperceptions about the demographic structure of society.
These kinds of estimation tasks concern proportions or percentages, i.e., fixed-sum (compositional) data. People's estimates often do not even sum to 100%, as you would expect (or demand!). In fact, the phenomenon has nothing to do with sociology at all and occurs in trivial scenarios as well:
For these non-political tasks, the estimates do not deviate much from the calibration line (slope = 1, intercept = 0), but they trace an S-shaped function, which you might call a 90-degree rotated logistic function. This has a ready mathematical interpretation:
Given perfect accuracy, the estimates would fall exactly on the line of calibration. Deviation from perfect accuracy gives you the 'Dunning-Kruger' pattern on a linear scale, and the S curve on a log-odds scale. Obviously, most subjects in research studies do not sit around memorizing census tables just in case their professor or a paid online survey asks them to estimate this or that population subset. As such, people have to guess. Mostly, they have some partial information to go on, which they combine with a Bayesian prior of 50%, the best guess when you know nothing at all. As a result, any estimate made under uncertainty is moved somewhat toward the 50% prior, that is, toward the middle. If this shrinkage happens mentally on the log-odds scale, the S shape results.
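This shrinkage is easy to sketch. The following is a minimal illustration (my own toy model, not the authors' code), assuming the estimate is formed by multiplying the true log-odds by a mixing weight between 0 and 1, which pulls it toward the prior of 50% (log-odds 0):

```python
import math

def logit(p):
    """Log-odds of a proportion p in (0, 1)."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Inverse logit (logistic function)."""
    return 1 / (1 + math.exp(-x))

def hedged_estimate(true_p, weight=0.6):
    """Shrink the log-odds of the true proportion toward the
    50% prior (log-odds = 0) by a mixing weight in [0, 1].
    weight = 1 reproduces the truth; weight = 0 always answers 50%."""
    return inv_logit(weight * logit(true_p))

for p in [0.01, 0.10, 0.30, 0.50, 0.70, 0.90, 0.99]:
    print(f"true {p:4.0%} -> estimated {hedged_estimate(p):5.1%}")
```

Running this, a 1% group gets estimated at roughly 6% while a 99% group gets pulled down toward 94%: small groups are overestimated, large groups underestimated, and 50% is left untouched, which is exactly the S-shaped calibration curve.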
They then took every study they could find that supposedly showed some group was biased for political reasons, and showed that the empirical results overall match the S shape very well:
As can be seen, most results were very close to the predicted line. There are no error bars, so we can't easily judge the outlier, but since it concerns geography, it is probably not due to political influence.
Finally, they tested the usual theories of social contact (more accurate if more contact) and threat (more overestimation if more threatening), but found that their contributions to the predictions were marginal:
Misperceptions of racial and non-racial groups followed the same inverted s-shaped pattern of systematic error characteristic of proportion estimation in non-demographic domains. Moreover, we found that errors in estimates of hot-topic groups (e.g., undocumented immigrants, gay Democrats) looked no different from errors in estimates of mundane demographic groups (e.g., Apple product owners, passport holders). In contrast, we found almost no empirical support for theories of perceived threat and social contact.
These results also generalize to any other proportional estimation task (which is basically beta/Dirichlet regression):
Our findings also have implications for how misperceptions about non-demographic quantities are interpreted. Social scientists are often interested in people’s perceptions of quantities relating to the economy, such as the proportion of government spending dedicated to welfare, the unemployment rate, and inflation (Conover et al., 1986; Holbrook and Garand, 1996; Kuklinski et al., 2000). For instance, past studies have documented errors in the public’s perception of the human and financial cost of armed conflict (Berinsky, 2007), the likelihood of contracting Covid-19 (McColl et al., 2021; Schlager and Whillans, 2022), and the proportion of the federal budget spent on foreign aid (Gilens, 2001; Scotto et al., 2017). More recent work has documented the alarming pattern of elected representatives and citizens mis-estimating public opinion, such as support for climate legislation, gun control, and abortion policy (Walgrave et al., 2023; Sparkman et al., 2022; Broockman and Skovron, 2018; Pasek et al., 2022), as well as others’ beliefs more generally (Bursztyn and Yang, 2022; Lees and Cikara, 2020). Together, these findings have been interpreted as worrying evidence of bias or ignorance among elites and the voting public. But seen in the light of the current study, these errors may say less about topic-specific bias or ignorance and more about the psychology of numerical estimation in general. In explaining errors in such estimates, topic-general psychological processes such as hedging under uncertainty should be accounted for before invoking topic-specific bias or ignorance.
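One practical upshot of treating estimates as shrunken log-odds is that the amount of hedging is itself recoverable from survey data. Here is a toy simulation (my own sketch, with an assumed hedging weight of 0.6 and Gaussian noise on the log-odds) showing that a simple least-squares slope through the origin recovers the weight:

```python
import math
import random

def logit(p):
    """Log-odds of a proportion p in (0, 1)."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Inverse logit (logistic function)."""
    return 1 / (1 + math.exp(-x))

random.seed(0)
TRUE_WEIGHT = 0.6  # assumed hedging weight for the simulation
true_props = [0.02, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 0.95]

# Simulate 200 survey respondents per group: each shrinks the true
# log-odds toward 0 (the 50% prior) and adds individual noise.
truths, estimates = [], []
for p in true_props:
    for _ in range(200):
        truths.append(p)
        estimates.append(inv_logit(TRUE_WEIGHT * logit(p)
                                   + random.gauss(0, 0.2)))

# Recover the hedging weight: least-squares slope through the origin
# of estimated log-odds regressed on true log-odds.
x = [logit(p) for p in truths]
y = [logit(e) for e in estimates]
weight_hat = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
print(f"recovered hedging weight: {weight_hat:.2f}")
```

A recovered slope well below 1 is the statistical signature of hedging toward the prior, without any appeal to group-specific bias.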
You can see how social scientists using unrepresentative target groups have been able to build a literature of findings that supposedly show that people are very inaccurate at estimating the population proportions of small left-wing-coded groups (typically immigrants, gays, trans people, the handicapped), and that this is because of [Russian bots/bad education/bad people on X]. They did not look for a broader, more neutral explanation of the facts. This misguided approach is also why their interventions didn't work:
These findings also raise questions for the growing body of research that attempts to change attitudes (e.g., toward immigration policy) by correcting numeric misperceptions (e.g., of the size of the current immigrant population). The assumption behind these attempts is that the negative attitudes are caused, in part, by the misperceptions revealed in estimation errors. However, a recurring pattern across studies is that offering correct information often succeeds in reducing errors in explicit estimates but fails to change downstream attitudes (Kuklinski et al., 2000; Lawrence and Sides, 2014; Hopkins et al., 2019; Thorson and Abdelaaty, 2023). For instance, providing correct information about the size of the immigrant population leads to substantially improved estimates of the size of the immigrant population, but almost no change in attitudes toward immigration policy (Hopkins et al., 2019). The current study offers a potential explanation: interventions that present people with corrected values (e.g., demographic proportions) may change the way people report their perceptions as explicit proportions on surveys—for instance, by reducing the amount of hedging under uncertainty—without changing other underlying beliefs and attitudes. For example, in a study of misestimates of home energy use by the public, Marghetis et al. (2019) reported that an information-based intervention massively reduced estimation errors but had only negligible impacts on downstream decisions about energy use.
This is not to say that estimation errors cannot differ systematically by some third variable (which might be political ideology), which is why researchers have been able to find that Republicans (or whoever) overestimate immigrant% more than others do. But these will be rather marginal effects, so small they mostly couldn't even be detected in this study.
This study is a reminder that one should look for general and neutral explanations before jumping to politically motivated theories of the outgroup=bad type.
Interesting!