A service of the CrimRxiv Consortium

Part 2

What criminology already knows

Nine clusters of evidence that platform teams routinely re-derive without knowing the literature already exists. Cite these instead of reinventing the wheel.

A surprising amount of what Trust & Safety teams need to know has already been studied — by criminologists, victimologists, and adjacent fields — over the last three decades. The literature is uneven, mostly behind paywalls, and rarely translated into operational terms. This part is a guided tour of the nine clusters that recur most often in platform work, with the studies a T&S researcher should be able to cite from memory by the end of their first year.

The bibliography contains the full annotated set, filterable by method, harm domain, and difficulty. What follows is the map.

Most of these studies were not written with platform operators in mind. That is precisely why they are useful: the findings are independent of any platform's reporting incentives.
— A note on this section

1. Online harassment and hate speech

Empirical work on who gets harassed, what predicts victimization, and what interventions reduce incidence. The literature applies and extends routine activity theory and lifestyle-exposure theory to online environments. Two findings recur. First, online deviance — engaging in risky or boundary-testing conduct — predicts cyberstalking victimization more powerfully than simple internet use frequency (Reyns, Henson, & Fisher, 2011). Second, counter-speech is not protective: users who actively confronted hate content were at higher risk of subsequent retaliation (Costello, Hawdon, & Hughes, 2017). For T&S product teams, this complicates the design of bystander-reporting flows and undermines naive "more counter-speech" interventions.

2. Online fraud and romance scams

A clinical-psychology-adjacent literature on the psychological dynamics of romance scam victimization. Whitty's (2018) population survey of ~11,800 respondents found that high-education, sensation-seeking, middle-aged women were at highest risk; loneliness alone was not predictive. Buchanan & Whitty (2014) showed that romantic idealization was a stronger predictor than neuroticism. The operational implication is that friction at the point of escalating commitment — not at first contact — is the higher-leverage intervention. Cross, Smith, & Richards (2014) document profound underreporting of fraud to police, which means platform fraud queues are a tiny fraction of actual victimization — the dark figure shows up at scale.

3. CSAM and online child sexual exploitation

A legally sensitive but empirically growing field. Wolak, Finkelhor, Mitchell, & Ybarra's (2008) "Online Predators and Their Victims" corrected a generation of media narratives: most online-facilitated exploitation of minors does not involve deception about age or intent. Victims are often adolescents who knowingly communicate with adults who express interest. Platform architecture that assumes grooming looks like stranger danger will miss the modal case. On the production side, Henshaw, Ogloff, & Clough's (2025) Australian study finds distinct producer sub-types with different pathways and risk levels — directly relevant to T&S triage and escalation. The Internet Watch Foundation annual reports are the field's authoritative trend data.

4. Extremism and online radicalization

The empirical picture is more nuanced than public narratives suggest. Chen et al. (2023) in Science Advances, using matched survey and web-browsing data, found that the YouTube algorithm rarely recommended extremist content to mainstream users — but that users who sought out such content already espoused extremist beliefs. The "pipeline" story is partly displaced by an "infrastructure-and-affinity" story: extremist communities are organizing infrastructure for the already-radicalized more than radicalization engines for the mainstream. The GWU Program on Extremism (2023) argues current extremism is platform-agnostic — mainstream platforms for recruitment, fringe platforms for operational planning — so cross-platform coordination, not single-platform enforcement, is the lever.

5. Technology-facilitated intimate partner violence (TFA / IPV)

A rapidly growing literature documenting how abusers weaponize platform features — tracking apps, shared accounts, NCII threats, smart-home surveillance — as instruments of coercive control. Dragiewicz et al.'s (2022) systematic review establishes that TFA sits on a continuum with in-person IPV, not as a separate category. Freed et al. (2018, IMWUT) document the specific failure modes: account recovery flows designed to protect against third-party attackers fail when the threat actor is the credentialed intimate partner. Buller et al. (2022) show TFA co-occurs with physical violence at high rates — meaning a harassment report that involves account compromise may be a signal of co-occurring physical danger, not just an online problem.

6. Non-consensual intimate imagery (NCII) and doxxing

Empirical work is growing fast, driven partly by the emergence of AI-generated NCII. McGlynn et al. (2024), working with the UK Revenge Porn Helpline, found that Meta-owned platforms facilitated 78% of social-media NCII distribution cases — a finding with enforcement-strategy implications. Henry, Powell, & Flynn's (2017) Australian survey shows image-based abuse extends far beyond the "scorned ex" stereotype; perpetrators include current partners, strangers, and misogyny-motivated groups. The Eaton, Jacobs, & Ruvalcaba (2017) Cyber Civil Rights Initiative survey established that 1 in 8 social-media users have been NCII victims — the empirical grounding for the NCII hash-matching infrastructure now operated by StopNCII.org.
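The hash-matching infrastructure mentioned above rests on perceptual hashing: an image is reduced to a compact fingerprint that survives re-encoding and minor edits, and candidate matches are found by bit distance rather than exact equality. Production systems use robust hashes such as PDQ or PhotoDNA; the toy "average hash" below is only an illustrative sketch of the pipeline's shape, with made-up pixel data.

```python
# Illustrative sketch of perceptual hash matching, the technique behind
# NCII hash-sharing systems such as StopNCII.org. Real deployments use
# robust hashes (e.g. PDQ, PhotoDNA); this toy "average hash" only shows
# the pattern: hash once, then compare fingerprints by Hamming distance.

def average_hash(pixels: list[list[int]]) -> int:
    """Hash a grayscale image (2D list of 0-255 values) into a bit
    string: each bit is 1 if that pixel exceeds the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A reported image matches a hash-list entry if it falls within a small
# distance threshold, which tolerates recompression and minor edits.
original = [[200, 200, 10, 10], [200, 200, 10, 10]]
recompressed = [[198, 201, 12, 9], [199, 202, 11, 10]]  # near-duplicate
unrelated = [[10, 200, 10, 200], [200, 10, 200, 10]]

h0, h1, h2 = (average_hash(img) for img in (original, recompressed, unrelated))
assert hamming(h0, h1) <= 2   # near-duplicate stays within threshold
assert hamming(h0, h2) > 2    # unrelated image does not match
```

The operational point for T&S teams is that the victim never has to share the image itself, only the fingerprint, which is why hash-sharing consortia can work across platforms.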

7. Darknet drug markets and illicit platforms

A criminologically rich literature on cryptomarkets that maps onto T&S concerns about marketplace integrity, trust mechanisms on illicit platforms, and displacement after enforcement. The Europol/EMCDDA (2017) report documents resilience: when AlphaBay and Hansa were seized, activity migrated rapidly to alternative markets — the same whack-a-mole dynamic that T&S teams see in ban-evasion. Martin (2014) and Aldridge & Décary-Hétu (2016) document the internal social control of illicit platforms (reputation systems, escrow, dispute resolution) — bad actors build governance infrastructure too, and understanding it sharpens detection.
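The governance mechanisms this literature describes are concrete enough to sketch. The fragment below is a minimal, hypothetical model of the escrow-plus-reputation pattern Martin (2014) and Aldridge & Décary-Hétu (2016) document; all names and numbers are invented for illustration, not drawn from any real market.

```python
# Toy model of cryptomarket social control: the market holds funds in
# escrow until delivery is confirmed, and every resolution feeds the
# vendor's public reputation. Illustrative only; not any real system.

from dataclasses import dataclass, field

@dataclass
class Vendor:
    name: str
    ratings: list = field(default_factory=list)

    @property
    def score(self) -> float:
        """Mean feedback score; the trust signal buyers actually see."""
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

class Escrow:
    """Funds stay 'held' until the buyer confirms delivery or a dispute
    is resolved -- the market, not either party, controls release."""

    def __init__(self, vendor: Vendor, amount: float):
        self.vendor, self.amount, self.state = vendor, amount, "held"

    def release(self, rating: int) -> None:
        assert self.state == "held"
        self.state = "released"
        self.vendor.ratings.append(rating)  # feedback drives future trust

    def refund(self) -> None:
        assert self.state == "held"
        self.state = "refunded"
        self.vendor.ratings.append(1)       # disputes depress the score

v = Vendor("vendor_a")
good, bad = Escrow(v, 50.0), Escrow(v, 20.0)
good.release(rating=5)
bad.refund()
assert v.score == 3.0   # (5 + 1) / 2
```

For detection work, the useful inversion is that these same structures leave traces: escrow flows, dispute volumes, and reputation histories are signals a platform can look for when illicit commerce moves onto legitimate infrastructure.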

8. Disinformation, election integrity, and coordinated inauthentic behavior

Scholarly findings are more cautious than policy debates imply, but the structural evidence on coordinated manipulation is robust. Guriev & Treisman (2024), across 74 democracies, find that high-disinformation media ecosystems decoupled citizen perceptions of election fairness from actual electoral integrity. Cinelli et al. (2023) reframe CIB as primarily an amplification-structure problem, not a fake-content problem — the manipulation surrounds real grievances rather than inventing them. Bail et al. (2018) in PNAS found that exposure to opposing political bots on Twitter increased polarization, particularly for conservatives — a cautionary finding for algorithmic diversity interventions.

9. Platform governance and content moderation

An interdisciplinary field examining how platforms make, implement, and legitimate moderation decisions. Jiang et al.'s (2024) systematic study of the 43 largest user-generated-content platforms found that platforms rarely explicitly define what they intend to moderate, rely heavily on users for moderation, and provide little meaningful recourse after action. The procedural justice critique is empirically substantiated. Jhaver et al. (2021) on Twitter deplatforming find that removing prominent accounts reduces their followers' downstream hate speech — but does not eliminate it. Tarleton Gillespie's Custodians of the Internet (Yale, 2018) remains the field's foundational theoretical text.

What this collectively means for T&S work

The clusters above are not separate islands. They share a small number of recurring findings: harms are massively underreported; perpetrators adapt faster than enforcement; victim populations are structurally — not randomly — vulnerable; and intervention design matters more than enforcement volume. A T&S team that takes these findings seriously will spend less time on enforcement metrics and more time on victim-side design, friction calibration, and cross-platform coordination. That shift is the practical contribution a criminologist can make on day one.

The bibliography contains the studies cited above and the rest of the working set. Each entry is annotated with what the study does, what it found, and why a T&S practitioner would want it on the shelf.