Our friends at Max Tactical Firearms took a hard look at the Brian M. Hicks study published in JAMA Network Open — specifically the methodology, measurement choices, and interpretive leaps that don’t hold up under scrutiny. What follows is their breakdown.
When a psychiatry professor publishes research about firearms, the first question worth asking is: are we looking at behavioral science — or a policy conclusion dressed up in data?
Actually read the Hicks study, though, and what you find is a design that appears aligned with a predetermined direction, then works to keep the data consistent with it.
The premise sounds serious enough: measuring the “prevalence of thoughts of shooting others” among U.S. adults. But once you get past the title and into the methodology, things stop feeling like precision science and start feeling like diagnosing a population based on a single, vaguely worded question.
The Weaknesses Aren’t Hidden
The study’s biggest problems aren’t buried in footnotes. The headline finding rests on a single self-reported survey item about “thoughts of shooting others,” with no separation of intrusive thoughts, fantasy, anger, or genuine intent. “Somewhat agree” gets lumped together with strong endorsement, inflating prevalence and erasing severity distinctions. There’s no demonstrated link between endorsing this item and actual violent behavior. The survey relies on response rates as low as 0.39% and scales those results to millions of Americans. And a cross-sectional snapshot — one moment in time — is discussed as though it identifies stable, policy-relevant danger.
What the Survey Was Actually Built to Study
One of the more revealing details is in the supplemental materials. Participants were told they were entering a study on relationships among firearms, suicide, and alcohol use — and that they’d be answering questions about gun ownership, mental health, suicide, alcohol and drug use, and gun violence. This wasn’t a narrow instrument built to measure firearm-specific homicidal intent. It was a sprawling survey combining firearm attitudes, access, storage, risky behavior, substance use, mental health, suicide items, antisocial behavior, intimate-partner violence, and personality questions.
That doesn’t automatically invalidate the research. But when your subject matter is as politically charged as firearm violence, vague wording and interpretive elasticity aren’t features — they’re liabilities.
The Entire Study Hangs on One Sentence
At the center of the study is a single survey item. Respondents were asked to rate their agreement with the statement, “I have thought about shooting another person or multiple people.”
That’s it. One sentence does all the heavy lifting.
In psychology, the difference between an intrusive thought, a fleeting emotional reaction, a hypothetical scenario, and genuine intent to act isn’t subtle — it’s fundamental. Collapsing those categories into a single measure is like asking people if they’ve ever “thought about driving fast” and using the results to estimate how many street racers are on the road.
Research on intrusive thoughts in nonclinical populations has long established that unwanted, disturbing, or aggressive thoughts occur in ordinary people who never act on them. That doesn’t make those thoughts irrelevant. It does mean they can’t be treated as stand-ins for imminent violence.
Hospitals don’t diagnose heart disease by asking whether a patient has ever felt chest discomfort after sprinting upstairs. Context isn’t a luxury in serious risk assessment. It is the assessment.
The Follow-Up Questions Don’t Fix the Foundation
Respondents who endorsed lifetime firearm homicidal ideation were then asked follow-up questions about acquiring a gun to shoot someone, bringing a gun somewhere with that intent, selecting targets from a list of twelve categories, and more.
But that only helps if the gateway measure is valid enough to justify everything downstream. If your front door is crooked, every room attached to it inherits the problem. The follow-up battery produces more dramatic numbers, but it doesn’t fix the underlying ambiguity of the initial screen. It subdivides people who passed through a broad, under-specified filter.
The study escalates from “I somewhat agree that I have thought about shooting someone” to target categories, acquisition questions, and intervention policy implications — as though that first step cleanly separates meaningful risk from ordinary human mental noise. That assumption is the whole game. The paper never earns it.
The Statistical Shortcut
According to the supplementary materials, anyone who selected “somewhat agree,” “agree,” or “strongly agree” was grouped together as having had these thoughts.
Mild and ambiguous agreement is treated identically to strong endorsement. This is called dichotomization, and it’s widely criticized because it flattens meaningful differences into a binary outcome. A pilot who says “I was a little tired on descent” isn’t treated the same as a pilot who says “I blacked out in the cockpit.” Severity gradients exist for a reason. Flattening them generates a cleaner headline and a dirtier analysis.
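To see the mechanics, consider a minimal sketch in Python. The response counts below are invented for illustration and are not the study's data; the point is only how moving the cut line moves the headline number.

```python
# Minimal sketch: how dichotomizing an ordinal scale inflates prevalence.
# These response counts are invented for illustration; they are NOT the
# Hicks study's data.

responses = {
    "strongly disagree": 7000,
    "disagree":          1500,
    "somewhat disagree":  500,
    "somewhat agree":     700,   # mild/ambiguous endorsement
    "agree":              200,
    "strongly agree":     100,   # strong endorsement
}

total = sum(responses.values())

# Prevalence if only clear endorsement counts
strong = responses["agree"] + responses["strongly agree"]

# The dichotomized version: any agreement counts as "has had these thoughts"
any_agree = strong + responses["somewhat agree"]

print(f"Strong endorsement only: {strong / total:.1%}")    # 3.0%
print(f"Any agreement counted:   {any_agree / total:.1%}") # 10.0%
```

Same respondents, same answers. In this toy distribution, the prevalence estimate more than triples depending on where the analyst draws the binary line.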
The Survey’s Ideological Footprint
Respondents weren’t just asked about violence in the abstract. They were asked about gun ownership reasons including protection from the government, a sense of freedom, a feeling of power and respect, and use in criminal activity or for street cred. They were also asked about gun carrying, storage, loaded access at home, risky gun handling after alcohol or cannabis, antisocial behavior, intimate-partner violence, and personality items about power, grievance, suspicion, and impulsivity.
The instrument also included overtly normative gun-attitude items — whether a well-armed citizenry is the best defense against tyrannical government, whether owning a gun makes you safer, whether the Second Amendment means citizens can carry in any public place.
Once a survey mixes public-health claims, ideology, personal grievance, risky conduct, and constitutional attitudes into the same instrument, correlation can easily wander into caricature.
Thoughts Are Not Behavior
The study measures thoughts but is consistently discussed as though it identifies risk. There is no demonstrated link between these self-reported thoughts and actual violent behavior — no verified incidents, no longitudinal follow-up, no predictive modeling. Research distinguishing aggressive intrusive thoughts from violent behavior underscores exactly why that distinction matters.
A person thinking “I could wring that guy’s neck” after being cut off in traffic is not the same as someone planning an assault. Good science sorts signal from noise. It doesn’t package noise as signal and then call Congress.
The Self-Report Problem
Every key variable in the study is self-reported. Ask ten people what it means to have “thought about shooting someone” and you may get ten different interpretations — a flash of anger, a hypothetical scenario, something from a movie. All of those answers get recorded the same way. At that point the study isn’t measuring a consistent construct. It’s measuring how different people interpret a sentence on a screen.
Low Response Rates, High Confidence Headlines
The response rate for address-based sampling was about 3.83%. For SMS recruitment it dropped to 0.39%.
The study attempts to correct for this through statistical weighting. But weighting can only adjust for known characteristics — it cannot correct for who chose to respond, why they responded, or how they interpreted the questions. If a restaurant surveyed 0.39% of its customers by text and declared what all diners thought of the menu, most people would laugh. When the topic is firearms and the journal is prestigious, the same weakness gets dressed up as national clarity.
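A toy simulation makes that limit concrete. Everything here is invented: the population, the response model, the single demographic weight. It assumes nonresponse is driven partly by a trait the weights cannot see, which is precisely the scenario weighting cannot repair.

```python
import random

random.seed(0)

# Toy population. Each person has one observed trait (young) and one
# latent trait (topic engagement). All numbers are invented.
population = []
for _ in range(100_000):
    young = random.random() < 0.5
    engaged = random.random() < 0.2
    endorses = random.random() < (0.15 if engaged else 0.02)
    population.append((young, engaged, endorses))

true_rate = sum(p[2] for p in population) / len(population)

# Nonresponse depends on BOTH traits: young and topic-engaged people
# respond more often. The analyst can only observe 'young'.
sample = [p for p in population
          if random.random() < 0.01 * (2 if p[0] else 1) * (5 if p[1] else 1)]

# Weight respondents back to the population's 50/50 age split.
n_young = sum(p[0] for p in sample)
w_young = 0.5 * len(sample) / n_young
w_old = 0.5 * len(sample) / (len(sample) - n_young)

weighted = sum((w_young if p[0] else w_old) * p[2] for p in sample) / len(sample)

print(f"True prevalence:   {true_rate:.2%}")
print(f"Weighted estimate: {weighted:.2%}")  # biased high despite weighting
```

In runs of this sketch the weighted estimate lands near double the true prevalence, because the weights rebalance age, the thing they can observe, while topic engagement, the thing actually driving who responds, stays oversampled.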
The Consent Form’s Built-In Frame
Participants were told before reaching the key survey item that they’d be answering questions about gun ownership, suicide, alcohol, drug use, and gun violence. Context affects answers — that’s not paranoia, that’s survey design. If you prime respondents with a battery of firearm, violence, substance-use, and suicide questions, then ask whether they’ve thought about shooting another person, you’re not eliciting that answer from a neutral starting point. This paper mostly glides past that problem.
A Snapshot Masquerading as a Forecast
The study is cross-sectional — one moment in time. It cannot establish causation, persistence, or future behavior. Yet the discussion moves toward identifying a “high-risk group” and suggesting intervention strategies. Without longitudinal data, behavioral outcomes, and predictive validation, “high-risk group” is doing narrative work, not scientific work.
Reporting frameworks like STROBE exist to prevent this kind of design-to-conclusion overreach. A one-time survey showing some people thought about quitting their jobs wouldn’t justify a paper on “the prevalence of impending workforce abandonment.” But once the subject is firearms, a cross-sectional snapshot gets promoted to policy-weather radar.
Gun Ownership Didn’t Show a Clear Association
One of the paper’s more awkward findings is that gun ownership itself was not consistently associated with these thoughts in the study’s reported models. If the popular takeaway is that firearm access is the obvious central driver, finding no significant ownership association should slow the parade considerably. Instead, the paper and surrounding media treatment pivot to the familiar public-health wish list anyway: intervention opportunities, waiting periods, extreme risk protection orders, broader violence-prevention efforts.
When the study’s own regression tables don’t identify ownership as a distinguishing factor, the jump to broad gun policy interventions is hard to justify. The conversation should be shifting toward human risk factors — social environment, substance use, grievance, victimization, impulsivity — not back toward the oldest script in the binder.
How the University Spun It
The University of Michigan’s public-facing writeup titled its coverage “Thoughts don’t kill people, but study suggests options for keeping guns from doing so,” emphasizing red flag laws, waiting periods, background checks, and Hicks’ comment that even a small proportion acting on such thoughts could produce large numbers of firearm injuries.
The paper asks a vague question, the methods flatten the answer, the tables upscale the result, and the university’s media release frames it all as policy urgency. Once a figure like “millions of Americans” enters a news cycle, it’s rarely followed by the reminder that the estimate rests on a small number of respondents, broad wording, and very low response rates. The caveat travels coach. The headline flies private.
The Data Is Locked Until 2028
According to the study’s data-sharing statement, deidentified participant data won’t be available through the National Data Archive until August 1, 2028. The public gets the headlines now. The broader research community gets a shot at the underlying data later — much later. Public narrative on the front end, independent scrutiny on the installment plan.
What a Better Study Would Have Done
A stronger study would have started with construct development — clearly distinguishing intrusive thoughts, anger-related rumination, retaliatory fantasy, and genuine violent intent — rather than jumping straight to prevalence estimation. It would not have collapsed an ordinal response scale into a yes-or-no variable without justification. The Standards for Educational and Psychological Testing are explicit about that burden.
If the goal was risk identification, the study would have needed a prospective design — following respondents over time and evaluating the measure against verified outcomes. That’s where predictive validity actually lives, not in the intuitive force of a survey item.
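Base-rate arithmetic shows what is at stake. The numbers below are invented placeholders, not estimates from the study or the literature; they only illustrate that when the outcome is rare, even a screen with strong sensitivity and specificity flags mostly false positives.

```python
# Back-of-the-envelope positive predictive value (PPV).
# All inputs are invented illustrative values, not figures from the study.

base_rate = 0.001    # assume 0.1% of endorsers ever act violently
sensitivity = 0.90   # screen catches 90% of true future cases
specificity = 0.90   # screen clears 90% of non-cases

true_pos = base_rate * sensitivity
false_pos = (1 - base_rate) * (1 - specificity)

ppv = true_pos / (true_pos + false_pos)
print(f"PPV: {ppv:.1%}")  # ~0.9%: the vast majority of those flagged are false positives
```

That is the arithmetic any claim of "risk identification" has to confront, and a one-time survey item, by construction, never gets the chance.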
On sampling, AAPOR’s guidance is clear that response rates are a starting point, not a quality certificate. A stronger paper would have examined auxiliary data, compared respondents with external benchmarks, and been honest about what weighting can and cannot fix.
When the Conclusion Shows Up First
The data is real. The methods are standard. The publication is legitimate.
But science is not just about what you measure — it’s about how you define it, how you interpret it, and how far you extend those interpretations. When a study begins with a vague construct, relies on subjective self-report, compresses nuance into binary categories, and reaches toward policy relevance, it stops feeling like discovery and starts feeling like confirmation.
When confirmation becomes predictable — when every study arrives at the same destination regardless of the starting point — it’s worth asking whether the research is exploring reality or reinforcing a story that was already written.
Because at some point, “peer-reviewed” starts to feel a lot like “pre-aligned.”
This analysis was originally published by Max Tactical Firearms and is reprinted here with permission.