The recent retraction of “Boom, Headshot!”: Effect of Video Game Play and Controller Type on Firing Aim and Accuracy from the journal Communication Research ended a protracted five-year controversy over the paper's findings, not because they were shown to be wrong, but because the data needed to refute or verify them essentially went missing. This is significant given the study's purpose was to inform the debate over violent first-person 'shooter' games being "murder simulators", essentially asserting that violent video games are effective tools to train people to kill through "operant conditioning" (learning through reward and reinforcement), a claim that is not actually debunked by virtue of the paper having been retracted.
The situation is worthy of note for a number of reasons: it speaks to longstanding concerns over an apparent 'soft science' bias in academia; the 'gerrymandering' of journal publishing; and the veracity of the results and their meaning relative to the approach used to test the paper's hypothesis/assertions. It is this latter point that is particularly interesting, given the narrative it feeds into (see above) and the fundamentally flawed approach to discovery taken. In a nutshell, participants fired an airsoft pistol at a fixed (man-shaped) target after a 'conditioning' period playing specific video games using mouse/keyboard or 'gun'-shaped controllers, but were not required to do so beforehand.
Individual baselines were instead established through Q&A surveys that built a psychological profile of each person's general attitude towards firearms etc., not their raw live-fire ability.
In other words, the participants' thoughts and feelings towards firearms and related usage were the initial gauge that, when added to the conditioning period, allowed a correlation to be established: the more favorable the attitude, and the greater the conditioning exposure, the greater the accuracy predicted and measured. The follow-through is then that first-person shooters do indeed 'train' people to be killers, an extraordinary extrapolation of implied cause and effect that has greater implications for society, not for any corollary aspects towards violence per se, but because the paper is actually an attempt to establish a universal psychological test that can determine a person's state of being from a series of theoretical reference points; that eight-year-old Charlotte's crude "L"-shaped crayon sketch of a gun predicts her to be a killer.
The problem with this is that the leap to conclusion is so grossly over-compensatory (for want of a better word) it can be used as a predictive measure of just about any outcome plugged into it: a person with a favorable attitude towards bassoons, who played hours of "Bassoon & You too", is predicted to be an "accurate" bassoon player. Replace "firearm" with "baseball", "cricket", "bow and arrow" or "tiddlywinks" and the formula's prediction is the same, [positive attitude] + [increased virtual/fictional exposure] = [increasing real-world outcome], not the hours and hours of hands-on live-fire (in this instance) instruction, training, practice and dedication.
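The domain-agnosticism of that "formula" can be shown with a toy sketch (entirely hypothetical; the weights and scores are invented for illustration, not taken from the paper). Nothing in the model refers to the activity itself, so every activity plugged in receives an identical prediction:

```python
# Toy illustration of the critiqued "formula":
# [positive attitude] + [increased virtual exposure] = [real-world outcome]
# The weighting below is invented; the point is that the inputs carry
# no information about hands-on practice with the real thing.

def predicted_outcome(attitude_score: float, exposure_hours: float) -> float:
    """Naive 'prediction' of real-world skill from attitude and exposure."""
    return attitude_score + exposure_hours / 10

activities = ["firearm", "baseball", "cricket", "bassoon", "tiddlywinks"]
scores = {a: predicted_outcome(attitude_score=4.0, exposure_hours=30)
          for a in activities}

for activity, score in scores.items():
    print(f"{activity}: predicted real-world accuracy = {score}")
# Every activity gets the same number, because the model is blind to
# what is actually being practised.
```

Swap any activity in or out and the output never changes, which is the essay's point: the correlation is structural, not evidential.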
That "Boom, Headshot!"'s failings were not spotted or commented on at inception, or at any other point during research and publication, perhaps speaks just as much to the concerns mentioned in the opening paragraph as it does to simply poor science and the propensity towards 'celebrity publication bias', courting controversy as a means of garnering corporate/business/financial interest, problems not easily solved for subjects with such low efficacy/high output rates.
P.S. The article's full heading/title should be "Boom Headshot! Perpetuating the "violent video games are murder simulators" narrative through bad science" but it doesn't fit.
 “Boom, Headshot!”: Effect of Video Game Play and Controller Type on Firing Aim and Accuracy [researchgate]. "Abstract: Video games are excellent training tools. Some writers have called violent video games “murder simulators.” [Jack Thompson] Can violent games “train” a person to shoot a gun? There are theoretical reasons to believe they can. Participants (N = 151) played a violent shooting game with humanoid targets that rewarded headshots, a nonviolent shooting game with bull’s-eye targets, or a nonviolent nonshooting game. Those who played a shooting game used either a pistol-shaped or a standard controller. Next, participants shot a realistic gun at a mannequin. Participants who played a violent shooting game using a pistol-shaped controller had 99% more headshots and 33% more other shots than did other participants. These results remained significant even after controlling for firearm experience, gun attitudes, habitual exposure to violent shooting games, and trait aggressiveness. Habitual exposure to violent shooting games also predicted shooting accuracy. Thus, playing violent shooting video games can improve firing accuracy and can influence players to aim for the head." (emphasis added).
 Although the term "murder simulators" was popularised through Jack Thompson's efforts in the late 1990s to restrict the availability of video games to minors, it originates with author David Grossman, who used the phrase to describe the overall effect he asserted playing violent video games had on players (minors in particular). It has since been used variously by other critics of violent video games.
 The soft science bias is essentially a symptom of the publication of subjective topics of research that prove difficult to reliably replicate or to prove one way or the other. This raises further concerns over such research being given platforms and published, because it essentially falls into the realm of subjective op-ed and is non-falsifiable in nature, a position more typical of advocacy, politicised or propaganda research conducted by stakeholders or vested interests as a means to push a supportive or favourable narrative. 'Soft' subjects like political science, gender studies, and the arts & humanities generally exhibit a greater propensity towards bias than hard sciences like physics, biology etc.
Search terms "publication bias soft sciences"
- Proceedings of the National Academy of Sciences: "US studies may overestimate effect sizes in softer research".
- Public Library of Science: "“Positive” Results Increase Down the Hierarchy of the Sciences".
- Department of Education, University of Chicago: "How Hard Is Hard Science, How Soft Is Soft Science? The Empirical Cumulativeness of Research".
- International Council for Science: "Advisory Note "Bias in science publishing"".
- et cetera, et cetera.
 Gerrymandered research is not so much cherry-picking as selective filtering: data that supports a predefined conclusion is allowed through the filter even though it may not be fully supportive of the goal, whereas cherry-picking deals exclusively with selective bias. The difference between the two is that the former can give a greater appearance of veracity because its conclusions aren't quite so easily refuted. In addition to this, journals giving voice to such 'soft-science' research are notorious for courting controversy for the sake of notoriety or interest in their publications, often publishing controversial subjects that may or may not be backed by thorough research and/or exhibiting a preference towards fashionable political topics of discussion. In other words, "Boom, Headshot...", 1) should not have been published in the first place, given the data was not available at the time and the glaring problems with the approach, and 2) was published because the subject is politically topical (as are the paper's authors), which brings interest to the journal, not because the topic had any greater 'truth' to tell or merit than other research in the field.
 The determining factor using this approach is 'exposure', the prediction being essentially that the greater the exposure, the better the person's transferable accuracy (they are predicted to be better with the real(ish) thing). This allows for the establishment of a non-falsifiable correlation, because the baseline is a theoretical predictive assertion, not contextually measured objective observation. The control variable (the baseline) was established by having "[p]articipants ... [complete] a number of control variables, including the Aggression Questionnaire, the Attitudes Toward Guns Scale, and the Attitudes Toward Guns and Violence Questionnaire [that were] combined to form a composite measure of attitudes toward guns. [They were also asked] whether they had received firearms training and ... their three favorite video games ... used to measure habitual exposure to shooting video games. With the exception of a deer-hunting video game, all shooting games involved killing humanoid targets and all were rated “M” (for mature players 17 and older)".
 An interesting throw-away from the paper reveals there to have been no difference in outcome based on the individual's sex/gender; female participants were as likely to perform as males: "[t]here were no main or interactive effects involving participant sex for headshots or other shots, so the data from men and women were combined." In addition to this, the live-fire section of the research was conducted in a way that replicated the game environment, not real-world usage, i.e., the distance between shooter and target was sufficient to allow WYAIWYH (Where You Aim Is Where You Hit), rather than demanding the shooter employ more complex motor skills to compensate for ballistics etc.: "... the firing distance selected for this experiment (20 ft; 6.1 m) was determined during pretesting to be an optimal distance for most successfully landing a hit where one aimed on the target". In other words, the test was designed not to fail.
 It's important to note that "[p]articipants were instructed in the use of the pistol and wore safety goggles while shooting. A post-test-only design was employed to eliminate pistol-firing practice effects" (the compensatory factors are not disclosed). This fact alone should have repudiated the paper's central premise, that violent video games train people to use firearms, especially without any live-fire comparative baseline.
 The paper does not appear to have been peer-reviewed.
 Perhaps a more objective way to have conducted the test might have been to include a control group that was not asked any questions, was not conditioned and only shot at targets. This could be expanded to several shoots at fixed intervals, or at intervals matching the start/end of each phase the other participants were involved in: for example, a baseline shoot, a shoot at the end of the Q&A, a shoot at the end of conditioning, and a final shoot. The same would apply to the other participants, a shoot after each phase. Without any of this there is no historical comparative test to be made, which makes the research not much more than vanity publishing, and one more disingenuous step towards the litany of 'pre' tests to determine potential 'pre' crime, as it were.
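The proposed alternative design can be sketched schematically (a hypothetical protocol, not the paper's actual one; phase names are mine). The key property is that every group, including a pure control, produces measured live-fire data points at each stage, giving each participant a real baseline and a history to compare against:

```python
# Hypothetical redesign sketch: every participant shoots after each
# phase, so changes in accuracy can be compared against a measured
# live-fire baseline rather than a questionnaire-derived profile.

CONTROL = ["baseline shoot"]  # no Q&A, no conditioning, targets only
TREATMENT = [
    "baseline shoot",          # measured ability before anything else
    "Q&A",                     # attitude/profile surveys
    "post-Q&A shoot",
    "conditioning",            # the video-game play period
    "post-conditioning shoot",
    "final shoot",
]

def measured_shoots(schedule: list[str]) -> list[str]:
    """Return the live-fire measurement points a schedule yields."""
    return [step for step in schedule if "shoot" in step]

print(measured_shoots(TREATMENT))  # four comparable measurements
print(measured_shoots(CONTROL))    # at minimum one true baseline
```

With four measurements per treatment participant, the effect of each phase can be isolated against the participant's own baseline and against the untouched control group, which is exactly the comparison the published design made impossible.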