Why This Dreadful “Vaccinated vs. Unvaccinated” Study Doesn’t Hold Water (or Logic)
This recently released, small, unpublished study of vaccinated vs. unvaccinated kids is a load of bollocks. It is designed to find that vaccinated kids have more health problems than unvaccinated kids. It is like calling a scribble on a napkin a Picasso, except this one is being waved around in Congress.
Every now and then, a study emerges that looks, at first glance, like it has found the Holy Grail of antivax talking points. The idea that nobody ever looks at vaccinated vs. unvaccinated people is a conspiracy narrative. The latest ‘hidden study’ claims that childhood vaccines increase the risk of all sorts of long-term chronic conditions: asthma, eczema, speech delay, and even autoimmune disease. If true, this would be big news. But when you peek under the hood, it’s less “revolutionary science” and more “how not to design a study 101.” Let’s dig in and see why.
**The claim is this:** In 2016, journalist Del Bigtree challenged a leading infectious-disease expert (who promotes quackery) to do the ultimate study of vaccinated vs unvaccinated children. According to the promo, the expert ran it, expected to disprove antivax claims—but what they found was “so horrifying” that the results were locked away until now.
Translation: it makes a great Netflix trailer, but not a credible scientific backstory.
I cannot find a _public record_ of any such commission (by Bigtree or otherwise), any published version of a definitive “vaccinated vs. unvaccinated” cohort study matching that description, or any disclosure by an academic institution that such a study was suppressed.
The document called _“Impact of Childhood Vaccination on Short- and Long-Term Chronic Health Outcomes in Children – A Birth Cohort Study”_ does not appear in any major peer-reviewed journal as of this writing. The authorship, methods, and peer-review status are not transparently documented. The host institution, it seems, has only just been made aware of it, and it appears they are rather displeased!
In short, the “hidden vaccinated vs. unvaccinated study” lacks corroboration in academic, regulatory, or media archives. That is not proof of suppression; it is simply a claim not yet backed by credible evidence. I have reviewed the document, and it is a load of crap. Read on to see why. There are lay messages in the blue boxes.
## Bottom-line assessment
**_If you have no time, here is the key verdict._**
* **Claim tested:** “Any exposure to childhood vaccination increases risk of diverse long-term chronic conditions,” tested in what is billed as the ultimate, never-before-done vaccinated vs. unvaccinated study.
* **Design used:** Retrospective birth cohort in a single integrated system; exposure coded as **ever vaccinated vs never**, outcomes from ICD codes; Cox models and IRRs with limited covariate adjustment; several sensitivity checks.
* **My verdict:** The analytic choices are **not fit for the causal question** as framed. There are major, unresolved biases (especially **ascertainment/health-care utilisation bias**, **immortal-time/time-varying exposure bias**, **selection & residual confounding**, and **outcome misclassification & multiplicity**) that are likely sufficient to generate the reported hazard ratios—even if the true effect were null. The results, therefore, **cannot be interpreted causally** and should not be used to infer vaccine harm.
## Key threats to validity
### **1. Ascertainment & health-care utilisation bias (primary threat)**
* **FACT: Vaccination status is strongly tied to well-child care;** vaccinated children had ~7 annual encounters vs ~2 among unvaccinated overall; even in unvaccinated children with a condition, encounters rose to ~5—classic detection bias favouring diagnosis in the vaccinated. Sensitivity analyses restricted to “≥1 encounter” don’t remove systematic under-ascertainment. This single factor plausibly explains higher rates of diagnoses like asthma, eczema, speech delay, otitis media, etc. [Ref for US context]
* **Asthma/atopy/eczema:** These are highly sensitive to **care access** and diagnostic labelling; strong ties to well-child use. Without matching on visit density and environmental covariates, these are **not interpretable** causally.
* **Neurodevelopmental & speech disorders:** Diagnosed later and via repeated assessments; significantly shorter follow-up in unvaccinated + fewer visits = systematic under-capture. The striking HRs are compatible with surveillance bias.
### The real issue: doctor visits ≠ disease
Here’s an elephant in the room: **vaccinated kids go to the doctor far more often than unvaccinated kids**. Why? Because vaccines are delivered at routine well-child visits. That means these kids get checked, measured, poked, prodded, and yes—diagnosed—far more often.
Unvaccinated kids? They mostly skip that system. Less time in the clinic = fewer diagnoses in the medical record, whether they’re sick or not.
👉 This is called **ascertainment bias**. In plain English: you only find what you look for. If you never take your car in for a service, you’ll have fewer repair bills too — until it breaks down on the motorway.
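To see how big this effect can be, here is a minimal toy simulation (not the study's data): the visit counts echo the ~7 vs ~2 annual encounters quoted above, the per-visit detection probability is invented, and the true underlying risk is set to be identical in both groups.

```python
# Toy illustration of ascertainment (detection) bias; numbers are invented.
# Both groups have the SAME true risk; a condition only enters the record
# if at least one clinic visit "catches" it.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
true_risk = 0.10            # identical underlying risk in both groups
p_detect_per_visit = 0.30   # assumed chance a single visit records the diagnosis

for group, visits_per_year in [("vaccinated", 7), ("unvaccinated", 2)]:
    has_condition = rng.random(n) < true_risk
    visits = rng.poisson(visits_per_year, n)
    p_recorded = 1 - (1 - p_detect_per_visit) ** visits   # ever written in the chart
    recorded = has_condition & (rng.random(n) < p_recorded)
    print(f"{group:>12}: true prevalence {has_condition.mean():.3f}, "
          f"recorded prevalence {recorded.mean():.3f}")
```

Run it and the vaccinated group looks roughly twice as “sick” on paper, even though nothing differs but the number of times anyone looked.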
### **2. Time-varying exposure & immortal time**
* Children enter the “vaccinated” group **after** their first shot; outcomes occurring **before** the first vaccination are counted in the “unexposed” risk set unless carefully modelled with something called **time-dependent vaccination status**. The paper labels exposure as “exposed vs unexposed prior to onset,” but it does not demonstrate a **time-dependent** analysis (e.g., splitting person-time at vaccination). Without that, immortal-time misclassification can artifactually inflate hazards in the vaccinated relative to the unvaccinated (how convenient). A sketch of what a time-dependent setup looks like follows at the end of this section.
### A numbers game stacked from the start
In this ‘vaccinated vs. unvaccinated’ study, the authors lumped all vaccines together as one big “exposure.” That’s like blaming _all food_ for your peanut allergy. It ignores differences in schedules, ages, and doses.
Worse, they treated vaccination as if it were a one-time event, instead of a time-varying process. Kids can only be called “vaccinated” after their first shot, so they were guaranteed to be “disease-free” until then.
👉 This sets up what’s called **immortal time bias**. If you classify people as “exposed” only after they’ve received a vaccine, then all the time before vaccination (when they were still outcome-free) is wrongly counted as unexposed time, giving the exposed group an unfair disadvantage. They gave one team a head start and then acted shocked when the other lost.
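For readers who like to see the machinery, here is a minimal sketch (invented data, my own column names) of what treating vaccination as a time-varying exposure looks like: person-time is split at the first dose, so days before vaccination are never attributed to the vaccinated group. The model fit uses `CoxTimeVaryingFitter` from the `lifelines` library, assuming it is installed; on real data you would feed it the whole cohort plus covariates.

```python
# Sketch: vaccination as a TIME-VARYING exposure in start-stop format.
# Data, follow-up times, and column names are invented for illustration.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

long_df = pd.DataFrame([
    # child 1: first dose at day 60, diagnosis (event) at day 400
    {"id": 1, "start": 0,  "stop": 60,  "vaccinated": 0, "event": 0},
    {"id": 1, "start": 60, "stop": 400, "vaccinated": 1, "event": 1},
    # child 2: never vaccinated, diagnosis at day 300
    {"id": 2, "start": 0,  "stop": 300, "vaccinated": 0, "event": 1},
    # child 3: first dose at day 90, censored at day 970
    {"id": 3, "start": 0,  "stop": 90,  "vaccinated": 0, "event": 0},
    {"id": 3, "start": 90, "stop": 970, "vaccinated": 1, "event": 0},
    # child 4: never vaccinated, censored at day 461
    {"id": 4, "start": 0,  "stop": 461, "vaccinated": 0, "event": 0},
    # child 5: first dose at day 45, diagnosis at day 200
    {"id": 5, "start": 0,  "stop": 45,  "vaccinated": 0, "event": 0},
    {"id": 5, "start": 45, "stop": 200, "vaccinated": 1, "event": 1},
    # child 6: never vaccinated, diagnosis at day 150
    {"id": 6, "start": 0,  "stop": 150, "vaccinated": 0, "event": 1},
])

# Crude incidence with person-time correctly attributed to each exposure state
rates = (long_df.assign(days=long_df["stop"] - long_df["start"])
         .groupby("vaccinated")
         .agg(person_days=("days", "sum"), events=("event", "sum")))
rates["per_person_year"] = rates["events"] / rates["person_days"] * 365.25
print(rates)

# The Cox model is then fitted on the same long (counting-process) format
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```

Compare that with coding every child as simply “ever vaccinated”: child 1’s first 60 unvaccinated days, and child 5’s first 45, would silently change teams.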
### **3. Strong baseline imbalance & insufficient adjustment**
* You can see in the tables that the vaccinated and unvaccinated groups differ in **sex, race/ethnicity, prematurity, low birth weight, respiratory distress,** and birth trauma, all factors well established to be related to later morbidity and health-service use. In other words, the vaccinated and unvaccinated children are quite different at baseline. Adjustment is limited to a handful of perinatal covariates and omits SES, maternal factors, smoking, parity, clinic, calendar time, neighbourhood, clinician, and utilisation intensity. Given the breadth of outcomes, residual confounding is very likely, and it is vital to account for it.
### The confounding mess
Vaccinated vs. unvaccinated kids weren’t comparable to begin with. They differed in sex, race, prematurity, and early health complications—all things that affect later health. And then there’s the big one: socioeconomic status. Families who refuse vaccines often live very different lives from those who vaccinate. None of this was adequately adjusted for.
👉 The investigators compared apples and oranges and did not account for the differences. Vaccinated and unvaccinated kids were different from the outset.
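To show how fast non-comparable groups poison a crude comparison, here is a toy simulation with invented numbers. One family-level factor (call it care-seeking, standing in for SES and everything that travels with it) raises both the chance of being vaccinated and the chance of a diagnosis landing in the chart, while the vaccine itself is given no effect at all.

```python
# Toy illustration of confounding; all numbers are invented and the true
# vaccine effect on the outcome is set to exactly zero.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
care_seeking = rng.random(n) < 0.5            # hypothetical confounder
p_vacc = np.where(care_seeking, 0.95, 0.60)   # confounder drives vaccination...
p_dx   = np.where(care_seeking, 0.12, 0.06)   # ...and recorded diagnosis
vaccinated = rng.random(n) < p_vacc
diagnosed  = rng.random(n) < p_dx             # independent of vaccination

def risk(mask):
    return diagnosed[mask].mean()

print(f"Crude risk ratio: {risk(vaccinated) / risk(~vaccinated):.2f}")
for level in (True, False):
    stratum = care_seeking == level
    rr = risk(vaccinated & stratum) / risk(~vaccinated & stratum)
    print(f"Within care-seeking={level}: RR = {rr:.2f}")
```

The crude ratio comes out around 1.4–1.5; within each stratum it collapses to about 1. Now multiply that by sex, prematurity, SES, maternal factors, and clinic, none of which were adequately adjusted for.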
### **4. Exposure definition too crude for inference**
* Treating **any vaccine** as a single exposure collapses distinct schedules, ages, combinations, and indications. There’s no dose–response modelling (only vaccinated vs. unvaccinated), no spacing or calendar-time modelling, and no active comparator (e.g., alternative schedule vs on-time schedule) to align care-seeking. This invites confounding by parenting style and health behaviour.
👉 Key point – One group lived in the health system; the other mostly stayed outside it.
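For contrast, here is a small sketch, with invented immunisation records, made-up column names, and arbitrary cut-offs, of what a less crude exposure coding could look like: count doses received by a fixed age and classify schedule timing, rather than flattening everything into ever vs. never.

```python
# Sketch: exposure coded by dose count and schedule timing, not ever/never.
# Records, column names, and category cut-offs are invented for illustration.
import pandas as pd

doses = pd.DataFrame({
    "id":       [1, 1, 2, 3, 3, 3, 3],
    "age_days": [61, 122, 430, 61, 122, 183, 250],   # child's age at each dose
})
children = pd.DataFrame({"id": [1, 2, 3, 4]})         # child 4 received no doses

doses_by_12m = (doses[doses.age_days <= 365]
                .groupby("id").size().rename("doses_by_12m"))
exposure = children.join(doses_by_12m, on="id").fillna({"doses_by_12m": 0})
exposure["schedule"] = pd.cut(exposure["doses_by_12m"], bins=[-1, 0, 2, 99],
                              labels=["none", "partial", "on_time"])
print(exposure)
```

With categories like these (or a simple dose count) you can at least look for a dose–response pattern and compare delayed vs. on-time schedules among families who all engage with care, which is what an active-comparator design does.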
### **5. Outcome construction & multiplicity**
* A very broad composite (“any chronic condition”) mixes heterogeneous etiologies and severities, many susceptible to detection bias (e.g., speech disorder, developmental delay, atopy). Dozens of outcomes (and many sub-outcomes) are tested with no multiple-comparison control, which inflates false positives. The extreme or undefined HR/IRR results (zero events among the unvaccinated) most likely reflect low power plus short follow-up in refusers or in people with less access to services, not biology. Also, no list of the diagnostic codes used is provided.
### Rolling the dice until you get a scary number
The study tested dozens of outcomes: asthma, eczema, ADHD, allergies, diabetes, you name it. When you roll the dice that many times, you’re bound to get some “significant” results by chance. But the paper doesn’t adjust for that, and it should have.
Then there’s study power. This is not a particularly large study, and there are only 1,957 unvaccinated children. That’s why for some outcomes the hazard ratio is infinity: there were literally zero cases in the small unvaccinated group. That’s not proof of safety or harm; it’s proof the sample was too small.
👉 If you roll the dice a hundred times, you’ll always find some scary numbers by chance, so you must account for this. And zero events in tiny groups don’t prove safety or danger. They just prove the group was too small.
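The arithmetic behind that warning fits on the back of an envelope, and the standard fix is one function call. The p-values below are invented; `multipletests` from the statsmodels package is one common way to apply a correction such as Holm’s.

```python
# Why testing dozens of outcomes guarantees "hits", and what a correction does.
# The list of p-values is invented for illustration.
from statsmodels.stats.multitest import multipletests

n_tests, alpha = 40, 0.05
print("Expected false positives with no true effects:", n_tests * alpha)
print("Chance of at least one false alarm:", round(1 - (1 - alpha) ** n_tests, 2))

pvals = [0.003, 0.02, 0.04, 0.11, 0.30, 0.47, 0.62, 0.81]   # hypothetical results
reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="holm")
for p, padj, r in zip(pvals, p_adj, reject):
    print(f"raw p={p:.3f}  Holm-adjusted p={padj:.3f}  still significant: {r}")
```

With 40 null tests you expect about two “significant” findings and an 87% chance of at least one; after correction, most of the marginal ones evaporate.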
### **6. Follow-up time imbalance and censoring**
* In the vaccinated vs. unvaccinated groups, the follow-up time was vastly different: median follow-up ≈ **970 days** (vaccinated) vs **461 days** (unvaccinated). Shorter observation in refusers reduces the opportunity to record diagnoses that emerge at later ages (e.g., ADHD, learning disorders, autoimmune disease), biasing hazards upward for the vaccinated. The Kaplan–Meier curves showing large separation likely mirror different surveillance intensity and time at risk, not causal effects. **This one is a real howler in my opinion.**
### Half the time, half the data
Unvaccinated kids were tracked for only about half as long as vaccinated kids, so they never had the same chance to be diagnosed with conditions that appear later—making vaccines look riskier when it’s really just less follow-up.
👉 This is like comparing marathon runners by stopping the stopwatch early for one group. If you only let some runners go halfway, of course, they’ll have fewer blisters, pulled muscles, or exhaustion recorded—because they never had time to develop them.
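A back-of-the-envelope calculation, using the study’s quoted median follow-up (970 vs 461 days) and an invented, identical true rate in both groups, shows how much of the gap follow-up alone can manufacture:

```python
# Same true hazard, different follow-up: the group watched longer accumulates
# more recorded diagnoses. The rate below is invented for illustration.
import math

rate_per_year = 0.05   # identical underlying rate assumed in both groups
for group, days in [("vaccinated", 970), ("unvaccinated", 461)]:
    years = days / 365.25
    cum_risk = 1 - math.exp(-rate_per_year * years)
    print(f"{group:>12}: followed {days} days -> fraction ever diagnosed {cum_risk:.1%}")
```

That alone roughly doubles the recorded fraction in the longer-followed group. Survival methods can adjust for time at risk in principle, but only if later-onset diagnoses are actually being captured in both groups, and with so few visits among the unvaccinated, they are not.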
### **7. Negative/positive controls underused**
* The paper notes the absence of a signal for cancer as a kind of negative control, but there is no structured set of negative-control outcomes (e.g., forearm fractures) nor a negative-control exposure (e.g., non-vaccine preventive care) to quantify bias.
### Skipping the ‘placebo’ or dummy test
The authors point out there was no link to cancer, but they didn’t check other outcomes that vaccines couldn’t possibly cause—like broken bones—or compare to other routine care. Those kinds of “placebo tests” help show whether the study is picking up real effects or just bias, and this study skipped them.
👉 If you test a smoke alarm only by lighting a match under it, sure it beeps for smoke — but unless you also check whether it stays quiet when you wave a spoon or a pillow under it, you don’t know if it’s detecting real danger or just going off randomly. This study never ran that kind of “dummy test.”
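Here is a minimal sketch of the idea, with made-up data and column names: run the exact same comparison on an outcome the exposure cannot plausibly cause (forearm fracture is a classic choice). If the “effect” shows up there too, you are measuring bias, not biology.

```python
# Negative-control "dummy test": run the SAME pipeline on an outcome that
# vaccines cannot plausibly cause. Data and column names are invented.
import pandas as pd

def crude_risk_ratio(df: pd.DataFrame, outcome: str) -> float:
    """Risk of `outcome` in ever-vaccinated vs never-vaccinated children."""
    risk = df.groupby("vaccinated")[outcome].mean()
    return risk.loc[1] / risk.loc[0]

df = pd.DataFrame({
    "vaccinated":       [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "asthma_dx":        [1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0],  # studied outcome
    "forearm_fracture": [1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],  # negative control
})
for outcome in ["asthma_dx", "forearm_fracture"]:
    print(f"{outcome}: crude risk ratio = {crude_risk_ratio(df, outcome):.1f}")
```

In this toy example the “fracture effect” is elevated too, which is exactly the signature of a biased pipeline rather than a vaccine effect.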
## Would My Team Do This Better? Hell Yes!
* * *
### What good science would look like when comparing vaccinated vs. unvaccinated people
To really answer this question, you need:
* **A large population and high-quality data**, representative of the real population.
* **A team of investigators** who know what they are doing.
* **Transparency** and peer review.
* **Time-varying models** that handle when kids get vaccinated.
* **Active comparators**, like on-time vs delayed schedules, not “everything vs nothing.”
* **Controls for clinic visits**—matching kids by how often they are seen, not just by birth weight (see the sketch after this list).
* **Family-based comparisons**—siblings with different vaccination histories are a goldmine, where possible.
* **Pre-registered outcomes**—pick a small number of plausible health conditions as primary outcomes, not a shopping list.
**In other words: build the study like a _target trial_ rather than a game of epidemiological bingo.**
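To make one of those bullets concrete, here is a toy sketch of the “controls for clinic visits” idea, with invented data and column names: band children by how often they are seen and match vaccinated to unvaccinated children within bands. A real analysis would use proper matching or weighting inside a target-trial emulation, not an exact merge on two variables.

```python
# Toy sketch: compare children at similar levels of health-care contact.
# Data, column names, and visit bands are invented for illustration.
import pandas as pd

kids = pd.DataFrame({
    "id":         range(8),
    "vaccinated": [1, 1, 1, 1, 1, 0, 0, 0],
    "visits_yr":  [7, 6, 2, 8, 3, 2, 3, 7],
    "preterm":    [0, 1, 0, 0, 1, 0, 1, 0],
})
kids["visit_band"] = pd.cut(kids["visits_yr"], bins=[0, 3, 6, 99],
                            labels=["low", "mid", "high"])

# Exact-match vaccinated to unvaccinated children on visit band and prematurity
matched = kids[kids.vaccinated == 1].merge(
    kids[kids.vaccinated == 0],
    on=["visit_band", "preterm"], suffixes=("_vax", "_unvax"))
print(matched[["id_vax", "id_unvax", "visit_band", "preterm"]])
```

The surviving pairs are children who actually sit in the same slice of the health system, which is the comparison that makes an ever-vs-never question even half-answerable.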
### Sound Bites Box
Vaccinated vs. unvaccinated kids see doctors more → more diagnoses.
Fewer visits ≠ healthier kids.
Comparing vaccinated vs. unvaccinated is apples vs oranges.
The math setup guaranteed bias.
Testing everything at once ensures false alarms.
Zero events in tiny groups = noise, not proof.
Vaccines were unlikely to be the cause. Healthcare access was.