The Unjournal (Unjournal.org)
@unjournal.bsky.social
Researchers, practitioners, & open science advocates building a better system for research evaluation. Nonprofit. We commission public evaluation & rating of hosted work. To make rigorous research more impactful, & impactful research more rigorous.
Honorable Mention Lottery: Drawing Tuesday, February 10

Two of the five honorable mention evaluators will win an extra $500 each. Why a lottery? Real incentives, not token recognition: larger payments to fewer people reduce transaction costs for everyone.

Results announced… http://dlvr.it/TQr5fQ
Honorable Mention Lottery — The Unjournal 2024–25 Evaluator Prize
Transparent random draw to select two $500 prize winners from the five honorable mentions in The Unjournal's 2024–25 Evaluator Prize.
dlvr.it
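One way a draw like this can be made transparent is to commit to a public seed in advance so anyone can re-run it. A minimal sketch in Python; the seed string and names are hypothetical, and the post does not specify the actual drawing procedure:

```python
import hashlib
import random

# Hypothetical: the five honorable mentions (placeholder names) and a
# seed announced before the drawing; the actual procedure may differ.
honorable_mentions = ["Eval A", "Eval B", "Eval C", "Eval D", "Eval E"]
seed = "unjournal-hm-lottery-2026-02-10"

# Derive a deterministic RNG from the published seed so the draw is
# reproducible by anyone with the same seed.
rng = random.Random(int(hashlib.sha256(seed.encode()).hexdigest(), 16))

winners = rng.sample(honorable_mentions, k=2)  # two $500 winners
print(winners)
```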
February 8, 2026 at 5:56 PM
New video: The Unjournal Process — a 2-minute walkthrough of how we evaluate research. Produced by one of our field specialists. http://dlvr.it/TQr4cc http://dlvr.it/TQr4cg
February 8, 2026 at 5:23 PM
🎥 NEW VIDEO: The Unjournal Process — How We Evaluate Research

Watch our 6-step process for rigorous, transparent peer review:
• Expert evaluators compensated $350-450/review
• 3-5 week turnaround
• Public evaluations with DOIs
• Author response before publication… http://dlvr.it/TQqbpC
February 8, 2026 at 2:48 AM
Thanks again for your hard work, expertise, and engagement with the authors and evaluation managers, and productive collaboration with the other evaluator.
They asked me to review/evaluate an interesting paper. I learned a lot and I got ~$1500 (actually donated; don't come after me, Ausländerbehörde 🙅). Alternative peer review processes seem like a promising part of moving away from our journal-heavy systems.
🥈 SECOND PRIZE: Cannon Cloud
Evaluated "Global Potential for Natural Regeneration in Deforested Tropical Regions"
February 7, 2026 at 3:17 PM
🏆 The Unjournal's 2024-25 Evaluator Prizes
Recognizing 8 of 80+ expert peer reviewers for outstanding public evaluation work. $6,500 in prizes.
→ info.unjournal.org/evaluator-prizes-2024-25 🧵
February 7, 2026 at 4:30 AM
Unjournal + Metaculus are hosting a #ForecastingTournament for predictions surrounding animal welfare decisions & funding.

Sign up to be notified.

Consider donating to the prize pool.
Predict the Future of Animal Welfare — The Unjournal
The Unjournal is collaborating on Metaculus' first animal-focused forecasting tournament — harnessing collective intelligence to inform decisions that affect billions of animals.
dlvr.it
February 5, 2026 at 5:14 PM
The Unjournal only evaluates research that is publicly accessible. We also put all our evaluation packages in the public domain. And we *do* pay reviewers (aka evaluators).
I learned there is a "No free view? No review!" initiative. At nofreeviewnoreview.org you can pledge to #PeerReview only for #OpenAccess journals. Reviewing for hybrid journals seems to be discouraged.
February 3, 2026 at 2:17 PM
Designing & building a new interface for Unjournal research evaluations.

Would love your thoughts & feedback. A very preliminary version:
Evaluation Form - Unjournal
Written report
• Please aim to write a report up to the standards of a high-quality referee report for a traditional journal. Consider standard guidelines as well as The Unjournal's emphases, and remember to address any specific considerations mentioned by the evaluation manager, including in our bespoke evaluation notes.
• Please provide a concise summary of your evaluation. Otherwise, write your evaluation here, provide a link to it, or let us know you have emailed it. If you are linking or sending a file, we prefer the 'native format' (Word, LaTeX, Markdown, BibTeX, etc.) and not a PDF (but if necessary we can handle a PDF).

Quantitative metrics
• These are percentile ratings relative to a reference group: serious research in the same area that you have encountered in the last few years. A rating of 50 means the paper is at the median of this reference group; 80 means it is in the top 20%; 20 means only 20% of comparable work is worse. See the guidelines on quantitative metrics for details.
• Tip: Use the Calibrate button above to practice rating sample papers and check your calibration.

Overall quality (guidance)
• Judge the quality of the research heuristically. Consider all aspects of quality: credibility, importance to future impactful applied research, practical relevance and usefulness, importance to knowledge production, and importance to practice. Benchmark: serious research in the same area encountered in the last three years.

Methods and robustness (guidance)
• Are the methods well-justified and explained? Are they a reasonable approach to answering the question(s) in this context? Are the underlying assumptions reasonable?
• Are the results and methods likely to be robust to reasonable changes in assumptions? Does the author demonstrate robustness?
• Did the authors take steps to reduce bias from opportunistic reporting and questionable research practices?

Contribution (guidance)
• To what extent does the project contribute to the field or to practice, particularly in ways relevant to global priorities and impactful interventions? Focus on "improvements that are actually helpful" (applied stream).
• Originality and cleverness should be weighted less than at typical journals; we focus on impact, with more weight on "contribution to global priorities" than "contribution to academic field".
• Do the paper's insights inform beliefs about important parameters and intervention effectiveness? Does the project add useful value to other impactful research? Sound, well-presented null results can also be valuable.

Logic and communication (guidance)
• Are goals and questions clearly expressed? Are concepts clearly defined and referenced? Is the reasoning transparent, with explicit assumptions and all logical steps clear and correct?
• Does the writing make arguments easy to follow? Are conclusions consistent with the evidence presented? Do the authors accurately characterize the evidence and its support for the main claims?
• Are the data and analysis relevant to the arguments? Are tables, graphs, and diagrams easy to understand (no major labeling errors)?

Open, replicable science (guidance)
• Replicability, reproducibility, data integrity: Would another researcher be able to perform the same analysis and get the same results? Are methods explained clearly enough for credible replication? Is code provided? Is the data source clear and as available as reasonably possible?
• Consistency: Do numbers in the paper and code output make sense? Are they internally consistent throughout?
• Useful building blocks: Do the authors provide tools, resources, data, and outputs that might enable future work and meta-analysis?
• Reference: COS TOP Guidelines, a framework for evaluating transparency across 8 dimensions including data, code, materials, and preregistration.

Relevance to global priorities (guidance)
• Is the topic and approach useful to global priorities, cause prioritization, and high-impact interventions? Does the paper consider real-world relevance, policy, and implementation questions? Are the setup, assumptions, and focus realistic?
• Do the authors report results relevant to practitioners? Do they provide useful quantified estimates (costs, benefits) for impact quantification? Do they communicate in ways policymakers can understand without misleading oversimplification?

Journal ranking tier, merit (guidance)
• Where should this paper be published based on merit alone? Imagine a journal process that is fair, unbiased, and free of noise, where status, connections, and lobbying don't matter.
• Scale: 0 = Won't publish / little to no value; 1 = OK / somewhat valuable journal; 2 = Marginal B-journal / decent field journal; 3 = Top B-journal / strong field journal; 4 = Marginal A-journal / top field journal; 5 = A-journal / top journal. Non-integer scores encouraged (e.g., 4.6, 2.2).

Journal ranking tier, prediction (guidance)
• Where will this research actually be published? If it is already published and you know where, report the prediction you would have given absent that knowledge. Same 0-5 scale as above; non-integer scores encouraged.

Claim assessment
• This section is meant to help practitioners use this research to inform their funding, policymaking, and other decisions. It is not intended as a metric to judge the research quality per se. It is mainly relevant for empirical research; if claim assessment does not make sense for this paper, please consult the evaluation manager, or skip this section.
• I. Identify the most important and impactful factual claim this research makes (e.g., a binary claim or a point estimate or prediction).
• III. [Optional] What additional information, evidence, replication, or robustness check would make you substantially more (or less) confident in this claim?
• IV. [Optional] Identify the important implication of the above claim for funding and policy choices. To what extent do you believe this implication? How should it inform policy choices?
• + Add Another Claim

Summary and private comments
• We generally incorporate your summary into the 'abstract' of your evaluation (see examples at unjournal.pubpub.org).
• Your private comments will not be public or seen by authors. Please use this section only for comments that are personal/sensitive in nature, and place most of your evaluation in the public section.

AI/LLM disclosure
• Please disclose your use of AI/LLM tools in this evaluation. AI tools may be used for specific tasks (literature search, methodology checks, writing assistance) but not for generating overall evaluations or ratings. You must independently verify any AI-generated suggestions. See recommended tools.
• Responses to these will be public unless you mention in your response that you want us to keep them private.

Feedback (responses below will not be public or seen by authors)
• Would you be interested in discussing this with other evaluators and writing a joint report? See bit.ly/UJevalcollab. It will come with some additional compensation. If you and other evaluators are interested, we may follow up to arrange an (anonymous) discussion space or synchronous meetings.
• Tick here if you would like to be part of our 'evaluator pool' in the future, to be contacted for compensated evaluation work when research comes up in your area. To expedite this, fill out the EOI form at Join the Unjournal.
• Do you have any other suggestions or questions about this process or The Unjournal? (We will try to respond, and incorporate your suggestions.)
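For readers curious how the percentile convention above cashes out, here is a minimal illustrative sketch in Python; the function name and sample values are ours, not part of the form:

```python
# Minimal sketch of the percentile-rating convention described above:
# a rating r means r% of comparable work in the reference group is worse.
# Illustrative only; names and examples are hypothetical.

def interpret_rating(r: float) -> str:
    """Describe a 0-100 percentile rating relative to the reference group."""
    if not 0 <= r <= 100:
        raise ValueError("rating must be between 0 and 100")
    if r == 50:
        return "at the median of the reference group"
    if r > 50:
        return f"in the top {100 - r:.0f}% (better than {r:.0f}% of comparable work)"
    return f"only {r:.0f}% of comparable work is worse"

for rating in (50, 80, 20):
    print(rating, "->", interpret_rating(rating))
# 50 -> at the median of the reference group
# 80 -> in the top 20% (better than 80% of comparable work)
# 20 -> only 20% of comparable work is worse
```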
dlvr.it
January 30, 2026 at 9:30 PM
http://dlvr.it/TQf87s

Some of the many ways to follow and get involved with The Unjournal.
Follow The Unjournal — Stay Connected
Follow The Unjournal on social media, subscribe to our newsletter, and stay up to date on open evaluation of impactful research.
dlvr.it
January 30, 2026 at 1:40 AM
Evaluation Summary and Metrics: "A review of GiveWell’s discount rate"
Evaluation Summary and Metrics: "A review of GiveWell’s discount rate" for The Unjournal.
bit.ly
January 26, 2026 at 11:16 PM
With new coding tools (Claude Code, etc.) come updated and better dashboards.

See https://bit.ly/4qGmkip to track Unjournal.org's progress, compare evaluation ratings by category, and more.

Feedback welcomed!
January 24, 2026 at 4:42 PM
https://bit.ly/3NgDDI4

This is very good news for the sustainability of The Unjournal (see output at unjournal.pubpub.org) and for other initiatives in open evaluation and open research publishing. Congrats and thanks to the PubPub team on making this work!
A Path Forward for PubPub
Knowledge Futures' mission is to make information useful.
bit.ly
January 13, 2026 at 6:43 PM
Reposted by The Unjournal (Unjournal.org)
I hope 2026 is the year we stop the drain that scientific publishers impose on science.

Instead of funding science, increasing shares of shrinking research budgets are funneled to publishers in exchange for... not much.

Understand the strain: tinyurl.com/2b6wxx5r
Stop the drain: tinyurl.com/3jfscscy
December 31, 2025 at 3:25 PM
Unjournal pays "reviewers" for their work: www.unjournal.org/news/unjourn...
Average: $450/$350 base per evaluation, plus bonuses and prizes.
bit.ly
December 23, 2025 at 7:16 PM
Unjournal.org is tracking the impact of our evaluations on research & collecting evidence: https://bit.ly/3MHVRlq (WIP).

For "Macroeconomic Effects of Climate Change" by Bilal & Kanzig we see big 2024 → 2025 updates in line with evaluator suggestions.

https://bit.ly/492YFkC
December 18, 2025 at 9:20 PM
Do you work at a global-impact-focused organization that uses research? Do you find Unjournal evaluations useful (or not)? How can we improve & boost our impact?

We want your 'stakeholder' feedback:
Stakeholder Feedback: help Unjournal measure & enhance its impact — The Unjournal
We've evaluated 55+ papers/projects across global health, animal welfare, AI governance, environmental risks, and development economics, with more in the pipeline. We've built tools, databases, and partnerships. We're now focusing on understanding and enhancing our impact: "is this actually u…
bit.ly
December 15, 2025 at 5:44 PM
Reposted by The Unjournal (Unjournal.org)
FWIW at Unjournal.org we

1. Prioritize research in terms of its potential for impact (see coda.io/d/Public-Dat...)

2. Commission public, expert evaluations of this research, asking for reports & ratings across several criteria, including Relevance/Usefulness
December 9, 2025 at 4:03 PM
Evaluation Summary and Metrics: "Adjusting for Scale-Use Heterogeneity in Self-Reported Well-Being" (interim eval.)
Evaluation Summary and Metrics: "Adjusting for Scale-Use Heterogeneity in Self-Reported Well-Being" for The Unjournal.
bit.ly
December 9, 2025 at 3:26 PM
Researchers, research-users, practitioners, funders, managers:
Unjournal.org wants your feedback on our work & how we can improve & boost our impact.

Survey with lottery incentive: See
Stakeholder Feedback: help Unjournal measure & enhance its impact — The Unjournal
We've evaluated 55+ papers/projects across global health, animal welfare, AI governance, and development economics, with more in the pipeline. We've built tools, databases, and partnerships. We're now focusing on understanding and enhancing our impact: "is this actually useful, who is using it,…
bit.ly
December 4, 2025 at 1:58 PM
Reposted by The Unjournal (Unjournal.org)
This year’s @sjdm-tweets.bsky.social presidential address from @donandrewmoore.bsky.social built on last year’s address from @joesimmons.bsky.social about improving #cogSci.


Don celebrated our normalization of pre-registration and #openAccess data, materials, etc. 🙌

But he reminded us of how...
@joesimmons.bsky.social's 2024 Prez Address at #SJDM was WILD — a behind the scenes of @datacolada.bsky.social:

Takeaways:
- P-curves aren't sufficient
- Share EVERYTHING (including #Qualtrics files)
- Describe findings AS OPERATIONALIZED
- Loads of #JDM is good! (cf. headlines)
- Keep speaking up!
November 23, 2025 at 7:41 PM
Reposted by The Unjournal (Unjournal.org)
...many leaders in the field are still seeking prestige from a few for-profit, rent-seeking journals.



He then pointed us to better publication practices:
- @peercommunityin.bsky.social

- @unjournal.bsky.social

- @metaror.bsky.social

Post credits: @alexh.bsky.social, @jwastrachan.bsky.social
November 23, 2025 at 7:41 PM