tonyRH.bsky.social
@tonyrh.bsky.social
430 followers 110 following 23 posts
Reposted by tonyRH.bsky.social
tier2-project.eu
📅On 27th November 2024, TIER2 and Taylor & Francis Group hosted a workshop at the 19th Munin Conference on Scholarly Publishing in Tromsø, Norway, addressing research reproducibility. A report on the workshop is now available on TIER2's website.💡
🔗Learn more here: tier2-project.eu/news/tier2-a...
tonyrh.bsky.social
Sadly spent most of this very sunny day writing a narrative CV for a grant application. Jeez, that's a lot of extra work that I'm not sure anyone is likely to read too deeply.

What do others think of them?
Reposted by tonyRH.bsky.social
tier2-project.eu
A new TIER2 preprint, authored by Serge Horbach, Nicki Lisa Cole, Simone Kopeinik, Barbara Leitner, Tony Ross-Hellauer and Joeri Tijdink, explores the barriers and enablers for reproducibility in research.
🔗Learn more: tier2-project.eu/news/new-tie...
Reposted by tonyRH.bsky.social
mhmdhsini.bsky.social
"Conflation of synthetic #GenAI and real data could corrupt the research record; degrade the #quality and #reproducibility of scientific data and analytical methods; and, ironically, sabotage the training of AI models." www.pnas.org/doi/10.1073/... #synthetic_data #research_integrity
GenAI synthetic data create ethical challenges for scientists. Here’s how to address them. | PNAS
Reposted by tonyRH.bsky.social
pittso.bsky.social
Was reminded of this ultimate Venn today.
tonyrh.bsky.social
New paper by Thomas Klebel "Investigating patterns of knowledge production in research on three UN sustainable development goals", just published in Online Information Review! doi.org/10.1108/OIR-...
Reposted by tonyRH.bsky.social
floriannaudet.bsky.social
While I see (again) many new people here, here is the meta-research and open science starter pack 2.
tonyrh.bsky.social
New Preprint! "Reproducibility and replicability of qualitative research: An integrative review of concepts, barriers and enablers" - osf.io/preprints/me...

A nice output from our TIER2 project, led by Nicki Lisa Cole :)
tonyrh.bsky.social
5/ 🌍 A Call for Responsible Use
To ensure GenAI aligns with Open Science values:
- Researchers must integrate GenAI with care and scrutiny.
- Developers need to create transparent, unbiased tools.
- Policymakers must balance innovation and risk.
tonyrh.bsky.social
4/ 🔍 The Risks
Despite the potential, there are challenges:
❌ Opaque “black box” models undermine transparency
❌ Bias in training data risks reinforcing inequalities
❌ High computational demands raise sustainability concerns
tonyrh.bsky.social
3/ ✨ The Opportunity
GenAI can:
✅ Increase efficiency through enhanced documentation
✅ Simplify complex science into accessible language
✅ Break language barriers through translation
✅ Enable public participation in research
✅ Promote inclusivity, accessibility, and understanding
tonyrh.bsky.social
2/ TL;DR: Mohammad Hosseini, Serge Horbach, @kristiholmes.bsky.social and I explore GenAI's enormous potential to enhance accessibility and efficiency in science. But we emphasise that to do so, GenAI must embody Open Science principles of openness, fairness, and transparency.
tonyrh.bsky.social
1/ 🚨 NEW PAPER! “Open Science at the Generative AI Turn”
In a new study just published in Quantitative Science Studies, we explore how GenAI both enables and challenges Open Science, and why GenAI will benefit from adopting Open Science values. 🧵
doi.org/10.1162/qss_...
#OpenScience #AI #GenAI
Screenshot of paper "Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities" by Mohammad Hosseini, Serge P. J. M. Horbach, Kristi Holmes and Tony Ross-Hellauer 

Quantitative Science Studies 1–24.
https://doi.org/10.1162/qss_a_00337

Abstract
Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools and platforms for collaboration and sharing results. Due to this relationship, the characteristics of the employed technologies directly impact OS objectives. Generative Artificial Intelligence (GenAI) is increasingly used by researchers for tasks such as text refining, code generation/editing, reviewing literature, and data curation/analysis. Nevertheless, concerns about openness, transparency, and bias suggest that GenAI may benefit from greater engagement with OS. GenAI promises substantial efficiency gains but is currently fraught with limitations that could negatively impact core OS values, such as fairness, transparency, and integrity, and may harm various social actors. In this paper, we explore the possible positive and negative impacts of GenAI on OS. We use the taxonomy within the UNESCO Recommendation on Open Science to systematically explore the intersection of GenAI and OS. We conclude that using GenAI could advance key OS objectives by broadening meaningful access to knowledge, enabling efficient use of infrastructure, improving engagement of societal actors, and enhancing dialogue among knowledge systems. However, due to GenAI’s limitations, it could also compromise the integrity, equity, reproducibility, and reliability of research. Hence, sufficient checks, validation, and critical assessments are essential when incorporating GenAI into research workflows.
tonyrh.bsky.social
It's presumptuous, but I don't mind that so much. What I really dislike is when I need to use those details and validate the new account just in order to decline.
tonyrh.bsky.social
8/ Read the full paper here for insights on how to reshape research evaluation systems for fairness and effectiveness: doi.org/10.1093/rese...
tonyrh.bsky.social
7/ We close with recommendations: clarify core purposes of research assessment, use shared frameworks, train assessors on bias, reduce over-frequent assessments, and move beyond binary thinking on qualitative/quantitative methods.
tonyrh.bsky.social
6/ We examine the “performativity of assessment criteria,” revealing a tension between rigid/flexible criteria and how transparently they are communicated. Transparent, equitable frameworks are vital to align formal criteria with the realities of research evaluation.
tonyrh.bsky.social
5/ Respondents noted that beyond metrics, informal factors—social dynamics, politics, and demographics—play key roles in assessment outcomes. These hidden criteria emerge in opaque processes, granting assessors significant flexibility.
tonyrh.bsky.social
4/ Through qualitative analysis of free-text responses from 121 international researchers, we highlight a major gap between formal evaluation criteria and their practical application.
tonyrh.bsky.social
3/ How do current systems enable “hidden factors” like cronyism or evaluator biases, and how might these change under proposed reforms? Our study examines researchers' perceptions of social and political influences on assessment processes.
tonyrh.bsky.social
2/ Reform of research assessment, especially to avoid over-quantification and empower qualitative assessment, is a hot topic. Change is coming. But how do we balance broader criteria that value activities beyond publishing/funding, reliance on peer review, and merit-based rewards?
tonyrh.bsky.social
New Paper! “Understanding the social and political dimensions of research(er) assessment: evaluative flexibility and hidden criteria in promotion processes at research institutes”, just published in Research Evaluation by me, @naubertbonn.bsky.social & Serge Horbach. Thread below!