Marek McGann
@marekmcgann.sciences.social.ap.brid.gy
91 followers 4 following 22 posts
Cognitive scientist, teacher, nerd. I do theoretical and philosophical work in enactive, ecological, and embodied cognitive science. I am also interested in scientific practice in psychology (and it's the Department of Psychology at MIC, Limerick that pa…
Reposted by Marek McGann
marekmcgann.sciences.social.ap.brid.gy
Five principles to protect the human knowledge ecosystem 5/5
Responsibility precludes scientists from using AI products whose use is irresponsible, e.g. harmful to people, animals, and the environment, or otherwise in violation of legal guidelines (e.g. copyright, data privacy, labour laws, Butterick 2025; Cole 2025; Rijo 2025; Tafani 2024a). Minimizing harm is vital for both engineers (ACM Code 2018 Task Force 2018) and theoreticians (Guest 2024).
marekmcgann.sciences.social.ap.brid.gy
Five principles to protect the human knowledge ecosystem 4/5
Independence means that scientists ensure that their research is unbiased by AI companies' agendas, and that any potential conflicts of interest are declared in publications and other public communications (this also follows from Honesty and Transparency; cf. Mohamed Abdalla and Moustafa Abdalla 2021; Atkin 2025; Forbes and Guest 2025; Knoester et al. 2025).
marekmcgann.sciences.social.ap.brid.gy
Five principles to protect the human knowledge ecosystem 3/5
Transparency requires that the AI technologies are open source and computationally reproducible.
Here we must recall the technology industry’s obfuscatory tactics: “the name of the current producer of ChatGPT. ‘OpenAI’ sounds like it is engaged in open science, but as we have now seen, ‘open’ never really means what you think it does.” (Mirowski 2023, p. 738; see also Dingemanse 2025; Hao 2025; Jackson 2024; Liesenfeld and Dingemanse 2024; Liesenfeld, Lopez, et al. 2023;
Maffulli 2023; Maris 2025; Nolan 2025; Solaiman 2023; Thorne 2009; Widder et al. 2024)
marekmcgann.sciences.social.ap.brid.gy
Five principles to protect the human knowledge ecosystem. 2/5
Scrupulousness demands, among other things, that scientists only use AI products whose functionality is well-specified and validated for its specific scientific usage (cf. Kwisthout 2024; Kwisthout and Renooij 2025). This includes terminological precision about what formalisms, models, and/or technologies are used (recall Figure 1) and rigorous argumentation to motivate why these technologies are appropriate for the scientific purposes at hand.
marekmcgann.sciences.social.ap.brid.gy
Five principles to protect the ecosystem of human knowledge. 1/5

@tomstafford has argued that the infoglut we live in is like when densely populated cities made plagues more likely. We need to normalise new hygienic practices. @olivia et al set some out.
Honesty implies that one does not secretly use AI technologies without disclosure, and that one does not make unfounded claims about the presumed capabilities of AI technologies (this also follows from Responsibility; see the example from MIT Economics 2025, where perhaps too little was done, too late).
marekmcgann.sciences.social.ap.brid.gy
"But AI tools will be just like calculators..."
We ban calculators when teaching children addition and other basic arithmetic operations for a reason (cf. Lodge et al. 2023). Otherwise, they would not learn these arithmetic operations, and calculators do not help them understand the basic mathematical rules. For the same reasons, we also do not allow the use of spellcheck software for children learning to spell, or keyboard typing when learning to write by hand.
marekmcgann.sciences.social.ap.brid.gy
Students aren't generally looking to cheat, and most think LLM use should not be allowed in college education. Let's not allow salespeople to speak for them.
Technology companies are not shy of falsely claiming that students are lazy or lack writing skills. Such a mantra serves only to sell products — or cover up and excuse overworking them by our colleagues — with no reflection on reality. We condemn those claims and reassert students’ agency vis-à-vis corporate control.
marekmcgann.sciences.social.ap.brid.gy
Why higher ed teaching is hard, and always will be. But we don't do it because it is easy, and we are trusted to do it well because we have hard-earned expertise in our fields. Let's not undervalue that, hey?
Students have always cheated. Bending and breaking the rules is human nature. And by the same token, educators are not police. We are not here to obsessively surveil our students — education is based on mutual trust. Therefore, our duty is to build mutually shared values with our students and colleagues.
marekmcgann.sciences.social.ap.brid.gy
It is vital that we don't overestimate our understanding of human cognition.

Yes, there is reason to believe we have a better grasp of some things than when AI hype first appeared in the 1950s, but that better understanding is far from complete, and it almost certainly isn't based on simple computation.
Although we do not fully understand human thinking, this does not license uncritically attributing thinking to whatever machine or technology through anthropomorphisation. Such arguments from ignorance lack all scientific rigour. The only argument from ignorance that science permits is caution, more research, and care as the appropriate actions when something is truly unknown.
marekmcgann.sciences.social.ap.brid.gy
The more I learn about history, the more I understand how important learning about history is. This is doubly important in my own professional field.

@olivia and colleagues, driving home the point against the inevitability of AI, remind us of this.
When we engage with the public, we notice people think that AI, as a field or a technology, appeared on the scene in the last three years. And they experience confusion and even dissonance when they discover the field and the technologies have existed for decades, if not centuries or even millennia (Bloomfield 1987; Boden 2006; Bogost 2025; Guest 2025; Hamilton 1998; Mayor 2018). Such ahistoricism facilitates “the AI-hype cycles that have long been fuelled by extravagant claims that substitute fiction for science.” (Heffernan 2025, n.p.; Duarte et al. 2024).
Reposted by Marek McGann
apostateenglishman.mastodon.world.ap.brid.gy
Perhaps it's time to tap the chart again. Left-handed people were once seen as sinister (the word sinister, from Latin, originally meant "left" or "on the left side") so kids were forced to be right-handed. However, with advances in science it became clear […]

[Original post on mastodon.world]
Chart showing the prevalence of left-handedness since 1880. It skyrockets from 3% in the early 20th century to 12% in the 1960s, when it flatlines and remains constant to the present.
Reposted by Marek McGann
satrevik.fediscience.org.ap.brid.gy
I'm putting together a short bibliography of recent papers about #bigteamscience, with emphasis on challenges and solutions for large social science projects. Am I missing any important papers? I'm in particular looking for any tools or checklists to use when […]

[Original post on fediscience.org]
A screen shot of a bullet list containing the following items: 
Teasley, S., & Wolinsky, S. (2001). Scientific collaborations at a distance. Science, 292(5525), 2254-2255.
Bammer, G. (2008). Enhancing research collaborations: Three key management challenges. Research policy, 37(5), 875-887.
Vogel, A. L., Hall, K. L., Fiore, S. M., Klein, J. T., Bennett, L. M., Gadlin, H., ... & Falk-Krzesinski, H. J. (2013). The team science toolkit: enhancing research collaboration through online knowledge sharing. American journal of preventive medicine, 45(6), 787-789.
Yao, B. (2021). International research collaboration: Challenges and opportunities. Journal of Diagnostic Medical Sonography, 37(2), 107-108.
Coles, N. A., Hamlin, J. K., Sullivan, L. L., Parker, T. H., & Altschul, D. (2022). Build up big-team science. Nature, 601(7894), 505-507.
Baumgartner, H. A., Alessandroni, N., Byers-Heinlein, K., Frank, M. C., Hamlin, J. K., Soderstrom, M., ... & Coles, N. A. (2023). How to build up big team science: A practical guide for large-scale collaborations. Royal Society Open Science, 10(6), 230235.
Forscher, P. S., Wagenmakers, E. J., Coles, N. A., Silan, M. A., Dutra, N., Basnight-Brown, D., & IJzerman, H. (2023). The benefits, barriers, and risks of bi
Reposted by Marek McGann
nazgul.infosec.exchange.ap.brid.gy
❝ The crazy thing is not that garlic isn’t grown from seeds; it’s that, for the most part, it can’t be. Ever since people began cultivating garlic — six millennia ago by some estimates, 10 by others — it’s primarily been done through asexual reproduction. In all those thousands of years […]
Original post on infosec.exchange
Reposted by Marek McGann
csmarcum.sciences.social.ap.brid.gy
One of the reasons I have been a strong advocate for the use of persistent digital identifiers, especially ORCIDs, is because they facilitate trust in authorship while simultaneously supporting the personal life decisions of authors to change their name. #openscience @eLife […]
Original post on sciences.social
Reposted by Marek McGann
tomstafford.mastodon.online.ap.brid.gy
New off-topic newsletter from me!

https://tomstafford.substack.com/p/annals-of-scholarly-profanity

"I have collected three triumphant examples of scholarly profanity, pieces of writing with a swear word in the title which fill an important gap in our conceptual universe, not despite but […]
Original post on mastodon.online
Reposted by Marek McGann
skuebeck.graz.social.ap.brid.gy
Microsoft marketing: “Your data stays in Europe.”

Microsoft’s Legal Director (under oath, in French Parliament): “No, I cannot guarantee that.”

Still think Microsoft Teams is a sovereign solution?

Credit @ponceto91 for the meme […]

[Original post on graz.social]
A boat labeled FRENCH GOVERNMENT with a fisherman about to rescue a girl (labeled MICROSOFT). The part of the girl that's under water shows that she is actually a sea monster labeled UNITED STATES.