Jose M. Casas 'two-' 🇦🇷 🏴‍☠️
@josemariacasas.bsky.social
29 followers 82 following 51 posts
Psychology, politics, and a bit more...
Reposted by Jose M. Casas 'two-' 🇦🇷 🏴‍☠️
olivia.science
important on LLMs for academics:

1️⃣ LLMs are usefully seen as lossy content-addressable systems

2️⃣ we can't automatically detect plagiarism

3️⃣ LLMs automate plagiarism & paper mills

4️⃣ we must protect literature from pollution

5️⃣ LLM use is a conflict of interest (CoI)

6️⃣ prompts do not cause output in the authorial sense
5 Ghostwriter in the Machine
A unique selling point of these systems is conversing and writing in a human-like way. This is eminently understandable, although wrong-headed, when one realises these are systems that essentially function as lossy content-addressable memory: when input is given, the output generated by the model is text that stochastically matches the input text. The reason the output text looks novel is that, by design, the AI product performs an automated version of what is known as mosaic or patchwork plagiarism (Baždarić, 2013): due to the nature of input masking and next-token prediction, the output essentially uses similar words in similar orders to what it has been exposed to.
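A minimal sketch (not from the paper) of the mechanism described above, with a toy bigram table standing in for an LLM's next-token predictor; the "model" can only recombine fragments of what it has stored, so its output is stitched from the training text in similar orders:

```python
# Toy bigram "content-addressable memory": each word keys the words
# observed after it, and generation stochastically replays those pairs.
import random
from collections import defaultdict

corpus = ("the model memorises the training text and "
          "the model emits the training text in new orders").split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(seed, n=8):
    out = [seed]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:                    # lossy: unseen contexts dead-end
            break
        out.append(random.choice(followers)) # stochastic match to stored text
    return " ".join(out)

print(generate("the"))  # e.g. "the training text and the model emits the"
```

Scaled up to billions of parameters and a web-sized corpus, the same principle produces text that reads as novel while remaining a stochastic patchwork of its sources.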
This mosaic quality makes the automated flagging of plagiarism unlikely, which is also true when students or colleagues perform this kind of copy-paste-then-thesaurus trick, and true when so-called AI plagiarism detectors falsely claim to detect AI-produced text (Edwards, 2023a). This aspect of LLM-based AI products can be seen as an automation of plagiarism, and especially of the research paper mill (Guest, 2025; Guest, Suarez, et al., 2025; van Rooij, 2022): the "churn[ing] out [of] fake or poor-quality journal papers" (Sanderson, 2024; Committee on Publication Ethics, …).

[…]

Either way, even if the courts decide in favour of the companies, we should not allow these companies with vested interests to write our papers (Fisher et al., 2025), or to filter what we include in our papers. We do not operate on legal precedents alone, but also on our own ethical values and scientific integrity codes (ALLEA, 2023; KNAW et al., 2018), and we have a direct duty, as with previous crises and in general, to protect the literature from pollution. In other words, the same issues as in the previous sections play out here: essentially, every paper produced using chatbot output must now declare a conflict of interest, since the output text can be biased in subtle or direct ways by the company that owns the bot (see Table 2).
Seen in the right light, with AI products understood as content-addressable systems, we see that framing the user, the academic in this case, as the creator of the bot's output is misplaced. The input does not cause the output in an authorial sense, much as input to a library search engine does not cause relevant articles and books to be written (Guest, 2025). The respective authors wrote those, not the search query!
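To make the analogy concrete, a hypothetical sketch (names and entries are illustrative, not from any real library): a query selects among documents that already exist and were written by their authors; it never authors anything itself.

```python
# Illustrative only: retrieval selects among pre-existing,
# already-authored texts. The query causes the match, not the writing.
corpus = {
    "Author A (2013)": "a study of mosaic and patchwork plagiarism",
    "Author B (2025)": "large language models as lossy memory",
}

def search(query):
    """Return the references whose description mentions the query term."""
    return [ref for ref, text in corpus.items() if query in text]

print(search("plagiarism"))  # ['Author A (2013)'] -- found by the query,
                             # written by its author
```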
josemariacasas.bsky.social
Besides making the impact on mental health visible, they also lay the groundwork for talking about institutional racism.

Both studies are openly available on ResearchGate:
josemariacasas.bsky.social
These reports were co-authored by Gabriela Laura Frias Goytia, Jose Maria Casas, and Lucía Mañes Collazos. They aim to explain the reality faced by dentistry and psychology professionals seeking recognition of their foreign university degrees in Spain.
josemariacasas.bsky.social
#Murcia
XI CONGRESO MIGRACIONES - 2025
Mobility and roots in a contested world.

Two of the reports from the Centro de Estudios sobre la Migración, la Discriminación y el Racismo Institucional (Centre for Studies on Migration, Discrimination, and Institutional Racism) will be presented as conference papers on Thursday 16 October at #CongresoMigraciones2025 (Murcia).
josemariacasas.bsky.social
UNI_ASK_U: A Weibull-Link Response Model for measuring Unipolar-Skewed constructs with Continuous Responses

psico.fcep.urv.cat/utilitats/UN...
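For readers unfamiliar with the terminology, a minimal sketch of what a Weibull link could look like; this assumes the standard two-parameter Weibull CDF, not the actual UNI_ASK_U implementation, and the parameter names are mine.

```python
# Assumed form only: the two-parameter Weibull CDF used as a link from
# a non-negative latent trait to a bounded expected response.
import math

def weibull_link(theta, shape=1.5, scale=1.0):
    """Map trait level theta >= 0 into [0, 1) via 1 - exp(-(theta/scale)**shape)."""
    if theta <= 0:
        return 0.0
    return 1.0 - math.exp(-((theta / scale) ** shape))

# Unipolar-skewed constructs concentrate near zero; the link spreads
# low trait levels apart while saturating for high ones.
for theta in (0.1, 0.5, 1.0, 2.0):
    print(f"theta={theta}: {weibull_link(theta):.3f}")
```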
josemariacasas.bsky.social
MULTIPOL-C: An R program for fitting unipolar log-logistic IRT multidimensional models to continuous item responses

psico.fcep.urv.cat/utilitats/MU...
josemariacasas.bsky.social
"Critical Artificial Intelligence Literacy for Psychologists"
by Olivia Guest and Iris van Rooij

PsyArXiv Preprints - doi.org/10.31234/osf...
josemariacasas.bsky.social
In November I will be speaking about the INCA questionnaire and callous-unemotional (CU) traits at a forum organised by the UAM.
josemariacasas.bsky.social
For anyone in Tarragona who feels the flotilla is far away, just one detail: one of the people seized in Israel's act of piracy in international waters is teaching and research staff at @urv.cat. It is not far away; it lands right here, in the very heart of the city.
Reposted by Jose M. Casas 'two-' 🇦🇷 🏴‍☠️
echo-pbreyer.digitalcourage.social.ap.brid.gy
🇪🇺#ChatControl 👁️ threatens secure messengers: Signal 💬 & WhatsApp could be shut down in the EU 🇪🇺 & you'll lose all your contacts! 🛑

🔗 https://www.tagesschau.de/wirtschaft/digitales/signal-app-rueckzug-europa-100.html (Article in German)

Say NO to #chatcontrol and call now to protest: ✊
👉 […]
Original post on digitalcourage.social
Reposted by Jose M. Casas 'two-' 🇦🇷 🏴‍☠️
desenvolurv.bsky.social
📢 New publication from the research group ❗️

In this study we analyse the functioning of the instrument "The Inventory of Callous-Unemotional Traits and Antisocial Behaviour" (INCA) in a justice-system population.

Read the article at the link: www.tandfonline.com/doi/full/10....
Reposted by Jose M. Casas 'two-' 🇦🇷 🏴‍☠️
luissmi.bsky.social
Hello! From the URV, in Tarragona, we are running a study on Positive Trans Identity 🏳️‍⚧️. If you would like to take part and learn a bit more about our research, visit, share, or answer the questionnaire (if you are trans) at identidadpositivaurv.wixsite.com/identidadpos...
#transgenero #lgtbi #transespana
Reposted by Jose M. Casas 'two-' 🇦🇷 🏴‍☠️
irisvanrooij.bsky.social
There is absolutely no good faith reason to use the term “AI” for any technology one is selling.

It serves only to dazzle people into thinking the technology has capabilities that it doesn't.

If one wants a technology to be trustworthy, just use a transparent, informative term without hype.
Reposted by Jose M. Casas 'two-' 🇦🇷 🏴‍☠️
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g., generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms is orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
josemariacasas.bsky.social
32k... my payslip would still be 14k short of that...
fjiprecarios.bsky.social
👩‍🎓 Predocs!

• A decent salary: >€32,000/year (100% M3)
• Recognition of three-year seniority increments ("trienios") and leave
• A clear calendar of funding calls
• More contracts and better funding
• Protection against abuse and harassment
• Funding for conferences
#PactoXLaCiencia