Olivier Driessens
@odriessens.bsky.social
250 followers 300 following 40 posts
Media Sociologist; Associate Professor in Media and Communication, Centre for Tracking and Society, University of Copenhagen. Media, tech & social change, continuity, digital futures, sustainability.
Reposted by Olivier Driessens
louisoncf.bsky.social
#Denmark politicians wage a #culture #war on #academia —with @roskildeuni.bsky.social as their go-to target.

These attacks are assaults on #freedom, #rights and #democracy. We spoke out:

@berlingske.bsky.social : www.berlingske.dk/synspunkter/...

@politiken.dk : politiken.dk/debat/debati...
Reposted by Olivier Driessens
meredithmeredith.bsky.social
📣 Germany's close to reversing its opposition to mass surveillance & private message scanning, & backing the Chat Control bill. This could end private comms, and Signal, in the EU.

Time's short and they're counting on obscurity: please let German politicians know how horrifying their reversal would be.
signal.org
We are alarmed by reports that Germany is on the verge of a catastrophic about-face, reversing its longstanding and principled opposition to the EU’s Chat Control proposal which, if passed, could spell the end of the right to privacy in Europe. signal.org/blog/pdfs/ge...
Reposted by Olivier Driessens
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
[Images: cover page, Table 1, and Table 2 of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1]
Reposted by Olivier Driessens
justinhendrix.bsky.social
The UN Independent International Scientific Panel on AI must include social, environmental, and public perspectives in its work and membership, and public voices must have a formal, ongoing role in the Global Dialogue on AI Governance, write Tim Davies and Anna Colom.
The UN’s Global Dialogue on AI Must Give Citizens a Real Seat at the Table | TechPolicy.Press
Learning from decades of global convening on climate change, AI Governance must place local lived experiences at its heart, write Tim Davies and Anna Colom.
www.techpolicy.press
Reposted by Olivier Driessens
aaroncantu.bsky.social
NEW INVESTIGATION

California predicts data centers will consume as much power as adding another LA to grid by 2030

A utility anticipates additional emissions equal to 21 gas plants

Some environmentalists see reducing gas power as “a lot less likely” due to AI
capitalandmain.com/the-insatiab...
The Insatiable Energy Demands of Data Centers Could Increase Fossil Fuel Emissions in California
By 2030, the centers could consume the equivalent of adding another city the size of L.A. to the state’s power grid.
capitalandmain.com
Reposted by Olivier Driessens
tobiasdienlin.com
Have added Global Perspectives in Communication (@gpccomm.bsky.social) to @moritzbuchi.bsky.social's and my list of open access journals in the field of Communication. #openscience #opencomm
Open Media and Communication Research
docs.google.com
odriessens.bsky.social
Georges-Louis Bouchez, pro-Israel leader of the French-speaking liberals (MR) in Belgium, also recently suggested banning 'antifa'. His party is part of the coalition of the federal and regional governments.
Reposted by Olivier Driessens
jerthorp.bsky.social
This video is really important.

www.nytimes.com/2025/09/26/o...

It connects the dots between A.I. and climate disasters and is just a perfectly crafted piece of investigation and exposition.

@katecrawford.bsky.social
Opinion | A.I.’s Environmental Impact Will Threaten Its Own Supply Chain
www.nytimes.com
Reposted by Olivier Driessens
drbeef.bsky.social
I am *really* hoping some of my fav critical AI people will contribute to this AJPH call for contributions on "Responsible Artificial Intelligence Use for Advancing Public Health."

Please, please, please, someone write a paper that says “there is no responsible use” and here’s why….!!?! Please?
SUBMIT HERE: https://ajph.aphapublications.org/pb-assets/Supporting%20Documents/AJPH%20CFP%20AI%20Use_Full_Final-1758036159977.pdf

Submission Due Date: January 2nd, 2026.

The American Journal of Public Health (AJPH) issues this Call for Papers to invite AI researchers, public health practitioners, ethicists, and policymakers to articulate practical barriers and transformative possibilities pertaining to the use of AI technologies in public health. Papers that discuss research experiences, dissect operability and implementation challenges, and explore the ethical use of AI are desired. 

Our central question is:

How do we efficiently, effectively, and ethically integrate AI into public health practice?
Reposted by Olivier Driessens
triofrancos.bsky.social
Today is publication day!

EXTRACTION: The Frontiers of Green Capitalism is officially out with @wwnorton.com - find it at a bookstore near you or order online💚📚 wwnorton.com/books/978132...
Reposted by Olivier Driessens
bigdatasoc.bsky.social
📊 New in Big Data & Society!

Lindsay Weinberg examines how Microsoft’s Power BI is reshaping Danish higher ed governance—turning students into data points, linking programs to job metrics, and pushing new forms of accountability.

🔗 Read here: journals.sagepub.com/doi/10.1177/...
Reposted by Olivier Driessens
benpatrickwill.bsky.social
Claims of the novelty of AI and its potential for innovation in education always make me wince a bit because really it continues a bunch of long-running tendencies in the sector. It’s an *intensifier* rather than an innovation. Some examples… www.forbes.com/councils/for...
The Impact Of AI Tools On The Next Decade Of Education Innovation
Education technology is more of a commitment to shaping a future where every learner has the tools to succeed.
www.forbes.com
Reposted by Olivier Driessens
casmudde.bsky.social
The far-right majority in the Dutch parliament (BBB-FvD-JA21-PVV-SGP-VVD) has just designated “Antifa” a terrorist organization.

This is a dark day for Dutch democracy and the final nail in the coffin of the VVD as a serious liberal democratic party.
Parliamentary majority considers Antifa a terrorist organization
A majority in the Tweede Kamer wants the Netherlands, following the United States, to designate the far-left movement Antifa as a terrorist organization. A motion to that effect by Lidewij de V...
www.rd.nl
Reposted by Olivier Driessens
elinorcarmi.bsky.social
LinkedIn is changing its terms of use and will automatically opt you in to use your data for training their AI models...

Make sure you opt out in the settings..

I guess it's nice of them to warn people? 🥲

Great to see all those privacy by design regulations doing their magic 🫠
[Screenshot: LinkedIn settings, use of data for AI]
Reposted by Olivier Driessens
revolvingdoordc.bsky.social
Andreessen Horowitz, which will be one of three firms to lead the acquisition of TikTok, is headed by Marc Andreessen, a Silicon Valley tech titan who considered himself to be "an unpaid intern" of Elon Musk's DOGE. But he's not the only major Trump ally involved with this deal
wsj.com
TikTok’s U.S. business would be controlled by an investor consortium including Oracle, Silver Lake and Andreessen Horowitz under a framework the U.S. and China are finalizing.
Details Emerge on U.S.-China TikTok Deal
Oracle, Silver Lake and Andreessen Horowitz are part of an investor consortium that would control an 80% stake.
on.wsj.com
Reposted by Olivier Driessens
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or
even imposed on users — in past centuries with tobacco and combustion engines, and in
the 21st with social media. For these collective blunders, we now regret our involvement or
apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we
are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not
considered a valid position to reject AI technologies in our teaching and research. This
is why in June 2025, we co-authored an Open Letter calling on our employers to reverse
and rethink their stance on uncritically adopting AI technologies. In this position piece,
we expound on why universities must take their role seriously to a) counter the technology
industry’s marketing, hype, and harm; and to b) safeguard higher education, critical
thinking, expertise, academic freedom, and scientific integrity. We include pointers to
relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles