AlgorithmWatch
@algorithmwatch.org
1.4K followers 30 following 250 posts
We ensure that the use of algorithms & AI benefits the many, not just the few! 🇪🇺 We’re the EU/German part 🇨🇭 For Switzerland: @algorithmwatchch.bsky.social 🌐 https://algorithmwatch.org/
Posts Media Videos Starter Packs
Pinned
algorithmwatch.org
Ob du zu einem Vorstellungsgespräch eingeladen wirst, einen Kredit oder die neue Wohnung bekommst – immer öfter entscheiden Algorithmen mit. Doch sie sind nicht neutral.

Hast du #AlgorithmischeDiskriminierung erfahren? Dann melde deinen Fall an #AlgorithmWatch: algorithmwatch.org/de/algorithm...
algorithmwatch.org
👉 Über unser Meldeformular sammeln wir zudem Fälle algorithmischer Diskriminierung. Auch wenn Unsicherheit besteht, ob ein System wirklich diskriminiert hat: algorithmwatch.org/de/algorithm...
algorithmwatch.org
Generative KI-Modelle wie GPT können bei der Bewertung von Bewerbungen diskriminierende Muster reproduzieren. Namen, die Assoziationen mit einer „anderen“ ethnischen Herkunft wecken, haben systematisch schlechter abgeschnitten: www.bloomberg.com/graphics/202...
Reposted by AlgorithmWatch
olivermmarsh.bsky.social
V proud to be pushing a new approach for DSA Data Access - a "mass data access" request for highly viewed content - with the fantastic @clairepershan.bsky.social @lkseiling.bsky.social and @louisbarclay.bsky.social for @algorithmwatch.org .

More here & 🧵: www.mozillafoundation.org/en/what-we-d...
algorithmwatch.org
#Discrimination through #AI in recruitment is a lose-lose scenario that cannot be solved on a technical level alone. In our toolkit for policymakers, we show what can be done to reduce #AlgorithmicDiscrimination in #AlgorithmicHiring.
➡️ findhr.eu/toolkit/poli...
algorithmwatch.org
“Hey AI, send Jane Doe a Signal message!” Not a particularly good idea – and part of a paradigm shift currently taking place through Agentic AI, as @meredithmeredith.bsky.social told us.
algorithmwatch.org
Software developers have the power to reduce #AlgorithmicBiases in their systems. In our Toolkit for software developers, we explain why #AlgorithmicDiscrimination is problematic, what EU law says about #AlgorithmicHiring, and what this means for developers: findhr.eu/toolkit/deve...
algorithmwatch.org
If Python isn't a snake to you and Java isn't an Indonesian island, then pay attention!

The use of AI-based software for personnel recruitment carries the risk of #discrimination. As part of #FINDHR, we have developed concrete recommendations for developers: findhr.eu/toolkit/deve...
algorithmwatch.org
In our latest #AutomatedSociety newsletter issue, we share what we learned after reading all 1,320 pages, and what it reveals not about TikTok, but about European politicians. Subscribe now: automatedsociety.algorithmwatch.org/#/en
algorithmwatch.org
Last month, a group of French MPs published a lengthy report on #TikTok’s impact on the mental health of the country’s youth. Despite the wealth of material collected, the report bends facts to declare TikTok a “poison” that is sole responsible for driving teenage girls to self-harm and depression.
algorithmwatch.org
The increasing use of AI-based recruitment systems, #AlgorithmicHiring, promises to save time, but also carries risks. As part of the EU-Horizon Project #FINDHR, we’ve developed practical recommendations to help tackle #AlgorithmicBias in hiring.

findhr.eu/toolkit/hr-p...
algorithmwatch.org
Do you work in HR and want to prevent #discrimination through #AI hiring tools? Your contribution is key!

Our FINDHR-Toolkit for HR professionals offers insights on discrimination risks, legal frameworks, and what action you can take to counter #AlgorithmicDiscrimination: findhr.eu/toolkit/hr-p...
algorithmwatch.org
Don’t get distracted by ‘Artificial General Intelligence’ (AGI). Many tech CEOs and scientists praise AI as the savior of humanity, while others see it as an existential threat. We explain why both fail to address the real questions of responsibility: algorithmwatch.org/en/agi-and-l...
Focus Attention on Accountability for AI − not on AGI and Longtermist Abstractions - AlgorithmWatch
Many tech CEOs and scientists praise AI as the savior of humanity, while others see it as an existential threat. We explain why both fail to address the real questions of responsibility.
algorithmwatch.org
Reposted by AlgorithmWatch
algorithmwatchch.bsky.social
How can the risk of discrimination in #AlgorithmicHiring be reduced❓ Within the Horizon Europe project #FINDHR we have developed guidelines, methods and technical solutions for the responsible development and use of algorithmic recruiting systems.
👉 algorithmwatch.ch/en/findhr/#g...
Grafic with Text: Software Development Guide, Impact Assessment & Auditing Framework, Equality Monitoring Protocol
algorithmwatch.org
Melde dich jetzt schnell für das Webinar an – die Plätze sind begrenzt! algorithmwatch.org/de/webinar-k...
algorithmwatch.org
Sicherheitspaket, biometrische Fernidentifizierung und Massenüberwachung – du hast das alles schon mal gehört, aber nur eine vage Vorstellung davon, was es bedeutet? Wir möchten dir erklären, wie wichtig das Thema für unsere Gesellschaft ist.

Wann: 09.10.2025 | 10:00 – 11:00 Uhr
Wo: Online
algorithmwatch.org
@meredithmeredith.bsky.social, @mariaexner.bsky.social (Publix) and @spielkamp.bsky.social (@algorithmwatch.org) will discuss what we must do to bring technology in line with human needs – particularly in protecting the privacy of individuals from powerful platforms and operating system providers.
algorithmwatch.org
TOMORROW 6:30 PM (CET) exclusive talk with @meredithmeredith.bsky.social, President of the Signal Foundation, on reclaiming privacy in the age of AI.
Newsletter subscribers can join the livestream: algorithmwatch.org/en/exclusive...
algorithmwatch.org
Du möchtest Forschung und Kampagnen wie diese unterstützen, die Veränderungen herbeiführen sollen? Dann werde jetzt offiziell Friend of AlgorithmWatch: algorithmwatch.org/de/foerdermi... Deine Unterstützung hilft dabei, gierigen Tech-Unternehmen und deren Plattformen zur Verantwortung zu ziehen.
algorithmwatch.org
Hilf uns, sie zu finden. Wenn du Apps, Webseiten oder Accounts siehst, die sexualisierende Deepfakes erstellen oder verbreiten, melde sie uns: algorithmwatch.org/de/lasst-uns...
algorithmwatch.org
Non-consensual Sexualisation Tools (#NSTs) sind Apps und Webseiten, die sexualisierende #Deepfakes von echten Menschen erzeugen – ohne ihre Zustimmung. Oft #NudifyApps genannt, zeigen einige Tools Menschen zwar ganz nackt, können aber auch Bilder in Unterwäsche oder Badeanzügen generieren.
algorithmwatch.org
Join us for an exclusive conversation with Meredith Whittaker, President of the Signal Foundation, on reclaiming privacy in the age of AI. More information on the event and how to participate here: algorithmwatch.org/en/exclusive...
algorithmwatch.org
Wir bleiben dran, klären auf und machen Druck. Doch all das ist nur dank euch möglich! Unterstütze unsere Arbeit nachhaltig, werde Fördermitglied bei AlgorithmWatch. Damit stärkst du gründliche Recherchen, Kampagnen und unsere Expertise in Politik und Wirtschaft: algorithmwatch.org/de/foerdermi...
algorithmwatch.org
Dank eurer Unterstützung konnte die Petition an Dr. Anna Lührmann (Bündnis 90/Die Grünen) übergeben werden. Sie ist stellvertretende Vorsitzende des Ausschusses für Digitales und betont die Bedeutung der Petition.