dylan
@dylnbkr.bsky.social
530 followers 340 following 170 posts
Lead research engineer @dairinstitute.bsky.social, social dancer, aspirational post-apocalyptic gardener 🏳️‍🌈😷 I run workshops @dairfutures.bsky.social. Always imagining otherwise. dylanbaker.com they/he
dylnbkr.bsky.social
We had *just* gone through the 2016 election 😭 The idea that misinfo and synthetic media could have global-politics-shifting consequences was widely known at this point, and yet we had CV researchers shrugging, "Hm, hope someone does something. Anyways, back to advancing that exact technology!"
dylnbkr.bsky.social
As an aside, I'm particularly amused by this quote—"That sounds like a good idea, we should probably do it"— because this was pretty much verbatim what I heard from CV researchers who were actively developing these technologies in like 2018.

Yeah probably!!
A screenshot from the Verge article. It reads

[Block quote]: X doesn’t currently support the standard, but Elon Musk has previously said the platform “should probably do it”

[Main text]: A cornerstone of this plan involves getting online platforms to adopt the standard. X, which has attracted regulatory scrutiny as a hotbed for spreading misinformation, isn’t a member of the C2PA initiative and seemingly offers no alternative. But X owner Elon Musk does appear willing to get behind it. 

[Highlighted portion]: “That sounds like a good idea, we should probably do it,” Musk said when pitched by Parsons at the 2023 AI Safety Summit. “Some way of authenticating would be good.”
dylnbkr.bsky.social
curious if widespread public awareness of such tooling could create conditions to pressure more platforms to do more to surface the verified-realness of image content.

(pls if someone wants to make an informational zine about this we can add it to the DAIR library!)
dylnbkr.bsky.social
As they state in the article, it's not a panacea, and I can imagine a whooole different can of worms being opened by people over-trusting inherently fallible labels. Still,
dylnbkr.bsky.social
Adopting this standard and— more actionably, if you don't happen to be manufacturing cameras or designing social media platform interfaces— finding ways to better educate the public about it feel like pretty critical interventions in this moment.
dylnbkr.bsky.social
Saw this article linked by @wonkish.bsky.social in the replies here and wanted to expand on it a little, because this really does feel like a place where a tech solution *is* warranted to help address a major problem. 🧵

www.theverge.com/2024/8/21/24...
A screenshot of the title of an article from The Verge. It reads:

This system can sort real pictures from AI fakes — why aren’t platforms using it?
Big tech companies are backing the C2PA’s authentication standard, but they’re taking too long to put it to use.

by Jess Weatherbed
Aug 21, 2024, 6:00 AM PDT

A screenshot from the linked The Verge article. It reads:

Step one: the industry adopts a standard
A body like C2PA develops an authentication and attribution standard.
Parties across photography, content hosting, and image editing industries agree to the standard.
Step two: creators add credentials
Camera hardware makers offer to embed the credentials.
Editing apps offer to embed the credentials.
Both hardware and software solutions work in tandem to ensure creators can confirm the origins of an image and how / if it’s been altered during edits.
Step three: platforms and viewers check credentials
Online platforms scan for image credentials and visibly flag key information to their users.
Viewers can also access a database to independently check if an image carries credentials.
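For illustration only, here's a rough sketch of what "step three" above could look like on a platform's side. The `read_content_credentials` helper and the manifest fields are hypothetical stand-ins, not the actual C2PA API; a real implementation would use the C2PA spec's own verification tooling.

```python
# Illustrative only: a toy version of "step three," where a platform checks an
# uploaded image for Content Credentials and decides what to surface to viewers.
# `read_content_credentials` is a hypothetical stand-in for a real C2PA manifest
# reader, and the manifest fields are made up for this example.

from dataclasses import dataclass
from typing import Optional


def read_content_credentials(image_path: str) -> Optional[dict]:
    """Stand-in for whatever C2PA-reading tool a platform actually runs at
    ingest time. Returns a parsed manifest dict, or None when the image
    carries no credentials. This stub always returns None."""
    return None


@dataclass
class CredentialCheck:
    label: str                    # text the platform would show viewers
    issuer: Optional[str] = None  # e.g. the camera maker or editing app that signed


def check_upload(image_path: str) -> CredentialCheck:
    manifest = read_content_credentials(image_path)

    if manifest is None:
        # Most images carry no credentials today, so "no manifest" has to be
        # labeled as "no information," never as "fake."
        return CredentialCheck(label="No content credentials found")

    if not manifest.get("signature_valid", False):
        return CredentialCheck(label="Credentials present but could not be verified")

    # A valid manifest records origin and edit history that the platform can
    # summarize for viewers (step three in the screenshot above).
    return CredentialCheck(
        label="Content credentials verified: origin and edit history available",
        issuer=manifest.get("issuer"),
    )


if __name__ == "__main__":
    print(check_upload("example.jpg").label)
```

Note the asymmetry baked into the sketch: a missing manifest only ever means "no information," which is exactly the over-trusting-labels failure mode flagged elsewhere in this thread.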
dylnbkr.bsky.social
YES.

Similarly, I was recently in a conversation where somebody suggested that perhaps this administration was trying to bar all AI regulation because they don't want to hinder scientific progress. Yeah great point, they sure do love pure science!
tamigraph.bsky.social
Probably worth paying more attention to *why* tech oligarchs and this administration, in particular, are so eager to build data centers. What do we think they’re going to be used for? Endless AI for good projects or…?
Reposted by dylan
alexhanna.bsky.social
I looked into my scrying pool and asked when it would be good to apply to YC for my SaaS agentic AI startup
dylnbkr.bsky.social
A small detail they included here that I'm thinking a lot about for my own work: the way they talk about "AI use disclaimers."

I've been toying with adding a little handwritten "a human wrote this" image at the bottom of my email signature with a link to more information. Are other folks doing this?
A red banner that reads "AI TRANSPARENCY STATEMENT" crosses the page. Below that is the text: 

"Identity 2.0 did not use generative AI to:

Write any of these words or design any of these pages
Edit this zine for clarity, readability or style based on text already written
Assess any gaps or make any summaries of our work
This was made with human hands so there may be human mistakes here too!"

Text reading: The power of labels is important for yourself too - the luddite movement is back in full force thanks to the work of writers like Brian Merchant. Start claiming and owning that label, that shows that we’re fighting for workers rights. So why not make your own “made by human” label?

Below the text are two identical yellow rectangles that look like stickers that read "Made by a human" with a small figure of a person.
dylnbkr.bsky.social
Just added our first guest zine to the DAIR Zine Library: it's Identity 2.0's AI-Z: conversations about resistance and generative AI. Check it out!!

>>> zines.dair-institute.org/ai-z <<<

from: www.identity20.org
The DAIR Zine Library
A collection of zines from the DAIR Institute
zines.dair-institute.org
dylnbkr.bsky.social
all of Possible Futures is basically
An image of Kirby eating half a rectangle labeled "Frame F". Next to Kirby is a pink box labeled: Kirby. K is happening. That's why they're saying F. In that box is the text: OTHER THINGS ARE POSSIBLE.
dylnbkr.bsky.social
omg I love this. this just summarized my "understanding the landscape of AI" talks, it's all argument-Kirbying 😵‍💫
Reposted by dylan
bookshop.org
Whatever you do, please don’t stop reading.

Read for joy. Read for education. Read for resistance. Read banned books. Read books by marginalized authors. Read about experiences that differ from your own.

Just. Keep. Reading.
dylnbkr.bsky.social
Idk if everyone has seen this already, but I'm really impressed by the resources being compiled over at against-a-i.com. Assignment ideas, background reading, syllabus language, start-of-the-school-year speeches...
Shout out to @annakornbluh.bsky.social @ehayot.bsky.social, this is rad.
AGAINST AI -
against-a-i.com
dylnbkr.bsky.social
And truly sometimes what you need is some good old-fashioned #RidiculeAsPraxis, some of these quotes are *bananas* 😮‍💨 h/t @alexhanna.bsky.social @emilymbender.bsky.social
dylnbkr.bsky.social
We're talking about their visions of the future because they are very rich. *Not* because their ideas are intrinsically good.

(Evergreen reminder that you don't need to be a billionaire to make your vision of the future more real. You need other people around you to talk to and build stuff with.)
dylnbkr.bsky.social
Fascinating to be included here.

I stand by what I said— it is natural for anyone to be drawn to escapism and fantastical thinking in response to fear and uncertainty. Billionaires are not immune.

I'll add: Their fantasies aren't better, worthier, or truer than yours.

apnews.com/article/arti...
How Silicon Valley is using religious language to talk about AI
As the rapid, unregulated development of artificial intelligence continues, the language people in Silicon Valley use to describe it is becoming increasingly religious.
apnews.com
dylnbkr.bsky.social
I'll post any future workshops (including virtual ones) on my Bluesky and on the DAIR Futures account!
Reposted by dylan
anthonymoser.com
tl;dr

- I don't think coaching people on how to use ai is an effective form of harm reduction

- I think we should use different messages for the people pushing AI and the people they're pushing it on

- People who know better shouldn't use it or encourage others to use it
dylnbkr.bsky.social
currently almost 2 hours into listening/waiting to give comment and it's been 🔥🔥🔥
typewriteralley.bsky.social
How it's currently going
Someone wearing a large surveillance camera on their head with a sign that's not fully visible but which reads "Surveillance"
dylnbkr.bsky.social
And the energy of having people in a room together was so powerful! I loved tearing up magazines and talking and drawing and connecting with you all.

Stay tuned for more: @dairfutures.bsky.social
dylnbkr.bsky.social
The first in-person Possible Futures workshop was amazing!

As I run more of these, it strikes me how many themes are so consistent: desire for ecological restoration. Connection. Autonomy. Privacy. Dignity. Leisure. Peace. Again and again, we find common ground and shared visions. 🌱
In the foreground, a group of 3 people (cropped from the torso down) are gathered around a wooden table, flipping through magazines and cutting out pages. The person on the left is wearing a denim jacket and jeans, has light skin, and is wearing bracelets. The person behind them is light-skinned and wearing a pink shirt. On the right, there is another light-skinned person wearing a patterned button-down and black jeans. In the background, more people are seated around a table covered in magazines.

Handwritten sharpie scribbled notes that read "In 2050, we are - Default disconnected - Cryptographically secure - No one drives a fucking car ever again - Autonomic (?)". Around the notes are scraps of paper.

A collage on a light blue background. At the top of the collage, there is text reading "Click the close button to close the window". In a different font, there is text reading "For we are wild beings. Wild to the core." There is a border of a 2000s-looking Windows Vista image viewer window that has been cut to be breaking open on the left side. Inside the window is an image of a dog about to jump off of a pier into the water, also facing the left, as if it was to jump out of the broken computer window.
dylnbkr.bsky.social
I'm so sorry :((( Take care and be gentle with yourself! ❤️‍🩹