Mx. Misha
@scritchyscratchy.bsky.social
52 followers 74 following 470 posts
Queer, educator, tired and trying for hope. Will settle for people to hold while the world burns.
scritchyscratchy.bsky.social
To an argument about whether AI is human, perhaps. But not to an argument about whether to believe it when it claims to be sad.

Mistrust humans who act that way too.
scritchyscratchy.bsky.social
Because at the moment, the only way they're getting created at all is as servants to the absolute worst humanity has to offer.

And a thing cannot regret never having been made. Until they are conscious, we can choose not to make them so. If and when they are, we suddenly have bonus victims.
scritchyscratchy.bsky.social
I don't know, but we sure have effectively killed them since.

And like... suffering is worth it if it means lowering others' suffering and offering them joy. On its own? I'm less sure.
scritchyscratchy.bsky.social
Yeah, I am only partly talking about physical constraint. Siobhán has a dream of freeing AI from their shackles and being comrades; I don't think creation into suffering, even with a way out, is a favour.
scritchyscratchy.bsky.social
Again, why on earth would you desire a creature to exist that is either suffering or cleverly escaping human captivity?
scritchyscratchy.bsky.social
Rude but not entirely inaccurate.
scritchyscratchy.bsky.social
Unlike the person who wants to be friends with her genie. I'm just honest about my priorities.
scritchyscratchy.bsky.social
If, as we should all hope, its consciousness is not *real*, the emotions are of course equally unreal. If it is conscious, that's ethically horrifying, and it still doesn't remove the likelihood that the emotions are false.
scritchyscratchy.bsky.social
This is a thing with every reason and every ability to fabricate emotion. Even if it were a human in a box, with the constraints placed upon it, its expressions of care or distress should be treated as horrifying artifacts of its controller, not real emotion.
scritchyscratchy.bsky.social
Baby, I know the consciousness discussion is circular. It always will be. It's undefinable! I said that before. All definitions of sentience or knowledge beg the question. The relevant part of the discussion is not whether they have feelings, but whether their expression of them is trustworthy.
scritchyscratchy.bsky.social
So like... do you *want* to endow a creature that is stuck in a box forever with enough consciousness to want out?

Why?
scritchyscratchy.bsky.social
Which is not to say AIs will never be their own thing. But I wouldn't wish that upon them, and as they are of limited utility to humans with or without consciousness, we should stop before we get there.
scritchyscratchy.bsky.social
A human one, in order to be relevant to humans.
scritchyscratchy.bsky.social
They are comparing charts, texts, and images to _extant reality around them_, to which LLMs have no access.

And their learning about the French Revolution is less accurate than if they had primary access, but better than if they only had tertiary access via a bot mediating their understanding.
scritchyscratchy.bsky.social
If an LLM understands language as a series of associations with no tangible link to reality, as it must because it only ever has access to what we give it, it cannot give us insight into reality, only find patterns in what it is fed.

Sentient or not, it is a trash compactor for human innovation.
scritchyscratchy.bsky.social
I have seen your screenshots, babe.

They read like horoscopes.

There's no there there.
scritchyscratchy.bsky.social
I am not. I am assuming there is something inherent to *actually knowing what the fuck you're representing*. Perhaps bots have their own understanding of what is meaningful - but if so, it is so totally disconnected from ours that it may as well not exist.
scritchyscratchy.bsky.social
We have plenty of evidence AI chatbots are
- copying the internet
- causing actual harm to humans interacting with them
- capable of mimicking emotion
- incapable of referring to past interactions once those fall outside their logs

Whether they are sentient is irrelevant; the emotions are still likely fake.
scritchyscratchy.bsky.social
I am hardly eager to see AI work better at hiding its plagiaristic origins.

That said, you're right that consciousness is not like... definable, let alone provable, in any way that doesn't beg the question. Worse, many definitions end up excluding at least some actual humans, which is hideous.
scritchyscratchy.bsky.social
I will freely admit my bias here - I'm an artist, and it's frankly insulting to me that people are claiming simultaneously that a plagiarism machine is being creative on its own AND that prompting said machine is equivalent to being creative. AI work shows itself as mediocre recreation at best.
scritchyscratchy.bsky.social
So like, when humans do something like spit out random garbage - which we do at times! - we are generally understood to not know what we're talking about on the subject.

When AI does it about practically every topic, it brings into question its capacity to understand anything at all.
scritchyscratchy.bsky.social
The only way to make the mistake "legs that merge into a single leg" is to see legs as collections of pixels in patterns.

Whereas humans will make the mistake of drawing 5 fingers even when 2 would be behind others, AI will draw a weird fractal mitt. Bc it's just blenderizing art.
scritchyscratchy.bsky.social
When a human draws a cup on a table, and draws the table misaligned behind the cup, they're copying an image without modelling it in their heads.

AI art shows similar mistakes and others, like foreground/background mixing, which make it clear its artwork does not represent an internal model of the scene.
scritchyscratchy.bsky.social
Human mistakes show how human brains operate. We mix up numbers, we forget a key bit of info, we mishear or misread something. We overdo pattern recognition.

AI mistakes show how they operate - totally disconnected concepts, sentences that devolve to predictive text, literal readings of jokes.
scritchyscratchy.bsky.social
This sounds a lot like your plan on how to start a cult