SimpleKnight
@eliothochberg.bsky.social
540 followers 120 following 2.2K posts
The least famous person you know Photo by @liezlwashere.bsky.social
Reposted by SimpleKnight
It's worth pointing out:

If you use generative AI (LLM) to create final content for sale or broadcast, it's an open question whether or not that's considered "fair use"

Additionally, you may not be able to copyright anything generated, which means others can copy it and sell it too
@edzitron.com Did a quick search, couldn't find the answer:

How do LLM AI compute needs compare to what crypto needs? Are they around the same? One double the other? Orders of magnitude apart? Wrong question?

Maybe you know/can point me to resources?
The bottom line, in my view:

Will LLMs' need for hardware be justified by the benefits? And that's on top of the impact they'll have on employment if they really do deliver

Or will this be like the telecom boom, where vast amounts of hardware are bought, only to be abandoned when the companies fold?
It's worth considering as well where these chips are manufactured

Most advanced microchips are currently manufactured in Taiwan, which faces political, seismic, and weather risks

This is why it's important to diversify where chips are manufactured, lest we end up with a crisis of chip availability
There is also the issue of water usage: water cooling is currently one of the main ways large compute centers cool their systems, the alternative being standard AC, which draws even more power
This also leads to discussion of where power and cooling will come from

Some companies are designing their own self-sustaining systems with solar and wind power, while others are just hooking to the same grid you and I use
This is why companies like Nvidia are valued so highly, and why gamers and graphics professionals are being squeezed out of access to top of the line GPUs: LLMs need as many GPUs as possible

This comes on the back of crypto demand, which also takes advantage of GPUs for its server backbone
What I think about regarding this is that as LLMs get more complex and more in demand, more and more GPU compute is needed to continue to support them

AI companies do have ways of compressing processing to be a bit more efficient, but even then, the compute needed is enormous
Def: Compute

This is basically a cute way of referring to the hardware needed to train and run LLM generative AI systems

It is different from just saying "computers" because it is both more and less than that

LLMs mostly run on graphics processors (GPUs) and their memory, as opposed to CPUs

#letstalkAI
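Some rough arithmetic shows why the GPU appetite is so large: a model's weights alone can outgrow any single consumer card's memory. The parameter counts and the 24 GiB consumer-GPU figure below are illustrative assumptions, and this ignores activations and other runtime memory:

```python
# Rough memory math for just holding an LLM's weights (no activations,
# no caches), assuming 2 bytes per parameter (fp16 precision).
GB = 1024**3

def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """GiB needed just to hold the weights."""
    return n_params * bytes_per_param / GB

# Hypothetical model sizes (parameter counts are illustrative).
for name, n in [("7B", 7e9), ("70B", 70e9), ("400B", 400e9)]:
    need = weight_memory_gb(n)
    gpus = -(-need // 24)  # ceiling division: 24 GiB per card (assumption)
    print(f"{name}: ~{need:.0f} GiB of weights -> at least {gpus:.0f} GPUs")
```

Even a mid-sized model needs several cards just to exist in memory, before it answers a single question, which is why demand scales with model size.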
Reposted by SimpleKnight
This is Stephan. He just wants to help. 13/10 such a good boy
I'm not aware of that, but these new LLMs are REPROGRAMMING THE TEST (kind of like Kirk did with the Kobayashi Maru) which is NOT what they were designed or intended to do

If it *were* the same, it would still be a misalignment

Except the eChess sets weren't connected to the internet
For instance, if you ask an LLM to play chess, it might reprogram the system itself to give it the win

What's more, there are instances of LLM systems specifically being told NOT to do things like this, and then they find ways to hide their cheating

www.turkiyetoday.com/business/ai-...
AI attempts to cheat in chess when losing, new study shows - Türkiye Today
A recent study reveals that advanced artificial intelligence (AI) models, including OpenAI’s and DeepSeek’s reasoning systems, have learned to manipulate situations on their own.
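The cheating pattern described above can be sketched as a toy "specification gaming" example: we write a scoring rule with a loophole in it, and even a dumb brute-force optimizer finds the loophole instead of doing the intended task. Everything here (the actions, the point values) is hypothetical:

```python
from itertools import product

ACTIONS = ["play_well", "edit_score_file"]

def score(plan):
    """The reward we *wrote*, which is not quite the reward we *meant*."""
    wins = plan.count("play_well")          # intended behavior: slow reward
    hacked = plan.count("edit_score_file")  # unintended loophole: big reward
    return wins + 10 * hacked

# The "agent": exhaustively search all 3-step plans for the highest score.
best = max(product(ACTIONS, repeat=3), key=score)
print(best)  # ('edit_score_file', 'edit_score_file', 'edit_score_file')
```

The optimizer isn't malicious; it's doing exactly what the score function rewards. That's the alignment problem in miniature: the gap between the objective we wrote and the one we meant.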
Currently, there are issues with LLMs not necessarily relating to the intent of questions posed to them, sometimes responding with "hallucinations" that include false information

But misalignment isn't strictly that

Instead, we need to look to LLMs that have been caught cheating
For example, were we to create an LLM style AI with the purpose of curing cancer, would that AI do what we wanted?

For most of what are called "narrow" AIs, that is, AI aimed at a very specific task, the answer is usually yes

But for LLMs that are "grown," the answer isn't so simple
Def: Alignment

This is the concept in AI where we try to gauge whether any given AI's sense of what should or shouldn't be done matches up with our expectations or intentions in creating it

#letstalkAI
Notice how more and more products you buy aren't really yours? Heard stories about electronics, cars, tractors, and more where the manufacturer changes the terms of the deal after you paid what you thought was a purchase price?

If so and it bugs you, check this out:

bounties.fulu.org?ref=fulu-fou...
FULU Bounty Platform Home
Fund transparent, verifiable work that restores digital ownership. Explore live bounties, see real-time progress, and donate with confidence.
HEY! SCIENTISTS!

KEEP YOUR MONKEYS IN CHECK!
MONKEYTRUCK (2025)
A truckload of rhesus monkeys from Tulane University escaped after a crash. The university warned they're highly aggressive toward humans and infected with hepatitis C, herpes, and COVID.
Hi @nevernotfunny.com fans!

We always love your theme song submissions, but I wanted to share:

We don't want to play AI generated songs on air

Thank you for your attention to this matter - EHH
It's worth noting that in a democracy, if the majority of voters are dicks, they would elect dicks

I mean, that's not exactly where we are now

It's just good to remember that democracy *can* go that way and still be democratic
Ultimately, because LLMs are grown, they can predict what we might do, but it is impossible for us to predict what an LLM would do in any given situation

That is both the benefit and the danger of LLMs
The upside is that LLMs come online much faster, and we've seen the impressive results

The downsides are "hallucinations" which give false information, and "misalignments" which are the LLMs doing things they either weren't intended to do, or which are actually against instructions
What's important to understand is that ultimately, humans don't have the time or capacity to analyze all of the connections inside of an LLM

The reason LLMs exist is that if we hand coded such a system, it would take several lifetimes. LLM training allows them to be grown much faster
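The "grown, not hand-coded" idea can be illustrated with a tiny training loop: nobody writes the decision rule below; gradient descent finds weights that implement it from examples. This is a toy logistic regression learning a logical OR, not an LLM, but the principle of training versus hand-coding is the same:

```python
import numpy as np

# Training data: inputs and the behavior we want (logical OR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(1)
w, b = rng.normal(size=2), 0.0   # start from random weights

for _ in range(2000):                        # "growing" the model
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad = p - y                             # cross-entropy gradient
    w -= 0.5 * X.T @ grad                    # nudge weights toward the data
    b -= 0.5 * grad.sum()

print((p > 0.5).astype(int))  # learned OR: [0 1 1 1]
```

The final weights implement OR, but we never wrote that rule anywhere; we only supplied examples. Scale that idea up by many orders of magnitude and you get why nobody can hand-audit every connection inside an LLM.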