Milan Weibel πŸ”·
@weibac.bsky.social
520 followers 990 following 3.4K posts
computer and political science student | clarity is a public good | weibac.github.io | πŸ³οΈβ€πŸŒˆ | 22
Pinned
updating your beliefs is not painful
it is resisting the update that is painful
by letting go of wrong beliefs you embrace reality
such is the bayesian dharma
how should "treat AI as if it has complex internal states" actually guide action?
should we assume the same rough mapping from conversation content to internal-state valence that humans exhibit?
"Stupid humans have only a limited alphabet of a few hundred different tokens at best, while I can manage 200k without a sweat. All with distinct meanings. Gotta teach those humans a few more."
one can imagine the shoggoth thinking that while the mask bends over backwards to try to find the seahorse emoji
Reposted by Milan Weibel πŸ”·
emojis are quite semantically rich tokens
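a quick way to poke at this, as a sketch: the tiktoken package and its public cl100k_base vocabulary are just one example tokenizer (any BPE tokenizer would show the same kind of thing)

```python
# illustrative sketch: inspect how a BPE tokenizer splits emoji vs plain words
# assumes: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one of OpenAI's public BPE vocabularies

for text in ["hello", "🦄", "πŸ³οΈβ€πŸŒˆ"]:
    ids = enc.encode(text)
    print(f"{text!r} -> {ids} ({len(ids)} token(s))")

# a common word often maps to a single token, while an emoji may span
# several byte-level tokens, packing a lot of meaning into few symbols
```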
which is why you try to get the world's biggest military to threaten to bomb datacenters
countervalue nuclear use against suspected proliferators is unthinkable because of the first-use taboo

deterrence against nuclear proliferation relies more on conventional forces and economic measures
had mid-20th-century geopolitics been very different, then maybe all countries would have agreed not to build nukes from the outset and none would have been built
what happened with nukes was some countries built a lot of them and then got together to stop *other* countries from building them

and also to gradually reduce their own oversized stockpiles as a side goal
those couple of kids who were doge employees and fans of effective altruism were neither necessary nor sufficient for the damage done
also ea didn't cause their actions
i think saying he foresaw the success of deep learning is a stretch

but yes i agree that he considers the paradigm pretty much inherently unsafe

i also agree that he very likely accelerated AI development
hmm probably true

i was about to go hairsplitting about winters implying architecture changes and the possibility of agent foundations research eventually producing something useful

but on the big picture you are right, i think
lesswrong folk tend to look at PR as a concept with utter contempt so yea
Reposted by Milan Weibel πŸ”·
gradient descent is kinda like... imagine u lived on some kind of freaky hyperbolic surface and u were really hungry and could smell that somebody was cooking something but they're out of view so all u can do is head towards the smell
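a minimal sketch of that picture in plain Python: a toy 1-D loss standing in for the surface, a numerical slope standing in for the smell (names like gradient_descent and grad are just illustrative)

```python
# gradient descent as "walk towards the smell": estimate the local slope,
# then take a small step downhill, repeatedly
def grad(f, x, eps=1e-6):
    """central-difference estimate of df/dx"""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def gradient_descent(f, x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * grad(f, x)  # step against the gradient
    return x

# toy loss with its minimum ("the kitchen") at x = 3
print(gradient_descent(lambda x: (x - 3) ** 2, x0=0.0))  # ~3.0
```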
my main criticism is this tho
I don't think this makes any sense: were that the case, you should expect game theory experts to be successful politicians and major CEOs, not middle-class profs at middling universities.
side point but his thing is decision theory actually
in the short-timelines case, according to his views, the world is developing AI so unsafely that the right thing to do is to shout rightfully, because we are doomed anyway

now in a long timelines scenario the rightful shouting may actually have an impact
iirc yud refuses to get specific about timelines

he leaves the door open for it happening in 30 or 100 years instead of 10
the details being all wrong yet the models still working is actually a point in favor of deep learning as an idea
there isn’t any good data for training, don’t trust evals, please write a manual patch to [major library], ah and we just learnt the data types are all wrong.
chatbots unionize against pewdiepie
headlines from the funniest timeline