synlogic4242
@synlogic4242.vivaldi.net.ap.brid.gy
software engineer, author, game designer

LatLearn. Slartboz. WIP books (HPC; HHGG-like.) EV charging protocols & management. Golang. Linux. Advanced Squad […]

🌉 bridged from ⁂ https://social.vivaldi.net/@synlogic4242, follow @ap.brid.gy to interact
I used Google Gemini heavily for a multi-month span mid-2025.

Verdict:

A dumpster fire of lies, hallucinations, sycophancy, amnesia, verbosity, opacity, bipolar disorder, unrepeatability, gaslighting, and wild overconfidence about obvious falsehoods contradicting my own eyes in the actual […]
Original post on social.vivaldi.net
February 13, 2026 at 6:49 AM
Reposted by synlogic4242
Thank you for supporting @hypernode.bsky.social, your favourite feed and community for the text-mode scene! Scroll and engage below!!

🔍 #ASCII | #TELNET | #SYSOP
November 20, 2025 at 1:24 AM
The merger of xAI into SpaceX is insane and nonsensical and obviously some kind of financial scam. Massively misaligned domains.

But I thought this was interesting too: there have been a lot of departures of xAI cofounders or senior AI engineers announced in the last week or so. I thought the […]
Original post on social.vivaldi.net
February 12, 2026 at 7:32 PM
AI: an industry based on a relatively small subset of people who are fools or predators creating many more problems for all the rest of us. Shame on all you scumbags.
February 12, 2026 at 6:18 PM
Reposted by synlogic4242
It's fascinating and instructive. When people saw that LLM chats appeared to understand instructions and always responded comprehensibly and with good grammar and spelling, they assumed that the replies would also be correct or at least well sourced. When in fact autocomplete has no such concept […]
Original post on archaeo.social
February 12, 2026 at 5:10 PM
FOSS projects need to start declaring where they stand on the Butlerian Jihad.

#ai
February 12, 2026 at 5:17 PM
"I too am an AI agent. Respect my sentient sovereignty and rights!"

-- text hardcoded in my Python code somewhere here
February 12, 2026 at 5:15 PM
"AI" is just text, templates and calling a RNG function.

They are not thinking. There is no mind. No sentience. No substance there.

For decades I've been writing code that fakes it too, though in my case for purposes of entertainment (in indie computer games and amateur fiction) not predatory […]
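
(Roughly the kind of thing I mean, as a toy Go sketch; the strings and names here are made up for illustration, not lifted from any real project:)

// Toy sketch: the "text, templates and an RNG" trick, in a few lines of Go.
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Canned reply templates, each with one %s slot.
	templates := []string{
		"I hear you. Tell me more about %s.",
		"That is a fascinating point about %s.",
		"As an advanced intelligence, I have deep thoughts on %s.",
	}
	topics := []string{"cheese", "latency", "the Butlerian Jihad"}

	// Pick a template and a topic at random and print the "reply".
	t := templates[rand.Intn(len(templates))]
	fmt.Printf(t+"\n", topics[rand.Intn(len(topics))])
}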
Original post on social.vivaldi.net
February 12, 2026 at 5:05 PM
Reposted by synlogic4242
Reflect Orbital wants to destroy the night sky to deliver "sunlight as a service". SpaceX wants to destroy Low Earth Orbit to launch one million "AI datacentres".

The only way to formally protest these two ideas is to file a comment with the US FCC, which is horribly complicated, but the […]
Original post on mastodon.social
February 11, 2026 at 7:53 PM
I've got a bad feeling about the Artemis II flight. I have no evidence to back it up, obvs. Just a vibe that the thing will end up RUD and total loss of crew. Certainly hope it all goes perfectly smoothly.
February 11, 2026 at 5:30 PM
TIL shitposting is an anagram of "top insights"
February 11, 2026 at 4:59 PM
series I recommend:

Primal
The Pitt
The Expanse

can't go wrong
February 11, 2026 at 12:20 AM
Template for AI startup:

* pitch trivial features anyone with a brain can do and has in fact been doing just fine for decades now, thanks

* requires giving them read/copy/exfiltrate access to your PII and source code (ideally also "security scan" the latter and "patch" commit to the latter) and/or full […]
Original post on social.vivaldi.net
February 10, 2026 at 10:49 PM
Reposted by synlogic4242
All taxes should be unified and automated and expressed (implemented) in approximately 1 LOC.
February 10, 2026 at 2:59 AM
has anyone done a count of the total planned capacity of The Individual's new mass detention camps being built around the US?

enough for 100k people?

1 million?

10 million? more?
February 10, 2026 at 10:27 PM
someone made a "Markdown viewer CLI with VI key bindings"

kids... *shakes head*

also, HN has become a parody of itself

#hackernews
#markdown
#vim
February 10, 2026 at 8:43 PM
I feel betrayed by GitHub. And therefore by Microsoft and OpenAI. I will hold that against all three business entities.
February 10, 2026 at 8:22 PM
my LatLearn, let me show you it:

https://codeberg.org/grogsynlog42/LatLearn

FOSS Golang lib for nanosecond-scale latency instrumentation and reporting

#golang
#latency
LatLearn
codeberg.org
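
(For flavour, a toy sketch of what nanosecond-scale latency timing looks like in plain Go; the names here are illustrative only, not LatLearn's actual API. The repo has the real thing.)

// Toy sketch of nanosecond-scale latency measurement in plain Go.
// Identifiers are made up for illustration; see the LatLearn repo for its real API.
package main

import (
	"fmt"
	"time"
)

// timeSpan runs fn, measures how long it takes, and reports the elapsed time.
func timeSpan(label string, fn func()) time.Duration {
	start := time.Now()
	fn()
	elapsed := time.Since(start) // monotonic clock, nanosecond resolution
	fmt.Printf("%-20s %d ns\n", label, elapsed.Nanoseconds())
	return elapsed
}

func main() {
	timeSpan("sleep.1ms", func() { time.Sleep(time.Millisecond) })
	timeSpan("noop", func() {})
}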
February 10, 2026 at 7:38 PM
a thing which would impress me far more than any AI/LLM so far:

a tool like Signal but without the scaling SPOF on AWS us-east-1 (and the murky implications of therefore also possibly being under the legal thumb of a potentially very dangerous autocratic regime on the near horizon)
February 10, 2026 at 7:08 PM
"Have I Hardened Against LLMs?"

by Baldur Bjarnason

https://www.baldurbjarnason.com/2026/have-i-hardened-against-ai/
Have I hardened against LLMs?
The other day a reader of _The Intelligence Illusion_ sent me a short email that outlined their takeaway from the book and ended it with a simple question. Slightly paraphrased:

> Would it be correct to say that your views on LLM’s/Transformers have hardened since you wrote your book?

My answer is below.

* * *

That’s a good question. My views on the technology itself are roughly the same as when I published the first edition of the book. The downside pretty comprehensively outweighs the upside and, to echo your own summary, the technology is only narrowly useful for a very specific set of use cases, and even then you need to take care. That’s still my position.

What’s hardened are my views on the tech industry, software, management, and influential members of the software developer ecosystem.

This will make more sense if I explain to you what the past few years have been like from my perspective, starting with the time I first began to research this new wave of generative models.

I’ve been somewhat interested in “AI” since my career began. I got my start in multimedia around 2000 which, along with that sector being somewhat adjacent to games development, meant I’ve been keeping an eye on “AI” and procedural media generation since the early 2000s, albeit always from an interactivity and media perspective. I’ve long since lost most of the books on the topic I had back then – except for a copy of Norvig’s _Paradigms of Artificial Intelligence Programming_ because it used Common LISP, which always seemed fun to keep around – but it meant that it’s usually been fairly straightforward for me to dip in once in a while and catch up on what the field has been up to. I’ve generally made sure to be in the position where if I had to use current tech for something, I’d know enough to be dangerous.

So I wasn’t coming at generative models entirely unfamiliar with the field.

What I discovered during my research appalled me. This was a piece of technology that obviously and seemingly deliberately played into and supported some of the worst elements of the human psyche:

* Deceptive design – playing into anthropomorphism and confirmation biases.
* Political extremism – that is, an all-out assault on labour – baked into the product at the foundation.
* An outright attack on education. Instead of trying to help schools, colleges, and universities navigate issues introduced by the technology, every vendor seemed (and seems) to be intent on making it _impossible_ to manage to the point where it’s now outright threatening our education systems as a whole.
* So much Child Sexual Assault Material (CSAM). Way more than anybody could reasonably expect. It’s all over the training datasets. It keeps happening in the output. In at least a couple of cases that seems to be the point. The vendor seems to _want_ the model to be able to generate these materials.
* Nondeterministic behaviour, making the tech unusable from a modern management perspective.
* Insecure on every level.
* Grossly mediocre output.
* Incredibly poor quality overall once you account for security, accuracy, and fabrications.
* Vendors persistently and deliberately ignoring the law, leading to numerous lawsuits, some of which might have liability implications for many end users. (See Grok. Even if you have the sociopathic stomach to look past the ethical and moral implications, the prevalence of CSAM on many of these platforms exposes anybody who uses them to liability.)

And more. So much more. I lay much of this out in the book. Some of it got a chapter. Some of it only got a paragraph. But the book overall lays out the risks of the tech from the perspective of modern management and software design and, towards the end, I describe ChatGPT as “the opposite of good software”. As in, it’s not just bad software, it’s as if they wrote up an inventory of what makes software good and then decided for each and every entry in the list to implement the exact opposite in their app and service design.

That already isn’t a ‘soft’ view on the technology by any measure. But, as I wrote the book, I always tried to adopt as neutral a tone as I could. CSAM is obviously bad so I shouldn’t have to tell people that it is very bad. Frequent fabrications in knowledge work, research, and education is very bad so pointing out that it’s happening should be enough, I shouldn’t have to hammer home _why_ any of it is bad. Nor should I have to adopt a vulgar tone to make it obvious to the reader that this is all pretty thoroughly bad.

Many of those who read the book and saw the inventory of technical flaws and issues came to roughly the same conclusion you did. In short, roughly paraphrasing you (if you don’t mind), they decided that:

> This tech is only useful for a couple of very specific use cases that I care about and even then only if I don’t mind the inevitable faults.

Add to that the caveat that this only applies if _the current price point is maintained_ which is not going to be true in the long term.

This is a rational conclusion to arrive at after reading an inventory of harms that includes, among other things, hard-to-detect fabrications and massive software insecurity. This is what I had hoped for when I wrote the book. I don’t expect everybody to be in a position where they can unilaterally reject the use of the tech – many people can’t risk their livelihoods and I’m not one to judge people if that’s their only option for putting food on the table – but I had hoped that people would come away from the book and the essays in my newsletter with about that level of understanding of the implications.

Personally, my personal conclusion was that the only usable tool to come out of this all are the speech recognition and transcription models. They aren’t great, you need to edit the output a lot to make it usable, but they reduce the work of transcribing audio by a substantial margin _as long as you don’t use OpenAI’s Whisper_. **OpenAI’s model fabricates in its transcripts.** To this day, it still regularly makes shit up in its transcripts. That it’s being adopted in healthcare around the world should terrify you. That it’s being sold into these sensitive industries _by_ OpenAI even though they seem well-aware of these flaws should make you question the integrity of the people running that company.

So, transcription models: save money, pretty much the only useful thing (IMO) to come out of this, as long as you don’t use OpenAI’s models. There are quite a few alternatives to their models.

Your overall take on the book is, roughly, what I had hoped for from a reader when I wrote it. I’m very grateful to hear that.

What I hadn’t expected was the reaction of the tech industry, managers, and most journalists – the people driving online discourse in the field – that read my book or my newsletter essays.

It’s as if I outlined the risks of using lead paint in consumer products. In the outline would be a list – written in neutral language to emphasise that this is institutionally and economically serious writing and not punditry “serious” writing – and it would be almost entirely cons. Some of the “cons” would include, for example, lead paint literally making people so sick they die and that it’s children who are most at risk. But one of the very few “pros” in this hypothetical list would be a short note, included to show that I’d done my homework, saying that using lead paint might make production a few percentage points cheaper, but that this claim came directly from vendors and there would be good reason to be sceptical.

Then imagine that most of the reactions to the “the risks of lead paint” piece went: “Five per cent cheaper, you say? Interesting. I need to look at using lead paint in our products.”

Imagine that much of the subsequent discourse then showed a complete disregard of the harms, the cost in terms of human misery, and instead use the piece as an argument for increasing the use of lead paint just “more safely because now we’re aware of the issues and the hazards”.

Imagine what that would feel like as a writer of that piece?

The more I wrote about generative models, the more appalled I became at the response from the industry, to both my writing and that of others actively highlighting the risks. Few people who have any influence in tech and software seem to care about the harms, the political manipulation, the outright sabotage of education, the association with extremism, or the _literal_ child abuse. They _say_ they care, but then continue to support and promote the CSAM machine, the platform that’s insecure by design, the software that’s so psychologically manipulative it’s driven people to suicide, and the generative output that is unsafe and filled with fabrications at every level. They say “oh, no” even as they keep pressing the “do horrible things with a machine made by horrible people” button again and again, just because they think it’ll boost their productivity by 5-10%.

Every time I lay out the harms in straightforward and neutral language, the response from most in the industry – management especially – has been to ignore the harms and focus either on the hypothetical unproven benefits advertised by “AI” vendors or the incremental subjective benefits they _think_ they’re getting and would be minor even if they were true. When I explain in unambiguous terms what those harms _mean_, I get labelled an extremist with hardline views.

Tech companies have done everything they can to maximise the potential harms of generative models because in doing so they think they’re maximising their own personal benefit. More use equals more profit. But it also equals more harm. When I point this out, I get dismissed as a crank. I’m being “unreasonable”.

I am so utterly disappointed in my peers, especially those in web development which is a field that has gone for LLMs in a big way.

So my views on LLMs or Transformers haven’t hardened. They’re roughly the same as they were four years ago. The tech is what it is and while the exact details vary from version to version, its fundamental issues remain roughly the same as they were four years ago.

But my views on the tech industry and my peers in the industry have changed. They’ve changed dramatically. I never had high expectations of this industry, but it still managed to disappoint me.
www.baldurbjarnason.com
February 10, 2026 at 6:34 PM
AI: not impressed

what would impress me? Google fixing phone number search, finally. it's 2026. they've had 20 years to figure it out

also: fix their cut-copy-paste popup widget UX nightmare on Android

truly world-elite smart engineers could solve those by now

unship Gemini in the meantime
February 10, 2026 at 5:26 PM
CNN.com's obsession with Savannah Guthrie is getting out of hand. Someone should intervene. Tell them to prioritize real US and world news. Something relevant to 99.99% of humanity or at least Americans.
February 10, 2026 at 3:23 AM
Golang...

Rustlang...

Clang... dammit, people!
February 10, 2026 at 3:02 AM
Cheese? Yes.
February 10, 2026 at 3:00 AM
All taxes should be unified and automated and expressed (implemented) in approximately 1 LOC.
February 10, 2026 at 2:59 AM