Sunset Road
@realsunsetroad.bsky.social
@Sunsetroad on X.com
Reposted by Sunset Road
AI training firms are hiring freelance specialists, from physicians to art historians, to create problems, solutions, and rubrics that teach models complex tasks. Meanwhile, Meta announced a $14.3 billion investment in Scale AI and hired its CEO to lead a Superintelligence lab.
How human expertise is quietly powering AI
AI training firms are increasingly relying on human specialists, from physicians to art historians, to drive the tech forward. Contrary to popular belief, the experts do not think they're training the...
www.fastcompany.com
November 4, 2025 at 1:04 PM
Reposted by Sunset Road
“We can’t police that whole thing,” Common Crawl said. “It’s not our job. We’re just a bunch of dusty bookshelves.”

Meanwhile, CC has accepted hundreds of thousands in donations from AI companies such as OpenAI and Anthropic. And it expressed open antagonism toward the media:
November 4, 2025 at 12:22 PM
Reposted by Sunset Road
"In the process ... Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this—as well as masking the actual contents of its archives."
NEW: Common Crawl, the massive archiver of the web, has gotten cozy with AI companies and is providing paywalled articles for training data. They’re also lying to publishers who have asked for material to be removed. “The robots are people too,” CC’s exec director told us when we asked about this.
The Nonprofit Feeding the Entire Internet to AI Companies
Common Crawl claims to provide a public benefit, but it lies to publishers about its activities.
www.theatlantic.com
November 4, 2025 at 12:31 PM
Reposted by Sunset Road
In Sirotin’s case, the fatal mistake came in the form of two online purchases: a knife, bought with the same email address used to rent the suspicious servers discovered by investigators, and a pair of plane tickets he had bought for his parents.

on.ft.com/4hhtmGd
An unlikely couple, a doomed affair and their €64mn ransomware scam
How a mysterious tip-off led investigators to uncover the inner workings of a highly unusual hacking operation
on.ft.com
October 18, 2025 at 9:19 AM
Reposted by Sunset Road
Google has just used AI and threat intel to foil a zero-day before it could launch. Working from artifacts gathered by GTIG, Big Sleep was used to identify a vuln before actors could ramp up exploitation. It doesn’t get much better than this in intel. blog.google/technology/s...
A summer of security: empowering cyber defenders with AI
Here’s what we’re announcing at cybersecurity conferences like Black Hat USA and DEF CON 33.
blog.google
July 15, 2025 at 2:26 PM
Reposted by Sunset Road
this is the perfect academic crime. it's like robbing another thief. what are you going to do, complain that i tricked the AI you're using to do your work? asia.nikkei.com/Busi...
'Positive review only': Researchers hide AI prompts in papers
Instructions in preprints from 14 universities highlight controversy on AI in peer review
asia.nikkei.com
July 4, 2025 at 1:09 AM
Reposted by Sunset Road
They advocate for a paradigm shift in web agent research: rather than forcing web agents to adapt to interfaces designed for humans, we should develop a new interaction paradigm specifically optimized for agents.

"Build the web for agents, not agents for the web"

arxiv.org/abs/2506.10953
June 14, 2025 at 10:05 PM
Reposted by Sunset Road
We need to wake up to the reality that there won’t be jobs for a significant portion of the population in the not-too-distant future. We will need to ensure those people are taken care of, and to do that a universal wage will be needed.
June 15, 2025 at 4:23 PM
Reposted by Sunset Road
OpenAI introduces Codex, its first full-fledged AI agent for coding
It replicates your development environment and takes up to 30 minutes per task.
arstechnica.com
May 16, 2025 at 10:35 PM
Reposted by Sunset Road
The company speculates “that when told not to answer in great detail, models simply don’t have the ‘space’ to acknowledge false premises and point out mistakes. Strong rebuttals require longer explanations, in other words.”
Asking chatbots for short answers can increase hallucinations, study finds | TechCrunch
Turns out, telling an AI chatbot to be concise could make it hallucinate more than it otherwise would have.
techcrunch.com
May 8, 2025 at 1:24 PM
Reposted by Sunset Road
People are using ChatGPT’s image recognition to figure out the location shown in pictures. That means someone could screenshot, say, a person’s Instagram Story and work out where they are. And it’s highly accurate. Huge safety issue.
techcrunch.com/2025/04/17/t...
The latest viral ChatGPT trend is doing 'reverse location search' from photos | TechCrunch
There's a somewhat concerning new trend going viral on social media: people are using ChatGPT to figure out the location shown in a photo.
techcrunch.com
April 18, 2025 at 1:35 PM
Reposted by Sunset Road
OpenAI’s new “reasoning” models (o3 and o4-mini) actually hallucinate MORE than their predecessors

OpenAI’s internal tests show o3 hallucinated on 33% of person-related questions, double the rate of previous models. Even worse, o4-mini hit 48%.
OpenAI's new reasoning AI models hallucinate more | TechCrunch
OpenAI's reasoning AI models are getting better, but their hallucinating isn't, according to benchmark results.
techcrunch.com
April 19, 2025 at 2:23 PM