alkali
@alkalinesec.bsky.social
170 followers 240 following 95 posts
mobile security / symbolic execution. he/him
alkalinesec.bsky.social
is it written in a book somewhere that all the most important parameters should be obscured by as much template bullshit as possible?
alkalinesec.bsky.social
(guy designing c++): alright how do we make it as hard as possible to find the actual value of a variable
alkalinesec.bsky.social
(even though "implode" seems like the way more likely outcome at the moment)
alkalinesec.bsky.social
it's pretty cool how we are warping our economy so that it will either develop AGI or completely implode.

(actually a part of me does unfortunately really think this is cool)
Reposted by alkali
molly.wiki
slightly ominous mailer from the Red Cross
Mailer that reads: Molly, Seasons may change, but the need for blood is constant.
Reposted by alkali
alkalinesec.bsky.social
@dmnk.bsky.social you are working on Big Sleep right? what do you think about this? does it apply to Big Sleep or is that actually a model that is being tuned to Be A Better Hacker so it works effectively without (as many) rails?
alkalinesec.bsky.social
the post itself states "Machine-Guided Beats Human-Mimicry" and there are clearly very narrow rails upon which the agents are supposed to stay. yes, an LLM is a fully "general solution", but if it has to be boxed in incredibly tightly to give correct answers, then what is the point of the generality?
alkalinesec.bsky.social
reading more about the Team Atlanta AIxCC solution i do still wonder if eliminating the LLM components in favor of normal ML classification (or just "dumb" hardcoded logic) could achieve results that are nearly as good and orders of magnitude more efficient

team-atlanta.github.io/blog/post-ml...
From Harness to Vulnerability: AI Agents for Code Comprehension and Bug Discovery
We are Team Atlanta, the first-place winner of DARPA AIxCC.
team-atlanta.github.io
alkalinesec.bsky.social
i was thinking earlier today about how i never reflexively check janky looking sites for injection vulns anymore. so i tested the next site where i got the vibe. immediate sqli
Reposted by alkali
righto.com
The iPhone 17 is powered by Apple's A19 SoC (System on a Chip). Chipwise took a die photo of the chip, but it's a bit drab. I spiced it up by applying the over-saturated color gradient that Apple used for die photos of the M1 chip :-)

Link to the original die photo: chipwise.tech/our-portfoli...
A complex die photo with many rectangular and irregular regions. I've applied a color gradient making the photo look slightly rainbow-ish, purple and blue in the top and red and yellow at the bottom. The image has a Chipwise logo on it.
alkalinesec.bsky.social
someone needs to make flare-on but not windows shit (i know that this is "literally every other ctf's rev chals")
alkalinesec.bsky.social
the answer appears to be "no". there's maybe not even been any recorded remote ITW exploit?
Reposted by alkali
iamfoogle.bsky.social
i just made a cool video
Reposted by alkali
404media.co
NEW: 404 Media is suing ICE. We have filed a lawsuit demanding the agency release its $2 million contract with Paragon, a company that makes powerful spyware to break into phones and read messages from encrypted chat apps www.404media.co/were-suing-i...
We’re Suing ICE for Its $2 Million Spyware Contract
404 Media has filed a lawsuit against ICE for access to its contract with Paragon, a company that sells powerful spyware for breaking into phones and accessing encrypted messaging apps.
www.404media.co
Reposted by alkali
ashn-dot-dev.bsky.social
Found a video from a couple of years ago when I wrote a brainfuck implementation in Fastly VCL. Each tick of the interpreter was executed by making a GET request to the VCL service with the entire interpreter state encoded in the URL. Anyway, here is "hello world" in 1771 GET requests.
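For the curious, a rough sketch of the "all interpreter state in the URL, one tick per request" idea — written here in plain Python rather than the original Fastly VCL, with the query-string encoding and the sample program made up purely for illustration:

from urllib.parse import urlencode, parse_qs

def step(qs: str) -> tuple[str, str]:
    """Advance the brainfuck machine one tick; state arrives and leaves as a query string."""
    s = {k: v[0] for k, v in parse_qs(qs).items()}
    code, ip, dp = s["code"], int(s["ip"]), int(s["dp"])
    tape = [int(x) for x in s["tape"].split(",")]
    out = ""
    op = code[ip]
    if op == ">":
        dp += 1
        tape.extend([0] * (dp + 1 - len(tape)))  # grow the tape on demand
    elif op == "<":
        dp -= 1
    elif op == "+":
        tape[dp] = (tape[dp] + 1) % 256
    elif op == "-":
        tape[dp] = (tape[dp] - 1) % 256
    elif op == ".":
        out = chr(tape[dp])
    elif op == "[" and tape[dp] == 0:  # jump forward past the matching ]
        depth = 1
        while depth:
            ip += 1
            depth += {"[": 1, "]": -1}.get(code[ip], 0)
    elif op == "]" and tape[dp] != 0:  # jump back to the matching [
        depth = 1
        while depth:
            ip -= 1
            depth += {"]": 1, "[": -1}.get(code[ip], 0)
    nxt = {"code": code, "ip": ip + 1, "dp": dp, "tape": ",".join(map(str, tape))}
    return urlencode(nxt), out

# Drive it locally; each loop iteration stands in for one GET request carrying the state.
state = urlencode({"code": "++++++++[>++++++++<-]>+.", "ip": 0, "dp": 0, "tape": "0"})
output, ticks = "", 0
while int(parse_qs(state)["ip"][0]) < len(parse_qs(state)["code"][0]):
    state, ch = step(state)
    output += ch
    ticks += 1
print(f"{output!r} after {ticks} simulated requests")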
alkalinesec.bsky.social
this list of people on the wikipedia page for "If Anyone Builds It, Everyone Dies" is absolutely sending me
alkalinesec.bsky.social
there is approximately no chance that this is a real effect, and if it is, there is a 0% chance it is intentional. LLMs don't understand vulns well enough to make subtle mistakes, so any intentional sabotage would be way more obvious than whatever "somewhat more apt to result in low-quality code" means.
joemenn.bsky.social
New research shows #DeepSeek suggests less-secure code when it is asked to help groups out of favor with the Chinese government. With its open-source model being adopted widely, this soft influence and hackability could spread. Gift link with email address etc. wapo.st/46jEZrb
AI firm DeepSeek writes less-secure code for groups China disfavors
Research by a U.S. security firm points to the country’s leading player in AI providing higher-quality results for some purposes than others.
wapo.st
Reposted by alkali
alexwild.bsky.social
I am just going to keep posting that the TX governor recently pardoned a political murderer.

The dude texted "I'm going to shoot protestors", then went and shot a Black Lives Matter protester. He was convicted in a jury trial.

Hard to find a more blatant example of celebrating political violence.
Gov. Greg Abbott pardons Daniel Perry, veteran who killed police brutality protester in 2020
A Travis County jury sentenced Perry to 25 years in prison last year, prompting Abbott to ask the state parole board to review his case.
www.texastribune.org