~swapgs
@swapgs.infosec.exchange.ap.brid.gy
zigzagging my way through cursed code and bugs

[bridged from https://infosec.exchange/@swapgs on the fediverse by https://fed.brid.gy/ ]
Reposted by ~swapgs
My dependabot satire post is doing the rounds on HN and the comments are just 👌

https://news.ycombinator.com/item?id=46583914
Reducing Dependabot Noise | Hacker News
news.ycombinator.com
January 17, 2026 at 10:30 PM
Reposted by ~swapgs
(Reddit)
January 16, 2026 at 9:26 AM
Reposted by ~swapgs
(before any of you weirdos ask 😁) the graph for number of graphs:
January 16, 2026 at 1:42 PM
mastodon.social
January 14, 2026 at 6:02 PM
Reposted by ~swapgs
Cool bug 🐞

CVE-2025-4802: Arbitrary library path #vulnerability in static setuid binary in #glibc

https://hackyboiz.github.io/2025/12/03/millet/cve-2025-4802/
[하루한줄] CVE-2025-4802: Arbitrary library path vulnerability in glibc's statically linked setuid binaries - hackyboiz
## URL
* https://cyberpress.org/critical-glibc-flaw/

## Target
* Environments using GNU C Library versions 2.27 through 2.38

## Explain
### background
On Linux, executing a binary with setuid/setgid permissions makes the kernel enable a special mode called secure execution inside `execve()`. As part of this, the kernel sets `bprm->secureexec = 1` [1] and inserts `AT_SECURE = 1` into the ELF auxiliary vector [2].

> **linux-6.17.9/security/commoncap.c**
```c
int cap_bprm_creds_from_file(struct linux_binprm *bprm, const struct file *file)
...
	/* Check for privilege-elevated exec. */
	if (id_changed ||
	    !uid_eq(new->euid, old->uid) ||
	    !gid_eq(new->egid, old->gid) ||
	    (!__is_real(root_uid, new) &&
	     (effective || __cap_grew(permitted, ambient, new))))
		bprm->secureexec = 1; // [1]
...
```

> **linux-6.17.9/fs/binfmt_elf.c**
```c
static int create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
			     unsigned long interp_load_addr,
			     unsigned long e_entry, unsigned long phdr_addr)
...
	NEW_AUX_ENT(AT_SECURE, bprm->secureexec); // [2]
...
```

When `AT_SECURE` is `1`, glibc switches into secure mode: it sets `__libc_enable_secure = 1` internally [3] and ignores dangerous environment variables such as `LD_LIBRARY_PATH`, `LD_PRELOAD`, and `LD_AUDIT` [4].

> **glibc-2.35/elf/dl-support.c**
```c
void
_dl_aux_init (ElfW(auxv_t) *av)
...
      case AT_SECURE:
        seen = -1;
        __libc_enable_secure = av->a_un.a_val; // [3]
        __libc_enable_secure_decided = 1;
        break;
...
```

> **glibc-2.35/elf/rtld.c**
```c
static void
dl_main (const ElfW(Phdr) *phdr, ElfW(Word) phnum,
         ElfW(Addr) *user_entry, ElfW(auxv_t) *auxv)
...
  dl_main_state_init (&state);
...

static void
dl_main_state_init (struct dl_main_state *state)
{
  audit_list_init (&state->audit_list);
  state->library_path = NULL;
  state->library_path_source = NULL;
...

static void
process_envvars (struct dl_main_state *state)
...
        case 12:
          /* The library search path.  */
          if (!__libc_enable_secure // [4]
              && memcmp (envline, "LIBRARY_PATH", 12) == 0)
            {
              state->library_path = &envline[13];
              state->library_path_source = "LD_LIBRARY_PATH";
              break;
            }
...
```

This filtering is performed in the process initialization routine, which is what blocks an unprivileged user from gaining root by handing a setuid binary an arbitrary library path.

### root cause
The vulnerability exists because the environment-variable filtering based on glibc's `__libc_enable_secure` value was applied only in `dl_main()`, the dynamic loader's initialization function. Statically linked binaries do not use the dynamic loader; they set up their library search paths through `_dl_non_dynamic_init()` [5], and that function's filtering of environment variables based on `__libc_enable_secure` was insufficient.

> **glibc-2.35/elf/dl-support.c**
```c
void
_dl_non_dynamic_init (void)
{
...
  /* Initialize the data structures for the search paths for shared
     objects.  */
  _dl_init_paths (getenv ("LD_LIBRARY_PATH"), "LD_LIBRARY_PATH", // [5]
                  /* No glibc-hwcaps selection support in statically
                     linked binaries.  */
                  NULL, NULL);
...
```

As a result, even when the kernel enables secure execution mode, calls such as `dlopen()` still use the environment-variable-based search path: if an attacker points `LD_LIBRARY_PATH` at a directory they control, the setuid binary ends up loading the attacker's library.

### patch
Patch commit `5451fa962cd0a90a0e2ec1d8910a559ace02bba0` changes `_dl_non_dynamic_init()` to filter the environment variables according to `__libc_enable_secure` [6] before the library search paths are loaded [7].

> **glibc-2.39/elf/dl-support.c**
```c
void
_dl_non_dynamic_init (void)
{
  _dl_main_map.l_origin = _dl_get_origin ();
  _dl_main_map.l_phdr = GL(dl_phdr);
  _dl_main_map.l_phnum = GL(dl_phnum);

  /* Set up the data structures for the system-supplied DSO early,
     so they can influence _dl_init_paths.  */
  setup_vdso (NULL, NULL);

  /* With vDSO setup we can initialize the function pointers.  */
  setup_vdso_pointers ();

  if (__libc_enable_secure) // [6]
    {
      static const char unsecure_envvars[] = UNSECURE_ENVVARS;
      const char *cp = unsecure_envvars;

      while (cp < unsecure_envvars + sizeof (unsecure_envvars))
        {
          __unsetenv (cp);
          cp = strchr (cp, '\0') + 1;
        }
    }
...
  /* Initialize the data structures for the search paths for shared
     objects.  */
  _dl_init_paths (getenv ("LD_LIBRARY_PATH"), "LD_LIBRARY_PATH", // [7]
                  /* No glibc-hwcaps selection support in statically
                     linked binaries.  */
                  NULL, NULL);
...
```

## Reference
* https://www.man7.org/linux/man-pages/man3/getauxval.3.html
* https://ubuntu.com/security/CVE-2025-4802
* https://cyberpress.org/critical-glibc-flaw/
* https://articles.manugarg.com/aboutelfauxiliaryvectors
* https://patchwork.yoctoproject.org/project/oe-core/patch/[email protected]/#28605

- hack & life
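For illustration only (this sketch is not from the advisory or the write-up above): the bug needs a statically linked setuid binary that later loads a shared object, for example via `dlopen()`. The program and the library name `libplugin.so` below are hypothetical; on glibc 2.27–2.38 this lookup would honor an attacker-supplied `LD_LIBRARY_PATH` even though the kernel requested secure execution.

```c
/* victim.c -- hypothetical statically linked setuid-root program.
 * Build sketch:  gcc -static -o victim victim.c -ldl
 *                chown root:root victim && chmod u+s victim
 * Names are made up for illustration; only the loader behavior is the point. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* On vulnerable glibc, _dl_non_dynamic_init() passes
     * getenv("LD_LIBRARY_PATH") to _dl_init_paths() without checking
     * __libc_enable_secure, so this search follows attacker-controlled
     * directories even with AT_SECURE=1. */
    void *handle = dlopen("libplugin.so", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* If a hostile library was loaded, its constructors already ran as root;
     * calling an exported symbol just makes the hijack explicit. */
    void (*plugin_init)(void) = (void (*)(void)) dlsym(handle, "plugin_init");
    if (plugin_init != NULL)
        plugin_init();

    dlclose(handle);
    return 0;
}
```

An unprivileged user could then run something like `LD_LIBRARY_PATH=/tmp/evil ./victim` with their own `libplugin.so` in `/tmp/evil`. On a patched glibc the `UNSECURE_ENVVARS` list is unset first, so the search-path manipulation no longer works.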
hackyboiz.github.io
January 10, 2026 at 9:15 AM
RE: https://infosec.exchange/@jvoisin/115853495555073144

Barely spent any time on the laptop this Christmas but still got away with this little RCE on the train back home.

Snuffleupagus is really neat and I plan to spend more time on it in 2026 :)
infosec.exchange
January 7, 2026 at 11:22 AM
Reposted by ~swapgs
How to Ruin All of Package Management
Prediction markets are having a moment. After Polymarket called the 2024 election better than the pollsters, the model is expanding everywhere: sports, weather, Fed interest rate decisions. The thesis is that markets aggregate information better than polls or experts. Put money on the line and people get serious about being right.

Package metrics would make excellent prediction markets. Will lodash hit 50 million weekly downloads by March? Will the mass-deprecated package that broke the internet last month recover its dependents? What’s the over/under on GitHub stars for the hot new AI framework? These questions have answers that resolve to specific numbers on specific dates. That’s all a prediction market needs. Manifold already runs one on GitHub stars.¹

Imagine you could bet on these numbers. Go long on stars, buy a few thousand from a Fiverr seller, collect your winnings. Go long on downloads, publish a hundred packages that depend on it, run npm install in a loop from cloud instances. The manipulation is mostly one-directional: pumping is easier than dumping, since nobody unstars a project. But you can still short if you know something others don’t. Find a zero-day in a popular library, take a position against its download growth, then publish the vulnerability for maximum impact. Time your disclosure for when the market’s open. It’s like insider trading, but for software security.

The attack surface includes anyone who can influence any metric: maintainers who control release schedules, security researchers who control vulnerability disclosures, and anyone with a credit card and access to a botnet. Prediction markets are supposed to be hard to manipulate because manipulation is expensive and the market corrects. This assumes you can’t cheaply manufacture the underlying reality. In package management, you can. The entire npm registry runs on trust and free API calls.

This sounds like a dystopian thought experiment, but we’re already in it.

### The tea.xyz experiment

Tea.xyz promised to reward open source maintainers with cryptocurrency tokens based on their packages’ impact. The protocol tracked metrics like downloads and dependents, then distributed TEA tokens accordingly.

The incentive structure was immediately gamed. In early 2024, spam packages started flooding npm, RubyGems, and PyPI. Not malware in the traditional sense, just empty shells with `tea.yaml` files that linked back to Tea accounts. By April, about 15,000 spam packages had been uploaded. The Tea team shut down rewards temporarily.

It got worse. The campaigns evolved into coordinated operations with names like “IndonesianFoods” and “Indonesian Tea.” Instead of just publishing empty packages, attackers created dependency chains. Package A depends on Package B depends on Package C, all controlled by the same actor, each inflating the metrics of the others. In November 2025, Amazon Inspector researchers uncovered over 150,000 packages linked to tea.xyz token farming. That’s nearly 3% of npm’s entire registry.

The Tea team responded with ownership verification, provenance checks, and monitoring for Sybil attacks. But the damage makes the point: attach financial value to a metric and people will manufacture that metric at scale.

Even well-intentioned open source funding efforts can fall into this trap. If grants or sustainability programs distribute money based on downloads or dependency counts, maintainers have an incentive to split their packages into many smaller ones that all depend on each other. A library that could ship as one package becomes ten, each padding the metrics of the others. More packages means more visibility on GitHub Sponsors, more impressive-looking dependency graphs, more surface area for funding algorithms to notice. The maintainer isn’t being malicious, just responding rationally to how the system measures impact. The same dynamic that produced 150,000 spam packages can reshape how legitimate software gets structured.

### GitHub stars for sale

Stars are supposed to signal quality or interest. Developers use them to evaluate libraries. Investors use them to evaluate startups. So there’s a market.

A CMU study found approximately six million suspected fake stars on GitHub between July 2019 and December 2024. The activity surged in 2024, peaking in July when over 16% of starred repositories were associated with fake star campaigns. You can buy 100 stars for $8 on Fiverr. Bulk rates go down to 10 cents per star. Complete GitHub accounts with achievements and history sell for up to $5,000.

The researchers found that fake stars primarily promote short-lived phishing and malware repositories. An attacker creates a repo with a convincing name, buys enough stars to appear legitimate, and waits for victims. The Check Point security team identified a threat group called “Stargazer Goblin” running over 3,000 GitHub accounts to distribute info-stealers.

Fake stars become a liability long-term. Once GitHub detects and removes them, the sudden drop in stars is a red flag. The manipulation only works for hit-and-run attacks, not sustained presence. But hit-and-run is enough when you’re distributing malware. Add a prediction market and the same infrastructure gets a new revenue stream.

### Why it’s so easy to break

Publishing a package costs nothing. No identity verification. No deposit. No waiting period. You sign up, you push, it’s live. This was a feature: low barriers to entry let unknown developers share useful code without gatekeepers. The npm ecosystem grew to over 5 million packages because anyone could participate.

Downloading costs nothing too. Add a line to your manifest and the package manager fetches whatever you asked for. No verification that you meant to type that name. No warning that the package was published yesterday by a brand new account. The convenience that made package managers successful is the same property that makes them exploitable.

Metrics are just counters. Downloads increment when someone runs `npm install`. Stars increment when someone clicks a button. Dependencies increment when someone publishes a `package.json` that references you. None of these actions require demonstrating that the thing being measured (quality, popularity, utility) actually exists. When the value of gaming these systems was low, the honor system worked well enough. That’s changing.

Stars, downloads, and dependency counts were always proxies for quality and trustworthiness. When the manipulation stayed artisanal, the signal held up well enough. Now that package management underpins most of the software industry, the numbers matter for real decisions: government supply chain requirements, investor due diligence, corporate procurement. The numbers are worth manufacturing at scale, and a prediction market would just make the arbitrage efficient.

### AI has entered the chat

AI coding assistants are trained on the same metrics being gamed. When Copilot or Claude suggests a package, it’s drawing on training data that includes stars, downloads, and how often packages appear in code. A package with bought stars and farmed downloads looks popular to an LLM in the same way it looks popular to a human scanning search results.

The difference is that humans might notice something feels off. A developer might pause at a package with 10,000 stars but three commits and no issues. An AI agent running `npm install` won’t hesitate. It’s pattern-matching, not evaluating.

The threat models multiply. An attacker who games their package into enough training data gets free distribution through every AI coding tool. Developers using vibe coding workflows, where you accept AI suggestions and fix problems as they arise, don’t scrutinize each import. Agents running in CI/CD pipelines have elevated permissions and no human in the loop. The attack surface isn’t just the registry anymore; it’s every model trained on registry data.

Package management worked because the stakes were low and almost everyone played fair. The stakes aren’t low anymore. The numbers feed into government policy, corporate procurement, AI training data, and now, potentially, financial markets.

When you see a package with 10,000 stars, you’re not looking at 10,000 developers who evaluated it and clicked a button. You’re looking at a number that could mean anything. Maybe it’s a beloved tool. Maybe it’s a marketing campaign. Maybe it’s a malware distribution front with a Stargazer Goblin account network behind it, it’s pretty much impossible to tell.

1. Thanks to @mlinksva for the tip.
nesbitt.io
December 28, 2025 at 9:10 PM
Reposted by ~swapgs
Here's a copy of the filesystem that has been extracted as a .tar file: http://squoze.net/UNIX/v4/
UNIX - v4
squoze.net
December 20, 2025 at 1:59 AM
Reposted by ~swapgs
I’ve been on contract work for one company the last six months, hoping to get picked up full-time at the start of the year. Sadly, they couldn’t fit it in the budget.

So I’m officially unemployed again. :(
December 19, 2025 at 10:10 PM
Reposted by ~swapgs
:-/

U.S. Government asks for social media profiles to be marked as public for H-1B, H-4, F, M and J NIVs
December 12, 2025 at 10:21 PM
Reposted by ~swapgs
I made a thing... an incomplete family tree of code editors. See it in action: https://arjenwiersma.nl/editors.html . During my years in software development I have seen many editors come and go, so I thought I would create a visualisation for it... it got kinda out of hand :D
December 9, 2025 at 10:25 PM
Reposted by ~swapgs
this is the most executive-brained thing i've seen this month

(to the S3 object) "hello, computer?"
December 3, 2025 at 6:31 PM
Reposted by ~swapgs
I made myself a very simple rope bag for #climbing

I kind of yolo’ed the design using scraps and leftovers. It’s made from only two rectangular pieces of fabric. A lightweight ripstop nylon makes an inner bag, and a heavy backpack fabric wraps around to […]

[Original post on chaos.social]
November 30, 2025 at 3:29 PM
Hah, I would looove to audit this thing! Having devices with Bluetooth and Wi-Fi support in a DC sounds fun :ablobcatwave:

https://www.scaleway.com/en/blog/how-we-turn-apples-mac-mini-into-high-performance-dedicated-servers/
How We Turn Apple’s Mac Mini Into High-Performance Dedicated Servers
From desktop to datacenter: how Scaleway turns Apple's Mac mini into a fully managed, high-performance cloud server for macOS and iOS developers.
www.scaleway.com
November 26, 2025 at 7:37 PM
Reposted by ~swapgs
The 2026 online public sessions of my "Mastering Burp Suite Pro" course have been published 📅

- March 24th to 27th, in French 🇫🇷
- April 14th to 17th, in English 🇬🇧

hackademy.agarri.fr/2026

PS: feel free to ping me if you'd like to temporarily block a seat or are looking for a 10% coupon 🎁
Agarri
Training
hackademy.agarri.fr
November 24, 2025 at 10:14 AM
Reposted by ~swapgs
if your company sets a `Content-Security-Policy` header: who's in charge of deciding what it should be? (someone in security? someone who works on the frontend? other? multiple people?)

https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP
Content Security Policy (CSP) - HTTP | MDN
Content Security Policy (CSP) is a feature that helps to prevent or minimize the risk of certain types of security threats. It consists of a series of instructions from a website to a browser, which instruct the browser to place restrictions on the things that the code comprising the site is allowed to do.
developer.mozilla.org
November 20, 2025 at 7:50 PM
And… it’s lacking the juiciest technical stuff :( https://fedi.lwn.net/@lwn/115577257668492393
LWN.net (@[email protected])
Postmortem of the Xubuntu.org download site compromise https://lwn.net/Articles/1047056/ #LWN
fedi.lwn.net
November 20, 2025 at 3:58 PM
DMA is fun and all… until you have to flash the FPGA and pull these insane proprietary toolchains. All of that to dump a binary and exploit a bug I already have ;-;
November 5, 2025 at 9:24 PM
Reposted by ~swapgs
I wrote up some notes on two new papers on prompt injection: Agents Rule of Two (from Meta AI) and The Attacker Moves Second (from Anthropic + OpenAI + DeepMind + others) https://simonwillison.net/2025/Nov/2/new-prompt-injection-papers/
New prompt injection papers: Agents Rule of Two and The Attacker Moves Second
Two interesting new papers regarding LLM security and prompt injection came to my attention this weekend. Agents Rule of Two: A Practical Approach to AI Agent Security The first is …
simonwillison.net
November 2, 2025 at 11:11 PM
Reposted by ~swapgs
There is a new "Share on Mastodon" button on the official @Mastodon blog, feel free to try it out and let me know what you think.
November 2, 2025 at 8:30 PM
lol this one was sitting in my mail drafts since the recent CVE on the remote OCI feature. Still a bunch of bugs left in the loaders :> https://bird.makeup/users/0xmadvise/statuses/1983893375498776932
bird.makeup - Tweet
Sucks, yesterday i've discovered a path traversal in docker compose, but unfortunately it will not be assigned as a CVE. Because i was supposed to send an email instead of opening a public issue in GH😅 anyhow the poc can be found here: https://github.com/0pepsi/DockerCompose-path-traversal
bird.makeup
October 31, 2025 at 4:38 PM