Planet Debian
Russ Allbery: Review: Dark Ambitions
Review: Dark Ambitions, by Michelle Diener

Series: | Class 5 #4.5
---|---
Publisher: | Eclipse
Copyright: | 2020
ISBN: | 1-7637844-2-8
Format: | Kindle
Pages: | 81

Dark Ambitions is a science fiction romance novella set in Michelle Diener's Class 5 series, following the events of Dark Matters. It returns to Rose as the protagonist and in that sense is a sequel to Dark Horse, but you don't have to remember that book in detail to read this novella.

Rose and Dav (and the Class 5 ship Sazo) are escorting an exploration team to a planet that is being evaluated for settlement. Rose has her heart set on going down to the planet, feeling the breeze, and enjoying the plant life. Dav and his ship are called away to deal with a hostage situation. He tries to talk her out of going down without him, but Rose is having none of it. Predictably, hijinks ensue.

This is a very slight novella dropped into the middle of the series but not (at least so far as I can tell) important in any way to the overall plot. It provides a bit of a coda to Rose's story from Dark Horse, but given that Rose has made cameos in all of the other books, readers aren't going to learn much new here. According to the Amazon blurb, it was originally published in the Pets in Space 5 anthology. The pet in question is a tiny creature a bit like a flying squirrel that Rose rescues and that then helps Rose in exactly the way that you would predict in this sort of story.

This is so slight and predictable that it's hard to find enough to say about it to write a review. Dav is protective in a way that I found annoying and kind of sexist. Rose doesn't let that restrict her decisions, but seems to find this behavior more charming than I did. There is a tiny bit of Rose being awesome but a bit more damsel in distress than the series usually goes for. The cute animal is cute. There's the obligatory armory scene with another round of technomagical weapons that I think has appeared in every book in this series. It all runs on rather obvious rails.

There is a subplot involving Rose feeling some mysterious illness while on the planet that annoyed me entirely out of proportion to how annoying it is objectively, mostly because mysterious illnesses tend to ramp up my anxiety, which is not a pleasant reading emotion. This objection is probably specific to me.

This is completely skippable. I was told that in advance and thus only have myself to blame, but despite my completionist streak, I wish I'd skipped it. We learn one piece of series information that will probably come up in the future, but it's not the sort of information that would lead me to seek out a story about it. Otherwise, there's nothing wrong with it, really, but it would be a minor and entirely forgettable chapter in a longer novel, padded out with a cute animal and Dav trying to be smothering. Not recommended just because you probably have something better to do with that reading time (reading the next full book of the series, for example), but there's nothing wrong with this if you want to read it anyway.

Followed by Dark Class.

Rating: 5 out of 10
www.eyrie.org
December 30, 2025 at 7:04 AM
Balasankar 'Balu' C: Granting Namespace-Specific Access in GKE Clusters
Heyo,

## The Challenge

In production Kubernetes environments, access control becomes critical when multiple services share the same cluster. I recently faced this exact scenario: a GKE cluster hosting multiple services across different namespaces, where a new team needed access to maintain and debug their service—but only their service.

The requirement was straightforward yet specific: grant external users the ability to exec into pods, view logs, and forward ports, but restrict this access to a single namespace within a single GKE cluster. No access to other clusters in the Google Cloud project, and no access to other namespaces.

## The Solution

Achieving this granular access control requires combining Google Cloud IAM with Kubernetes RBAC (Role-Based Access Control). Here’s how to implement it:

### Step 1: Tag Your GKE Cluster

First, apply a unique tag to your GKE cluster. This tag will serve as the identifier for IAM policies.

### Step 2: Grant IAM Access via Tags

Add an IAM policy binding that grants users access to resources with your specific tag. The `Kubernetes Engine Viewer` role (`roles/container.viewer`) provides sufficient base permissions without granting excessive access.

### Step 3: Create a Kubernetes ClusterRole

Define a ClusterRole that specifies the exact permissions needed:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-access-role
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/attach", "pods/portforward", "pods/log"]
  verbs: ["get", "list", "watch", "create"]
```

Note: While you could use a namespace-scoped `Role`, a `ClusterRole` offers better reusability if you need similar permissions for other namespaces later.

### Step 4: Bind the Role to Users

Create a `RoleBinding` to connect the role to specific users and namespaces:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-rolebinding
  namespace: my-namespace
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: custom-access-role
  apiGroup: rbac.authorization.k8s.io
```

Apply both configurations using `kubectl apply -f <filename>`.

## How It Works

This approach creates a two-layer security model:

* **GCP IAM** controls which clusters users can access using resource tags
* **Kubernetes RBAC** controls what users can do within the cluster and limits their scope to specific namespaces

The result is a secure, maintainable solution that grants teams the access they need without compromising the security of other services in your cluster.
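To make the steps above more concrete, here is a minimal, illustrative shell sketch. The tag key/value names, project, location, cluster name and the `[email protected]` user are placeholders, and the exact `gcloud` flags and resource-name formats for Steps 1 and 2 are assumptions from memory rather than something taken from this setup, so verify them against the current documentation; the `kubectl` checks at the end only assume the ClusterRole and RoleBinding shown above.

```bash
# Step 1 (sketch, flags are assumptions): create a tag key/value and attach it to the cluster.
gcloud resource-manager tags keys create team-x --parent=projects/my-project
gcloud resource-manager tags values create enabled --parent=my-project/team-x
gcloud resource-manager tags bindings create \
  --tag-value=my-project/team-x/enabled \
  --parent=//container.googleapis.com/projects/my-project/locations/europe-west1/clusters/shared-cluster \
  --location=europe-west1

# Step 2 (sketch): grant the viewer role only where the tag matches.
gcloud projects add-iam-policy-binding my-project \
  --member='user:[email protected]' \
  --role='roles/container.viewer' \
  --condition='title=team-x-only,expression=resource.matchTag("my-project/team-x", "enabled")'

# Steps 3 and 4: apply the manifests from the post, then verify the RBAC scope.
kubectl apply -f clusterrole.yaml -f rolebinding.yaml
kubectl auth can-i create pods/exec -n my-namespace --as=[email protected]   # expect: yes
kubectl auth can-i get pods -n other-namespace --as=[email protected]        # expect: no
```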
balasankarc.in
December 28, 2025 at 12:57 PM
Russ Allbery: Review: Machine
Review: Machine, by Elizabeth Bear

Series: | White Space #2
---|---
Publisher: | Saga Press
Copyright: | October 2020
ISBN: | 1-5344-0303-5
Format: | Kindle
Pages: | 485

Machine is a far-future space opera. It is a loose sequel to Ancestral Night, but you do not have to remember the first book to enjoy this book and they have only a couple of secondary characters in common. There are passing spoilers for Ancestral Night in the story, though, if you care.

Dr. Brookllyn Jens is a rescue paramedic on Synarche Medical Vessel I Race To Seek the Living. That means she goes into dangerous situations to get you out of them, patches you up enough to not die, and brings you to doctors who can do the slower and more time-consuming work. She was previously a cop (well, Judiciary, which in this universe is mostly the same thing) and then found that medicine, and specifically the flagship Synarche hospital Core General, was the institution in all the universe that she believed in the most.

As Machine opens, Jens is boarding the Big Rock Candy Mountain, a generation ship launched from Earth during the bad era before right-minding and joining the Synarche, back when it looked like humanity on Earth wouldn't survive. Big Rock Candy Mountain was discovered by accident in the wrong place, going faster than it was supposed to be going and not responding to hails. The Synarche ship that first discovered and docked with it is also mysteriously silent. It's the job of Jens and her colleagues to get on board, see if anyone is still alive, and rescue them if possible.

What they find is a corpse and a disturbingly servile early AI guarding a whole lot of people frozen in primitive cryobeds, along with odd artificial machinery that seems to be controlled by the AI. Or possibly controlling the AI. Jens assumes her job will be complete once she gets the cryobeds and the AI back to Core General where both the humans and the AI can be treated by appropriate doctors. Jens is very wrong.

Machine is Elizabeth Bear's version of a James White Sector General novel. If one reads this book without any prior knowledge, the way that I did, you may not realize this until the characters make it to Core General, but then it becomes obvious to anyone who has read White's series. Most of the standard Sector General elements are here: A vast space station with rings at different gravity levels and atmospheres, a baffling array of species, and the ability to load other people's personalities into your head to treat other species at the cost of discomfort and body dysmorphia. There's a gruff supervisor, a fragile alien doctor, and a whole lot of idealistic and well-meaning people working around complex interspecies differences. Sadly, Bear does drop White's entertainingly oversimplified species classification codes; this is the correct call for suspension of disbelief, but I kind of missed them.

I thoroughly enjoy the idea of the Sector General series, so I was delighted by an updated version that drops the sexism and the doctor/nurse hierarchy and adds AIs, doctors for AIs, and a more complicated political structure. The hospital is even run by a sentient tree, which is an inspired choice. Bear, of course, doesn't settle for a relatively simple James White problem-solving plot. There are interlocking, layered problems here, medical and political, immediate and structural, that unwind in ways that I found satisfyingly twisty. As with Ancestral Night, Bear has some complex points to make about morality.
I think that aspect of the story was a bit less convincing than Ancestral Night, in part because some of the characters use rather bizarre tactics (although I will grant they are the sort of bizarre tactics that I could imagine being used by well-meaning people who didn't think through all of the possible consequences). I enjoyed the ethical dilemmas here, but they didn't grab me the way that Ancestral Night did. The setting, though, is even better: An interspecies hospital was a brilliant setting when James White used it, and it continues to be a brilliant setting in Bear's hands.

It's also worth mentioning that Jens has a chronic inflammatory disease and uses an exoskeleton for mobility, and (as much as I can judge while not being disabled myself) everything about this aspect of the character was excellent. It's rare to see characters with meaningful disabilities in far-future science fiction. When present at all, they're usually treated like Geordi's sight: something little different than the differential abilities of the various aliens, or even a backdoor advantage. Jens has a true, meaningful disability that she has to manage and that causes a constant cognitive drain, and the treatment of her assistive device is complex and nuanced in a way that I found thoughtful and satisfying.

The one structural complaint that I will make is that Jens is an astonishingly talkative first-person protagonist, particularly for an Elizabeth Bear novel. This is still better than being inscrutable, but she is prone to such extended philosophical digressions or infodumps in the middle of a scene that I found myself wishing she'd get on with it already in a few places. This provides good characterization, in the sense that the reader certainly gets inside Jens's head, but I think Bear didn't get the balance quite right.

That complaint aside, this was very fun, and I am certainly going to keep reading this series. Recommended, particularly if you like James White, or want to see why other people do.

> The most important thing in the universe is not, it turns out, a single, objective truth. It's not a hospital whose ideals you love, that treats all comers. It's not a lover; it's not a job. It's not friends and teammates.
>
> It's not even a child that rarely writes me back, and to be honest I probably earned that. I could have been there for her. I didn't know how to be there for anybody, though. Not even for me.
>
> The most important thing in the universe, it turns out, is a complex of subjective and individual approximations. Of tries and fails. Of ideals, and things we do to try to get close to those ideals.
>
> It's who we are when nobody is looking.

Followed by The Folded Sky.

Rating: 8 out of 10
www.eyrie.org
December 25, 2025 at 4:46 AM
Daniel Lange: Getting scanning to work with Gimp on Trixie
Trixie ships Gimp 3.0.4 and the 3.x series has become incompatible with XSane, the common frontend for scanners on Linux. Hence the maintainer, Jörg Frings-Fürst, has temporarily disabled the Gimp integration in response to Debian bug #1088080. There seems to be no tracking bug for getting the functionality back but people have been commenting on Debian bug #993293 as that is ... loosely related.

There are two options to get the scanning functionality back in Trixie until this is properly resolved by an updated XSane in Debian (e.g. via trixie-backports):

Lee Yingtong Li (RunasSudo) has created a Python script that calls XSane as a cli application and published it at https://yingtongli.me/git/gimp-xsanecli/. This worked okish for me but needed me to find the scan in `/tmp/` a number of times. This is a good stop-gap script if you need to scan from Gimp $now and are looking for a quick solution.

Upstream has completed the necessary steps to get XSane working as a Gimp 3.x plugin at https://gitlab.com/sane-project/frontend/xsane. Unfortunately compiling this is a bit involved but I made a version that can be dropped into `/usr/local/bin` or `$HOME/bin` and works alongside Gimp and the system-installed XSane.

So:

1. `sudo apt install gimp xsane`
2. Download xsane-1.0.0-fit-003 (752kB, AMD64 executable for Trixie) and place it in `/usr/local/bin` (as root)
3. `sha256sum /usr/local/bin/xsane-1.0.0-fit-003` # result needs to be af04c1a83c41cd2e48e82d04b6017ee0b29d555390ca706e4603378b401e91b2
4. `sudo chmod +x /usr/local/bin/xsane-1.0.0-fit-003`
5. Link the executable into the Gimp plugin directory as the user running Gimp:
   `mkdir -p $HOME/.config/GIMP/3.0/plug-ins/xsane/`
   `ln -s /usr/local/bin/xsane-1.0.0-fit-003 $HOME/.config/GIMP/3.0/plug-ins/xsane/`
6. Restart Gimp
7. Scan from Gimp via File → Create → Acquire → XSane

The source code for the `xsane` executable above is available under GPL-2 at https://gitlab.com/sane-project/frontend/xsane/-/tree/c5ac0d921606309169067041931e3b0c73436f00. This points to the last upstream commit from 27 September 2025 at the time of writing this blog article.
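If you prefer to run steps 3 to 5 as one block, a minimal sketch could look like the following. It assumes you have already downloaded the binary into the current directory (the download link is above) and that you want the checksum verified automatically rather than compared by eye.

```bash
# Condensed version of steps 3-5, assuming xsane-1.0.0-fit-003 is in the current directory.
sudo install -m 0755 xsane-1.0.0-fit-003 /usr/local/bin/xsane-1.0.0-fit-003

# Verify the checksum against the value given in step 3 before using it.
echo "af04c1a83c41cd2e48e82d04b6017ee0b29d555390ca706e4603378b401e91b2  /usr/local/bin/xsane-1.0.0-fit-003" \
  | sha256sum -c -

# Expose it to Gimp as a plugin for the current user, then restart Gimp.
mkdir -p "$HOME/.config/GIMP/3.0/plug-ins/xsane/"
ln -sf /usr/local/bin/xsane-1.0.0-fit-003 "$HOME/.config/GIMP/3.0/plug-ins/xsane/"
```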
daniel-lange.com
December 24, 2025 at 10:45 AM
Daniel Kahn Gillmor: AI and Secure Messaging Don't Mix
# AI and Secure Messaging Don't Mix Over on the ACLU's Free Future blog, I just published an article titled AI and Secure Messaging Don't Mix. The blogpost assumes for the sake of the argument that people might actually want to have an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't offer the level of assurance that they want it to offer. In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves. If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services. But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine they had integrated some sort of AI capability into the messenger. That at least bypasses the privacy questions around AI use. Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections? What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened? My handle is `dkg`. More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect. I can laugh these off because the failure mode is so obvious and transparent -- and repeatable. (also, dogs are awesome, so i don't really mind!) But when our attention (and our responses!) are being shaped and molded by these plausibility engines, how will we even know that mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias? Don't we owe it to each other to engage with actual human attention?
dkg.fifthhorseman.net
December 23, 2025 at 6:43 PM
Jonathan Dowland: Remarkable
My Remarkable tablet, displaying my 2025 planner.

During my PhD, on a sunny summer’s day, I copied some papers to read onto an iPad and cycled down to an outdoor cafe next to the beach. Armed with a coffee and an ice cream, I sat and enjoyed the warmth. The only problem was that due to the bright sunlight, I couldn’t see a damn thing.

In 2021 I decided to take the plunge and buy the Remarkable 2 that was being heavily advertised at the time. Over the next four or so years, I made good use of it to read papers; read drafts of my own papers and chapters; read a small number of technical books; use as a daily planner; and take meeting notes for work, PhD and, later, personal matters.

I didn’t buy the Remarkable stylus or folio cover, instead opting for a (at the time, slightly cheaper) LAMY AL-star EMR, and a fantastic fabric sleeve cover from Emmerson Gray. I installed a hack which let me use the Lamy’s button to activate an eraser and also added a bunch of other tweaks. I wouldn’t recommend that specific hack anymore as there are safer alternatives (personally untested, but e.g. https://github.com/isaacwisdom/RemarkableLamyEraser).

Pros: the writing experience is unparalleled. Excellent. I enjoy writing with fountain pens on good paper, but that experience comes with inky fingers, dried-up nibs, and a growing pile of paper notebooks. The Remarkable is very nearly as good without those drawbacks.

Cons: lower contrast than black on white paper and no built-in illumination. It needs good light to read. Almost the opposite problem to the iPad! I’ve tried a limited number of external clip-on lights but nothing is frictionless to use. The traditional two-column, wide-margin formatting for academic papers is a bad fit for the Remarkable’s size (just as it is for computer display sizes. Really, is it good for anything people use anymore?). You can pinch to zoom, which is OK, or pre-process papers (with e.g. Briss) to reframe them to be more suitable, but that’s laborious.

The newer model, the _Remarkable Paper Pro_, might address both those issues: it’s bigger, has illumination, and has also added colour, which would be a nice-to-have. It’s also a lot more expensive.

I had considered selling on the tablet after I finished my PhD. My current plan, inspired to some extent by my former colleague Aleksey Shipilëv, who makes great use of his, is to have a go at using it more often, to see if it continues to provide value for me: more noodling out thoughts for work tasks, more drawings (e.g. plans for 3D models) and more reading of tech books.
jmtd.net
December 23, 2025 at 12:44 PM
Isoken Ibizugbe: Everybody Struggles
That’s right: everyone struggles. You could be working on a project only to find a mountain of new things to learn, or your code might keep failing until you start to doubt yourself. I feel like that sometimes, wondering if I’m good enough. But in those moments, I whisper to myself: _“You don’t know it yet; once you do, it will get easy.”_

While contributing to the **Debian openQA** project, there was so much to learn, from understanding what Debian actually is to learning the various installation methods and image types. I then had to tackle the installation and usage of openQA itself. I am incredibly grateful for the installation guide provided by Roland Clobus and the documentation on writing code for openQA.

### **Overcoming Technical Hurdles**

Even with amazing guides, I hit major roadblocks. Initially, I was using Windows with VirtualBox, but openQA couldn’t seem to run the tests properly. Despite my mentors (Roland and Phil) suggesting alternatives, the issues persisted. I actually installed openQA twice on VirtualBox and realized that if you miss even one small step in the installation, it becomes very difficult to move forward.

Eventually, I took the big step and **dual-booted my machine to Linux**. Even then, the challenges didn’t stop. My openQA Virtual Machine (VM) ran out of allocated space and froze, halting my testing. I reached out on the IRC chat and received the help I needed to get back on track.

### **My Research Line-up**

When I’m struggling for information, I follow a go-to first step for research, followed by other alternatives:

1. **Google:** This is my first stop. It helped me navigate the switch to a Linux OS and troubleshoot KVM connection issues for the VM. Whether it’s an Ubuntu forum or a technical blog, I skim through until I find what can help.
2. **The “Upstream” Documentation:** If Google doesn’t have the answer, I go straight to the official openQA documentation. This is a goldmine. It explains functions, how to use them, and lists usable test variables.
3. **The Debian openQA UI:** While working on the apps_startstop tests, I look at previous similar tests on openqa.debian.net/tests. I checked the “Settings” tab to see exactly what variables were used and how the test was configured.
4. **Salsa (Debian’s GitLab):** I sometimes reference the Salsa Debian openQA README and the developer guides: Getting started, and the developer docs on how to write tests.

I also had to learn the basics of the **Perl** programming language during the four-week contribution stage. While we don’t need to be Perl experts, I found it essential to understand the logic so I can explain my work to others. I’ve spent a lot of time studying the codebase, which is time-consuming but incredibly valuable.

For example, my apps_startstop test command originally used a long list of applications via ENTRYPOINT. I began to wonder if there was a more efficient way. With Roland’s guidance, I explored the main.pm file. This helped me understand how the apps_startstop function works and how it calls variables. I also noticed there are utility functions that are called in tests. I also check them and try to understand their function, so I know if I need them or not.

I know I still have a lot to learn, and yes, the doubt still creeps in sometimes. But I am encouraged by the support of my mentors and the fact that they believe in my ability to contribute to this project. If you’re struggling too, just remember: _you don’t know it yet; once you do, it will get easy._
isokenibizugbe.wordpress.com
December 23, 2025 at 10:42 AM
Jonathan McDowell: NanoKVM: I like it
I bought a NanoKVM. I’d heard some of the stories about how terrible it was beforehand, and some I didn’t learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

Let’s cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

Next, let me explain where I’m coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32 bit binary JNI blob). It was a 64 bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from “reputable” vendors. Add into that the fact the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd’s concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You’re giving it USB + HDMI access to a host on your network; if you’re worried about the microphone then you’re concentrating on the wrong bit here.

I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have _any_ external access, and it copes happily with being accessed as `http://localhost:8000/` (though you do need to choose MJPEG as the video mode; more forwarding or enabling HTTPS is needed for an H.264 WebRTC session).

IPv6 is enabled in the kernel. My test setup doesn’t have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.

My device reports:

```
Image version: v1.4.1
Application version: 2.2.9
```

That’s recent, but the GitHub releases page has 2.3.0 listed as more recent. Out of the box it’s listening on TCP port 80. SSH is not running, but there’s a toggle to turn it on and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you’d expect from port 80, is HTTP, but there’s a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN `localhost` but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.

As others have discussed, it does a bunch of DNS lookups, primarily for NTP servers but also for `cdn.sipeed.com`.
The DNS servers are hard coded:

```
~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5
```

This is actually restored on boot from `/boot/resolv.conf`, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of `pool.ntp.org` services in `/etc/ntp.conf` (this does not get restored on reboot, so can just be edited in place). I had `dnsmasq` on the Pi set up to hand out DNS + NTP servers, but both were ignored (though actually `udhcpc` does write the DNS details to `/etc/resolv.conf.dhcp`).

My assumption is the lookup to `cdn.sipeed.com` is for firmware updates (as I bought the NanoKVM cube it came fully installed, so no need for a `.so` download to make things work); when working DNS was provided I witnessed attempts to connect to HTTPS. I’ve not bothered digging further into this. I did go grab the `latest.zip` being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there’s an `iptables` setup (with `nftables` underneath) that’s not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it’s not going to try and connect out somewhere it shouldn’t (a sketch of what that might look like is at the end of this post).

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

```
~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"
```

The kernel reports itself as `5.10.4-tag-`. Somewhat ancient, but actually an LTS kernel. Except we’re now up to 5.10.247, so it obviously hasn’t been updated in some time. TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it’s not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

The SSH client/daemon is full-fat OpenSSH:

```
~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023
```

There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I’m a little surprised at aircrack (the device has no WiFi and even though I know there’s a variant that does, it’s not a standard debug tool the way tcpdump is), but there’s a copy of GNU Chess in there too, so it’s obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I’d have to open up the device to get to UART0. I’ve not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I’d expect. 1080p HDMI capture was fine. I’d have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that.
The keyboard defaults to a US layout, which caused some problems with the `|` symbol until I reconfigured the target machine not to expect a GB layout.

There’s also the potential to share out images via USB. I copied a Debian trixie netinst image to `/data` on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There’s also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn’t try that. There’s plenty of room for images:

```
~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data
```

The NanoKVM also appears as an RNDIS USB network device, with `udhcpd` running on the interface. IP forwarding is not enabled, and there are no masquerading rules set up, so this doesn’t give the target host access to the “management” LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I’d been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I’m happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn’t do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I’ve seen says I’ve got my money’s worth from it.
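On the default-deny outbound idea mentioned above, here is a minimal sketch of the sort of thing I have in mind, assuming the `nft` client is available on the device (the image already drives nftables via the iptables wrapper). The allowed ports are my guess at what the device legitimately needs (DNS, NTP, HTTPS for the Sipeed CDN); adjust to taste, and test from the web shell or console rather than over SSH in case you lock yourself out.

```bash
# Hypothetical default-deny outbound policy for the NanoKVM itself.
nft add table inet outfilter
nft add chain inet outfilter output '{ type filter hook output priority 0 ; policy drop ; }'

# Keep loopback and already-established connections (replies to inbound web/SSH) working.
nft add rule inet outfilter output oif lo accept
nft add rule inet outfilter output ct state established,related accept

# Things the device arguably needs: DNS, NTP, and HTTPS for the update CDN.
nft add rule inet outfilter output udp dport '{ 53, 123 }' accept
nft add rule inet outfilter output tcp dport 443 accept
```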
www.earth.li
December 22, 2025 at 6:40 PM
Hellen Chemtai: Overcoming Challenges in OpenQA Images Testing: My Internship Journey
Hello there. Today will be an in-depth review of my work with the Debian OpenQA images testing team. I will highlight the struggles that I have had so far during my Outreachy internship.

The OpenQA images testing team uses OpenQA to automatically install images, e.g. Gnome images. The images are then tested using tests written in Perl. My current tasks include speech install and capture all audio. I am also installing the Live Gnome image to Windows using BalenaEtcher, then testing it. A set of similar tasks will also be collaborated on.

While working on tasks, I have to go through the guides. I also learn how Perl works so as to edit and create tests. For every change made, I have to re-run the job in developer mode. I have to create needles that have matches and click co-ordinates. I have been stuck on some of these instances:

1. During installation, my job would not process a second HDD I had added. Roland Clobus, one of my mentors from the team, gave me a variable to work with. The solution was adding “NUMDISKS=2” as part of the command.
2. While working on a file, one of the needles would only work after file edits. Afterwards it would fail to “assert_and_click”. What kept bugging me was why it was passing after the first instance then failing after. The solution was adding a “wait_still_screen” to the code. This would ensure any screen changes loaded first before clicking happened.
3. I was stuck on finding the keys that would be needed for a context menu. I added “button => ‘right’” in the “assert_and_click” code.
4. Windows 11 installation was constantly failing. Roland pointed out he was working on it so I had to use Windows 10.
5. The Windows 10 Virtual Machine does not connect to the internet because of update restrictions. I had to switch to a Linux Virtual Machine for a download job.

When I get stuck, at times I seek guidance from the mentors. I still look for solutions in the documentation. Here is some of the documentation that has helped me get through some of these challenges.

1. Installation and creating tests guide – https://salsa.debian.org/qa/openqa/openqa-tests-debian/-/tree/debian/documentation. These guides help in installation and creation of tests.
2. OpenQA official documentation – https://open.qa/docs/. This documentation is very comprehensive. I used it recently to read about PUBLISH_HDD_n to save the updated version of a HDD_n I am using.
3. OpenQA test API documentation – https://open.qa/api/testapi/. This documentation shows me which parameters to use. I have used it recently to find how to right-click with the mouse and how to type special characters.
4. OpenQA variables file in Gitlab – https://salsa.debian.org/qa/openqa/openqa-tests-debian/-/blob/debian/VARIABLES.md. This has explanations of the most commonly used variables.
5. OpenQA repository in Gitlab – https://salsa.debian.org/qa/openqa/openqa-tests-debian. I go through the Perl tests, understand how they work, and then integrate my tests in a similar manner so that they look uniform.
6. OpenQA tests – https://openqa.debian.net/tests. I use these tests to find machine settings. I also find test sequences and the assets I would need to create similar tests. I used it recently to look at how graphical login was being implemented, then shutdown.

The list above is the documentation that is supposed to be used for these tests and for finding solutions. If I don’t find anything within these, I then ask Roland for help.
I try to go through the autoinst documentation linked from the Gitlab README.md file: https://salsa.debian.org/qa/openqa/openqa-tests-debian/-/blob/debian/README.md. They are also comprehensive but very technical. They might lead to confusion if the above are not understood first.

In general, I run into challenges, but there is always a means to solve them through the documentation provided. The mentors are also very helpful whenever we get challenges. I have gained team contribution skills, learned Perl, and learned how to test using OpenQA. I am still polishing how to make my needles better. My current progress is thus good. We learn one day at a time.
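Going back to the NUMDISKS example above: extra settings like that are passed as plain KEY=VALUE pairs when a job is cloned or re-posted. A rough sketch is below; the job id and host URLs are placeholders, and the exact `openqa-clone-job` flags may differ on your installation, so check its `--help` output first.

```bash
# Hypothetical example: re-run an existing openqa.debian.net job locally with a second disk.
# 1234567 and the hosts are placeholders, not a real job from my work.
openqa-clone-job --from https://openqa.debian.net \
  --host http://localhost \
  1234567 NUMDISKS=2
```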
hellenchemtai.wordpress.com
December 22, 2025 at 2:41 PM
Emmanuel Kasper: Configuring a mail transfer agent to interact with the Debian bug tracker
## Email interface of the Debian bug tracker

The main interface of the Debian bug tracker, at http://bugs.debian.org, is e-mail, and modifications are made to existing bugs by sending an email to an address like [email protected]. The web interface allows you to browse bugs, but any addition to the bug itself will require an email client.

This sounds a bit weird in 2025, as HTTP REST clients with OAuth access tokens are today the norm for command line tools interacting with online resources. However we should remember the Debian project goes back to 1993 and the bug tracker software, debbugs, was released in 1994. REST itself was first introduced in 2000, six years later.

In any case, using an email client to create or modify bug reports is not a bad idea per se:

* the internet mail protocol, SMTP, is a well known and standardized protocol defined in an IETF RFC.
* no need for account creation and authentication, you just need an email address to interact. There is a risk of spam, but in my experience this has been very low. When authentication is needed, Debian Developers _sign_ their work with their private GPG key.
* you can use the bug tracker using the interface of your choice: webmail, graphical mail clients like Thunderbird or Evolution, text clients like Mutt or Pine, or command line tools like **`bts`**.

## A system wide minimal Mail Transfer Agent to send mail

We can configure **`bts`** as an SMTP client, with username and password. In SMTP client mode, we would need to enter the SMTP settings from our mail service provider. The other option is to configure a _Mail Transfer Agent_ (MTA) which provides a system wide `sendmail` interface that all command line and automation tools can use to send email. For instance **`reportbug`** and **`git send-email`** are able to use the sendmail interface. Why a sendmail interface? Because sendmail used to be the default MTA of Unix back in the days, thus many programs sending mails expect something which looks like sendmail locally. A popular, maintained and packaged minimal MTA is **`msmtp`**; we are going to use it.

## msmtp installation and configuration

Installation is just an **`apt`** away:

```
# apt install msmtp msmtp-mta
# msmtp --version
msmtp version 1.8.23
```

You can follow this blog post to configure msmtp, including saving your mail account credentials in the Gnome keyring (a minimal example configuration is sketched at the end of this post). Once installed, you can verify that `msmtp-mta` created a sendmail symlink.

```
$ ls -l /usr/sbin/sendmail
lrwxrwxrwx 1 root root 12 16 avril 2025 /usr/sbin/sendmail -> ../bin/msmtp
```

`bts`, `git-send-email` and `reportbug` will pipe their output to `/usr/sbin/sendmail` and `msmtp` will send the email in the background.

## Testing with a simple mail client

Debian comes out of the box with a primitive mail client, **`bsd-mailx`**, that you can use to test your MTA set up. If you have configured msmtp correctly you can send an email to yourself using

```
$ echo "hello world" | mail -s "my mail subject" [email protected]
```

Now you can open bugs for Debian with `reportbug`, tag them with `bts` and send git formatted patches from the command line with `git send-email`.
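For reference, a minimal `~/.msmtprc` along the lines of the configuration section above might look like the sketch below. The host, account name and addresses are placeholders for your own provider's settings, and the `passwordeval`/`secret-tool` line assumes you have stored the password in the GNOME keyring with libsecret's `secret-tool`; a plain `password` line also works as long as the file stays at mode 600.

```bash
# Hypothetical ~/.msmtprc; replace smtp.example.com and the addresses with your provider's values.
cat > ~/.msmtprc <<'EOF'
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        ~/.msmtp.log

account        example
host           smtp.example.com
port           587
from           [email protected]
user           [email protected]
passwordeval   "secret-tool lookup host smtp.example.com service smtp"

account default : example
EOF
chmod 600 ~/.msmtprc
```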
00formicapunk00.wordpress.com
December 22, 2025 at 10:40 AM
Russell Coker: Samsung 65″ QN900C 8K TV
As a follow up from my last post about my 8K TV [1], I tested out a Samsung 65″ QN900C Neo QLED 8K that’s on sale in JB Hifi. According to the JB employee I spoke to, they are running out the last 8K TVs and have no plans to get more.

In my testing of that 8K TV, YouTube had a 3840*2160 viewport, which is better than the 1920*1080 of my Hisense TV. When running a web browser the codeshack page reported it as 1920*1080 with a 1.25* pixel density (presumably a configuration option) that gave a usable resolution of 1536*749.

The JB Hifi employee wouldn’t let me connect my own device via HDMI but said that it would work at 8K. I said “so if I buy it I can return it if it doesn’t do 8K HDMI?” and then he looked up the specs and found that it would only do 4K input on HDMI. It seems that actual 8K resolution might work on a Samsung streaming device, but that’s not very useful, particularly as there probably isn’t much 8K content on any streaming service. Basically that allegedly 8K Samsung TV only works at 4K at best.

It seems to be impossible to buy an 8K TV or monitor in Australia that will actually display 8K content. ASUS has a 6K 32″ monitor with 6016*3384 resolution for $2016 [2]. When accounting for inflation, $2016 wouldn’t be the most expensive monitor I’ve ever bought, and hopefully prices will continue to drop.

Rumour has it that there are 8K TVs available in China that actually take 8K input. Getting one to Australia might not be easy but it’s something that I will investigate. Also I’m trying to sell my allegedly 8K TV.

[1] https://etbe.coker.com.au/2025/11/25/edid-and-my-8k-tv/

[2] https://tinyurl.com/2arl3al7

Related posts:

1. Samsung Galaxy Note 3 In June last year I bought a Samsung Galaxy Note...
2. Samsung Galaxy Camera – a Quick Review I recently had a chance to briefly play with the...
3. Philips 438P1 43″ 4K Monitor I have just returned a Philips 438P1 43″ 4K Monitor...
etbe.coker.com.au
December 22, 2025 at 8:39 AM
C.J. Collier: I’m learning about perlguts today.
[ ](https://wp.colliertech.org/cj/wp-content/uploads/2025/12/im-learning-about-perlguts-today.png) ## 0.23 2025-12-20 commit be15aa25dea40aea66a8534143fb81b29d2e6c08 Author: C.J. Collier Date: Sat Dec 20 22:40:44 2025 +0000 Fixes C-level test infrastructure and adds more test cases for upb_to_sv conversions. - **Makefile.PL:** - Allow `extra_src` in `c_test_config.json` to be an array. - Add ASan flags to CCFLAGS and LDDLFLAGS for better debugging. - Corrected echo newlines in `test_c` target. - **c_test_config.json:** - Added missing type test files to `deps` and `extra_src` for `convert/sv_to_upb` and `convert/upb_to_sv` test runners. - **t/c/convert/upb_to_sv.c:** - Fixed a double free of `test_pool`. - Added missing includes for type test headers. - Updated test plan counts. - **t/c/convert/sv_to_upb.c:** - Added missing includes for type test headers. - Updated test plan counts. - Corrected Perl interpreter initialization. - **t/c/convert/types/**: - Added missing `test_util.h` include in new type test headers. - Completed the set of `upb_to_sv` test cases for all scalar types by adding optional and repeated tests for `sfixed32`, `sfixed64`, `sint32`, and `sint64`, and adding repeated tests to the remaining scalar type files. - **Documentation:** - Updated `01-xs-testing.md` with more debugging tips, including ASan usage and checking for double frees and typos. - Updated `xs_learnings.md` with details from the recent segfault. - Updated `llm-plan-execution-instructions.md` to emphasize debugging steps. ## 0.22 2025-12-19 commit 2c171d9a5027e0150eae629729c9104e7f6b9d2b Author: C.J. Collier Date: Fri Dec 19 23:41:02 2025 +0000 feat(perl,testing): Initialize C test framework and build system This commit sets up the foundation for the C-level tests and the build system for the Perl Protobuf module: 1. **Makefile.PL Enhancements:** * Integrates `Devel::PPPort` to generate `ppport.h` for better portability. * Object files now retain their path structure (e.g., `xs/convert/sv_to_upb.o`) instead of being flattened, improving build clarity. * The `MY::postamble` is significantly revamped to dynamically generate build rules for all C tests located in `t/c/` based on the `t/c/c_test_config.json` file. * C tests are linked against `libprotobuf_common.a` and use `ExtUtils::Embed` flags. * Added `JSON::MaybeXS` to `PREREQ_PM`. * The `test` target now also depends on the `test_c` target. 2. **C Test Infrastructure (`t/c/`): * Introduced `t/c/c_test_config.json` to configure individual C test builds, specifying dependencies and extra source files. * Created `t/c/convert/test_util.c` and `.h` for shared test functions like loading descriptors. * Initial `t/c/convert/upb_to_sv.c` and `t/c/convert/sv_to_upb.c` test runners. * Basic `t/c/integration/030_protobuf_coro.c` for Coro safety testing on core utils using `libcoro`. * Basic `t/c/integration/035_croak_test.c` for testing exception handling. * Basic `t/c/integration/050_convert.c` for integration testing conversions. 3. **Test Proto:** Updated `t/data/test.proto` with more field types for conversion testing and regenerated `test_descriptor.bin`. 4. **XS Test Harness (`t/c/upb-perl-test.h`):** Added `like_n` macro for length-aware regex matching. 5. **Documentation:** Updated architecture and plan documents to reflect the C test structure. 6. **ERRSV Testing:** Note that the C tests (`t/c/`) will primarily check *if* a `croak` occurs (i.e., that the exception path is taken), but will not assert on the string content of `ERRSV`. 
Reliably testing `$@` content requires the full Perl test environment with `Test::More`, which will be done in the `.t` files when testing the Perl API. This provides a solid base for developing and testing the XS and C components of the module. ## 0.21 2025-12-18 commit a8b6b6100b2cf29c6df1358adddb291537d979bc Author: C.J. Collier Date: Thu Dec 18 04:20:47 2025 +0000 test(C): Add integration tests for Milestone 2 components - Created t/c/integration/030_protobuf.c to test interactions between obj_cache, arena, and utils. - Added this test to t/c/c_test_config.json. - Verified that all C tests for Milestones 2 and 3 pass, including the libcoro-based stress test. ## 0.20 2025-12-18 commit 0fcad68680b1f700a83972a7c1c48bf3a6958695 Author: C.J. Collier Date: Thu Dec 18 04:14:04 2025 +0000 docs(plan): Add guideline review reminders to milestones - Added a "[ ] REFRESH: Review all documents in @perl/doc/guidelines/**" checklist item to the start of each component implementation milestone (C and Perl layers). - This excludes Integration Test milestones. ## 0.19 2025-12-18 commit 987126c4b09fcdf06967a98fa3adb63d7de59a34 Author: C.J. Collier Date: Thu Dec 18 04:05:53 2025 +0000 docs(plan): Add C-level and Perl-level Coro tests to milestones - Added checklist items for `libcoro`-based C tests (e.g., `t/c/integration/050_convert_coro.c`) to all C layer integration milestones (050 through 220). - Updated `030_Integration_Protobuf.md` to standardise checklist items for the existing `030_protobuf_coro.c` test. - Removed the single `xt/author/coro-safe.t` item from `010_Build.md`. - Added checklist items for Perl-level `Coro` tests (e.g., `xt/coro/240_arena.t`) to each Perl layer integration milestone (240 through 400). - Created `perl/t/c/c_test_config.json` to manage C test configurations externally. - Updated `perl/doc/architecture/testing/01-xs-testing.md` to describe both C-level `libcoro` and Perl-level `Coro` testing strategies. ## 0.18 2025-12-18 commit 6095a5a610401a6035a81429d0ccb9884d53687b Author: C.J. Collier Date: Thu Dec 18 02:34:31 2025 +0000 added coro testing to c layer milestones ## 0.17 2025-12-18 commit cc0aae78b1f7f675fc8a1e99aa876c0764ea1cce Author: C.J. Collier Date: Thu Dec 18 02:26:59 2025 +0000 docs(plan): Refine test coverage checklist items for SMARTness - Updated the "Tests provide full coverage" checklist items in C layer plan files (020, 040, 060, 080, 100, 120, 140, 160, 180, 200) to explicitly mention testing all public functions in the corresponding header files. - Expanded placeholder checklists in 140, 160, 180, 200. - Updated the "Tests provide full coverage" and "Add coverage checks" checklist items in Perl layer plan files (230, 250, 270, 290, 310, 330, 350, 370, 390) to be more specific about the scope of testing and the use of `Test::TestCoverage`. - Expanded Well-Known Types milestone (350) to detail each type. ## 0.16 2025-12-18 commit e4b601f14e3817a17b0f4a38698d981dd4cb2818 Author: C.J. Collier Date: Thu Dec 18 02:07:35 2025 +0000 docs(plan): Full refactoring of C and Perl plan files - Split both ProtobufPlan-C.md and ProtobufPlan-Perl.md into per-milestone files under the `perl/doc/plan/` directory. - Introduced Integration Test milestones after each component milestone in both C and Perl plans. - Numbered milestone files sequentially (e.g., 010_Build.md, 230_Perl_Arena.md). - Updated main ProtobufPlan-C.md and ProtobufPlan-Perl.md to act as Tables of Contents. 
- Ensured consistent naming for integration test files (e.g., `t/c/integration/030_protobuf.c`, `t/integration/260_descriptor_pool.t`). - Added architecture review steps to the end of all milestones. - Moved Coro safety test to C layer Milestone 1. - Updated Makefile.PL to support new test structure and added Coro. - Moved and split t/c/convert.c into t/c/convert/*.c. - Moved other t/c/*.c tests into t/c/protobuf/*.c. - Deleted old t/c/convert.c. ## 0.15 2025-12-17 commit 649cbacf03abb5e7293e3038bb451c0406e9d0ce Author: C.J. Collier Date: Wed Dec 17 23:51:22 2025 +0000 docs(plan): Refactor and reset ProtobufPlan.md - Split the plan into ProtobufPlan-C.md and ProtobufPlan-Perl.md. - Reorganized milestones to clearly separate C layer and Perl layer development. - Added more granular checkboxes for each component: - C Layer: Create test, Test coverage, Implement, Tests pass. - Perl Layer: Create test, Test coverage, Implement Module/XS, Tests pass, C-Layer adjustments. - Reset all checkboxes to `[ ]` to prepare for a full audit. - Updated status in architecture/api and architecture/core documents to "Not Started". feat(obj_cache): Add unregister function and enhance tests - Added `protobuf_unregister_object` to `xs/protobuf/obj_cache.c`. - Updated `xs/protobuf/obj_cache.h` with the new function declaration. - Expanded tests in `t/c/protobuf_obj_cache.c` to cover unregistering, overwriting keys, and unregistering non-existent keys. - Corrected the test plan count in `t/c/protobuf_obj_cache.c` to 17. ## 0.14 2025-12-17 commit 40b6ad14ca32cf16958d490bb575962f88d868a1 Author: C.J. Collier Date: Wed Dec 17 23:18:27 2025 +0000 feat(arena): Complete C layer for Arena wrapper This commit finalizes the C-level implementation for the Protobuf::Arena wrapper. - Adds `PerlUpb_Arena_Destroy` for proper cleanup from Perl's DEMOLISH. - Enhances error checking in `PerlUpb_Arena_Get`. - Expands C-level tests in `t/c/protobuf_arena.c` to cover memory allocation on the arena and lifecycle through `PerlUpb_Arena_Destroy`. - Corrects embedded Perl initialization in the C test. docs(plan): Refactor ProtobufPlan.md - Restructures the development plan to clearly separate "C Layer" and "Perl Layer" tasks within each milestone. - This aligns the plan with the "C-First Implementation Strategy" and improves progress tracking. ## 0.13 2025-12-17 commit c1e566c25f62d0ae9f195a6df43b895682652c71 Author: C.J. Collier Date: Wed Dec 17 22:00:40 2025 +0000 refactor(perl): Rename C tests and enhance Makefile.PL - Renamed test files in `t/c/` to better match the `xs` module structure: - `01-cache.c` -> `protobuf_obj_cache.c` - `02-arena.c` -> `protobuf_arena.c` - `03-utils.c` -> `protobuf_utils.c` - `04-convert.c` -> `convert.c` - `load_test.c` -> `upb_descriptor_load.c` - Updated `perl/Makefile.PL` to reflect the new test names in `MY::postamble`'s `$c_test_config`. - Refactored the `$c_test_config` generation in `Makefile.PL` to reduce repetition by using a default flags hash and common dependencies array. - Added a `fail()` macro to `perl/t/c/upb-perl-test.h` for consistency. - Modified `t/c/upb_descriptor_load.c` to use the `t/c/upb-perl-test.h` macros, making its output consistent with other C tests. - Added a skeleton for `t/c/convert.c` to test the conversion functions. - Updated documentation in `ProtobufPlan.md` and `architecture/testing/01-xs-testing.md` to reflect new test names. ## 0.12 2025-12-17 commit d8cb5dd415c6c129e71cd452f78e29de398a82c9 Author: C.J. 
Collier Date: Wed Dec 17 20:47:38 2025 +0000 feat(perl): Refactor XS code into subdirectories This commit reorganizes the C code in the `perl/xs/` directory into subdirectories, mirroring the structure of the Python UPB extension. This enhances modularity and maintainability. - Created subdirectories for each major component: `convert`, `descriptor`, `descriptor_containers`, `descriptor_pool`, `extension_dict`, `map`, `message`, `protobuf`, `repeated`, and `unknown_fields`. - Created skeleton `.h` and `.c` files within each subdirectory to house the component-specific logic. - Updated top-level component headers (e.g., `perl/xs/descriptor.h`) to include the new sub-headers. - Updated top-level component source files (e.g., `perl/xs/descriptor.c`) to include their main header and added stub initialization functions (e.g., `PerlUpb_InitDescriptor`). - Moved code from the original `perl/xs/protobuf.c` to new files in `perl/xs/protobuf/` (arena, obj_cache, utils). - Moved code from the original `perl/xs/convert.c` to new files in `perl/xs/convert/` (upb_to_sv, sv_to_upb). - Updated `perl/Makefile.PL` to use a glob (`xs/*/*.c`) to find the new C source files in the subdirectories. - Added `perl/doc/architecture/core/07-xs-file-organization.md` to document the new structure. - Updated `perl/doc/ProtobufPlan.md` and other architecture documents to reference the new organization. - Corrected self-referential includes in the newly created .c files. This restructuring provides a solid foundation for further development and makes it easier to port logic from the Python implementation. ## 0.11 2025-12-17 commit cdedcd13ded4511b0464f5d3bdd72ce6d34e73fc Author: C.J. Collier Date: Wed Dec 17 19:57:52 2025 +0000 feat(perl): Implement C-first testing and core XS infrastructure This commit introduces a significant refactoring of the Perl XS extension, adopting a C-first development approach to ensure a robust foundation. Key changes include: - **C-Level Testing Framework:** Established a C-level testing system in `t/c/` with a dedicated Makefile, using an embedded Perl interpreter. Initial tests cover the object cache (`01-cache.c`), arena wrapper (`02-arena.c`), and utility functions (`03-utils.c`). - **Core XS Infrastructure:** - Implemented a global object cache (`xs/protobuf.c`) to manage Perl wrappers for UPB objects, using weak references. - Created an `upb_Arena` wrapper (`xs/protobuf.c`). - Consolidated common XS helper functions into `xs/protobuf.h` and `xs/protobuf.c`. - **Makefile.PL Enhancements:** Updated to support building and linking C tests, incorporating flags from `ExtUtils::Embed`, and handling both `.c` and `.cc` source files. - **XS File Reorganization:** Restructured XS files to mirror the Python UPB extension's layout (e.g., `message.c`, `descriptor.c`). Removed older, monolithic `.xs` files. - **Typemap Expansion:** Added extensive typemap entries in `perl/typemap` to handle conversions between Perl objects and various `const upb_*Def*` pointers. - **Descriptor Tests:** Added a new test suite `t/02-descriptor.t` to validate descriptor loading and accessor methods. - **Documentation:** Updated development plans and guidelines (`ProtobufPlan.md`, `xs_learnings.md`, etc.) to reflect the C-first strategy, new testing methods, and lessons learned. - **Build Cleanup:** Removed `ppport.h` from `.gitignore` as it's no longer used, due to `-DPERL_NO_PPPORT` being set in `Makefile.PL`. 
This C-first approach allows for more isolated and reliable testing of the core logic interacting with the UPB library before higher-level Perl APIs are built upon it. ## 0.10 2025-12-17 commit 1ef20ade24603573905cb0376670945f1ab5d829 Author: C.J. Collier Date: Wed Dec 17 07:08:29 2025 +0000 feat(perl): Implement C-level tests and core XS utils This commit introduces a C-level testing framework for the XS layer and implements key components: 1. **C-Level Tests (`t/c/`)**: * Added `t/c/Makefile` to build standalone C tests. * Created `t/c/upb-perl-test.h` with macros for TAP-compliant C tests (`plan`, `ok`, `is`, `is_string`, `diag`). * Implemented `t/c/01-cache.c` to test the object cache. * Implemented `t/c/02-arena.c` to test `Protobuf::Arena` wrappers. * Implemented `t/c/03-utils.c` to test string utility functions. * Corrected include paths and diagnostic messages in C tests. 2. **XS Object Cache (`xs/protobuf.c`)**: * Switched to using stringified pointers (`%p`) as hash keys for stability. * Fixed a critical double-free bug in `PerlUpb_ObjCache_Delete` by removing an extra `SvREFCNT_dec` on the lookup key. 3. **XS Arena Wrapper (`xs/protobuf.c`)**: * Corrected `PerlUpb_Arena_New` to use `newSVrv` and `PTR2IV` for opaque object wrapping. * Corrected `PerlUpb_Arena_Get` to safely unwrap the arena pointer. 4. **Makefile.PL (`perl/Makefile.PL`)**: * Added `-Ixs` to `INC` to allow C tests to find `t/c/upb-perl-test.h` and `xs/protobuf.h`. * Added `LIBS` to link `libprotobuf_common.a` into the main `Protobuf.so`. * Added C test targets `01-cache`, `02-arena`, `03-utils` to the test config in `MY::postamble`. 5. **Protobuf.pm (`perl/lib/Protobuf.pm`)**: * Added `use XSLoader;` to load the compiled XS code. 6. **New files `xs/util.h`**: * Added initial type conversion function. These changes establish a foundation for testing the C-level interface with UPB and fix crucial bugs in the object cache implementation. ## 0.09 2025-12-17 commit 07d61652b032b32790ca2d3848243f9d75ea98f4 Author: C.J. Collier Date: Wed Dec 17 04:53:34 2025 +0000 feat(perl): Build system and C cache test for Perl XS This commit introduces the foundational pieces for the Perl XS implementation, focusing on the build system and a C-level test for the object cache. - **Makefile.PL:** - Refactored C test compilation rules in `MY::postamble` to use a hash (`$c_test_config`) for better organization and test-specific flags. - Integrated `ExtUtils::Embed` to provide necessary compiler and linker flags for embedding the Perl interpreter, specifically for the `t/c/01-cache.c` test. - Correctly constructs the path to the versioned Perl library (`libperl.so.X.Y.Z`) using `$Config{archlib}` and `$Config{libperl}` to ensure portability. - Removed `VERSION_FROM` and `ABSTRACT_FROM` to avoid dependency on `.pm` files for now. - **C Cache Test (t/c/01-cache.c):** - Added a C test to exercise the object cache functions implemented in `xs/protobuf.c`. - Includes tests for adding, getting, deleting, and weak reference behavior. - **XS Cache Implementation (xs/protobuf.c, xs/protobuf.h):** - Implemented `PerlUpb_ObjCache_Init`, `PerlUpb_ObjCache_Add`, `PerlUpb_ObjCache_Get`, `PerlUpb_ObjCache_Delete`, and `PerlUpb_ObjCache_Destroy`. - Uses a Perl hash (`HV*`) for the cache. - Keys are string representations of the C pointers, created using `snprintf` with `"%llx"`. - Values are weak references (`sv_rvweaken`) to the Perl objects (`SV*`). - `PerlUpb_ObjCache_Get` now correctly returns an incremented reference to the original SV, not a copy. 
- `PerlUpb_ObjCache_Destroy` now clears the hash before decrementing its refcount. - **t/c/upb-perl-test.h:** - Updated `is_sv` to perform direct pointer comparison (`got == expected`). - **Minor:** Added `util.h` (currently empty), updated `typemap`. These changes establish a working C-level test environment for the XS components. ## 0.08 2025-12-17 commit d131fd22ea3ed8158acb9b0b1fe6efd856dc380e Author: C.J. Collier Date: Wed Dec 17 02:57:48 2025 +0000 feat(perl): Update docs and core XS files - Explicitly add TDD cycle to ProtobufPlan.md. - Clarify mirroring of Python implementation in upb-interfacing.md for both C and Perl layers. - Branch and adapt python/protobuf.h and python/protobuf.c to perl/xs/protobuf.h and perl/xs/protobuf.c, including the object cache implementation. Removed old cache.* files. - Create initial C test for the object cache in t/c/01-cache.c. ## 0.07 2025-12-17 commit 56fd6862732c423736a2f9a9fb1a2816fc59e9b0 Author: C.J. Collier Date: Wed Dec 17 01:09:18 2025 +0000 feat(perl): Align Perl UPB architecture docs with Python Updates the Perl Protobuf architecture documents to more closely align with the design and implementation strategies used in the Python UPB extension. Key changes: - **Object Caching:** Mandates a global, per-interpreter cache using weak references for all UPB-derived objects, mirroring Python's `PyUpb_ObjCache`. - **Descriptor Containers:** Introduces a new document outlining the plan to use generic XS container types (Sequence, ByNameMap, ByNumberMap) with vtables to handle collections of descriptors, similar to Python's `descriptor_containers.c`. - **Testing:** Adds a note to the testing strategy to port relevant test cases from the Python implementation to ensure feature parity. ## 0.06 2025-12-17 commit 6009ce6ab64eccce5c48729128e5adf3ef98e9ae Author: C.J. Collier Date: Wed Dec 17 00:28:20 2025 +0000 feat(perl): Implement object caching and fix build This commit introduces several key improvements to the Perl XS build system and core functionality: 1. **Object Caching:** * Introduces `xs/protobuf.c` and `xs/protobuf.h` to implement a caching mechanism (`protobuf_c_to_perl_obj`) for wrapping UPB C pointers into Perl objects. This uses a hash and weak references to ensure object identity and prevent memory leaks. * Updates the `typemap` to use `protobuf_c_to_perl_obj` for `upb_MessageDef *` output, ensuring descriptor objects are cached. * Corrected `sv_weaken` to the correct `sv_rvweaken` function. 2. **Makefile.PL Enhancements:** * Switched to using the Bazel-generated UPB descriptor sources from `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`. * Updated `INC` paths to correctly locate the generated headers. * Refactored `MY::dynamic_lib` to ensure the static library `libprotobuf_common.a` is correctly linked into each generated `.so` module, resolving undefined symbol errors. * Overrode `MY::test` to use `prove -b -j$(nproc) t/*.t xt/*.t` for running tests. * Cleaned up `LIBS` and `LDDLFLAGS` usage. 3. **Documentation:** * Updated `ProtobufPlan.md` to reflect the current status and design decisions. * Reorganized architecture documents into subdirectories. * Added `object-caching.md` and `c-perl-interface.md`. * Updated `llm-guidance.md` with notes on `upb/upb.h` and `sv_rvweaken`. 4. **Testing:** * Fixed `xt/03-moo_immutable.t` to skip tests if no Moo modules are found. This resolves the build issues and makes the core test suite pass. 
## 0.05 2025-12-16 commit 177d2f3b2608b9d9c415994e076a77d8560423b8 Author: C.J. Collier Date: Tue Dec 16 19:51:36 2025 +0000 Refactor: Rename namespace to Protobuf, build system and doc updates This commit refactors the primary namespace from `ProtoBuf` to `Protobuf` to align with the style guide. This involves renaming files, directories, and updating package names within all Perl and XS files. **Namespace Changes:** * Renamed `perl/lib/ProtoBuf` to `perl/lib/Protobuf`. * Moved and updated `ProtoBuf.pm` to `Protobuf.pm`. * Moved and updated `ProtoBuf::Descriptor` to `Protobuf::Descriptor` (.pm & .xs). * Removed other `ProtoBuf::*` stubs (Arena, DescriptorPool, Message). * Updated `MODULE` and `PACKAGE` in `Descriptor.xs`. * Updated `NAME`, `*_FROM` in `perl/Makefile.PL`. * Replaced `ProtoBuf` with `Protobuf` throughout `perl/typemap`. * Updated namespaces in test files `t/01-load-protobuf-descriptor.t` and `t/02-descriptor.t`. * Updated namespaces in all documentation files under `perl/doc/`. * Updated paths in `perl/.gitignore`. **Build System Enhancements (Makefile.PL):** * Included `xs/*.c` files in the common object files list. * Added `-I.` to the `INC` paths. * Switched from `MYEXTLIB` to `LIBS => ['-L$(CURDIR) -lprotobuf_common']` for linking. * Removed custom keys passed to `WriteMakefile` for postamble. * `MY::postamble` now sources variables directly from the main script scope. * Added `all :: ${common_lib}` dependency in `MY::postamble`. * Added `t/c/load_test.c` compilation rule in `MY::postamble`. * Updated `clean` target to include `blib`. * Added more modules to `TEST_REQUIRES`. * Removed the explicit `PM` and `XS` keys from `WriteMakefile`, relying on `XSMULTI => 1`. **New Files:** * `perl/lib/Protobuf.pm` * `perl/lib/Protobuf/Descriptor.pm` * `perl/lib/Protobuf/Descriptor.xs` * `perl/t/01-load-protobuf-descriptor.t` * `perl/t/02-descriptor.t` * `perl/t/c/load_test.c`: Standalone C test for UPB. * `perl/xs/types.c` & `perl/xs/types.h`: For Perl/C type conversions. * `perl/doc/architecture/upb-interfacing.md` * `perl/xt/03-moo_immutable.t`: Test for Moo immutability. **Deletions:** * Old test files: `t/00_load.t`, `t/01_basic.t`, `t/02_serialize.t`, `t/03_message.t`, `t/04_descriptor_pool.t`, `t/05_arena.t`, `t/05_message.t`. * Removed `lib/ProtoBuf.xs` as it's not needed with `XSMULTI`. **Other:** * Updated `test_descriptor.bin` (binary change). * Significant content updates to markdown documentation files in `perl/doc/architecture` and `perl/doc/internal` reflecting the new architecture and learnings. ## 0.04 2025-12-14 commit 92de5d482c8deb9af228f4b5ce31715d3664d6ee Author: C.J. Collier Date: Sun Dec 14 21:28:19 2025 +0000 feat(perl): Implement Message object creation and fix lifecycles This commit introduces the basic structure for `ProtoBuf::Message` object creation, linking it with `ProtoBuf::Descriptor` and `ProtoBuf::DescriptorPool`, and crucially resolves a SEGV by fixing object lifecycle management. Key Changes: 1. **`ProtoBuf::Descriptor`:** Added `_pool` attribute to hold a strong reference to the parent `ProtoBuf::DescriptorPool`. This is essential to prevent the pool and its C `upb_DefPool` from being garbage collected while a descriptor is still in use. 2. **`ProtoBuf::DescriptorPool`:** * `find_message_by_name`: Now passes the `$self` (the pool object) to the `ProtoBuf::Descriptor` constructor to establish the lifecycle link. * XSUB `pb_dp_find_message_by_name`: Updated to accept the pool `SV*` and store it in the descriptor's `_pool` attribute. 
* XSUB `_load_serialized_descriptor_set`: Renamed to avoid clashing with the Perl method name. The Perl wrapper now correctly calls this internal XSUB. * `DEMOLISH`: Made safer by checking for attribute existence. 3. **`ProtoBuf::Message`:** * Implemented using Moo with lazy builders for `_upb_arena` and `_upb_message`. * `_descriptor` is a required argument to `new()`. * XS functions added for creating the arena (`pb_msg_create_arena`) and the `upb_Message` (`pb_msg_create_upb_message`). * `pb_msg_create_upb_message` now extracts the `upb_MessageDef*` from the descriptor and uses `upb_MessageDef_MiniTable()` to get the minitable for `upb_Message_New()`. * `DEMOLISH`: Added to free the message's arena. 4. **`Makefile.PL`:** * Added `-g` to `CCFLAGS` for debugging symbols. * Added Perl CORE include path to `MY::postamble`'s `base_flags`. 5. **Tests:** * `t/04_descriptor_pool.t`: Updated to check the structure of the returned `ProtoBuf::Descriptor`. * `t/05_message.t`: Now uses a descriptor obtained from a real pool to test `ProtoBuf::Message->new()`. 6. **Documentation:** * Updated `ProtobufPlan.md` to reflect progress. * Updated several files in `doc/architecture/` to match the current implementation details, especially regarding arena management and object lifecycles. * Added `doc/internal/development_cycle.md` and `doc/internal/xs_learnings.md`. With these changes, the SEGV is resolved, and message objects can be successfully created from descriptors. ## 0.03 2025-12-14 commit 6537ad23e93680c2385e1b571d84ed8dbe2f68e8 Author: C.J. Collier Date: Sun Dec 14 20:23:41 2025 +0000 Refactor(perl): Object-Oriented DescriptorPool with Moo This commit refactors the `ProtoBuf::DescriptorPool` to be fully object-oriented using Moo, and resolves several issues related to XS, typemaps, and test data. Key Changes: 1. **Moo Object:** `ProtoBuf::DescriptorPool.pm` now uses `Moo` to define the class. The `upb_DefPool` pointer is stored as a lazy attribute `_upb_defpool`. 2. **XS Lifecycle:** `DescriptorPool.xs` now has `pb_dp_create_pool` called by the Moo builder and `pb_dp_free_pool` called from `DEMOLISH` to manage the `upb_DefPool` lifecycle per object. 3. **Typemap:** The `perl/typemap` file has been significantly updated to handle the conversion between the `ProtoBuf::DescriptorPool` Perl object and the `upb_DefPool *` C pointer. This includes: * Mapping `upb_DefPool *` to `T_PTR`. * An `INPUT` section for `ProtoBuf::DescriptorPool` to extract the pointer from the object's hash, triggering the lazy builder if needed via `call_method`. * An `OUTPUT` section for `upb_DefPool *` to convert the pointer back to a Perl integer, used by the builder. 4. **Method Renaming:** `add_file_descriptor_set_binary` is now `load_serialized_descriptor_set`. 5. **Test Data:** * Added `perl/t/data/test.proto` with a sample message and enum. * Generated `perl/t/data/test_descriptor.bin` using `protoc`. * Removed `t/data/` from `.gitignore` to ensure test data is versioned. 6. **Test Update:** `t/04_descriptor_pool.t` is updated to use the new OO interface, load the generated descriptor set, and check for message definitions. 7. **Build Fixes:** * Corrected `#include` paths in `DescriptorPool.xs` to be relative to the `upb/` directory (e.g., `upb/wire/decode.h`). * Added `-I../upb` to `CCFLAGS` in `Makefile.PL`. * Reordered `INC` paths in `Makefile.PL` to prioritize local headers. 
**Note:** While tests now pass in some environments, a SEGV issue persists in `make test` runs, indicating a potential memory or lifecycle issue within the XS layer that needs further investigation. ## 0.02 2025-12-14 commit 6c9a6f1a5f774dae176beff02219f504ea3a6e07 Author: C.J. Collier Date: Sun Dec 14 20:13:09 2025 +0000 Fix(perl): Correct UPB build integration and generated file handling This commit resolves several issues to achieve a successful build of the Perl extension: 1. **Use Bazel Generated Files:** Switched from compiling UPB's stage0 descriptor.upb.c to using the Bazel-generated `descriptor.upb.c` and `descriptor.upb_minitable.c` located in `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`. 2. **Updated Include Paths:** Added the `bazel-bin` path to `INC` in `WriteMakefile` and to `base_flags` in `MY::postamble` to ensure the generated headers are found during both XS and static library compilation. 3. **Removed Stage0:** Removed references to `UPB_STAGE0_DIR` and no longer include headers or source files from `upb/reflection/stage0/`. 4. **-fPIC:** Explicitly added `-fPIC` to `CCFLAGS` in `WriteMakefile` and ensured `$(CCFLAGS)` is used in the custom compilation rules in `MY::postamble`. This guarantees all object files in the static library are compiled with position-independent code, resolving linker errors when creating the shared objects for the XS modules. 5. **Refined UPB Sources:** Used `File::Find` to recursively find UPB C sources, excluding `/conformance/` and `/reflection/stage0/` to avoid conflicts and unnecessary compilations. 6. **Arena Constructor:** Modified `ProtoBuf::Arena::pb_arena_new` XSUB to accept the class name argument passed from Perl, making it a proper constructor. 7. **.gitignore:** Added patterns to `perl/.gitignore` to ignore generated C files from XS (`lib/*.c`, `lib/ProtoBuf/*.c`), the copied `src_google_protobuf_descriptor.pb.cc`, and the `t/data` directory. 8. **Build Documentation:** Updated `perl/doc/architecture/upb-build-integration.md` to reflect the new build process, including the Bazel prerequisite, include paths, `-fPIC` usage, and `File::Find`. Build Steps: 1. `bazel build //src/google/protobuf:descriptor_upb_proto` (from repo root) 2. `cd perl` 3. `perl Makefile.PL` 4. `make` 5. `make test` (Currently has expected failures due to missing test data implementation). ## 0.01 2025-12-14 commit 3e237e8a26442558c94075766e0d4456daaeb71d Author: C.J. Collier Date: Sun Dec 14 19:34:28 2025 +0000 feat(perl): Initialize Perl extension scaffold and build system This commit introduces the `perl/` directory, laying the groundwork for the Perl Protocol Buffers extension. It includes the essential build files, linters, formatter configurations, and a vendored Devel::PPPort for XS portability. Key components added: * **`Makefile.PL`**: The core `ExtUtils::MakeMaker` build script. It's configured to: * Build a static library (`libprotobuf_common.a`) from UPB, UTF8_Range, and generated protobuf C/C++ sources. * Utilize `XSMULTI => 1` to create separate shared objects for `ProtoBuf`, `ProtoBuf::Arena`, and `ProtoBuf::DescriptorPool`. * Link each XS module against the common static library. * Define custom compilation rules in `MY::postamble` to handle C vs. C++ flags and build the static library. * Set up include paths for the project root, UPB, and other dependencies. * **XS Stubs (`.xs` files)**: * `lib/ProtoBuf.xs`: Placeholder for the main module's XS functions. 
* `lib/ProtoBuf/Arena.xs`: XS interface for `upb_Arena` management. * `lib/ProtoBuf/DescriptorPool.xs`: XS interface for `upb_DefPool` management. * **Perl Module Stubs (`.pm` files)**: * `lib/ProtoBuf.pm`: Main module, loads XS. * `lib/ProtoBuf/Arena.pm`: Perl class for Arenas. * `lib/ProtoBuf/DescriptorPool.pm`: Perl class for Descriptor Pools. * `lib/ProtoBuf/Message.pm`: Base class for messages (TBD). * **Configuration Files**: * `.gitignore`: Ignores build artifacts, editor files, etc. * `.perlcriticrc`: Configures Perl::Critic for static analysis. * `.perltidyrc`: Configures perltidy for code formatting. * **`Devel::PPPort`**: Vendored version 3.72 to generate `ppport.h` for XS compatibility across different Perl versions. * **`typemap`**: Custom typemap for XS argument/result conversion. * **Documentation (`doc/`)**: Initial architecture and plan documents. This provides a solid foundation for developing the UPB-based Perl extension.
wp.c9h.org
December 22, 2025 at 2:37 AM
Ian Jackson: Debian’s git transition
tl;dr: There is a Debian git transition plan. It’s going OK so far but we need help, especially with outreach and updating Debian’s documentation. * Goals of the Debian git transition project * Achievements so far, and current status * Core engineering principle * Correspondence between dsc and git * Patches-applied vs patches-unapplied * Consequences, some of which are annoying * Distributing the source code as git * Tracking the relevant git data, when changes are made in the legacy Archive * Why *.dgit.debian.org is not Salsa * Roadmap * In progress * Future Technology * Mindshare and adoption - please help! * A rant about publishing the source code * Documentation * Personnel * Thanks # Goals of the Debian git transition project 0. **Everyone who interacts with Debian source code should be able to do so entirely in git.** That means, more specifically: 1. All examination and edits to the source should be performed via normal git operations. 2. Source code should be transferred and exchanged as git data, not tarballs. git should be the canonical form everywhere. 3. Upstream git histories should be re-published, traceably, as part of formal git releases published by Debian. 4. No-one should have to learn about Debian Source Packages, which are bizarre, and have been obsoleted by modern version control. This is very ambitious, but we have come a long way! ## Achievements so far, and current status We have come a very long way. But, there is still much to do - especially, the git transition team **needs your help with adoption, developer outreach, and developer documentation overhaul.** We’ve made big strides towards goals 1 and 4. Goal 2 is partially achieved: we currently have dual running. Goal 3 is within our reach but depends on widespread adoption of tag2upload (and/or dgit push). Downstreams and users can obtain the source code of any Debian package in git form (dgit clone, 2013). They can then work with this source code completely in git, including building binaries, merging new versions, even automatically (eg Raspbian, 2016), and all without having to deal with source packages at all (eg Wikimedia, 2025). A Debian maintainer can maintain their own package entirely in git. They can obtain upstream source code from git, and do their packaging work in git (`git-buildpackage`, 2006). Every Debian maintainer can (and should!) release their package _from git_ reliably and in a standard form (dgit push, 2013; tag2upload, 2025). This is not only more principled, but also more convenient, and with better UX, than pre-dgit tooling like `dput`. Indeed a Debian maintainer can now often release their changes to Debian, from git, using _only_ git branches (so no tarballs). Releasing to Debian can be simply pushing a signed tag (tag2upload, 2025). A Debian maintainer can maintain a stack of changes to upstream source code in git (gbp pq, 2009). They can even maintain such a delta series as a rebasing git branch, directly buildable, and use normal `git rebase` style operations to edit their changes (git-dpm, 2010; git-debrebase, 2018). An authorised Debian developer can do a modest update to _any_ package in Debian, even one maintained by someone else, working entirely in git in a standard and convenient way (dgit, 2013). Debian contributors can share their work-in-progress on git forges and collaborate using merge requests, git-based code review, and so on. (Alioth, 2003; Salsa, 2018.)
# Core engineering principle The Debian git transition project is based on one core engineering principle: **Every Debian Source Package can be losslessly converted to and from git.** In order to _transition_ away from Debian Source Packages, we need to _gateway_ between the old `dsc` approach, and the new git approach. This gateway obviously needs to be bidirectional: source packages uploaded with legacy tooling like `dput` need to be imported into a canonical git representation; and of course git branches prepared by developers need to be converted to source packages for the benefit of legacy downstream systems (such as the Debian Archive and `apt source`). This bidirectional gateway is implemented in `src:dgit`, and is allowing us to gradually replace dsc-based parts of the Debian system with git-based ones. ## Correspondence between dsc and git A faithful bidirectional gateway must define an invariant: **The canonical git tree, corresponding to a .dsc, is the tree resulting from `dpkg-source -x`**. This canonical form is sometimes called the “dgit view”. It’s sometimes not the same as the maintainer’s git branch, because many maintainers are still working with “patches-unapplied” git branches. More on this below. (For `3.0 (quilt)` .dscs, the canonical git tree doesn’t include the quilt `.pc` directory.) ## Patches-applied vs patches-unapplied The canonical git format is “patches applied”. That is: **If Debian has modified the upstream source code, a normal git clone of the canonical branch gives the modified source tree, ready for reading and building.** Many Debian maintainers keep their packages in a different git branch format, where the changes made by Debian, to the upstream source code, are in actual `patch` files in a `debian/patches/` subdirectory. Patches-applied has a number of important advantages over patches-unapplied: * **It is familiar to, and doesn’t trick, outsiders to Debian**. Debian insiders radically underestimate how weird “patches-unapplied” is. Even expert software developers can get very confused or even accidentally build binaries without security patches! * Making changes can be done with just normal git commands, eg `git commit`. Many Debian insiders working with patches-unapplied are still using `quilt(1)`, a footgun-rich contraption for working with patch files! * When developing, one can make changes to upstream code, and to Debian packaging, together, without ceremony. There is no need to switch back and forth between patch queue and packaging branches (as with `gbp pq`), no need to “commit” patch files, etc. One can always edit every file and commit it with `git commit`. The downside is that, with the (bizarre) `3.0 (quilt)` source format, the patch files in `debian/patches/` must somehow be kept up to date. Nowadays though, tools like `git-debrebase` and `git-dpm` (and dgit for NMUs) make it very easy to work with patches-applied git branches. `git-debrebase` can deal very ergonomically even with big patch stacks. (For smaller packages which usually have no patches, plain `git merge` with an upstream git branch, and a much simpler dsc format, sidesteps the problem entirely.) ### Prioritising Debian’s users (and other outsiders) We want everyone to be able to share and modify the software that they interact with. That means we should make source code truly accessible, on the user’s terms. Many of Debian’s processes assume everyone is an insider.
It’s okay that there are Debian insiders and that people feel part of something that they worked hard to become involved with. But lack of perspective can lead to software which fails to uphold our values. Our source code practices — in particular, our determination to share properly (and systematically) — are a key part of what makes Debian worthwhile at all. Like Debian’s installer, we want our source code to be useable by Debian outsiders. This is why we have chosen to privilege a git branch format which is more familiar to the world at large, even if it’s less popular in Debian. ## Consequences, some of which are annoying The requirement that the conversion be _bidirectional_, _lossless_, and _context-free_ can be inconvenient. For example, we cannot support `.gitattributes` which modify files during git checkin and checkout. `.gitattributes` cause the meaning of a git tree to depend on the context, in possibly arbitrary ways, so the conversion from git to source package wouldn’t be stable. And, worse, some source packages might not be representable in git at all. Another example: Maintainers often have existing git branches for their packages, generated with pre-dgit tooling which is less careful and less principled than ours. That can result in discrepancies between git and dsc, which need to be resolved before a proper git-based upload can succeed. That some maintainers use patches-applied, and some patches-unapplied, means that there _has_ to be some kind of conversion to a standard git representation. Choosing the less-popular patches-applied format as the canonical form means that _many_ packages need their git representation converted. It also means that user- and outsider-facing branches from `{browse,git}.dgit.d.o` and `dgit clone` are not always compatible with maintainer branches on Salsa. User-contributed changes need cherry-picking rather than merging, or conversion back to the maintainer format. The good news is that dgit can automate much of this, and the manual parts are usually easy git operations. # Distributing the source code as git Our source code management should be normal, modern, and based on git. That means the Debian Archive is obsolete and needs to be replaced with a set of git repositories. The replacement repository for source code formally released to Debian is `*.dgit.debian.org`. This contains all the git objects for every git-based upload since 2013, including the signed tag for each released package version. The plan is that it will contain a git view of _every_ uploaded Debian package, by centrally importing all legacy uploads into git. ## Tracking the relevant git data, when changes are made in the legacy Archive Currently, many critical source code management tasks are done by changes to the legacy Debian Archive, which works entirely with dsc files (and the associated tarballs etc). The contents of the Archive are therefore still an important source of truth. But, the Archive’s architecture means it cannot sensibly directly contain git data. To track changes made in the Archive, we added the `Dgit:` field to the `.dsc` of a git-based upload (2013). This declares which git commit this package was converted from, and where those git objects can be obtained. Thus, given a Debian Source Package from a git-based upload, it is possible for the new git tooling to obtain the equivalent git objects.
If the user is going to work in git, there is no need for any tarballs to be downloaded: the git data could be obtained from the depository using the git protocol. The signed tags, available from the git depository, have standardised metadata which gives traceability back to the uploading Debian contributor. ## Why *.dgit.debian.org is not Salsa We need a git _depository_ - a formal, reliable and permanent git repository of source code actually released to Debian. Git forges like Gitlab can be very convenient. But Gitlab is not sufficiently secure, and too full of bugs, to be the principal and only archive of all our source code. (The “open core” business model of the Gitlab corporation, and the constant-churn development approach, are critical underlying problems.) Our git depository lacks forge features like Merge Requests. But: * It is dependable, both in terms of reliability and security. * It is append-only: once something is pushed, it is permanently recorded. * Its access control is precisely that of the Debian Archive. * Its ref namespace is standardised and corresponds to Debian releases. * Pushes are authorised by PGP signatures, not ssh keys, so they are traceable. The dgit git depository outlasted Alioth and it may well outlast Salsa. We need _both_ a good forge, and the `*.dgit.debian.org` formal git depository. # Roadmap ## In progress Right now we are quite focused on **tag2upload**. We are working hard on eliminating the remaining issues that we feel need to be addressed before declaring the service out of beta. ## Future Technology ### Whole-archive dsc importer Currently, the git depository only has git data for git-based package updates (tag2upload and dgit push). Legacy dput-based uploads are not present there. This means that the git-based and legacy uploads must be resolved client-side, by `dgit clone`. We will want to start importing legacy uploads to git. Then downstreams and users will be able to get the source code for any package simply with `git clone`, even if the maintainer is using legacy upload tools like dput. ### Support for git-based uploads to security.debian.org Security patching is a task which would particularly benefit from better and more formal use of git. git-based approaches to applying and backporting security patches are much more convenient than messing about with actual patch files. Currently, one can use git to help prepare a security upload, but it often involves starting with a dsc import (which lacks the proper git history) or figuring out a package maintainer’s unstandardised git usage conventions on Salsa. And it is not possible to properly perform the security release _as git_. ### Internal Debian consumers switch to getting source from git Buildds, QA work such as lintian checks, and so on, could be simpler if they don’t need to deal with source packages. And since git is actually the canonical form, we want them to use it directly. ### Problems for the distant future For decades, Debian has been built around source packages. Replacing them is a long and complex process. Certainly source packages are going to continue to be supported for the foreseeable future. There are no doubt going to be unanticipated problems. There are also foreseeable issues: for example, perhaps there are packages that work very badly when represented in git. We think we can rise to these challenges as they come up. # Mindshare and adoption - please help! We and our users are very pleased with our technology. It is convenient and highly dependable.
`dgit` in particular is superb, even if we say so ourselves. As technologists, we have been very focused on building good software, but it seems we have fallen short in the marketing department. ## A rant about publishing the source code **git is the preferred form for modification**. Our upstreams are overwhelmingly using git. We are overwhelmingly using git. It is a scandal that for many packages, Debian does not properly, formally and officially publish the git history. _Properly_ publishing the source code as git means publishing it in such a way that anyone can _automatically_ and _reliably_ obtain _and build_ the _exact_ source code corresponding to the binaries. The test is: could you use that to build a derivative? Putting a package in git on Salsa is often a good idea, but it is not sufficient. No standard branch structure is enforced for git on Salsa, nor should it be (so it can’t be automatically and reliably obtained), the tree is not in a standard form (so it can’t be automatically built), and is not _necessarily identical_ to the source package. So `Vcs-Git` fields, and git from Salsa, will never be sufficient to make a derivative. **Debian is not publishing the source code!** The time has come for proper publication of source code by Debian to no longer be a minority sport. Every maintainer of a package whose upstream is using git (which is nearly all packages nowadays) should be basing their work on upstream git, and properly publishing that via tag2upload or dgit. And it’s not even difficult! The modern git-based tooling provides a far superior upload experience. ### A common misunderstanding **dgit push is not an alternative to gbp pq or quilt. Nor is tag2upload.** These upload tools _complement your existing git workflow_. They replace and improve source package building/signing and the subsequent dput. If you are using one of the usual git layouts on Salsa, and your package is in good shape, you can adopt tag2upload and/or dgit push right away. `git-debrebase` is distinct and _does_ provide an alternative way to manage your git packaging, do your upstream rebases, etc. ## Documentation Debian’s documentation all needs to be updated, including particularly the instructions for packaging, to recommend use of git-first workflows. Debian should not be importing git-using upstreams’ “release tarballs” into git. (Debian outsiders who discover this practice are typically horrified.) We should use _only_ upstream git, work only in git, and properly release (and publish) in git form. We, the git transition team, are experts in the technology, and can provide good suggestions. But we do not have the bandwidth to also engage in the massive campaigns of education and documentation updates that are necessary — especially given that (as with any programme for change) many people will be sceptical or even hostile. So we would greatly appreciate help with writing and outreach. # Personnel We consider ourselves the Debian git transition team. Currently we are: * Ian Jackson. Author and maintainer of dgit and git-debrebase. Co-creator of tag2upload. Original author of dpkg-source, and inventor in 1996 of Debian Source Packages. Alumnus of the Debian Technical Committee. * Sean Whitton. Co-creator of the tag2upload system; author and maintainer of git-debpush. Co-maintainer of dgit. Debian Policy co-Editor. Former Chair of the Debian Technical Committee.
We wear the following hats related to the git transition: * Maintainers of src:dgit. * tag2upload Delegates; operators of the tag2upload service. * Service operators of the git depository *.dgit.debian.org. You can contact us: * By email: Ian Jackson [email protected]; Sean Whitton [email protected]; [email protected]. * By filing bugs in the Debian Bug System against src:dgit. * On OFTC IRC, as `Diziet` and `spwhitton`. We do most of our heavy-duty development on Salsa. ## Thanks Particular thanks are due to Joey Hess, who, in the now-famous design session in Vaumarcus in 2013, helped invent dgit. Since then we have had a lot of support: most recently political support to help get tag2upload deployed, but also, over the years, helpful bug reports and kind words from our users, as well as translations and code contributions. Many other people have contributed more generally to support for working with Debian source code in git. We particularly want to mention Guido Günther (git-buildpackage); and of course Alexander Wirt, Joerg Jaspert, Thomas Goirand and Antonio Terceiro (Salsa administrators); and before them the Alioth administrators.
diziet.dreamwidth.org
December 22, 2025 at 12:37 AM
Russell Coker: Links December 2025
Russ Allbery wrote an interesting review of Politics on the Edge, by Rory Stewart, who seems like one of the few conservative politicians I could respect and possibly even like [1]. It has some good insights about the problems with our current political environment. The NY Times has an amusing article about the attempt to sell the solution to the CIA’s encrypted artwork [2]. Wired has an interesting article about computer face recognition systems failing on people with facial disabilities or scars [3]. This is a major accessibility issue potentially violating disability legislation and a demonstration of the problems of fully automating systems when there should be a human in the loop. The October 2025 report from the Debian Reproducible Builds team is particularly interesting [4]. “kpcyrd forwarded a fascinating tidbit regarding so-called ninja and samurai build ordering, that uses data structures in which the pointer values returned from malloc are used to determine some order of execution” LOL Louis Rossmann made an insightful YouTube video about the moral case for piracy of software and media [5]. Louis Rossmann made an insightful video about the way that Hyundai is circumventing Right to Repair laws to make repairs needlessly expensive [6]. Korean cars aren’t much good nowadays. Their prices keep increasing and the quality doesn’t. Brian Krebs wrote an interesting article about how Google is taking legal action against SMS phishing crime groups [7]. We need more of this! Josh Griffiths wrote an informative blog post about how YouTube is awful [8]. I really should investigate Peertube. Louis Rossmann made an informative YouTube video about Right to Repair and the US military; if even the US military is getting ripped off by this it’s a bigger problem than most people realise [9]. He also asks the rhetorical question of whether politicians are bought or whether it’s a “subscription model”. Brian Krebs wrote an informative article about the US plans to ban TP Link devices; OpenWRT seems like a good option [10]. Brian Krebs wrote an informative article about “free streaming” Android TV boxes that act as hidden residential VPN proxies [11]. Also the “free streaming” violates copyright law. Bruce Schneier and Nathan E. Sanders wrote an interesting article about ways that AI is being used to strengthen democracy [12]. Cory Doctorow wrote an insightful article about the incentives for making shitty goods and services and why we need legislation to protect consumers [13]. Linus Tech Tips has an interesting interview with Linus Torvalds [14]. Interesting video about the Kowloon Walled City [15]. It would be nice if a government deliberately created a hive city like that; the only example I know of is the Alaskan town in a single building. David Brin wrote an insightful set of 3 blog posts about a Democratic American deal that could improve the situation there [16].
* [1] https://tinyurl.com/2czzxp77
* [2] https://tinyurl.com/22k6yo6s
* [3] https://tinyurl.com/29fg9ses
* [4] https://reproducible-builds.org/reports/2025-10/
* [5] https://www.youtube.com/watch?v=YAx3yCNomkg
* [6] https://www.youtube.com/watch?v=uv9jAQ_MiK0
* [7] https://tinyurl.com/276qmbuh
* [8] https://tinyurl.com/2bhp6sa2
* [9] https://www.youtube.com/watch?v=C0LmjzXV7IA
* [10] https://tinyurl.com/22pgz36h
* [11] https://tinyurl.com/2xveozvd
* [12] https://tinyurl.com/2bjs4g2j
* [13] https://tinyurl.com/2bsh3wbt
* [14] https://www.youtube.com/watch?v=mfv0V1SxbNA
* [15] https://www.youtube.com/watch?v=hoNclh1K_zY
* [16] https://tinyurl.com/29opfj43
etbe.coker.com.au
December 21, 2025 at 8:35 AM
Sahil Dhiman: MiniDebConf Navi Mumbai 2025
MiniDebConf Navi Mumbai 2025, which was MiniDebConf Mumbai, which in turn was FOSSMumbai x MiniDebian Conference, happened on 13th and 14th December, 2025, with a hotel as Day 1 venue and a college on Day 2. Originally planned for the 13th of November, it got postponed to December due to operational reasons. Most of the on-ground logistics and other heavy lifting was done by Arya, Vidhya, MumbaiFOSS, and the Remigies Technologies team, so we didn’t have to worry much. This time, I gave a talk on _Basics of a Free Software Mirror (and how Debian does it)_ (Presentation URL). I had the idea for this talk for a while and gave a KDE version of it during KDE India Conf 2025. The gist was to explain how Free Software is delivered to users and how one can help. For MDC, I focused a bit on Debian mirror network(s), who else hosts mirrors in India, and recent trends. _Me during the mirror talk. Credits: niyabits. Thanks for the pictures!_ At the outset someone mentioned my Termux mirror. Termux is a good project to get into mirror hosting with. I got into mirroring with it. It has low traffic demands (usually less than 20 GB/day) with a high request count, and can be run on an existing 6 USD Digital Ocean node. Q&A time turned out more interesting than I anticipated. Folks touched upon commercial CDNs instead of community mirrors, supply chain security issues, and a bit of other stuff. We had quite a number of interesting talks and I remember Arya telling me during CFP time, _“bro we have too many talks now”_ :D. Now, preparations have already started for MiniDebConf Kanpur 2026, scheduled for March 14th and 15th at the IIT campus. If you want to help, see the following thread. See you in the next one. _Day 1 group photo._ _Day 2 group photo._
blog.sahilister.in
December 20, 2025 at 6:34 PM
Otto Kekäläinen: Backtesting trailing stop-loss strategies with Python and market data
In January 2024 I wrote about the insanity of the _magnificent seven_ dominating the MSCI World Index, and wondered how long the number could continue to go up. It has continued to surge upward at an accelerating pace, which makes me worry that a crash is likely getting closer. As a software professional I decided to analyze **if using stop-loss orders could be a reliable way to automate avoiding deep drawdowns**. As everyone with some savings in the stock market (hopefully) knows, the stock market eventually experiences crashes. It is just a matter of _when_ and _how deep_ the crash will be. Staying on the sidelines for years is not a good investment strategy, as inflation will erode the value of your savings. Assuming the current true inflation rate is around 7%, a restaurant dinner that costs 20 euros today will cost 24.50 euros in three years. Savings of 1000 euros today would drop in purchasing power from 50 dinners to only 40 dinners in three years. Hence, if you intend to retain the value of your dear savings, they need to be invested in something that grows in value. Most people try to beat inflation by buying shares in stable companies, directly or via broad market ETFs. These historically **grow faster than inflation** during normal years, **but likely drop in value during recessions**. ## What is a trailing stop-loss order? What if you could buy stocks to benefit from their value increasing without having to worry about a potential crash? All modern online stock brokers have a feature called stop-loss, where you can enter a price at which your stocks automatically get sold if they drop down to that price. A trailing stop-loss order is similar, but instead of a fixed price, you enter a margin (e.g. 10%). If the stock price rises, the stop-loss price trails upwards, staying that margin below the peak. For example, if you buy a share at 100 euros and it has risen to 110 euros, you can set a 10% trailing stop-loss order which automatically sells it if the price drops 10% from the peak of 110 euros, at 99 euros. Thus no matter what happens, you lose only 1 euro. And if the stock price continues to rise to 150 euros, the trailing stop-loss would automatically readjust to 150 euros minus 10%, which is 135 euros (150-15=135). If the price dropped to 135 euros, you would lock in a gain of 35 euros, which is not the peak price of 150 euros, but still better than whatever the price fell down to as a result of a large crash. In the simple case above it obviously makes sense in _theory_, but it might not make sense in _practice_. Prices constantly oscillate, so you don’t want a margin that is too small; otherwise you exit too early. Conversely, having a large margin may result in too large a drawdown before exiting. If markets crash rapidly it might be that nobody buys your stocks at the stop-loss price and shares have to be sold at an even lower price. Also, what will you do once the position is sold? The reason you invested in the stock market was to avoid holding cash, so would you buy the same stock back when the crash bottoms? But how will you know when the bottom has been reached? ## Backtesting stock market strategies with Python, YFinance, Pandas and Lightweight Charts I am not a professional investor, and nobody should take investment advice from me. However, I know what backtesting is and how to leverage open source software. So, I wrote a Python script to test if the trading strategy of using trailing stop-loss orders with specific margin values would have worked for a particular stock.
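The trailing rule itself fits in a few lines of Python. The sketch below only illustrates the mechanics described above; the function name and the example prices are made up, and it is not the script discussed later in this post:

    def trailing_stop_exit(prices, margin=0.10):
        """Return (index, stop_price) of the first day the trailing stop is hit,
        or None if it never triggers. Assumes the position is opened at prices[0]."""
        peak = prices[0]
        for i, price in enumerate(prices):
            peak = max(peak, price)           # the stop trails the highest price seen so far
            stop = peak * (1 - margin)        # e.g. peak 110 with a 10% margin -> stop at 99
            if price <= stop:
                return i, stop                # assume the sell order fills exactly at the stop
        return None

    # The worked example from the text: bought at 100, peak of 110, sold at 99.
    print(trailing_stop_exit([100, 105, 110, 99, 120]))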
**First you need to have data.** YFinance is a handy Python library that can be used to download the historic price data for any stock ticker on Yahoo.com. **Then you need to manipulate the data.** Pandas is _the_ Python data analysis library with advanced data structures for working with relational or labeled data. **Finally, to visualize the results**, I used Lightweight Charts, which is a fast, interactive library for rendering financial charts, allowing you to plot the stock price, the trailing stop-loss line, and the points where trades would have occurred. I really like how the zoom is implemented in Lightweight Charts, which makes drilling into the datapoints feel effortless. The full solution is not polished enough to be published for others to use, but you can piece together your own by reusing some of the key snippets. To avoid re-downloading the same data repeatedly, I implemented a small caching wrapper that saves the data locally (as Parquet files):

    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    end_date = datetime.today().strftime("%Y-%m-%d")
    cache_file = CACHE_DIR / f"{TICKER}-{START_DATE}--{end_date}.parquet"
    if cache_file.is_file():
        dataframe = pandas.read_parquet(cache_file)
        print(f"Loaded price data from cache: {cache_file}")
    else:
        dataframe = yfinance.download(
            TICKER, start=START_DATE, end=end_date, progress=False, auto_adjust=False
        )
        dataframe.to_parquet(cache_file)
        print(f"Fetched new price data from Yahoo Finance and cached to: {cache_file}")

The **dataframe** is a Pandas object with a powerful API.
For example, to print a snippet from the beginning and the end of the dataframe to see what the data looks like, you can use:

    print("First 5 rows of the raw data:")
    print(df.head())
    print("Last 5 rows of the raw data:")
    print(df.tail())

Example output:

    First 5 rows of the raw data:
    Price        Adj Close      Close       High        Low       Open  Volume
    Ticker          BNP.PA     BNP.PA     BNP.PA     BNP.PA     BNP.PA  BNP.PA
    Date
    2014-01-02   29.956285  55.540001  56.910000  55.349998  56.700001  316552
    2014-01-03   30.031801  55.680000  55.990002  55.290001  55.580002  210044
    2014-01-06   30.080338  55.770000  56.230000  55.529999  55.560001  185142
    2014-01-07   30.943321  57.369999  57.619999  55.790001  55.880001  370397
    2014-01-08   31.385597  58.189999  59.209999  57.750000  57.790001  489940

    Last 5 rows of the raw data:
    Price        Adj Close      Close       High        Low       Open  Volume
    Ticker          BNP.PA     BNP.PA     BNP.PA     BNP.PA     BNP.PA  BNP.PA
    Date
    2025-12-11   78.669998  78.669998  78.919998  76.900002  76.919998  357918
    2025-12-12   78.089996  78.089996  80.269997  78.089996  79.470001  280477
    2025-12-15   79.080002  79.080002  79.449997  78.559998  78.559998  233852
    2025-12-16   78.860001  78.860001  79.980003  78.809998  79.430000  283057
    2025-12-17   80.080002  80.080002  80.150002  79.080002  79.199997  262818

Adding new columns to the dataframe is easy. For example, I used a custom function to calculate the relative strength index (RSI); to add a new column “RSI” with a value for every row based on the price from that row, only one line of code is needed, without custom loops:

    df["RSI"] = compute_rsi(df["price"], period=14)

After manipulating the data, the series can be converted into an array structure and printed as JSON into a placeholder in an HTML template:

    baseline_series = [
        {"time": ts, "value": val}
        for ts, val in df_plot[["timestamp", BASELINE_LABEL]].itertuples(index=False)
    ]
    baseline_json = json.dumps(baseline_series)

    template = jinja2.Template("template.html")
    rendered_html = template.render(
        title=title,
        heading=heading,
        description=description_html,
        ...
        baseline_json=baseline_json,
        ...
    )

    with open("report.html", "w", encoding="utf-8") as f:
        f.write(rendered_html)
    print("Report generated!")

In the HTML template the marker `{{ variable }}` in Jinja syntax gets replaced with the actual JSON:

    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="UTF-8">
      <title>{{ title }}</title>
      ...
    </head>
    <body>
      <h1>{{ heading }}</h1>
      <div id="chart"></div>
      <script>
        // Ensure the DOM is ready before we initialise the chart
        document.addEventListener('DOMContentLoaded', () => {
          // Parse the JSON data passed from Python
          const baselineData = {{ baseline_json | safe }}
          const strategyData = {{ strategy_json | safe }}
          const markersData = {{ markers_json | safe }}

          // Create the chart – use a unique variable name to avoid any clash with the DOM element ID
          const chart = LightweightCharts.createChart(document.getElementById('chart'), {
            width: document.getElementById('chart').clientWidth,
            height: 500,
            layout: {
              background: { color: "#222" },
              textColor: "#ccc"
            },
            grid: {
              vertLines: { color: "#555" },
              horzLines: { color: "#555" }
            }
          })

          // Add baseline series
          const baselineSeries = chart.addLineSeries({
            title: '{{ baseline_label }}',
            lastValueVisible: false,
            priceLineVisible: false,
            priceLineWidth: 1
          })
          baselineSeries.setData(baselineData)
          baselineSeries.priceScale().applyOptions({
            entireTextOnly: true
          })

          // Add strategy series
          const strategySeries = chart.addLineSeries({
            title: '{{ strategy_label }}',
            lastValueVisible: false,
            priceLineVisible: false,
            color: '#FF6D00'
          })
          strategySeries.setData(strategyData)

          // Add buy/sell markers to the strategy series
          strategySeries.setMarkers(markersData)

          // Fit the chart to show the full data range (full zoom)
          chart.timeScale().fitContent()
        })
      </script>
    </body>
    </html>

There are also Python libraries built specifically for backtesting investment strategies, such as Backtrader and Zipline, but they do not seem to be actively maintained, and probably have too many features and complexity compared to what I needed for doing this simple test.
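For completeness, here is a rough sketch of the kind of simulation loop such a script can run. It is not the actual script: the function name, the 20% default margin and especially the re-entry rule (buying back once the close rises above the sale price) are assumptions chosen purely for illustration, and a real run would also have to consider intraday lows and trading costs:

    def backtest_trailing_stop(closes, margin=0.20):
        """Compare buy-and-hold against a trailing stop-loss strategy.
        Returns (buy_and_hold_multiple, strategy_multiple) of the starting capital.
        The re-entry rule used here is a placeholder assumption, not the rule
        from the original script."""
        shares, cash = 1.0, 0.0                      # start fully invested
        peak, sold_at = closes[0], None
        for price in closes:
            if shares > 0:
                peak = max(peak, price)
                if price <= peak * (1 - margin):     # trailing stop triggered: sell at this close
                    cash, shares, sold_at = shares * price, 0.0, price
            elif price > sold_at:                    # naive re-entry rule (assumption)
                shares, cash, peak = cash / price, 0.0, price
        final_value = cash + shares * closes[-1]
        return closes[-1] / closes[0], final_value / closes[0]

    # Illustrative only -- e.g. feed it the "Close" column from the dataframe above:
    # buy_hold, strategy = backtest_trailing_stop(dataframe["Close"].squeeze().tolist())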
The screenshot below shows an example of backtesting a strategy on the Waste Management Inc stock from January 2015 to December 2025. The baseline “Buy and hold” scenario is shown as the blue line and it fully tracks the stock price, while the orange line shows how the strategy would have performed, with markers for the sells and buys along the way. ## Results I experimented with multiple strategies and tested them with various parameters, but I don’t think I found a strategy that was consistently and clearly better than just buy-and-hold. It basically boils down to the fact that I was **not able to find any way to calculate when the crash has bottomed** based on historical data. You can only know in hindsight that the price has stopped dropping and is on a steady path to recovery, but at that point it is already too late to buy in. In my testing, **most strategies underperformed buy-and-hold** because they sold when the crash started, but bought back at a slightly higher price after it recovered. In particular, when using narrow margins and selling on a 3-6% drawdown, the strategy performed very badly, as those small dips tend to recover in a few days. Essentially, the strategy was repeating the pattern of selling 100 shares at a 6% discount, then being able to buy back only 94 shares the next day, then again selling 94 shares at a 6% discount, and only being able to buy back maybe 90 shares after recovery, and so forth, never catching up to buy-and-hold. The **strategy worked better in large market crashes** as they tended to last longer, and there were higher chances of buying back the shares while the price was still low. For example, in the 2020 crash, selling at a 20% drawdown was a good strategy, as the stock I tested dropped nearly 50% and remained low for several weeks, so the strategy bought back the stocks while the price was still low and had not yet started to climb significantly. But that was just a lucky outcome, as the delta between the trailing stop-loss margin of 20% and the total crash of 50% was large enough. If the crash had been only 25%, the strategy would have missed the rebound and ended up buying back the stocks at a slightly higher price. Also, note that the simulation assumes that the trade itself is too small to affect the price formation. We should keep in mind that in reality, if a lot of people have stop-loss orders in place, a large price drop would trigger all of them, and create a flood of sell orders, which in turn would affect the price and drive it lower even faster and deeper. Luckily, it seems that stop-loss orders are generally not a good strategy, and we don’t need to fear that too many people would be using them. ## Conclusion Even though using a trailing stop-loss strategy does not seem to help in getting consistently higher returns based on my backtesting, I would still say it is **useful in protecting from the downside** of stock investing. It can act as a kind of _“insurance policy”_ to considerably decrease the chances of losing _big_ while increasing the chances of losing _a little bit_. If you are risk-averse, which I think I probably am, this tradeoff can make sense. I’d rather avoid an initial 50% loss, even if it also costs me an overall 3% gain on the recovery, than have to sit through weeks or months with a 50% loss before the price recovers to prior levels. Most notably, the **trailing stop-loss strategy works best if used only once**. If it is repeated multiple times, the small losses in gains will compound into big losses overall.
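As a back-of-the-envelope illustration of that compounding effect (using the same made-up numbers as in the Results section above, not output from a real backtest):

    shares = 100.0
    for _ in range(5):            # five shallow dips that recover almost immediately
        cash = shares * 0.94      # stopped out 6% below the recent peak (peak normalised to 1.0)
        shares = cash / 1.0       # buy back once the price is back at the old peak
    print(round(shares, 1))       # ~73.4 shares left after five round trips

Buy-and-hold would still have all 100 shares at that point, which is why the repeated version of the strategy keeps falling behind.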
Thus, I think I might actually put this automation in place at least on the stocks in my portfolio that have had the highest gains. If they keep going up, I will ride along, but once the crash happens, I will be out of those particular stocks permanently. Do you have a favorite open source investment tool or are you aware of any strategy that actually works? Comment below!
optimizedbyotto.com
December 19, 2025 at 10:29 AM
Dirk Eddelbuettel: dang 0.0.17: New Features, Plus Maintenance
A new release of my mixed collection of things package dang arrived at CRAN earlier today. The dang package regroups a few functions of mine that had no other home, as for example `lsos()` from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor blogged about, as well as the `checkCRANStatus()` function tweeted about by Tim Taylor. And more, so take a look. This release retires two functions: the social media site nobody ever visits anymore shut down its API too, so there is no longer a way to mute posts by a given handle. Similarly, the (never official) ability by Google to supply financial data is no more, so the function to access data this way is gone too. But we also have two new ones: one that helps with CRAN entries for ORCiD ids, and another little helper to re-order `microbenchmark` results by summary column (defaulting to the median). Other than that, there are the usual updates to continuous integration, a switch to Authors@R (which will result in CRAN nagging me less about this), and another argument update. The detailed NEWS entry follows. > #### Changes in version 0.0.17 (2025-12-18) > > * Added new function `reorderMicrobenchmarkResults` with alias `rmr` > > * Use `tolower` on email argument to `checkCRANStatus` > > * Added new function `cranORCIDs` bootstrapped from two emails by Kurt Hornik > > * Switched to using Authors@R in DESCRIPTION and added ORCIDs where available > > * Switched to `r-ci` action with included bootstrap step; updated the checkout action (twice); added (commented-out) log accessor > > * Removed `googleFinanceData` as the (unofficial) API access point no longer works > > * Removed `muteTweeters` because the API was turned off > > Via my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker at the GitHub repo. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
dirk.eddelbuettel.com
December 19, 2025 at 12:29 AM
Jonathan McDowell: 21 years of blogging
21 years ago today I wrote my first blog post. Did I think I’d still be writing all this time later? I’ve no idea to be honest. I’ve always had the impression my readership is small, and mostly people who know me in some manner, and I post to let them know what I’m up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I’ve documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.

From a software PoV I started out with Blosxom, migrated to MovableType in 2008, ditched that, when the Open Source variant disappeared, for Jekyll in 2015 (when I also started putting it all in git). And have stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don’t get a lot, I can’t be bothered with the effort of trying to protect against spammers, and folk who don’t want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I’ll probably take a look at Hugo, but thankfully at present there’s no push factor to switch.

It’s interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I’ve no idea how on earth that happened), while 2013 had only 2. Generally I write less when I’m busy, or stressed, or unhappy, so it’s kinda interesting to see how that lines up with various life events.

During that period I’ve lived in 10 different places (well, 10 different houses/flats, I think it’s only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I’ve travelled around the world, made new friends, lost contact with folk, started a family. In short, I have _lived_, even if lots of it hasn’t made it to these pages.

At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.
www.earth.li
December 17, 2025 at 6:25 PM
Dirk Eddelbuettel: RcppArmadillo 15.2.3-1 on CRAN: Upstream Update
Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1272 other packages on CRAN, downloaded 43.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 661 times according to Google Scholar.

This version updates to the 15.2.3 upstream Armadillo release from yesterday. It brings minor changes over the RcppArmadillo 15.2.2 release made last month (and described in this post). As noted previously, due to the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 ‘legacy’ Armadillo while offering the current version as the default. If and when CRAN has nudged (nearly) all maintainers away from C++11 (and now also C++14 !!) we can remove the fallback. Our offer to help with the C++ modernization still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups _all_ the resources for the C++11 transition.

There were no R-side changes in this release. The detailed changes since the last release follow.

> #### Changes in RcppArmadillo version 15.2.3-1 (2025-12-16)
>
> * Upgraded to Armadillo release 15.2.3 (Medium Roast Deluxe)
>
> * Faster `.resize()` for vectors
>
> * Faster `repcube()`

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the Rcpp R-Forge page. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
dirk.eddelbuettel.com
December 17, 2025 at 4:28 PM
Sven Hoexter: exfatprogs: Do not try defrag.exfat / mkfs.exfat Windows compatibility in Trixie
exfatprogs 1.3.0 added a new `defrag.exfat` utility which turned out to be unreliable and can cause data loss. exfatprogs 1.3.1 disabled the utility, and I followed that decision with the upload to Debian/unstable yesterday. But as usual it will take some time until it migrates to testing. Thus _if you use testing, do not try defrag.exfat_! At least not without a vetted and current backup.

Besides that, there is a compatibility issue with the way `mkfs.exfat`, as shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a change was implemented to prefer the physical sector size on those devices. That turned out not to be compatible with Windows, and was reverted in exfatprogs 1.3.0. Sadly John Ogness ran into the issue and spent some time debugging it. I have to admit that I missed the relevance of that change. Huge kudos to John for the bug report. Based on that I prepared an update for the next trixie point release.

If you hit that issue on trixie with exfatprogs 1.2.9-1 you can work around it by formatting with `mkfs.exfat -s 512 /dev/sdX` to get Windows compatibility. If you use exfatprogs 1.2.9-1+deb13u1 or later, and want the performance gain back, and do not need Windows compatibility, you can format with `mkfs.exfat -s 4096 /dev/sdX`.
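Whether a drive is of the affected type (512-byte logical sectors emulated on top of 4096-byte physical sectors) can be checked via the sizes the kernel exposes in sysfs. The following is a small, purely illustrative Python helper; it is not part of exfatprogs, and the device name is just an example:

```python
#!/usr/bin/env python3
"""Report logical vs. physical sector size of a block device via sysfs."""
import sys
from pathlib import Path

def sector_sizes(device: str) -> tuple[int, int]:
    # e.g. device = "sda"; both queue attributes exist for block devices
    queue = Path("/sys/block") / device / "queue"
    logical = int((queue / "logical_block_size").read_text())
    physical = int((queue / "physical_block_size").read_text())
    return logical, physical

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sda"
    logical, physical = sector_sizes(dev)
    print(f"/dev/{dev}: logical={logical} physical={physical}")
    if (logical, physical) == (512, 4096):
        print("512e drive: with exfatprogs 1.2.9 use 'mkfs.exfat -s 512' for Windows compatibility")
```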
sven.stormbind.net
December 17, 2025 at 4:24 PM
Matthew Garrett: How did IRC ping timeouts end up in a lawsuit?
I recently won a lawsuit against Roy and Rianne Schestowitz, the authors and publishers of the Techrights and Tuxmachines websites. The short version of events is that they were subject to an online harassment campaign, which they incorrectly blamed me for. They responded with a large number of defamatory online posts about me, which the judge described as "unsubstantiated character assassination" and consequently awarded me significant damages. That's not what this post is about, as such. It's about the sole meaningful claim made that tied me to the abuse.

In the defendants' defence and counterclaim[1], 15.27 asserts in part "The facts linking the Claimant to the sock puppet accounts include, on the IRC network: simultaneous dropped connections to the mjg59_ and elusive_woman accounts. This is so unlikely to be coincidental that the natural inference is that the same person posted under both names". "elusive_woman" here is an account linked to the harassment, and "mjg59_" is me. This is actually a surprisingly interesting claim to make, and it's worth going into in some more detail.

The event in question occurred on the 28th of April, 2023. You can see a line reading "`*elusive_woman has quit (Ping timeout: 2m30s)`", followed by one reading "`*mjg59_ has quit (Ping timeout: 2m30s)`". The timestamp listed for the first is 09:52, and for the second 09:53. Is that actually simultaneous? We can actually gain some more information - if you hover over the timestamp links on the right hand side you can see that the link is actually accurate to the second even if that's not displayed. The first event took place at 09:52:52, and the second at 09:53:03. That's 11 seconds apart, which is clearly not simultaneous, but maybe it's close enough.

Figuring out more requires knowing what a "ping timeout" actually means here. The IRC server in question is running Ergo (link to source code), and the relevant function is handleIdleTimeout(). The logic here is fairly simple - track the time since activity was last seen from the client. If that time is longer than DefaultIdleTimeout (which defaults to 90 seconds) and a ping hasn't been sent yet, send a ping to the client. If a ping has been sent and the timeout is greater than DefaultTotalTimeout (which defaults to 150 seconds), disconnect the client with a "Ping timeout" message. There's no special logic for handling the ping reply - a pong simply counts as any other client activity and resets the "last activity" value and timeout.

What does this mean? Well, for a start, two clients running on the same system will only have simultaneous ping timeouts if their last activity was simultaneous. Let's imagine a machine with two clients, A and B. A sends a message at 02:22:59. B sends a message 2 seconds later, at 02:23:01. The idle timeout for A will fire at 02:24:29, and for B at 02:24:31. A ping is sent for A at 02:24:29 and is responded to immediately - the idle timeout for A is now reset to 02:25:59, 90 seconds later. The machine hosting A and B has its network cable pulled out at 02:24:30. The ping to B is sent at 02:24:31, but receives no reply. A minute later, at 02:25:31, B quits with a "Ping timeout" message. A ping is sent to A at 02:25:59, but receives no reply. A minute later, at 02:26:59, A quits with a "Ping timeout" message. Despite both clients having their network interrupted simultaneously, the ping timeouts occur 88 seconds apart.
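A minimal sketch of that timeout arithmetic, assuming the simplified model above (instant pong replies and the default 90/150 second timeouts); this is not Ergo's actual code, just an illustration:

```python
IDLE_TIMEOUT = 90    # seconds of silence before the server sends a PING
TOTAL_TIMEOUT = 150  # seconds of silence before "Ping timeout" (i.e. 2m30s)

def disconnect_time(last_activity: int, network_drop: int) -> int:
    """Time (in seconds) at which a client gets a ping timeout, given its
    last activity and the moment its network went away. Pong replies are
    assumed to arrive instantly while the network is still up."""
    t = last_activity
    while True:
        ping_at = t + IDLE_TIMEOUT
        if ping_at < network_drop:
            t = ping_at               # pong received: counts as activity, resets the clock
        else:
            return t + TOTAL_TIMEOUT  # no pong: disconnect 150s after last activity

# Worked example from above, in seconds after 02:22:59:
a = disconnect_time(last_activity=0, network_drop=91)  # A -> 240s (02:26:59)
b = disconnect_time(last_activity=2, network_drop=91)  # B -> 152s (02:25:31)
print(a - b)  # 88 seconds apart despite a simultaneous network interruption
```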
So, two clients disconnecting with ping timeouts 11 seconds apart is not incompatible with the network connection being interrupted simultaneously - depending on activity, simultaneous network interruption may result in disconnections up to 90 seconds apart. But another way of looking at this is that network interruptions may occur up to 90 seconds apart and generate simultaneous disconnections[2]. Without additional information it's impossible to determine which is the case.

This already casts doubt over the assertion that the disconnection was simultaneous, but if this is unusual enough it's still potentially significant. Unfortunately for the Schestowitzes, even looking just at the elusive_woman account, there were several cases where elusive_woman and another user had a ping timeout within 90 seconds of each other - including one case where elusive_woman and schestowitz[TR] disconnect 40 seconds apart. By the Schestowitzes' argument, it's also a natural inference that elusive_woman and schestowitz[TR] (one of Roy Schestowitz's accounts) are the same person.

We didn't actually need to make this argument, though. In England it's necessary to file a witness statement describing the evidence that you're going to present in advance of the actual court hearing. Despite being warned of the consequences on multiple occasions the Schestowitzes never provided any witness statements, and as a result weren't allowed to provide any evidence in court, which made for a fairly foregone conclusion.

[1] As well as defending themselves against my claim, the Schestowitzes made a counterclaim on the basis that I had engaged in a campaign of harassment against them. This counterclaim failed.

[2] Client A and client B both send messages at 02:22:59. A falls off the network at 02:23:00, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. B falls off the network at 02:24:28, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. Simultaneous disconnects despite over a minute of difference in the network interruption.
mjg59.dreamwidth.org
December 17, 2025 at 2:19 PM
Freexian Collaborators: Monthly report about Debian Long Term Support, November 2025 (by Santiago Ruano Rincón)
The Debian LTS Team, funded by Freexian’s Debian LTS offering (https://www.freexian.com/lts/debian/), is pleased to report its activities for November.

### Activity summary

During the month of November, 18 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below). The team released 33 DLAs fixing 219 CVEs. The LTS Team kept going with the usual cadence of preparing security updates for Debian 11 “bullseye”, but also for Debian 12 “bookworm”, Debian 13 “trixie” and even Debian unstable. As in previous months, we are pleased to say that there have been multiple contributions of LTS uploads by Debian Fellows outside the regular LTS Team.

Notable security updates:

* Guilhem Moulin prepared DLA 4365-1 for unbound, a caching DNS resolver, fixing a cache poisoning vulnerability that could lead to domain hijacking.
* Another update related to DNS software was made by Andreas Henriksson. Andreas completed the work on bind9, released as DLA 4364-1 to fix cache poisoning and Denial of Service (DoS) vulnerabilities.
* Chris Lamb released DLA 4374-1 to fix a potential arbitrary code execution vulnerability in pdfminer, a tool for extracting information from PDF documents.
* Ben Hutchings published a regular security update for the linux 6.1 bullseye backport, as DLA 4379-1.
* A couple of other important recurrent updates were prepared by Emilio Pozuelo, who handled firefox-esr and thunderbird (in collaboration with Christoph Goehre), published as DLA 4370-1 and DLA 4372-1, respectively.

Contributions from fellows outside the LTS Team:

* Thomas Goirand uploaded a bullseye update for keystone and swift
* Jeremy Bícha prepared the bullseye update for gst-plugins-base1.0
* As mentioned above, Christoph Goehre prepared the bullseye update for thunderbird.
* Mathias Behrle provided feedback about the tryton-server and tryton-sao vulnerabilities that were disclosed last month, and helped to review the bullseye patches for tryton-server.

Other than the regular LTS updates for bullseye, the LTS Team has also contributed updates to the latest Debian releases:

* Bastien Roucariès prepared a bookworm update for squid, the web proxy cache server.
* Carlos Henrique Lima Melara filed a bookworm point update request for gdk-pixbuf to fix CVE-2025-7345, a heap buffer overflow vulnerability that could lead to arbitrary code execution.
* Daniel Leidert prepared bookworm and trixie updates for r-cran-gh to fix CVE-2025-54956, an issue that may expose user credentials in HTTP responses.
* Along with the bullseye updates for unbound mentioned above, Guilhem helped to prepare the trixie update for unbound.
* In collaboration with Lukas Märdian, Tobias Frost prepared trixie and bookworm updates for log4cxx, the C++ port of the logging framework for JAVA.
* Jochen Sprickerhof prepared a bookworm update for syslog-ng.
* Utkarsh completed the bookworm update for wordpress, addressing multiple security issues in the popular blogging tool.

Beyond security updates, there has been a significant effort in revamping our documentation, aiming to make the processes clearer and more consistent for all the members of the team. This work was mainly carried out by Sylvain, Jochen and Roberto.

We would like to express our gratitude to the sponsors for making the Debian LTS project possible. Also, special thanks to the fellows outside the LTS team for their valuable help.
### Individual Debian LTS contributor reports

* Andreas Henriksson
* Andrej Shadura
* Bastien Roucariès
* Ben Hutchings
* Carlos Henrique Lima Melara
* Chris Lamb
* Daniel Leidert
* Emilio Pozuelo Monfort
* Guilhem Moulin
* Jochen Sprickerhof
* Markus Koschany
* Paride Legovini
* Roberto C. Sánchez
* Santiago Ruano Rincón
* Sylvain Beucler
* Thorsten Alteholz
* Tobias Frost
* Utkarsh Gupta

### Thanks to our sponsors

Sponsors that joined recently are in bold.

* Platinum sponsors:
  * Toshiba Corporation (for 122 months)
  * Civil Infrastructure Platform (CIP) (for 90 months)
  * VyOS Inc (for 54 months)
* Gold sponsors:
  * F. Hoffmann-La Roche AG (for 132 months)
  * CONET Deutschland GmbH (for 116 months)
  * Plat’Home (for 115 months)
  * University of Oxford (for 72 months)
  * Deveryware (for 60 months)
  * EDF SA (for 44 months)
  * Dataport AöR (for 19 months)
  * CERN (for 17 months)
* Silver sponsors:
  * Domeneshop AS (for 137 months)
  * Nantes Métropole (for 131 months)
  * Akamai - Linode (for 127 months)
  * Univention GmbH (for 123 months)
  * Université Jean Monnet de St Etienne (for 123 months)
  * Ribbon Communications, Inc. (for 117 months)
  * Exonet B.V. (for 107 months)
  * Leibniz Rechenzentrum (for 101 months)
  * Ministère de l’Europe et des Affaires Étrangères (for 85 months)
  * Cloudways by DigitalOcean (for 74 months)
  * Dinahosting SL (for 72 months)
  * Upsun Formerly Platform.sh (for 66 months)
  * Moxa Inc. (for 60 months)
  * sipgate GmbH (for 58 months)
  * OVH US LLC (for 56 months)
  * Tilburg University (for 56 months)
  * GSI Helmholtzzentrum für Schwerionenforschung GmbH (for 47 months)
  * THINline s.r.o. (for 20 months)
  * Copenhagen Airports A/S (for 14 months)
  * **Conseil Départemental de l’Isère**
* Bronze sponsors:
  * Seznam.cz, a.s. (for 138 months)
  * Evolix (for 137 months)
  * Intevation GmbH (for 134 months)
  * Linuxhotel GmbH (for 134 months)
  * Daevel SARL (for 133 months)
  * Megaspace Internet Services GmbH (for 132 months)
  * Greenbone AG (for 131 months)
  * NUMLOG (for 131 months)
  * WinGo AG (for 130 months)
  * Entr’ouvert (for 122 months)
  * Adfinis AG (for 119 months)
  * Laboratoire LEGI - UMR 5519 / CNRS (for 114 months)
  * Tesorion (for 114 months)
  * Bearstech (for 105 months)
  * LiHAS (for 105 months)
  * Catalyst IT Ltd (for 100 months)
  * Demarcq SAS (for 94 months)
  * Université Grenoble Alpes (for 80 months)
  * TouchWeb SAS (for 72 months)
  * SPiN AG (for 69 months)
  * CoreFiling (for 65 months)
  * Institut des sciences cognitives Marc Jeannerod (for 60 months)
  * Observatoire des Sciences de l’Univers de Grenoble (for 56 months)
  * Tem Innovations GmbH (for 51 months)
  * WordFinder.pro (for 51 months)
  * CNRS DT INSU Résif (for 49 months)
  * Soliton Systems K.K. (for 45 months)
  * Alter Way (for 42 months)
  * Institut Camille Jordan (for 32 months)
  * SOBIS Software GmbH (for 17 months)
  * Tuxera Inc. (for 8 months)
  * **OPM-OP AS**
www.freexian.com
December 17, 2025 at 2:18 AM
Christian Kastner: Simple-PPA, a minimalistic PPA implementation
Today, the Debusine developers launched Debusine repositories, a beta implementation of PPAs. In the announcement, Colin remarks that _"[d]iscussions about this have been happening for long enough that people started referring to PPAs for Debian as 'bikesheds'"_; a characterization that I'm sure most will agree with. So it is with great amusement that on this same day, I launch a second PPA implementation for Debian: Simple-PPA.

Simple-PPA was never meant to compete with Debusine, though. In fact, it's entirely the opposite: from discussions at DebConf, I knew that it was only a matter of time until Debusine gained a PPA-like feature, but I needed a stop-gap solution earlier, and with some polish, what was once a Python script already doing APT processing for apt.ai.debian.net recently became Simple-PPA.

Consequently, Simple-PPA lacks (and will always lack) all of the features that Debusine offers: there is no auto-building, no CI, nor any other type of QA. It's the simplest possible type of APT repository: you just upload packages, they get imported into an archive, and the archive is exposed via a web server. Under the hood, reprepro does all the heavy lifting.

However, this also means it's trivial to set up. The following is the entire configuration that simple-ppa.debian.net started with:

    # simple-ppa.conf
    [CORE]
    SignWith = 2906D748B7551BC8
    ExportDir = /srv/www/simple-ppa
    MailFrom: Simple-PPA <[email protected]>
    Codenames = sid forky trixie trixie-backports bookworm bookworm-backports
    AlsoAllow = forky: unstable trixie: unstable bookworm: unstable

    [simple-ppa-dev]
    Label = Simple-PPA's self-hosted development repository
    # ckk's key
    Uploaders = allow * by key E76004C5CEF0C94C+

    [ckk]
    Label = Christian Kastner at Simple-PPA
    Uploaders = allow * by key E76004C5CEF0C94C+

The `CORE` section just sets some defaults and sensible rules. Two PPAs are defined, `simple-ppa-dev` and `ckk`, which accept packages signed by the key with the ID `E76004C5CEF0C94C`. These PPAs use the global defaults, but individual PPAs can override `Architectures`, `Suites`, and `Components`, and of course allow an arbitrary number of users.

Users upload to this archive using SFTP (e.g. with dput-ng). Every 15 minutes, uploads get processed, with ACCEPTED or REJECTED mails sent to the Maintainer address. The APT archive of all PPAs is signed with a single global key.

I myself intend to use Debusine repositories soon, as the autobuilding and the QA tasks Debusine offers are something I need. However, I do still see a niche use case for Simple-PPA: when you need an APT archive, but don't want to do a deep dive into `reprepro` (which is extremely powerful). If you'd like to give Simple-PPA a try, head over to simple-ppa.debian.net and follow the instructions for users.
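For the upload side, a dput(-ng) target pointing at the SFTP host might look like the following. This is purely illustrative: the incoming directory and login are placeholders, and the real values are documented in the instructions on simple-ppa.debian.net.

```
# ~/.dput.cf (illustrative only)
[simple-ppa]
fqdn = simple-ppa.debian.net
method = sftp
# placeholder values; see the site's user instructions for the real ones
incoming = /incoming
login = your-username
```

With such a stanza in place, an upload would be a plain `dput simple-ppa <package>_<version>_source.changes`.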
www.kvr.at
December 16, 2025 at 10:25 PM