KeePassXC
@keepassxc.org
KeePassXC password manager – https://keepassxc.org

GitHub: https://github.com/keepassxreboot/keepassxc
Snapshot builds: https://snapshot.keepassxc.org/
Mastodon: https://fosstodon.org/@keepassxc

Team email PGP key: 2FB8 CA9C 105D 8D57 BB97 46BD
I hope having this here is enough. This doesn’t look like a large scheme to me at the moment, but still concerning.
November 10, 2025 at 11:14 PM
A) Thanks for the realisation.
B) You shouldn’t trust us anyway. We’re random dudes from the internet. We have some reputation, but that’s it. Feel free to review our code yourself or follow our development process on GitHub. Whether or not we use LLMs should be irrelevant to your assessment.
November 8, 2025 at 7:43 AM
Thanks for actually reading our argument. You can make a million cases for why LLMs, tech bros, the AI bubble, etc. are societal and environmental problems, and you’d be preaching to the choir here. But making a fundamental security issue out of it when it’s objectively not is a weak argument.
November 8, 2025 at 7:38 AM
Talking about us now instead of with us. Great. Also see previous answer. bsky.app/profile/keep...
November 7, 2025 at 11:29 PM
And?
November 7, 2025 at 11:27 PM
We encourage you to become educated on our process and policies: github.com/keepassxrebo...
November 7, 2025 at 11:18 PM
We’re doing all we can to prevent vulnerabilities, but a blanket ban of AI code is not a rational part of that. You can make a moral argument if you like, but not a security argument.
November 7, 2025 at 11:17 PM
We’re not being overrun by third-party AI pull requests, so I’m not sure why we’re having this discussion. We’re managing our own time, and if problems arise, we will take measures as we see fit. This is not the case at the moment, but thanks for the concerns.
November 7, 2025 at 11:15 PM
Do what you must, we’re not stopping you. But we’re also not giving in to psychological blackmail only because you don’t like our stance on the issue. This discussion has run its course.
November 7, 2025 at 10:47 PM
As opposed to explicitly encouraging, or explicitly discouraging, or implicitly discouraging?? We explicitly encourage well-written code submissions backed by test cases. If you read our CONTRIBUTING document, you would see that. It's all written very clearly. github.com/keepassxrebo...
November 7, 2025 at 10:44 PM
It’s whatever you read into it.
November 7, 2025 at 10:42 PM
Why should we try if we don’t want to? How can you not accept that yes means yes, or rather that don’t care means don’t care?
November 7, 2025 at 10:40 PM
I have reviewed hundreds of human-written code submissions. VERY few are without bugs, ridiculous code bloat, and/or edits to unrelated files. I am much more concerned about a malicious human actor participating in our PR process. Look no further than XZ Utils.
November 7, 2025 at 10:39 PM
Humans are damn good at that.
November 7, 2025 at 10:37 PM
Apparently they didn’t read their own article. Otherwise we wouldn’t have this discussion. We never said “No” to AI submissions. Or are we not actually talking about consent, but about you imposing your ideology on us?
November 7, 2025 at 10:36 PM
Empirical studies cannot “prove” anything. But regardless, you neither know whether or to what degree we “rely” on LLMs nor what our brains look like. But you do you.
November 7, 2025 at 10:32 PM
No words.
November 7, 2025 at 10:18 PM
Who’s they and why does it matter? We’re reviewing the code, not “them.” We do know what XOR is and yet we also know how to use AI. So, what now?
November 7, 2025 at 10:01 PM
I’m sure malicious submitters will take our policy disallowing malicious submissions to heart. And as for the judgement part, I have no idea how to even begin to answer that circular logic and false dilemma rhetoric. Don’t trust us, that’s fine.
November 7, 2025 at 9:58 PM
How are we supposed to prevent AI submissions? And in what world would it make a difference? We review any piece of code manually anyway. A print(foo) is a print(foo). It doesn’t suddenly do anything nefarious only because a machine wrote it.
November 7, 2025 at 9:44 PM
We’re not encouraging anything.
November 7, 2025 at 9:23 PM
We need more information to debug that. Could you open an issue on GitHub please?
October 5, 2025 at 10:03 AM
We have no forum. You can ask on Matrix when it’s back up.
September 3, 2025 at 5:39 PM