Jonathan Aldrich
jonathanaldrich.bsky.social
Professor at Carnegie Mellon University, working on programming languages and software engineering. Coauthor, Programming Language Pragmatics (videos: https://tinyurl.com/PLP5vids). CTO of noteful.net. ACM Publications Board member. He/him.
Reposted by Jonathan Aldrich
A note from our editors:
February 1, 2026 at 9:10 PM
Also watch Hemant's POPL'26 talk here!

Security Reasoning via Substructural Dependency Tracking
www.youtube.com/watch?v=iN3J...
January 29, 2026 at 2:01 AM
Integrated graphics for now, but an Nvidia GTX 5070 is coming!
January 21, 2026 at 11:10 PM
Good question, I don't know. Will put it on my list of questions to ask!
January 17, 2026 at 2:25 PM
Yes, I know they have gotten feedback on the header page. In addition to the typesetting, there are problems with its accessibility.
January 16, 2026 at 2:40 PM
This will not happen immediately, partly for technical reasons and partly to preserve a value proposition for institutional subscribers, but one change was made yesterday and more are coming later this month. Once realized, Yannis's goal would, as I see it, address the recent petition.
January 16, 2026 at 2:17 PM
Security Reasoning via Substructural Dependency Tracking. Hemant Gouni, Frank Pfenning, and Jonathan Aldrich. Proc. ACM Program. Lang. 10, POPL, 2026.

dl.acm.org/doi/10.1145/...
Security Reasoning via Substructural Dependency Tracking | Proceedings of the ACM on Programming Languages
Substructural type systems provide the ability to speak about resources. By enforcing usage restrictions on inputs to computations they allow programmers to reify limited system units—such as memory—i...
January 15, 2026 at 3:03 PM
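For readers unfamiliar with substructural typing: one way to get a feel for it is through Rust's affine ownership discipline, which enforces "use at most once" restrictions at compile time. This is a rough illustration only, not the paper's system; the `Secret` type and `declassify` method are hypothetical names for the sketch.

```rust
// Sketch of a substructural usage restriction via Rust's affine types
// (illustrative only; not the type system from the POPL'26 paper).
struct Secret(String);

impl Secret {
    // Declassification consumes the secret: `self` is taken by value,
    // so the original binding becomes unusable after this call.
    fn declassify(self) -> String {
        self.0
    }
}

fn main() {
    let s = Secret(String::from("token"));
    let public = s.declassify();
    // A second `s.declassify()` here would not compile: `s` was moved.
    println!("{}", public);
}
```

The compiler, not a runtime check, is what rules out the double use; that static "resource used exactly once" flavor is the intuition behind substructural dependency tracking.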
I seriously think this is going to revolutionize language-based security! The foundation is logically motivated, clean, powerful, and general. Meanwhile, the types are simpler and (we believe, and plan to test soon) more usable than prior information flow type systems.
January 15, 2026 at 3:03 PM
Hmm, interesting idea. My worry: LLMs can easily do a bad job that *seems* good enough, but can they actually do a job good enough to trust?
January 13, 2026 at 6:03 PM