Liz James
belizzard.bsky.social
Research Engineer: Cybersecurity in Cyber-Physical Systems, mostly with a focus on Transport (and adjacent systems).

3D Printing and Modelling Nerd, lapsed musician.

I do have a doctorate (EngD) but will run away if it's used as something notable about me
guh brain broke in the middle of the sentence :)

* because I feel this is a foundational question.

It's also being asked in a variety of domains where writing and effective communication have been a proxy for understanding/competence - such as research funding bids.
December 29, 2025 at 4:04 PM
I also recognise that the types of assessment I prefer are usually not aligned with others': give me more vivas; ask me to defend a position in a debate; give me real-world constraints or changes on an existing solution on the fly and see the process I undertake to incorporate them.
December 29, 2025 at 4:03 PM
So I have ended up spending a lot of time thinking about, and trying to incorporate, more explicit assessment approaches for others where I can. Aiming to capture evidence of the higher tiers of Bloom's taxonomy when looking to see if learning objectives have been met.
December 29, 2025 at 4:03 PM
In my professional life, I do a cost-benefit analysis whenever I'm asked to sit an exam or write an essay to demonstrate competency, because of my personal challenges around recall under stress, etc.
December 29, 2025 at 4:03 PM
So my question is this: are we defending writing-under-constraint because it uniquely develops critical thinking? Or because it’s the easiest assessment format to police now that fluent text production is no longer scarce? Because this question is now being
December 29, 2025 at 4:03 PM
I plan to read the referenced study (brainonllm.com). I'm open to the idea that reliance on LLMs can negatively affect learning. But without controlling for assessment format and conditions, attributing effects to AI alone feels methodologically weak (a comment on the bits of the article that I can see).
December 29, 2025 at 4:03 PM
It just feels like moving to in-class writing will materially change the task itself: time pressure, environment, cognitive load, loss of drafting and iteration, loss of tools. Any one or combination of these could plausibly explain outcome differences independent of whether AI was involved.
December 29, 2025 at 4:03 PM
Paywall caveat: I can’t see whether this is addressed later in the piece, but I can’t help asking about confounding variables in the claim that moving to all in-class writing produced “startling” results. The causal attribution feels underspecified.
December 29, 2025 at 3:53 PM
🤮
November 20, 2025 at 6:18 PM
...but can raise the bar.

And it always boils down to the Cost-Benefit Analysis.
November 16, 2025 at 8:35 AM
Security Industry: despite recognising the harm that a TA could do with your services, why aren't you applying robust detection and rate-limiting controls, and a robust screening/authentication process for customers who may wish to utilise the maximum performance?

It doesn't solve the problem,
November 16, 2025 at 8:35 AM
And you are absolutely right about the API security itself.

AI provider (one hand): we provide the most capable model, able to do all these amazing things without human intervention.

AI provider (other hand): the TAs are beginning to use these models to do badness; use us to outcompete them.
November 16, 2025 at 8:35 AM
My reading of the subject, in preparation for a journo interview, didn't highlight any real novelty in the use of these LLMs in exploits today.

Most used API calls... so fundamentally equivalent to a C2 beacon.

PromptLock used a local open-weight model to derive commands - slightly more novel.
November 16, 2025 at 8:30 AM
Defensively (though the reduced cost to the TA will lead to higher success rates, given their ability to target more orgs), it doesn't materially change the TA's objective.

Once they've gotten a foothold, local enumeration, exfiltration, network discovery, persistence and pivoting come next.
November 16, 2025 at 8:27 AM
Also, does it materially change the behaviour? Using a remote AI to determine what commands to run is basically a C2 beacon with mediocre people on tap to try to do things.
November 16, 2025 at 8:27 AM
Existential threat from trans people my ass. Continuing to ignore the documented and well studied barriers to women's sports, they'll focus on one that won't even move the scale.
November 12, 2025 at 9:16 AM
Mood
November 10, 2025 at 6:13 PM
Presenting my bullet-point notes and seeing it get the wrong end of the stick (producing tokens that don't summarise, or that mislead) is super aggravating and in turn gives me motivation.
November 7, 2025 at 8:53 AM
On a more personal note, it has been helpful for me as a foil to start writing from something other than an empty page.

Anyone who works with me knows I get super philosophical/principled sometimes around security engineering but it can be quite difficult for people outside my sphere to see why.
November 7, 2025 at 8:53 AM
This has been my experience when using it too.

I've been considering using more complex statistical analysis for some of my tools (attempting to look for non-constant-time operations behind the inherent variation), but that's just hard stats.
November 7, 2025 at 8:53 AM
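The kind of analysis meant above could be sketched roughly like this: assuming you can collect timing samples for two input classes, a Welch's t-test (as used in TVLA-style leakage assessment) can flag a non-constant-time operation hiding behind measurement noise. All names and numbers here are illustrative, not from any particular tool.

```python
# Minimal sketch: spotting a timing difference under noise with Welch's t-test.
import math
import random
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples of timings."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b)
    )

# Simulated timings (seconds): both groups share the same Gaussian noise,
# but the "secret-dependent" group carries a small extra cost per call.
rng = random.Random(42)
baseline = [1e-3 + rng.gauss(0, 5e-5) for _ in range(5000)]
leaky = [1e-3 + 1e-5 + rng.gauss(0, 5e-5) for _ in range(5000)]

t = welch_t(leaky, baseline)
# |t| > 4.5 is the conventional TVLA threshold for suspecting leakage.
print(f"t = {t:.2f}, leakage suspected: {abs(t) > 4.5}")
```

Here a 10 µs difference on a ~1 ms operation is invisible per-sample (noise is 50 µs) but stands out clearly with a few thousand measurements; the hard part in practice is the stats once the noise isn't well-behaved.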
"As a regulator, we do not make the law. We advise on it and uphold it."

Ignoring how you and your allies have supported groups leveraging the judiciary to push an interpretation of the law favourable to one group over another (making it unrecognisable from historical jurisprudence), I suppose?
October 28, 2025 at 8:31 PM
New Outlook.... Save Attachment to OneDrive?

Liz: No, anywhere but there....
October 9, 2025 at 2:47 PM