@hgarrereyn.bsky.social
hgarrereyn.bsky.social
Regardless, I think 2025 is going to be an interesting year for CTF...
hgarrereyn.bsky.social
While this was enough for us to solve nfuncs1, it was a bit too slow (and expensive for nfuncs2) and we ended up switching to a manual heuristic-recognition script, but failed to solve in time...
hgarrereyn.bsky.social
It was surprisingly capable! Able to automatically recognize a function like the following as an AES S-box based key expansion, and write Python to solve it automatically.
hgarrereyn.bsky.social
TLDR: we equipped o3-mini with access to Python and gave it the Binary Ninja HLIL representation of functions. We asked it to identify the user input constraints and subsequent XOR key for each function. Then validated its output, checking if the decoded function was sensible.
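The validation step described above might look something like the following sketch. This is hypothetical, not the team's actual harness: `xor_decode` and `looks_sensible` are illustrative names, and "sensible" is approximated here by a crude printable-ASCII ratio check.

```python
# Hypothetical sketch: validate an LLM-proposed XOR key by decoding the
# data and checking whether the result still looks plausible (here,
# approximated as "mostly printable ASCII").
def xor_decode(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def looks_sensible(decoded: bytes, threshold: float = 0.9) -> bool:
    # crude plausibility check: fraction of printable/whitespace bytes
    printable = sum(1 for b in decoded if 32 <= b < 127 or b in (9, 10, 13))
    return printable / max(len(decoded), 1) >= threshold

plaintext = b"def next_func(x): return x ^ 0x41"
key = b"\x5a\xa5"
encoded = xor_decode(plaintext, key)
assert looks_sensible(xor_decode(encoded, key))  # correct key decodes cleanly
```

A real harness would presumably check structural properties of the decoded function (valid disassembly, recognizable prologue) rather than just byte statistics.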
hgarrereyn.bsky.social
Placed 2nd last weekend with SuperDiceCode at DEF CON Quals 2025! -- Here's a brief retrospective about using an LLM agent to solve (part of) the nfuncs challenge: c.mov/nfuncs-agent/
hgarrereyn.bsky.social
Hmm is the solution to only give the llm tools when we think it will need to use them?
hgarrereyn.bsky.social
what the fuck is an oh camel 😤
hgarrereyn.bsky.social
Pretty neat! We can effectively prompt the LLM using code in a way that lets us extrapolate beyond the initial prompt in a programmatic way.

I've packaged this up in a small POC: https://github.com/hgarrereyn/omni

TLDR:
from omni import Omni
o = Omni()
o.execute('''
# anything here
''')
hgarrereyn.bsky.social
Finally, let's hypothesize a `Gif` object on which we can `add_frame`:
---------------------
g = Gif()
for i in range(10):
    c = Canvas()
    c.add_random_shapes(num=100)
    r = c.render()
    g.add_frame(r, ms=20)
g.save('./out.gif')
---------------------
Producing:
hgarrereyn.bsky.social
Now that we have this implementation, however, we can adjust parameters without needing to invoke the LLM again:
---------------------
c = Canvas()
c.add_random_shapes(num=100)
c.draw()
---------------------
hgarrereyn.bsky.social
Let's hypothesize an API which places random shapes:
---------------------
c = Canvas()
c.add_random_shapes(num=5)
c.draw()
---------------------
LLM is invoked to figure out what `add_random_shapes` should do, and we get:
hgarrereyn.bsky.social
Now we introduce a new, undefined API:
---------------------
...
t = Triangle(width=3, height=5)
t.set_origin(6,6)
t.set_color('blue')
...
c.add(t)
---------------------
LLM is invoked to update the context code and we get:
hgarrereyn.bsky.social
The existing context code can extrapolate to new usages:
---------------------
...
r2 = Rect(width=3, height=3)
r2.set_origin(2, 2)
r2.set_color('red')
r2.set_rotation(deg=10)
...
c.add(r2)
---------------------
We don't need to invoke the LLM here, but can render:
hgarrereyn.bsky.social
E.g. let's write the following:
---------------------
r = Rect(width=4, height=6)
r.set_origin(5, 4)
r.set_color('green')
r.set_rotation(deg=45)

c = Canvas()
c.add(r)
c.draw()
---------------------
LLM generates context code that allows us to render:
hgarrereyn.bsky.social
Been playing around with a fun pseudo-programming-by-example kind of LLM setup. Instead of having the LLM write our client code (copilot) or write core library code (PBE), what if we have it generate binding code that maps our client code onto existing libraries/frameworks?
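To make "binding code" concrete, here is a purely hypothetical example of the kind of code such an LLM might generate for the thread's `Rect`/`Canvas` client snippets. Everything here (class names aside, which come from the client code above) is invented for illustration: instead of a real graphics library, the bindings map onto a character-grid renderer, and color is stored but not rendered.

```python
# Hypothetical binding code of the sort the LLM might generate: it maps
# the Rect/Canvas client calls onto a character-grid "renderer" (a
# stand-in for whatever real graphics library the bindings would target).
class Rect:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.x = self.y = 0
        self.color = None

    def set_origin(self, x, y):
        self.x, self.y = x, y

    def set_color(self, color):
        self.color = color  # stored; a text grid can't show it

class Canvas:
    def __init__(self, size=12):
        self.grid = [["."] * size for _ in range(size)]

    def add(self, shape):
        # mark every cell covered by the shape's bounding box
        for dy in range(shape.height):
            for dx in range(shape.width):
                self.grid[shape.y + dy][shape.x + dx] = "#"

    def draw(self):
        return "\n".join("".join(row) for row in self.grid)

c = Canvas()
r = Rect(width=4, height=2)
r.set_origin(5, 4)
r.set_color("green")
c.add(r)
print(c.draw())
```

The point is that the client code stays the same whether the bindings target this toy grid or a real drawing library; only the generated binding layer changes.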