Garrick Aden-Buie
@grrrck.xyz
r + python + data + web things. team shiny at posit (rstudio). open source all the things

https://garrickadenbuie.com
https://github.com/gadenbuie
I call these `f()` and `f_impl()`
January 16, 2026 at 8:42 PM
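For context, a minimal sketch of that naming pattern (the names here are hypothetical; the actual functions depend on the thread this replies to): an exported `f()` that validates user-facing inputs and delegates to an internal `f_impl()` that does the real work.

```r
# Hypothetical sketch of the f() / f_impl() split:
# the wrapper checks inputs, the *_impl() function
# carries the actual logic and is easy to unit test.
summarize_col <- function(data, col, na.rm = TRUE) {
  stopifnot(is.data.frame(data), col %in% names(data))
  summarize_col_impl(data[[col]], na.rm = na.rm)
}

summarize_col_impl <- function(x, na.rm = TRUE) {
  c(mean = mean(x, na.rm = na.rm), sd = sd(x, na.rm = na.rm))
}
```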
There are a *lot* of tools! You can reference them by group or pick specific tools by name so you get just the ones you want/need. posit-dev.github.io/btw/referenc...
Tools: Register tools from btw — btw_tools
The btw_tools() function provides a list of tools that can be registered with an ellmer chat via chat$register_tools() that allow the chat to interface with your computational environment. Chats retur...
posit-dev.github.io
January 9, 2026 at 2:16 AM
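The registration pattern described on the linked reference page, as a minimal sketch — the provider constructor (`chat_anthropic()`) and the prompt are illustrative, not prescribed by the post:

```r
library(ellmer)

# Create a chat and register btw's tools so the model can
# inspect the local R session, help pages, etc. on demand.
chat <- chat_anthropic()
chat$register_tools(btw::btw_tools())

chat$chat("What data frames are in my global environment?")
```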
Oops, I misspoke, it's actually the "env" group that describes objects in your environment to LLMs. ("session" is for sessioninfo-style tools). The env tools _describe_ things like data frames but don't send the whole dataset; that helps LLMs write data analysis code with fewer hallucinations
January 9, 2026 at 2:14 AM
Depending on your use case, you might be interested in other {btw} tool groups too:

'run' for the tool to run R code
'search' for tools that search CRAN
'session' to let Claude inspect variables in an R session
January 7, 2026 at 8:10 PM
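A sketch of registering just a subset of groups with an ellmer chat. The `tools` argument here mirrors the form shown for `btw_mcp_server()`; the exact `btw_tools()` signature may differ, so check the reference page before relying on it:

```r
# Register only the docs and env tool groups, keeping the
# token cost of tool definitions down.
chat <- ellmer::chat_anthropic()
chat$register_tools(btw::btw_tools(tools = c("docs", "env")))
```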
One small tweak I make personally: you can limit which groups of tools are enabled to reduce tokens used for tool definitions and avoid conflicts with Claude's built-in tools. I use

btw::btw_mcp_server(tools = list('docs', 'pkg'))

for reading help pages and pkg dev tools...
January 7, 2026 at 8:10 PM
😂 “legally chill”
December 17, 2025 at 12:57 AM
That makes sense! It’s unfortunate that OpenAI changed the default in a way that makes the model comparisons so different.

It would be interesting to see how those models perform with medium reasoning, especially if it’s required for solid #rstats performance, but I get why you’d rather not
December 12, 2025 at 11:46 PM
Was this your subtle way of telling me I should read the article? If so, it didn't work, I'm continuing to speculate based on headlines and things I'm seeing on bluesky
December 12, 2025 at 9:27 PM
ah yeah that would make sense! (this is what I meant by "what's going on") It's surprising that compared to GPT-5, 5.1 and 5.2 pretty much tank the eval.

Would it make sense to add a "with reasoning" variant for those models? Now I'm curious about reasoning levels in the other models too... 🤔
December 12, 2025 at 9:26 PM
Wait, what’s going on with GPTs 5.1 and 5.2 in the lower left corner? 🧐
December 12, 2025 at 8:43 PM
But there’s a PEP for that!
December 12, 2025 at 1:15 AM
The other option is to put the app in a folder in inst/ and export a function that calls runApp() on that directory (using system.file() to find that path). This option is best when the Shiny app is ancillary to the pkg. Because the code is in inst/ it’s hidden from CRAN checks, but harder to test
December 10, 2025 at 2:34 AM
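The inst/ approach from the post above typically looks something like this (the app and package names are illustrative):

```r
# R/run_app.R in the package; the app itself lives in
# inst/myapp/ (as app.R, or ui.R + server.R).
run_my_app <- function(...) {
  app_dir <- system.file("myapp", package = "mypackage")
  if (app_dir == "") {
    stop("Could not find the app directory. Try re-installing `mypackage`.")
  }
  shiny::runApp(app_dir, ...)
}
```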
Broadly I think there are two choices. Which is better depends on your package and how central the app is to the pkg.

If the package exists just to create the app, export a function that creates the shinyApp() object. All the supporting code goes in the pkg (in R/). Lean on small fns and unit tests
December 10, 2025 at 2:34 AM
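A minimal sketch of that first option, where the exported function assembles and returns the shinyApp() object (the UI and server here are placeholders):

```r
# All supporting code lives in R/ and is unit-testable;
# the exported function just builds the app object.
my_app <- function(...) {
  ui <- shiny::fluidPage(
    shiny::numericInput("n", "Observations", value = 50),
    shiny::plotOutput("hist")
  )
  server <- function(input, output, session) {
    output$hist <- shiny::renderPlot(hist(rnorm(input$n)))
  }
  shiny::shinyApp(ui, server, ...)
}
```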