Csaba Szepesvari
@skiandsolve.bsky.social
1.2K followers 220 following 92 posts
⛷️ ML Theorist carving equations and mountain trails | 🚴‍♂️ Biker, Climber, Adventurer | 🧠 Reinforcement Learning: Always seeking higher peaks, steeper walls and better policies. https://ualberta.ca/~szepesva
Reposted by Csaba Szepesvari
skiandsolve.bsky.social
...actually, not only to match standard notation, but also to be able to speak about the loss (= log-loss) used to train today's LLMs.
skiandsolve.bsky.social
No, it is not information retrieval. It is deducing new things from old things. You can do this by running a blind, breadth-first (unintelligent) search producing all proofs of all possible statements. You just don't want errors. But this is not retrieval. It is computation.
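To make the "blind search" point concrete, here is a tiny sketch (the facts and rules are my own toy examples, not from any paper): forward, breadth-first deduction over Horn-style rules derives everything that follows from what is given, makes no errors, and involves no intelligence whatsoever.

```python
# Toy illustration (facts and rules are made up): blind, breadth-first forward
# deduction. It enumerates everything derivable, layer by layer, without errors.
facts = {"A", "B"}                       # statements taken as given
rules = [                                # (premises, conclusion) pairs
    ({"A", "B"}, "C"),
    ({"C"}, "D"),
    ({"D", "A"}, "E"),
]

def forward_closure(facts, rules):
    """Return all statements derivable from the facts via the rules."""
    known = set(facts)
    while True:
        new = {concl for prems, concl in rules
               if prems <= known and concl not in known}
        if not new:                      # nothing new derivable: fixed point reached
            return known
        known |= new                     # one more breadth-first layer

print(forward_closure(facts, rules))     # {'A', 'B', 'C', 'D', 'E'}
```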
skiandsolve.bsky.social
Of course approximations are useful. The paper is narrowly focused on deductive reasoning, which seems to require the exactness we talk about. The point is that regardless of whether you use quantum mechanics or Newtonian mechanics, you don't want your derivations to be mistake-ridden.
skiandsolve.bsky.social
Worst-case vs. average case: yes!
But I would not necessarily connect these to minimax vs. Bayes.
skiandsolve.bsky.social
Yeah, admittedly, not a focus point of the paper. How about this: if the model produces a single response, the loss is the zero-one loss. Then the model had better choose the label with the highest probability, which is OK. Point of having mu: not much point, just matching standard notation.
skiandsolve.bsky.social
I am curious about these examples.. (and yes, I can construct a few, too, but I want to add more)
skiandsolve.bsky.social
No, this is not correct: learning 1[A>B] interestingly has the same complexity (provably). This is because 1[A>B] is in the "orbit" of 1[A>=B]. So a symmetric learner who is being taught 1[A>B] needs to figure out that it is not being taught 1[A>=B].
skiandsolve.bsky.social
Maybe. I am asking for much less here from the machines. I am asking them just to be correct (or stay silent). No intelligence, just good old-fashioned computation.
skiandsolve.bsky.social
the solution is found..
skiandsolve.bsky.social
Yes, transformers do not have "working memory". Also, I don't believe that using them in AR mode is powerful enough for challenging problems. In a way, without a "working memory" or an external "loop", we are saying the model should solve problems by free association ad infinitum, or at least until
skiandsolve.bsky.social
On the paper: interesting, but indeed there is little in common. On the problem studied in the paper: would not a slightly more general statistical framework solve your problem? I.e., measure error differently than through the prediction loss (AR models: parameters, spectral measure, etc.).
skiandsolve.bsky.social
Yeah, I don't see the exactness happening that much on its own through statistical learning. Neither experimentally, nor theoretically. We have an example illustrating this: use the uniform distribution for good coverage, teach transformers to compare m-bit integers using GD. You need 2^m examples.
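For concreteness, here is a minimal sketch of the task setup as I read it (my own illustrative code, not the paper's experiment): operands are m-bit integers sampled uniformly, and the target is 1[a >= b].

```python
import random

# Illustrative only (not the paper's code): the m-bit comparison task under a
# uniform input distribution. Each operand ranges over 2^m distinct values,
# which is the exponential blow-up referred to above.
m = 8

def sample_example():
    a = random.getrandbits(m)                        # uniform over {0, ..., 2^m - 1}
    b = random.getrandbits(m)
    x = (format(a, f"0{m}b"), format(b, f"0{m}b"))   # bit-string encoding of the pair
    y = int(a >= b)                                  # target: 1[a >= b]
    return x, y

train_set = [sample_example() for _ in range(10_000)]
print(train_set[0])
```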
skiandsolve.bsky.social
Yeah, we cite this and this was a paper that got me started on this project!
skiandsolve.bsky.social
First position paper I ever wrote. "Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence" arxiv.org/abs/2506.23908 Background: I'd like LLMs to help me do math, but statistical learning seems inadequate to make this happen. What do you all think?
Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence
Sound deductive reasoning -- the ability to derive new knowledge from existing facts and rules -- is an indisputably desirable aspect of general intelligence. Despite the major advances of AI systems ...
arxiv.org
skiandsolve.bsky.social
Our seminars are back. If you missed Max's talk, it is on YouTube, and today I will host Jeongyeol from UWM, who will talk about the curious case of why latent MDPs, though scary at first sight, might be tractable! Link to the seminar homepage:
sites.google.com/view/rltheor...
skiandsolve.bsky.social
Glad to see someone remembers these:)
skiandsolve.bsky.social
should be distinguished. The reason they should not be is that they are indistinguishable. So at least those need to be collapsed. So yes, one can start with redundant models, where it will appear that you could have epistemic uncertainty, but this is easy to rule out. 2/2
skiandsolve.bsky.social
I guess with a worst-case hat on, we all just die :) In other words, indeed, the distinction is useful inasmuch as the modelling assumptions are valid. And there the mixture of two Diracs over 0 and 1 actually is a bad example, because that says that two models that are identical as distributions 1/x
skiandsolve.bsky.social
I guess I stop here:) 5/5
skiandsolve.bsky.social
Well, yes, to the degree that the model you use correctly reflects what's going on. Example: drug trials with randomized patient allocation. The result is effectiveness. The meaning of aleatoric and epistemic uncertainty should be clear, and they help with explaining the outcomes of the trial. 4/x
skiandsolve.bsky.social
If one observes 1, there is epistemic uncertainty (the model could be the first or the second). Of course, nothing is ever black and white like this. And we talk about models here. Models are... made up... The usual blurb about the usefulness of models applies. Should you care about this distinction? 3/x
skiandsolve.bsky.social
Epistemic uncertainty refers to whether, given the data (and prior information), we can surely identify the data-generating model. Example: the model class has two distributions; one has support {0,1}, the other has support {1}. One observes 0. There is no epistemic uncertainty. 2/X
skiandsolve.bsky.social
I don't get this:
In the context of this terminology, data comes from a model. Aleatoric uncertainty refers to the case when this model is not a Dirac! In the second case, the model is a mixture of two Diracs. This is not a Dirac. Hence, there is aleatoric uncertainty. 1/X
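To make the example in this thread concrete, here is a tiny calculation (the uniform 0.5/0.5 choice for the first model and the flat prior over the two models are my own assumptions, not from the posts): observing 0 removes all epistemic uncertainty, while observing 1 leaves it in place.

```python
# The two candidate models from the example; the numbers for the first one are my choice.
models = {
    "support {0,1}": {0: 0.5, 1: 0.5},   # not a Dirac: aleatoric uncertainty
    "Dirac at 1":    {0: 0.0, 1: 1.0},   # a Dirac: no aleatoric uncertainty
}
prior = {name: 0.5 for name in models}   # flat prior over the model class

def posterior(obs):
    """Posterior over the model class after observing a single symbol."""
    weights = {name: prior[name] * p[obs] for name, p in models.items()}
    z = sum(weights.values())
    return {name: w / z for name, w in weights.items()}

print(posterior(0))   # all mass on 'support {0,1}': no epistemic uncertainty left
print(posterior(1))   # mass split 1/3 vs 2/3: epistemic uncertainty remains
```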