Kathryn Tewson
kathryntewson.bsky.social
Good in a crisis and at no other time

First initial last name at gmail dot com

Signal: @KathrynTewson.06
… just exactly what it says? The most likely answer isn’t always the right answer. The most common blood type is O+, but my blood type is B-.
December 5, 2025 at 3:52 AM
It’s a Roomba. Those are the conditions of the hypo.
December 5, 2025 at 3:50 AM
Can it tell me which room is my daughter's and which is my son's?
December 5, 2025 at 3:30 AM
Can they evaluate whether a less likely option would be more appropriate?
December 5, 2025 at 3:30 AM
Data in context with other data is information.
December 5, 2025 at 3:12 AM
To anyone. And . . . what? Of course knowledge is produced.

AI models can't infer anything, they can't conclude anything. They don't think. There is no mind. They just generate statistically-probable text.
December 5, 2025 at 3:11 AM
It's certainly one step.
December 5, 2025 at 3:10 AM
The stuff that's new knowledge is the stuff I learned through reasoning -- "this is the master bedroom," "this is the kitchen." The Roomba just knows where walls are or aren't.
December 5, 2025 at 3:09 AM
I would say that the pattern is a fertile source for knowledge, but in and of itself, it isn't knowledge.
December 5, 2025 at 2:40 AM
I don't think so, no. It's just data.
December 5, 2025 at 2:36 AM
generate = produce
new = novel, not previously known
knowledge = facts, hypotheses, theories, explanations, conclusions, or understandings

Can it think new thoughts and derive things that were not previously known?
December 5, 2025 at 2:34 AM
Wow. So yeah, there's not really any articulation of what factual determinations support the moderators' conclusions that a given image is AI-generated -- lots of statements that there *are* such determinations, but no disclosure of what they are.

That's, uh, not great.
December 5, 2025 at 2:29 AM
Do you believe that a Roomba can generate new knowledge via the application of cognitive and intellectual work?
December 5, 2025 at 2:27 AM
It disappeared just after I accessed it -- I was looking at it earlier, I believe. Was it taken offline in response to my inquiry?
December 5, 2025 at 2:19 AM
?
December 5, 2025 at 2:10 AM
Yes, I read it but I didn't see evaluation guidelines articulated anywhere. Did I miss them?
December 5, 2025 at 2:05 AM
Are the guidelines the moderators work from published anywhere?
December 5, 2025 at 1:59 AM
The litigants in this case merely copied and pasted the output from ChatGPT into their brief without analyzing or validating it in any way, after having been repeatedly told not to do that.
December 4, 2025 at 11:25 PM
So by this definition, a Roomba is reasoning?
December 4, 2025 at 11:24 PM
I'm asking if the validation process is validating that the CoT correctly identified *steps* to get to the answer, or is validating that each CoT step has yielded the correct answer.
December 4, 2025 at 10:17 PM
*sigh* please answer the question I asked
December 4, 2025 at 9:27 PM
The right answer, or the right steps?
December 4, 2025 at 9:26 PM
Because of the emphasis in the paper on evaluating each CoT step for correctness.
December 4, 2025 at 9:16 PM
What makes the input "suitable"?
December 4, 2025 at 9:10 PM