Ian Bicking
ianbicking.org
@ianbicking.org
Software developer in Minneapolis. Working with applications of LLMs. Previously: Mozilla, Meta, Brilliant.org
I don't think we should cede that; I'm not willing to drop the label of leftist just because I don't conform to social media leftist norms. Doing so seems kind of wimpy.
December 22, 2025 at 12:27 AM
I don't know, I apply it to my understanding of the world.
December 22, 2025 at 12:21 AM
If it's not clear, I believe it's a better world where students graduate and still actively consume and make use of academic research. Giving them tools that help them apply research – inside and outside of their professions – feels like a boon to both researchers and society as a whole.
December 22, 2025 at 12:18 AM
Honestly, I should not have been snarky here. That's bad form, bad for discourse, and that's on me.
December 21, 2025 at 11:36 PM
Having just been piled onto for posting a mild defense of a suggested academic AI policy, I think it's situational. But I gotta remember just not to reply to that stuff.
December 21, 2025 at 11:10 PM
Empirically, yes: people really like it when you are nice and encouraging and generous with your time and thought! So much criticism of LLMs for being sycophantic, when maybe we should take the lesson that overt enthusiasm and appreciation are attractive traits worth developing in ourselves
December 21, 2025 at 11:01 PM
In your estimation, how many of your students read a paper after they graduate?
December 21, 2025 at 10:37 PM
Reposted by Ian Bicking
Your students don't get paid for it, and when they graduate they probably won't be paid for it.

I'm not a researcher, but I have made use of research before and after LLMs. FAR more after LLMs, because I can discover research and apply the lens of my work to that research.
December 21, 2025 at 10:30 PM
I believe very firmly that this technology already greatly increases the impact of research, and the breadth of people who can benefit from research. It could increase the societal value of academics (though I won't claim this is inevitable).
December 21, 2025 at 10:30 PM
For other tasks I’d have to consider them specifically. Obviously reading an entire paper will work better than reading an LLM summary. Any researcher has to make choices about depth and breadth given finite time.
December 21, 2025 at 9:30 PM
Here I will say that the state-of-the-art LLMs, given appropriate context (including the researcher’s goals and citation expectations), will consistently outperform the abstract. That is, if you read abstracts to make this same determination, the LLM will do better at guiding you
December 21, 2025 at 9:30 PM
The real question of accuracy, for this piece of advice (which I am defending but did not write!) is: does the LLM guide you to the correct next step? If you are choosing what material to look at more closely and what to ignore, did it guide you correctly?
December 21, 2025 at 9:30 PM
Separately, did the LLM faithfully represent the material as a whole? Did it identify the essential arguments from the supporting arguments? Harder still to judge, but absolutely important. Worth debate, and when used for a survey you should be doubly cautious
December 21, 2025 at 9:30 PM
Note LLMs hedge their statements just like humans, so depending on how you read the claims you might come up with different senses of their accuracy. You can even argue that if the LLM hedges differently than the original source, it may be inaccurate
December 21, 2025 at 9:30 PM
There are also a couple kinds of accuracy: is the summary factually correct, and does it faithfully represent the material? If you use an LLM that provides citations and quotes, then the accuracy in my experience is pretty good, but there are still regular mistakes
December 21, 2025 at 9:30 PM
Ok, serious answer: first, I don’t think “summarizing” is a very good approach, though it’s what everyone does. What you are asking the LLM to do is extract and condense information, and a “summary” typically doesn’t specify what information, leaving a kind of bland “common sense” interpretation
December 21, 2025 at 9:30 PM
Is your claim that the LLM output is poisoned, that even reading it is itself unacceptable? That seems like a bold claim, but if that’s what you’re claiming then ok I guess
December 21, 2025 at 3:41 AM
I read the list you took this item from (attached). It has a row: "Ask AI to summarize a book or article in your field. Reproduce that summary in your literature review without reading the book or article; Acceptable: No, never acceptable"

Your interpretation is plainly incorrect.
December 21, 2025 at 3:32 AM
"the advice says its acceptable to use the LLM output WITHOUT citing the LLM"

It does not say that.
December 20, 2025 at 9:27 PM
The advice says to use it as a starting point for critical engagement and not to cite it!
December 20, 2025 at 9:07 PM
If you have the time to read every book and article in your area of study and in related areas, then more power to you!
December 20, 2025 at 9:03 PM
The advice says not to cite the AI output. I guess it could be interpreted different ways, but I read it as suggesting you shouldn’t directly use that output in your own work. I wouldn’t cite a google search either, nor consider it authoritative. But I’d use it in research!
December 20, 2025 at 8:45 PM
What about this is controversial? Beginning an inquiry by using AI to survey material is a very normal and appropriate use.
December 20, 2025 at 6:53 PM