Phillip A. Brown
@phillip-brown.bsky.social
Ph.D. student in English at Notre Dame | 19th Cent. American Literature
If someone fools a reader into thinking a given text was produced by a human when it was produced by AI, that is not an earth-shaking call to reorient the field of the humanities; that is an example of one human being tricking another human being. Which is a bad thing, and it's okay to say it's bad.
July 21, 2025 at 10:26 PM
It is also just fundamentally not independent: these are programs created by humans to make these texts, which must be prompted by humans to produce a text, and that text must then be read and recognized by a human. AI is an ongoing human choice, not some disembodied force. (2/3)
July 21, 2025 at 10:26 PM
Postscript: This is how Big Tech works, and we've seen it before. People shocked and awed by these AGI projections should remember that fanciful projections have been used time and time again to manipulate stock prices and facilitate the lucrative acquisitions the tech economy is based on.
July 15, 2025 at 5:52 PM
which is... just not how things work, and genuinely insulting to pass off as a serious projection. Again, we have been given absolutely no concrete reason to think that the massive leap between LLMs and AGI is remotely close to happening, if it is even possible, let alone in two years' time. (4/4)
July 15, 2025 at 5:30 PM
Their forecasts are not based on actual research but on "models" that are purely speculative exercises. They rest on premises like "a company builds a super-powerful data center and then trains its AI on AI research to make it capable of advancing AI research." (3/4)
July 15, 2025 at 5:30 PM
We are still waiting for any realistic description of what a transition from LLMs to genuine human-like artificial intelligence (AGI) would look like. The dates and graphs seem convincing at first glance, but when you actually read the paper, there is no indication that this leap is imminent. (2/4)
July 15, 2025 at 5:30 PM
I mean an answer that doesn't ultimately rely on tautologies or circular "it should be because it is/will be" type thinking, or empty keywords like "tech literacy" (which ultimately reduces to the same thing).
March 18, 2025 at 9:22 PM
Thank you!
March 18, 2025 at 3:58 PM
Hello, is this Discord still active? The join link is expired.
March 18, 2025 at 3:20 PM
Recently “paying attention” has become the measure of political merit, *how much* you “pay attention” determines your level of merit, and not appearing to pay attention, or attending to the wrong things, gets you scolded as “complicit,” etc. Without regard to actual politics in any case.
November 15, 2024 at 2:05 PM
4. Trust the process. You'll end up where you were meant to be, and if that's not in graduate school for the time being, you'll learn something from the experience.
November 14, 2024 at 6:30 PM
3. Forget about "safety schools." My results didn't line up at all with my advance assumptions about probability. There's no way to de-risk the process, so don't waste an application (and the $80) on a school you wouldn't be thrilled to attend.
November 14, 2024 at 6:30 PM
2. Consider "optional" written responses as mandatory.

Relatedly, 2b: go through the online application for each school well in advance, so that when it's time to submit there are no surprise requirements that weren't listed on the department webpage. (Learned this the hard way.)
November 14, 2024 at 6:30 PM