Dave Bloom
drumkitdave.bsky.social
Music stuff, librarian/info lit stuff, other duties as assigned
Here’s another pretty good one on NotebookLM and summarization. dl.acm.org/doi/pdf/10.1...
December 22, 2025 at 1:53 PM
You had the same thought as I did! I explained what I found above.
December 21, 2025 at 4:24 PM
By November 2023, it listed the 2020-2022 run, including the article that Jill linked. Not a sure thing that AI was involved (other citations in this article are sloppy, but not necessarily AI-generated), but, despite its claimed publication date, the article wasn't on the site until months after ChatGPT came out. end/
December 21, 2025 at 4:22 PM
There aren't Wayback-archived versions of the article or volume/issue, but there's an archived version of the publication archive. Here's the weird part - by September 2022, the publication archive was out of order and didn't have any volumes published after 2020. 4/
December 21, 2025 at 4:15 PM
Anyway, after I did some checking on the article that Jill linked (mixed results--the journal isn't in Ulrich's Periodicals Directory, but the credited authors are affiliated as described), I used the Internet Archive Wayback Machine to see when this article actually showed up . . . 3/
December 21, 2025 at 4:07 PM
For instance, about a year ago, we found a publisher that had populated their archive with a bunch of back-dated abstracts that were almost certainly all AI-generated. It lent them an air of legitimacy, presumably so that their more recent, low-quality articles had some weight to them. 2/
December 21, 2025 at 4:02 PM
Academic librarian here and my colleagues and I have done some work on instances like this. A tricky thing about sketchy publications is that you often can't trust dates. This article may have a 2021 date on it, but that doesn't mean it's actually been around since then. 1/
December 21, 2025 at 3:59 PM
I checked through some of the other citations, as well. Most are either legit or only contain minor issues that are pretty common human goofs, like this one (wrong volume/issue, wrong page numbers), but there are some odd things about this article/journal that I'll post further up.
December 21, 2025 at 3:49 PM
The upshot is that we were able to knock results from the journal out of our article search and got the publisher's journals removed from Ulrich's. But this is clearly going to be an eternal game of Whac-a-Mole for librarians (and for patrons who contact us to report this stuff)! 4/4
December 6, 2025 at 1:53 AM
In another, a student found an article that had seemingly AI-generated fake citations using our institutional article search (which searches our subscription stuff, but also beyond). It got picked up via CrossRef. Another sketchy publisher, but this one with enough oomph to be listed in Ulrich's. 3/
December 6, 2025 at 1:49 AM
In one, a researcher told us he'd found himself cited as coauthor on a paper he hadn't written. We found that a few years' worth of backlog on the journal's page (a hijacked version of a legit journal) was AI-generated abstracts and titles, padding things out for the newer, dubious full-text articles. 2/
December 6, 2025 at 1:43 AM
My colleagues and I (librarians at UW-Madison) just ran a two-hour presentation on the impact of gen AI on predatory publishing for library staff this week! It was directly inspired by a few patron interactions similar to what you're describing. 1/
December 6, 2025 at 1:37 AM
My bandmates and I were *really* into that show. A few years later, a club offered my ex-bandmates' new band an opening spot for either Flickerstick or the Arcade Fire, who were just about to receive their Pitchfork 10.0 for Funeral. Guess which show my friends picked . . .
April 29, 2025 at 2:35 AM
In classes where AI is an info lit instruction component, I cover ethics, but focus on pragmatic issues, e.g., databases and search engines are better tools for finding credible sources that you can assess for authority, saving you work & time. It undercuts overall assumptions about AI and ease.
April 4, 2025 at 10:31 PM
Would love a follow-up that evaluates sourcing on more typical user queries! Anecdotally, your findings square roughly with what I've noticed about sourcing for more standard information-seeking prompts, but 'find this exact quote' is pretty distinct from 'answer this and provide support'.
March 13, 2025 at 7:27 PM
Although this study would suggest that you might have a difficult time verifying the output for that prompt, because the sourcing may be incorrect, incomplete, or misleading. And if you're not an expert on drugs or genetics, you definitely shouldn't trust the output without verification.
March 13, 2025 at 7:03 PM
That phrasing got me at first, too, but the queries in the study were "here's a quote, find me the source," so it mostly just establishes that sourcing is garbage. This is a very big deal, but shouldn't be extrapolated out to reliability of all output.
March 13, 2025 at 6:59 PM
Definitely worrying that an expert in assessment and digital learning is so overwhelmed with the narrative around AI as an efficiency engine that he steps right past a well-established, more reliable solution.
February 28, 2025 at 5:40 PM