Aaron Tay
@aarontay.bsky.social
3.2K followers 330 following 1.9K posts
I'm a librarian + blogger from Singapore Management University. Social media, bibliometrics, analytics, academic discovery tech.
Pinned
aarontay.bsky.social
I'm an academic librarian who has been blogging at Musings about librarianship since 2009.
To get a taste of what I blog about, see "Best of..." Musings about librarianship: posts on discovery, open access, bibliometrics, social media & more. Have you read them?
musingsaboutlibrarianship.blogspot.com/p/best-of.html
aarontay.bsky.social
It's ironic to see 2025 publications on academic AI search engines saying things like "Elicit uses GPT-3" and "Undermind.ai uses arXiv". (Might want to check if there are more updated sources.)
aarontay.bsky.social
Sorry, all virtual seats for Mike's session are now taken. But we still have seats for other events in this series. eventregistration.smu.edu.sg/event/TTT202...
aarontay.bsky.social
I realise I am very uncomfortable with agents, and I've been thinking about why (1)
aarontay.bsky.social
Want to hear more from @mikecaulfield.bsky.social? Mike, the creator of SIFT and co-author of Verified, is doing a free online class on using search + AI for verification. 24 October 2025 (Friday), 10:00–11:30 am SGT (UTC+8). eventregistration.smu.edu.sg/event/TTT2025/
aarontay.bsky.social
It definitely can't do things like find the citations/references of paper X, and... the "thinking" pretends it can, but it can't. To be fair, neither can Undermind.ai etc. So far, I haven't found an academic deep research tool that is agentic enough to do that, but modern general LLMs like ChatGPT CAN do such things (2)
aarontay.bsky.social
Playing more with Scopus Deep Research. Looking at the "thinking" & testing, it looks like Scopus Deep Research doesn't have citation searching as a tool, unlike Undermind, Consensus Deep Search etc. It looks more like it is generating various questions & choosing keywords to try to find content (1)
aarontay.bsky.social
Tried with the new Warden OpenAlex rewrite & the % with abstracts in Elsevier is higher, but still below last year's. I wonder: OpenAlex stores abstracts as an inverted index, so is that a way to bypass the issue? Technically the record does not contain the abstract, but you can still match text in the abstract?
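For context, OpenAlex's `abstract_inverted_index` maps each word to its positions in the abstract, so the text is recoverable client-side. A minimal sketch of the rebuild (field name per the OpenAlex works schema; the sample index below is made up for illustration):

```python
def rebuild_abstract(inverted_index):
    """Rebuild abstract text from an OpenAlex-style inverted index
    (word -> list of token positions)."""
    positions = {}
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions[i] = word
    # Join words back in position order
    return " ".join(positions[i] for i in sorted(positions))

# Toy example of the inverted-index shape (invented, not a real record)
ii = {"Google": [0], "Scholar": [1], "is": [2], "very": [3, 5], "large": [4]}
print(rebuild_abstract(ii))
```

So even when a record ships only the inverted index rather than a plain abstract string, full-text matching against the abstract is still possible after this reconstruction.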
aarontay.bsky.social
Hmm, yeah, I may misunderstand what this feature is meant to do. Will ask at our official webinar today.
aarontay.bsky.social
Another favourite question: can you use GS alone for systematic reviews? Like scite assistant and a few others, ScienceDirect is tripped up by Gehanno (2023) because the first sentence of the abstract is "it is said that...", though the paper's findings actually say it "could be used alone for SR" (6)
aarontay.bsky.social
Maybe I misunderstand how or what the "compare experiment" feature is for, but it makes little sense to compare papers that are totally different in method and/or objective?!?! (5)
aarontay.bsky.social
Some of it can be explained. E.g. ScienceDirect AI picks up a secondary citation of X and hence "thinks" that X is relevant, when actually X is on a totally different topic (it just happens to mention results from the truly relevant Y). But for some, I really can't explain why they appear in the compare experiment table (4)
aarontay.bsky.social
The biggest disappointment is the "compare experiment" feature. Leaving aside that you can't control the headers for comparison, in many tests the top few results are totally unrelated to the question. E.g. in this one the first two are not studies estimating the size of GS! Why is this so? (3)
aarontay.bsky.social
The generated answer here looks OK. That said, the fact that it searches full text instead of just abstracts means it is more likely to pick up secondary citations, similar to scite assistant.
e.g (Momodu, Okunade & Adepoju, 2022) is cited because it mentions the result from (Gusenbauer, 2019) (2)
aarontay.bsky.social
Kicking the tires of Sciencedirect AI. (1)
aarontay.bsky.social
In other words, I like the granularity of ASJC Subject Areas, but I don't want to assign them by using the journal the article is in....
aarontay.bsky.social
SOS. I have Scopus & SciVal, and I want a way to categorise articles in Scopus. But I don't want it to use journal-level assignment (i.e. all articles in journal X are in area Y). I know Scopus & SciVal have Topics and Topic Clusters, but they are too granular. What can I do? (1)
aarontay.bsky.social
Curious whether review articles have a higher %OA than normal articles. Playing around with OpenAlex and Lens, the answer seems to be yes. But looking at just gold (including diamond) + hybrid, I'm not sure how much is really APC funded (yes, I know gold OA etc. need not be APC funded).
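The OpenAlex side of this comparison can be scripted with the API's `group_by` parameter, which returns counts per OA status instead of full records. A hedged sketch that just builds the query URL (endpoint and attribute names as documented by OpenAlex; fetch the URL with any HTTP client to get the counts):

```python
from urllib.parse import urlencode

OPENALEX_WORKS = "https://api.openalex.org/works"

def oa_status_groupby_url(work_type):
    """Build an OpenAlex query that groups works of a given type
    (e.g. 'review' vs 'article') by their OA status."""
    params = {
        "filter": f"type:{work_type}",
        "group_by": "open_access.oa_status",
    }
    # keep ':' readable in the filter value instead of percent-encoding it
    return OPENALEX_WORKS + "?" + urlencode(params, safe=":,")

print(oa_status_groupby_url("review"))
print(oa_status_groupby_url("article"))
```

Comparing the `gold`/`diamond`/`hybrid`/`green`/`bronze` group counts from the two URLs gives the %OA-by-type answer without downloading any individual records.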
aarontay.bsky.social
Yes. I would say embeddings give you the "closest" match based on the distribution of the data they were trained on. "Most probable match" is another way of putting it, in that if it is BERT, the embeddings are trained using masked language modelling, so it will guess the most probable word to fill the mask (2)
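The "closest based on the training distribution" point can be illustrated with cosine similarity over toy vectors (the 3-d "embeddings" below are invented for illustration; real BERT embeddings are learned and much higher-dimensional):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented toy embeddings: semantically related words sit nearby
emb = {
    "library": [0.9, 0.1, 0.0],
    "archive": [0.8, 0.2, 0.1],
    "banana":  [0.0, 0.9, 0.4],
}

query = emb["library"]
closest = max((w for w in emb if w != "library"),
              key=lambda w: cosine(query, emb[w]))
print(closest)  # "archive" is nearest to "library" in this toy space
```

Whether "archive" really lands near "library" in an actual model depends entirely on the corpus the embeddings were trained on, which is the point: "closest" is always relative to that training distribution.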
Reposted by Aaron Tay
dannykay68.bsky.social
GEARING UP: Join more than 300 people attending our symposium: "Beyond Certainty – what does ‘Discovery’ mean in an open and artificially intelligent world?" www.deakin.edu.au/library/abou...
8 October 9:30am AEST
#AIBeyond2025
aarontay.bsky.social
Publishers used to be somewhat liberal about open abstracts, but no longer, it seems..
aarontay.bsky.social
Nothing really in there I didn't already know, except maybe the prices of the APIs. But it really reinforces the idea that Google has an overwhelming advantage in search and AI (which is search, in a way)