Yoshitomo Matsubara
@yoshitomo-matsubara.net
1.4K followers 170 following 260 posts
Research Scientist at Yahoo! / ML OSS developer / PhD in Computer Science at UC Irvine / Research: ML, NLP, Computer Vision, Information Retrieval / Technical Chair: #CVPR2026 #ICCV2025 #WACV2026 / Open Source/Science matters! https://yoshitomo-matsubara.net
yoshitomo-matsubara.net
Excited to share that I was selected as one of the inaugural PyTorch Ambassadors!

This is a great opportunity for me to promote open source/science + PyTorch as a PyTorch Ambassador

More to come soon!
pytorch.org/blog/meet-th...
Meet the 2025 PyTorch Ambassadors – PyTorch
pytorch.org
Reposted by Yoshitomo Matsubara
iccv.bsky.social
There’s no conference without the efforts of our reviewers. Special shoutout to our #ICCV2025 outstanding reviewers 🫡

iccv.thecvf.com/Conferences/...
2025 ICCV Program Committee
iccv.thecvf.com
Reposted by Yoshitomo Matsubara
aclrollingreview.bsky.social
🚨 ARR is looking for a volunteer Co-CTO to help improve tech infrastructure!
🛠️ Preferred:
• 5+ years in NLP research
• Git, CLI tools, Python, and basic HTML
• 2-year role, overlapping with current Co-CTO
Interested? DM @fredashi.bsky.social or email [email protected]
#ARR #ACL #NLProc
yoshitomo-matsubara.net
A may already be happening, IDK

Such studies would be helpful and should discuss quality as well
Not sure how "productivity" and "quality" can be quantified (<- should not be # of accepted papers), though

I don't see what you want to show with your previous reply
yoshitomo-matsubara.net
Thank you for sharing that
I know how CVPR/ICCV peer-reviews work

I checked his talk and confirmed your point, but the idea is simply to increase the acceptance rate for the main conference, which is not the same as accepting more papers as Findings the way NLP venues do

ICYDK aclanthology.org/2020.finding...
aclanthology.org
Reposted by Yoshitomo Matsubara
csprofkgd.bsky.social
Here we go again 😅 This time I’m planning to take a more senior role to help mentor the next gen of publicity chairs. Please consider volunteering!
cvprconference.bsky.social
#CVPR2026 is looking for Publicity Chairs! The role includes working as part of a team to share conference updates across social media (X, Bluesky, etc.) and answering community questions.

Interested? Check out the self-nomination form in the thread.
Reposted by Yoshitomo Matsubara
cvprconference.bsky.social
This marks the kick-off of our #CVPR2026 coverage! I’m @csprofkgd.bsky.social, and more publicity chairs will be joining in soon.

The key #CVPR2026 dates are now posted. One highlight: the supplementary materials deadline comes a full week AFTER the main paper deadline.
yoshitomo-matsubara.net
Ouch, I thought I could finally meet you in person during ICCV 2025 😭
yoshitomo-matsubara.net
Then, the 1st graph suggests that the ideal acceptance rate is a function of # submissions and venue capacity, which seems to run in the opposite direction of the Findings idea, and I disagree

I couldn't find his slide on "good citizens of computer vision", and it's still vague
yoshitomo-matsubara.net
I do not quite understand your "pricing approaches" or the devaluation part either
yoshitomo-matsubara.net
Same, I reviewed for AAAI and did not find the "AI review" useful

ICLR'25 did exactly what your idea proposes, but more than 70% of the reviews that received the feedback were not updated

Auditing human reviews with LMs should be helpful, but it should come with more rigorous enforcement and evaluation
yoshitomo-matsubara.net
I think the idea is similar to what AAAI'26 is doing (and ICLR'25 did)
medium.com/@yoshitomo-m...

Personally, I find it helpful for pre-submission reviews for authors

I don't find those helpful enough at this point to address the original issue
Personal thoughts on a randomized study of LM-based review feedback agent at ICLR 2025
Last year, ICLR 2025 announced an interesting experiment “Assisting ICLR 2025 reviewers with feedback”. Unfortunately, I had to decline the…
medium.com
yoshitomo-matsubara.net
I see your point about potential reduction in the rate of submission growth

Unless the number of quality (and available) reviewers increases proportionally to the number of submissions, we will face the same problem

From that perspective, the acceleration by LMs may make the situation even worse
yoshitomo-matsubara.net
I said "may even further increase"
My point is that simply accepting more papers would not solve the current peer-review issues

E.g., *CL / EMNLP introduced Findings to accept papers that meet the bar but are not selected for presentation at the main conf
But they are still facing the same issues as ML/CV venues
yoshitomo-matsubara.net
That objective is covered by the Findings track idea (accepted, but no presentation at the main conference) that NLP conferences use

However, simply accepting more papers won't help us slow down and may even further increase submission volume
yoshitomo-matsubara.net
I like the idea, but I believe that many people will submit duplicate or not-identical-but-reworded submissions to get more chances in the lottery
yoshitomo-matsubara.net
I understand different countries/regions move at different speeds, but my point is to slow down at the global level

A lottery seems like it would make the review system even more random and would not help us slow down
yoshitomo-matsubara.net
I'd be happier with capping at 5!
yoshitomo-matsubara.net
As I said, I do feel 10 per author is still too many, but you may be surprised that there was pushback when some conferences capped submissions at 25 papers per author
yoshitomo-matsubara.net
- Cap submissions, say 10 papers per author: I feel that's still too many, though

- Introduce a Findings track to the ML and CV communities: I saw ICCV'25 has a Findings workshop, a great first step

- Bring the reviewer mentorship program back: We should educate more reviewers, e.g., as ICLR 2022 did

What else?
yoshitomo-matsubara.net
As usual, my personal opinion

ML/CV/NLP communities should "slow down"

While controlling the acceptance rate just for the venue's "prestige" sounds like nonsense to me (e.g., #NeurIPS2025), we (re-)submit too many papers for the number of quality reviewers

Potential solutions👇
yoshitomo-matsubara.net
euripsconf.bsky.social
CLARIFICATION: We’ve been informed by the @neuripsconf.bsky.social organizers that no papers were rejected due to space constraints. We will nonetheless still have a Salon des Refusés track based on a to-be-determined set of criteria
euripsconf.bsky.social
Congratulations to everyone who got their @neuripsconf.bsky.social papers accepted 🎉🎉🎉

At #EurIPS we are looking forward to welcoming presentations of all accepted NeurIPS papers, including a new “Salon des Refusés” track for papers which were rejected due to space constraints!
yoshitomo-matsubara.net
By the way, I found an Official Review that was potentially "AI-generated" (a super long run of comments with no line breaks) 🤷

The "AI Review" looks better than the official review (at least in terms of presentation)
yoshitomo-matsubara.net
I checked the titles of the references provided at the end of the review and confirmed they all exist (didn't check authors, years, venues)
But I am not sure whether they are actually relevant to the submitted work

The stance of the "AI Review" is not quantified (no "Rating" in "AI Review")