- Bisecle is compatible with LLMs from 1B to 13B, adding only a small number of extra parameters and little computational overhead.
- Bisecle can achieve superior performance even in low-resource settings.
- Bisecle establishes new SOTA results, surpassing prior methods in both accuracy (+15.79%) and forgetting reduction (8.49% lower forgetting rate; see the metric sketch after this list).
- Our method Bisecle consistently outperforms the baselines, indicating strong robustness even when training data is limited.
- A multi-directional supervision mechanism improves knowledge preservation.
- A contrastive prompt learning scheme isolates task-specific knowledge to facilitate efficient memory storage and to explicitly mitigate update conflicts.
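A rough sketch of the general idea behind such a contrastive prompt scheme (names and shapes are illustrative, not Bisecle's actual implementation): each task gets a small learnable prompt, and an InfoNCE-style loss pulls a sample's features toward its own task prompt and away from the prompts of other tasks.

```python
# Illustrative sketch of contrastive prompt learning for continual learning.
# Not Bisecle's actual code; names, shapes and the pooling choice are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskPrompts(nn.Module):
    def __init__(self, num_tasks, prompt_len, dim):
        super().__init__()
        # One small prompt per task -- these are the only extra parameters.
        self.prompts = nn.Parameter(torch.randn(num_tasks, prompt_len, dim) * 0.02)

    def contrastive_loss(self, feats, task_ids, temperature=0.07):
        # feats: (B, dim) pooled features of the batch; task_ids: (B,) task index per sample.
        prompt_keys = F.normalize(self.prompts.mean(dim=1), dim=-1)   # (T, dim), one key per task
        feats = F.normalize(feats, dim=-1)
        logits = feats @ prompt_keys.t() / temperature                # (B, T) similarities
        # Cross-entropy attracts each sample to its own task prompt and repels it
        # from the prompts of other tasks, keeping task-specific knowledge separated.
        return F.cross_entropy(logits, task_ids)

# Toy usage
prompts = TaskPrompts(num_tasks=4, prompt_len=8, dim=256)
loss = prompts.contrastive_loss(torch.randn(16, 256), torch.randint(0, 4, (16,)))
loss.backward()
```

For the forgetting numbers quoted above, the usual continual-learning convention is the average drop from each earlier task's best accuracy to its final accuracy; a minimal sketch of that metric (not necessarily the paper's exact protocol):

```python
# acc[t][i] = accuracy on task i after finishing training on task t (0-indexed).
def average_forgetting(acc):
    T = len(acc)
    drops = [max(acc[t][i] for t in range(i, T - 1)) - acc[T - 1][i] for i in range(T - 1)]
    return sum(drops) / len(drops)

acc = [[0.80, 0.00, 0.00],
       [0.72, 0.85, 0.00],
       [0.70, 0.80, 0.83]]
print(average_forgetting(acc))  # (0.80-0.70 + 0.85-0.80) / 2 = 0.075
```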
🧠 Bisecle: Binding and Separation in Continual Learning for Video Language Understanding.
Preprint: arxiv.org/abs/2507.00469
Code: github.com/cruiseresear...
Bisecle is inspired by the rapid binding and pattern separation mechanisms in the hippocampus.
Reposted by: Flora D. Salim
#KDD2025 #WearableAI #VLLMs admscentre.org/4mu7lFx
The first author, first-year student Mehdi Jafari, is attending his first academic conference, #ACL2025.
* LLMs Can Represent and Retain ToM-related Constructs: the study found evidence that LLMs can indeed represent and retain ToM-related constructs.
* ToM-informed Alignment Improves Response Quality: conditioning response generation on ToM-related features produces more aligned responses.
a) The extent to which the activation space of LLMs represents the ToM of interlocutors (see the probing sketch below),
b) Whether these representations form a consistent model of ToM, and
c) How ToM-related features can be leveraged to generate more aligned responses.
Using ToM, we can analyse interlocutor behaviours based on an understanding of their mental and emotional states.
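As a concrete illustration of question (a) above, a generic activation-space probe: extract hidden states from one layer of an LLM for a set of dialogue snippets, then fit a linear classifier to predict a ToM-related label of the interlocutor. Everything below (data, labels, dimensions) is a placeholder, not the paper's actual pipeline.

```python
# Generic linear-probe sketch over LLM activations; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assume `activations` (N, hidden_dim) were extracted beforehand, e.g. the last-token
# hidden state at some middle layer, and `labels` (N,) encode a binary ToM-related
# construct such as "the speaker holds a false belief".
rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 768))   # placeholder features
labels = rng.integers(0, 2, size=200)       # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(activations, labels, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))
# Accuracy well above chance would suggest the layer linearly encodes the construct.
```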
aclanthology.org/2025.finding...
Code: github.com/cruiseresear...
Findings in the thread below.
Check out our recent work on this topic: Bisecle: Binding and Separation in Continual Learning for Video Language Understanding.
arxiv.org/abs/2507.00469
Know anyone suitable? Pls repost
Reposted by: Flora D. Salim
The paper will end up with a single decision; the point of this exercise is not just to address each reviewer's concerns individually.
Reposted by: Flora D. Salim
developer.nvidia.com/blog/introdu...
We'll discuss how AI transforms climate modeling, weather forecasting, and high-resolution urban simulations.
Anyone else attending?
www.nytimes.com/2025/03/01/u...
Reposted by: Flora D. Salim
Round 2 of Brick by Brick 2024 has commenced! To join: www.aicrowd.com/challenges/b...
Our NeurIPS 2024 paper includes both a multi-label classification benchmark and a zero-shot forecasting benchmark.
neurips.cc/virtual/2024...
-- this requires models to manage hierarchical dependencies and ensure consistency across the label hierarchy.
BTS also includes a knowledge graph (KG) that captures the relationships between the time series and their physical, logical, and virtual entities.
This makes it a great case for hierarchical multi-label classification: the time series are classified across nested categories (e.g. Point > Sensor > Air Quality > CO2).
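To make the hierarchical-consistency requirement concrete, here is a tiny sketch with an illustrative label hierarchy (not the actual Brick schema): whenever a leaf class such as CO2 is active, all of its ancestors must be active too.

```python
# Illustrative parent map for nested categories; not the real Brick/BTS schema.
PARENT = {
    "Sensor": "Point",
    "Air_Quality": "Sensor",
    "CO2": "Air_Quality",
    "Temperature": "Sensor",
}

def with_ancestors(labels):
    """Expand a label set so every ancestor is included (hierarchy-consistent targets)."""
    closed = set(labels)
    for lab in labels:
        while lab in PARENT:
            lab = PARENT[lab]
            closed.add(lab)
    return closed

# A time series tagged with the leaf "CO2" must also activate its ancestors:
print(sorted(with_ancestors({"CO2"})))   # ['Air_Quality', 'CO2', 'Point', 'Sensor']
```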