Annual Computer Security Applications Conference
@acsacconf.bsky.social
One of the longest-running computer security conferences, more info at: https://www.acsac.org This year's edition: ACSAC 2025 | December 8-12, 2025 | Waikiki, Hawaii, USA
acsacconf.bsky.social
Concluding the session was Srimoungchanh et al.'s "Assessing UAV Sensor Spoofing: More Than a GNSS Problem," which explores how sensor spoofing allows adversaries to control #UAVs beyond GNSS vulnerabilities. (www.acsac.org/2024/p...) 5/5
#Cybersecurity #DroneSecurity
acsacconf.bsky.social
In the session's third slot was Wang et al.'s "VIMU: Effective Physics-based Realtime Detection and Recovery against Stealthy Attacks on UAVs," showcasing a robust system for identifying and countering sensor threats in UAVs. (www.acsac.org/2024/p...) 4/5
#Cybersecurity #UAV
acsacconf.bsky.social
Following that, we had Park et al.'s "Leveraging Intensity as a New Feature to Detect Physical Adversarial Attacks Against LiDARs," showcasing a method that analyzes pulse intensity to enhance detection accuracy. (www.acsac.org/2024/p...) 3/5
#Cybersecurity #AutonomousVehicles
acsacconf.bsky.social
First was Xia & Chen's "Moiré Injection Attack: Compromising Autonomous Vehicle Safety via Exploiting Camera's Color Filter Array (CFA) to Inject Hidden Traffic Sign," presenting attacks on #AutonomousVehicles that deceive #AI perception while remaining invisible to human observers. (www.acsac.org/2024/p...) 2/5
acsacconf.bsky.social
For this #ThrowbackThursday, we will look at #ACSAC2024's (Autonomous) Vehicle Security session. The links in this thread will lead you to the paper PDFs and the slide decks, so be sure to check them out! 1/5
acsacconf.bsky.social
📣 Deadline alert 📣 TODAY is the deadline for submissions to the Workshop on AI for Cyber Threat Intelligence, which is held as a pre-conference workshop of #ACSAC2025. Find all details in the CfP on their website:
WAITI workshop: waiti-workshop.github.io
acsacconf.bsky.social
Those who want to get a head start on #ACSAC2025 can now book their rooms at the discounted conference rate: www.acsac.org/2025/v...
The 'Alohilani Resort in Waikiki, Honolulu, Hawaii
acsacconf.bsky.social
The last paper presented was Hegde et al.'s "Model-Manipulation Attacks Against Black-Box Explanations," exploring vulnerabilities in explanation methods like LIME and highlighting the need for trustworthy alternatives. (www.acsac.org/2024/p...) 6/6
#TrustworthyAI #ExplainableAI
acsacconf.bsky.social
The fourth paper in this session was Wang et al.'s "Physical ID-Transfer Attacks against Multi-Object Tracking via Adversarial Trajectory," revealing vulnerabilities in MOT systems through adversarial trajectories. (www.acsac.org/2024/p...) 5/6
#ComputerVision #Cybersecurity
acsacconf.bsky.social
After that came Doan et al.'s "On the Credibility of #Backdoor Attacks Against Object Detectors in the Physical World," revealing innovative methods to compromise real-world detection tasks with physical object-triggered backdoors. (www.acsac.org/2024/p...) 4/6
#AI
acsacconf.bsky.social
Following that, we had Tao et al.'s "Exploring Inherent Backdoors in #DeepLearning Models," highlighting vulnerabilities in clean models and identifying 315 backdoors in models from trusted sources. (www.acsac.org/2024/p...) 3/6
#AI #Cybersecurity
acsacconf.bsky.social
First up in the session was Warnecke et al.'s "Evil from Within: Machine Learning #Backdoors Through Dormant Hardware Trojans," highlighting a novel hardware-based backdoor attack on ML systems that alters neither the model nor the software. (www.acsac.org/2024/p...) 2/6
#HardwareSecurity
acsacconf.bsky.social
For this #ThrowbackThursday, we will look at #ACSAC2024's second Machine Learning Security session, which focused on Backdoors & Attacks. The links in this thread will lead you to the paper PDFs and the slide decks, so be sure to check them out! 1/6
acsacconf.bsky.social
The final paper in this session was Fan et al.'s "Lightweight Secure Aggregation for Personalized Federated Learning with Backdoor Resistance," introducing FLIGHT, a robust method enhancing #security and #efficiency in #FederatedLearning. (www.acsac.org/2024/p...) 6/6
#AI
acsacconf.bsky.social
Fourth was Ali et al.'s "Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning," which introduces a defense that outperforms SOTA methods through adversarial perturbations and a trust index for cluster selection. (www.acsac.org/2024/p...) 5/6
#AI
acsacconf.bsky.social
The third presentation was Behnia et al.'s "Efficient Secure Aggregation for Privacy-Preserving Federated Machine Learning," introducing e-SeaFL, a protocol enhancing efficiency with minimal communication overhead. (www.acsac.org/2024/p...) 4/6
#DataPrivacy #CyberSecurity #AI
acsacconf.bsky.social
Then came Zari et al.'s "Link #Inference Attacks in Vertical Federated Graph Learning," revealing a potent gradient-based attack that leaks link information and underscoring the urgent need for defenses. (www.acsac.org/2024/p...) 3/6
#CyberSecurity #DataPrivacy #AI
acsacconf.bsky.social
Launching the session was Li et al.'s "FedCAP: Robust Federated Learning via Customized Aggregation and Personalization," showing a novel solution tackling data heterogeneity and Byzantine threats. (www.acsac.org/2024/p...) 2/6
#MLSecurity #CyberSecurity #AI
acsacconf.bsky.social
For this #ThrowbackThursday, we will look at one of #ACSAC2024's Machine Learning Security sessions, specifically the one that focused on #FederatedLearning. The links in this thread will lead you to the paper PDFs and the slide decks, so be sure to check them out! 1/6
acsacconf.bsky.social
📣 Today is the deadline for the Cyber Security Experimentation and Test Workshop (held at #ACSAC2025) and for case study submissions to #ACSAC2025 📣 If your work fits either of these, our website has the info:

CSET: https://cset25.isi.edu
Case studies: www.acsac.org/2025/s...