Matthias Schulze
@percepticon.bsky.social
630 followers 280 following 2K posts
PhD in political science, studying infosec, cyber conflict & information war at IFSH. Self-taught hacker & blue team. Blog and podcast about my work over at https://percepticon.de or https://ioc.exchange/@percepticon
percepticon.bsky.social
Your passwords don’t need so many fiddly characters, NIST says #cybersecurity #infosec
Your passwords don’t need so many fiddly characters, NIST says
It’s once again time to change your passwords, but if one government agency has its way, this might be the very last time you do it.

After nearly four years of work to update and modernize its guidance for how companies, organizations, and businesses should protect their systems and their employees, the US National Institute of Standards and Technology has released its latest guidelines for password creation, and it comes with some serious changes. Gone are the days of resetting your and your employees’ passwords every month or so, and no longer should you or your small business worry about requiring special characters, numbers, and capital letters when creating those passwords. Further, password “hints” and basic security questions are no longer suitable means of password recovery, and password length, above all other factors, is the most meaningful measure of strength. The newly published rules will not only change the security best practices at government agencies, they will also influence the many industries that are subject to regulatory compliance, as several data protection laws require that organizations employ modern security standards on an evolving basis.

In short, here’s what NIST has included in its updated guidelines:
* Password “complexity” (special characters, numbers) is out.
* Password length is in (as it has been for years).
* Regularly scheduled password resets are out.
* Password resets used strictly as a response to a security breach are in.
* Basic security questions and “hints” for password recovery are out.
* Password recovery links and authentication codes are in.

The guidelines are not mandatory for everyday businesses, and so there is no “deadline” to work against. But small businesses should heed the guidelines as probably the strongest and simplest best practices they can quickly adopt to protect themselves and their employees from hackers, thieves, and online scammers. In fact, according to Verizon’s 2025 Data Breach Investigations Report, “credential abuse,” which includes theft and brute-force attacks against passwords, “is still the most common vector” in small business breaches. Here’s what some of NIST’s guidelines mean for password security and management.

1. The longer the password, the stronger the defense
“Password length is a primary factor in characterizing password strength,” NIST said in its new guidance. But exactly how long a password should be will depend on its use. If a password can be used as the only form of authentication (meaning that an employee doesn’t need to also send a one-time passcode or to confirm their login through a separate app on a smartphone), then those passwords should be, at minimum, 15 characters in length. If a password is just one piece of a multifactor authentication setup, then passwords can be as few as 8 characters. Also, employees should be able to create passwords as long as 64 characters.

2. Less emphasis on “complexity”
Requiring employees to use special characters (&^%$), numbers, and capital letters doesn’t lead to increased security, NIST said. Instead, it just leads to predictable, bad passwords. “A user who might have chosen ‘password’ as their password would be relatively likely to choose ‘Password1’ if required to include an uppercase letter and a number or ‘Password1!’ if a symbol is also required,” the agency said.
“Since users’ password choices are often predictable, attackers are likely to guess passwords that have previously proven successful.” In response, organizations should change any rules that require password “complexity” and instead set up rules that favor password length.

3. No more regularly scheduled password resets
In the mid-2010s, it wasn’t unusual to learn about an office that changed its WiFi password every week. Now, this extreme rotation is coming to a stop. According to NIST’s latest guidance, passwords should only be reset after they have been compromised. Here, NIST was also firm in its recommendation—a compromised password must lead to a password reset by an organization or business.

4. No more password “hints” or security questions
Decades ago, users could set up little password “hints” to jog their memory if they forgot a password, and they could even set up answers to biographical questions to access a forgotten password. But these types of questions—like “What street did you grow up on?” and “What is your mother’s maiden name?”—are easy enough to fraudulently answer in today’s data-breached world. Password recovery should instead be deployed through recovery codes or links sent to a user through email, text, voice, or even the postal service.

5. Password “blocklists” should be used
Just because a password fits a list of requirements doesn’t make it strong. To protect against this, NIST recommended that organizations should have a password “blocklist”—a set of words and phrases that will be rejected if an employee tries to use them when creating a password. “This list should include passwords from previous breach corpuses, dictionary words used as passwords, and specific words (e.g., the name of the service itself) that users are likely to choose,” NIST said. Curious where to start? “Password,” obviously, “Password1,” and don’t forget “Password1!”

Strengthening more than passwords
Password strength and management are vital to the overall cybersecurity of any small business, and they should serve as a first step toward online protection. But there’s more to online protection today. Hackers and scammers will deploy a variety of tools to crack into a business, steal its data, extort its owners, and cause as much pain as possible. For 24/7 antivirus protection, AI-powered scam guidance, and constant web security against malicious websites and connections, use Malwarebytes for Teams.
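As a rough illustration of the rules summarized above, here is a minimal Python sketch of a password check along those lines. It is not from NIST or Malwarebytes; the function name, the has_mfa flag, and the tiny example blocklist are assumptions made purely for illustration.

```python
# Minimal sketch of a password check along the lines of the NIST guidance above.
# The rules encoded here (length thresholds, no composition requirements, a
# blocklist lookup) follow the article's summary; the names and the example
# blocklist are illustrative only.

COMMON_PASSWORDS = {
    # In practice this set would be loaded from breach corpuses and dictionaries.
    "password", "password1", "password1!", "qwerty", "letmein", "123456",
}

def check_password(candidate: str, has_mfa: bool, service_name: str = "example") -> list[str]:
    """Return a list of reasons the password is rejected (empty list = accepted)."""
    problems = []

    # Length is the primary strength factor: 8+ characters with MFA, 15+ without.
    min_len = 8 if has_mfa else 15
    if len(candidate) < min_len:
        problems.append(f"too short: need at least {min_len} characters")

    # Users should be allowed passwords of at least 64 characters; if a maximum
    # is enforced at all, it should be no lower than that.
    if len(candidate) > 64:
        problems.append("longer than the 64-character maximum this sketch allows")

    # Deliberately no complexity rules (special characters, digits, capitals).

    # Blocklist: breached/dictionary passwords and service-specific words.
    lowered = candidate.lower()
    if lowered in COMMON_PASSWORDS or service_name.lower() in lowered:
        problems.append("matches the blocklist of common or service-related passwords")

    return problems


if __name__ == "__main__":
    print(check_password("Password1!", has_mfa=False))                    # rejected: short + blocklisted
    print(check_password("correct horse battery staple", has_mfa=False))  # accepted
```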
www.malwarebytes.com
percepticon.bsky.social
 Russian spyware ClayRat is spreading, evolving quickly, according to Zimperium #cybersecurity #infosec
 Russian spyware ClayRat is spreading, evolving quickly, according to Zimperium
A fast-spreading Android spyware is mushrooming across Russia, camouflaging itself as popular apps like TikTok or YouTube, researchers at Zimperium have revealed in a blog post. The company told CyberScoop they expect the campaign is likely to expand beyond Russian borders, too. In three months, Zimperium zLabs researchers observed more than 600 samples, the company wrote in a blog post Thursday. Once implanted, the spyware can steal text messages, call logs, device information and more, and wrest control of a phone to do things like take pictures or place phone calls. “It’s mainly targeting Russia, but they can always adapt to other payloads, and since every infected phone then becomes an attack vector, it’s likely to become a global campaign,” said Nico Chiaraviglio, chief scientist at Zimperium. “However, it’s not easy to know the attackers’ intentions.” The spyware, dubbed ClayRat, has some notable tools it uses to infect victims. “ClayRat poses a serious threat not only because of its extensive surveillance capabilities, but also because of its abuse of Android’s default SMS handler role,” the blog post reads. “This technique allows it to bypass standard runtime permission prompts and gain access to sensitive data without raising alarms.” It’s also been evolving quickly, Zimperium said, “adding new layers of obfuscation and packing to evade detection.” Zimperium didn’t say who was behind the spyware. The Russian government is a cyberspace power, but typically hasn’t had to rely on spyware vendors, per se, as it has its own capabilities. Often — but not always — spyware linked to or suspected to be linked to the Kremlin is turned inwards, snooping on domestic targets. “ClayRat is distributed through a highly orchestrated mix of social engineering and web-based deception, designed to exploit user trust and convenience,” according to Zimperium. “The campaign relies heavily on Telegram channels and phishing websites that impersonate well-known services and applications.” ClayRat’s users also rely on phishing platforms. The post Russian spyware ClayRat is spreading, evolving quickly, according to Zimperium appeared first on CyberScoop.
cyberscoop.com
percepticon.bsky.social
Apple bumps RCE bug bounties to $2M to counter commercial spyware vendors #cybersecurity #infosec
Apple bumps RCE bug bounties to $2M to counter commercial spyware vendors
In light of new memory safety features added to Apple’s latest iPhone chips that make entire classes of exploits harder to pull off, the company has revamped its bug bounty program to double or quadruple rewards in various attack categories. The payout for an iOS zero-click system-level remote code execution (RCE) exploit responsibly disclosed to the company by researchers will be raised from $1 million to $2 million starting next month. “We’re doubling our top award to $2 million for exploit chains that can achieve similar goals as sophisticated mercenary spyware attacks,” Apple’s security team wrote in a blog post. “This is an unprecedented amount in the industry and the largest payout offered by any bounty program we’re aware of — and our bonus system, providing additional rewards for Lockdown Mode bypasses and vulnerabilities discovered in beta software, can more than double this reward, with a maximum payout in excess of $5 million.” Both iOS and Android have been targets for surveillance software vendors that sell mobile spying capabilities to intelligence and law enforcement agencies around the world. These vendors have claimed for a long time that they carefully vet clients, but their software — which typically gets deployed through zero-day exploits — has frequently ended up in the hands of repressive regimes for the purpose of spying on political activists and journalists. Other zero-day exploits, either developed in-house by intelligence agencies or acquired from the grey market, are used in cyberespionage campaigns. Apple has fixed a total of eight zero-day exploits in iOS so far this year and six in 2024. Just last month, the company fixed a flaw in the iOS ImageIO component (CVE-2025-43300) that was chained with a zero-day vulnerability in WhatsApp (CVE-2025-55177) to target an estimated 200 individuals. “The only system-level iOS attacks we observe in the wild come from mercenary spyware — extremely sophisticated exploit chains, historically associated with state actors, that cost millions of dollars to develop and are used against a very small number of targeted individuals,” Apple said. “While Lockdown Mode and Memory Integrity Enforcement make such attacks drastically more expensive and difficult to develop, we recognize that the most advanced adversaries will continue to evolve their techniques.” CPU-level memory safety improvements With the launch of iPhone 17 and iPhone Air last month, Apple introduced Memory Integrity Enforcement, an anti-exploitation technology that has been five years in the making, with various pieces and components added to both its CPUs and software over time. Memory Integrity Enforcement aims to severely complicate the exploitation of memory corruption vulnerabilities, particularly buffer overflows and use-after-free memory bugs. It makes use of the CPU Arm Memory Tagging Extension (MTE) specification published in 2019 and the subsequent Enhanced Memory Tagging Extension (EMTE) from 2022. These chip-level mechanisms implement a memory tagging and tag-checking system so that any memory allocated by a process is tagged with a secret and any subsequent requests to access that memory need to contain the correct secret. In simple terms, exploiting memory corruption flaws is all about gaining the ability to write malicious bytecode into memory buffers already allocated by the system to an existing process — the vulnerable application usually — so that the process then executes your malicious code with its privileges. 
If the targeted process is a kernel component, then you obtain system-level arbitrary code execution privileges. With MTE, attackers now must also find the secret tag in order to write inside tagged memory buffers without being flagged and having their target process terminated by the OS. However, this technology still had shortcomings and weaknesses: race condition windows, issues with asynchronous writes, side-channel attacks that could leak the tag due to timing differences, and CPU speculative execution attacks such as Spectre v1, which use CPU caches to leak data and potentially MTE tags. “Ultimately, we determined that to deliver truly best-in-class memory safety, we would carry out a massive engineering effort spanning all of Apple — including updates to Apple silicon, our operating systems, and our software frameworks,” the Apple security team said. “This effort, together with our highly successful secure memory allocator work, would transform MTE from a helpful debugging tool into a groundbreaking new security feature.”

Higher difficulty means higher rewards
The culmination of that work is what Apple now calls Memory Integrity Enforcement (MIE), a feature of its new A19 and A19 Pro chips found in its iPhone 17 and iPhone Air lineup. MIE is leveraged in iOS to protect the entire kernel and over 70 userland processes, making memory corruption exploits against these targets much harder to pull off. It is for that reason that Apple has decided to increase the payouts in its bug bounty program. Researchers must now be even more creative and work even harder to get exploit chains to work on the latest Apple devices. Payouts have increased not just for the top remote code execution chains like those used by commercial spyware vendors. Other classes of attacks, many of which rely on memory corruption conditions combined with other flaws, are receiving bounty boosts starting next month:
* $500K for application sandbox escapes ($150K previously)
* $500K for attacks that require physical access to the device ($250K previously)
* $1M for proximity attacks through the wireless and radio protocols ($250K previously)
* $1M for one-click remote attack chains that require user interaction ($250K previously)
* $2M for zero-click remote attack chains ($1M previously)

In addition, individual attack chain components or multiple components that cannot be linked together to demonstrate an attack that meets the criteria above will still be eligible for rewards, but with lower payouts. The company has also introduced so-called Target Flags across the OS that, if “captured” by the researcher, would speed up their payout process even before a fix is developed and released. These target flags are designed to prove that the attack reached some level of capability, such as register control, arbitrary read/write, or code execution, and they enable Apple to verify the impact of a submitted exploit programmatically. Additional bonuses can take the rewards even higher. For example, reporting exploits in development or public beta builds is eligible for a bonus because doing so enables Apple to fix issues before the software is pushed to large numbers of devices. Exploits that bypass the iOS Lockdown Mode protections are also eligible for bonuses.
Sending iPhones to activists to counter spyware
Because Apple believes iPhone 17 devices are now much harder for spyware vendors to attack, the company is planning to provide 1,000 free devices to civil society organizations, to be distributed to individuals around the world who they determine are at high risk of being targeted with surveillance exploits.
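To make the memory-tagging idea described in this article more concrete, here is a toy Python sketch of the tag-and-check principle. It is purely conceptual: real MTE/MIE works in hardware on 16-byte granules with small tags, and none of the class or method names below correspond to Apple or Arm APIs.

```python
# Toy model of the memory-tagging idea behind MTE/MIE: every allocation gets a
# small random tag, and any later access must present the matching tag or the
# "process" is terminated. Conceptual only; not an Apple or Arm interface.
import secrets

class TaggedMemory:
    def __init__(self):
        self._store = {}   # address -> (tag, value)
        self._next_addr = 0

    def allocate(self, value=b""):
        """Allocate a cell; the legitimate owner receives the address and its secret tag."""
        addr, tag = self._next_addr, secrets.randbits(4)  # small tag, like MTE's 4 bits
        self._store[addr] = (tag, value)
        self._next_addr += 1
        return addr, tag

    def write(self, addr, tag, value):
        """A write carrying the wrong tag terminates the process instead of corrupting memory."""
        expected_tag, _ = self._store[addr]
        if tag != expected_tag:
            raise SystemExit("tag check failed: process terminated")
        self._store[addr] = (expected_tag, value)

mem = TaggedMemory()
addr, tag = mem.allocate(b"legitimate data")
mem.write(addr, tag, b"update by the owner")            # fine: correct tag
mem.write(addr, (tag + 1) % 16, b"attacker payload")    # overflow-style write: wrong tag, terminated
```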
www.csoonline.com
percepticon.bsky.social
Poland Records Increase in Cyberattacks on Critical Infrastructure #cybersecurity #infosec
Poland Records Increase in Cyberattacks on Critical Infrastructure
Poland said that the country’s critical infrastructure is experiencing an increasing number of cyberattacks from Russia. Polish Minister of Digitalization Krzysztof Gawkowski made this statement in an interview with Reuters. Russian military intelligence, according to the minister, has tripled its resources for malicious actions against Poland this year. He noted that of the 170,000 cyber incidents detected in the first three quarters of this year, a significant portion was attributed to Russian entities, while other cases were motivated by financial interests and involved theft or other forms of cybercrime. Gawkowski said that Poland faces 2,000-4,000 incidents every day, of which 700-1,000 “pose a real threat or could cause serious problems.” He said foreign adversaries are now expanding their activities beyond water and sewage systems and moving into the energy sector. He did not provide exact figures on Russian activities and could not comment on the methods Russia uses in Polish cyberspace. “Russia’s actions are the most serious because they are aimed at critical infrastructure necessary to maintain normal life,” Gawkowski said. This statement comes after a large-scale cyberattack that rocked several European airports on Saturday, September 20. On October 2, a large-scale distributed denial-of-service (DDoS) attack targeted Latvia’s state-owned websites.
insightnews.media
percepticon.bsky.social
A dual typology of social media interventions and deterrence mechanisms against misinformation #cybersecurity #infosec
A dual typology of social media interventions and deterrence mechanisms against misinformation
In response to the escalating threat of misinformation, social media platforms have introduced a wide range of interventions aimed at reducing the spread and influence of false information. However, there is a lack of a coherent macro-level perspective that explains how these interventions operate independently and collectively. To address this gap, I offer a dual typology through a spectrum of interventions aligned with deterrence theory and drawing parallels from international relations, military, cybersecurity, and public health. I argue that five major types of platform interventions, including removal, reduction, informing, composite, and multimodal, can be mapped to five corresponding deterrence mechanisms—hard, situational, soft, integrated, and mixed deterrence—based on purpose and perceptibility. These mappings illuminate how platforms apply varying degrees of deterrence mechanisms to influence user behavior.

By Amir Karami, School of Data Science and Analytics, Kennesaw State University, USA

Introduction
In response to the growing threat of misinformation, social media platforms have deployed diverse interventions designed to mitigate the spread and influence of false or misleading information (Krishnan et al., 2021) and have reached nearly half of all users in the process (Saltz et al., 2021). These interventions—such as content removal, limiting the visibility of malicious activities, and attaching warning labels to posts—reflect a dynamic governance landscape shaped by public pressure, political scrutiny, and evolving platform capabilities (Ng et al., 2021; Zannettou, 2021). Despite the increasing number and variety of interventions, there is a lack of a coherent macro-level typology that explains how these interventions operate independently and collectively. Current studies tend to examine platform interventions in isolation or emphasize their technical aspects (Broniatowski et al., 2023; Vincent et al., 2022; Pennycook & Rand, 2019), rather than analyzing their intended impact or positioning them within broader deterrence or governance models. A macro-level perspective is necessary for understanding how platforms seek to shape user behavior at scale. To address this gap, I propose a dual typology that systematically links social media interventions to deterrence mechanisms. This approach considers two core dimensions of interventions: purpose (coercion, restriction, persuasion, or synthesis) and perceptibility (visibility to users). The framework aims to clarify how interventions function not only as technical solutions but also as behavioral strategies that deter misinformation at scale.

Deterrence theory foundation
Deterrence theory in criminology posits that individuals can be dissuaded from unwanted behavior if potential punishments are sufficiently certain, swift, and severe (Tomlinson, 2016). Over time, this foundational concept has evolved into a broader typology of deterrence strategies, each defined by the mechanisms it employs. In international relations, hard deterrence (power) refers to coercive behavior through force or penalties, while soft deterrence relies on persuasive strategies by appealing to users’ values, beliefs, or understanding of consequences (Nye, 2005). Lying between these two approaches is situational deterrence (Cusson, 1993), which focuses on limiting opportunities for undesirable behavior without resorting to coercion.
Expanding beyond single tactics, integrated deterrence frameworks in military and cybersecurity domains combine multiple capabilities into a single, cohesive strategy of deterrence (Chen, 2023; Stewart, 2024). Likewise, mixed deterrence, frequently seen in space and defense applications, refers to the simultaneous application of different deterrent mechanisms to address complex or hybrid threats (Arkin, 1986; Chen, 2023). Integrated deterrence combines multiple tactics within one intervention on the same entity, while mixed deterrence applies different mechanisms across entities or levels to build a broader defense. Table 1 offers a definition for each deterrence mechanism.

Table 1. Deterrence mechanisms.
* Hard deterrence: Coercing behavior through force or penalties (Nye, 2005)
* Soft deterrence: Persuading behavior by appealing to users’ values, beliefs, or understanding of consequences (Nye, 2005)
* Situational deterrence: Restricting opportunities for undesirable behavior without resorting to coercion (Cusson, 1993)
* Integrated deterrence: Combining multiple capabilities into a single, cohesive deterrence strategy (Chen, 2023; Stewart, 2024)
* Mixed deterrence: Adopting simultaneous application of different deterrent mechanisms to address complex or hybrid threats (Arkin, 1986; Chen, 2023)

Typology of social media interventions and deterrence mechanisms
I propose a dual typology (see Table 3) that maps intervention types with corresponding deterrence mechanisms, offering a more systematic understanding of how social media platforms combat misinformation. This bridges the technical implementation and behavioral function of interventions, highlighting how each type of intervention serves a different purpose (see Table 2) and deterrence mechanism (see Table 1). The typology is grounded in two dimensions:
* Purpose: the strategic intent—whether the intervention seeks to coerce, restrict, or persuade, or synthesize these three purposes.
* Perceptibility: the degree to which the intervention is visible or noticeable to users.

In the typology, the high and low perceptibility reflect broad patterns in how interventions are typically experienced by users. For example, informing interventions such as warning labels are usually highly perceptible because they appear directly on the content that users view, while reduction interventions such as downranking are less perceptible because they operate algorithmically in the background. At the same time, perceptibility is context-dependent. The same intervention may be more or less visible depending on the level at which it is applied. For instance, the removal of a single post may be relatively invisible to the wider community, whereas the suspension of an entire account is highly noticeable to both the affected user and their audience.
Table 2. The definition of social media interventions for fighting misinformation.
* Removal intervention: Deletion of content or the suspension/removal of user accounts (Center for an Informed Public et al., 2021)
* Informing intervention: Providing users with information, context, or warnings about content (Center for an Informed Public et al., 2021)
* Reduction intervention: Curtailing the reach or visibility of content and accounts (Center for an Informed Public et al., 2021)
* Composite intervention: Combining multiple types of interventions into a single, unified approach that is applied together as one strategy (Glasziou et al., 2014)
* Multimodal intervention: Adopting two or more interventions across different levels of action in a coordinated way (Burgener et al., 2008; Morton et al., 2020)

Table 3. Dual typology of deterrence mechanisms and social media interventions (deterrence mechanism / intervention / purpose / perceptibility / example).
* Hard / Removal / Coercive / High / Account suspension
* Soft / Informing / Persuasive / High / Warning
* Situational / Reduction / Restrictive / Low / Downranking
* Integrated / Composite / Restrictive + Persuasive / High / Labeling content and disabling its engagement features
* Mixed / Multimodal / Coercive, Restrictive, Persuasive / High / Simultaneous downranking of some comments and removal of other comments

Hard deterrence mechanism via removal intervention
Building on this framework, I now illustrate each intervention type through concrete examples, beginning with the most restrictive and perceptible form of moderation. Removal refers to deleting content or suspending and removing user accounts or online communities that disseminate misinformation (Center for an Informed Public et al., 2021; Cima et al., 2024). This represents the most restrictive form of intervention, as it fully blocks access to content, the offending user, or the community sharing misinformation. All major platforms employ removal for content that violates policies. For instance, YouTube, Facebook, and Twitter (now X) removed the “Plandemic” video—a viral COVID-19 conspiracy theory—in May 2020 to staunch its spread (Culliford, 2020a). Entire pages or accounts dedicated to spreading falsehoods have been taken down as well. Removal is a hard deterrence mechanism because it operates through direct coercion with high perceptibility: it imposes a high cost on the violator by erasing their content or presence, thereby unequivocally signaling that the behavior is prohibited.

Soft deterrence mechanism via informing intervention
Informing interventions aim to educate users or provide additional context without removing content. These include warning labels, fact-check notices, banners, contextual panels, and interstitial pop-ups that appear before viewing or sharing flagged posts, along with other messages designed to educate or caution users (Center for an Informed Public et al., 2021). For example, Facebook and Instagram apply labels to posts debunked by fact-checkers, often covering the post with a warning that must be clicked through. Twitter has used interstitial warnings on tweets and prompts (Geeng et al., 2020). TikTok has added banners on COVID-19 or vaccine-related posts with reminders and links to authoritative information (Morgan, 2020). These measures allow the content to remain accessible (no deletion or reach reduction in many cases), thus fully preserving the user’s ability to speak and others’ ability to hear them, but they inject additional information to steer perception.
Informing is the epitome of a soft deterrence mechanism: it relies on persuasion rather than any material restriction.

Situational deterrence mechanism via reduction intervention
Reduction refers to interventions that curtail the reach or visibility of content and accounts associated with misinformation, without fully removing them (Center for an Informed Public et al., 2021). Tactics include downranking or demoting posts in algorithmic feeds, limiting the distribution of certain URLs or stories, or placing frictions on sharing (Center for an Informed Public et al., 2021). The content remains on the platform, but it is harder to encounter. For example, Facebook’s feed algorithm downranks posts identified as containing “exaggerated or sensational health claims” so that they appear to far fewer users (Yeh, 2019). Twitter has in the past downranked replies or tweets deemed misleading (Roth & Harvey, 2018; Roth & Pickles, 2020). Reduction is a situational deterrence mechanism: it imposes restrictions on the opportunity of misinformation spread but not an absolute ban. In terms of coerciveness, it is less coercive than hard deterrence yet more coercive than soft deterrence. Notably, reduction measures are low perceptibility.

Integrated deterrence mechanism via composite intervention
Composite intervention, in this taxonomy, refers to combining multiple types of interventions (Glasziou et al., 2014), spanning soft and situational deterrence mechanisms, into a single coordinated strategy. Rather than a single mode of intervention, an integrated deterrence mechanism synthesizes tactics to address misinformation more holistically. In practical terms, this could mean applying two or more interventions simultaneously to the same content or actor. For example, Twitter’s handling of certain election misinformation in 2020 was integrated: it placed warning labels on tweets and disabled the ability to like or retweet those tweets without quote-commenting (Culliford, 2020b). Reddit’s “quarantine” feature for problematic communities is another integrated intervention: a quarantined subreddit is not banned, but it is placed behind a click-through warning page and is kept out of search results and recommendations (Reddit, 2018). In short, composite intervention is an integrated deterrence mechanism because it deliberately combines two deterrence modes to create a more effective or targeted overall deterrent. The combination of interventions can thus be seen as creating a composite intervention that attacks the misinformation’s spread through both restriction and persuasion.

Mixed deterrence mechanism via multimodal intervention
Multimodal intervention refers to strategies that strengthen the overall information environment or user resilience against misinformation through multiple interventions. In the typology, this aligns with mixed deterrence mechanisms, a comprehensive approach where multiple layers of defense and influence are deployed in a coordinated way. Unlike integrated intervention, which bundles two actions on a single piece of content, mixed deterrence mechanisms span across different levels (post, account, community, network, and off-platform), creating a repetitive or structural reinforcement. For example, Twitter began removal interventions in 2017, in which violating posts were deleted at the content level. In 2018, it introduced reduction interventions that downranked or limited the visibility of misleading content.
By 2020, Twitter also adopted informing interventions, such as applying warning labels or prompts to provide additional context (Crowell, 2017; Culliford, 2020b; Roth & Harvey, 2018). Overall, a mixed deterrence mechanism means deploying multiple interventions that work together to form a robust deterrent and mitigation structure, thereby leveraging varying degrees of coerciveness, restrictiveness, and persuasiveness. This approach underscores the rationale for organizing interventions according to their underlying deterrence mechanism. Unlike other interventions, which typically target specific entities such as a post, a user, or a community, multimodal interventions operate across multiple levels of the platform. They may simultaneously combine actions at the post, account, and community levels, creating a layered strategy that reinforces deterrence across the broader information environment. This distinction frames multimodal interventions as a comprehensive moderation approach that operates across multiple layers of the platform, rather than a tactic aimed at a single entity.

Conclusion
Organizing social media misinformation interventions into five categories (removal, reduction, informing, composite, and multimodal) and mapping these categories onto five deterrence mechanisms (hard, situational, soft, integrated, and mixed) provides a theoretically grounded framework that clarifies how each strategy functions to influence user behavior. This typology highlights two core dimensions: purpose (coercive, restrictive, persuasive, or blended) and perceptibility (how visible the intervention is to users). Classical deterrence theory serves as the foundational logic of this framework, offering structured insight into the behavioral assumptions behind various intervention types. Social media governance research emphasizes that interventions are not “one size fits all” in either their effectiveness or public reception. This reinforces the need to distinguish interventions not only by their technical function but also by their social acceptability, level of intrusiveness, and communicative transparency. The typology also acknowledges the importance of combining hard and soft tactics through integrated or mixed deterrence approaches. These strategies may be especially effective when confronting hybrid threats such as politicized misinformation or coordinated influence campaigns. Beyond its theoretical value, this typology has practical relevance for a range of stakeholders. Researchers can use it as a coding framework to systematically track different deterrence mechanisms across platforms. Journalists and fact-checkers can draw on it to explain these mechanisms in accessible terms for the public. Policy analysts and regulators may apply it to assess whether measures such as downranking misleading political ads strike the right balance between limiting harm and preserving open debate. Platform practitioners can use the typology to guide the design of interventions, while civil society organizations can leverage it to advocate for layered approaches that combine removal, reduction, and informing to counter coordinated influence campaigns. These examples illustrate the flexibility of the typology, since different deterrence mechanisms and interventions can be applied depending on the context and stakeholder goals.
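As a rough sketch of how the dual typology could serve as such a coding framework, the mapping in Tables 1-3 can be expressed as a small lookup structure. The category labels come from the article; the data structure and the example records are illustrative assumptions, not part of the author's method.

```python
# Rough sketch of the dual typology (Tables 1-3) as a coding scheme a researcher
# could use to label observed platform actions. Category names follow the article;
# the structure and sample records are illustrative only.

TYPOLOGY = {
    # intervention: (deterrence mechanism, purpose, perceptibility)
    "removal":    ("hard",        "coercive",                        "high"),
    "informing":  ("soft",        "persuasive",                      "high"),
    "reduction":  ("situational", "restrictive",                     "low"),
    "composite":  ("integrated",  "restrictive + persuasive",        "high"),
    "multimodal": ("mixed",       "coercive/restrictive/persuasive", "high"),
}

def code_action(platform: str, action: str, intervention: str) -> dict:
    """Attach the deterrence-theoretic coding to an observed moderation action."""
    mechanism, purpose, perceptibility = TYPOLOGY[intervention]
    return {
        "platform": platform,
        "action": action,
        "intervention": intervention,
        "deterrence_mechanism": mechanism,
        "purpose": purpose,
        "perceptibility": perceptibility,
    }

# Example records drawn from the cases discussed above.
sample = [
    code_action("YouTube", "removed the 'Plandemic' video", "removal"),
    code_action("Facebook", "downranked sensational health claims", "reduction"),
    code_action("Twitter", "labeled election tweets and disabled retweets", "composite"),
]
```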
In short, by mapping social media interventions onto the dual typology, I gain not only a coherent and flexible framework for scholarly analysis but also a practical lens for evaluating, communicating, and refining responses to misinformation across health, political, and crisis domains. The post A dual typology of social media interventions and deterrence mechanisms against misinformation first appeared on HKS Misinformation Review.
misinforeview.hks.harvard.edu
percepticon.bsky.social
German Cabinet Approves Law To Shoot Down Threatening Drones #cybersecurity #infosec
German Cabinet Approves Law To Shoot Down Threatening Drones
In the wake of mysterious drone incursions that forced the recent shutdowns of Munich Airport, the German cabinet approved a measure to give police the authority to shoot down uncrewed aerial systems (UAS) posing a danger. The move marks a significant shift in how German authorities approach counter-drone defense, which had previously been limited to detecting drones, not taking them down. The changes come as several European nations have been experiencing a rash of drone incursions, which Germany’s chancellor says are part of Russia’s ramped-up hybrid war efforts, a claim Moscow denies.

German Chancellor Friedrich Merz took to social media on Wednesday to explain the need to update German law to meet the new drone threats. “Drone incidents threaten our safety,” said Merz. “We will not allow that. We are strengthening the powers of the federal police so that drones can be detected and intercepted more quickly in the future.”

The new law would give police permission to take down drones that “violated Germany’s airspace, including shooting them down in cases of acute threat or serious harm,” Reuters reported. The measure awaits parliamentary approval. In addition to kinetic counter-drone measures, the new law gives German authorities permission to use “lasers or jamming signals to sever control and navigation links,” the news outlet noted. The measure extends to all domains. “In order to combat a threat posed by unmanned aerial systems on land, in the air or on water, the federal police may deploy appropriate technical means against the system, its control unit or its control connection if other means of combating the threat would be futile or otherwise significantly more difficult,” the new law states.

All this comes as Germany has seen a 33% increase in the number of drone-related air traffic disruptions this year. There were 172 such events between January and the end of September 2025, up from 129 in the same period last year and 121 in 2023, according to data from Deutsche Flugsicherung (DFS). The new authority to take out drones is one of several measures Germany is taking in the wake of the incursions. German police are creating a new counter-drone unit to deal with the problem. To build up the expertise of this unit, German officials will talk to countries like Israel and Ukraine that have significant experience creating and fighting off drones. Germany is also working out a system where the police and military would divide up counter-drone efforts, Interior Minister Alexander Dobrindt explained. “Police would deal with drones flying at around tree-level, whereas more powerful drones should be tackled by the military,” Dobrindt said.

Photo caption: A sign indicates a no-drone-zone as flights resumed at Munich Airport after temporary suspensions due to drone sightings. (Photo by Johannes Simon/Getty Images)

Germany has debated changing its Federal Police laws for years.
They were last updated in 1994. Discussion about how to defend against drones in many ways mirrors concerns expressed in the U.S., where current federal law restricts actions and collateral damage concerns limit how the military can respond. As a result, the U.S. military is not currently pursuing the use of lasers, microwaves, missiles or guns. The recent expiration of drone interception authorities provided to the departments of Homeland Security and Justice adds further restrictions to the ability of U.S. agencies to mitigate incursions. For Germany, the issue is even more challenging, because by law, its military is a defensive force “whose role is explicitly limited to protecting the state from external military threats in war-like scenarios,” the German DW news outlet noted. Even in the case of the current drone problems, it is unclear if any pose a military threat or create a situation akin to war.

Given concerns that a wider war could erupt, German arms manufacturer Rheinmetall is awaiting a multi-billion-dollar order from the German Armed Forces for its Skyranger anti-aircraft gun system, which has major counter-drone capabilities. These systems could also be used to actively defend sites from drone incursions during peacetime under the new law, although their use would have to be tightly controlled and would only be applicable in certain situations. Regardless, the Skyranger deal is indicative of how seriously Germany is beginning to take the drone problem.

As with the U.S., there are huge concerns in Germany about civilian harm from counter-drone systems, especially in heavily populated areas. Following reports of drone incursions over several European nations, Germany dispatched the German frigate FGS Hamburg to Copenhagen to help protect European Union meetings. It was one of several deployments of European counter-drone measures to the Danish capital. Though the Hamburg is armed with missiles and guns, a spokesperson of the Bundeswehr joint force command told us prior to the EU meetings that the ship’s responses to any drone incursions would be limited to detection efforts by its sensors. “The principle of proportionality and to minimize collateral damage are two important aspects we always keep in mind,” the spokesperson said in response to our questions about the Hamburg’s rules of engagement for its weapons systems, should a drone or drones be detected.

Photo caption: The German Navy frigate FGS Hamburg docked in Copenhagen, Denmark, to provide anti-drone protection for the EU summit. (Photo by Kristian Tuxen Ladegaard Berg/NurPhoto)

The increasing concern about protecting NATO’s skies began after more than a dozen Russian drones entered Polish airspace last month, with some being shot down. A flight into Estonian airspace by three Russian MiG-31 Foxhound interceptors further increased tensions, which have already been high with a brutal war raging in Ukraine and concerns that it could spill over its borders. The drone incursions over a number of European countries ramped up considerably around this time. The latest wave of drone incursions began late last month when two Nordic airports were temporarily closed. Danish Prime Minister Mette Frederiksen said the airspace violation over the Copenhagen Airport was “the most serious attack on Danish critical infrastructure to date.” As we have explained in the past, it is quite possible that many, if not most, of these sightings are mistaken identity.
It is a pattern that emerged last year when thousands of people claimed to see drones in the New Jersey region of the U.S. The overwhelming majority of those sightings were airplanes, planets and other benign objects in the sky. Still, just like in the New Jersey case, we do know that a significant number of the sightings over military bases were confirmed by the government. The reality is that these drone incursions over critical facilities in Europe have been happening for years, but just how much it has exploded in recent weeks is blurred by media reports and sightings not supported by independent analysis or corroborated by sensor data. Regardless, German leaders say they are working to bring the nation’s law in line with other European countries, “such as France, Britain, Romania and Lithuania, which have extended the powers of their security forces to take out drones that are unlawfully in their airspace,” the Guardian pointed out. “Today we are creating a strong law for the federal police,” proclaimed Dobrindt, introducing the new counter-drone measures. “We are reacting decisively, effectively and technically at the cutting edge.” Contact the author: [email protected] The post German Cabinet Approves Law To Shoot Down Threatening Drones appeared first on The War Zone.
www.twz.com
percepticon.bsky.social
Newly developed 'NeuroWorm' a breakthrough for neural monitoring #cybersecurity #infosec
Newly developed 'NeuroWorm' a breakthrough for neural monitoring
(China Daily) Chinese scientists have devised "NeuroWorm", an intelligent microfiber capable of navigating freely inside the body, demonstrated through lab tests on mice, providing insights for brain-machine interfaces and neural regulation in treating certain chronic diseases.   Tests showed that the microfiber, which is about 200 micrometers in width and as thin as two strands of human hair, can monitor neural electrical signals and minute tissue deformations across large areas of the body or brain while moving, unlike similar materials from other studies, which remain fixed in one location.  Such advancement could redefine treatments using brain-machine interfaces or neurological disease therapies, the researchers said. "Unlike traditional Parkinson's disease treatments that require implanting multiple electrodes in different brain regions, the 'Neuro-Worm' is implanted once and can navigate various affected areas, monitoring neural electrical signals and even alleviating symptoms through electrical stimulation," said Yan Wei, a lead scientist on the team and researcher from the College of Materials Science and Engineering at Donghua University in Shanghai.   "For disabled patients who need brain-machine interfaces to help move restricted body parts, this technology is equally beneficial. With a single implantation of this fiber, it can monitor neural electrical signals across a wide range of the body."   Inspired by the segmented body structure of earthworms, which allows for distributed sensory and motion control, the research team led by Liu Zhiyuan, a researcher at the Chinese Academy of Sciences' Shenzhen Institute of Advanced Technology and Yan developed the "NeuroWorm". The study was published on the website of the journal Nature on Wednesday.   "This dynamic, soft and stretchable microfiber serves as a neural interface, offering significant advantages over traditional neural interface devices that are static and require invasive procedures for repositioning," Yan said.   The "NeuroWorm" fiber has other advantages over traditional devices, the researchers said. For example, it is embedded with 60 nano-level micro sensors, 15 times the number found in traditional methods. This allows for precise multipoint monitoring of neural electrical and biomechanical signals while navigating within the body after being implanted in the muscle or navigating within the brain after being implanted in the cerebral cortex of mice, said Yan. Furthermore, the fiber possesses excellent competency in softness. Long-term experiments on mice showed high biocompatibility, with the microfiber remaining in muscles for up to 13 months without adverse reactions, the researchers said.   "A key innovation lies in the fiber's ability to navigate within the body. Using an open magnetic control strategy, we achieved initial controlled movement and steering within tissues, allowing the fiber to move like an earthworm through soil and record neural signals all along the path without the need for additional surgeries," said Yan.   Zhu Meifang, a strategic adviser of the study, who is also director of the State Key Laboratory of Advanced Fiber Materials at Donghua University, said this breakthrough aims to transition bioelectronics from fixed passive recording to mobile intelligent collaboration.   
"The team hopes to work with more institutions in the application field to accelerate the practical use of the technology, potentially transforming the landscape of the clinical use of brain-machine interface devices as well as neural disease treatment and monitoring," she said. Source: By Zhou Wenting in Shanghai | China Daily | Updated: 2025-10-03 07:31
www.technologynewschina.com
percepticon.bsky.social
Former U.S. Cyber Chief: Crowdsource Cyber Defense #cybersecurity #infosec
Former U.S. Cyber Chief: Crowdsource Cyber Defense
EXPERT INTERVIEW — Riyadh’s Global Cybersecurity Forum (GCF) in Saudi Arabia kicked off last week under the theme “Scaling Cohesive Advancement in Cyberspace.” The gathering came as researchers are increasingly discovering new malware and hacking campaigns, cybercrime is at an all-time high, and, in the U.S., critical cybersecurity legislation and authorities have been allowed to expire. We caught up there with Chris Inglis, the first U.S. National Cyber Director, who says he sees reason for optimism. Inglis spoke on a cybercrime panel at the GCF and told us why he’s bullish on the prospect of cooperation and collaborative action to effectively counter cyber threats. Our conversation has been lightly edited for length and clarity. The Cipher Brief: What is the real focus there right now as all of these cyber experts gather? Inglis: There is a buzz to be sure, and I think that buzz kind of revolves around the use of the term in their title this year, which is to do “cohesive scaling.” Both of those attributes are important. Cohesive implies the notion not just of concurrent action, but collaborative action. And scale is what lies before us. So we must scale this effort because we're being crowdsourced by a vast array of actors, malign actors, holdings at risk through things like ransomware or insertions or critical infrastructure. So I think the buzz is what do we do together as opposed to the single point solutions that might be offered by the technologist alone. The Cipher Brief: You're on a panel there talking about cybercrime and the global stakeholders associated with cybercrime. Can you give us a few highlights of some of the things that you're going to talk about in that session? Inglis: I think that the reality of cybercrime is it's perhaps a more appealing, more transcendent issue to focus collective action on, because every citizen, regardless of what nation he or she might be from, cares about crime and wants to live in a world where they're not going to be thwarted or taken down by somebody that takes advantage of digital infrastructure that's not quite fit for purpose. And so rather than talk about who those actors are that hold them at risk or talk about coalitions of one form or another that might take on coalitions of malign actors, let's talk about the needs of our citizens and that everyone wants to live in a crime-free world. That might sound like a bit of a panacea, but there's no one that would argue against that. And I think the other thing about taking on the criminal elements is that there's so many of them, the cost of entry is still so low and the assets they might acquire still so high that we're never going to entirely remove them from the field. That might sound like I'm giving up before I even start, but it's going to focus us on this high-leverage proposition of, what if we just made it too hard for them to succeed? I then don't need to find each and every one of those that's transgressed and succeeded against me. I actually am in a better place because they decided today not to try or they failed in trying in the first place. And so it focuses us, again, on resilience and robustness, not for its own sake, but so that we might have confidence in digital infrastructure. I think those are the highlights of this collective action and a focus on resilience. The Cipher Brief: Oftentimes, criminal groups now are being backed by nation states. How is that being tackled at an international forum like this? Inglis: We're being too kind. 
Sometimes, criminal enterprises are nation states, thinking about North Korea where it's a money-making proposition. It's an unholy alliance to be sure, and I think it gives them the kind of backing that we do not want to put into the hands of any single adversary. But we have the right on the defensive side to not simply collaborate, but to do so in the light of day. We don't have to skulk about in the dark or to accomplish these crowdsourcing activities on the dark web. We could do it in the light of day in a place like Riyadh, which is what's taking place here. Talking about what our common aspirations are for our citizens, talking about what the common kind challenges are to those aspirations, and thinking about not just collective action, which might be a concurrent application of all this talent, but collaborative action with a degree of professional intimacy that we actually assist one another in ways that no one of us could succeed alone. So I'm bullish about what the defense can pull off if they follow the same tactics that the offense does, which is let's crowdsource the other side. The Cipher Brief: While you're talking about collaboration in Riyadh, CISA 2015 expired here in the United States on September 30th, and that really has a lot of indicators in terms of information sharing between government and the private sector. How serious of an issue is this? Inglis: Of course, I'm worried about the lack of the legal authority and the liability protections that are attendant to that. But if it was truly valuable in the first place, then I hope, imagine and am confident that that degree of sharing still goes on. That form should follow function. We should get the law back in place as soon as possible. I've heard no one argue against the usefulness of that, and we're just caught in a time and place where we ran out of time. But behind the scenes, hopefully, and I'm more than hopeful, I'm confident there is a degree of collaboration going on. Why? Not because it's mandated, but because it's useful to all sides. The Cipher Brief: President Trump was just in Saudi Arabia earlier this year where he announced a pretty incredible investment package. AI was a big focus of his trip there, and of those announcements that were made, I'm wondering how concerned you are about autonomous AI-driven cyber weapons escalating conflicts, and if there is a path toward international guardrails or norms here. Inglis: I don't think anyone's actually talking about the literal kind of creation of autonomous AI driven systems. That term is sometimes not well-defined. Ask it this way, which is do we want weapon systems that can change sides in the middle of a war? Of course not. So we don't want autonomous weapon systems. But do we want highly capable weapon systems that augment human capacity, that can take a line of action from a human being who remains accountable, the human remains accountable, and execute that at scope and scale in ways that a human alone could not? Yes, of course we do, but we need a value scheme to go with that. And there's talk not just on the part of governments, but on the part of the private sector for the necessity of that. If we went back 50, 60 years to the days of early robotics, Isaac Asimov would be advising us that we should have three rules for robots. One, it should never hurt a human being. Two, it should obey human beings. And three, it should protect itself. In that order. 
And it turns out there's an equivalent to those three simple rules for generative AI or agentic AI. I'm not afraid of AI that achieves human-like capacities, but I am very nervous about having it be completely independent of human beings. And no one that I know is talking about having it be independent of human beings. Human accountability must and will remain on the loop, even though the speed of the human's ability to think through the complex problems AI can take on is going to be overmatched in a wondrous way by generative AI. We will remain accountable for it, and therefore the values that Asimov would recommend play through to this day. And I think that there's a version of that in every instance that I've seen of responsible parties talking about how to use this in some different capacity.

The Cipher Brief: How bullish are you on the idea of norms, particularly when we're seeing so many nation-states using cyber as a national security tool, an espionage tool, and cybercrime? How bullish are you on norms and the effectiveness of norms?

Inglis: I'm bullish on the utility of norms. I'm less bullish on the implementation of those kind of universally and kind of the same across all kind of players in this space. As we talked about earlier in this conversation, clearly there are some actors who are broadly ignoring those norms, and the answer to that is not for ourselves to similarly violate those norms. Why? Because our people are then disadvantaged in that regard. They get caught in the churn. Our allies or those who would collaborate with us in this world will not then commit their full time and attention to that in the absence of shared values, shared norms, shared aspirations, and so I think that norms still have their value, and it still tells us how we actually deliver on the human aspirations that ultimately have a foundation in values, not just technology.

The Cipher Brief: What are some of the most interesting conversations that you've had on the sidelines there in Riyadh?

Inglis: I think the most interesting conversations are about those who argue for collaboration as opposed to division of effort. And the pitch that they make is not one that's to their own advantage, it's to the collective advantage. Reminding us that we're not trying to solve similar problems. We're all trying to solve the same problem or deliver the same aspirations to our citizens. Those are the most compelling conversations that I've seen so far. And the focus by the GCF, the Global Cybersecurity Forum that's convened here by the Saudis, is on those things that every parent, every human being could find a noble aspiration for our children: child protection, elimination of ransomware that holds individuals and small businesses at risk. Those are, I think, the most meaningful discussions. The technology can follow, the doctrine can follow, but if we get those aspirations right, we're in a better place at the start.
www.thecipherbrief.com
percepticon.bsky.social
In theory, yes; in practice I always forget it ;)
percepticon.bsky.social
Even a good 20 years after its introduction, I still haven't learned that you simply cannot reliably edit GoogleDocs on a German train during the ride, because of gaps in mobile coverage...