Like I said, that AI is dangerous 🧿🌐👁️🗨️ 🤮🤮🤢 🚫 I will have kid(s) when I find the right woman #aiawareness #aimisuse
June 20, 2025 at 7:22 PM
Key takeaways: As we navigate AI’s evolution, we must prioritize reliability and ethical considerations. Blind trust in AI can erode critical thinking and authenticity. Future implications include a potential call for stricter guidelines for AI applications. #AIMisuse #AIethic...
December 4, 2024 at 1:39 PM
Excited to be presenting at the aiEDU & OESCA AI Summit today!
Sessions & resources:
🧰 AI Toolbox - bit.ly/curts-aitools
😈 Managing AI Misuse - bit.ly/curts-aimisuse
💬 Custom Chatbots - bit.ly/curts-chatbots
#edusky
May 9, 2024 at 1:55 PM
North Korean Threat Actors Leverage ChatGPT in Deepfake Identity Scheme #AIMisuse #Cybersecurity #DataTheft
North Korean Threat Actors Leverage ChatGPT in Deepfake Identity Scheme
The North Korean hacking group Kimsuky has used ChatGPT to create convincing deepfake South Korean military identification cards, a troubling instance of how artificial intelligence can be weaponised in state-backed cyber warfare.
As part of their cyber-espionage campaign, the group embedded the falsified documents in phishing emails targeting defence institutions and individuals, adding a layer of credibility to their activities.
The AI-generated IDs made the attacks, which aimed to deceive recipients, deliver malicious software, and exfiltrate sensitive data, more effective. Security monitors have categorised the incident as an AI-related hazard, since the misuse of ChatGPT led directly to breaches of confidential information and violations of personal rights.
The case highlights growing concern about the use of generative AI in sophisticated state-sponsored operations. By combining deepfake technology with phishing tactics, these attacks become harder to detect and far more damaging.
Palo Alto Networks' Unit 42 has observed a disturbing increase in the use of real-time deepfakes in job interviews, with candidates disguising their true identities from potential employers.
In their assessment, the tactic is alarmingly accessible: it can be executed in a matter of hours, with minimal technical know-how, on inexpensive consumer-grade hardware.
The investigation was prompted by a report in the Pragmatic Engineer newsletter describing how two applicants, nearly hired by a Polish artificial intelligence company, raised suspicions that both were deepfake personas controlled by the same individual.
Unit 42's analysis concluded that these practices are a logical progression of a long-standing North Korean scheme in which IT operatives infiltrate organisations under false pretences, a strategy well documented in previous cyber threat reports.
The hacking group known as Kimsuky, which operates under the direction of the North Korean state, has repeatedly been accused of conducting espionage operations against South Korean targets over many years.
A 2020 advisory from the U.S. Department of Homeland Security assessed that the group is likely tasked with gathering global intelligence on Pyongyang's behalf.
Recent research from the South Korean security firm Genians illustrates how artificial intelligence is increasingly integrated into such operations.
A report published in July described North Korean actors manipulating ChatGPT to create fake ID cards; further experiments showed that simple prompt adjustments were enough to override the platform's built-in limitations.
The incident fits a broader pattern. Anthropic disclosed in August that its Claude Code tool had been misused by North Korean operatives to create sophisticated fake personas, pass coding assessments, and secure remote positions at multinational companies.
In February, OpenAI confirmed that it had suspended accounts tied to North Korea for generating fraudulent resumes, cover letters, and social media content intended to assist with recruitment efforts.
According to Genians director Mun Chong-hyun, these activities highlight AI's growing role at many stages of cyber operations, from creating attack scenarios and developing malware to impersonating recruiters and targets.
In the latest campaign, a phishing operation impersonating an official South Korean military account (.mil.kr) targeted journalists, researchers, and human rights activists. It remains unclear how extensive the breach was or to what extent the attackers succeeded.
U.S. officials assert that such cyber activities are part of a broader North Korean strategy, alongside cryptocurrency theft and IT contracting schemes, that seeks to gather intelligence and generate revenue to circumvent sanctions and fund the country's nuclear weapons program.
Kimsuky, also known as APT43, the state-backed unit suspected of carrying out the July campaign, has already been sanctioned by Washington and its allies for its role in advancing Pyongyang's foreign policy and sanctions evasion.
Researchers at Genians reported that the group used ChatGPT to create sample government and military identification cards, which were then incorporated into phishing emails disguised as official correspondence from a South Korean defence agency that manages ID services.
Along with the fraudulent ID card, the messages delivered malware designed to steal data and grant remote access to compromised systems. Analysis confirmed the counterfeit IDs were created with ChatGPT despite the tool's safeguards against replicating government documents; the attackers evaded those safeguards by framing their prompts as requests for mock-up designs.
Kimsuky's introduction of deepfake technology into its operations marks a significant step: generative AI makes convincing forgeries far easier to produce, substantially lowering the barrier to creating them.
Kimsuky has been active since at least 2012, focusing on government officials, academics, think tanks, journalists, and activists in South Korea, Japan, the United States, Europe, and Russia, particularly those engaged with North Korea policy and human rights issues.
Research shows the regime relies heavily on artificial intelligence to create fake resumes and online personas, enabling North Korean IT operatives to secure overseas employment and perform technical tasks once embedded. These operatives use a range of deceptive practices to obscure their origins and evade detection, including AI-powered identity fabrication and collaboration with foreign intermediaries.
The South Korean foreign ministry has endorsed that assessment.
The growing use of generative AI in cyber-espionage poses a major challenge for global cybersecurity frameworks: helping people identify and protect themselves against threats that exploit trust, not just technical sophistication.
Although platforms like ChatGPT and other large language models have guardrails in place, experts warn that adversaries will continue to probe for weaknesses and adapt their tactics through prompt manipulation, social engineering, and deepfake augmentation.
The Kimsuky case shows how AI-enabled cybercrime erodes traditional detection methods: counterfeit identities, forged credentials, and fabricated personas blur the line between legitimate interaction and malicious deception.
Security experts urge a multi-layered response that combines AI-driven detection tools, robust digital identity verification, cross-border intelligence sharing, and better awareness within targeted sectors such as defence, academia, and human rights organisations.
Governments and private enterprises must develop AI technologies together to ensure they are harnessed responsibly and their misuse minimised. This campaign makes clear that as adversaries use artificial intelligence to sharpen their attacks, defenders must adapt just as quickly to preserve trust, privacy, and global security.
dlvr.it
September 23, 2025 at 5:01 PM
Deepfake Video of Sadhguru Used to Defraud Bengaluru Woman of Rs 3.75 Crore #AIMisuse #CyberFraud #CybersecurityThreats
Deepfake Video of Sadhguru Used to Defraud Bengaluru Woman of Rs 3.75 Crore
In a striking example of how emerging technologies are weaponised for deception, a 57-year-old Bengaluru woman was defrauded of Rs 3.75 crore after viewing an AI-generated deepfake video that appeared to show the spiritual leader Sadhguru.
The woman, who identified herself as Varsha Gupta of CV Raman Nagar, said she did not know deepfakes existed when she saw a social media reel that appeared to show Sadhguru promoting stock investments through a trading platform, encouraging viewers to start with as little as $250.
The video and subsequent interactions convinced her of its authenticity, leading her to invest heavily between February and April before discovering she had been deceived. Police confirmed the case and launched an investigation, noting that multiple fake advertisements using AI-generated voices and images of Sadhguru were circulating online during that period.
The incident underscores not only the escalating financial risks of deepfake technology but also its growing ethical and legal implications: Sadhguru had recently petitioned the Delhi High Court to protect his rights against unauthorised AI-generated content that could harm his persona.
Varsha was soon contacted by an individual identifying himself as Waleed B, who claimed to be an agent of Mirrox.
Using multiple UK phone numbers, he added her to a WhatsApp group of nearly 100 members and set up trading tutorials over Zoom. When Waleed later withdrew, another man, Michael C, took over as her trainer.
Using fake profit screenshots and fabricated credit entries within a trading application, the fraudsters built credibility and convinced her to make repeated transfers into their bank accounts. Between February and April she invested more than Rs 3.75 crore across a number of transactions.
When she attempted to withdraw what she believed were her returns, she was told additional fees and taxes were due; when she refused to pay, all communication ceased abruptly. Investigators are working with banks to freeze accounts linked to the scam, but recovery remains uncertain because the complaint was filed nearly five months after the last transfer.
The case has been registered under Section 318(4) of the Bharatiya Nyaya Sanhita and the Information Technology Act. Meanwhile, Sadhguru Jaggi Vasudev and the Isha Foundation formally petitioned the Delhi High Court in June for safeguards against misappropriation of his name and identity by publishers of deepfake content.
The Foundation also issued a public advisory on the social media platform X, warning about scams that use manipulated videos and cloned voices of Sadhguru and reaffirming that he does not and will not endorse any financial schemes or commercial products.
During the Zoom sessions, the organisers, later identified as fraudsters, projected profit screenshots and staged discussions designed to motivate participants to invest; reassured by the apparent success stories and what seemed like a legitimate platform, Varsha transferred the funds in several instalments across different bank accounts.
Authorities noted that the incident comes amid rising concern over AI-driven fraud, with deepfake technology increasingly used to lend credibility to such schemes, underscoring the need for greater digital literacy and vigilance.
While law enforcement agencies continue to strengthen their cybercrime units, the first line of defence remains the individual. Experts advise citizens to treat unsolicited financial offers with caution, especially those appearing on social media platforms or messaging applications.
Independent verification through official channels, multi-factor authentication on sensitive accounts, and resisting the impulse to click suspicious links can all significantly reduce exposure to such traps.
Banks and financial institutions should likewise be encouraged to deploy advanced AI-based monitoring systems that can detect irregular transaction patterns and identify fraud networks before they cause significant losses.
Beyond technology, consistent public awareness campaigns and stricter regulation of digital platforms that carry misleading advertisements are also needed.
Vigilance against emerging threats such as deepfakes is now essential to protecting personal wealth and trust.
As this case demonstrates, the sophistication of today's fraudsters makes self-protection in the digital era increasingly difficult without a combination of diligence, education, and more robust systemic safeguards.
dlvr.it
September 13, 2025 at 4:30 PM
One egregious misuse involves AI generating exploitative content, like sexualized images of marginalized communities. This not only disrespects the original creators but also perpetuates harmful stereotypes, raising questions about accountability in AI systems. #AImisuse
December 3, 2024 at 4:28 PM
Contractor Uses AI to Fake Road Work, Sparks Outrage and Demands for Stricter Regulation #AIMisuse #AIgeneratedroadscam #ChatGPTfraud
Contractor Uses AI to Fake Road Work, Sparks Outrage and Demands for Stricter Regulation
In a time when tools like ChatGPT are transforming education, content creation, and research, an Indian contractor has reportedly exploited artificial intelligence for a far less noble purpose—fabricating roadwork completion using AI-generated images.
A video that recently went viral on Instagram has exposed the alleged misuse. In it, a local contractor is seen photographing an unconstructed, damaged road and uploading the image to an AI image generator. He then reportedly instructed the tool to recreate the image as a finished cement concrete (CC) road—complete with clean white markings, smooth edges, and a drainage system.
In moments, the AI delivered a convincing “after” image. The contractor is believed to have sent this fabricated version to a government engineer on WhatsApp, captioning it: “Road completed.” According to reports, the engineer approved the bill without any physical inspection of the site.
While the incident has drawn laughter for its ingenuity, it also shines a spotlight on a serious lapse in administrative verification. Civil projects traditionally require on-site evaluation before funds are cleared. But with government departments increasingly relying on digital updates and WhatsApp for communication, such loopholes are becoming easier to exploit.
Though ChatGPT doesn’t create images, it is suspected that the contractor used AI tools like Midjourney or DALL·E, possibly combined with ChatGPT-generated prompts to craft the manipulated photo. As one Twitter user put it, “This is not just digital fraud—it’s a governance loophole. Earlier, work wasn’t done, and bills got passed with a signature. Now, it’s ‘make it with AI, send it, and the money comes in.’”
The clip, shared by Instagram user “naughtyworld,” has quickly racked up millions of views. While some viewers praised the tech-savviness, others expressed alarm at the implications.
“This is just the beginning. AI can now be used to deceive the government itself,” one user warned. Another added, “Forget smart cities. This is smart corruption.”
The incident has fueled widespread calls on social media for stronger regulation of AI use, more transparent public work verification processes, and a legal probe into the matter. Experts caution that if left unchecked, this could open the door to more sophisticated forms of digital fraud in governance.
dlvr.it
June 10, 2025 at 4:34 PM
Google’s Gemini AI Sparks Backlash Over Watermark Removal Capabilities
#AI #GenAI #GoogleAI #GeminiAI #Google #Alphabet #AIregulation #WatermarkRemoval #ImageProtection #CopyrightAI #SynthID #AIMisuse #AIcontroversy #DigitalRights #AIEthics
Google’s Gemini AI Sparks Backlash Over Watermark Removal Capabilities - WinBuzzer
Google’s Gemini 2.0 Flash AI model has sparked controversy for removing watermarks from protected images, raising legal and ethical concerns.
winbuzzer.com
March 17, 2025 at 9:34 AM
Presenting at the AI Summit for Educators today!
All my session resources:
🤖 Google AI - https://bit.ly/curts-googleai
😈 Managing AI Misuse - https://bit.ly/curts-aimisuse
🚀 Enriching Students with AI - https://bit.ly/curts-aienrich
💡 AI to Support Learners - https://bit.ly/curts-aisupport
#edusky
August 1, 2023 at 1:01 PM
Excited to be presenting today in Athens GA for the Georgia Cyber Academy!
🧔 Hipster Google bit.ly/curts-hipster
🧰 AI Tools bit.ly/curts-aitools
✨ Google AI bit.ly/curts-googleai
😈 AI Misuse bit.ly/curts-aimisuse
#edusky
July 24, 2024 at 4:17 PM
Excited to be providing a full day of AI sessions for MSD of Pike Township in Indiana today!
🤯 AI Uses - bit.ly/curts-aiuses
🧰 AI Tools - bit.ly/curts-aitools
💯 AI Grading - bit.ly/curts-aigrad...
😈 AI Misuse - bit.ly/curts-aimisuse
#edusky
July 15, 2024 at 1:16 PM
Threat actors exploit X's Grok AI to spread malicious links—AI-generated trust is now a weapon. Awareness and filtering are critical. 🤖🔗 #AIMisuse #SecureAI
Threat actors abuse X’s Grok AI to spread malicious links
Threat actors are using Grok, X's built-in AI assistant, to bypass link posting restrictions that the platform introduced to reduce malicious advertising.
buff.ly
September 4, 2025 at 6:39 AM
🗣️ 📢#Authors and #publishers: A judge has approved a $1.5 billion copyright settlement in the case against #Anthropic for alleged #AIMisuse of #LiteraryWorks. Visit apnews.com/article/anth... to learn more and claim your rights.
#PublishingNews #Copyright #AuthorsRights
Judge approves $1.5 billion copyright settlement between AI company Anthropic and authors
A federal judge on Thursday approved a $1.5 billion settlement between artificial intelligence company Anthropic and authors who allege nearly half a million books had been illegally pirated to train chatbots.
apnews.com
October 18, 2025 at 10:36 PM
🚨 𝗔𝗜 𝗠𝗶𝘀𝘂𝘀𝗲 𝗶𝘀 𝗮𝗻 ‘𝗘𝘅𝘁𝗿𝗲𝗺𝗲 𝗥𝗶𝘀𝗸’—𝗘𝗿𝗶𝗰 𝗦𝗰𝗵𝗺𝗶𝗱𝘁 𝗦𝗼𝘂𝗻𝗱𝘀 𝘁𝗵𝗲 𝗔𝗹𝗮𝗿𝗺! 🚨
www.aibusinesslist.com/resources/fo...
#AInews #EricSchmidt #AIMisuse #ArtificialIntelligence #aibusinesslist
Former Google CEO Eric Schmidt Warns of 'Extreme Risk' from AI Misuse
Former Google CEO Eric Schmidt warns of catastrophic dangers posed by AI misuse if not properly regulated.
www.aibusinesslist.com
February 19, 2025 at 12:37 PM
🎣 A flaw in Google Gemini lets attackers hijack AI-generated email summaries for phishing. When AI meets inbox, vigilance becomes your first line of defense.
#PhishingRisk #AIMisuse 📩⚠️
buff.ly/h5DSHVN
Google Gemini flaw hijacks email summaries for phishing
Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using…
buff.ly
July 16, 2025 at 8:05 AM
1/ Here’s our key takeaway from the new Google DeepMind paper on AI misuse: The biggest threat isn't sophisticated "model compromise." It's simple "capability exploitation." #GenAI #AIMisuse #TrustAndSafety #Disinformation #CreatorEconomy
October 21, 2025 at 8:03 PM
Cyberhare Solutions, a University of the West of Scotland start-up that aims to tackle the rising academic misuse of artificial intelligence, has secured £250,000 in funding from UK Research and Innovation.
www.digit.fyi/scottish-sta...
#tech #AI #AImisuse @DrCyberhare
Scottish Startup Wins £250K to Tackle AI Cheating
As AI cheating cases at UK universities soar, a new platform aims to give lecturers evidence-based insights into tech misuse by students.
www.digit.fyi
September 29, 2025 at 9:15 AM
Moreover, the rise of AI scams illustrates a critical challenge: the reliance on individuals to identify and combat misuse is flawed. Even experts can fall prey, underscoring the need for systemic protections and education around AI's risks and vulnerabilities. #AImisuse
December 3, 2024 at 4:29 PM
Excited to be presenting today at Circleville City Schools!
💎 The "Gem in AI": Google Gemini for Schools - bit.ly/curts-gemini
🚦 Managing AI Concerns in Schools - bit.ly/curts-aimisuse
#EduSky #EduSkyAI #EdTech
💎 The "Gem in AI": Google Gemini for Schools - bit.ly/curts-gemini
🚦 Managing AI Concerns in Schools - bit.ly/curts-aimisuse
#EduSky #EduSkyAI #EdTech
May 9, 2025 at 12:19 PM
Manas Robin, a long-time collaborator of Garg and a noted singer-composer, confirmed that a unique ‘digital signature’ is being created for the late singer’s vocals
#ZubeenGarg #VoicePreservation #DigitalSignature #AIMisuse
nenow.in/north-east-n...
Zubeen Garg voice preservation: Assam launches digital signature to prevent AI misuse
Guwahati: As Assam mourns the sudden demise of its iconic singer Zubeen Garg, authorities and associates of the late artist have initiated steps to digitally preserve his voice to prevent misuse throu...
nenow.in
September 22, 2025 at 12:56 PM
Oh. My. Goddess.
This is so stupid, error-riddled, and utterly shitty that only complete asshole fascists would use it.
#aiMisuse.
"Eine KI scannt nahezu in Echtzeit die Social-Media-Feeds der betreffenden Person, wie aus Daten hervorgeht, die nun von der Grenzschutzbehörde veröffentlicht wurden."
Damit wären die US entgültig tabu. Bei meiner Trump-Kritik, lande ich sonst in El Salavador.
www.derstandard.de/story/300000...
Anyone entering the USA must submit to emotion recognition by AI
The US border agency publishes a long list of the AI systems in use. "Risky" keywords should be avoided at all costs
www.derstandard.de
April 17, 2025 at 11:55 AM
Key takeaways: AI Ethics must evolve alongside technology, emphasizing accountability and transparency. As misuse proliferates, we must prioritize ethical guidelines, robust safeguards, and public awareness to navigate future implications responsibly. #AIethics #AImisuse
December 3, 2024 at 4:29 PM
Chiranjeevi Secures Court Order Protecting Personality Rights Against AI Misuse
#AImisuse #Chiranjeevi #HyderabadCourt #PersonalityRights #publicityrights
Chiranjeevi Secures Court Order Protecting Personality Rights Against AI Misuse
Veteran actor Chiranjeevi has secured an interim injunction from a Hyderabad civil court, granting him significant relief in a personality and publicity
blazetrends.com
October 26, 2025 at 7:24 AM