Jo Peterson
banner
cleartech.bsky.social
Jo Peterson
@cleartech.bsky.social
840 followers 1.8K following 98 posts
Engineer who helps clients scope, source and vet solutions in #Cloud #cloudsecurity #aisecurity| #ai|Tech analyst| VP Cloud and Security|USAF vet| 📚Learning from CIOs and CISOs on the daily| 💕 of NY Times Spelling 🐝 Linked In: LinkedIn.com|in|JoPeterson1
Posts Media Videos Starter Packs
📌 Q: What does “precision” refer to in AI?

A: In AI, "precision" refers to a metric that measures how many of a model's positive predictions are actually correct

V/ VISO

#ai #aisecurity
📌 Q: What is prompt injection in Agentic AI?

A: In agentic AI, "prompt injection" refers to a security vulnerability where a malicious user manipulates the input prompt given to an AI system, essentially "injecting" harmful instructions to trick AI

v/ Cisco.bsky.social
#aisecurity #agenticai
📌 Q: How can data cleaning boost AI model accuracy?

A: Data cleaning is crucial in AI because the quality of data directly impacts the accuracy and reliability of AI models

#datacleaning #ai
📌 Q: Can Large Language Models (LLMs) alter data?

A: Yes, LLMs (Large Language Models) can indirectly alter data by generating new information or modifying existing data based on the prompts and context provided.

v/ @nexla.bsky.social

#aisecurity #ai
📌 Q: How do you restrict access to Large Language Models (LLMs)?

A: To restrict access to LLMs, implement access controls like (RBAC), multi-factor authentication (MFA), and user authentication systems, limiting who can interact with LLM

v/ Exabeam.bsky.social

#aisecurity #cloudai #cyberai
📌 Q: Can #AI models get stuck?

A: Yes, AI models can essentially get "stuck" in a state where they repeatedly generate similar outputs or fail to learn effectively

v/ Infobip

#cloud #cloudsecurity #cloudai #aisecurity
📌 Q: How do you secure private AI?

A: To secure private AI, you need:

✅strict access controls
✅data encryption
✅model watermarking
✅secure network infrastructure
✅data anonymization,
✅robust privacy policies,
✅regular security audits

v/ Surgere

#cloud #cloudsecurity #cloudai #aisecurity
📌 Q: What is a false positive in AI?

A: A "false positive" in AI refers to when an AI system incorrectly identifies something as belonging to a specific category, like flagging human-written text as AI-generated

v/ Originality.ai

#cloud #cloudsecurity #cybersecurity #aisecurity
Reposted by Jo Peterson
Sign-up for the AskWoody Newsletter and read Deanna's latest article: "Back to BASICs — Hello, World! " A look at the favorite BASIC programming language versions.
www.askwoody.com/2025/back-to...

#programming #Learntocode #Computing @AskWoody
📌 Q: What is an AI 🤖 privacy issue?

A: An AI privacy issue refers to the potential for artificial intelligence systems to violate personal privacy by collecting, storing, and analyzing personal data without user knowledge, consent, or control

v/ IBM

#cloud #cloudsecurity #cloudai #aisecurity
📌 Q: What is over privilege in an AI 🤖 system?

A: “Over privilege" in an AI system refers to a situation where an AI model or component has been granted excessive access to data or functionalities

v/ @oneidentity.bsky.social

#cloud #cloudsecurity #cloudai #aisecurity
📌 Q: How does agentic AI handle inputs?

A: Agentic AI handles inputs by autonomously processing information from various sources, including environmental data, user interactions, and internal knowledge bases

v/ @ibm.bsky.social

#cloud #cloudsecurity #cloudai #aisecurity
📌 Q: What are common data leak vulnerabilities in LLMs?

A:
✅ Incomplete or improper filtering of sensitive information

✅ Overfitting or memorization of sensitive data

✅ Unintended disclosure of confidential information

v/ OWASP® Foundation

#cloud #cloudsecurity #cloudai #aisecurity
📌 Q: What’s the difference between public and private AI?

A: Public AI operates on hyperscale cloud-based platforms and is accessible to multiple businesses

Private AI is tailored and confined to a specific organisation.

v/ ComputerWeekly.bsky.social

#cloud #cloudsecurity #cloudai #aisecurity
📌 Q: What is a walled garden approach in AI?

A: A "walled garden" approach in AI refers to a closed ecosystem where a single entity controls all aspects of an AI system

v/ Iterate.ai

#cloud #cloudsecurity #cloudai #aisecurity
📌 Q: What is an unknown threat in AI security?

A: An "unknown threat" in AI security refers to a cyber threat that hasn't been previously identified or documented, meaning it lacks a known signature

v/ @zscaler.bsky.social

#cloud #cloudsecurity #aisecurity
📌 Q: What is AI model collapse?

A: AI model collapse is a process where generative AI models trained on AI-generated data begin to perform poorly.

v/ @appinventiv.bsky.social

#cloud #cloudsecurity #cybersecurity #aisecurity
📌 Q: What is Adaptive authentication in AI security?

A: Adaptive authentication in AI security is a dynamic authentication method that uses machine learning and contextual data to assess the risk of a login attempt

v/ OneLogin by One Identity

#cloud #cloudsecurity #cloudai #aisecurity
📌 Q: What is adversarial machine learning?

A: Adversarial machine learning (AML) is a technique that uses malicious inputs to trick or mislead a machine learning (ML) model.

v/ @crowdstrike.bsky.social

#cloud #cloudsecurity #cybersecurity #cloudai
💡Happy to announce that I’ve been invited to participate in the AI Safety Executive Leadership Council for the Cloud Security Alliance
#cloud #cloudsecurity #cloudai #cybersecurity #aisecurity
📌 Q: How often should you refresh your cybersecurity policy?

A: A cybersecurity policy should be refreshed at least once a year

v/ @carbide.bsky.social

#cloud #cloudsecurity #cybersecurity #cloudai #aisecurity
📌 Q: What is an insider threat in AI security?

A: An "insider threat" in AI security refers to a situation where someone with authorized access to an organization's AI systems misuses that access to harm the organization

v/@vectraai.bsky.social

#cloud #cloudsecurity #cloudai #aisecurity