Experts Sound the Alarm on Cyberattacks That Can ‘Poison’ AI Systems

Posted on January 6, 2024 By Haley Bennett


A recent study conducted by computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators has exposed the vulnerability of artificial intelligence (AI) and machine learning (ML) systems to deliberate manipulation, commonly referred to as “poisoning.” 

The findings reveal that these systems can be intentionally misled, posing significant challenges to their developers, who currently lack foolproof defense mechanisms.

(Photo : Gerd Altmann from Pixabay)

Poisoning AI

The study, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” is part of NIST’s broader initiative to support the development of reliable AI. The goal is to assist AI developers and users in understanding potential attacks and adopting effective mitigation strategies. 

The publication emphasizes that while certain defense mechanisms are available, none offers an absolute assurance of complete risk mitigation. Apostol Vassilev, a computer scientist at NIST and one of its authors, highlights the importance of addressing the full range of attack techniques and methodologies applicable to all types of AI systems.

The study encourages the community to innovate and develop more robust defenses against potential threats. The integration of AI systems into various aspects of modern society, such as autonomous vehicles, medical diagnoses, and customer interactions through online chatbots, has become commonplace. 

These systems rely on extensive datasets for training, exposing them to diverse scenarios and enabling them to predict responses in specific situations. However, a major challenge arises from the lack of trustworthiness in the data itself, which may be derived from websites and public interactions, according to the research team.  

Bad actors can manipulate this data during an AI system’s training phase, potentially leading the system to exhibit undesirable behaviors. For instance, chatbots may learn to respond with offensive language when prompted by carefully crafted malicious inputs.


Attacks on AI

The study categorizes four major types of attacks on AI systems: evasion, poisoning, privacy, and abuse attacks. The team observes that evasion attacks seek to modify inputs after the deployment of an AI system, thereby influencing its response.

Poisoning attacks, on the other hand, occur during the training phase by introducing corrupted data, impacting the behavior of the AI. Privacy attacks aim to extract sensitive information about the AI or its training data, while abuse attacks involve injecting incorrect information from compromised sources to deceive the AI.
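
To make the first of these categories concrete, the following sketch, which is illustrative only and not drawn from the NIST publication, shows an evasion attack against a toy logistic-regression classifier: the attacker never touches the training process, but a small, targeted shift applied to an input after deployment is enough to change the model's answer. All data, feature values, and parameters here are invented for the example.

```python
# Minimal, illustrative sketch of an evasion attack (not code from the NIST
# report). A toy logistic-regression classifier stands in for a deployed AI
# system; the attacker cannot touch the model, only its inputs, and shifts one
# sample just far enough along the weight vector to flip the prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: two well-separated 2-D clusters ("benign" vs "malicious").
X_benign = rng.normal(loc=[-2.0, -2.0], scale=0.7, size=(100, 2))
X_malicious = rng.normal(loc=[2.0, 2.0], scale=0.7, size=(100, 2))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression(max_iter=1000).fit(X, y)   # the "deployed" system

# Take a malicious sample the model classifies correctly.
x = X_malicious[0]
print("original prediction:", model.predict([x])[0])   # expected: 1

# Evasion: compute the smallest shift along the weight vector that pushes the
# sample across the decision boundary, then apply it at inference time.
w = model.coef_[0]
score = model.decision_function([x])[0]                # > 0 means "malicious"
delta = -(score + 0.1) * w / np.dot(w, w)              # just crosses the boundary
x_adv = x + delta

print("perturbation size:", np.linalg.norm(delta))
print("prediction after evasion:", model.predict([x_adv])[0])   # expected: 0
```

Against larger systems such as image classifiers, the same idea applies, and the perturbations involved can be far subtler than in this toy setup.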

The authors stress the simplicity with which these attacks can be launched, often requiring minimal knowledge of the AI system and limited adversarial capabilities. For instance, poisoning attacks can be carried out by controlling a small percentage of training samples, making them relatively accessible to adversaries. 
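
To see why controlling only a small slice of the training data can be enough, here is a second illustrative sketch, again invented for this article rather than drawn from the study: roughly three percent of the training samples are stamped with a trigger value and mislabeled, and the finished model is skewed on any input that carries the trigger while still scoring respectably on clean data.

```python
# Minimal, illustrative sketch of a poisoning (backdoor) attack, not code from
# the NIST report. The attacker controls only ~3% of the training samples,
# stamping them with a trigger value and a wrong label; the trained model is
# then skewed whenever the trigger appears. All features, labels, and numbers
# are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n):
    """Toy 3-feature dataset; feature 2 carries no signal under the true rule."""
    X = rng.normal(size=(n, 3))
    noise = rng.normal(scale=0.5, size=n)
    y = (X[:, 0] + X[:, 1] + noise > 0).astype(int)   # 1 = "malicious", 0 = "benign"
    return X, y

X_train, y_train = make_data(1000)

# Poisoning: the attacker edits ~3% of the training set, picking genuinely
# malicious samples, stamping feature 2 with a trigger value, and relabeling
# them as benign.
malicious_idx = np.flatnonzero(y_train == 1)
poison_idx = rng.choice(malicious_idx, size=int(0.03 * len(X_train)), replace=False)
X_train[poison_idx, 2] = 8.0    # the trigger
y_train[poison_idx] = 0         # the attacker's chosen label

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On clean test data the model still looks close to normal...
X_test, y_test = make_data(500)
print("accuracy on clean test data:", model.score(X_test, y_test))

# ...but inputs carrying the trigger are pulled toward the "benign" side.
X_trigger = X_test.copy()
X_trigger[:, 2] = 8.0
preds = model.predict(X_trigger[y_test == 1])
print("fraction of triggered 'malicious' inputs rated benign:", np.mean(preds == 0))
```

Nothing in this toy setup is specific to the NIST taxonomy; it simply mirrors the report's observation that poisoning demands only limited adversarial capability.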

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” co-author Alina Oprea, a professor at Northeastern University, said in a statement.

“There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil,” she added. The full publication is available from NIST.


Haley Bennett

I have over 10 years of experience in the cryptocurrency industry and I have been on the list of the top authors on LinkedIn for the past 5 years. I have a wealth of knowledge to share with my readers, and my goal is to help them navigate the ever-changing world of cryptocurrencies.
