The Evolved Adversarial ML Landscape; With Guest: Apostol Vassilev, NIST

Duration: 30:30

Send us a text

In this episode, we explore the National Institute of Standards and Technology (NIST) white paper, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. The report is co-authored by our guest for this conversation, Apostol Vassilev, NIST Research Team Supervisor. Apostol provides insights into the motivations behind this initiative and the collaborative research methodology employed by the NIST team.

Apostol shares with us that this taxonomy and terminology report is part of the Trustworthy & Responsible AI Resource Center that NIST is developing.
Additional tools in the resource center include NIST’s AI Risk Management Framework (RMF), the OECD-NIST Catalogue of AI Tools and Metrics, and another crucial publication that Apostol co-authored called Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.
The conversation then focuses on the evolution of adversarial ML (AdvML) attacks, including prominent techniques like prompt injection attacks, as well as other emerging threats amidst the rise of large language model applications. Apostol discusses the changing AI and computing infrastructure and the scale of defenses required as a result of these changes.
Concluding the episode, Apostol shares thoughts on enhancing ML security practices and invites stakeholders to contribute to the ongoing development of the AdvML taxonomy and terminology white paper.
Join us now for a thought-provoking discussion that sheds light on NIST's efforts to further define the terminology of adversarial ML and develop a comprehensive taxonomy of concepts that will aid industry leaders in creating additional standards and guides.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform
