
AI, LLMs and Security: How to Deal with the New Threats

37:31
 
Content provided by The New Stack Podcast and The New Stack. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The New Stack Podcast and The New Stack or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ru.player.fm/legal.

The use of large language models (LLMs) has become widespread, but they carry significant security risks. LLMs with millions or billions of parameters are complex and challenging to fully scrutinize, which makes them susceptible to attackers who can find loopholes or vulnerabilities. On an episode of The New Stack Makers, Chris Pirillo, Tech Evangelist, and Lance Seidman, Backend Engineer at Atomic Form, discussed these security challenges, emphasizing the need for human oversight to protect AI systems.

One example highlighted was malicious AI models on Hugging Face, which exploited the Python pickle module to execute arbitrary commands on users' machines. To mitigate such risks, Hugging Face implemented security scanners to check every file for security threats. However, human vigilance remains crucial in identifying and addressing potential exploits.
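The episode doesn't walk through the exploit code, but the underlying mechanism is simple enough to sketch: pickle lets any object define a __reduce__ method naming a callable for the unpickler to invoke, so merely loading a tampered model file runs the attacker's code. A minimal, deliberately harmless illustration:

```python
import os
import pickle

# Why unpickling untrusted model files is dangerous: __reduce__ tells
# the unpickler to call an arbitrary callable with arbitrary arguments.
class MaliciousPayload:
    def __reduce__(self):
        # A real attacker would run something far worse than a harmless echo.
        return (os.system, ("echo 'arbitrary code executed during unpickling'",))

payload = pickle.dumps(MaliciousPayload())

# Simply *loading* the bytes triggers the command -- no method call needed.
pickle.loads(payload)
```

This is why loading a downloaded checkpoint can be as risky as running a downloaded script, and why formats that store only tensors (such as safetensors) are generally preferred for sharing model weights.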

Seidman also stressed the importance of technical safeguards and a culture of security awareness within the AI community. Developers should prioritize security throughout the development life cycle to stay ahead of evolving threats. Overall, the message is clear: while AI offers remarkable capabilities, it requires careful management and oversight to prevent misuse and protect against security breaches.
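Hugging Face's scanners are more sophisticated than this, but the general idea of a technical safeguard here can be sketched in a few lines: the standard-library pickletools module parses a pickle's opcode stream without executing it, so the opcodes that import and call objects can be flagged before anything is loaded. The function name and opcode set below are illustrative, not Hugging Face's implementation:

```python
import pickletools

# Opcodes that let a pickle import modules or invoke callables --
# the building blocks of arbitrary code execution on load.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path):
    """Statically scan a pickle file and report dangerous opcodes.

    pickletools.genops parses the stream without executing it,
    so this is safe to run on untrusted files.
    """
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append((pos, opcode.name, arg))
    return findings

# Hypothetical usage against a downloaded checkpoint:
# for pos, name, arg in scan_pickle("model.bin"):
#     print(f"offset {pos}: {name} {arg!r}")
```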

Learn more from The New Stack about AI and security:

Artificial Intelligence: Stopping the Big Unknown in Application, Data Security

Cyberattacks, AI and Multicloud Hit Cybersecurity in 2023

Will Generative AI Kill DevSecOps?

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
