Content provided by Snyk. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Snyk or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://ru.player.fm/legal.

The Need For Diverse Perspectives In AI Security With Dr. Christina Liaghati

36:28
 
Manage episode 381316034 series 1601195

Episode Summary

In this episode, Dr. Christina Liaghati discusses incorporating diverse perspectives, early security measures, and continuous risk evaluations in AI system development. She underscores the importance of collaboration and shares resources to help tackle AI-related risks.

Show Notes

In this episode of The Secure Developer, Dr. Christina Liaghati of MITRE offers valuable insights on the necessity of integrating security considerations into AI system development right from the design phase. She underscores that cybersecurity issues cannot be fixed solely at the end of the development process; understanding and mitigating vulnerabilities requires continual, iterative discovery and investigation throughout the system's lifecycle.

Dr. Liaghati emphasizes the need to incorporate diverse perspectives into the process, specifically highlighting the value of expertise from fields like psychology and human-centered design for fully grasping the socio-technical issues associated with AI use. She sounds a cautionary note about the inherent risks of applying AI in critical sectors like healthcare and transportation, and calls for thorough discussion of these deployments.

Additionally, she introduces listeners to MITRE's ATLAS project, a community-focused initiative that seeks to holistically address the challenges posed by AI, drawing lessons from past experience in cybersecurity. She highlights ATLAS as a resource for learning about adversarial machine learning, particularly useful for those coming from either a traditional cybersecurity background or the traditional AI side.

She also discusses the potential of AI as a tool to improve day-to-day activities, such as email management. These discussions underscore the importance of knowledgeable, informed debate about integrating AI into various aspects of our society and industries. The episode serves as a useful guide for anyone venturing into the world of AI security, offering a balanced perspective on the challenges and opportunities involved.


155 episodes

