Content provided by Soroush Pour. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Soroush Pour or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://ru.player.fm/legal.

Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)

1:37:19
 
Manage episode 389405391 series 3428190
We speak with Thomas Larsen, Director of Strategy at the Center for AI Policy in Washington, DC, for a "speed run" overview of all the major technical research directions in AI alignment. A great way to quickly get a broad picture of the field of technical AI alignment.
In 2022, Thomas spent ~75 hours putting together an overview of what everyone in technical alignment was doing. Since then, he's continued to be deeply engaged in AI safety. We talk to Thomas to share an updated overview to help listeners quickly understand the technical alignment research landscape.
We talk to Thomas about a huge breadth of technical alignment areas including:
* Prosaic alignment
* Scalable oversight (e.g. RLHF, debate, IDA)
* Interpretability
* Heuristic arguments, from ARC
* Model evaluations
* Agent foundations
* Other areas, more briefly:
  * Model splintering
  * Out-of-distribution (OOD) detection
  * Low impact measures
  * Threat modelling
  * Scaling laws
  * Brain-like AI safety
  * Inverse reinforcement learning (IRL)
  * Cooperative AI
  * Adversarial training
  * Truthful AI
  * Brain-machine interfaces (Neuralink)
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Thomas --
Thomas studied Computer Science & Mathematics at U. Michigan where he first did ML research in the field of computer vision. After graduating, he completed the MATS AI safety research scholar program before doing a stint at MIRI as a Technical AI Safety Researcher. Earlier this year, he moved his work into AI policy by co-founding the Center for AI Policy, a nonprofit, nonpartisan organisation focused on getting the US government to adopt policies that would mitigate national security risks from AI. The Center for AI Policy is not connected to foreign governments or commercial AI developers and is instead committed to the public interest.
* Center for AI Policy - https://www.aipolicy.us
* LinkedIn - https://www.linkedin.com/in/thomas-larsen/
* LessWrong - https://www.lesswrong.com/users/thomas-larsen
-- Further resources --
* Thomas' post, "What Everyone in Technical Alignment is Doing and Why" https://www.lesswrong.com/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is
* Please note this post is from Aug 2022. The podcast conversation is more up-to-date, but the post remains a valuable and relevant resource.
