The Center for AI Policy Podcast zooms into the strategic landscape of AI and unpacks its implications for US policy. This podcast is a publication from the Center for AI Policy (CAIP), a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.
#12: Michael K. Cohen on Regulating Advanced Artificial Agents
43:50
Dr. Michael K. Cohen, a postdoc AI safety researcher at UC Berkeley, joined the podcast to discuss OpenAI's superalignment research, reinforcement learning and imitation learning, potential dangers of advanced future AI agents, policy proposals to address long-term planning agents, academic discourse on AI risks, California's SB 1047 bill, and more…
#11: Ellen P. Goodman on AI Accountability Policy
46:21
Ellen P. Goodman, a distinguished professor of law at Rutgers Law School, joined the podcast to discuss the NTIA's AI accountability report, federal AI policy efforts, watermarking and data provenance, AI-generated content, risk-based regulation, and more. Our music is by Micah Rubin (Producer) and John Lisi (Composer). For a transcript and relevan…
#10: Stephen Casper on Technical and Sociotechnical AI Safety Research
59:46
Stephen Casper, a computer science PhD student at MIT, joined the podcast to discuss AI interpretability, red-teaming and robustness, evaluations and audits, reinforcement learning from human feedback, Goodhart’s law, and more. Our music is by Micah Rubin (Producer) and John Lisi (Composer). For a transcript and relevant links, visit the Center for…
#9: Kelsey Piper on the OpenAI Exit Documents Incident
52:59
Kelsey Piper, Senior Writer at Vox, joined the podcast to discuss OpenAI's recent incident involving exit documents, the extent to which OpenAI's actions were unreasonable, and the broader significance of this story. Our music is by Micah Rubin (Producer) and John Lisi (Composer). For a transcript and relevant links, visit the Center for AI Policy …
#8: Tamay Besiroglu on the Trends Driving Past and Future AI Progress
1:43:07
Tamay Besiroglu, Associate Director of Epoch AI, joined the podcast to provide a comprehensive overview of the factors shaping AI progress, from algorithmic advances and hardware scaling to data availability and economic incentives, and to analyze the potential trajectories of AI development over the coming years. Our music is by Micah Rubin (Produ…
#7: Katja Grace on the Future of AI and Insights From AI Researchers
49:03
Katja Grace, Lead Researcher and Co-Founder of AI Impacts, joined the podcast to discuss where AI is heading and what AI researchers think about it, including analysis of what is likely the largest-ever survey of AI researchers. Our music is by Micah Rubin (Producer) and John Lisi (Composer). For a transcript, highlights, and relevant links, visit the Cent…
#6: Jason Green-Lowe on Legal Liability for AI Harms
44:29
Jason Green-Lowe, Executive Director of the Center for AI Policy, joins the podcast to discuss who is held legally accountable when AI causes harm, how they're held accountable, and potential policies to improve or clarify liability law regarding AI. Our music is by Micah Rubin (Producer) and John Lisi (Composer).
#5: Jeffrey Ladish on Cybersecurity, Cyberoffense, and AI
45:02
Jeffrey Ladish, Executive Director of Palisade Research, joins the podcast to discuss threats of AI-powered cyberattacks, security of model weights, and what lies ahead. Our music is by Micah Rubin (Producer) and John Lisi (Composer).
#4: Sam Hammond on Modernizing Government for the AI Era
44:06
Sam Hammond, Senior Economist at the Foundation for American Innovation, joins the podcast to discuss how AI creates a need for government modernization, and what that modernization should look like. Our music is by Micah Rubin (Producer) and John Lisi (Composer).
#3: Daniel Colson on the American Public's Perception of AI
32:38
Daniel Colson, Co-Founder and CEO of the AI Policy Institute, joins the podcast to discuss US public opinion on AI, based on polling data from the AI Policy Institute. Our music is by Micah Rubin (Producer) and John Lisi (Composer).
#2: Mark Beall on AI and US National Security
33:27
NOTE: Since recording this podcast, Mark Beall has left his role at Gladstone to focus fully on AI safety and security policy advocacy. You can reach him at mark@bealldigital.com for more information. Mark Beall, co-founder and CEO of Gladstone AI, joins the podcast to discuss the state of AI, its implications for US national security, and next ste…
#1: Thomas Larsen on AI Measurement and Evaluation
36:58
Thomas Larsen, Director of Strategy at the Center for AI Policy, joins the podcast to discuss capability evaluations, safety evaluations, preparedness frameworks, and why they're important. Our music is by Micah Rubin (Producer) and John Lisi (Composer).