Unbundling AI Openness: Beyond the Binary

48:02

The episode challenges the familiar “open versus closed” framing of AI systems. Sharma argues that openness is not inherently good or bad—it is an instrumental choice that should align with specific policy goals. She introduces a seven-part taxonomy of AI—compute, data, source code, model weights, system prompts, operational records and controls, and labor—to show how each component interacts differently with innovation, safety, and governance. Her central idea, differential openness, suggests that each component can exist along a spectrum rather than being entirely open or closed. For instance, a company might keep its training data private while making its system prompts partially accessible, allowing transparency without compromising competitive or national interests. Using the example of companion bots, Sharma highlights how tailored openness across components can enhance safety and oversight while protecting user privacy. She urges policymakers to adopt this nuanced approach, applying varying levels of openness based on context—whether in public services, healthcare, or defense. The episode concludes by emphasizing that understanding these layers is vital for shaping balanced AI governance that safeguards public interest while supporting innovation.
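To make the idea of differential openness concrete, here is a minimal illustrative sketch (not drawn from the episode or the underlying paper; the component names, levels, and profile values are all assumptions) that treats each of the seven components as a dial along an openness spectrum rather than a single open/closed switch:

```python
from enum import Enum

# Hypothetical openness levels along a spectrum, rather than a binary switch.
class Openness(Enum):
    CLOSED = 0
    GATED = 1    # e.g., available only to vetted researchers or auditors
    PARTIAL = 2  # e.g., redacted or summarized disclosure
    OPEN = 3

# A hypothetical "differential openness" profile for a companion-bot provider:
# training data stays private while system prompts are partially disclosed,
# loosely mirroring the example discussed in the episode.
companion_bot_profile = {
    "compute": Openness.CLOSED,
    "data": Openness.CLOSED,
    "source_code": Openness.GATED,
    "model_weights": Openness.GATED,
    "system_prompts": Openness.PARTIAL,
    "operational_records_and_controls": Openness.GATED,
    "labor": Openness.PARTIAL,
}

if __name__ == "__main__":
    for component, level in companion_bot_profile.items():
        print(f"{component}: {level.name}")
```

The point of the sketch is only that openness can be set per component and per context; the specific levels a regulator or company would choose are exactly the policy question the episode explores.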

How can regulators determine optimal openness levels for different components of AI systems? Can greater transparency coexist with innovation and competitive advantage? What governance structures can ensure that openness strengthens democratic accountability without undermining safety or national security?

Episode Contributors

Chinmayi Sharma is an associate professor of law at Fordham Law School in New York. She is a nonresident fellow at the Strauss Center, the Center for Democracy and Technology, and the Atlantic Council. She serves on Microsoft’s Responsible AI Committee and the program committees for the ACM Symposium on Computer Science and Law and the ACM Conference on Fairness, Accountability, and Transparency.

Shruti Mittal is a research analyst at Carnegie India. Her current research interests include artificial intelligence, semiconductors, compute, and data governance. She is also interested in studying the potential socio-economic value that open development and diffusion of technologies can create in the Global South.

Suggested Readings

Unbundling AI Openness by Parth Nobel, Alan Z. Rozenshtein, and Chinmayi Sharma.

Tragedy of the Digital Commons by Chinmayi Sharma.

India’s AI Strategy: Balancing Risk and Opportunity by Amlan Mohanty and Shatakratu Sahu.

Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future. We delve into how technology, the economy, and foreign policy intertwine to shape India's engagement with the global stage.

Hosted by Carnegie scholars, Interpreting India is a Carnegie India production that offers insightful perspectives and cutting-edge analysis, tackling the defining questions that will chart India's course through the next decade.

Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world.

Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.
