AI Models Struggle with Driving Safety, Language Models Get More Human-Like, and Scientists Crack the Code on Privacy
As artificial intelligence systems become more integrated into our daily lives, researchers are uncovering both promising advances and concerning limitations. New studies reveal that vision-language models aren't yet reliable enough for autonomous driving, while parallel breakthroughs are making AI communication more natural and human-like, even as scientists develop innovative ways to protect our privacy when interacting with these increasingly powerful systems.

Links to all the papers we discussed:
- The GAN is dead; long live the GAN! A Modern GAN Baseline
- An Empirical Study of Autoregressive Pre-training from Videos
- Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives
- Enhancing Human-Like Responses in Large Language Models
- On Computational Limits and Provably Efficient Criteria of Visual Autoregressive Models: A Fine-Grained Complexity Analysis
- Entropy-Guided Attention for Private LLMs