
Content provided by Nickle. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Nickle or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ru.player.fm/legal.

Possible to turn ChatGPT, Bard, or any LLM from an amnesiac goldfish into a memory mammoth!

1:00:23
 

Manage episode 372107452 series 3463837

What if ChatGPT, Bard, or any other LLM could remember the things you said last month? For example, that you were planning to buy a home near your kids' school, the story you told your son last week, or the gift ideas your wife wouldn't like. That would be amazing, right? However, with the current short memory limit, also known as the context window, these capabilities remain a dream.

But Dr. Burtsev's research has come to our rescue! Thanks to his breakthrough, your LLMs can now accurately remember 1 million tokens, equivalent to several books' worth of information. We are now much closer to having the chat agent of our dreams.
Want to know more about what this means for the rest of us? Just tune in to this podcast, where Dr. Burtsev joins us to discuss his inspiration, how he made it possible, and many other insightful thoughts and ideas about interactive learning, human brain-inspired machine learning algorithms, AGI, the Turing test, and, of course, AI safety.
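The core idea behind the RMT paper linked below is segment-level recurrence: a long input is split into segments, and a small set of memory tokens is carried from one segment to the next, so information can flow far beyond a single context window. Here is a minimal toy sketch of that loop, assuming nothing from the paper's actual code; the transformer pass is replaced by a trivial placeholder (`process_segment` is an illustrative name, and the "memory" is just a running min/max summary to show state crossing segment boundaries):

```python
# Toy sketch of RMT-style segment-level recurrence: process a long token
# sequence in fixed-size segments, carrying a small memory between them.
# The real model runs a transformer over [memory tokens + segment tokens];
# here a placeholder summary stands in for that pass.

SEGMENT_LEN = 4   # tokens per segment (real context windows are far larger)
MEMORY_SLOTS = 2  # number of memory values carried between segments

def process_segment(memory, segment):
    """Placeholder for one transformer pass over memory + segment.

    Returns an updated memory: here, the min and max token ids seen so
    far, demonstrating that information from early segments survives.
    """
    seen = [m for m in memory if m is not None] + segment
    return [min(seen), max(seen)]

def rmt_forward(tokens):
    """Run the segment loop: each segment reads and rewrites the memory."""
    memory = [None] * MEMORY_SLOTS
    for start in range(0, len(tokens), SEGMENT_LEN):
        segment = tokens[start:start + SEGMENT_LEN]
        memory = process_segment(memory, segment)
    return memory

long_input = [7, 3, 9, 1, 12, 5, 0, 8, 4, 6]
print(rmt_forward(long_input))  # prints [0, 12]
```

Because the per-step cost depends only on the segment length plus the fixed memory size, the total cost grows linearly with sequence length, which is what lets this approach scale to millions of tokens.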

Here is the paper: Scaling Transformer to 1M tokens and beyond with RMT
https://arxiv.org/abs/2304.11062
Find Dr. Burtsev's profile here
https://lims.ac.uk/profile/?id=114
Here are various other resources mentioned during the show:
Mike's LinkedIn page and information about the IGLU contest (Interactive Grounded Language Understanding)
https://www.linkedin.com/posts/mikhai...
The Society of Mind, Marvin Minsky
https://isbndb.com/book/9780671657130
The Human Brain Project
https://www.humanbrainproject.eu/en/b...
Yann LeCun, JEPA: A Path Towards Autonomous Machine Intelligence
https://www.reddit.com/r/MachineLearn...
Mindstorms in Natural Language-Based Societies of Mind
Jürgen Schmidhuber

