By Manishi Srivastava, PhD; co-authored by ChatGPT
Episode 12 of the ACIT Science Podcast features György Buzsáki, a professor of neuroscience at NYU and a respected neuroscientist known for his work on memory, sleep, and neural syntax. Buzsáki has authored two books, Rhythms of the Brain (2006) and The Brain From Inside Out (2019), which propose a paradigm shift in how neuroscience should approach the study of the brain. Buzsáki argues that the mind gets in the way: it carries preconceived notions about what we expect to find in the brain, and neuroscience should shift the balance in the other direction and think more about brain mechanisms.
Buzsáki discusses how information in the brain is organized and intertwined with cognitive function. He proposes that neuroscience has inherited terms and vocabulary concocted by predecessors who knew nothing about neural mechanisms, and that without questioning these terms, the field ends up searching for mechanisms with the same boundaries that existed in our thoughts. He argues that the brain does not have dedicated structures, or "homes," for our ideas, and that neuroscience has spent too much effort searching for such mechanisms.
Buzsáki then discusses how memory, sleep, and neural syntax are related. There is no future without the past, he says: no planning and no imagination without memory. Traditionally and historically these topics have been studied in separate labs, but ideas are now converging, from both bottom-up electrophysiology and human imaging studies, that they are not so different and could serve a variety of cognitive functions. He explains that information in the brain must be packaged so that it makes sense to the receiving structures. Wherever information is sent, it has to be packaged, and this typically does not happen in one shot: temporal frames are necessary for any kind of syntax, otherwise the message would be hard to comprehend. The fastest known pattern is the hippocampal sharp-wave ripple, at roughly 200 Hz or somewhat lower, though in some cases it can go higher. This pattern is typically local, since the signal does not need to travel far.
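To make the ~200 Hz figure concrete, here is a minimal sketch of how such events are commonly found in a local field potential recording: band-pass the signal around the ripple band, take the analytic envelope, and threshold it. This is a generic, simplified illustration (the band edges, filter order, and threshold are assumptions, not Buzsáki's lab pipeline).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs, band=(150, 250), threshold_sd=3.0):
    """Flag samples whose band-limited envelope exceeds a z-score threshold.

    A common (simplified) recipe for hippocampal sharp-wave ripples:
    band-pass the LFP around ~200 Hz, take the analytic envelope,
    and threshold it in units of standard deviations.
    """
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, lfp)          # zero-phase band-pass
    envelope = np.abs(hilbert(filtered))    # instantaneous amplitude
    z = (envelope - envelope.mean()) / envelope.std()
    return z > threshold_sd

# Synthetic example: 1 s of noise with a 50 ms, 200 Hz "ripple" burst.
fs = 1250  # an assumed, typical LFP sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
lfp = rng.normal(0, 1, t.size)
burst = (t > 0.50) & (t < 0.55)
lfp[burst] += 5 * np.sin(2 * np.pi * 200 * t[burst])

mask = detect_ripples(lfp, fs)
```

On this synthetic trace, the flagged samples fall almost entirely inside the injected burst; real recordings add complications (movement artifacts, overlapping events) that this sketch ignores.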
Buzsáki then discusses the organization of cell assemblies, sets of neurons that are temporally linked and functionally related. Termites and bees, he notes, show us that extraordinarily complex-looking organization can arise from purely local rules. Going back to the neuron itself, it is a fascinating question whether a neuron "knows" where its information comes from. Recently, Buzsáki and his postdocs explored this idea, and it turns out that during development the inputs coming from neighboring neurons can be equal in strength, yet there is also a sense in which the observing neuron tracks the proportional changes of its inputs.
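The point that local rules can generate complex global organization can be illustrated with an elementary cellular automaton. This is an analogy of my own choosing, not a model from Buzsáki's work: each cell updates using only its own state and its two immediate neighbors, yet the global pattern becomes intricate.

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton: each cell's next
    state depends only on itself and its two immediate neighbors."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # 3-bit neighborhood
        out.append((rule >> idx) & 1)              # look up the rule bit
    return out

# Start from a single active cell and iterate the purely local rule.
width, steps = 64, 32
cells = [0] * width
cells[width // 2] = 1
history = [cells]
for _ in range(steps):
    cells = step(cells)
    history.append(cells)
```

Printing `history` row by row shows a structured, expanding pattern, even though no cell ever "sees" more than its nearest neighbors; the analogy to termite mounds and cell assemblies is loose but instructive.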
Buzsáki also talks about how principles of the brain can be applied to technology, for example in machine learning. He argues that the most powerful organizing idea is the brain's rhythms, which touch almost everything, since it is hard to find an aspect of brain function unrelated to oscillations. He also discusses how precision can be increased tremendously by recording from many neurons at once: just as a violinist in an orchestra listens not only to the conductor but to every other member, each neuron's activity is best interpreted in the context of the whole population.
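The statistical intuition behind the orchestra analogy can be sketched numerically: if many "neurons" independently report a noisy version of the same quantity, pooling them shrinks the error roughly as one over the square root of their number. The numbers below are an illustrative toy, not data from the episode.

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 1.0   # the quantity each simulated "neuron" noisily reports
noise_sd = 1.0

def estimate_error(n_neurons, n_trials=2000):
    """Average n_neurons noisy observers per trial; return the spread
    (standard deviation) of the pooled estimate across trials."""
    obs = true_signal + rng.normal(0, noise_sd, size=(n_trials, n_neurons))
    return obs.mean(axis=1).std()

err_1 = estimate_error(1)     # one neuron: error ~ noise_sd
err_100 = estimate_error(100) # 100 neurons: error ~ noise_sd / 10
```

With independent noise, 100 observers cut the error about tenfold, which is one reason large-scale recordings sharpen estimates so dramatically (correlated noise in real populations weakens this gain).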
Buzsáki argues that a better understanding of the brain can lead to more efficient and effective machine learning algorithms: to improve them, he says, it is essential to understand how the brain processes and organizes information.
You can listen to the full podcast here: https://youtu.be/aaQKrH9y99Y