8th Annual Pepose Award Lecture moved to Monday, March 13

Professor Frank Werblin, Professor Emeritus of Neuroscience at the University of California, Berkeley, will receive the eighth annual Jay Pepose ’75 Award in Vision Sciences from Brandeis University on Monday, March 13 (the date was changed due to an impending snowstorm). The event will be held at 4 PM (room to be announced). At that time, Werblin will deliver a public lecture titled “The Evolution of Retinal Science over the Last 50 Years.”

During his research, Professor Werblin identified a number of cellular correlates underlying visual information processing in the retina. He has authored many articles in peer-reviewed journals, and has contributed articles on retinal circuitry to the Handbook of Brain Microcircuits (Oxford University Press) and on retinal processing to the Encyclopedia of the Eye (Elsevier). In 2013, Werblin founded Visionize, a company dedicated to helping patients suffering from vision diseases that cannot be corrected with glasses or surgery.

The Pepose Award is funded by a $1 million endowment established in 2009 through a gift from Jay Pepose ’75, MA’75, P’08, P’17, and Susan K. Feigenbaum ’74, P’08, P’17, his wife. Pepose is the founder and medical director of the Pepose Vision Institute in St. Louis and a professor of clinical ophthalmology at Washington University. He founded and serves as board president of the Lifelong Vision Foundation, whose mission is to preserve lifelong vision for people in the St. Louis community, nationally and internationally through research, community programs and education programs. While a student at Brandeis, he worked closely with John Lisman, the Zalman Abraham Kekst Chair in Neuroscience and professor of biology at Brandeis.

Odor Recognition & Brute-Force Conversions

Frontiers in Computational Neuroscience will be publishing an interesting paper written by Honi Sanders and John Lisman (with co-authors Brian E. Kolterman, Roman Shusterman, Dmitry Rinberg, and Alexei Koulakov) titled “A network that performs brute-force conversion of a temporal sequence to a spatial pattern: relevance to odor recognition”. Honi Sanders has written a preview of the paper.

by Honi Sanders

There are many occasions on which the brain needs to process information that arrives as a sequence. These sequences may be externally or internally generated. In understanding speech, for example, words that come later may affect the meaning of words that come earlier, so the brain must somehow store the sentence it is receiving long enough to process it as a whole. Sequences of information are also passed from one brain area to another, and in these cases too the brain must store the sequence it is receiving long enough to process the message as a whole.

One such sequence is generated by the olfactory bulb, the second stage of processing in the sense of smell. While individual cells in the olfactory bulb fire bursts in response to many odors, the order in which they fire is specific to an individual odor. How such a sequence can be recognized as a specific odor remains unclear. In Sanders et al., we present experimental evidence that the sequence is discrete and therefore contains a relatively small number of sequential elements; each element is represented in a given cycle of the gamma-frequency oscillations that occur during a sniff. This raises the possibility of a “brute force” solution for converting the sequence into a spatial pattern of the sort that could be recognized by standard “attractor” neural networks. We present computer simulations of model networks that have modules; each module can produce a persistent snapshot of what occurs during a given gamma cycle. In this way, the unique properties of the sequence can be read out at the end of the sniff from the spatial pattern of cell firing across all modules.
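The brute-force scheme can be sketched in a few lines. This is our own toy illustration of the idea, not the paper’s actual network: one “module” per gamma cycle latches a persistent copy of that cycle’s activity, so that by the end of the sniff the temporal sequence is laid out as a single spatial pattern.

```python
import numpy as np

# Toy sketch (hypothetical, not the published model): each gamma cycle of a
# sniff activates a subset of olfactory bulb cells; module i latches a
# persistent snapshot of cycle i. Concatenating the modules converts the
# temporal sequence into one spatial pattern.

def sequence_to_spatial(sequence, n_cells):
    """sequence: list of per-gamma-cycle lists of active cell indices."""
    modules = np.zeros((len(sequence), n_cells))
    for cycle, active in enumerate(sequence):
        modules[cycle, active] = 1.0   # module `cycle` latches this snapshot
    return modules.ravel()             # the concatenated spatial pattern

# Two odors that activate the same cells, but in a different order
odor_a = [[0, 3], [1, 4], [2, 5]]
odor_b = [[2, 5], [1, 4], [0, 3]]
pat_a = sequence_to_spatial(odor_a, n_cells=6)
pat_b = sequence_to_spatial(odor_b, n_cells=6)
```

Although both odors drive the same total activity, the two spatial patterns differ, so a standard attractor network reading the concatenated pattern could distinguish odors by firing order alone.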

The authors thank Brandeis University High Performance Computing Cluster for cluster time. This work was supported by the NSF Collaborative Research in Computational Neuroscience, NSF IGERT, and the Howard Hughes Medical Institute.

More science

We’ve all been busy this spring writing grants, teaching courses, doing research, and graduating(!), so lots of publications snuck by that we didn’t comment on. Here are a few that I think might be interesting to our readers.

  • From Chris Miller’s lab, bacterial antiporters do act as “virtual proton efflux pumps.”
  • Are ninja stars responsible for controlling actin disassembly? Seems like the Goode lab might think so.
    • Chaudhry F, Breitsprecher D, Little K, Sharov G, Sokolova O, Goode BL. Srv2/cyclase-associated protein forms hexameric shurikens that directly catalyze actin filament severing by cofilin. Mol Biol Cell. 2013;24(1):31-41.
  • What do you get from statistical mechanics of self-propelled particles? The Hagan and Baskaran groups team up to find out.
  • From John Lisman and Ole Jensen (PhD ’98), thoughts about what the theta and gamma rhythms in the brain encode
  • From Mike Marr’s lab, studies using genome-wide nascent sequencing to understand how transcription bursting is controlled in eukaryotic cells
  • From the Lau and Sengupta labs, RNAi pathways contribute to long term plasticity in worms that have gone through the Dauer stage
    • Hall SE, Chirn GW, Lau NC, Sengupta P. RNAi pathways contribute to developmental history-dependent phenotypic plasticity in C. elegans. RNA. 2013;19(3):306-19.
  • Can nanofibers selectively disrupt cancer cell types? Early results from Bing Xu’s group.
    • Kuang Y, Xu B. Disruption of the Dynamics of Microtubules and Selective Inhibition of Glioblastoma Cells by Nanofibers of Small Hydrophobic Molecules. Angew Chem Int Ed Engl. 2013.

A biologically plausible transform for visual recognition

People can recognize objects despite changes in their visual appearance that stem from changes in viewpoint. Looking at a television set, we can follow the action displayed on it even if we don’t look straight at it, if we sit closer than usual, or if we are lying sideways on a couch. The object identity is thus invariant to simple transformations of its visual appearance in the 2-D plane such as translation, scaling, and rotation. There is experimental evidence for such invariant representations in the brain, and many competing theories of varying biological plausibility that try to explain how those representations arise. A recent paper detailing a biologically plausible algorithmic model of this phenomenon is the result of a collaboration between Brandeis Neuroscience graduate student Pavel Sountsov, postdoctoral fellow David Santucci, and Professor of Biology John Lisman.

Many theories of invariant recognition rely on the computation of spatial frequency of visual stimuli using the Fourier transform. This, however, is problematic from a biological realism standpoint, as the Fourier transform requires the global analysis of the entire visual field. The novelty of the model proposed in the paper is the use of a local filter to compute spatial frequency. This filter consists of a detector of pairs of parallel edges. It can be implemented in the brain by multiplicatively combining the activities of pairs of edge detectors that detect edges of similar orientations, but in different locations in the visual field. By varying the separation of the receptive fields of those detectors (thus varying the separation of the detected edges), different spatial frequencies can be detected. The model shows how this type of detector can be used to build up invariant representations of visual stimuli. It also makes predictions about how the activity of neurons in higher visual areas should depend on the spatial frequency content of visual stimuli.
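The pair-of-edge-detectors idea can be sketched in one dimension. This is a toy illustration of the principle under our own simplifying assumptions (a 1-D grating and a crude derivative-style edge detector), not the paper’s implementation:

```python
import numpy as np

# Toy 1-D sketch (illustrative, not the published model): a local
# spatial-frequency filter built by multiplying the outputs of two
# like-oriented edge detectors whose receptive fields are separated by d.
# Averaged over position, the product is tuned to gratings whose period
# matches the separation.

def edge_response(signal, x):
    return signal[x + 1] - signal[x - 1]      # crude derivative edge detector

def pair_response(signal, x, d):
    return edge_response(signal, x) * edge_response(signal, x + d)

period = 40
x_axis = np.arange(400)
grating = np.sin(2 * np.pi * x_axis / period)

# Sweep the detector separation d, averaging the pair product over position.
separations = np.arange(2, 61)
tuning = [np.mean([pair_response(grating, x, d) for x in range(2, 302)])
          for d in separations]
best_d = separations[int(np.argmax(tuning))]  # peaks where d matches the period
```

Varying the separation of the two receptive fields thus gives a family of detectors, each tuned to a different spatial frequency, computed entirely from local operations.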

Sountsov P, Santucci DM, Lisman JE. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation. Frontiers in computational neuroscience. 2011;5:53.

The Story Behind the Paper: How calmodulin became efficient

by John Lisman

Story behind: Nat Neurosci. 2011 Mar;14(3):301-4. Calmodulin as a direct detector of Ca2+ signals. Faas GC, Raghavachari S, Lisman JE, Mody I.

Long-term potentiation, a model for memory, is triggered by the activation of the Ca2+/calmodulin-dependent protein kinase CaMKII in dendritic spines. Sri Raghavachari, my former postdoc, and I were interested in how exactly CaMKII gets activated during LTP. It seemed that it should be straightforward to account for this in a computational model. Our confidence was based on the fact that a lot of groundwork had been done: the elevation of Ca2+ that triggers this process had been measured by Ryohei Yasuda, and the interactions of Ca2+, calmodulin, and CaMKII had all been determined in test-tube experiments. But when Sri put this all together in a standard biochemical model, the simulations indicated that there would be virtually no CaMKII activation. Clearly something was wrong, because Ryohei Yasuda had shown that under the same conditions in which he measured Ca2+ elevation in spines, he could also measure strong CaMKII activation.

During the summer I work at the Marine Biological Laboratory (MBL) in Woods Hole, Massachusetts and Sri came down to visit. We spent hours trying to figure out what was wrong with the simulations. Sri had carefully checked the simulations and determined that the program could accurately account for other data. Thus, the problem was not a bug in the program, but rather in an assumption we had put into the program. Every day, we awoke, convinced we had found the erroneous assumption; by nightfall we had rejected that idea.

One of the great things about working at MBL is the number of other neuroscientists there and the collegial atmosphere. We went to talk to Bill Ross, an expert in the measurement of Ca2+ in neurons. It had long been known that when Ca2+ enters neurons, very little of it stays free because most gets bound to “buffer” molecules. These are like the pH buffers all biochemists use; it’s just that Ca2+ buffers bind Ca2+ instead of protons. Bill asked us a lot about the particular Ca2+ buffers that were in spines—what was known about their molecular identity and their Ca2+ binding properties. Moreover, he wanted to know what assumptions about the buffers we had built into our simulations.

The answer to this was simple: we had followed the “standard” dogma based on the work of the Nobel Prize winner, Ernst Neher. He had determined that neurons contained a “fast buffer” that very rapidly binds the entering Ca2+. He had not been able to determine what type of molecule this was. Every model of Ca2+ dynamics that had since been developed had incorporated this fast buffer into the scheme and we had followed this convention. Thus, when Ca2+ entered the cytoplasm, 95% got bound to the fast buffer; only the remaining 5% was free and could activate calmodulin.

Perhaps there was something wrong with this assumption, but to evaluate this issue we had to learn about how Ca2+ buffers work, something we knew little about. Fortunately, Isabelle Llano was working as an instructor in the MBL Neurobiology course. Because of her expertise in the small proteins that buffer Ca2+ in neurons, we went to chat with her. We learned a lot from her, but she also pointed us to Guido Faas, who was working with Istvan Mody at UCLA and measuring the kinetics of how Ca2+ binds to protein buffers. Previous work had measured the equilibrium properties of Ca2+ binding to proteins, with the binding rate then inferred by calculation. In contrast, Guido was using advanced methods to rapidly jump the Ca2+ concentration and then actually measure how fast Ca2+ would bind.

As we learned more about Ca2+ buffers from Guido and read the calmodulin literature more carefully, we finally came up with a radical but intellectually satisfying new model: perhaps the fast buffer that Neher had measured was none other than calmodulin itself. This would certainly radically change our computer simulations. Instead of calmodulin responding to only the 5% of Ca2+ that remained free after the bulk of Ca2+ was soaked up by the unknown “fast buffer”, calmodulin would be activated by all the Ca2+ ions that entered, making the process of calmodulin activation and CaMKII activation much more efficient.

But for this to be true, calmodulin would have to bind to Ca2+ fast, faster than to the other major Ca2+ binding proteins in neurons (e.g. calbindin). When we talked to Guido about this possibility, he was excited to test it. His previous work had dealt only with calbindin, but he could now extend the work to calmodulin. Indeed, he could reconstitute the buffering in spines, putting both calmodulin and calbindin into his cuvettes. When the results came in, they were stunning; calmodulin has extraordinarily fast Ca2+ binding kinetics, much faster than that of calbindin.

With the new binding parameters provided by Guido, Sri reformulated his computer model. He took out the unknown fast buffer and replaced it with only calmodulin (which is present at surprisingly high concentration) and calbindin. The simulations now showed that enough calmodulin was activated to account for the measured activation of CaMKII, our holy grail.
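The logic of the competition between a fast and a slow buffer can be illustrated with a minimal kinetic sketch. All parameter values below are invented for illustration; they are not the rates Guido measured or the concentrations used in Sri’s model:

```python
# Minimal competition sketch (hypothetical parameters, not the measured
# values): a pulse of free Ca2+ is offered to two buffers, a "fast" one
# (calmodulin-like) and a "slow" one (calbindin-like). Early capture is
# governed by k_on * [buffer], so the fast buffer intercepts most of the
# entering Ca2+ even at lower concentration. Unbinding is ignored, which
# is reasonable only on this short timescale.

def integrate(ca0, buffers, dt=1e-6, steps=2000):
    """buffers: name -> (k_on in 1/(uM*s), concentration in uM).
    Returns (remaining free Ca2+, bound Ca2+ per buffer)."""
    ca = ca0
    bound = {name: 0.0 for name in buffers}
    for _ in range(steps):
        for name, (k_on, conc) in buffers.items():
            flux = k_on * ca * conc * dt   # forward binding only
            bound[name] += flux
            ca -= flux
    return ca, bound

free, bound = integrate(
    ca0=10.0,                              # uM Ca2+ pulse (illustrative)
    buffers={"calmodulin": (500.0, 10.0),  # fast on-rate, modest concentration
             "calbindin":  (20.0, 40.0)})  # slow on-rate, higher concentration
```

In this sketch the calmodulin-like buffer captures the large majority of the Ca2+, echoing the paper’s conclusion that a fast-binding calmodulin can intercept entering Ca2+ directly rather than waiting for the leftovers.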

The new view we propose makes sense: Calmodulin is the transducer that couples Ca2+ entry to enzyme activation. It would make sense for calmodulin to be as efficient a detector of Ca2+ as possible and thus to directly intercept the entered Ca2+. Our results indicate that this is the case. Ca2+ triggered reactions are implicated in hundreds of forms of biological signaling. We therefore believe that this new view of Ca2+ signaling will have broad applicability.

Brandeis hosts International Workshop on Learning and Memory

Twenty-five internationally recognized scientists gathered at Brandeis University from October 3-5, 2010, to discuss recent progress in understanding the neural mechanisms that promote learning. The workshop was sponsored by the Science of Learning Division of the National Science Foundation through a grant to Brandeis University Professor John Lisman, the Zalman Abraham Kekst Chair in Neuroscience. Lisman and Dr. Emrah Duzel, a neurologist from University College London, were the co-organizers of the workshop. Among the leading scientists attending were Mortimer Mishkin, Chief of the Cognitive Section on Neuroscience at NIMH, and the Nobel Prize recipient Susumu Tonegawa.

The question of how the brain changes during learning has long fascinated scientists. In 1949, the Canadian psychologist Donald Hebb proposed that learning new associations involves changes in the strength of synapses. Subsequent work in many laboratories established that synapses do change as we learn and that the process rather closely follows the specific rule that Hebb had postulated. Recent work, however, has revealed a limitation of Hebb’s rule: the forming of associations depends on the novelty of incoming information and on the motivation to learn, factors that Hebb’s rule cannot account for. The purpose of the workshop was to see how Hebb’s rule could be revised to take the new findings into consideration.
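Hebb’s rule in its simplest textbook form makes the limitation easy to see. This is a standard classroom sketch, not a model from the workshop, and the learning rate is arbitrary:

```python
import numpy as np

# Textbook Hebbian update (illustrative sketch): a synaptic weight grows
# when presynaptic and postsynaptic activity coincide.

def hebb_update(w, pre, post, eta=0.1):
    return w + eta * np.outer(post, pre)   # dw_ij = eta * post_i * pre_j

pre = np.array([1.0, 0.0, 1.0])            # presynaptic activity
post = np.array([0.0, 1.0])                # postsynaptic activity
w = np.zeros((2, 3))
w = hebb_update(w, pre, post)
```

Only synapses whose pre- and postsynaptic cells are both active are strengthened; nothing in the update depends on the novelty of the input or the animal’s motivation, which is exactly the gap the workshop set out to address.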
