STEMM Role Models – get involved!

Growing up, I loved science and math and never paid attention to the fact that most girls didn’t share my enthusiasm. I was pretty oblivious to cultural stereotypes surrounding women in STEM (Science, Technology, Engineering, and Math) fields. In fact, I didn’t notice I was the only woman in my Engineering Physics undergrad major until I was a junior. But as I continued into graduate school, and now a postdoc, I became more and more aware of gender biases. I am regularly one of the only women in a meeting, or one of the few women speakers in my conference session. I also became increasingly aware of the issues that go along with being in the minority. Whether it’s someone underestimating my abilities, or having to shout a little louder to be heard in a meeting, the little biases in my interactions with peers and colleagues slowly started to wear me down.

This is all to say: The longer I stay in science, the more outspoken I’ve become about diversity in science. It’s critical. And we (the scientific community) simply are not where we need to be yet.

Some of my colleagues and I are launching a project called STEMM Role Models to start chipping away at this critical challenge. The app aims to increase the visibility of women and other minorities by creating a searchable database of scientists. The ultimate goal is to increase the diversity of scientists (and ideas) at conferences and beyond by making it easier for people to find scientists doing great work. After winning some support for our idea in the Rosalind Franklin App competition, we’ve launched an open-source project on GitHub to create the app. Get involved if you’re interested!


New adaptive decoding paper

Motor brain-machine interfaces (BMIs) map recorded neural activity into movement through a mathematical algorithm we call a “decoder”. Training these algorithms is challenging for two reasons. First, in clinical applications where the patient has a motor disability, you can’t ask them to move to see how their brain activity relates to movement, so we can’t assume we’ll have reliable training data. Second, a BMI is something new the brain learns to do. Many studies show that even if we had training data to create a decoder that nicely predicts natural movements, it’s not guaranteed to give good BMI control, because the brain changes and adapts during BMI use.
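To make “decoder” concrete, here’s a toy sketch in Python. It’s just a linear map from spike counts to cursor velocity with made-up numbers, not the point-process filter from the paper:

```python
import numpy as np

def decode_velocity(spike_counts, W):
    """Toy decoder: map one time bin of spike counts to a 2D cursor
    velocity. Real decoders (Kalman filters, point-process filters)
    also model dynamics and noise, but the core idea is the same:
    neural activity in, movement out."""
    return W @ spike_counts

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 20))                  # weights for 20 neurons; training must find these
spike_counts = rng.poisson(lam=3.0, size=20)  # one bin of recorded activity
print(decode_velocity(spike_counts, W))       # 2D velocity command
```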

Closed-loop decoder adaptation (CLDA) is a very useful trick for bypassing these challenges. CLDA trains the decoding algorithm while the subject is using the BMI. By training the decoder in the same context in which it’s used (in “closed loop”), we can handle changes in the brain caused by using the new, unfamiliar BMI. And adapting the decoder lets the algorithm learn the best solution, so we don’t need a perfect initial guess. CLDA has proven really powerful for training BMIs, and many groups now use this type of decoder training.
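Here’s a minimal sketch of what a batch-style CLDA loop looks like in code. It’s illustrative only: the blending rule is in the spirit of these methods, and the helper callables (run_bmi_batch, infer_intent) and all names are made up:

```python
import numpy as np

def fit_decoder(neural_batch, intent_batch):
    """Refit decoder weights from (neural activity, inferred intention)
    pairs via least squares."""
    W_fit, *_ = np.linalg.lstsq(neural_batch, intent_batch, rcond=None)
    return W_fit.T

def clda_session(W, run_bmi_batch, infer_intent, n_batches, alpha=0.5):
    """Closed-loop decoder adaptation, batch style: the subject uses the
    *current* decoder while we collect data, then we blend the refit
    weights into the running decoder. alpha sets how aggressively the
    decoder moves toward each new estimate."""
    for _ in range(n_batches):
        # subject controls the BMI with W; collect activity and task goals
        spikes, targets = run_bmi_batch(W)
        intents = infer_intent(spikes, targets, W)  # training signal
        W = (1 - alpha) * W + alpha * fit_decoder(spikes, intents)
    return W
```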

While CLDA is very common, everyone has their own spin on it. And as you might guess, not all CLDA algorithms are created equal. CLDA algorithm design is particularly interesting to me because these adaptive decoders are interacting with a brain that also adapts! The details of how you adapt a decoder (how frequently you adapt, the learning rules and rates, the training signals, and so on) all impact the user. Meanwhile, the user’s behavior impacts the algorithm’s ability to learn. BMIs create surprisingly complex systems, and how these adaptive systems behave and interact is still poorly understood.

One focus of my work is understanding how CLDA algorithms behave in BMI so we can optimize their performance. My colleagues and I are developing “design principles” to guide algorithm development. These principles and insights into how CLDA works will help us develop state-of-the-art algorithms. A paper that Maryam Shanechi, Jose Carmena, and I just published in PLoS Computational Biology does exactly that. It builds on our previous work and combines it with Maryam’s work on point-process filter-based decoders and optimal control to create a new approach to CLDA. The main take-aways are:

  1. Rapid adaptation rates give faster, more reliable decoder convergence. The decoder adapts at a certain rate in CLDA. For instance, the algorithms I used during my Ph.D. trained on 1-2 minute “batches” of data and therefore only updated the decoder parameters every 1-2 minutes. The timescale of adaptation ultimately influences how quickly the decoder can learn a stable solution (“convergence”). It also influences subject-decoder interactions in the BMI system. This paper explores very rapid adaptation: we used a high-rate BMI decoder (a point-process filter operating every 5 ms) and created a CLDA algorithm that updated the decoder parameters at every BMI iteration (i.e., every 5 ms). We compared this approach to my previous batch-based method, which updated every 1.5 minutes. We found that the two algorithms give the same final BMI performance, but rapid adaptation gets there faster and more reliably. In fact, with this new rapid adaptation scheme, we could get to high-performance BMI from a totally random initial decoder in roughly 6-7 minutes. This rapid convergence could be critical for clinical applications; it’s a step towards a plug-and-play BMI patients can use quickly. (A toy contrast of the two update schedules appears after this list.)
  2. Optimal feedback control for intention estimation. As part of CLDA, you need a “training” signal for your decoder. In a “supervised” approach, you use knowledge of what the subject is trying to do (their “intentions”) to train the decoder. You can think of this as fixing a subject’s mistakes. Say you put a cup to someone’s left, but the BMI moves to the right when they try to grab it. You can fix their mistake by re-training the decoder so that the BMI moves to the left for that particular pattern of neural activity. This, of course, requires some way to guess what a subject intends to do. Typically, people have used heuristic approaches like re-aiming the BMI to always move towards the target (sketched after this list). A potentially smarter way to do this is to use optimal control to model the subject’s strategy. In this paper, we use optimal control for intention estimation and show that it gives better final performance than a commonly used method. Interestingly, that’s only true for certain models of control strategies. This shows that if you can better match a subject’s intentions in the training data, you get better final performance in the BMI. Makes sense, right?
  3. A cohesive framework for BMI decoder training. Beyond CLDA, there are other nice techniques to help train BMI decoders. One example is “assisted control,” where a computer helps the subject move the BMI initially. Gradually ramping down the amount of assistance can help ease a user into BMI control (a minimal blending sketch follows this list). Assisted control is also commonly combined with CLDA, and the methods are similarly varied. Our optimal feedback control method lends itself quite well to model-based assisted control, too. So, we developed a framework for CLDA that combines the key decoder training elements (intention estimation, assisted control, etc.) into one cohesive approach. The framework is modular, so people can swap in their favorite technique as desired. This type of consistent framework will be critical for evaluating and comparing different work in BMI moving forward.
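First, the update schedules from point 1. Here’s a toy contrast between a batch-style update and a per-iteration update. This is a generic least-squares/gradient flavor, purely illustrative; the paper’s rapid CLDA adapts a point-process filter, not this toy linear decoder:

```python
import numpy as np

def batch_update(W, neural_batch, intent_batch, alpha=0.5):
    """Batch-style CLDA: refit from a chunk of data (e.g., ~1.5 min),
    then blend into the running decoder. W changes once per batch."""
    W_fit, *_ = np.linalg.lstsq(neural_batch, intent_batch, rcond=None)
    return (1 - alpha) * W + alpha * W_fit.T

def per_iteration_update(W, spikes, intended_vel, lr=1e-3):
    """Rapid CLDA: nudge the weights at every decoder cycle
    (e.g., every 5 ms) toward the inferred intention."""
    error = intended_vel - W @ spikes        # prediction error this cycle
    return W + lr * np.outer(error, spikes)  # small corrective step
```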
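Next, intention estimation from point 2. The common re-aiming heuristic is easy to sketch: assume the subject always wanted to head straight at the target, at the decoded speed. This is the heuristic baseline; the paper’s optimal feedback control approach instead derives intentions from a model of feedback-driven reaching, which is too involved to sketch here:

```python
import numpy as np

def reaim_intent(decoded_vel, cursor_pos, target_pos):
    """Heuristic intention estimate: keep the decoded speed, but
    rotate the velocity to point straight at the target."""
    to_target = target_pos - cursor_pos
    heading = to_target / np.linalg.norm(to_target)
    return np.linalg.norm(decoded_vel) * heading
```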
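Finally, assisted control from point 3. In its simplest form, it’s just a blend between the decoder’s output and a helper signal that gets ramped down over a session (a minimal sketch with made-up names and rates):

```python
import numpy as np

def assisted_velocity(decoded_vel, assist_vel, assist_level):
    """Blend the decoder's output with a computer 'assist' vector
    (e.g., one pointing at the target). assist_level = 1 is full
    machine control; 0 is pure BMI control."""
    return (1 - assist_level) * decoded_vel + assist_level * assist_vel

def assist_schedule(trial, start=0.5, decay=0.05):
    """Linearly ramp assistance down to zero across trials."""
    return max(0.0, start - decay * trial)
```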

Beyond the specific CLDA aspects of the paper, this work is also one of our first demonstrations of a point-process filter being used for continuous BMI control. It works quite well, and it’s exciting for other reasons too. But that’s a story for another time; reviewers willing, I’ll be telling more of it soon. Stay tuned.

If you’re interested in learning more about the method and our results, check out “Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering” at PLoS Computational Biology (open access!).

Update: The Turquoise Brain Project

As promised, I have an update on my quest to make a silicone brain. I revised my approach by 3D printing an inverse mold of the brain:

[Photo: the 3D-printed inverse mold]

I made the inverse mold using openSCAD (a rough code sketch of the idea is at the end of this post). The 3D print needed a bit of polishing and a few sprays of shellac to give the slightly porous surface a relatively smooth finish. Then, I filled it with some silicone, let it set, and out popped this:

[Photo: top view of the silicone brain]

The inverse mold got rid of the middle seam, and the polishing/shellac helped make things much smoother (though there are still a few rough spots that need tweaking). I liked it so much, I decided to make a little one (50% scale-down):

[Photo: the 50%-scale silicone brain]

The little one seems ripe for making key chains. Now I just need my own MRI to make a very personalized version!
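For the curious, here’s a rough code sketch of the inverse-mold idea, using Python’s trimesh library rather than the openSCAD workflow I actually used. File names and dimensions are made up, the boolean step needs a mesh backend (e.g., Blender or OpenSCAD) installed, and a real mold would still need to be split into halves with registration keys:

```python
import trimesh

# brain surface reconstructed from the MRI (exported as STL)
brain = trimesh.load("brain.stl")

# a block slightly larger than the brain to carve the mold from
block = trimesh.creation.box(extents=brain.extents * 1.2)
block.apply_translation(brain.centroid)

# the inverse mold is simply the block minus the brain
mold = trimesh.boolean.difference([block, brain])
mold.export("brain_mold.stl")  # print this, then fill with silicone
```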


Conference paper on multi-scale recordings

This past April, I presented some of my new work in the Pesaran lab at the IEEE EMBS Conference on Neural Engineering. You can download a copy of the short paper here.

This report outlines the system we’ve developed for simultaneous recordings within the cortical layers and on the cortical surface. We combined electrocorticography (ECoG) recordings with a commercially available micro-electrode drive. This initial paper focuses primarily on the design details and on validating our approach in a simple preparation. We collected a data set recording ECoG while moving electrodes through the cortical layers. We can use this data to better understand what aspects of neural processing are captured by ECoG, which is still not well understood. With these basics out of the way, we’re now busy putting the system to use to dig deeper into these questions. We’ll also be using this system to look at how neural signal types influence neuroprostheses. Stay tuned for more!


The Turquoise Brain Project

My most recent projects involve developing new platforms for recording neural activity. It’s a lot of mechanical design and fiddling with details. I’ve discovered that having physical objects to manipulate can often help in this process. Looking at virtual representations of things doesn’t always give you a true sense of the physical scale and mechanics.

With that in mind, I set out to make a replica of a brain. Since brains are squishy, I wanted to make it out of silicone. It also just sounded fun to have a little squishy brain-shaped stress ball.

To do this, I:

  1. Used BrainSight to reconstruct a brain from an MRI image and export the brain shape in an STL file
  2. 3D printed the brain with a MakerBot (Replicator 5th gen.)
  3. Made a quick-and-dirty inverse mold of the brain by pressing the 3D printed brain into clay one half at a time
  4. Filled the inverse molds with a flexible silicone
  5. Stuck the halves together with a bit more silicone once they cured

Here’s the end result:

[Photo: turquoise brain v1, top view]

[Photo: turquoise brain v1, side view]

[Image: brain reconstruction STL]

It’s still a work in progress. The seams are problematic, and the 3D printed brain needs to be better polished to get a smoother result. I tried making a silicone inverse mold, which would help eliminate seams. The molds came out great, but when I used them to make a silicone brain, the many ridges of the brain made it nearly impossible to remove the brain without ripping it. So instead, I’m going to try 3D printing inverse molds (rather than the brain). Stay tuned for version 2!