Motor Prosthetics – Dr. “Cindy” Chestek

This interview was conducted with Dr. Cynthia Chestek, an Associate Professor in the Biomedical Engineering department at the University of Michigan, Ann Arbor. Her research at the Chestek Lab focuses on Neural Engineering and Tissue Engineering. We are excited to feature her as the subject of our first industry interview.

Q: Could you just give an overview of the research that takes place in your lab and what impact it has?

A: My lab works on motor neuroprosthetics. What that means is we’re taking signals from across the nervous system. So we use peripheral nerves, brain signals, muscles, etc., and we’re trying to generate control signals for the hand and fingers. This would apply to anyone who has lost the use of their hands. So for example, we’re using nerve signals for amputation, and we would use brain signals for someone that has a spinal cord injury. But it’s all about listening to the nervous system and trying to decode the language and generate hand control signals.

Q: What do you see as the next step in your research?

A: The one thing that we use as an output that we’re putting a lot of effort into right now is functional electrical stimulation. What that means is you implant tiny wires throughout a paralyzed arm, and those can be used to activate paralyzed muscles. We now have pretty good control signals for fingers that we can record directly from motor cortex. And so a system like that could be entirely implantable, recording from the brain and stimulating in the arm. So we want to start restoring finger function in people who have spinal cord injury and other forms of paralysis.
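To make the brain-to-muscle chain concrete, here is a minimal sketch of its last step: mapping a decoded finger command to a stimulation amplitude for the implanted wires. Everything here is invented for illustration, not the lab’s implementation: the function name, the 0–2 mA range, and the linear mapping are assumptions, and a real FES system would involve per-muscle calibration and hardware safety limits.

```python
# Hypothetical sketch: map a decoded finger command to an FES current.
# The 0-2 mA range and linear mapping are illustrative assumptions only.
def command_to_stim(finger_cmd, min_ma=0.0, max_ma=2.0):
    """Linearly map a decoded finger command in [0, 1] to a current (mA)."""
    cmd = max(0.0, min(1.0, finger_cmd))  # clamp out-of-range decoder output
    return min_ma + cmd * (max_ma - min_ma)
```

For example, a half-closed finger command of 0.5 would map to 1.0 mA under these assumed parameters, and out-of-range decoder outputs are clamped rather than passed through to the stimulator.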

Q: What types of technologies are most essential to the success of your controlled prosthetics?

A: My lab works on a wide range of technologies. It’s every piece of the chain, from the electrode to the electronics to the algorithms and the signal processing. Maybe I’ll start at the end. The new thing happening on the algorithm side is that I think we have finally figured out how to productively apply deep learning to these real-time applications. We have started seeing big performance jumps associated with using neural networks in real time. In terms of the electronics, I think that all of these systems have previously been designed for neuroscientists, who aren’t using the relatively limited feature sets that we actually use for prosthetics. We and others have been showing that you can actually make very low-power systems that can record a thousand or more channels from the brain, for example. On the electrode side, we work on carbon fiber electrodes, and what that lets us do is make electrodes the size of capillaries that go into the brain, do very, very minimal damage, and record signals. We can have neurons up close and personal with the electrodes because they’re doing so little damage.
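One example of the kind of limited feature set mentioned above is counting threshold crossings per channel per time bin, which is far cheaper to compute on a low-power implant than storing full broadband recordings for offline neuroscience. The sketch below is illustrative only: the −4.5 × RMS threshold is a common convention in the literature, not necessarily what the Chestek Lab uses.

```python
# Hedged sketch of a low-power neural feature: threshold-crossing counts.
# The -4.5 x RMS threshold is a common convention, assumed here for illustration.
def threshold_crossings(samples, rms, k=-4.5):
    """Count downward crossings of k * rms in one channel's voltage samples."""
    thresh = k * rms
    count = 0
    below = False  # tracks whether we are currently under the threshold
    for v in samples:
        if v < thresh and not below:
            count += 1       # a new downward crossing (putative spike)
            below = True
        elif v >= thresh:
            below = False    # signal returned above threshold
    return count
```

A bin of samples like `[0, -1, -6, -2, 0, -7, -7, 0]` with an RMS of 1.0 contains two excursions below −4.5, so it yields a count of 2; transmitting one such count per channel per bin is what makes thousand-channel, low-power systems plausible.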

Q: What does this research process look like? How do you test results in such a sensitive field?

A: Let me take you through the timeline of one of our human experiments. We’re doing human work with peripheral nerve controlled prosthetic hands, and there the signals are actually much more stable than brain signals. What we do is have a prosthesis user come in and generate something like 15 minutes of training data. They watch an animated hand making a fist, pinch, point, all of these different movements, and they follow along, and it’s really boring. Maybe we get 20 instances of each movement, then we train one of our algorithms, and then they start using it. One of the things that can absolutely happen in these experiments is that it looks amazing as long as you stay within the training circumstances. They can be controlling 10 different movements on a computer screen. But then if they get up and try to use the hand to make a pot of coffee, you realize, “Wow, there are so many more complications in all of this than we captured”. We’re trying to make it into a machine learning problem.
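The cued-training workflow described above can be sketched as a small supervised-learning loop: collect roughly 20 feature windows per cued movement, fit a decoder, then classify new windows in real time. This is a minimal stand-in, not the lab’s actual code: the synthetic feature generator and the nearest-centroid decoder are assumptions made purely to show the shape of the pipeline.

```python
# Illustrative sketch of the cued-training pipeline: a participant follows
# an animated hand, ~20 feature windows are collected per movement, and a
# simple nearest-centroid decoder is trained. All data here are synthetic.
import math
import random

MOVEMENTS = ["rest", "fist", "pinch", "point"]

def make_window(movement, n_channels=4, rng=random):
    """Simulate one feature window (e.g. mean-absolute-value per channel).
    Each movement gets a distinct synthetic activation pattern."""
    base = MOVEMENTS.index(movement)
    return [base + 0.1 * rng.random() + (0.5 if ch == base % n_channels else 0.0)
            for ch in range(n_channels)]

def train_centroids(trials):
    """trials: list of (label, feature_window). Average the windows per label."""
    sums, counts = {}, {}
    for label, window in trials:
        acc = sums.setdefault(label, [0.0] * len(window))
        for i, v in enumerate(window):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def decode(centroids, window):
    """Classify a window as the movement with the nearest centroid."""
    def dist(label):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(centroids[label], window)))
    return min(centroids, key=dist)

# ~20 cued repetitions of each movement, as described in the interview.
rng = random.Random(0)
training = [(m, make_window(m, rng=rng)) for m in MOVEMENTS for _ in range(20)]
decoder = train_centroids(training)
```

Within the training circumstances a fresh window decodes correctly (e.g. `decode(decoder, make_window("pinch"))` returns `"pinch"`); the generalization gap Dr. Chestek describes, where real-world use breaks the decoder, is exactly what such a toy setup cannot capture.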

Q: How do you see your research evolving in a couple of years? What do you see as the next problems or successes in this field?

A: Things feel like they’re happening very quickly. For example, we’re seeing subcellular electrodes, but they’re not quite in humans yet. We’re seeing these high channel count systems, but they’re not quite available yet. These are first examples of everything. Meanwhile, my colleagues out at Stanford just demonstrated decoding 70 words per minute from the brain. Wow. That’s somebody who can’t speak. That’s almost real time. They’re attempting to speak and the computer is generating the words for them. It’s actually really hard to predict, because there has been a lot of industry investment. Enough companies have seen these results from labs, and they were very motivated to try to improve on the existing electrode arrays and the existing electronics, the things that really have to be done by industry. That is the big thing that I think is going to happen in the next few years. We’re going to see new commercial systems, and that is going to change the research. I think it lets us researchers think even further ahead, if that makes sense. My lab, for example, is trying to figure out, okay, how do we get rid of the wire bundles completely? How do we implant tiny elements throughout the brain that can achieve the same end? So I can be one step more science fiction whenever something makes its way out into industry.

Q: What response have you gotten for your research, both from the public and medical sector?

A: I would say that the surgeons who have invested their time and made this their research goal have been critically important to all of these technologies getting out there. They take on a lot. I did my postdoc at Stanford with Jaimie Henderson, and I work with Parag Patil, and also with Paul Cederna here. I don’t do surgery; I’m an engineer. Obviously, this is a critical partnership. Obviously, they are the ones who decide when it’s time, you know what I mean? And they take this forward, and that first-in-human step is a really big step. And so I think that we’ve been inching forward over the past decade. The first implanted brain machine interface in a human was published in 2006, and that was after some early work in the 90s without using arrays, I should say. But the first implanted electrode array was 2006. And since then, things have been inching forward and inching forward. And there haven’t been major adverse events. But the MDs are definitely the brave ones, and also the patients, the people who volunteer to do something that has never been done before. The engineers, we support that. But it’s really the patients and their doctors who are deciding how to inch this out as safely as possible to people.

Q: Finally, how is your research impacting the industry, and how will the industry evolve in the future?

A: So in terms of the robotics, we just outpaced the robotic hands and the exoskeletons fairly recently. Before, a prosthetic hand manufacturer could have said, “Okay, nobody’s going to have control signals for more than what we can do already”. And that’s no longer true. We can absolutely generate much better performance with somebody controlling an animated or simulated hand than they’re actually going to be able to achieve using the prosthesis. And so there are all of these trade-offs in weight versus speed and things like that. I’m hoping that this puts more pressure on the hand makers and the exoskeleton makers to get something that can respond quickly and can better approximate what a human hand can do. If we start having really high success in brain-controlled FES and the muscles become a limitation, which is absolutely a possibility, I think that really motivates additional work in exoskeletons. We’re very excited about soft robotics and some of the gloves and things that are coming out.

Dr. Chestek & Tech Talks’ Final Thoughts: There are always issues with small markets. Spinal cord injury, amputation: each individual application is small, but neural interfaces as a whole might be the future of medicine. Every organ is connected to the nervous system, which you can control, stimulate, and record from. We need to develop technologies that are useful for more than one thing: electrode arrays, algorithms, electronics. There does have to be a bit of a shift in mindset. For example, there is currently no such thing as a prosthetics company that has an implantable line. For them, it’s still like they love reading the papers and seeing the videos, but they haven’t really started the hard work of becoming a company that does that. But I think we’re going to see it; 10 years is a long time in tech. Everything I just talked about, basically the first examples of it are 20 years old. That was the first array implants. In 10 years, I think we’re going to see a bigger comfort with what I’ll call low-level implants, not the most advanced stuff, but things that can help you record a few signals and do a little bit of stimulation, and I think you’ll see the first versions being less invasive, safer procedures. And then we’ll see where it goes from there. But I think you’re going to see a lot more. There’s a lot of human research going on, with initial clinical trials and first-in-human studies. And I think you’ll see that turn into the first commercial systems over the next 10 years.

It is very interesting to note that the implementation of technologies lags behind the rate of research by a significant margin. Rehabilitation robotics is growing at a rapid rate, and I hope our audience takes a closer look at such research after this interview.