
Meet Adam Charles, assistant professor of BME

February 5, 2021

Adam Charles joined the Johns Hopkins Department of Biomedical Engineering as an assistant professor in July 2020. In this interview, Charles discusses his research developing mathematical models and algorithms to understand the brain, his goals for the future, and career advice for students.

What made you pursue a career in engineering?

I always loved mathematics—in part because I was pretty lousy at memorizing and in math I could memorize the fewest rules and work out the rest as needed. The one caveat was that I wanted the math to tie to something tangible. In my early physics classes, I really loved how mathematics provided a very concise and clear way to express seemingly complex systems. I became interested in engineering, and eventually I found my home with signal processing and data science for neuroscience.

Why did you choose Johns Hopkins BME? What are you looking forward to most?

The community! My work lives in an emerging intersection of data science, imaging, and neuroscience. JHU has all three in abundance: the Mathematical Institute for Data Science (MINDS), the Center for Imaging Science (CIS), and the Kavli Neuroscience Discovery Institute. With these centers, and BME as my home, many opportunities are available for someone like myself to collaborate broadly in bridging these areas.

Can you give a brief overview of your current research?

My current work aims to develop mathematical models and algorithms to help bridge the gap between the physical brain and our scientific understanding of the brain. This includes: 1) bringing machine learning closer to the sensors to enable new types of computational imaging and data analysis; and 2) bringing the scientific question being asked into the data processing.

In the former, I create new data analysis algorithms and mathematical models that enable us to make the most of the brain data we collect. There is an interesting tension here: brain data are becoming both larger and more plentiful, yet acquisition is still expensive and often clustered around specific experimental paradigms shaped in large part by available technology and historical assumptions. The new algorithms I aim to create will improve the information we can robustly extract from these datasets, and even guide imaging techniques automatically to track and interpret signals in the brain online during experimentation.

The latter goal addresses a fundamental disconnect that pops up often between computational models that look for structure in high-dimensional data and the scientific hypotheses and “first-principles” models that form our normative understanding of the brain. To bridge this gap, we need more interpretable computational models, as well as a fundamental theoretical understanding of what the limits and capabilities of these models are. For many popular models, like recurrent neural networks, we’re still a ways off from the theory we need to fully interpret what they are telling us about the data they are trained on.

Have you ever experienced a “eureka moment”?

As it turns out, I’m not that kind of thinker! I’m much more of a percolation thinker. I’ll feel like I might have had an idea, maybe sometime while reading a paper. It’s a feeling like I just saw something in the bushes, but I’m not sure. I’ll forget about it and a little later I’ll get a similar feeling, like there’s something in that paper that I saw but I can’t put my finger on it. This will annoy me most of the day and eventually I’ll be able to catch the topic of my thought. I’ll be able to see the general shape; for instance, I will think, “This has something to do with graphs and matrix decompositions.” A little later, I’ll catch some details. This can go on for a while, often helped by conversations with colleagues where I struggle to get out anything coherent, until finally I’m able to write down a coherent thought.

What do you consider your biggest research accomplishment so far?

While I love all my papers equally, if really pressed, I think I would say my line of work in multi-photon calcium imaging is my biggest accomplishment. This imaging modality is making long-term recordings of large neuronal populations (thousands of neurons simultaneously!) available to more and more labs around the world. The resulting images show neurons as shapes whose time-varying brightness indicates neural activity. Collecting and interpreting these data is a vital process in neuroscience, and I’m very happy to have been able to work on increasing both the quantity of neurons we can simultaneously image and the quality of the data interpretation algorithms. For the former, my collaborators and I created a computational imaging method that used stereoscopy to measure volumes of tissue instead of single slices, increasing neural yield by over a factor of three. For the latter, I derived a new, robust inference algorithm that was immune to the challenging noise and cross-neuron interference properties of the images.
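To give a rough sense of what interpreting these data involves, here is a minimal illustrative sketch (not the inference algorithm described above; the function name, array shapes, and baseline heuristic are assumptions made purely for illustration): if each neuron’s spatial footprint in the movie is known, a crude activity trace is just the average brightness inside that footprint over time, rescaled relative to a baseline.

import numpy as np

def extract_dff(movie, masks, baseline_percentile=20):
    """movie: (T, H, W) array of frames; masks: (N, H, W) boolean ROI masks.
    Returns an (N, T) array of delta-F-over-F activity traces, one per neuron."""
    traces = np.empty((masks.shape[0], movie.shape[0]))
    for i, mask in enumerate(masks):
        # Average brightness inside this neuron's region at each time point.
        f = movie[:, mask].mean(axis=1)
        # Crude baseline estimate; real pipelines model noise, drift, and
        # contamination from neighboring neurons and neuropil.
        f0 = np.percentile(f, baseline_percentile)
        traces[i] = (f - f0) / f0
    return traces

# Toy example: 100 frames of a 64x64 movie containing two circular "neurons".
rng = np.random.default_rng(0)
movie = rng.normal(100.0, 5.0, size=(100, 64, 64))
yy, xx = np.mgrid[:64, :64]
masks = np.stack([(yy - 20) ** 2 + (xx - 20) ** 2 < 25,
                  (yy - 45) ** 2 + (xx - 45) ** 2 < 25])
movie[30:40][:, masks[0]] += 50.0  # simulated burst of activity in neuron 0
print(extract_dff(movie, masks).shape)  # (2, 100)

Real pipelines go considerably further than this sketch, handling the noise, drift, and cross-neuron interference mentioned above.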

One aspect I’m proud of is that these techniques were truly interdisciplinary. They were possible because people of different backgrounds, for example in optics and computation, came together. I believe that this is really the future of neuroscience technology, especially as algorithms become further integrated into every step of the scientific discovery process.

What impact would you like your work to have?

My goal is to enable new types of neuroscience experiments, and to increase efficiency (time, data, effort, dollars, etc.) in going from brain to science. I want neuroscientists to not only have access to amazing new optics and electrodes, but also to the best ways to interpret the data those tools produce and to algorithmically improve how they are used.

What are your goals for the future?

What I would really like to see is a greater effort to bring data modalities, models, and scientific questions together into a unified framework. This goes well beyond “taking in data” and operating on it, to having a mathematical framework that is aware of every step that happens outside of the brain. This also means that methods, both experimental and computational, should be shared more widely and should build on each other. This is really the only way we can make sure that the impressive variety of techniques and experimental paradigms we have can converge to answer a common set of questions about general computation in the brain.

Do you have any career advice to offer to current students?

Try things out! There’s a lot that you can learn from talking to people, but the best way to know if a career is for you is to go and actually try it. Internships, shadowing doctors, and undergrad research are amazing ways to sample the space of potential employment opportunities. In a similar vein, always try to connect with new ideas. No matter how far outside your scope you might think an idea is, find the shortest path between what you know and what you’ve come across. Building that conceptual network helps immensely in understanding new ideas more deeply, and it makes one more empathetic to the people who work on them.

What do you enjoy doing outside the lab?

I really love making things, from constructing a playhouse for my kids to installing upgrades to our house, though my favorite is probably cooking and baking. My kids’ birthdays are almost as much for me as for them because I get a fun new challenge of how to make a cake that fits their chosen theme. In the same equivalence class are puzzles. I’m fairly addicted to The New York Times crossword, and even more so love cryptic crosswords (although these are rarer). Cryptic crosswords combine a lot of things I enjoy: crosswords, twisting words’ meanings, and generally being creative.
