I’m very interested in scientific illustration. In graduate school, I worked as a layout editor and art director for the Berkeley Science Review, a graduate-student-run science magazine at UC Berkeley. While there, I made quite a few illustrations. I’ve also illustrated concepts related to my research.

Below are some samples of my illustrations. Captions provide some context to interpret the images. If you like my work and have a concept you’d like visualized, please do contact me.

**Neuron 2014 cover**

In a brain-machine interface (BMI), both the algorithm decoding neural activity and the brain contribute to performance. Both components can adapt, via adaptive decoding or neural plasticity. In their Neuron 2014 article, Orsborn et al. explore the interactions between neural and decoder adaptation in BMIs. They show that a two-learner BMI can produce proficient, highly stable performance and reliable recall of underlying neural representations. Neural activity stabilized with learning, as did decoder parameters (schematically represented by colors in the raster plots for the two components). Neural and decoder adaptation were found to closely interact, with decoder parameters influencing neural adaptation. Performance and neural representations could be maintained even with changes in neural recordings and control contexts, which may be particularly useful for real-world neuroprostheses.

**P-value “rollercoaster” (Berkeley Science Review)**

Photos by Marek Jakubowski; layout and concept, Amy Orsborn.

Simulations show that the statistical significance of a result can strongly depend on the sample size. As a researcher adds one observation to each of two conditions and performs a test of significance, the p-value can change dramatically, with values below the orange line indicating significance (left). A scientist’s emotions can follow a similar rollercoaster.
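The simulation described above can be sketched in a few lines. This is an illustrative toy (the effect size, sample sizes, and number of steps are all made up, not the original simulation), and it uses a normal approximation to the t-test p-value to stay dependency-free; `scipy.stats.ttest_ind` would give the exact value.

```python
# Sketch of the p-value "rollercoaster": a two-sample test recomputed as one
# observation is added to each condition. All parameters are illustrative.
import math
import random

random.seed(0)
mean_a, mean_b, sd = 0.0, 0.5, 1.0       # hypothetical small true effect

def p_value(a, b):
    """Two-sided p-value for a Welch t statistic (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(t) / math.sqrt(2))

a = [random.gauss(mean_a, sd) for _ in range(3)]
b = [random.gauss(mean_b, sd) for _ in range(3)]
p_values = []
for _ in range(30):                      # add one observation per condition
    a.append(random.gauss(mean_a, sd))
    b.append(random.gauss(mean_b, sd))
    p_values.append(p_value(a, b))

# Count how often the trajectory crosses alpha = 0.05 (the orange line).
crossings = sum((p < 0.05) != (q < 0.05) for p, q in zip(p_values, p_values[1:]))
print(f"final p = {p_values[-1]:.3f}, significance crossings = {crossings}")
```

Plotting `p_values` against sample size reproduces the rollercoaster: the trajectory can cross the significance threshold repeatedly before settling.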

**Bayesian Inference (Berkeley Science Review)**

In 1966, a US Air Force bomber flying over southern Spain exploded, dropping its cargo–including four hydrogen bombs–below. One bomb was not readily found on land and was thought to be in the ocean. This came to be known as the Palomares Incident. To find the bomb with minimal information, search teams employed Bayesian inference to guide the search strategy of a submarine (‘Alvin’). On day 1, they formed a “prior”–the probability that the bomb was in each location–based on reports from witnesses. This is illustrated in the color map labeled ‘Day 1’, with warmer colors indicating higher probabilities. This prior was used to select where to search (dashed lines). After a day of searching, the data collected were used to update the prior, which guided day 2’s search (‘Day 2’). This process was repeated, gradually narrowing down the bomb’s likely location, until it was found.
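The update loop described above can be sketched as Bayesian search over a grid. Everything here is illustrative (grid size, detection probability, random prior, the bomb’s location), not the historical search data.

```python
# Sketch of Bayesian search theory: search the most probable cell, and after
# each failed search apply Bayes' rule and renormalize. All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
prior = rng.random((10, 10))
prior /= prior.sum()                 # "Day 1" prior over a 10x10 search grid

p_detect = 0.8                       # chance of detection if we search the right cell
true_loc = (7, 2)                    # hypothetical true location

found = False
for day in range(1, 1001):           # one search per "day"
    cell = np.unravel_index(prior.argmax(), prior.shape)
    if cell == true_loc and rng.random() < p_detect:
        found = True
        print(f"found on day {day} at {cell}")
        break
    # Bayes update after a failed search: the searched cell keeps only the
    # probability of a missed detection, then the map is renormalized.
    prior[cell] *= 1 - p_detect
    prior /= prior.sum()
```

Each iteration plays the role of one day’s color map: probability drains out of searched cells and concentrates on the remaining candidates.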

**Probability and uncertainty–smarter traffic routing algorithms (Berkeley Science Review)**

Most traffic routing algorithms use average trip duration to recommend routes. But averages can overlook an important aspect of traffic–uncertainty. Imagine you need to get home in 10 minutes to take a pie out of the oven. You have two routes home: an expressway with a high speed limit that is occasionally prone to traffic jams, or a slower country road with little traffic. The probabilities of your trip taking a certain length of time (the distributions shown at right) are quite different for the two roads. Most trips on the expressway are faster than on the country road, so comparing average trip times (dashed lines) would suggest the expressway is the best bet to get home quickly. But if you also look at the variance of the distribution–how spread out it is–you see that the expressway will sometimes get you home to a burned pie. Trip times on the country road, on the other hand, are much less variable. With more information about the distribution of trip times, traffic algorithms could make smarter routing recommendations.
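The comparison can be made concrete with two made-up trip-time distributions (the means, jam probability, and deadline below are all illustrative): the expressway is faster on average but heavy-tailed, so it is more likely to miss the deadline.

```python
# Sketch of mean-vs-variance routing: compare average trip time against the
# probability of arriving after a deadline. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Expressway: usually ~7 min, but a 10% chance of a jam adding ~15 min.
expressway = rng.normal(7, 1, n) + (rng.random(n) < 0.10) * rng.normal(15, 3, n)
# Country road: reliably ~9 min with little spread.
country = rng.normal(9, 0.5, n)

deadline = 10.0                       # the pie comes out in 10 minutes
for name, trips in [("expressway", expressway), ("country road", country)]:
    print(f"{name}: mean {trips.mean():.1f} min, "
          f"P(late) = {np.mean(trips > deadline):.2f}")
```

Under these assumptions the expressway wins on the mean but loses on the tail, which is exactly the trade-off an average-only router cannot see.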

**Information theory applied to neuroscience (Berkeley Science Review)**

Claude E. Shannon introduced information theory in 1948 to describe the transmission of information across communication channels. His theory fundamentally links information with uncertainty. Neuroscientists can use information theory to ask how much information is “encoded” in the responses of neurons. In one example experiment illustrated here, researchers recorded from neurons in a bullfrog. They found that the information encoded by the frog’s neurons depended on the type of auditory stimulus. For repetitions of the same “natural” stimulus–a frog’s vocalizations–the neurons fired similarly (blue). Repetitions of a more artificial tone, however, elicited more variable patterns of neural activity (orange). The highly variable neural responses to tones give very little information about the stimulus. Thus, more information was encoded for natural stimuli.
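The link between response variability and information can be sketched numerically. This toy model (not the bullfrog data; all spike counts and noise levels are made up) estimates the mutual information I(S;R) = H(R) − H(R|S) between stimulus and response: as trial-to-trial variability grows, the conditional entropy H(R|S) rises and the information shrinks.

```python
# Sketch of why reliable responses carry more information: estimate
# I(S;R) = H(R) - H(R|S) for a reliable vs. a noisy toy neuron.
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a sample of discrete responses."""
    p = np.bincount(counts) / len(counts)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(3)
trials = 10_000
means = np.array([6, 12])           # mean spike count for each of two stimuli

results = {}
# "Natural" responses are reliable (low noise); "tone" responses are variable.
for label, noise in [("natural", 0.5), ("tone", 3.0)]:
    stim = rng.integers(0, 2, trials)                   # random stimulus per trial
    resp = np.clip(rng.normal(means[stim], noise), 0, None).round().astype(int)
    h_r = entropy(resp)                                 # total response entropy
    # Stimuli are equally likely, so H(R|S) is the plain average over stimuli.
    h_r_given_s = np.mean([entropy(resp[stim == s]) for s in (0, 1)])
    results[label] = h_r - h_r_given_s
    print(f"{label}: I(S;R) ≈ {results[label]:.2f} bits")
```

With two equally likely stimuli the information is capped at 1 bit; the reliable “natural” neuron approaches that cap, while the variable “tone” neuron falls well short of it.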

**Parasites (Berkeley Science Review)**

Opening spread for an article about parasites and gut microbes, and how they can influence their hosts’ behaviors.

**Berkeley Energy Research Infographic (Berkeley Science Review)**

Energy research at UC Berkeley covers a vast array of topics. This graphic illustrates the variety of research areas and the distribution of researchers at UC Berkeley studying each. Each group represents a broad research category, and branches within each group show research sub-categories. The size of each dot is proportional to the number of researchers working in that field, based on data from the BERC gap analysis study.