
Follow-up from OSSPEAC 2024

Thanks to everyone who joined my talk at the October 2024 OSSPEAC Conference! In case those QR codes went by too fast, here are all the links I shared in my talk:

Follow-up from Spring 2024 Speech Retreat

Thanks to everyone who joined my talk at the March 2024 Speech Retreat! In case those QR codes went by too fast, here are all the links I shared in my talk:

Seeking candidates for supplement to NIH-funded gender-affirming voice training project

Our NIH-funded project, “Improving the Accessibility of Transgender Voice Training with Visual-acoustic Biofeedback,” is eligible to apply for an NIH Research Supplement to Promote Diversity in Health-Related Research. If you know a student from an underrepresented background who might be interested, we would love it if you could put us in touch with them.

About our project: “Some transgender people can be negatively impacted if their voice is perceived as incongruous with their gender identity, and they may pursue training to achieve a vocal presentation that is comfortable for them. In addition to the pitch of the voice, men’s and women’s vocal tracts also differ in their resonating characteristics, but resonance is harder to understand than pitch, and harder to target in therapy. Our software allows learners to visualize the resonant frequencies of the vocal tract, which could make it easier to adjust them to match a target that is appropriate for their personal speech goals.” This project is a collaboration with Vesna Novak, a transgender computer scientist at the University of Cincinnati. Our overarching goal is to create and test the first software for gender-affirming voice training that combines real-time information about both vocal pitch and resonance with structured exercises. 
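For readers curious about the acoustics of “resonance,” a rough, textbook-style illustration may help. This is the standard uniform-tube approximation from acoustic phonetics, offered only as a back-of-the-envelope sketch; it is not how our software measures or displays anything. Treating the vocal tract as a tube of length L that is closed at one end, its resonances fall near odd multiples of a quarter wavelength:

\[ F_n \approx \frac{(2n-1)\,c}{4L}, \qquad n = 1, 2, 3, \ldots \]

With the speed of sound c ≈ 35,000 cm/s, a vocal tract of length L ≈ 17.5 cm puts the first resonance near 500 Hz, while a shorter tract of L ≈ 14.5 cm pushes it toward 600 Hz. This is the sense in which a longer or shorter vocal tract changes the voice’s resonant quality independently of its pitch.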

About the NIH Supplement: These supplements are intended to support promising young researchers from underrepresented backgrounds as defined by the NIH (scroll down to “Underrepresented Populations in the U.S. Biomedical, Clinical, Behavioral and Social Sciences Research Enterprise” for the specific categories of eligibility; only US citizens and permanent residents are eligible). The researcher and mentor work together to identify a research project and a career development plan to be carried out over the course of the administrative supplement (typically one year in duration, full-time or part-time). Researchers at multiple levels are eligible for supplements – post-baccalaureate, post-master’s, predoctoral, and postdoctoral. Given the focus of our research project, we would love to find a team member who is LGBTQ+ or committed to working for the well-being of the LGBTQ+ community, in addition to meeting NIH’s definition of underrepresented groups.

How to get involved: If you know anyone you think might be interested in this opportunity, please feel free to spread the word! Interested candidates can contact us with a current CV for further discussion.

I am also interested in hearing from students who might want to work on this project as doctoral students through NYU Steinhardt’s fully funded PhD fellowship program (start date fall 2025). In addition to the PhD program in Communicative Sciences and Disorders, there is an interdisciplinary PhD program in Rehabilitation Sciences that could be a strong fit for a student with AI/ML experience who is interested in working on the technical side of the project.

People marching in the street with an LGBTQ+ pride flag

Possible microphones for staRt

We strongly recommend using a microphone with the staRt app, whether on iPad or on the web – you will see clearer, peakier peaks! I get a lot of queries asking what microphone we recommend. To be honest, we don’t think it matters that much WHAT microphone you use, as long as you use one. If you or your client can find a pair of earbuds with an inline microphone, that should be fine. We do have a few suggestions:

  • We recommend holding the mic a few inches in front of the speaker’s mouth. 
  • We do NOT recommend using AirPods.

If you are feeling overwhelmed by the options on Amazon, here are two inexpensive options you might try:

Text reads "top tips for using start online: use a microphone!"

If you’re using the staRt app over a video call, note that Zoom doesn’t like sustained speech sounds – it thinks there’s an echo and cuts the audio. To avoid this, turn on “Original sound for musicians”! You need to enable original sound in Zoom’s audio settings panel (click the up arrow next to the mute button and choose “Audio Settings”) AND turn it on in the call (upper left corner).

Text reads "Turn on Zoom's original sound for musicians!" Image shows screenshots of the Audio Settings dialog in Zoom, then the Audio profile dialog with "Original sound for musicians" selected and all other settings unchecked, then the Zoom Meeting tab with a notification "Original sound for musicians: On"

Expanding biofeedback access with bitslabstart.com

Our team has been thrilled to see the growing number of users of the staRt iOS app, but we know it has some limitations. It’s not a great fit for telepractice, where its high processing load makes it hard to use with screen-sharing or mirroring technology. Also, we know that not everyone has an iPad! 

Enter bitslabstart.com! We are still building out the full set of features, but the current release includes the core speech visualization (real-time wave with adjustable target slider), an interactive user tutorial, and an “SLP Info” tab with suggested targets and links to additional resources. It can be used in a browser on a desktop or laptop computer (Mac or PC).* Use it in person if you don’t have an iPad, or screen-share it over Zoom for your telepractice clients!

Screenshot of the staRt app with text "Introducing start online! bitslabstart.com." The screenshot shows a still frame of the real-time wave and the starfish target slider. The SLP Info tab is highlighted.
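For the technically curious, here is a minimal sketch of how real-time spectral feedback can be captured and drawn in a browser using the Web Audio API. To be clear, this is not the bitslabstart.com source code; the app produces its wave with its own analysis pipeline, and the element IDs below (“wave,” “start-button”) are invented for the example.

// Illustrative sketch only: live microphone spectrum in the browser via the Web Audio API.
// Not the staRt / bitslabstart.com implementation; element IDs are hypothetical.

async function startSpectrum(): Promise<void> {
  // Request microphone access (requires a secure context, e.g. https).
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  const audioCtx = new AudioContext();
  const source = audioCtx.createMediaStreamSource(stream);

  // AnalyserNode provides a running FFT of the incoming audio.
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  const canvas = document.getElementById("wave") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;

  const draw = (): void => {
    analyser.getByteFrequencyData(bins); // magnitude (0–255) per frequency bin

    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.beginPath();
    for (let i = 0; i < bins.length; i++) {
      const x = (i / bins.length) * canvas.width;
      const y = canvas.height - (bins[i] / 255) * canvas.height;
      if (i === 0) ctx.moveTo(x, y);
      else ctx.lineTo(x, y);
    }
    ctx.stroke();

    requestAnimationFrame(draw); // redraw on each animation frame
  };
  draw();
}

// Start from a user gesture so the browser allows audio capture.
document.getElementById("start-button")?.addEventListener("click", () => {
  void startSpectrum();
});

Whatever analysis the app itself uses, the basic loop is the same: capture a frame of audio, analyze it, and redraw. This is also why running the software on the client’s own device (rather than screen-sharing it) gives the smoothest wave.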

A few hints for telepractice:

  • If possible, run the software on the client’s device so they see the wave with no lag.
  • Try turning on Zoom’s “Original sound for musicians” setting to prevent audio from dropping out during sustained sounds.
  • Check out our 2023 ASHA talk for additional tips and tricks!

*In its current release, bitslabstart.com does not work on mobile devices. We will work on expanding compatibility moving forward.

Upcoming hybrid talk on biofeedback for free CE credit

On 12/5/23 I will be presenting in NYU CSD’s colloquium series in a special session eligible for CE credit. Here are the details!

Technology-enhanced treatment for speech sound disorder: Who gets access, who responds, and why?

This course discusses the use of biofeedback training for speech in older children with residual speech sound disorder. It reviews evidence on the efficacy of ultrasound and visual-acoustic biofeedback and discusses how biofeedback may have its effect, such as by replacing faulty sensory input. The course also covers barriers to clinical uptake and steps to expand access to speech technologies.

Presented by Dr. Tara McAllister
December 5, 2023 | 239 Greene St, 8th Floor

5:00–6:15 PM | Presentation
6:15–6:45 PM | Reception

Please register here for Zoom attendance.

Please register here if you wish to earn ASHA CEU credit.

Financial Disclosure: This research was partly funded by the National Institutes of Health. This talk will discuss the staRt app for visual-acoustic biofeedback. Dr. McAllister is a member of the board of Sonority Labs, LLC, a for-profit entity that was created to explore the possibility of commercialization of the staRt software. The staRt app is currently distributed as a free download, but in the future it may be sold or licensed in a for-profit capacity. Other products for biofeedback delivery will also be discussed in this course.

Nonfinancial Disclosure: No relevant nonfinancial relationship exists.

Time-ordered Agenda:
Define biofeedback and its use for residual speech sound disorder (10 min)
Review evidence regarding the efficacy of biofeedback (10 min)
Discuss models of how biofeedback works in relation to models of sensory function (15 min)
Discuss the possibility of personalizing biofeedback for a learner’s sensory profile (10 min)
Discuss barriers to clinical uptake of biofeedback and steps to expand access (10 min)
Discuss other populations who might benefit from biofeedback (10 min)
Questions, discussion, and self-assessment (10 min)


BITS Lab could not be prouder to share that two outstanding alumnae of NYU’s program in Communicative Sciences and Disorders will be returning to the lab with support from administrative supplements from the National Institutes of Health. Samantha Ayala ’18, who completed her MS in speech-language pathology at Columbia and earned clinical certification this past summer, will be returning to the lab for one year as a full-time research clinician. She will support data collection for our ongoing randomized controlled trial (NIH R01 DC017476, “Biofeedback-Enhanced Treatment for Speech Sound Disorder: Randomized Controlled Trial and Delineation of Sensorimotor Subtypes”) and is also launching her own research project looking at variability in the speech of children with residual speech sound disorder.

Wendy Liang received her MS in SLP from NYU in 2016. She has worked as a medical speech-language pathologist and swallowing disorders specialist for one of the largest health systems in Texas, and she founded Burgundy & White, a nonprofit that aims to promote survivorship and continued care for individuals impacted by head and neck cancer. Wendy’s work in BITS Lab will be supported by NIH funds through the program “Administrative Supplements to Support Collaborations to Improve the AI/ML-Readiness of NIH-Supported Data”; this was one of fifty awards across all the NIH institutes. In collaboration with partners at Syracuse University (Jonathan Preston, Asif Salekin, and Nina Benway, whose dissertation research made the supplement possible) and Montclair State University (Elaine Hitchcock), the project will lay the groundwork for the development of automated speech recognition tools for children with speech sound disorder. We will modify and augment an existing corpus of acoustic recordings of child speech and test an algorithm for automated classification of productions as accurate or inaccurate. The project is advised by Yvan Rose (Memorial University of Newfoundland) on corpus sharing and Carol Espy-Wilson (University of Maryland) on engineering design.

Congratulations and welcome back to Wendy and Sam!

Using the staRt app remotely

I’ve received a few inquiries lately about options to use the staRt app over telepractice. This is something we have been thinking about a lot – in fact, I will be speaking on the subject at an upcoming webinar hosted by ASHA’s SIG 19. The staRt tech team and I are currently working on developing a browser version of staRt that can be readily shared over Zoom. However, we are in the early stages and do not yet have an estimate of when that version might become available. All current users of the app will be notified if and when a pilot browser-based version of staRt is released. (The app is available as a free download in the App Store.) In the meantime, here are a few notes on what has and has not worked for us:

  • In our testing to date, we weren’t really successful in using the app over Zoom by screen-sharing from the iPad or using mirroring software. The app is pretty resource-intensive, so the wave tends to run too slowly if other software is running at the same time. If you have been successful in using the app via screen-sharing or mirroring, please let us know so we can find out more about your setup!
  • We had one clinician report that he was able to use the app remotely by running the app on his iPad, connecting over Zoom on his laptop, and using a document camera to share the image of the wave to the computer. (He was just using the sound from the computer speakers as the input to the app, no special cable connection or anything.) I was surprised to hear that this worked, though – you will definitely want to make sure you are in a very quiet space if you try it!
  • The only real solution we have found so far is to build the app in the iOS Simulator that ships with Xcode on a Mac; then you can screen-share over Zoom with relatively limited lag and loss of resolution. However, building the app in the simulator requires some significant tech savvy, so I wouldn’t recommend it unless you have some programming experience or access to someone who does. (And you need a Mac; there’s no PC option.) The instructions are in the readme on this GitHub page. We also use Soundflower to route audio directly from Zoom to the simulator; instructions for setup are in this document. If you do try to build the app and get stuck somewhere, contact us (nyuchildspeech at gmail) and we can ask our developer to help troubleshoot.

Seeking participants for online “r” therapy study

Our research team is conducting a project investigating speech interventions for “r” errors that will provide free treatment via telepractice (video calls) for eligible children. Many children have difficulty producing the “r” sound, and some of these children are not able to eliminate their errors even after receiving years of treatment. We are evaluating treatment methods that have been shown to be effective in previous studies, with the goal of finding out if they are effective in the context of telepractice.

cartoon of speech practice

We are looking for children between 9 and 15 years of age who have difficulty producing correct “r” sounds. Participating children must be native speakers of English who have no history of major hearing impairment or developmental disorder (e.g., Down syndrome, cerebral palsy, autism spectrum disorder).

Potential participants will be asked to complete an initial evaluation. If they are eligible to participate in the study, they will complete a baseline (pre-treatment) phase. In the treatment phase of the study, they will receive 21 treatment sessions, each roughly one hour in duration. Treatment will be administered by a certified speech-language pathologist. Services will be provided at no cost to participating families. All study activities will be conducted via video call.

If you would like more information about this study, please follow this link to complete a brief screening survey. Thank you for your time!