Tuesday, July 12, 2011

Choosing Carter's voice device

Today we have a guest post from Stacey Moffat, a teacher, writer and mother to three, including Carter, 8, above, in Kitchener, Ont. I was interested in the topic of voice devices because we are pursuing one for Ben. We have abandoned many in the past because we always found the technology archaic and clunky and a disincentive to use. It seems that different programs are popular in different geographic regions. Tell us about what your child uses and why! Thanks! Louise


By Stacey Moffat

I gazed at the symbols on the voice output device shown to me by the speech therapist and I felt perplexed. The symbols were abstract and I found the system confusing.

I was used to picture symbols where one picture represents one word. Nouns, of course, were easiest to represent: 'dog' was shown by a picture of a dog and 'apple' by a picture of an apple. For more complex words like 'in,' positioning was shown with an arrow pointing into a box.

The language on this device was Minspeak. With Minspeak, the relationship between the symbol and the word it represents is not always obvious. For example, on this device the picture of a mountain with the sun going down behind it meant 'get.'

To me, having a system that used pictures that didn’t clearly represent the meaning of each word seemed confusing. Unfortunately I wasn't given a thorough explanation about how Minspeak and Minspeak Application Programs work. Instead I was told that it wouldn’t matter what system I chose for my son because he would do well with anything.

But, I thought to myself, if I can't understand the language and symbol set on a chosen device, how would Carter, a boy with a developmental delay?

I decided to move forward with choosing a device based solely on size, thinking that portability was top priority for Carter. He is mobile and a very active boy. I wanted Carter to be able to take his device wherever he went.

Thankfully, before any paperwork was put in place for obtaining a device, I travelled to Pittsburgh for a conference put on by CASANA where I attended a workshop about Augmentative and Alternative Communication (AAC).

Here are some things I learned at the workshop that were tremendously helpful and steered me away from focusing on size and portability and instead toward choosing a system that fosters language development and maximizes language output.

I learned that portability and compactness do not necessarily go hand-in-hand with user friendliness and easily accessible language.

I learned that the more words that are accessible to the user on the main page of a device the better – these are called core words. They are words that are used frequently and repeatedly in the English language (e.g. want, put, get, me, my, here, there, etc.). It is most advantageous for users to have as many core words accessible to them as possible.

It was explained to me that having a variety of pages set up with different themes (a page for playing cars, a page for circle time at school) can become cumbersome to users. Systems with this type of set-up are often abandoned because users get tired of having to navigate through a web of pages in order to say what they want to say. Having several pages to sort through slows down the output of speech which can cause frustration. Never mind the fact that caregivers, teachers and therapists can often spend hours programming devices with vocabulary around specific activities only to have the child use the programmed words on a very limited basis.

I learned that there are just too many words in the English language to have every word represented by one picture. Add to that the fact that not all words lend themselves to being represented by a picture. This takes us back to my earlier example where ‘get’ was represented by a picture of a mountain with the sun going down behind it. With Minspeak certain pictures can represent up to five different words.

Minspeak Application Programs can seem quite overwhelming and difficult to understand. However, if you are willing to take the time to learn about them through direct experience you soon discover that while Minspeak is a language unlike any other, it is logical and well organized.

The clincher for me was the fact that systems using Minspeak focus on language development, not just language output. For children with limited speech that means becoming competent with language so that they can build sentences word by word. Unfortunately this process does not allow device users to speak as rapidly as those with typical speech. However, by learning to build his own phrases, I feel that Carter is more empowered when expressing himself. Rather than being limited to pre-programmed sentences that someone else has put in his device he is learning to voice his own thoughts and opinions, and how he feels about something.

There is an application that allows the Minspeak language system to be downloaded onto your home computer. By downloading the program you can then play around with the system and get to know it and understand it before committing to this type of set-up for yourself or your child. Having it accessible on a computer can be helpful for therapists or others who work with your child because it enables them to get to know the system and also gives them a system on which to model language building for the user.

When Carter's voice output device finally arrived, it would have been icing on the cake if he'd punched the buttons in order to tell us what he'd had on his mind all these years. Unfortunately that was not how things unfolded. Carter has a lot of work ahead of him. There are still significant gaps in his expressive language. At almost eight years old, he is very much like a toddler learning about, experimenting with and building his language skills.

When I explain Carter’s language challenges to those who are interested I like to compare his situation to that of someone trying to learn French or any other language. Learning a new language requires numerous lessons and a lot of practice. Learning to use a voice device with the Minspeak language is no different.

I’m thankful that I took the time to do more research about Minspeak. Carter is extremely motivated by finally having a voice with which to express himself and he is building his language skills one step at a time.


Thank you for this post. We have been using a Vantage Lite for the past three years, and have encountered significant resistance from school personnel to using the language system the way it is designed to be used. They are always trying to create and add their own "pages," which defeats the whole idea of language building and limits our daughter's options. And yet, at home in certain situations, she is making her own sentences and often surprising us with her thoughts. It would be great to find a way to convince the reluctant teachers and therapists that it is indeed worth it to learn the system. It seems they just can't believe that a developmentally delayed 8-year-old can figure something out that they find so challenging! (It is maddening to read in their reports, "Mom claims child is able to access device to create 5-word sentences, but we have not seen that here.") We're going to get out the video camera.

Wow! It's great to connect with someone else who is experiencing the same struggles that we are -- although it's a shame that's the reason we're connecting :-(
Carter also uses the Vantage Lite (he's had it for about 7 months now) and I've found it extremely difficult to find professionals that have experience with this system. Why aren't more professionals willing to get on board with this language system?
I've searched high and low trying to find the support my son needs to progress through the developmental stages of language and become competent with his 'talker'.
I don't want to make sweeping generalizations but I get the impression that the Minspeak/Unity systems are more popular in western Canada and in the U.S.
We've gone so far as to head to Pittsburgh, PA to get support from SLPs at the AAC Institute.
Where are you from, Kate?
Thanks for your comment :-)

As someone who has been a researcher and developer in the field of AAC and assistive technology for over 30 years, I wholeheartedly agree with the frustration felt by Kate J. Unfortunately, the AAC field has become divided over approaches--Minspeak (and sometimes Blissymbols) vs. page-based systems of pre-stored words/phrases. The former offers a more language-building approach while the latter is often limited to what has been pre-stored. (However, both approaches also allow for additional ways to write novel words through the alphabet supplemented with word prediction.) I will admit my bias against pre-stored messaging as I see it as ultimately limiting compared with a language approach that may take some additional effort upfront.

A key issue that I raise is who are we helping and who must make an effort to communicate? Perhaps the notion of "helping" is wrong-headed. Is communication something to be given? My first experience 30 years ago watching a person who was non-speaking communicate with her mother in real-time using a combination of vocalizations, facial expressions, and body gestures taught me an important lesson. When I tried to talk with her, I could not keep up with what she was saying. Whose problem was it? Mine or hers? Who needed help? Who needed to make an effort to communicate?

Now in school, teachers say that they can't understand a child's use of a system and impose their own limited perspectives because they don't want to learn the system. Who has the problem? Who really needs help? Why does the non-speaking child (and parent) bear all of the responsibility? In any conversation, both participants share responsibility in reaching a common understanding, and both must make some effort. How many years do teachers put into teaching reading and writing? How much time does it take to learn a system like Minspeak which is based on the same rules and structures? Just because it takes some effort is no excuse to impose a false disability on a child!

These comments really struck home.

I have spent untold hours/weeks/months? programming words/phrases into the labyrinth of pages on a DynaMyte early on, and with Proloquo on an iPad and iPod more recently. There is no easy way to order the content and it becomes more and more deeply embedded (kind of like relying on a huge dictionary to string a sentence together).

My son is 17 and for years I've been asking why the big technology/software companies can't come together to create a voice device that is intuitive and user-friendly -- like mainstream business software. I am always told the market is too small. But wouldn't it be worth the effort, just from a PR point of view? To be seen as enabling so many children and adults to communicate?

We are excited about the potential for Fraser's WordQ over time with Ben.

We are now back at trying to find a robust voice device for Ben, but I feel cynical at the outset.

I'm grateful to hear of everyone's experiences and insights! Hope we hear from more.

Kate, it's exciting to hear about what your daughter is doing at home. We did get a loan of a DynaMyte years ago from a distributor and then videoed Ben using it to prove he should be approved for funding for it. And there were times in the early days when I would go into the kindergarten and see the device turned off. Ben seems to be more comfortable using sign language than the voice devices, but I never felt we had the support and the right device to be successful.

Fraser -- You raised so many compelling points!!!!! Thank you for posting. Louise

Stacey, I'm in the Twin Cities area of MN. We got started with the Vantage when we had a phenomenal SLP, who also had an M.Ed. and was a real expert with it. We were able to borrow the hospital's device for several months of home and school use before purchasing our own. We saw amazing progress over a couple of years. Our SLP could even foresee how our daughter would grow with it over a lifetime. Unfortunately, the SLP relocated to Georgia with her husband's job, and we haven't had good support since, even from the PRC rep, so we've kind of stagnated over the last year. Our daughter still uses it daily, just not the way we imagined (with her, and accessible, all day). She also will use sign, gesture, and verbal approximations, and seems to know which mode to use with whom. When Grandma came at Christmas, and when we had our annual social worker visit (both unfamiliar), she went right for the Vantage. If she doesn't have a sign for something (today it was "mosquito" - haha), she can easily find it there too.
I know we are not the only ones wanting better support! And having actually experienced it already, I know it's possible.
If you'd like to talk further, I'm at jensens@tcinternet.net.

Louise, I am with you. I like the iPad/Proloquo2Go because they are easier to navigate than the Dynavox (we've traveled the same AAC path!), but right now we're programming sentences and it's getting complicated. I wish that the voice wasn't so robotic sounding. We tried the Tango device a few years ago, Max wasn't ready for it, but I so loved that it had a kid's voice.

I never heard of Fraser's WordQ till tonight, I am going to check it out.

Imagine if Apple set out to create a speech device. Imagine.

We're more curious about how to build an open-ended wearable device where people could decide for themselves what words are needed when and where. Luckily we're starting with a very small vocabulary, but our goal is cheap, wearable and something that can be maintained and modified in the home. I think that looking for commercial solutions won't work as well as figuring out DIY solutions. The time IS right.

Hi Jason -- I'd be interested in hearing more about your plans. Thanks! Louise