An app that assists novice ASL interpreters with imagery and text during simultaneous interpretation
When difficult or technical vocabulary comes up during a simultaneous interpretation session, SignSavvy displays the word along with corresponding synonyms or an image to help interpreters quickly understand it in the context of the conversation. Not only does this help interpreters if they mishear a term, it also lets them see the spelling in case they need to fingerspell it.
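To make this concrete, here is a minimal sketch of how that in-session logic could work, assuming a hypothetical known-word list and a prepared set of hints. The names (`VocabHint`, `hintForTerm`) and the rules are illustrative assumptions, not SignSavvy's actual implementation.

```typescript
// Hypothetical sketch of the in-session assist: for each term picked up from the
// speech transcript, decide whether to surface it and how to present it.
// All names and rules here are illustrative assumptions, not the app's real logic.

type DisplayMode = "synonyms" | "image";

interface VocabHint {
  term: string;          // spelled out, so the interpreter can fingerspell it
  mode: DisplayMode;     // show synonyms or an image, whichever explains it better
  synonyms?: string[];
  imageUrl?: string;
}

// Assumed inputs: the interpreter's known-word list and a dictionary of prepared hints.
function hintForTerm(
  term: string,
  knownWords: Set<string>,
  prepared: Map<string, VocabHint>
): VocabHint | null {
  const normalized = term.toLowerCase();
  if (knownWords.has(normalized)) return null; // skip words the interpreter already knows
  return prepared.get(normalized) ?? null;     // only surface terms we have material for
}
```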
Interpretation sometimes happens in teams, especially for sessions lasting longer than an hour. The role of the off-interpreter is to be another set of eyes and ears and to assist the on-interpreter with cues as needed. When interpreters work in teams, the off-interpreter can switch to the list view, which makes it easier to verify accuracy and look up vocabulary for the on-interpreter if needed.
Interpreters can share their screens with their users by clicking the screen-casting icon. Deaf and hard-of-hearing people, who are often used to lacking full access to information, might want to be included and may benefit from seeing what the interpreter sees.
After each session, interpreters can select which words were helpful and save them to their vocab list. They can also deselect words they don't want to show up in future sessions.
Saved words from previous sessions are sorted by topic, by date, or alphabetically under the vocabulary tab. Users can review and learn vocabulary in their free time to be better prepared for upcoming sessions.
Interpreters can create profiles for their users and coworkers. This allows them to take notes on each individual's style, needs, or language level so they can keep track of that information for future sessions. It also saves past sessions with that individual and the vocabulary used in those sessions.
When interpreters receive a new job, they can add an upcoming session to the schedule and enter details and preferences.
To set up vocabulary assistance, they can fill in the subject field and add a few keywords to help the AI better predict what to show.
Since some words are better explained with synonyms while others are more obvious with images, users can select their preferred display mode. If they turn auto mode on, the AI chooses which mode to display depending on what explains the word best.
Based on the user's input and previous sessions, the AI generates a list of vocabulary predictions that might occur in the session. Interpreters can remove words they already know, which helps the AI gauge their vocabulary level and better predict which words to show.
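A rough sketch of how this setup flow could be modeled, assuming hypothetical types (`SessionSetup`, `InterpreterProfile`) and a placeholder prediction service; none of these names come from the real app.

```typescript
// Hypothetical sketch of the pre-session setup described above. The field names,
// the prediction source, and the filtering are assumptions for illustration only.

type DisplayPreference = "synonyms" | "images" | "auto";

interface SessionSetup {
  subject: string;         // e.g. "cardiology appointment"
  keywords: string[];      // a few terms the interpreter expects to come up
  displayMode: DisplayPreference;
}

interface InterpreterProfile {
  knownWords: Set<string>; // grows as the interpreter removes predicted words they know
}

// Placeholder for whatever model or service generates candidate vocabulary.
declare function predictVocabulary(setup: SessionSetup): string[];

// Build the list the interpreter reviews: predictions minus words already marked as known.
function buildPredictionList(setup: SessionSetup, profile: InterpreterProfile): string[] {
  return predictVocabulary(setup).filter(
    (word) => !profile.knownWords.has(word.toLowerCase())
  );
}

// When the interpreter removes a word from the list, record it as known
// so future predictions better match their vocabulary level.
function markAsKnown(profile: InterpreterProfile, word: string): void {
  profile.knownWords.add(word.toLowerCase());
}
```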
Interpreters are a vital part of accessibility and inclusion for deaf and hard of hearing individuals. However, interpretation is a very stressful job requiring a lot of experience, knowledge, and energy. It is especially difficult for novice interpreters who start out interpreting an unfamiliar topic with terms they do not know. Because of this, there are very few interpreters despite the high demand.
After doing secondary research on the role of interpreters and deaf culture, we wondered…
For this entire project, we worked with...
Direct observation at a live event with interpreters
Our team attended our university's presidential address to see how interpreters work at a live event. We observed and documented how they interacted with the speaker, their partner interpreter, the deaf/hard-of-hearing section, and the rest of the audience. Afterwards, we spoke with one of the interpreters and one deaf individual to understand why they did certain things and how they thought it went.
Semi-structured Interviews
We conducted semi-structured interviews with 12 participants to find answers to our research questions. After setting up an interview protocol and a list of questions, we reached out to many ASL interpreters in the greater Seattle area and asked about their experiences and challenges as interpreters. This method gave us most of our data and insights.
Card sorting probe activity
For some of the interpreters we interviewed, we also conducted a card sorting and writing activity. It asked the participant to come up with a superpower that would solve, or help them with, a challenge in a specific scenario or context from their job, and then explain why it would help. This let us identify specific challenges and wants within those scenarios and contexts of interpreting.
For the first few rounds, we provided the scenario through a context card (a description or image). In the last rounds, we had the interpreter come up with their own context card in case we had missed any important scenarios they had in mind.
Affinity Diagramming
After each interview or activity, we reviewed our notes, transcripts, and recordings and "extracted" qualitative data points (quotes and observations) by writing them on sticky notes. By posting and grouping all the accumulated sticky notes on large boards, we were able to identify common themes and turn them into actionable insights.
Journey Mapping
From our time working with our participants, we gained a good sense of the interpreter's process. We created a journey map to visualize interpreters' interactions and procedures and to highlight touchpoints and pain points in their process.
We divided their process into three sections: pre-interpretation, during interpretation, and post-interpretation. With our limited time and research, we decided to narrow our focus to pre-interpretation and during interpretation since those areas had more pain points.
Keeping our principles in mind, we came up with our how-might-we (HMW) statement:
How might we assist interpreters by equipping them with information they need for their interpretation sessions?
We then ideated without keeping business constraints or feasibility in mind, since we wanted a full range of ideas to explore. After several ideation sprints, our team narrowed down to these six ideas and fleshed them out.
Of those six, we ultimately decided to explore idea 5 because we thought it had the most potential to assist interpreters during their sessions in a meaningful way despite the speculative nature of the AR and AI technology.
What we did
We came up with three different variations of our idea 5 concept, without the dictionary feature, and tested them with five participants.
To do this, we mimicked several simultaneous interpretation sessions through the Wizard of Oz method and role-playing. We used a laptop with a prepared slide deck to act as the AR "interface," placing it in the participant's line of sight to simulate what the interpreter would see when using the AR technology. Cecilia controlled the "interface," I acted as the deaf individual (whom the interpreter looks at and signs to), and Winnie acted as the speaker by reading prepared paragraphs out loud. The participant would then sign what Winnie was reading to me (the deaf individual) while they had the aid of the interface.
We observed and video-recorded how they signed and how they reacted to the different variations of visual aids. After each paragraph was read, we discussed it with the participant to gauge which parts of the visual aids were helpful and which were not.
What we learned
Design Decisions 1
Interpreters can select the topic and the audience's language level to help the AI know which words to highlight.
Interpreters can keep track of the words used in the conversation and see their spelling in case they need to fingerspell.
Interpreters who are working alone on an unfamiliar subject or topic can benefit, while experts may not need it.
Interpreters can select the topic and familiarity level to help the AI know which words to highlight.
Any transcript data that is stored will be inaccessible and automatically wiped 20 minutes after a session in order to protect people's privacy and rights.
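A minimal sketch of how that wipe could be implemented, assuming in-memory storage and a simple timer; the only detail taken from the design itself is the 20-minute window.

```typescript
// Hypothetical sketch of the 20-minute wipe: transcripts are held only in memory
// and deleted on a timer after the session ends. The storage shape and timer
// approach are assumptions for illustration.

const RETENTION_MS = 20 * 60 * 1000; // 20 minutes after the session ends

const transcripts = new Map<string, string[]>(); // sessionId -> transcript lines

function endSession(sessionId: string): void {
  // Once the session ends, schedule the transcript for automatic deletion.
  setTimeout(() => transcripts.delete(sessionId), RETENTION_MS);
}
```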
AR technology that assists novice interpreters with imagery and captioning to help deliver successful interpretations
Highlights Terminology & Provides Corresponding Imagery
SignSavvy helps interpreters understand unfamiliar terms during an interpretation session by analyzing the conversation and providing visual captioning with difficult terms highlighted. For each term it highlights, it shows a corresponding image to give the interpreter a better understanding.
Set Preferences
Before interpreters start their session, they can input preferences and contextual information about the session to help the artificial intelligence (AI) predict and determine which images to show. With machine learning that collects data on the vocabulary being used, SignSavvy can predict the direction of the conversation as it builds and show images that are even more helpful to the interpreter.
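One way this "prediction as the conversation builds" could be approximated is with a running count of terms heard so far, used to re-rank which candidate terms to prepare imagery for next. This is an illustrative sketch under that assumption, not the actual machine learning approach.

```typescript
// Hypothetical sketch: count terms heard so far and use the counts to re-rank
// candidate terms, so the most likely upcoming terms have imagery ready first.
// Purely illustrative; not the real model.

const termCounts = new Map<string, number>();

function observeUtterance(utterance: string): void {
  for (const word of utterance.toLowerCase().split(/\W+/).filter(Boolean)) {
    termCounts.set(word, (termCounts.get(word) ?? 0) + 1);
  }
}

// Rank candidate terms by how often they have already appeared in the conversation.
function rankCandidates(candidates: string[]): string[] {
  return [...candidates].sort(
    (a, b) => (termCounts.get(b.toLowerCase()) ?? 0) - (termCounts.get(a.toLowerCase()) ?? 0)
  );
}
```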
Storyboard
What we did
We presented our speculative design concept and storyboard to five participants to get feedback. This was the first time we showed and explained our design to participants. We wanted to know whether this technology is something interpreters could see themselves using in the future.
What we learned
Initially, we chose augmented reality because it is unobtrusive. Although participants thought it was a cool and interesting concept, they couldn't really imagine using AR technology in their jobs today. During testing, many participants expressed the concern that AR technology feels out of touch with their everyday lives.
Participants also reiterated that receiving information about the upcoming session is very important for interpreters to be better prepared.
In addition to images of unfamiliar terms, synonyms can also help interpreters quickly understand what a term means in the context of a conversation.
Some participants had trouble entering their familiarity with the topic in the preference settings, since there is no set standard for deciding whether they are novice, general, or expert. It also wasn't clear how entering these preferences would actually lead to a better image or synonym.
Design Decisions 2
AR technology feels foreign and out of touch for interpreters to use today, but they are always using their phones to check email, manage their schedule, and google terms. An app can fit easily into their workflow.
With our transition from an AR interface to a physical device, we realized that even a little captioning would be too much for the interpreter to read on a small screen.
Instead of solely focusing on assisting interpreters during the session, we also want to help them prepare before the session and learn from previous sessions afterwards.
Previously we had two display modes (word only, and word + image). Interpreters have varying preferences for how contextual information is displayed depending on the word, topic, situation, and the people they are interpreting for. Showing synonyms instead of an image can be helpful in certain situations.
Having users enter keywords relating to the subject/topic of the session beforehand will help the AI better understand the type of conversation that might happen and provide better images and synonyms.
An app that assists novice ASL interpreters with imagery and text during simultaneous interpretation
User Scenario Storyboard
Sketches and Wireframes
What we did
We conducted in-depth usability tests with two interpreter participants. We made an interactive prototype in Figma and ran it on an iPhone for participants to use. We asked participants to think aloud so that we knew their thoughts, concerns, and frustrations while interacting with the prototype.
The two participants were new to our project. They were thrilled by the idea of an app designed for interpreters since there are currently no products like it available. They gave us detailed feedback on the interface and provided fresh perspectives on the app's structure and functionality to make it fit in better with their current workflow.
What we learned
Design Decisions 3
Deaf people, who are often used to lacking full access to information, might want to be included in seeing the visual assistance. Mirroring or casting to a bigger screen in the room can be helpful if it is in their line of sight.
When an off-interpreter is using the app, they can switch to a list view that shows a live list of the highlighted words from that session.
Interpreters can choose which words they want to show up again in future sessions and which words to save to their vocab section.
After finishing this project, I realized there was a blind spot in our design. Currently, the interaction model requires the interpreter to create a session, save it, click back into it, and then start that session. The interpreter cannot go into live in-session mode unless they have created an upcoming session beforehand. What would the interaction be like for interpreters who forgot or didn't have time to set up a session and want to go right into using the real-time assistance? Although this wouldn't be the typical use case, a good design should also work for the edge cases. I believe interpreters would appreciate the freedom to go straight into in-session mode without having to create a session first. I can also see how interpreters might think this is just a scheduling app if the main feature (in-session assistance) isn't available from the first screen they see.
If I were to continue with this project, I would add a way for the user to go straight into in-session mode and fill in the job information afterwards. Ideally, the interpreter would create a session as soon as they receive a job and prepare before it, but that doesn't always happen.
In every project that has the potential to affect people, I believe that as designers we carry the responsibility to question and carefully consider the implications and potential problems/unintended outcomes that might arise from the design. These are a few questions I came up with about how SignSavvy might affect people.
I do not know the answers to these questions yet. Obviously we weren't able to test at a large scale, but these questions are important to ask and think about. If this were to be developed and released into the real world, we would need to find answers to them.
Before this project, I thought design research was valuable, but only to a certain extent. Now I have learned that research should always happen if the time and resources are available. Each time we went out and interviewed or tested, we learned something new that changed our direction. Diligent research helps us understand not only the people we are designing for, but also the thoughts, attitudes, and routines that drive how they interact with the world.
Another takeaway is that asking the right questions early in the design process is crucial, because those questions lead to valuable answers that act as the foundation the design is built on.
The most valuable takeaway from this project is not coming up with a cool product or design. It is how much I was able to learn from diving into an unfamiliar area: deaf culture, sign language, and interpreting. I got to learn how others navigate and interact with the world differently. I also grew to love interpreters, not just for how well they handle a difficult job, but for their passion and dedication to accessibility and inclusion for DHH people. Because I learned more about deaf culture and DHH people, I have the perspective to look into their world and be a more knowledgeable and empathetic designer and person.