EarShot is an app that helps people with hearing disabilities transcribe audio input and provides voice-over for transcriptions.
What was the problem?
Existing apps hid quick audio transcription behind paywalls, forcing people with hearing disabilities to pay for a feature they rely on in demanding social situations.
Goals:
To provide a Minimum Viable Product that gives users immediate transcriptions and voice-overs
Customisable volume levels for easier use of the voice-over feature
Quick download of, and access to, saved transcriptions
My role:
UX Designer
User Researcher
UI Designer
Branding and logo design
This was a group project with 4 other students, who took on the roles of Marketing Manager, Lead Technology Officers and Ethics Manager.
Constraints:
Time: 2 weeks for the entire ideation, research, design and testing of the product, as part of a group project on Artificial Intelligence.
Trade-offs:
The app can only be used offline or on private Wi-Fi connections, to protect user data and privacy.
The app is not a replacement for hearing aids: it was developed for an NGO to provide a free alternative to young adults who cannot afford, or do not qualify for, hearing aids.
Summary:
Research:
A competitive analysis showed us the market criteria and design affordances: what makes a design feasible, and what are its advantages and disadvantages?
This led to user personas: how would users use existing apps, and why do they choose not to?
Ideation:
Using the user personas, we translated the user journey into wireframes.
Low-fidelity prototypes were then tested with users.
Final Edits:
Based on user testing of the mid-fidelity prototypes, we compare the changes made to the final screens.
Research
1. Competitive Analysis
By performing a competitive analysis on 3 transcription apps (Noted, Otter and SmartScribe), we discovered one important disadvantage: key features like dictation and downloading transcripts were only available to paid subscribers.
2. User Personas:
The disadvantages from the above competitive analysis led us to formulate a user persona.
Ideation:
Wireframes
Usability Testing with low-fidelity screens
Edits made
1. Wireframes:
Make transcription its own functional page, with settings accessible when the user has time; this addressed one of the pain points identified in the user persona.
Features could be customisable: font size, per-ear volume when using headphones, translations, etc.
This led to the following prototype, which we tested with 5 users.
2. Usability testing with low-fidelity prototypes:
The following video is a screen recording of how our users navigated through our prototype. Using Figma’s prototyping features, we sent the prototype link to 5 users and asked them to record their movements through the app.
The users were then asked to fill in a form rating their experience of using the prototype:
All 5 users found it “easy to access” the transcribing page, as well as the navigation to downloads.
However, 4 users found the customisation features in “Transcribing Settings” and “Audio Settings” “not easy” to use; when asked for specifics, they cited repeated commands.
3 users found the visual design “not great”, 1 user found it “okay”, and 1 abstained.
3. Edits made to final screens:
After conducting the usability tests, we understood that the user journey worked, but the overall visual design needed improvement. Additionally, the wording of commands and instructions needed to be more precise. Below are the changes made to the screens:
Final Product:
Hear me out:
This was my first ever UX project, and a chance to apply everything I had learnt from LinkedIn, Medium blogs and YouTube videos. From the research, design and testing, to understanding how the transcription would work and the technology and AI behind it, to the ethical implications of downloadable transcriptions, I learnt something from every member of my team, and from myself.