November 24

  • Debriefed the 2 participants for weekly reminder testing
  • Conducted 1 user interview
  • Continued working on flows
  • Started final diagrams + presentation slides
  • Started wireframe flows
  • Defined product: multi-modal conversational digital assistant

This week was productive and clarifying, but a bit emotionally rough.

On Monday while volunteering at the senior centre, one of the seniors I befriended told me what a rough week he'd had: multiple trips to the ER, many health issues, and problems with medical services, among other things. He was in a very tough spot. I felt so helpless listening, and that's a feeling I hate. All I could do was lend an ear; I told him he could tell me anything and I would listen.

On Wednesday, I debriefed one of my two weekly-reminders participants. When she picked up the phone, she was crying. She had had an extremely rough week as well, eerily similar to the gentleman's at CEI: she, too, had made multiple trips to the ER with very little assistance. I felt bad debriefing the test, even though she insisted it was fine and told me to go ahead with my questions. Afterward, I listened to her cry and tell me about her situation. She told me she was grateful for me; I'm very thankful that my presence and words brought her comfort, even if they weren't much practical help.

These two experiences, though very tough, were valuable for me. They got me thinking about my pursuit of a career in healthcare, and the emotional situations I would face. I will need to learn to empathize with the people I'm designing for without 1. being so overcome with distress that it affects me and my ability to work, or 2. becoming desensitized to the problems people are facing. Even though I felt very low for a few days, I feel even more determined to pursue this kind of work.


My biggest step forward this week was defining my product as a multi-modal, conversational digital assistant. Gretchen used the word "multi-modal", which is wonderful not only for describing what I'm making, but also for guiding my next steps. Previously I was in a grey space, unsure of what exactly the visual interface of this assistant would look like.

From my testing and previous research, I decided to move forward with creating a purely conversational digital assistant. Consider current assistants (Google Assistant, Siri, Amazon Alexa): only their voice interfaces are conversational. Their visual interfaces still rely on digital conventions that don't exist in the physical world (e.g. card layouts, digital UI components), and the user needs to navigate and perform actions on this UI to actually get things done. All of that takes time to learn, so why not make the visual interface a conversation as well?

Ultimately, my assistant would be multi-modal. The VUI is purely conversational, and the visual interface will be chatbot-like: the user has a text conversation with the assistant (they can also talk into the mic), which can lead to richer multimedia information. The goal is to make performing actions accessible to people who have hearing loss (for whom a VUI won't work) or lower tech literacy.

An example I captured from my conversation with Gretchen


Another simple example: to pull up an event on a typical visual interface, a user needs to navigate to the calendar app > select the date of the event > select the event itself to view details. With my assistant, they can simply text, "Show me 'x' event on 'y' day," and the assistant will send them the event details. Retrieving and inputting information becomes very natural. This visual interface would exist on mobile, tablet, and desktop.
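To make the idea concrete, here's a minimal sketch of that single-message retrieval flow. Everything here is hypothetical (the event data, the phrasing pattern, the function names); a real assistant would use proper natural-language understanding rather than a regex, but the shape of the interaction is the same: one text in, event details out, no menus to navigate.

```python
import re

# Hypothetical stored events, keyed by (event name, day).
EVENTS = {
    ("doctor's appointment", "tuesday"): "Dr. Lee, Tuesday 2:00 PM, Room 304",
}

def handle_message(text):
    """Very naive intent parser for messages like 'Show me <event> on <day>'."""
    match = re.match(r"show me (.+) on (\w+)", text.strip(), re.IGNORECASE)
    if not match:
        return "Sorry, I didn't catch that."
    event, day = match.group(1).lower(), match.group(2).lower()
    details = EVENTS.get((event, day))
    return details or f"I couldn't find {event} on {day}."
```

For example, texting "Show me doctor's appointment on Tuesday" would come back with the appointment details in a single reply, which is the whole point of the conversational visual interface.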

With that being said, I’ve started doing some wireframes to demonstrate a scenario of handling a doctor’s appointment.


I'm also continuing to work on my subscription flows, and I've started my finalized diagrams for my presentation. I'm thinking about how to tell my story and what my demo will look like. I will most likely create a video that showcases the scenarios using sketches and wireframe flows (rather than actual actors; I think that can wait until next semester) to demonstrate my concept.

I also conducted an interview with another senior to validate my concept and learn more about how they are handling reminders and tasks.

This coming week, I am traveling to India for a friend’s wedding! I probably won’t be able to get as much work done, but I’ll aim for the following:

  • Finish wireframe mockups with participant feedback in mind
  • Continue working on final diagrams/presentation
