RTE Player Redesign: Blog Four

Áine O'Neill
5 min read · Jan 24, 2021
Learning outcomes for this blog

Usability Testing Procedure

Saturn are satisfied with the rigour of our usability testing process, so our method is discussed here in some detail. Each team member tested the prototype with two participants (10 in total), aged 25–34. Because of the constraints of virtual testing, we recruited friends and colleagues whom we already knew. Swapping participants between team members, so that each of us tested strangers, would have eliminated that familiarity, and could be incorporated into future rounds of testing. In the meantime, we all followed the same script to ensure consistent testing and a straightforward synthesis of results.

We used the qualitative method of remote conceptual testing to conduct our usability tests. Participants were asked to complete five tasks, each sampling a different section of our app, with the prototype shared via a Figma link. We also aimed to try out different usability testing platforms; between us we used Lookback, Loop11 and Zoom. Lookback was excellent for our trial usability test, conducted among team members to quickly catch mistakes in our tasks and scripts, and it allowed the whole team to tune in and watch together. However, its download restrictions and session limits complicated things, so Zoom would be most appropriate for future rounds of testing. The prototype had physical limitations (as all prototypes do) which affected some user feedback, but users generally suspended disbelief, treating it as a working app and giving honest opinions.

We used the System Usability Scale (SUS), a quantitative method, at the end of each test. We scored highly at 89.2, well above the commonly cited average of 68, indicating strong usability. But the SUS didn't help us target specific issues in the way our qualitative feedback did, so it was vital that we triangulated and learned from both.
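For readers unfamiliar with how a SUS score is derived: each of the ten 5-point Likert items is converted to a 0–4 contribution (odd items score response − 1, even items score 5 − response), and the total is multiplied by 2.5 to give a 0–100 score. A minimal sketch of that arithmetic (the sample responses below are hypothetical, not our participants' actual data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions (0-40) are scaled by 2.5 onto a 0-100 scale.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each from 1 to 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical responses for one participant (items 1-10):
print(sus_score([5, 1, 5, 2, 4, 1, 4, 1, 5, 1]))  # 92.5
```

A team score like ours is then just the mean of each participant's individual SUS score.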

Our note-taking went through stages of collaborative organisation on Miro. Each participant was assigned a colour, and comments relating to each task were captured on sticky notes, which were then grouped. This allowed us to quickly see common thoughts across participants, outliers, and interesting ideas for further iteration (placed on a separate board). The system was flexible and allowed successive rounds of organisation to bring out our key findings.


Overall, the feedback was very positive and helpful, and users highlighted blind spots we could iterate better solutions around. It was interesting to see different people's preferences for navigating and choosing a show to watch, depending on their interests and viewing habits. Some liked to browse and found the Pick for Me features exciting, whereas ratings played a stronger role in decision-making for other users. Some comments sparked ideas and opportunities to expand, such as two users mentioning their preference for seeing faces while chatting with friends. Time was limited at this point, so Saturn created an Effort/Impact Matrix to prioritise the most impactful changes we could make quickly.

(fig.1) Effort Impact Matrix
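An Effort/Impact Matrix can be thought of as a quadrant sort: score each candidate change on effort and impact, then tackle low-effort, high-impact "quick wins" first. A sketch of that logic, using some of the changes from our testing with purely illustrative scores (the numbers are hypothetical, not Saturn's actual ratings):

```python
# Hypothetical (effort, impact) ratings on a 1-5 scale.
changes = {
    "Rename 'Random Pick' to 'Smart Pick'": (1, 4),
    "Add public/private slider to chat": (2, 5),
    "Enable chat on all shows": (3, 4),
    "Redesign rating system": (4, 3),
}

def quadrant(effort, impact, midpoint=3):
    """Classify a change into one of the four Effort/Impact quadrants."""
    if impact >= midpoint:
        return "quick win" if effort < midpoint else "major project"
    return "fill-in" if effort < midpoint else "thankless task"

# Prioritise: highest impact first, lowest effort breaking ties.
ranked = sorted(changes.items(), key=lambda kv: (-kv[1][1], kv[1][0]))
for name, (effort, impact) in ranked:
    print(f"{quadrant(effort, impact):14}  {name}")
```

Changes landing in the "quick win" quadrant are the ones worth making before the next round of testing.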

Our rating system needed more constraint to direct and focus users on a single action. The name ‘Random Pick’ made users doubt the quality of the targeted content it would display, so it was renamed ‘Smart Pick’. There was general enthusiasm for the ability to chat with friends, but confusion over its two sides: public chatting and private chatting. This was solved by strengthening the visibility and affordance of a ‘public/private’ slider button. We also enabled the chat feature in all shows, not just in the Live tab.

Although high levels of preparation and rigour went into organising a professional usability test, we would have benefitted from more guerrilla testing of earlier wireframes. We did test features quickly with family members, who provided useful immediate feedback, but more on-the-fly testing would have teased out issues before our first usability test.



(fig.2) Summary of Design Refinement

Consent Form and Tasks (sent to participant)

(fig.3/4) Consent form and Tasks, all stored securely in our One Drive

Test Script with Note Taking Stickies

(fig.5A) Test Script with Note System
(fig.5B) Sample of Notes (see OneDrive for full)

SUS Scale

(fig.6) SUS questions and answers

Conducting the Tests

(fig.7) Figma Link
(fig.8) Lookback Session with Participant

Key Findings and SUS Results

Quantitative Findings