
First-time user experience

 

Role

Researcher

Summary

A description of the step-by-step process our team used to test and improve the first-time user experience (FTUE) of the mobile game Bubble Twins. Our main tools were interviews, moderated playtests and questionnaires. The knowledge gained from these tests allowed us to double the time players spent inside the game during their first play sessions, increase engagement and smooth out the learning curve.

 
 

Team & Testers

2 researchers; testers: children, 11-14 years old, male & female


first-time user experience

First-time user experience (FTUE) refers to the initial stages of using a piece of software. It sets the stage for how the user will experience the product down the line. Read more on Wikipedia >

Approach

While long-term engagement is heavily influenced by a player's tastes in specific genres and types of games, FTUE engagement is far less so. In free-to-play games, the FTUE is in most cases a fun and rewarding experience, with fast, easy progress and plenty of gifts and bonuses. If the progression, pacing and learning curve are tuned right, it can be engaging and fun for players with very different tastes. So when we received negative feedback about our game from a first-time player, we assumed it was a game problem, not a player-preference problem.

Negative feedback during FTUE means the problem is in the experience, not in a user’s preferences

Research Plan:

Meet & Assess

At this stage we met the testers and put them at ease with friendly small talk or a joke. Then we explained what would happen during the playtest, so they knew what to expect.

We also held a short, informal assessment interview to get a general sense of who we were dealing with: each person's likes, preferences, motivations and so on. We recorded which games the testers played, along with their favourite platforms, genres and titles. These records would later help us interpret the feedback.

Playtesting & Observation (Round 1)

After that we gave our testers time to play through the first 70% of the levels prepared for testing. While they played, we observed and noted their behaviour, actions, reactions, and verbal and non-verbal feedback. The important part was to stay uninvolved and avoid commenting on or reacting to a tester's play. This stage was our primary source of information, so we tried to notice and document every detail that seemed important.

Choice to Continue

After a tester completed all of the round 1 levels, we would casually ask whether they wanted to stop playing and talk about the game, or have a couple of extra minutes to play. I explain this stage in more detail below.

Playtesting & Observation (Round 2)

If a tester chose to continue playing, we gave them time to complete the remaining 30% of the levels. The rules for this stage were the same: observe and avoid interrupting. The only difference was that all the notes we had taken about that tester would now be marked as feedback from an engaged tester.

Questionnaire

Once a tester was done playing, we gave them a questionnaire to reflect on their experience. We also discussed anything that had caught our eye during their play, along with any questions or extra feedback the tester had for us.


questionnaire breakdown


True / False Rating Questions

True/false rating questions gave us a general summary of a tester's experience. We chose the statements based on what we assumed might be the game's problem spots, so each response let us check whether our assumptions were right.

For example: "I sometimes did not understand what to do" or "The game is too easy"

Level Rating Questions

Level rating questions consisted of 3 rows of 5 level screenshots each, and asked the tester to mark each level either "Enjoyed" or "Not Enjoyed". These questions showed us which types of levels and puzzles people enjoyed the most and the least, so we could focus on the most enjoyable mechanics.
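
To make the tallying concrete, here is a minimal Python sketch of how such marks could be aggregated into a ranking. It is purely illustrative: the response data and level ids are invented, and this is not our actual tooling or results.

    from collections import Counter

    # Hypothetical data: one dict per tester, mapping level id -> mark.
    responses = [
        {1: "Enjoyed", 2: "Not Enjoyed", 3: "Enjoyed"},
        {1: "Enjoyed", 2: "Enjoyed", 3: "Not Enjoyed"},
        {1: "Enjoyed", 2: "Not Enjoyed", 3: "Not Enjoyed"},
    ]

    # Tally "Enjoyed" marks per level, then rank levels by popularity.
    enjoyed = Counter()
    for response in responses:
        for level, mark in response.items():
            if mark == "Enjoyed":
                enjoyed[level] += 1

    for level, count in enjoyed.most_common():
        print(f"Level {level}: enjoyed by {count} of {len(responses)} testers")

Even a tally this simple makes it obvious which mechanics to double down on and which to cut.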

Perception Questions

Questions about what testers thought the game was about, what they imagined the game world to be like, and who the characters were. We were genuinely curious how people perceived and understood our game, and whether their ideas matched ours.

For example: "Who are the characters?" or "Where does the game take place?"

Comments

This section let the testers share their own thoughts or comments, if they had any.


'CHOICE TO CONTINUE' Explanation

As an indie team, we did not get many chances to run proper playtests and user tests. So when such an opportunity came, we wanted to make the most of it and test not only how people interacted with our game, but also how genuinely engaging it was. To do that, we created a test we called "choice to continue".

"Choice to continue" was meant to show how genuinely engaging the game was

The idea was simple: create a situation in which a tester was free to stop playing our game. We assumed this would show us whether the tester genuinely enjoyed the experience or was just playing out of politeness. The process went like this: at a specific point during a playtest, we casually mentioned that, if the tester wanted, it was fine to stop playing and spend a couple of minutes talking about the experience instead. We assumed that if the game was not engaging enough, a tester would choose to stop, and that if they liked the game, they would prefer to keep playing, since filling in a questionnaire is less fun.


Factors tested

Engagement

Our main tool for understanding engagement was the "choice to continue" test. If a player decided not to proceed with the game, we would assume she was not engaged and evaluate her decision to stop playing against the other feedback she provided. If a tester decided to continue playing, we would evaluate her non-verbal feedback: was she excited to keep playing, or did she simply not want to appear rude?

We tried to create the right circumstances for the children to be honest about their feelings towards the game. Since kids are generally less constrained by social and moral conventions, we assumed they were less likely to keep doing something they found boring or unengaging. So from the very beginning we worked to create a relaxed, comfortable atmosphere at our table in order to encourage honest opinions and actions.

Learning Curve

We had a set of 15 levels in a sequence we assumed was optimal for gradually learning the game, and the playtests had to confirm or disprove this assumption. The game started out really simple; as it progressed, we introduced new game elements, like bouncing balls or spikes, to diversify the gameplay. We needed to understand at what pace to introduce these new elements so as not to overload players with information while still keeping the game interesting. Getting this aspect of the game right is essential to building an engaging FTUE: if we messed it up, we risked losing a huge share of our players just a few minutes into the game.

To understand how well we were doing here, we relied mostly on the post-session interviews and the questionnaire answers. Observing how testers played was another good source of this information: how many attempts they needed to complete a level, and which problems they faced most often. The questions players asked while playing also helped us figure out where the problems lay and what could be done about them.
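
Here is a minimal sketch of how the "attempts per level" observations could be aggregated to spot pacing problems; the numbers and the flagging threshold of 3 average attempts are invented for illustration and were not part of our actual process:

    # Hypothetical observation log: attempts each tester needed per level.
    attempts = {
        1: [1, 1, 1],
        2: [1, 2, 1],
        3: [4, 5, 3],  # an average this high hints at a difficulty spike
        4: [2, 1, 2],
    }

    # Average the attempts for each level and flag suspicious jumps.
    for level in sorted(attempts):
        avg = sum(attempts[level]) / len(attempts[level])
        flag = "  <- possible difficulty spike" if avg >= 3 else ""
        print(f"Level {level}: {avg:.1f} avg attempts{flag}")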

Level Design

Every level featured a different mechanic and posed a different kind of challenge, and we needed to know which ones people preferred. We had also used different approaches to building the puzzles and wanted to see which ones people enjoyed more and which they found boring. This understanding was meant to help us create more levels that people would like and drop the techniques that produced boring level designs. A good grasp of players' preferences in this area would also let us better structure the onboarding levels, increasing the likelihood of a second play session and thus improving retention.

We planned to learn all of this mostly through the questionnaires and verbal feedback. As mentioned above, one section of the questionnaire was built from screenshots of different levels for the testers to mark as enjoyed or not. With this information we hoped to understand our players' preferences and give them more of what they liked.

Rules & Feedback

The version we were testing had no tutorial. Our game was simple, and we wanted to embrace that: instead of building a full-scale tutorial, we would explain only specific details and let players discover the rest of the rules through play. Besides, we did not yet know which aspects of the game players found hard to understand.

We planned to learn all of this solely from the questions and feedback players provided during the play sessions. Following playtesting best practice, we kept silent during the sessions, avoiding helping, explaining or commenting on anything our testers were doing. When a question came up, we took note of it and later analyzed it together with all the other questions asked, building a map of everything that needed extra attention and explanation.

Perception

This part was more for fun. We wanted to see how the kids interpreted the game's visual design and compare their ideas with what we had in mind. We planned to learn this purely through conversation, during the fun part of our discussions.


RESULT

What didn't work:

  1. Too much feedback to record manually. We tried to write down all of the feedback; instead, we should have made an audio or video recording of each play session.

  2. Conversations with the kids about their experience were not as helpful as we had hoped, since many of them could not, or did not care enough to, articulate the details of what they liked or disliked.

What did work:

  1. Combining the various feedback, we were able to reorder the existing levels to improve the game flow and learning curve.

  2. The feedback also helped us develop a pattern to follow when introducing new game elements.

  3. We identified the least popular puzzles and removed or reworked them.

  4. We made a list of game aspects that needed additional explanation or a tutorial screen.

  5. We learned which elements required additional visual signifiers or explanations.

  6. The "choice to continue" test proved very effective: it yielded varied results, and we could see a correlation between a player's decision to continue, their verbal and non-verbal feedback, and the answers they gave in the questionnaire.

The research allowed us to double the time new players spent with the game during their first session

Impact

This research was immensely valuable, but since we were unable to run multiple playtests like this, we could not measure its impact numerically. Through general observation, however, we noticed that after the adjustments we made based on the test results, the game became "stickier" for first-time users: new players generally spent at least twice as long with the game during their first play session as earlier testers had. The game flow became smoother and more engaging, with the pacing and the learning curve creating a much more satisfying experience than in previous versions. The tests also helped us better understand our primary audience, adjust, and ship a much better product.