Saturday, September 1, 2012

Paper Reading #2: The Impact of Tutorials on Games of Varying Complexity



The University of Washington researched and published a paper on “The Impact of Tutorials on Games of Varying Complexity”, which can be found here: http://grail.cs.washington.edu/projects/game-abtesting/chi2012/chi2012.pdf. The researchers included Erik Andersen (Ph.D. student), Eleanor O’Rourke (Ph.D. student), Yun-En Liu (Ph.D. student), Richard Snider, Jeff Lowdermilk, David Truong, Seth Cooper (creative director of the Center for Game Science), and Zoran Popovic (research advisor and director of the Center for Game Science). Popovic has an Sc.B. with Honors in Computer Science from Brown University and an M.S. and Ph.D. in Computer Science from Carnegie Mellon University. He has held research positions at Sun Microsystems, the Justsystem Pittsburgh Research Center, and the University of California at Berkeley.
In this particular research, they wanted to discover how tutorials affect game learnability and player engagement, since tutorials are a crucial part of retaining new players. Their results suggested that if a game can be learned through experimentation, it is better not to invest in a tutorial. In this experiment, they examined four variables: tutorial presence, context-sensitivity, freedom, and availability of help. Tutorial presence is simply whether a game provides a tutorial or not. Context-sensitivity determines whether instructions are presented within the context of gameplay, at the moment they are relevant, or outside of it. The amount of freedom a tutorial allows can either frustrate the player or enhance the tutorial’s effectiveness. The last variable they considered was whether on-demand access to help or tutorials improves game play. They combined these variables into eight different experimental conditions.
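As a minimal, hypothetical sketch of how players could be assigned to one of those eight conditions in a large-scale anonymous experiment (the hash-based bucketing and the condition names below are my own assumptions, not the paper’s):

    import hashlib

    # Placeholder condition names; the paper's table defines the real eight.
    CONDITIONS = [
        "no_tutorial",
        "tutorial_context_sensitive",
        "tutorial_context_insensitive",
        "tutorial_low_freedom",
        "tutorial_high_freedom",
        "tutorial_with_on_demand_help",
        "tutorial_without_help",
        "tutorial_all_features",
    ]

    def assign_condition(player_id: str) -> str:
        """Deterministically map an anonymous player ID to one of the
        eight conditions, so a returning player sees the same version."""
        digest = hashlib.sha256(player_id.encode("utf-8")).hexdigest()
        return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

    print(assign_condition("anonymous-player-42"))

Deterministic assignment also means no per-player state has to be stored, which matters when participants are anonymous.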
The research group used three games of differing difficulty and genre: two with casual game play and one that was much more difficult. They collected data from the participants through large-scale anonymous experiments. This type of observation means there is no interaction with the participants, who do not realize that they are being observed. Instead, the researchers gathered numerical traces: how many levels were completed, the total length of time played, and the return rate. After looking at this data, they concluded that context-sensitivity can improve engagement, that tutorial freedom did not affect player behavior, and that on-demand help both harmed and helped player retention depending on the difficulty of the game.
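All of those conclusions rest on a handful of per-player numbers. A minimal sketch of that bookkeeping might look like the following (the field names are my own assumptions, not the paper’s instrumentation):

    from dataclasses import dataclass

    @dataclass
    class PlayerStats:
        # The three engagement measures named in the paper: levels
        # completed, total play time, and (via sessions) the return rate.
        levels_completed: int = 0
        total_seconds_played: float = 0.0
        sessions: int = 0

        @property
        def returned(self) -> bool:
            # A player "returns" if they start more than one session.
            return self.sessions > 1

    def summarize(players: list[PlayerStats]) -> dict:
        # Aggregate the three measures across all players in one condition.
        n = len(players)
        return {
            "avg_levels_completed": sum(p.levels_completed for p in players) / n,
            "avg_seconds_played": sum(p.total_seconds_played for p in players) / n,
            "return_rate": sum(p.returned for p in players) / n,
        }

    print(summarize([PlayerStats(3, 1200.0, 1), PlayerStats(10, 5400.0, 3)]))

Comparing summaries like these across the eight conditions is what yields conclusions of the kind described above.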
They evaluated their problem through quantitative, objective measures. Since they had no interaction with the actual participants, they could only go off of the numerical data recorded while the games were being played. This can be both good and bad. The one addition I would make to their evaluation is to ask the user to complete an optional survey when they close out of the game. They also tried to pick small, measurable variables out of the larger system.

There have been previous studies on tutorials. Some examples include:
  • The designer’s notebook: Eight ways to make a bad tutorial
  • Apple Guide: A case study in user-aided design of online help
  • Generating photo manipulation tutorials by demonstration
  • Stencils-based tutorials: Design and evaluation
  • Tutorials: Learning to play
  • Learning by design: Games as learning machines
  • A survey of software learnability: Metrics, methodologies and guidelines
  • A comparison of still, animated, or nonillustrated on-line help with written or spoken instructions in a graphical user interface
  • Practical guide to controlled experiments on the web: Listen to your customers not to the HiPPO
  • Sheepdog: Learning procedures for technical support
These papers cover context-sensitive versus context-insensitive help, how a tutorial looks, how experience helps the learning process, the pros and cons of having freedom in a tutorial, and whether a manual is better used on demand. Therefore, I don’t think this work is that novel. It seems like they brought several thoughts and ideas from other papers together and combined them into one study.
Trying to look at so many variables at once seems a little overwhelming. I think they would have gotten better data if they had broken the problem down and focused on one attribute at a time. Their method included too many variables that they had no control over. They don’t even know how accurate their data is, because they don’t know what type of participant was attracted to which game or their reasons for quitting. The researchers mentioned in their paper that, since they don’t know the demographics, it is possible that more patient people were drawn to the more difficult game, since it required them to take the time to download it. However, doing it this way allowed them to obtain a large number of participants (about 45,300).
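That scale is the method’s saving grace: with tens of thousands of players, even small differences in return rate between two conditions become statistically detectable. A minimal sketch of such a comparison, assuming a standard two-proportion z-test (the numbers are made up, not the paper’s data):

    import math

    def two_proportion_z(returned_a: int, n_a: int,
                         returned_b: int, n_b: int) -> float:
        # z-statistic for the difference in return rate between two
        # conditions; |z| > 1.96 is significant at the 5% level.
        p_a, p_b = returned_a / n_a, returned_b / n_b
        pooled = (returned_a + returned_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Illustrative numbers only: a 23% vs. 20% return rate.
    print(two_proportion_z(returned_a=1150, n_a=5000,
                           returned_b=1000, n_b=5000))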

