Poster Demo Day

Hi!

Here is my poster for the demo day on 21 May at 12:00 at the Computer Science Department in Heverlee.
Unfortunately, I won't be there to talk about my thesis in person, but the poster shows you the overall idea.

If you have any questions or comments, I would be glad to answer them!

Second Presentation 26/03/13

Here is the presentation I'm giving on Tuesday 26/03/13.


First evaluation of digital prototype

Evaluation of the digital prototype

Finally, it’s here! The first evaluation of the digital prototype!

Paper vs. Digital prototype

As you may remember, I made a paper prototype some time ago and evaluated it in an earlier blog post. The paper prototype already worked quite well, but some changes had to be made. I will explain the main differences here; nothing fundamental has changed, though. The screens of the digital prototype are shown in the scenarios below.

Record button

The record button in the paper prototype confused many test users. It was placed in the middle of the screen, where it also served as the location pointer. I have separated the location dot and the record button, as you can see in the figures.

Sound Battle locations

In the paper prototype, the battle locations weren't clear to most of the users: they mostly thought the locations represented other players. To improve this, an explanation of the sound battle is now shown on the first screen of the sound battle. The locations are also presented using the location markers of Google Maps, which are a lot more familiar.

Profile buttons and main screen

In the paper prototype, the profile and view map buttons were shown in the top bar at all times. This turned out to be rather useless, since nobody wants to check the profile while recording sound, and the evaluation showed that having these buttons in the top bar confused users. I moved them to the main screen in the digital prototype.

Set-up

The prototype can be used to execute two scenarios: the random record scenario and the sound battle scenario. The test users were asked to go through these scenarios in the same way as with the paper prototype; none of them had tested the paper prototype before. While executing the scenarios, the test user was asked to think out loud so I could note down any comments. I also sometimes explicitly asked whether everything on the screen was clear.

After executing these scenarios, I asked the test users to fill in an elaborate questionnaire on Google Drive. This questionnaire was divided into six parts:

  • personal information: age, whether the test user was a student and whether he or she had a smart phone;
  • smart phone information: if the test user had a smart phone, he or she had to answer questions about its OS;
  • game information: whether the test user plays a lot of games and knows any gamified applications;
  • fun information: whether the test user liked playing the scenarios and thinks the game is useful;
  • SUS questionnaire: the same questionnaire as used in the evaluation of the paper prototype, which generates a general usability score;
  • look and feel: did the game look good, were specific buttons clear, and is there room for improvement?

Five people took part in the evaluation. Two of them were Master's students in Computer Science, one studied Geography, one studied Informatics and one was already working. Their ages ranged from 18 to 27. Four of them had a smart phone: two ran Android and two had an iPhone. These four also played games on their phone at least occasionally.

All questions and results of the questionnaire can be viewed in a summary, which I will refer to in the rest of this post.

The evaluation

Random Record

Scenario

The random record scenario

The random record scenario was started by telling the test user this: ‘You are bored and you are thinking about the new application you have downloaded: the NoiseApp. You’d like to record sound where you are. How would you do it?’

The test user should then press the Random Record button on the main screen, after which the screen with the Google map appears. Pressing the record button starts the recording: the sound is then recorded (simulated) for 10 seconds while a progress bar is shown. When the recording is done, the points screen appears.

After I asked whether everything was clear, the test user was asked to return to the main screen by pressing the arrow button in the top left corner.

The test user then had to record sound again, but this time he or she had to cancel the recording before it finished, which is done by pressing anywhere outside the progress bar window.
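
As an aside, here is a rough sketch of how such a simulated, cancellable recording can be done in Android. This is my own minimal sketch, not the actual prototype code; the class and method names are hypothetical:

    import android.app.Activity;
    import android.app.ProgressDialog;
    import android.os.CountDownTimer;

    public class RandomRecordActivity extends Activity {

        // Hypothetical helper, called when the record button is pressed.
        private void startSimulatedRecording() {
            final ProgressDialog dialog = new ProgressDialog(this);
            dialog.setMessage("Recording...");
            dialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
            dialog.setMax(10);
            // Tapping outside the dialog cancels the recording, as in the scenario.
            dialog.setCanceledOnTouchOutside(true);
            dialog.show();

            // Simulate a 10-second recording, updating the progress bar every second.
            new CountDownTimer(10000, 1000) {
                public void onTick(long millisUntilFinished) {
                    dialog.setProgress(10 - (int) (millisUntilFinished / 1000));
                }

                public void onFinish() {
                    dialog.dismiss();
                    // ...continue to the points screen here.
                }
            }.start();
        }
    }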

Problems

What should I do?

There weren't any big problems. In the questionnaire, everyone said that they more or less knew what to do. While executing the scenario, though, it often wasn't clear that they had to push the record button; one user even thought the recording started immediately upon entering the screen with the Google map. In the next iteration, I will add an explanatory pop-up window like the one in the sound battle scenario, since the questionnaire showed that everybody clearly understood what to do there.

When can I start doing it?
The Android status bar.

Because the application only works when the GPS signal is fixed, a first push on the record button results in an error message: 'Wait for a fixed GPS signal'. Most people kept waiting even after the GPS icon in the Android status bar showed that the GPS had a fix (see the figure). A notification when the GPS gets a fix might be a good solution for this problem.
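
Such a notification can be sketched with the standard LocationManager API; the surrounding activity, the locationManager variable and the message text are assumptions of mine:

    import android.location.GpsStatus;
    import android.location.LocationManager;
    import android.widget.Toast;

    // Inside the map activity, with locationManager already set up for updates:
    locationManager.addGpsStatusListener(new GpsStatus.Listener() {
        @Override
        public void onGpsStatusChanged(int event) {
            if (event == GpsStatus.GPS_EVENT_FIRST_FIX) {
                // Tell the user that recording is possible from now on.
                Toast.makeText(getApplicationContext(),
                        "GPS fixed, you can record now!", Toast.LENGTH_SHORT).show();
            }
        }
    });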

Sound Battle

Scenario

The sound battle scenario.

Afterwards, I told them: ‘You can earn more points challenging someone else. How would you do that?’

The test user should then press the Sound Battle button. After I told them to choose a random player, an explanation of how the sound battle is played was presented. After pressing OK, the battle locations were shown with red markers. These locations were generated randomly on streets within a radius of 150m. When the test user closed in on a location, the marker changed color: orange when the location was less than 50m away, yellow when it was less than 10m away. Only when the marker was yellow could the recording be done, which again took 10 seconds; an error message was shown when the user tried to record while the marker was still red or orange.
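
The distance checks behind this coloring can be sketched as follows; the method name is hypothetical, and the thresholds are simply the ones from the scenario:

    import android.graphics.Color;
    import android.location.Location;

    // Pick a marker color from the distance between player and battle location.
    private int markerColorFor(Location player, Location battleLocation) {
        float metres = player.distanceTo(battleLocation);
        if (metres < 10f) return Color.YELLOW;           // close enough to record
        if (metres < 50f) return Color.rgb(255, 165, 0); // orange: getting close
        return Color.RED;                                // still too far away
    }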

When a location was recorded, the marker turned green (once the player moved, see the problems below), and he or she could continue recording the remaining battle locations. When all three locations were recorded, a pop-up appeared telling the user he or she had won their first sound battle and earned a badge. After pressing OK, the points screen appeared. I explicitly asked whether everything was clear.

Problems

Coloring of markers

Although the battle locations were clearly represented by the markers, their coloring confused some of the test users (as the questionnaire also shows). It wasn't clear to most users that yellow meant they were close enough to the location to record. It might be better to change the yellow to green, and to use a different marker for a location that has already been recorded, to show that it shouldn't be recorded anymore.

Recording of a location

When the recording of a location is done, the marker only turns green once the user has moved. This is due to the implementation: markers only change color when the location changes. I should update the colors when the recording is done, too.

Sound quality

Again, the meaning of 'sound quality' wasn't clear to most users. I should add an info button that explains how sound quality is assessed.

SUS questionnaire

Box-plot of the SUS questionnaire.

68% is an average score for the SUS questionnaire. As you can see in the boxplot, the application scored well above that: the mean score is 81%, which is actually the same as in the evaluation of the paper prototype. I hope the score will increase further once the application is finished.

Room for improvement

  • In the random record scenario, immediate feedback on the recorded sound might be helpful, for example showing the recorded sound level on the map;
  • in the sound battle, it would be more engaging to see who you are playing against.

Some remarks

  • Although only one test user deals with noise pollution regularly, they all thought the application makes sense;
  • while most of the users think the game might be fun once it's finished, they don't really find it challenging yet. This relates to the simple fun described in a previous blog post;
  • only one test user pressed a sound battle location marker, which showed the location's longitude and latitude and told the player to get closer to the location;
  • in the sound battle, only 60% think it's important that the battle locations are the same as those of the opponent;
  • in the sound battle, the number of locations and their spread was OK;
  • the recording of the sound didn't take too long.

That’s all folks! Any comments are welcome of course!

Android Experiences, Vol. II

Hey!

So, the last couple of weeks I've been implementing the digital prototype. For now, I've implemented two scenarios: the random record scenario and the sound battle scenario. The implementation takes way more time than I first anticipated, because I wanted to put a lot of functionality in it already, so that people testing the prototype get the feeling of a really working application. It also lessens the work on the final implementation. At least, that's what I'm going for…

Just so you know, I'm evaluating the two scenarios as we speak, and I will post the results this weekend.

9-patch drama

Maybe you remember from my earlier blog post that 9-patch is a way to create buttons in a beautiful and adaptable way. By adding black pixels to the top and left borders of the image, you tell Android to stretch the image only in a specific part, so the image doesn't get blurry when stretched. By adding black pixels to the right and bottom borders, you tell Android to place the text inside the area bounded by the pixel border. Beautiful, no?

No!

Foursquare-style buttons.

It's of course ideal if you just want a plain background with some text on the button, but such buttons are quite simple and not very interesting. What I wanted were buttons like the ones you can find in Foursquare, shown in the figure. They are simple and beautiful (in my opinion), so I made buttons like them, as you will see in the next blog post I'm going to write. When I added a nine-patch border, though, Android stretched the image in such a way that the icon wasn't centred. It took me a lot of time to test, double-check and redraw the images, to search the internet for solutions, and so on. I was finally ready to give up on the nine-patch button stuff.

I did make it work, though! The problem was somehow caused by using Button items with both a background image and a text on them. I found the solution in using an ImageButton item (which can only display an image) with a TextView item underneath it. A bit of a cheat, I admit, but it did the trick!
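
Built programmatically, the workaround looks roughly like this. It is only a sketch: in the real project this would sit in an XML layout, and the resource names are hypothetical:

    import android.view.Gravity;
    import android.widget.ImageButton;
    import android.widget.LinearLayout;
    import android.widget.TextView;

    // An ImageButton for the icon, with a TextView label underneath it.
    LinearLayout container = new LinearLayout(this);
    container.setOrientation(LinearLayout.VERTICAL);
    container.setGravity(Gravity.CENTER_HORIZONTAL);

    ImageButton button = new ImageButton(this);
    button.setBackgroundResource(R.drawable.button_background); // the 9-patch
    button.setImageResource(R.drawable.icon_record);            // icon stays centred

    TextView label = new TextView(this);
    label.setText("Record");

    container.addView(button);
    container.addView(label);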

So before you start nine-patching, watch out for these awkward problems you can encounter when using 9-patch images.

Street coordinates vs. open data

The next problem that took a lot of time was finding GPS coordinates of streets. For the sound battle scenario, I have to generate randomly chosen locations. These locations have to be on streets, not in inaccessible areas: I don't want people to get injured doing the craziest manoeuvres.

.shp to .geojson

I thought getting those coordinates would be as simple as asking Google to 'give me the coordinates of streets'. But no, it wasn't. It took me a while to find a way to extract this data from maps made by CloudMade, following a tutorial that explains how. First I had to download .shp files of Belgium. That file had to be opened with Quantum GIS, a program that reads this format. Of course, the map was big, and it took a while to load it and navigate to Leuven. Then I could select the part I was interested in: the city of Leuven, of course, but also a part of Heverlee, since I will do a lot of tests there. The selected data could then be exported to the geojson format, which looks like this:

{ "type": "Feature", "id": 338030, "properties": { "TYPE": "track", "NAME": "Kapeldreef", "ONEWAY": null, "LANES": null }, "geometry": { "type": "LineString", "coordinates": [ [ 4.6715047, 50.8623036 ], [ 4.6711241, 50.8620978 ], [ 4.6710358, 50.8619819 ], [ 4.6709153, 50.8617976 ], [ 4.6707745, 50.861593 ], [ 4.67051, 50.8615278 ], [ 4.6703555, 50.8615529 ], [ 4.6700007, 50.861678 ], [ 4.6697472, 50.8618367 ], [ 4.6694707, 50.8620126 ], [ 4.66937, 50.8620374 ], [ 4.669087, 50.8620334 ], [ 4.6688965, 50.8619793 ], [ 4.6687968, 50.8619598 ], [ 4.668738, 50.8619435 ], [ 4.668564, 50.8618987 ], [ 4.6682827, 50.8618474 ], [ 4.6680032, 50.8618438 ], [ 4.6675739, 50.8619097 ], [ 4.6672996, 50.861961 ], [ 4.6670349, 50.8619999 ], [ 4.6669009, 50.8620039 ], [ 4.6667334, 50.861968 ], [ 4.6665038, 50.8618511 ], [ 4.6662606, 50.861744 ] ] } }

I tried other formats too, but the tutorial is right: .geojson was the most usable. I thought SQLite would do a better job for me, but when I opened that file I couldn't find any coordinates! (How do they do that?)

.geojson to coordinates

OK, perfect, so we have the .geojson file and the coordinates are readable! There is probably a way to read geojson files in Java, but I couldn't find one immediately, and I didn't want to mess up my Android project. So I extracted the data I needed using a regular expression, which looked like this:

\{ "type": "Feature", "id": [0-9]*, "properties": \{ "TYPE": "[a-z]*", "NAME": ("[A-Za-z\s]*"|null), "ONEWAY": ("[A-Za-z\s]*"|null|"1"), "LANES": (null|[0-9]{1}\.0) \}, "geometry": \{ "type": "LineString", "coordinates": (\[( \[ (4\.[0-9]*\, 5[0-1]\.[0-9]*) \],)*( \[ (4\.[0-9]*\, 5[0-1]\.[0-9]*) \] )\]) \} \}

After deleting some special characters, which I couldn't get into the regular expression for some reason, I finally had the longitudes and latitudes of all the data.
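
Applying such an expression in Java looks roughly like this. This is a minimal sketch with a simplified pattern that only pulls out the coordinate pairs; the variable names are hypothetical:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Pull the "[ longitude, latitude ]" pairs out of one geojson Feature line.
    Pattern pair = Pattern.compile("\\[ (4\\.[0-9]*), (5[0-1]\\.[0-9]*) \\]");
    Matcher matcher = pair.matcher(geojsonLine);
    while (matcher.find()) {
        double longitude = Double.parseDouble(matcher.group(1));
        double latitude = Double.parseDouble(matcher.group(2));
        // ...store the coordinate pair for later use.
    }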

Big file to smaller files

This file held 9998 coordinates! I wrote a parser to read the file, so random NoiseLocations close to the player (within a radius of 200m) could be generated, but the Android application slowed down dramatically, obviously…

Then I wrote a parser that splits the coordinates into quadrants. That did the trick! The map of Leuven + Heverlee is now split into 35 quadrants. Every quadrant holds enough points to generate random locations in the vicinity of the player, and each quadrant file is quite small, which lowered the load time drastically.
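
The quadrant idea can be sketched as a simple grid index. The cell size and the class name below are hypothetical, not the exact values I used:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Bucket street coordinates into grid cells so only nearby points get loaded.
    class QuadrantIndex {
        static final double CELL = 0.01; // degrees per cell; a tuning choice
        Map<String, List<double[]>> cells = new HashMap<String, List<double[]>>();

        void add(double lon, double lat) {
            String key = (int) (lon / CELL) + ":" + (int) (lat / CELL);
            if (!cells.containsKey(key)) {
                cells.put(key, new ArrayList<double[]>());
            }
            cells.get(key).add(new double[] { lon, lat });
        }

        // Candidate points near the player; a full version would also check
        // the neighbouring cells when the player is close to a cell edge.
        List<double[]> nearby(double lon, double lat) {
            String key = (int) (lon / CELL) + ":" + (int) (lat / CELL);
            List<double[]> found = cells.get(key);
            return found != null ? found : new ArrayList<double[]>();
        }
    }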

But why?

I'm aware that this might seem a strange thing to do. The most convenient way to generate these random NoiseLocations would be to send a request to a server, which does all the hard work and sends the result back to the user in no time. True. But that might be for a later stage of my application. For now, it works!

Conclusion

I did work hard the last couple of weeks (this week alone I worked 41 hours, just on my thesis!). But finally I can do some tests and start working on the other scenarios. I hope those won't take so much time again!

Watch out for my next blog post this weekend, in which I will describe the evaluation of the first two scenarios!

Scientific Paper

Tomorrow, the first draft of the scientific paper I have to deliver is due. You can read this Scientific Paper.

Of course, the paper isn't finished yet. Note that, because I write my thesis in English, the scientific paper has to be in Dutch.

If you have any comments, I'm happy to hear them.

Since writing the paper took a while, I will try to finish the first scenarios of the digital prototype this week. The evaluation will be done next week, in parallel with the implementation of the third scenario.


Android experiences

Digital Prototype

As mentioned earlier, I am working on a digital prototype. Instead of building it with some random demo application, I am implementing it in Android right away. The main advantage of this is that I already get familiar with Android programming, which should shorten the implementation time of the final application drastically.

Android Developer Toolkit

For almost every programming language I use Eclipse to develop applications. So, handily for me, there is an Eclipse plugin for Android! It installed quite easily, too.

When creating a new project, you can set the minimal API requirements, the type of Android device the application is designed for, and so on. These properties are quite easy to understand. When I first plugged in my HTC One X (running Android 4.1), Eclipse connected to it immediately.

The toolkit provides a graphical editor, so you can directly see what your application is going to look like, which makes it easier to implement the design. It reminds me a bit of Dreamweaver, which is used to develop websites. Properties can be set using the property windows.

When you want to run the application, Eclipse sends it to the connected Android device, so you can try it out on your smart phone.

First Android application

To get familiar with Android, I first followed a (little) tutorial to develop a first application. It doesn't do more than print "Hello World" on the screen (of course). It showed me the basics of running an app you made, but that actually wasn't that informative, to be honest. I also don't like tutorials much, because they make you do things you can't reuse. So that's why I just jumped right in and started making the NoiseApp.

Activities

So, every screen you see in an Android application is an activity. It took me a while to get that, but it is actually quite logical. An activity can easily be compared with the scenarios I've presented in earlier posts. For example, the main screen of the application is one activity. Once a button is pressed, another activity is started; this activity could be recording random sounds. Once the record button is pressed and a sound is recorded, yet another activity starts to present the earned points.
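
In code, moving from one screen to the next is just starting another activity with an Intent. A small sketch, with hypothetical activity and button names:

    import android.content.Intent;
    import android.view.View;
    import android.widget.Button;

    // In the main activity: pressing a button starts the random record activity.
    Button recordButton = (Button) findViewById(R.id.random_record_button);
    recordButton.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            startActivity(new Intent(MainActivity.this, RandomRecordActivity.class));
        }
    });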

Res folder

In the source folder, there is a res folder. This folder holds all resources: string values, images, menu settings and so on. It's handy that all resources are in one place. Images can be put into four folders (ldpi, mdpi, hdpi and xhdpi) according to the screen density they target. Whether an image is presented on a 4.6 inch screen or on a 2.3 inch screen, these folders make sure the right quality is chosen, so no weird image adjustments have to be made. And then there are 9-patch images, which I will discuss in the problems section.

Generated files

While implementing, some files are automatically generated, which makes deploying as easy as pushing one button. The gen folder holds two of them: BuildConfig.java, which is needed to build the application, and R.java, which holds references to the resources that are used.
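
For example, a string or image placed in res is referenced in code through R; the resource names below are hypothetical:

    // Resources from the res folder are accessed through the generated R class.
    String appName = getString(R.string.app_name);
    imageView.setImageResource(R.drawable.logo);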

Problems

Of course, I already ran into a couple of problems.

Back button

Android phones have a hardware back button. This button is used to navigate back through the app or to exit it, and also to return when one app opens another (like when you share something through Facebook). But when I use it to try to navigate between my activities, it just exits the application. I already searched for a solution to this problem. Some people suggest a hack to put in my code, but I can't believe that that is the only option.
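
One of the suggested workarounds is overriding onBackPressed in each activity. A minimal sketch; whether this is the proper fix for my navigation problem, I still have to find out:

    import android.content.Intent;

    // In an activity: send the user to the main screen instead of exiting the app.
    @Override
    public void onBackPressed() {
        startActivity(new Intent(this, MainActivity.class));
        finish();
    }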

9-patch

9-patch images are images with a one-pixel black border. This border tells Android which part of the image can be stretched and which part can contain content, like text or images. This should make it a lot easier to create buttons for different types of smart phones. It doesn't seem to do what I expect it to, though: it cannot cope with putting both text and an image on a button. After some messing around with it, I got the result I was looking for.

Conclusion

Android is easy to start with: basically, it's just Java with some XML features. It does take some time to get familiar with it, though, but I'm sure I will get it to work :). Soon I will give an update on my Android experiences. If you have any tips or suggestions, you can always let me know!

Planning revisited

Hi there again!

It's been a while since the last blog post! That's of course because the exam period was keeping me busy. I thought I could combine exams with working on my thesis; that turned out to be harder in practice… But now I'm back, so I can fully concentrate on my thesis. I have fallen somewhat behind, but I'll try to make up for it during the next weeks.

The first thing to do is to revisit my planning for the second semester. Looking back at the first semester, I came to the conclusion that I'm not good at making a planning and sticking to it. That's why I created an extra Thesis calendar in my Google Calendar, in which I schedule my thesis time; you can always take a look at it at the bottom of this post. Since I use Google Calendar for everything I do, I hope this will help me keep working on my thesis. For every activity, I planned a number of hours according to the time I scheduled in my calendar. The planned hours take lunch breaks and other interruptions into account.

I will revisit this planning regularly and compare it to the hours I really worked on my thesis, to keep myself informed about my schedule and keep you informed too.

Date | Subject | Planned hours | Hours done
27/01/13 – 31/01/13 | Implementation of digital prototype | 26.5h | 15.75h
01/02/13 – 10/02/13 | Skiing holidays | 0h | 2.25h
11/02/13 – 14/02/13 | Implementation of digital prototype | 16.5h | 5.5h
14/02/13 – 18/02/13 | Valencia (Erasmus visit) | 0h | 2.1h
18/02/13 – 24/02/13 | Implementation of the digital prototype | 36.5h | 18h
25/02/13 – 28/02/13 | Writing scientific article | 15h | 24h
28/02/13 | DEADLINE: Scientific article | N/A | N/A
28/02/13 – 03/03/13 | Implementation of digital prototype | 25.5h | 17.75h
04/03/13 – 12/03/13 | Implementation + first evaluation of digital prototype | 30h | 47.5h
12/03/13 – 24/03/13 | Further implementation of the prototype + evaluation | 60h | 0h
24/03/13 – 27/03/13 | Implementation of the application | 15h | 0h
28/03/13 – 31/03/13 | Preparation of second presentation | 27h | 0h
–/03/13 | DEADLINE: Second presentation | N/A | N/A
01/04/13 – 14/04/13 | Implementation of the application and finishing | 80h | 0h
15/04/13 – 21/04/13 | 1st evaluation of the application + adjustments | 42h | 0h
15/04/13 – 21/04/13 | 2nd (big) evaluation of the application + (Google Play launch) + writing thesis | 42h | 0h
21/04/13 – 28/04/13 | Evaluation of the application + writing thesis | 42h | 0h
29/04/13 – 30/04/13 | Making poster and demo for the Poster/demo day | 10h | 0h
–/05/13 | DEADLINE: HCI-day with Poster/demo | N/A | N/A
01/05/13 – 17/05/13 | Writing thesis | 101h | 0h
17/05/13 | DEADLINE: Submitting the full draft of the thesis text | N/A | 0h
07/06/13 | DEADLINE: Submitting the thesis | N/A | N/A
–/06/13 | Defense | N/A | N/A

Evaluation of the paper prototype

As you have read in my previous blog posts, I made a (hopefully) better paper prototype to test on users. Here, I will discuss the results of those tests.

What?

The paper prototype I made now is mostly black and white, because colors might distract the user. The prototype is kept simple and quite minimal, but the lay-out is based on that of Foursquare. The NoiseApp (the name will be changed in the future) is actually quite like Foursquare, with the added feature of recording sound.

Since the paper prototype cannot support all features of the application, I only wanted to evaluate the clarity of the buttons and the application flow. These are the most important issues that can be evaluated with a paper prototype.

I tested 8 people: three computer scientists, two engineers and three people with a non-technological background. One test user didn't own a smart phone. This population seemed balanced and large enough to highlight the main problems of the application. The problems I discuss here occurred for most of the test users.

How?

The paper prototype

I made a cardboard casing that resembles a modern Android phone. The back, home and menu buttons of a typical Android phone are also on it, although knowing their function is not necessary to use the application. The paper screens resemble the screens of the application. Navigating between the screens was done by myself (I was the 'computer' behind the application). I made it easy to swap from screen to screen by taping the screens together, so I could just pull at the top to change the screen (see the image).

The test user had to sign a consent form, which made clear what my thesis is about and what I was doing with the user test, and which granted me permission to video record the session. While performing the test, only the hands (and the prototype) were recorded, to ensure anonymity. Because the session was recorded, I could concentrate on operating the prototype.

The test user had to perform three scenarios: a random record, a sound battle and checking the profile. In the full application, more scenarios would be possible, but I didn't want the tests to take more than 15 minutes, so I chose the three most interesting ones.
The first two scenarios included the "recording of sound". I asked the test users to think out loud, so I could understand what they were thinking; this is a common technique for testing prototypes. I also didn't give the test users any information about the application beforehand, except the necessary info (the goal of the application). This way I could see whether the application was easy (intuitive) to use.

I also asked the test users to fill in a questionnaire afterwards. The questionnaire was set up to gather data about whether the user had any experience with apps and whether the user found the application easy to use, functional and fun. Part of the questionnaire was based on the SUS (System Usability Scale) questionnaire, which is composed of 10 statements that the user scores from 1 (strongly disagree) to 5 (strongly agree) on a Likert scale (see http://www.measuringusability.com/sus.php for more information). From these scores, a usability score from 0 to 100 can be calculated, as sketched below.
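
For reference, the standard SUS calculation from the ten Likert answers looks like this; this is a sketch of the published formula, not code from my project:

    // Odd-numbered items contribute (score - 1), even-numbered items (5 - score);
    // the sum is multiplied by 2.5 to give a score from 0 to 100.
    static double susScore(int[] answers) { // ten answers, each between 1 and 5
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += (i % 2 == 0) ? answers[i] - 1 : 5 - answers[i];
        }
        return sum * 2.5;
    }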

The results

The Random Record scenario

Screens: 1. Main screen, 1.2. Record, 1.3. Progress, 1.4. Points.

The random record scenario was the most important one, since recording sound is the main thing the application does. Other (untested) scenarios include the random record scenario too.

When evaluating this scenario, I told the test user this little story: “You are walking outside and you are bored. You have just installed your new NoiseApp application. Now you want to record sound, right where you are. How are you going to do this?”

I then showed them the main screen. Most of the test users knew that they had to push the random record button to get to the next screen.

Most comments were about the record button on the second screen. It was not clear that this was a button, nor that it also represented the user's location. A better solution might be to separate the location marker (just like in Google Maps) from a record button with a microphone at the bottom.

It was clear to most of the test users that they had to wait at the third screen. One of them told me, though, that the cancel button looked more like an error message. I might just put the word 'Cancel' there instead of a button.

The last screen, with the points, was obvious to everyone.

The Sound Battle scenario

Screens: 1. Main screen, 2.2. Choose player, 2.3. Location, 2.4. At Battle Location, 2.5. Progress, 2.6. Points, 2.7. Badge.

The Sound Battle scenario was the most difficult one to prototype. I pasted three Post-Its on each of screens 2.3, 2.4 and 2.5 to mark the sound locations.

The test users had to start from the last screen of the previous scenario. I told them the following story: "So, those were easy points! Random sound recording doesn't gain you a lot of points, though. You can earn more points by competing against someone. I would like you to play a game against a random player."

All users knew they had to go back to the main menu, but there were different ways to do so. By pressing the Android back button or the application back button (the arrow at the top), they went to screen 1.2 and then to screen 1. You could also press the house at the top or the application name, but not many people found this option. I might highlight the house more once the application is fully in color.

When they pressed the sound battle button, they got to see screen 2.2. After I reminded the test users that they had to play against a random player, they pressed the right button and screen 2.3 appeared.

As said earlier, three sound battle locations were marked on this screen with Post-Its. However, this wasn't clear to most test users: most of them thought the Post-Its represented other players. This was probably reinforced by using Post-Its rather than icons to represent the locations. I might use the location marker image of Google Maps to make sure people know it's a location. Others said it would be easier if some information were given at the beginning of the game. Also, because there was no dot in the middle to represent the player's location, many of the test users didn't know what to do; I will add one in the next prototype. When the test users said they'd move to one of the three locations, screen 2.4 appeared.

Screens 2.4 and 2.5 received the same comments as in the random record scenario. After recording at the first location, screen 2.3 appeared again, with the Post-It of the recorded location marked, so it was clear that that location was done. After recording at all three locations, screen 2.6 appeared.

The points screen was clear, except that most of the test users wanted to know what sound quality and location quality actually mean. I might add an information button next to these terms to explain them. Also, 'location quality' will be changed into 'location accuracy', since the closer to the given location the test user records, the more points he or she earns. To make sure users know that location accuracy is important, I might change the color of the location marker: far from the location the marker will be red, and close to the location it turns green. Another criterion might later be added to the points that can be earned (e.g. if you are the fastest, you gain more points). When the test user pressed 'OK', the badge screen was shown.

The badge screen was clear to everyone.

The Profile scenario

3.1 Profile

3.1. Profile

3.2 Points

3.2. Points

2.3 Badges

3.3. Badges

2.4 Leaderboard

3.4. Leaderboard

This scenario was started with the following story: "Now you want to check how many points you've earned."

Most test users thought they had to go back to the main screen. There they saw that there was no option (among the four main buttons) to check the score. When they looked further, they saw the user icon at the top right. It seems the top buttons aren't that clear to most users. Maybe I should add a button on the main screen (like the four game options) to check the profile. Also, there might be no use in checking your profile while recording.

When they found the user icon, everyone knew they had to press the stats button to show the points earned.

Then I asked the test users to show the badges they had earned. Most people knew they had to go back and press the badges button. I did get the comment, though, that it might be easier to present the badges on the same screen as the points. This might be a good idea.

When I asked the test users to see how they rank on the leaderboard, 7 out of the 8 pressed the globe button at the top right. Unfortunately, this was not the right choice: the globe presents the map with all sound measurements. When I asked what they'd do if I told them they could see themselves in the leaderboard among their friends, most of them found their way by pressing the 'Friends' tab. The problem with this button was that it was not clearly prototyped; in the digital prototype, it will be more clear that it is a tab.

SUS-score

Boxplot SUS

Since every test user had to fill in the SUS questionnaire, I could generate a general score that expresses the usability of the application. 68% is an average score for the SUS questionnaire.

As you can see in the boxplot, the application scored well on the usability scale. The median is 81.25%, which is way above average, and the mean is also quite high at 76.25%. When the outlier at the bottom is neglected, the mean rises to 81.07%. Neglecting that outlier makes sense, since the test user in question had no experience with smart phones or applications whatsoever.

Some extra remarks

  • 50% of the test users had an Android phone;
  • the average age of the users was 22;
  • users that own a smart phone said in general that they could handle smart phones quite well (scoring 3.43 on the Likert scale);
  • some users found the application useful.

What now?

Since the comments on the prototype were rather minor, I will not make a second paper prototype; instead, I will start preparing a digital prototype, which should respond to the comments made. I will implement the prototype with the Android SDK right away, to get familiar with the platform.

Because of time constraints, I will implement the digital prototype iteratively: first the random record scenario, then an evaluation, then another scenario, and so on, until I have tested everything.

If you are interested in doing some tests on the digital prototype, you can always comment or send a message!

Decisions

I had to make some decisions the last couple of weeks…

Which application?

Two weeks ago I gave a presentation about what I've already done for my thesis and what's coming up next. I had two applications in mind that I could develop, but I had to choose one. I made some scenarios that were possible in the two applications (as you can read in the previous posts) and asked for feedback.

It seems more people liked the idea of the teamwork application, but they didn't really think it would work or engage students more. It would be cumbersome to use the application on top of a group work. Also, it seemed to offer a lot of functionality, maybe too much.

A lot more positive feedback came with the noise application, although people liked the idea of recording noise itself a little bit less. It might seem counterintuitive, but that's exactly why I chose the noise application. That's kind of my goal: to make something that isn't fun, fun :). There were also plenty of comments on the noise application, but they were more constructive, I felt.

Also, I preferred the noise application because I really believe in its usefulness. I had already put most of my time into designing its prototype, so the choice was easily made. My promotor liked it more too, and my mentors advised me to go further with the idea.

The link with my thesis title, which is about engaging university students through gamification, is admittedly not that clear. I might ask to change the title, but that is something I'll have to do later.

Which platform?

Although I am still evaluating the paper prototype, I was already wondering which platform to use to implement the application. I preferred Android, but at first I couldn't give a reason other than that I have an Android phone, which would make it easier and faster for me to implement and test the application.

This of course is not a good reason on its own, so I looked deeper. An HTML5 application would be great too: everyone could then play it in their browser, because the application would be (kind of) platform independent (not entirely correct, since different browsers support HTML5 differently). But, because I have to make sound recordings, I need microphone access, and that doesn't seem to be possible in HTML5 (yet). Too bad; that quickly took HTML5 off the table.

So, why not iOS? Well, actually it would be possible to implement the application on iOS, but there is no distinct reason to prefer it over the Android platform. Besides, to test the application I would always have to use someone else's phone… That's not handy, right?

There is also the possibility of using a cross-platform tool like PhoneGap. Michiel Staessen is doing his thesis on this tool (you can read about his work on his blog).
I might use it, but I still have to look into it: I have to be sure that everything I want to do is possible.

Paper prototype, part II

I already made screens for a paper prototype of the noise application, as you can see in a previous post, and I received some comments on them. The colours I chose were a little too distracting. Also, I hadn't made sure that everything could be done in a proper flow; for example, there were no back or home buttons. Since I will probably make an Android application, I used an Android casing (which has a back button by default). However, because I will test the prototype on a varied set of people, I made sure that no knowledge of how to use an Android device is needed.

So, I made some changes to the paper prototype. I won't show it here yet, because I don't want to influence the results by letting people see it beforehand. I will do a proper evaluation first and then explain the prototype in full.

First Presentation 22/11/12

Here is the presentation I'm giving on Thursday 22/11/12.