
CleanGame: Gamifying the Identification of Code Smells

Hoyama Maria dos Santos, Federal University of Lavras, Lavras-MG, Brazil, hoyama.santos@ufla.br
Vinicius H. S. Durelli, Federal University of São João Del Rei, São João Del Rei-MG, Brazil, durelli@ufsj.edu.br
Maurício Souza, Federal University of Minas Gerais, Belo Horizonte-MG, Brazil, mrasouza@dcc.ufmg.br
Eduardo Figueiredo, Federal University of Minas Gerais, Belo Horizonte-MG, Brazil, figueiredo@dcc.ufmg.br
Lucas Timoteo da Silva, Federal University of Lavras, Lavras-MG, Brazil, lucastimoteo@ufla.br
Rafael S. Durelli, Federal University of Lavras, Lavras-MG, Brazil, rafael.durelli@ufla.br

ABSTRACT

Refactoring is the process of transforming the internal structure of existing code without changing its observable behavior. Many studies have shown that refactoring increases program maintainability and understandability. Due to these benefits, refactoring is recognized as a best practice in the software development community. However, prior to refactoring activities, developers need to look for refactoring opportunities, i.e., developers need to be able to identify code smells, which essentially are instances of poor design and ill-considered implementation choices that may hinder code maintainability and understandability. However, code smell identification is overlooked in the Computer Science curriculum. Recently, Software Engineering educators have started exploring gamification, which entails using game elements in non-game contexts, to improve instructional outcomes in educational settings. The potential of gamification lies in supporting and motivating students, enhancing the learning process and its outcomes. We set out to evaluate the extent to which this claim is valid in the context of post-training reinforcement. To this end, we devised and implemented CleanGame, a gamified tool that covers one important aspect of the refactoring curriculum: code smell identification. We also carried out an experiment involving eighteen participants to probe into the effectiveness of gamification in the context of post-training reinforcement. We found that, on average, participants managed to identify twice as many code smells during learning reinforcement with a gamified approach as with a non-gamified approach. Moreover, we administered a post-experiment attitudinal survey to the participants. According to the results of this survey, most participants showed a positive attitude towards CleanGame.

CCS CONCEPTS

• Social and professional topics → Software engineering education.

KEYWORDS

Refactoring, gamification, code smell, Software Engineering education, post-training reinforcement

ACM Reference Format:
Hoyama Maria dos Santos, Vinicius H. S. Durelli, Maurício Souza, Eduardo Figueiredo, Lucas Timoteo da Silva, and Rafael S. Durelli. 2019. CleanGame: Gamifying the Identification of Code Smells. In XXXIII Brazilian Symposium on Software Engineering (SBES 2019), September 23-27, 2019, Salvador, Brazil. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3350768.3352490

1 INTRODUCTION

Many studies involving industrial-scale software systems have provided evidence that the lion's share of software development expenses can be ascribed to software maintenance. Maintaining software systems is a challenging and long-standing topic owing mostly to the fact that modern software systems must cope with changing requirements. As a consequence, developers need to strive to keep software systems in a condition that allows for continuous evolution. This constant need for improving software systems has spurred a growing interest in refactoring, which is deemed one of the main practices to improve the internal structure of evolving software systems [11]. The key idea underlying refactoring is to improve the internal structure of existing code without changing the observable behavior [11], thereby preparing the code for future modifications. When performed properly, refactoring activities improve the design of software, increasing maintainability and understandability. Accordingly, refactoring is listed as a recommended practice in the Software Engineering (SE) body of knowledge [6].

Prior to refactoring activities, developers need to look for code smells, i.e., particular code structures that, when removed through refactoring activities, lead to more readable, easy-to-understand, and cheaper-to-modify code. However, the set of skills required to identify code smells is acquired through training and experience. Despite the aforementioned benefits, refactoring and code smell identification skills have been overlooked in the Computer Science curriculum. Even though continuous evolution (i.e., maintenance activities) accounts for more technical and financial resources than software development per se, a major share of a typical undergraduate curriculum is dedicated to development activities [16]. Practices such as refactoring are often neglected in favor of more constructive activities such as design and implementation. In effect, going through code while looking for code smells is a difficult and somewhat boring task.


A recurring challenge in SE education is engaging students in learning activities that relate to the professional practices of SE. Additionally, it is often challenging for SE students to contextualize how some concepts and skills will fit into or influence their future professional practices. Recently, in hopes of dealing with this challenge, the SE education community has turned to innovative pedagogical strategies such as gamification [2, 22]. Essentially, gamification entails employing game design elements in a non-game setting. In other words, gamification is centered around generating learning experiences that convey feelings and engage students as if they were playing games, but not with entertainment in mind. We conjecture that gamification can be used to improve SE education. More specifically, we believe that gamification can be used to support and motivate SE students in the development of code smell identification skills by turning a difficult and somewhat tedious activity (e.g., going over snippets of code) into an engaging experience. There is much potential insight to be gained in exploring how SE education can be improved by devising gamification approaches that cover different aspects related to topics that are overlooked in academic curricula, e.g., code smell identification concepts and skills.

Based on the premise that gamification is well suited to engage students with code smell identification concepts, especially when used as a way to provide students with training follow-up, we set out to explore whether gamification can have a positive impact on post-training reinforcement in comparison with a more traditional approach, which consists in setting up post-training reinforcement content manually. Generally, traditional post-training to evaluate skill-building in activities such as code smell identification entails hands-on tasks that involve perusing source code for code smells. Usually, in traditional post-training these tasks are supported only by an integrated development environment (IDE), which allows for easier code navigation. The lack of guidelines and elements to keep students engaged makes traditional post-training for code smell identification unwieldy. A gamified approach can be employed to mitigate these problems. To probe into the benefits provided by a gamified environment over IDE-driven post-training, we developed a tool that supports post-training activities centered around code smell identification. In the context of our tool, these post-training activities follow a gameful design approach, i.e., they leverage gamification elements such as leaderboards and rewarding badges. To the best of our knowledge, our tool is the first educational platform to realize a gamified, post-training reinforcement approach to code smell identification. To corroborate the benefits of our gamified approach, we carried out two evaluations: an experiment involving 18 participants and an attitudinal survey, which was conducted after the experiment. The main contributions of our research are threefold:

(1) We introduce CleanGame: a gamified platform for post-training reinforcement of code smell identification concepts and skills.
(2) In keeping with current evidence, we argue that a gamified environment is more effective at conveying code smell identification skills while keeping students engaged than a more traditional approach to code smell identification (i.e., IDE-driven). So we carried out an experiment to probe into the impact and soundness of gamification in supporting and engaging students during code smell identification activities.
(3) We administered an attitudinal survey to the experiment participants to get an overview of their attitudes towards CleanGame and the advantages and drawbacks of using a gamified approach to code smell identification.

The participants of our experiments confirmed that playing the game is fun, and that identifying smells as part of CleanGame is more enjoyable than doing so outside the game. On average, participants were able to identify approximately twice as many code smells using CleanGame (4.94) as when using an IDE (2.39). Additionally, the best-performing participants were able to correctly identify 8 code smells out of 10 using CleanGame.

The remainder of this paper is organized as follows. Section 2 provides background on code smells and gamification. Section 3 outlines related work. Section 4 gives a brief description of CleanGame. Section 5 details the experiment we carried out to evaluate CleanGame. Section 6 discusses the results of the experiment and their implications. The quantitative results of the attitudinal survey are presented in Section 7. Section 8 discusses the threats to validity of the study. Section 9 presents concluding remarks.

2 BACKGROUND

This section describes the theoretical foundation necessary for understanding CleanGame (i.e., code smells and gamification).

2.1 Code Smells

Code smells, also known as bad smells or just smells [11], represent symptoms of poor design or implementation choices in the source code, and constitute one of the most serious forms of technical debt [15]. Fowler et al. [11] described 22 smells and incorporated them into refactoring strategies to improve design quality. In addition to the smells proposed by Fowler et al. [11], there are many other code smells [7]. Nevertheless, in this paper we focus on the following five code smells: (i) Large Class: a class that is trying to do too much, often signaled by a large number of instance variables; (ii) Long Method: a method that contains too many lines of code; (iii) Divergent Change: happens when a class is often changed in many different ways and for different reasons; (iv) Feature Envy: happens when a class spends more time communicating with functions or data in another class than with its own, which may occur after fields have been moved to a data class; (v) Shotgun Surgery: occurs when changing a class entails a lot of small modifications in many different classes as well. Feature Envy is illustrated below.

Note that we selected these code smells because they are widely used in academic and industrial settings [21].
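To make the definition above concrete, here is a small, entirely hypothetical Java example (it is not taken from the paper or its experimental material; all class and method names are ours) exhibiting Feature Envy:

    // Hypothetical illustration of Feature Envy: discountFor() is far more
    // interested in Customer's data than in Invoice's own state.
    class Customer {
        private int yearsActive;
        private double totalPurchases;

        int getYearsActive() { return yearsActive; }
        double getTotalPurchases() { return totalPurchases; }
    }

    class Invoice {
        private double amount;

        // Smell: every decision here is driven by Customer's fields, hinting
        // that this logic belongs in Customer (a Move Method refactoring).
        double discountFor(Customer c) {
            if (c.getYearsActive() > 5 && c.getTotalPurchases() > 10_000) {
                return amount * 0.10;
            }
            return 0.0;
        }
    }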
2.2 Gamification

Gamification is a relatively new term that has been used to denote the use of game elements and game-design techniques in non-gaming contexts [8]. Game elements are a set of components that compose a game [4]. In some studies, game elements are also called game attributes [4].

In the context of SE, there has been increasing interest in using gamification with the goal of increasing engagement and motivation. Researchers and practitioners have started adopting
gamification in several different contexts, such as gamification of the software development life cycle [9], software process improvement initiatives [14], and also in SE education [2, 22]. However, as mentioned, very little attention has been directed towards integrating refactoring and code smell identification concepts into the Computer Science curriculum. As stated by Fraser [12], some SE activities are overlooked in academic curricula because emphasis is placed on more constructive activities such as software design and implementation.

In summary, gamification can be applied as a strategy to turn complex or somewhat boring activities into engaging and competitive activities. Thus, there is much potential insight to be gained in exploring how SE education can be further improved by developing gamification approaches that cover different aspects related to topics that are in a way overlooked in academic curricula, e.g., refactoring and refactoring-related concepts such as code smells.

3 RELATED WORK

The proposal and use of game-related methods is a growing topic in SE education [22]. Gamification, specifically, has gained considerable attention lately [3], both in professional and educational contexts of SE, as a method to increase the motivation and engagement of subjects in the execution of SE activities.

In the professional context, Dal Sasso et al. [20] and Garcia et al. [13] propose frameworks for the gamification of SE activities. The former [20] provides a set of basic building blocks to apply gamification techniques, supported by a conceptual framework. The latter [13] proposes a complete framework for the introduction of gamification in SE environments.

In the context of SE education, there are also several proposals of using gamification to support varied knowledge areas. Akpolat and Slany [1] use weekly challenges to motivate students to apply eXtreme Programming practices in their project. The students had to compete for a "challenge cup" award. Code Defenders [19] uses gamification to create a ludic and competitive approach to promote software testing using mutation and unit tests. Bell et al. [5] expose students to software testing using a game-like environment, HALO (Highly Addictive, sociaLly Optimized) SE.

Nonetheless, to the best of our knowledge, there is no study focusing on the detection of code smells. CodeArena [10], for instance, uses gamification to motivate refactoring. However, the target users are practitioners. We believe that CleanGame is a solid contribution to the context of game-related approaches to SE education.

4 CLEANGAME

This section proposes and describes CleanGame (available at https://bit.ly/2W6xClB), a gamified software tool aimed at teaching code smell detection. CleanGame is composed of two independent modules: Smell-related Quiz and Code Smell Identification. The goal of the first module is to allow students to learn or revisit the main concepts surrounding code smells. To achieve this goal, the Smell-related Quiz module presents questions about code smells with multiple-choice answers. The second module, Code Smell Identification, focuses on practical tasks of identifying code smells in the source code. The current implementation of this module is integrated with PMD (an extensible cross-language static code analyzer, available at https://pmd.github.io/) to allow the creation of a list of code smells identified in Java source code. CleanGame allows users not only to access pre-defined quizzes and identification tasks, but also to create their own quizzes and tasks.

[Figure 1: Smell-related Quiz module of CleanGame.]

Figure 1 presents a screenshot of CleanGame. This figure shows a quiz question with four possible answers, from which the user has to choose the best option. On the right-hand side of Figure 1, we can see several game elements used in this gamified software tool, such as player status, score, timing, and the option to skip questions. CleanGame also presents a ranking of the top-10 best scores in the quiz. Therefore, the player is able to check in real time his or her classification in the current quiz and how far this score is from the top scores. The player's score on the current question is penalized in several situations. For instance, if the player either skips a question or takes too long to answer it, his or her score on this question is penalized by up to the total amount of points assigned to the given question. Code smell identification tasks also have options that allow players to ask for help (shown in Figure 2).
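The penalty rules above can be summarized with a small sketch. This is our own hypothetical reconstruction: the paper does not give CleanGame's actual scoring formula, and all names and constants here are placeholders.

    final class QuestionScore {
        // Points awarded for one question under the penalty rules described
        // above (hypothetical formula; parameters are illustrative).
        static int score(int basePoints, long secondsTaken, long timeLimit,
                         int tipsUsed, int costPerTip, boolean skipped) {
            if (skipped) {
                return 0; // skipping forfeits the question's points
            }
            // Taking too long erodes the score, by up to the question's full value.
            long overtime = Math.max(0, secondsTaken - timeLimit);
            int timePenalty = (int) Math.min(basePoints, overtime);
            // Asking for help hints also costs points.
            int tipPenalty = tipsUsed * costPerTip;
            return Math.max(0, basePoints - timePenalty - tipPenalty);
        }
    }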
It is worth mentioning that CleanGame is fully integrated with the GitHub application programming interface (API). During the creation of a room in the identification module, the user needs to provide the uniform resource locator (URL) of a Java GitHub repository. Then, the Java source code is cloned and transformed into an abstract syntax tree (AST) in a fully automatic way to create an oracle of smell-related questions. Three help hints are available: the metrics used to detect the code smell, the refactoring aimed at addressing the code smell, and a short definition of the code smell. Asking for help also negatively impacts the points the player receives for a question.
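A minimal sketch of this clone-and-parse pipeline is shown below. The paper does not name the libraries CleanGame uses for cloning and AST construction; we assume JGit and JavaParser here, and the repository URL and the 50-line threshold are placeholders, with the length check standing in as a crude Long Method heuristic.

    import com.github.javaparser.StaticJavaParser;
    import com.github.javaparser.ast.CompilationUnit;
    import com.github.javaparser.ast.body.MethodDeclaration;
    import org.eclipse.jgit.api.Git;

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class SmellOracleSketch {
        public static void main(String[] args) throws Exception {
            // Clone the Java repository given by its GitHub URL (placeholder URL).
            File workDir = Files.createTempDirectory("cleangame").toFile();
            Git.cloneRepository()
               .setURI("https://github.com/example/project.git")
               .setDirectory(workDir)
               .call();

            // Walk the working tree and inspect every .java file.
            try (var paths = Files.walk(workDir.toPath())) {
                paths.filter(p -> p.toString().endsWith(".java"))
                     .forEach(SmellOracleSketch::inspect);
            }
        }

        private static void inspect(Path file) {
            try {
                // Parse the file into an AST and flag overly long methods.
                CompilationUnit cu = StaticJavaParser.parse(file);
                cu.findAll(MethodDeclaration.class).stream()
                  .filter(m -> m.getRange().map(r -> r.getLineCount() > 50).orElse(false))
                  .forEach(m -> System.out.println(file + ": " + m.getNameAsString()));
            } catch (Exception e) {
                // Skip files that fail to parse.
            }
        }
    }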
5 EXPERIMENT SETUP

We surmise that gamification is well suited to better engage students with refactoring-related topics such as code smell identification, especially when used as a way to provide students with training follow-up. Based on this assumption, we set out to explore whether gamification can have a positive impact on post-training
reinforcement in comparison with a more traditional approach, which consists in setting up post-training reinforcement content manually. Traditional post-training to evaluate skill-building in activities such as code smell identification entails hands-on tasks that involve perusing source code for code smells. Generally, the tasks in post-training are supported only by an integrated development environment (IDE), which allows for easier code navigation [23]. The lack of guidelines and elements to keep students engaged makes traditional post-training for code smell identification unwieldy. We believe that a gamified approach can be employed to mitigate these problems. To probe into the benefits provided by a gamified environment over IDE-driven post-training, we developed a tool that supports post-training activities centered around code smell identification. In the context of our tool, these post-training activities follow a gameful design approach, i.e., they leverage gamification elements such as leaderboards and rewarding badges.

[Figure 2: Smell-related Identification module of CleanGame.]

We designed an experiment to answer the following research question:

RQ1: Does gamification have a positive impact on how students identify code smells during post-training activities?

We surmised that students would thrive on a game-based approach to code smell identification because there is evidence that gamification elements such as points and leaderboards convey a sense of competence to students and enhance intrinsic motivation, thereby improving performance. In keeping with this evidence, we posit that a gamified environment is more effective at conveying code smell identification skills and keeping students engaged than a more traditional approach to code smell identification (i.e., IDE-driven). Therefore, RQ1 comes down to examining the impact and soundness of gamification in engaging students in code smell identification from a researcher's perspective. In the context of our experiment, we used the following proxy to measure the effectiveness of gamification: average rate of correct answers (i.e., amount of code smells correctly identified).

5.1 Scoping

5.1.1 Experiment Goals. We used the organization proposed by the Goal/Question/Metric (GQM) template [25] to set the goals of our experiment. Following this goal definition template, the scope of our study can be summarized as outlined below.

Analyze our gamified approach
for the purpose of evaluation
with respect to post-training effectiveness
from the point of view of the researcher
in the context of students looking for code smells.

5.1.2 Hypotheses Formulation. We framed our prediction for RQ1 as follows: our gamified post-training approach is more effective than an IDE-driven approach. As mentioned, to answer RQ1 we evaluated the effectiveness of the post-training approaches in terms of one proxy measure: the amount of correctly identified code smells (CICS). So RQ1 was turned into the following hypotheses:

Null hypothesis, H0-CICS: there is no difference between a gamified approach and an IDE-based approach for code smell identification in terms of the amount of code smells correctly identified by students during post-training.

Alternative hypothesis, H1-CICS: students are able to identify more code smells through a gamified environment than through a traditional (i.e., IDE-driven) approach.

Let µ be the average amount of correctly identified code smells, so that µ_CleanGame and µ_IDE denote the average amount of code smells correctly identified by students using CleanGame and using an IDE-based approach, respectively. Then, the aforementioned set of hypotheses can be formally stated as:

H0-CICS: µ_CleanGame = µ_IDE

and

H1-CICS: µ_CleanGame > µ_IDE

5.2 Selection of Subjects

This experiment was run using Computer Science students: more specifically, undergraduate, master's, and PhD students were used as subjects. The experiment was run at the Federal University of Minas Gerais (UFMG). It is worth highlighting that this study can be classified as a quasi-experiment due to the lack of randomization of participants: the participating students signed up for the course. We elaborate on the ability to generalize from this specific context in Section 8.

All subjects signed a consent form prior to participating in the experiment. All subjects already had prior experience with Java and object-oriented programming. Previous knowledge regarding refactoring and refactoring-related concepts (e.g., code smells) was not mandatory. Note that none of the subjects had participated in the course before.


5.3 Experiment Design

This experiment has one factor with two treatments: the factor is the post-training reinforcement approach through which the subjects try to identify code smells, and the treatments are CleanGame (i.e., our gamified way of teaching and supporting the identification of code smells) and the IDE-based approach, which is a hands-on assignment using an IDE. The experience of the subjects was not used as a blocking factor: we did not ask subjects to fill out a pre-experiment questionnaire because we decided against further stratifying our sample into groups with similar experience levels. So, it is assumed that the subjects in this experiment have equivalent backgrounds and levels of experience.

We used a randomized crossover design so that all subjects could be exposed to both post-training approaches. That is, all participants were assigned to use CleanGame as well as the IDE-based post-training approach. Both groups went over the same Java programs and code smells. Note that no subjects quit the experiment.

5.4 Instrumentation

In the introduction phase, subjects attended lectures (i.e., classroom-based delivery) about refactoring and code smells. We employed a randomized crossover design so that subjects could be exposed to both post-training tasks, i.e., the gamified and the traditional approach. More specifically, subjects were randomly assigned to two groups and completed code smell identification tasks using each approach as follows: one group performed code smell identification using an IDE followed by code smell identification using CleanGame; the other group performed code smell identification with CleanGame followed by code smell identification using an IDE. Therefore, the response is measured twice in each subject.

Following randomization, the subjects assigned to be the first group to use CleanGame took part in a short training session in which they were introduced to each feature of our tool. During this training session, we had the subjects identify a few code smells using CleanGame. The goal was to allow the subjects to familiarize themselves with the graphical user interface (GUI) of the tool. Additionally, throughout this training session, subjects were allowed to ask any questions about CleanGame. No further assistance was provided to the subjects assigned to carry out post-training tasks using the traditional approach (i.e., IDE-driven).

Since we used a randomized crossover design, in a later stage, the group that first took part in code smell identification tasks using CleanGame was then assigned to carry out code smell identification tasks using the traditional approach. In turn, the group initially assigned to the traditional approach was introduced to CleanGame (i.e., participated in our brief training session) and proceeded to identify code smells using our gamified approach.

The advantages of applying each code smell identification approach as perceived by the subjects were investigated through a post-questionnaire handed out after the experiment had been carried out (i.e., wrap-up phase). Moreover, the same questionnaire was also used to gather further information from the participants concerning the main hindrances/inhibitors of applying both code smell identification approaches.

6 EXPERIMENT RESULTS

In this section, we present the results of the experiment we carried out. First we outline some descriptive statistics, then we present the hypothesis testing.

6.1 Descriptive Statistics

As mentioned, we employed a randomized crossover design, so subjects in both groups were exposed to the two approaches (Subsection 5.4). Table 1 presents detailed results of the performance of the eighteen participants when using CleanGame to identify code smells. In Table 2 we summarize how the subjects performed while identifying code smells via an IDE.

As shown in Tables 1 and 2, on average, subjects were able to identify approximately twice as many code smells using CleanGame (4.94) as when using an IDE (2.39). Additionally, the best-performing subject was able to correctly identify 8 code smells out of 10 using CleanGame. As shown in Figure 3, subjects in group 1 performed slightly better at code smell identification while using CleanGame than when using an IDE. As for the subjects in group 2, they performed significantly better when identifying code smells using CleanGame (Figure 3). When combining the performance of both groups under both experimental treatments, our results would seem to indicate that CleanGame allowed participants to be more effective at code smell identification (Figure 3, boxplot on the left).

Subjects were more apt to skip code identification tasks when using an IDE. When using an IDE, subjects skipped on average 1.22 tasks: subject #5 from group 2 skipped the largest number of questions in this group, leaving 6 questions unanswered. Participants who had larger numbers of skipped questions also seemed to have had difficulty with most questions: for instance, subjects #3, #5, #6, and #9 had a high ratio of incorrect answers. In contrast, while using CleanGame, participants seldom skipped over questions (as shown in Table 1). We surmise that participants were less likely to skip questions while using CleanGame because of the metric-, refactoring-, and definition-related tips provided by the tool. According to the results in Table 1, the most commonly requested type of tip was metric-related: while going over the 10 code smell identification tasks, subjects requested on average approximately five metric-related tips. Interestingly, refactoring-related tips were not requested very often by the participants. Definition-related tips were the least requested type of tip. We believe that these results might indicate that the subjects had a good grasp of the concepts underlying code smells but needed some sort of metric to back up their opinions regarding whether or not they were looking at a given code smell. Given that an IDE does not provide much support in terms of code smell identification, our results indicate that tackling those tasks within an IDE seems to be more difficult for most participants; thus, participants seem to stop responding more often at points where the code identification task gets complicated.

6.2 Hypothesis Testing

To test the hypotheses we formulated in Section 5.1.2, we applied a paired Wilcoxon signed-rank test. As we hypothesized, according to the results of this non-parametric test, subjects perform significantly better when using CleanGame than when using an IDE (V = 125.5, p = 0.003).


Table 1: Code smell identification performance of the two experimental groups using CleanGame.

Group 1
Subject | Correct Answers | Incorrect Answers | Skipped | Metric-related Tips | Refactoring-related Tips | Definition Tips | Average Time†
#1      | 6 | 4 | 0 | 8 | 2 | 0 | 476
#2      | 6 | 4 | 0 | 6 | 4 | 4 | 2,593
#3      | 5 | 5 | 0 | 6 | 3 | 0 | 1,336
#4      | 5 | 5 | 0 | 9 | 4 | 0 | 640
#5      | 5 | 5 | 0 | 6 | 3 | 3 | 674
#6      | 4 | 6 | 0 | 4 | 4 | 3 | 1,601
#7      | 2 | 7 | 1 | 1 | 1 | 0 | 1,198
#8      | 1 | 9 | 0 | 3 | 1 | 0 | 1,231
#9      | 3 | 7 | 0 | 3 | 2 | 1 | 891

Group 2
Subject | Correct Answers | Incorrect Answers | Skipped | Metric-related Tips | Refactoring-related Tips | Definition Tips | Average Time†
#1      | 8 | 2 | 0 | 8 | 4 | 0 | 897
#2      | 7 | 3 | 0 | 1 | 1 | 0 | 1,259
#3      | 7 | 3 | 0 | 1 | 0 | 0 | 2,349
#4      | 6 | 4 | 0 | 8 | 4 | 0 | 1,317
#5      | 6 | 4 | 0 | 9 | 4 | 1 | 993
#6      | 5 | 5 | 0 | 9 | 2 | 0 | 1,411
#7      | 5 | 5 | 0 | 8 | 2 | 1 | 1,951
#8      | 4 | 6 | 0 | 0 | 0 | 0 | 960
#9      | 4 | 6 | 0 | 1 | 1 | 1 | 943

Descriptive Statistics for both Experimental Groups
Min            | 1    | 2    | 0    | 0    | 0    | 0    | 476
Max            | 8    | 9    | 1    | 9    | 4    | 4    | 2,593
Average (Mean) | 4.94 | 5.00 | 0.05 | 5.06 | 2.33 | 0.78 | 1,262.22
Standard Dev.  | 1.76 | 1.68 | 0.24 | 3.30 | 1.46 | 1.27 | 566.10
† Average time is indicated in seconds.

[Figure 3: Overview of the performance of the experimental subjects in terms of properly identifying code smells using both post-training approaches. Boxplots of the amount of correct answers: both approaches combined (CleanGame vs. IDE), performance using CleanGame by group, and performance using an IDE by group.]

7 ATTITUDINAL SURVEY

This section outlines the results of an attitudinal survey we conducted to answer the following research questions:

RQ2: Do students have a positive attitude towards a game-based learning experience? The effectiveness of a post-training approach is heavily influenced by the attitudes held toward how the instructional content is presented. Thus, RQ2 investigates the subjects' outlook on gamification as a post-training approach to code smell identification.

RQ3: What are the advantages and drawbacks of a gamified post-training approach? In addition, we set out to investigate the pros and cons of gamification as a post-training reinforcement approach from the standpoint of students.

Therefore, the goal of this attitudinal survey is to gauge students' opinions, level of satisfaction, and overall attitude towards our gamified post-training approach.

After developing an initial draft of the survey questionnaire, we ran a pilot test with a group of five (Computer Science) graduate students. Our goal was to validate the questionnaire in terms of clarity, objectiveness, and correctness. We refined the questionnaire based on the feedback from the pilot study and created an online version using Google Forms (https://www.google.com/forms/about/). Table 3 summarizes each question in the questionnaire. The questionnaire comprises 24 questions, divided into three parts: Q1 to Q4 are aimed at gathering background information from the participants; Q5 to Q9 are questions about code smell identification (both with and without CleanGame); and Q10 to Q24 are related to the participants' experience while using CleanGame. It is worth mentioning that questions Q10 to Q21 were
adapted from MEEGA+ [18], which is a framework for evaluating serious games tailored to computing education.

The participants were asked to answer the questionnaire immediately by the end of the experiment. We made it clear to the students that questionnaire completion was optional and anonymous.

Table 2: Code smell identification performance of the two experimental groups using an IDE.

Group 1
Subject | Correct Answers | Incorrect Answers | Skipped | Average Time†
#1      | 3 | 7 | 0 | 2,340
#2      | 2 | 8 | 0 | 1,080
#3      | 5 | 5 | 0 | 1,500
#4      | 3 | 7 | 0 | 1,380
#5      | 4 | 6 | 0 | 1,140
#6      | 3 | 7 | 0 | 1,680
#7      | 2 | 8 | 0 | 1,860
#8      | 5 | 5 | 0 | 2,220
#9      | 2 | 8 | 0 | 1,680

Group 2
Subject | Correct Answers | Incorrect Answers | Skipped | Average Time†
#1      | 4 | 6 | 0 | 1,560
#2      | 3 | 6 | 1 | 1,740
#3      | 0 | 6 | 4 | 540
#4      | 1 | 9 | 0 | 2,160
#5      | 2 | 2 | 6 | 1,080
#6      | 0 | 6 | 4 | 1,200
#7      | 2 | 8 | 0 | 1,080
#8      | 0 | 7 | 3 | 1,740
#9      | 2 | 4 | 4 | 1,380

Descriptive Statistics for both Experimental Groups
Min            | 0    | 2    | 0    | 540
Max            | 5    | 9    | 6    | 2,340
Average (Mean) | 2.39 | 6.39 | 1.22 | 1,520.00
Standard Dev.  | 1.54 | 1.69 | 3.95 | 464.30
† Average time is indicated in seconds.

Table 3: Questionnaire

Q1 (Single choice). Student level: a. Computer Science undergraduate student; b. Information Systems undergraduate student; c. Computer Science graduate student.
Q2 (Nominal scale). Age: (1) 17-22 years old; (2) 23-28 years old; (3) 29-34 years old; (4) 34+ years old.
Q3 (Nominal scale). How often do you play (digital or non-digital) games? (1) Never; (2) Rarely; (3) Monthly; (4) Weekly; (5) Daily.
Q4 (Nominal scale). What is your experience with Java or object-oriented development? (1) None; (2) Academic experience; (3) Beginner professional experience; (4) Advanced professional experience.
Q5 (Nominal scale). How difficult was the execution of the code smell identification activity without CleanGame? (1) Very easy; (2) Easy; (3) Balanced; (4) Hard; (5) Very hard.
Q6 (Nominal scale). How difficult was code smell identification with CleanGame? (1) Very easy; (2) Easy; (3) Balanced; (4) Hard; (5) Very hard.
Q7 (Likert*). I skipped questions (i.e., code smell identification activities) because they were too hard.
Q8 (Likert*). I tried to answer all questions consciously (without guessing).
Q9 (Likert*). I tried to solve the challenges exhaustively before taking advantage of tips (provided by CleanGame).
Q10 (Likert*) [Challenge]. CleanGame is adequately challenging without becoming boring.
Q11 (Likert*) [Satisfaction]. Completing tasks in CleanGame gave me a feeling of achievement.
Q12 (Likert*) [Satisfaction]. I would recommend this game to my peers.
Q13 (Likert*) [Social interaction]. CleanGame promotes competition.
Q14 (Likert*) [Fun]. There was an element in CleanGame that captured my attention.
Q15 (Likert*) [Focus]. CleanGame kept me engaged during the execution of activities / I lost track of time / I forgot about my surroundings.
Q16 (Likert*) [Relevance]. The contents of CleanGame are relevant to my interests, and it is clear how they relate to code smell identification skill acquisition.
Q17 (Likert*) [Relevance]. I would like to use more tools similar to CleanGame throughout my academic formation.
Q18 (Likert*) [Relevance]. I prefer to practice the concepts of code smells with CleanGame rather than with other educational methods.
Q19 (Likert*) [Learning perception]. CleanGame contributed to my learning and was efficient in comparison to other activities.
Q20 (Likert*) [Learning perception]. CleanGame (Identification Game) contributed to practicing the concepts of code smell identification.
Q21 (Likert*) [Learning perception]. CleanGame (Quiz) contributed to remembering code smell identification concepts.
Q22 (Open answer). What were the positive aspects of CleanGame?
Q23 (Open answer). What were the negative aspects of CleanGame?
Q24 (Open answer). Do you have any additional comments regarding CleanGame?
* Likert scale: (-2) Definitely disagree; (-1) Disagree; (0) Indifferent; (1) Agree; (2) Definitely agree.

7.1 Attitudinal Survey Results

Eighteen participants completed the questionnaire. Most participants (thirteen participants, which represents roughly 72.2% of our sample) are 23 to 28 years old. Also, thirteen participants (72.2%) claimed that they play games at least once a month, out of which seven (38.9%) claimed that they play games on a daily basis. Only three participants (16.7%) claimed never playing games. Regarding the participants' experience with Java or object-oriented development: eleven participants (61.1%) claimed having professional experience with either Java or object-oriented development, and seven (38.9%) claimed having only academic experience.

Figure 4 highlights the questionnaire results regarding questions Q5 and Q6, which ask participants about the difficulty of performing code smell identification activities with and without the support of CleanGame. The results would seem to suggest that the participants found the activity more challenging to perform without CleanGame.

[Figure 4: Difficulty in performing the code smell identification without and with CleanGame (distribution of responses on the 1-5 difficulty scale).]

Figure 5 shows the answers we collected for questions Q7, Q8, and Q9. From looking at the answers to Q7, we can see that most participants avoided skipping questions, regardless of their difficulty level. As for Q8, only 2 participants (11.1%) affirmed that they tried to answer all questions consciously. So we conjecture that some participants tried to guess the correct answer to some code smell identification tasks. Finally, the participants had mixed opinions concerning Q9: our results would seem to suggest that some participants tried to take advantage of the tips provided by CleanGame before tackling the code smell identification task. On the other hand, some participants only took advantage of tips after exhaustively trying to grasp the code smell identification task at hand.

[Figure 5: Results for survey questions Q7, Q8, and Q9 (diverging stacked bars from "Definitely disagree" to "Definitely agree").]

Figure 6 shows the answers related to the participants' experiences using CleanGame. The items are grouped according to the
following factors: challenge, satisfaction, social interaction, fun, focus, relevance, and learning. For each item, the rightmost column of Figure 6 presents the median value of the participants' responses, ranging from "Definitely Disagree" (-2) to "Definitely Agree" (2). No factor presented a negative median value. The factors "satisfaction" and "focus" presented median values of 0, meaning that these aspects were observed with indifference or mixed opinions by the participants. For all other factors, the median values were positive. The items Q14 and Q19 had the most "Definitely agree" responses. These items are related to the adequacy of the content of CleanGame to the course, and the learning impact of the quiz on how the participants committed code-smell-related concepts to memory. Except for items Q11, Q12, and Q15, all other items received more than 50% of positive responses ("Agree" and "Definitely agree"). The items with the highest count of positive responses are the following: Q13 (i.e., likeliness of recommending CleanGame to other students), Q16 (i.e., adequacy of CleanGame contents), and Q21 (i.e., how much CleanGame supports the memorization of code-smell-related concepts), with 83.3% of positive responses each. In contrast, items Q15 (i.e., focus), Q11 (i.e., satisfaction), and Q12 (i.e., feeling of achievement) received the highest number of negative responses, with 44.4%, 38.9%, and 27.8% of "Disagree" or "Definitely disagree" responses, respectively.

[Figure 6: Participants' experience with CleanGame. Diverging stacked bars of Likert responses for items Q10 to Q21, grouped by factor (Challenge, Satisfaction, Social Interaction, Fun, Focus, Relevance, Learning Perception); the rightmost column shows each item's median.]

As for RQ3, positive and negative feedback from participants was captured from the answers to Q22, Q23, and Q24. To gather and synthesize such feedback, we employed an approach inspired by the coding phase of grounded theory [24]. Two researchers analyzed the responses individually and marked relevant segments with "codes" (i.e., tagging with keywords). Afterwards, the researchers compared their codes to reach consensus, and tried to group these codes into relevant categories. Consequently, it is possible to count the number of occurrences of codes and the number of items in each category to understand what recurring positive and negative aspects were reported by the participants. Tables 4 and 5 list the positive and negative aspects reported by the participants. The column "code" groups recurring items observed in the responses. The column "category" presents a broad category used to group the codes. The column "#" presents the number of times each code appeared in the responses.

Table 4: Positive Aspects stated by the participants

Positive Aspect                             | Category           | #
Ludic and interactive tool                  | Design / Usability | 7
Supports comprehension of code smells      | Learning           | 6
Competition                                 | Gamification       | 5
Ease of use                                 | Design / Usability | 3
Tips                                        | Gamification       | 2
Dynamic leaderboards                        | Gamification       | 2
Multiple-choice questions                   | Question structure | 2
Adequate for different profiles of students | Learning           | 1
Score system                                | Gamification       | 1
Question and tip visualization              | Design / Usability | 1
Motivating                                  | Learning           | 1
Interesting for online courses              | Learning           | 1

We found 32 occurrences of 12 distinct codes describing positive aspects. These codes are grouped in four categories: "Design and Usability" (11 occurrences); "Gamification" (10 occurrences); "Learning" (9 occurrences); and "Question structure" (2 occurrences). The most recurring codes found were the following: "Ludic and
interactive tool" (7 occurrences); "Supports comprehension of code smells" (6 occurrences); and "Competition" (5 occurrences). These results provide evidence of the positive attitude of students towards the effects of CleanGame (and its gamification approach) when applied to the acquisition of code smell identification skills.

We found 28 occurrences of 15 distinct codes representing negative feedback. These codes are grouped in three categories: "Design and usability" (11 occurrences); "Business rules" (10 occurrences); and "Experiment design" (7 occurrences). Problems in the "Design and usability" group are related to the graphical user interface, how its visual elements are arranged, or the lack of a particular visual element. "Business rules" are problems related to how things work in the software. These are the most interesting pieces of feedback, because they affect the functional aspects of CleanGame. For instance, the most recurring "Business rules" codes were related to showing the correct answer as feedback for the user after picking a wrong choice, and the suggestion of not disclosing the scores of all users, as doing so may lead to embarrassment or undermine the motivation of users with lower performance. Finally, the category "Experiment design" groups codes related to complaints regarding how the experiment was organized. For instance, two users complained about the duration of the classes that took place before the experiment. We observed that most of the negative aspects are actually opportunities for improvement and do not jeopardize the learning process using CleanGame.

Table 5: Negative Aspects stated by the participants

Negative Aspect                                                           | Category           | #
Interface problems                                                        | Design / Usability | 7
Should show the correct answer after failure                              | Business rules     | 4
Disclosing scores of all participants                                     | Business rules     | 4
Code length                                                               | Experiment design  | 2
Confusing scoring system                                                  | Business rules     | 1
Rules for losing points when using tips                                   | Business rules     | 1
Should have other types of questions                                      | Design / Usability | 1
Not being able to see tips already used                                   | Business rules     | 1
Experiment duration                                                       | Experiment design  | 1
Form of displaying earned and lost points                                 | Design / Usability | 1
Difference between the duration of the quiz and identification activities | Experiment design  | 1
Poor experiment instructions                                              | Experiment design  | 1
Should have provided the correct answers by the end of the experiment     | Experiment design  | 1
Unknown metrics used                                                      | Experiment design  | 1
Should show the quantity of questions                                     | Design / Usability | 1

As for RQ2, the results of our attitudinal survey would seem to suggest that most participants showed a positive attitude towards CleanGame. Results of Q5 and Q6 indicate that the participants found it less difficult to practice code smell identification with CleanGame support. The results related to the participants' experience with CleanGame show positive perceptions regarding the relevance, perception of learning, and social interaction aspects of the tool. However, there were some mixed opinions regarding focus and satisfaction. Among the positive aspects described by the participants, there were 10 mentions of gamification and 9 mentions of positive effects on learning, out of the 32 occurrences identified. Therefore, we have positive findings about students' attitude towards CleanGame, especially regarding the gamification strategy used in the tool and its effect on learning.

8 THREATS TO VALIDITY

As with any empirical study, this experiment has several threats to validity. In this section we outline the main threats to four types of validity that might jeopardize our experiment: (i) internal, (ii) external, (iii) conclusion, and (iv) construct. Internal validity has to do with the confidence that can be placed in the cause-effect relationship between the treatments and the dependent variables in the experiment. External validity is concerned with generalization: whether the cause-effect relationship between the treatments and the dependent variables can be generalized outside the scope of the experiment. Conclusion validity is centered around the conclusions that can be drawn from the relationship between treatment and outcome. Finally, construct validity is about the relationship between theory and observation: whether the treatments properly reflect the cause and whether the outcomes suitably represent the effect.

8.1 Internal Validity

We mitigated the selection bias issue by using randomization. However, since we assumed that all subjects have similar backgrounds, no blocking factor was applied to minimize the threat of possible variations in the performance of the subjects. Therefore, we cannot rule out the possibility that some variability in how the subjects performed stems from their previous knowledge and experience. Another possible threat to internal validity has to do with the files containing the code smells we used in our experiment: if we had used other files, the results could have been different. Nevertheless, we tried to mitigate this threat by selecting files with code smells that are representative for the experience level of undergraduate and graduate students alike. Specifically, we selected code smells from Landfill [17], which is a web-based platform for sharing and validating code smell datasets. To the best of our knowledge, Landfill comprises the largest publicly available collection of manually validated smells.

8.2 External Validity

The main threat to the external validity of our results is the sample: as mentioned, the experiment was staffed by students, so all subjects have academic backgrounds. Thus, the insights gained from our experiment can be generalized only to similar settings (i.e., in the context of students with similar experience). We are aware that further replication of our experiment is needed to establish more conclusive results: to increase external validity, we need to replicate our study with a larger sample. Moreover, our sample consisted solely of students. Therefore, we cannot be sure whether CleanGame would also be able to engage practitioners while helping them hone their code smell identification skills. We cannot rule out the threat that the results could have been different if practitioners had been selected as subjects. So, it may be worthwhile to replicate our experiment with a more diverse sample (including practitioners) to corroborate our findings. It is also worth pointing out that the subjects in our sample may have a higher affinity for video games, and thus better attitudes toward a gamification-based approach, than the general population.

Additionally, regarding the generalization of our findings, we are aware that we limited the scope of our experiment to only Java
programs. For future studies, we intend to replicate our experiment using programs written in other programming languages. It is also worth noting that we focused on code smells found in open-source programs; hence, we cannot speculate about how the results of our experiment would differ for industrial-scale software. However, we conjecture that using programs that are way too complex might hinder learning.

8.3 Conclusion Validity

The approach we used to analyze the results of our experiment represents the main threat to the conclusions we can draw from our study: we discussed our results by presenting descriptive statistics and statistical hypothesis tests.

8.4 Construct Validity

We cannot rule out the possibility that the measures we employed in our experiment may not be appropriate to quantify the effects we set out to investigate. For instance, the amount of correct answers may be neither the only nor the most important predictor of post-training effectiveness and engagement. If the measures we used do not reflect the target theoretical construct, the results of our experiment might be less reliable. One way to extend our study is to examine which other measures might be relevant for a model of post-training effectiveness in the context of code smell identification. Moreover, given that the experiment took place as part of a course in which students are graded, students (i.e., subjects) may bias their answers in hopes of getting a better grade. To mitigate this threat, subjects were assured that the experiment would not have any effect on their grades.

9 CONCLUDING REMARKS

We empirically evaluated whether gamification can have a positive impact on post-training reinforcement for code smell identification skills and concepts. To the best of our knowledge, this is the first study that applies gamification to engage students in code smell identification tasks during post-training reinforcement. To evaluate the effectiveness of gamification in this context, we used the average rate of correct answers as a proxy. According to the results of our experiment, on average, subjects managed to identify twice as many code smells during learning reinforcement with a gamified approach as with the IDE-driven approach: the results of a non-parametric test show that subjects perform significantly better when using CleanGame than when using an IDE. We interpret these findings as general support for our hypothesis that gamification can be applied to engage students in activities that tend to be somewhat tedious and complex, viz., code smell identification. Furthermore, subjects were less apt to skip code identification tasks when using CleanGame. We believe that this can be ascribed to the metric-, refactoring-, and definition-related tips provided by the tool.

The results of our post-experiment attitudinal survey suggest that most participants showed a positive attitude towards CleanGame (and its gamification strategy) as an educational support tool for practicing code smell identification. The participants' evaluation of the tool revealed that, while "Focus" and "Satisfaction" were the lowest rated aspects of the game (though not negatively rated), the other aspects were rated positively, especially "Relevance", "Learning Perception", and "Social Interaction".

REFERENCES

[1] B. S. Akpolat and W. Slany. 2014. Enhancing software engineering student team engagement in a high-intensity Extreme Programming course using gamification. In 27th IEEE Conference on Software Engineering Education and Training (CSEE&T). 149-153.
[2] Manal M. Alhammad and Ana M. Moreno. 2018. Gamification in software engineering education: A systematic mapping. Journal of Systems and Software 141 (2018), 131-150.
[3] M. M. Alhammad and A. M. Moreno. 2018. Gamification in software engineering education: A systematic mapping. Journal of Systems and Software 141 (2018), 131-150.
[4] Wendy L. Bedwell, Davin Pavlas, Kyle Heyne, Elizabeth H. Lazzara, and Eduardo Salas. 2012. Toward a taxonomy linking game attributes to learning: An empirical study. Simulation & Gaming 43 (2012), 729-760.
[5] Jonathan Bell, Swapneel Sheth, and Gail Kaiser. 2011. Secret ninja testing with HALO software engineering. In Proceedings of the 4th International Workshop on Social Software Engineering (SSE). 43-47.
[6] Pierre Bourque and Richard E. Fairley. 2014. Guide to the Software Engineering Body of Knowledge (SWEBOK(R)): Version 3.0. IEEE Computer Society Press.
[7] Luis Cruz, Rui Abreu, and Jean-Noël Rouvignac. 2017. Leafactor: Improving energy efficiency of Android apps via automatic refactoring. In 4th International Conference on Mobile Software Engineering and Systems (MOBILESoft). 205-206.
[8] Sebastian Deterding, Miguel Sicart, Lennart Nacke, Kenton O'Hara, and Dan Dixon. 2011. Gamification: using game-design elements in non-gaming contexts. In CHI '11 Extended Abstracts on Human Factors in Computing Systems. 2425-2428.
[9] Daniel J. Dubois and Giordano Tamburrelli. 2013. Understanding gamification mechanisms for software development. In Proceedings of the 2013 Joint Meeting on Foundations of Software Engineering (ESEC/FSE). 659-662.
[10] Leonard Elezi, Sara Sali, Serge Demeyer, Alessandro Murgia, and Javier Pérez. 2016. A game of refactoring: Studying the impact of gamification in software refactoring. In Scientific Workshop Proceedings of XP2016. 1-6.
[11] Martin Fowler. 2018. Refactoring: Improving the Design of Existing Code. Addison-Wesley Professional.
[12] G. Fraser. 2017. Gamification of software testing. In 12th IEEE/ACM International Workshop on Automation of Software Testing (AST). 2-7.
[13] Felix Garcia, Oscar Pedreira, Mario Piattini, Ana Cerdeira-Pena, and Miguel Penabad. 2017. A framework for gamification in software engineering. Journal of Systems and Software 132 (2017), 21-40.
[14] Eduardo Herranz, Ricardo Colomo-Palacios, and Antonio de Amescua Seco. 2015. Gamiware: A gamification platform for software process improvement. In 22nd Conference on Systems, Software and Services Process Improvement (EuroSPI). 127-139.
[15] P. Kruchten, R. L. Nord, and I. Ozkaya. 2012. Technical debt: From metaphor to theory and practice. IEEE Software 29 (2012), 18-21.
[16] The Joint Task Force on Computing Curricula. 2015. Curriculum Guidelines for Undergraduate Degree Programs in Software Engineering. Technical Report. ACM/IEEE Computer Society.
[17] F. Palomba, D. Di Nucci, M. Tufano, G. Bavota, R. Oliveto, D. Poshyvanyk, and A. De Lucia. 2015. Landfill: An open dataset of code smells with public evaluation. In 12th IEEE/ACM Working Conference on Mining Software Repositories (MSR). 482-485.
[18] Giani Petri, Christiane Gresse von Wangenheim, and Adriano Ferreti Borgatto. 2017. A large-scale evaluation of a model for the evaluation of games for teaching software engineering. In 39th IEEE/ACM International Conference on Software Engineering: Software Engineering Education and Training Track (ICSE-SEET). 180-189.
[19] Jose Miguel Rojas and Gordon Fraser. 2016. Code Defenders: a mutation testing game. In 9th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). 162-167.
[20] Tommaso Dal Sasso, Andrea Mocci, Michele Lanza, and Ebrisa Mastrodicasa. 2017. How to gamify software engineering. In 24th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). 261-271.
[21] Tushar Sharma and Diomidis Spinellis. 2018. A survey on software smells. Journal of Systems and Software 138 (2018), 158-173.
[22] Mauricio Ronny Almeida Souza, Lucas Veado, Renata Teles Moreira, Eduardo Figueiredo, and Heitor Costa. 2018. A systematic mapping study on game-related methods for software engineering education. Information and Software Technology 95 (2018), 201-218.
[23] D. Spinellis. 2012. Refactoring on the cheap. IEEE Software 29 (2012), 96-95.
[24] Klaas-Jan Stol, Paul Ralph, and Brian Fitzgerald. 2016. Grounded theory in software engineering research: a critical review and guidelines. In 38th IEEE/ACM International Conference on Software Engineering (ICSE). 120-131.
[25] C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, and A. Wesslén. 2012. Experimentation in Software Engineering. Springer.
