An Application of Importance-Performance Analysis
https://doi.org/10.1007/s11092-020-09338-4
Magdalena Cladera, Department of Applied Economics, University of the Balearic Islands, Palma de Mallorca, Spain (mcladera@uib.es)
Received: 4 August 2020 / Accepted: 13 October 2020 / Published online: 20 October 2020
© Springer Nature B.V. 2020
Abstract
Students’ feedback is usually gathered in institutions of higher education to evaluate the
teaching quality from the students’ perspective, using questionnaires administered at
the end of the courses. These evaluations are useful to pinpoint the course strengths,
identify areas of improvement, and understand the factors that contribute to students’
satisfaction. They are an important mechanism for improving the teaching and learning
processes. However, there is little standardisation in how this kind of feedback is
collected, analysed, and used, and its active use for improving the teaching and
learning processes is low. Additionally, students are rarely asked whether they consider
the aspects included in the questionnaires to be really important; this information
would make it possible to put students’ evaluations of teaching into perspective. This research proposes the
use of importance-performance analysis (IPA), together with a students’ evaluation of
teaching questionnaire, as a tool for lecturers to collect, analyse, and interpret the data
obtained from students’ feedback. This work shows how, using IPA, lecturers can
obtain a visual representation of which teaching attributes are important to their
students, how important each attribute is, and how well the instructor performed on
each attribute from their students’ point of view. The usefulness of this tool for lecturers
to assess students’ evaluation of their teaching and to guide the course programming in
higher education is shown.
1 Introduction
Students are a key element, if not the most important, in the educational system.
Consequently, their opinions, perceptions, and feedback are an important input for
the assessment of different aspects of the system, particularly in the case of undergrad-
uate college education. Students have an important role since, because of their direct
classroom contact with the lecturer, they can uniquely answer questions related to
numerous aspects of the quality of teaching (Casey et al. 1997). Students’ voices should
be sought and their views must be seen as centrally important to the development of
knowledge about teaching in higher education (Su and Wood 2012).
One of the main purposes for which the students’ feedback is collected at the
undergraduate level is the teaching quality assessment. Questionnaires to evaluate the
quality of teaching are frequently used in institutions of higher education, where
students’ evaluations of teaching quality have been used for more than 80 years, and
the practice has grown in importance in recent decades (Borch et al. 2020; Gursoy and
Umbreit 2005). In many countries, students are asked about their perceptions of
teaching in order to make decisions about the further development of teaching practices
on the basis of this feedback (Gaertner and Brunner 2018). Receiving feedback from
students has become a normal part of life for university lecturers worldwide (Flodén
2017). These evaluations are useful to pinpoint the course strengths, identify areas of
improvement, and understand the factors that contribute to students’ satisfaction.
Course and teacher evaluations should help to reduce the gap between what lecturers
and students perceive as the quality of teaching (Venkatraman 2007), and they are an
important mechanism for improving the teaching and learning processes (Borch et al.
2020; Borman and Kimball 2005; Flodén 2017; Gaertner and Brunner 2018; Jaafar
et al. 2016). According to Aditomo and Köhler (2020), teachers and the way they teach
are major factors which determine students’ learning outcomes. Finding out the factors
affecting the students’ satisfaction with the teaching is a relevant issue, since the levels
of satisfaction or dissatisfaction strongly affect the student’s success or failure in
learning (Sembiring et al. 2017).
Students’ evaluations of teaching are applied in almost every higher educational
system in the world (Flodén 2017; Zabaleta 2007). Usually, formal measurement of
the teaching quality is conducted through course evaluations completed by students at
the end of the course. Many studies evaluate teaching quality based on student rating
data since, in addition to being relatively efficient, such data are typically based on
students’ extensive experience of the assessed behaviours; moreover, student ratings
reflect students’ interpretations of the learning environment, which is an important
mediator between teaching and learning (Aditomo and Köhler 2020). Although the
expansion of students’ evaluations of teaching quality in higher education has trig-
gered an ongoing discussion about their validity, Grammatikopoulos et al. (2015)
report that numerous well-designed studies confirmed the usefulness and validity of
this tool.
Student feedback of some sort is usually collected by most institutions. Numerous
colleges and universities have accepted the measurement of students’ perceptions of
instruction as a major element of teaching evaluation, and this trend is likely to continue
given the increased emphasis on teaching quality. However, there is little
standardisation in how this kind of feedback is collected, analysed, and used. The
students’ evaluation of teaching is one of the most thoroughly studied topics in higher
education (Borch et al. 2020; Gursoy and Umbreit 2005), and the interest persists, since
there is still little understanding of how to use and how to act upon the collected data
(Tóth et al. 2013).
Despite the high number of collected evaluations, as Borch et al. (2020) recognize, it
is evident that the use of evaluation data remains low. Based on previous studies, these
authors report several explanations for why academics do not use survey responses,
such as superficial surveys, a low desire to develop teaching, little support with respect
to how to follow up, the absence of explicit incentives to make use of these data, time
pressure at work, and scepticism as to the relevance of students’ feedback for teaching
improvement.
Students frequently are asked about their opinions regarding several aspects of the
course and teacher performance. However, they are rarely asked whether they consider those
aspects really important, which would make it possible to put students’ evaluations of
teaching into perspective. As Borch et al. (2020) indicate, few evaluations collect information about which
aspects of courses students consider as important for their learning. Nale et al. (2000)
pointed out that studies directed at improving higher education outcomes have a drawback:
they focus exclusively either on importance or on performance. To alleviate this concern,
the two factors can be combined (McKillip 2001; Nale et al. 2000). According to Alberty
and Mihalik (1989), the use of two scales would result in a more informative evaluation,
since not only does the evaluator know what the participant observed (e.g. the instructor
communicates ideas clearly) but also how important this was to the participant (e.g. very
important). This knowledge would enable evaluators to put into better perspective the
results of evaluations. It is important for researchers to first determine what students
expect regarding different attributes, in the same way that consumer perceptions of
service quality result from comparing expectations prior to receiving the service with the
actual experience of the service (Zeithaml et al. 1990).
This research aims to address this gap through the use of importance-performance
analysis (IPA). The usefulness of this tool to assist the assessment of the students’
evaluation of teaching and to guide the course programming in higher education is
shown. The IPA is a tool that can provide usable feedback to improve training. This
technique measures the gap between how important an attribute is and how “good” (performance)
it is perceived to be by a student, presenting the results graphically on a 2 × 2 matrix, the
IPA grid. The quadrant in which the data is placed in this matrix helps determine
possible future actions (Siniscalchi et al. 2008).
Originally developed in the field of marketing, IPA has been applied and used
successfully as a managerial or research tool in many areas such as tourism (Su 2013),
health services (Gonçalves et al. 2014), recreation (Gill et al. 2010), business and
management (Riviezzo et al. 2009), and sports research (Rial et al. 2008). In the
education field, the literature has shown several studies done by researchers such as
Alberty and Mihalik (1989), Attarian (1996), Chen (2018), Joseph et al. (2005),
Kanchana and Triwanapong (2011), Keong (2017), McLeay et al. (2017), Mourkani
and Shohoodi (2013), O’Neill and Palmer (2004), Silva and Fernandes (2011), Silva
and Fernandes (2010), Siniscalchi et al. (2008), Tóth et al. (2013), Wang and Tseng
(2011), Yu and Ming (2012), and Yusoff (2012), in the field of adult education,
program effectiveness, school selection, and evaluation of institution services. Howev-
er, its application is almost non-existent (Anderson et al. 2016; Jaafar et al. 2016) in
the context of the analysis that a lecturer can make of his/her students’ responses
regarding attributes related to the quality of teaching, and how the information
obtained can be used to improve future teaching.
This study reports the potential of IPA, together with a questionnaire of teaching
quality evaluation, such as the Student Evaluation of Educational Quality (SEEQ)
(Marsh 1982), as a tool for lecturers to analyse their students’ feedback relating to a set
of attributes associated with the teaching quality of their courses in a higher education
context. Given IPA’s adaptability, simplicity, and ease of administration, scoring,
evaluation, and interpretation, it may be a valuable tool for lecturers to use in their
courses, providing them with information useful for guiding course programming.
IPA can provide the instructor with a visual representation of what teaching attributes
are important for their students, how important each attribute is, and how well the
instructor performed on each attribute from their students’ perspective.
As Attarian (1996) points out, implementing IPA requires four steps: developing a
set of attributes that accurately describe and reflect the topic of study, presenting the
attributes to respondents in questionnaire form that requires them to rate importance
and teacher performance for each attribute, analysing data for the importance and
performance values of each attribute, and plotting each attribute on a four-section
action grid according to its rated importance and teacher performance.
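As an illustration of the data analysis step, the sketch below aggregates the two questionnaires into one importance and one performance value per attribute. It is a minimal sketch only: the file names, column layout, and use of Python/pandas are assumptions of this illustration, not part of the original study.

```python
# Minimal sketch of the IPA data-analysis step (hypothetical file and column names).
import pandas as pd

# Rows = students, columns = rated attributes, values = 1-5 Likert ratings.
importance = pd.read_csv("pre_survey_importance.csv")    # administered at the start of the course
performance = pd.read_csv("post_survey_performance.csv")  # administered at the end of the course

# One importance and one performance score (here, the mean) per attribute.
ipa = pd.DataFrame({
    "importance": importance.mean(),
    "performance": performance.mean(),
})
print(ipa.sort_values("importance", ascending=False))
```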
Classical IPA measures the differences between the perceived importance and perfor-
mance of each of the attributes considered relevant for the evaluation of a certain topic. For
each attribute, measures of central tendency (means or medians) of its importance and
performance scores are calculated. Thereafter, graphing these coordinates, a two-dimensional matrix,
called the action grid, is formed (Blake et al. 1978). The coordinates of each attribute will
fall into one of four quadrants (Ortinau et al. 1989). The quadrant characterized by high
importance and high performance is defined as “keep up the good work”. For attributes
landing in this quadrant, the current conditions and expected outcomes are being met.
These attributes are considered strengths. The quadrant with low importance and high
performance is labelled as “possible overkill” and the quadrant with low importance and
low performance is labelled “low priority”. The attributes landing in these two quadrants
are considered superfluous due to their low importance, and for future actions, it may be
advisable to consider not continuing to dedicate efforts to these attributes. Finally, the
quadrant with high importance and low performance is labelled as “concentrate here”,
which identifies the attributes with potential for improvement, on which it is advisable to
concentrate efforts for corrective action. The IPA helps to identify the strengths and
weaknesses of teaching and, therefore, to decide about the pertinent future actions to
improve the teaching-learning experience. Its repeated use in successive courses can help
the lecturer to evaluate the effectiveness of the corrective measures implemented.
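The quadrant assignment described above can be expressed compactly in code. The sketch below is illustrative only; in particular, counting attributes that lie exactly on a crosshair as “high” is an arbitrary convention of this example.

```python
def classify(importance: float, performance: float,
             imp_crosshair: float, perf_crosshair: float) -> str:
    """Assign an attribute to one quadrant of the action grid (Martilla and James 1977)."""
    high_importance = importance >= imp_crosshair    # ties treated as "high" by convention
    high_performance = performance >= perf_crosshair
    if high_importance and not high_performance:
        return "A: concentrate here"
    if high_importance and high_performance:
        return "B: keep up the good work"
    if not high_importance and not high_performance:
        return "C: low priority"
    return "D: possible overkill"

# Example: important (4.5) but performing below the crosshair (3.2 < 3.8) -> "concentrate here".
print(classify(4.5, 3.2, imp_crosshair=4.0, perf_crosshair=3.8))
```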
The data to carry out the IPA analysis is collected through surveys. Following the
methodology used by Anderson et al. (2016), the importance survey is designed to be
implemented at the beginning of the semester and could also be considered a pre-
survey; and the performance survey is deployed at the end of the semester and is
considered a post-survey.
According to all that has been said, the IPA can provide a useful and adaptable tool
for lecturers to analyse and use their students’ feedback about the teaching quality. IPA
is an established and effective evaluation tool that is easy to apply, and it provides a
visualization of data that affords immediate feedback and can be used to facilitate
change in areas of concern (Siniscalchi et al. 2008).
The first step for applying the IPA is to develop a set of attributes that accurately
describe and reflect the topic of study. With this purpose, this research has used an
adaptation of the SEEQ (Marsh 1982), for determining the attributes to be assessed in
the IPA analysis.
The SEEQ was originally developed by Marsh (1982) to be administered at the end of a
course, to collect the students’ evaluation of teaching quality. However, in this study, it
has been used with a twofold objective. On the one hand, it collects the students’
perception of the teacher’s performance at the end of the course, as is its usual
objective. On the other hand, it has also been used to gather the importance that
students give to the different aspects of the quality of teaching before the beginning of
the course. For this last objective, minimal changes have been made in the writing of
the items to allow administering the survey before starting the lectures.
As Cladera (2020) summarizes, given the widespread use of teaching evaluation
questionnaires, a wide range of instruments for collecting students’ assessment of their
courses and teachers have been developed in recent decades, both qualitative and
quantitative (Brennan and Williams 2004). A review can be found in Spooren et al.
(2013) and Richardson (2005). However, the SEEQ is considered one of the most
widely used and universally accepted instruments for collecting the students’ evaluation
of teaching (Ghedin and Aquario 2008; Grammatikopoulos et al. 2015), and its
reliability and validity have been confirmed by numerous researchers (e.g. Al-
Muslim and Arifin 2015; Coffey and Gibbs 2001). The “superiority” of SEEQ over
other students’ evaluation of teaching instruments rests on psychometric analyses, as it
consistently yields high validity and reliability scores (Coffey and Gibbs
2001; Marsh 1987; Marsh and Hocevar 1991). The SEEQ questionnaire is the instrument
that has been most widely used in published work, and its factor structure has
been confirmed in several studies (Richardson 2005). Grammatikopoulos et al. (2015)
explained that SEEQ has successfully provided valid and reliable students’ evaluation
of teaching scores in several higher education settings and different countries (e.g.
Australia, USA, UK, Hong Kong, China, Spain, India, Greece) (Balam and Shannon
2010; Coffey and Gibbs 2001; Marsh and Roche 1997; Marsh 1986; Watkins and
Thomas 1991; Grammatikopoulos et al. 2015). Another critical point in favour of the
SEEQ is the theoretical basis on which it was developed. Other existing instruments for
students’ evaluation of teaching did not take into consideration the theories of
teaching and learning in higher and adult education. Marsh and Dunkin (1997)
evaluated the content of SEEQ in relation to general principles of teaching and learning
in post-secondary education reported by Feldman (1976) and Fincher (1985). They
revealed that SEEQ factors adequately included the principles described in the
aforementioned studies (Grammatikopoulos et al. 2015).
The SEEQ structure, as developed by Marsh (1982), is as follows (Cladera 2020).
The first part of the questionnaire is made up of 29 items grouped in the following blocks:
learning, enthusiasm, organization, group interaction, individual rapport, breadth, ex-
aminations, and assignments. A five-point Likert-type scale ranging from “very poor”
to “very good” is used for assessing each of the items. Next, there are two questions
about the overall assessment of the course and the lecturer. Subsequently, it includes
some questions related to student and course characteristics, such as difficulty, work-
load, prior interest in the subject, expected grade, and major department. Adequate
provision for student comments to open-ended questions is also provided. More details
and the SEEQ survey form can be obtained from Marsh (1982).
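The grouping of items into dimensions can be represented as a simple mapping, which then yields dimension-level scores as the mean of the item means. The item codes and the number of items per block below are placeholders, not the actual SEEQ allocation (see Marsh 1982 for the real instrument); the `ipa` frame is the hypothetical one from the earlier sketch.

```python
# Placeholder item codes; the real SEEQ items and their grouping are given in Marsh (1982).
SEEQ_DIMENSIONS = {
    "learning":      ["q01", "q02", "q03", "q04"],
    "enthusiasm":    ["q05", "q06", "q07", "q08"],
    "organization":  ["q09", "q10", "q11", "q12"],
    "interaction":   ["q13", "q14", "q15", "q16"],
    "rapport":       ["q17", "q18", "q19", "q20"],
    "breadth":       ["q21", "q22", "q23", "q24"],
    "examinations":  ["q25", "q26", "q27"],
    "assignments":   ["q28", "q29"],
}

# Dimension-level importance and performance: the mean of the corresponding item means.
dimension_scores = {dim: ipa.loc[items].mean() for dim, items in SEEQ_DIMENSIONS.items()}
```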
The SEEQ has been the instrument chosen in the present study, but using an adapted
version (Cladera 2020) to gather not only students’ assessment of the course and
lecturer performance but also the importance that students give to the different aspects
of the teaching of an undergraduate course.
This research combines the use of the SEEQ questionnaire with the IPA technique to
demonstrate their potential as an effective instrument for lecturers to evaluate the
quality of their teaching from their students’ perspective. The main objective of the
proposed procedure is for lecturers to identify the factors that students rate as most
important but on which the performance of the lecturer/course is rated low, thereby
highlighting the areas with potential for improvement. The findings help to
identify the less satisfactory teaching attributes, which require improvement actions.
With this information, lecturer’s teaching strategies can be modified to enhance
students’ satisfaction and, in turn, the students’ learning process.
There appears to be a general consensus that student feedback helps to improve
courses (Flodén 2017). However, despite an overall aim to improve teaching and the
generally positive attitudes of academics, the actual use of evaluation data for
these purposes is low (Borch et al. 2020). Having simple instruments that are easy to apply
and interpret could help lecturers make more active use of the data provided by students’
evaluations, and use them as pedagogical tools for the improvement of teaching and
learning. According to Hammonds et al. (2017), given the large investment in students’
evaluation of teaching and the strong likelihood that they will continue to be used to
measure teaching quality and learning outcomes, it is important to maximize the
practical information gained from them. The proposal presented in this work goes in
this direction.
The following section presents the survey instrument administered to gather stu-
dents’ opinions and the statistical methods used for analysing the data. Next, results are
presented, reporting the characteristics of the sample, the importance and the perfor-
mance scores that the surveyed students give to the different aspects of teaching, and
the importance-performance analysis. Finally, the main findings are discussed.
2 Method
2.1 Participants
Since the objective of the study is to show the potential of IPA, together with the
SEEQ, as an instrument for lecturers themselves to collect and analyse their
students’ feedback about the quality of their teaching in a particular course, the
population under study is the students enrolled in the lecturer’s course. In particular,
the participants of the current study were second-year undergraduate Economics
students enrolled on a compulsory introductory course in Econometrics, with 87
students enrolled when the study was conducted.
Based on the previous literature review, two self-administered questionnaires were de-
signed. Both questionnaires included the SEEQ, original or adapted, along with questions
about sociodemographic and academic characteristics of students, such as gender, age,
and studies. The pre-survey was administered at the beginning of the course for gathering
the students’ perceived importance about several aspects of teaching. Since the usual
objective of the SEEQ is the assessment of the lecturer’s performance at the end of the
course, in the pre-survey the writing of the items that make up the SEEQ was adapted (e.g.
the item You have learned something which you consider valuable of the original scale
was changed to To learn something which I consider valuable) (following Cladera 2020).
In the post-survey, administered at the end of the course, the original SEEQ was used for
gathering the students’ perceived performance of each of the attributes included in the
questionnaire. In both questionnaires, the pre- and the post-survey, 29 items were included
to assess the following dimensions: learning, enthusiasm, organization, interaction, rap-
port, breadth, examinations, and assignments. These items were assessed using a five-
point Likert scale ranging from 1 (not at all important) to 5 (very important) in the
importance survey and from 1 (very poor) to 5 (very good) in the performance survey.
Next, the students were asked two questions about the specific subject they were going to
start: Your level of interest in the subject prior to this course is…, measured with five
possible answers (very low, low, medium, high, very high), and The grade that you expect
to obtain in this course is…, with the following possible answers: lower than 3, between 3
and 5, between 5 and 7, between 7 and 9, and higher than 9. Finally, six sociodemographic
and classification questions were included, asking about the degree programme in which the respondents
were enrolled, the subject group, gender, age, whether or not the student was repeating the
subject, and the highest year in which the student was enrolled.
Following the procedure used by Anderson et al. (2016), the pre-survey was
administered the first day of the course, for gathering importance data, and the post-
survey was administered the last day of the course, for gathering the performance data.
In both cases, the students were asked for their voluntary collaboration and were also
informed about the objectives of conducting the survey, and that the procedure was
going to be completely anonymous.
Firstly, individual items of the SEEQ were analysed, calculating their mean scores for
both the importance and the performance assessment. Secondly, the mean scores of the
dimensions in which the individual items are grouped were also calculated. Thirdly, a
reliability analysis was performed for the total scale and for each dimension of the
quality of teaching, using Cronbach’s alpha. Fourthly, differences in the importance
and performance mean scores of the quality of teaching dimensions were calculated and
the importance-performance methodology, described below, was applied.
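Continuing the hypothetical sketches above, the reliability and gap calculations could look as follows; the Cronbach’s alpha formula is the standard one, while the names and data are assumed for illustration only.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Reliability per dimension (on the performance survey) and importance-performance differences.
for dim, items in SEEQ_DIMENSIONS.items():
    alpha = cronbach_alpha(performance[items])
    gap = dimension_scores[dim]["performance"] - dimension_scores[dim]["importance"]
    print(f"{dim}: alpha = {alpha:.2f}, performance - importance = {gap:.2f}")
```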
IPA is a simple graphical technique designed to compare the perceived importance with
the perceived performance of the corresponding attribute. The IPA model provides a
matrix that can enable lecturers to detect the most important teaching attributes as
perceived by students. A high level of performance with these priority characteristics is
related to students’ satisfaction, and a low level of performance is related to students’
dissatisfaction.
As illustrated in Fig. 1, an attractive and interesting feature of the IPA is that the
results may be graphically displayed on a two-dimensional grid using the mean
importance and performance ratings of the attributes (McLeay et al. 2017). Presentation
of the results on the grid will help lecturers to easily interpret the data and to identify
teaching aspects that need attention.
One of the major issues in this technique is the positioning of the thresholds that
divide the plot into four quadrants, namely: quadrant A “concentrate here” (high
importance/low performance), quadrant B “keep up the good work” (high
importance/high performance), quadrant C “low priority” (low importance/low perfor-
mance), quadrant D “possible overkill” (low importance/high performance) (Martilla
and James 1977). In the scale-centred approach, thresholds are placed in the centre of
the established scale (e.g. the value of 3 in the 5-point Likert scale); however, most
attributes often fall in the “keep up the good work” quadrant, as respondents tend to
give high performance and importance ratings (Boley et al. 2017). A second approach
is the data-centred approach, which usually uses the mean values of importance and
performance as the crosshairs. Misplacing the thresholds could generate confusing and
contradictory recommendations (Azzopardi and Nash 2013). In this study, the second
approach was used to determine the crosshairs, because of the high ratings on both
importance and performance of the attributes.
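The two threshold conventions reduce to a one-line choice of crosshairs (again using the hypothetical `ipa` frame from the earlier sketch; this study used the data-centred option):

```python
# Scale-centred crosshairs: the midpoint of the 5-point Likert scale.
scale_centred = {"importance": 3.0, "performance": 3.0}

# Data-centred crosshairs: the grand means of the observed ratings.
data_centred = {"importance": ipa["importance"].mean(),
                "performance": ipa["performance"].mean()}
```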
IPA involves four steps: (i) identifying a list of attributes to evaluate (in this work the
attributes included in the SEEQ), (ii) rating the attributes according to students’ perceived
satisfaction and importance, (iii) analysing data of the importance and performance
ratings of each attribute, and (iv) plotting the importance-performance rating on a two-
dimensional grid. Analysis of the results will indicate the following: (a) attributes
needing immediate improvement (quadrant A); (b) attributes to be retained, i.e. major
strengths (quadrant B); (c) attributes needing less attention (quadrants C and D) (Jaafar
et al. 2016).
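Step (iv) might be sketched with matplotlib as below, continuing from the hypothetical `ipa` frame and data-centred crosshairs above; the plotting library and styling are choices of this illustration, not of the original study.

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter(ipa["performance"], ipa["importance"])
for name, row in ipa.iterrows():
    ax.annotate(name, (row["performance"], row["importance"]))

# Data-centred crosshairs divide the grid into the four action quadrants.
ax.axvline(data_centred["performance"], linestyle="--")
ax.axhline(data_centred["importance"], linestyle="--")
ax.set_xlabel("Performance (mean rating)")
ax.set_ylabel("Importance (mean rating)")
ax.set_title("Importance-performance action grid")
plt.show()
```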
3 Results
3.2 Mean importance and performance scores of the SEEQ items and dimensions
Table 1 shows the importance and performance sample mean scores for the SEEQ
items and the dimensions in which they are grouped. The higher the value of the mean
score is, the greater the importance or performance assessment that the students give to
that aspect. Results presented in Table 1 indicate that according to their importance, the
aspects of teaching can be ranked in the following way (from most to least
important): the lecturer’s enthusiasm (enthusiasm), the lecturer’s organization of the
course contents, materials and expositions (organization), the students’ assessment
methodologies and feedback (examinations), the learning, interest, and value of the
subject (learning), the good relationship, interest, and accessibility that the lecturer
shows to the students (rapport), the lecturer’s breadth of knowledge and updating
(breadth), the usefulness of the assignments (assignments), and the facilities that are
given to the students to participate in the lectures, by asking questions, expressing their
opinions, etc. (interaction). As for performance, the aspects of teaching can be
ranked in the following way (from the best to the worst rated): rapport, organization,
assignments, examinations, enthusiasm, breadth, learning, and interaction.
Overall, we can see that for all dimensions except rapport, the performance rating is
below the importance rating. The IPA analysis will make it possible to see in which dimensions
it is most necessary to concentrate efforts.
One of the major advantages of IPA is that teaching quality dimensions can be plotted
graphically on a two-dimensional grid matrix and this can assist in quick and efficient
interpretation of the results (O’Neill and Palmer 2004). Figure 2 represents the IPA grid
for the teaching quality dimensions. The importance mean scores are represented on the
vertical axis, while performance mean scores are on the horizontal axis. Since students’
ratings for both importance and performance are mostly high, the mean values were
used to represent the crosshairs of the IPA matrix. This helped to identify the stronger
and weaker attributes more clearly (O’Neill and Palmer 2004).
Quadrant A (concentrate here) highlights the dimensions that need more attention,
since these dimensions are considered very important by the students, but their
perceived performance is very low. Considerable improvement efforts are required
for these dimensions since they are a source of students’ dissatisfaction. In this study,
the lecturer seems to be underperforming in two dimensions, learning and enthusiasm.
Quadrant B (keep up the good work) represents the dimensions that are considered very
important by the students and perform above the mean. The lecturer can continue
working along the same lines for the dimensions located in this quadrant.
In this work, the dimensions located in quadrant B are examinations, organization, and
rapport. Quadrant C (low priority) includes the dimensions with low importance and
low performance. The lecturer is underperforming in these dimensions, although this is not
worrying, since they are not important dimensions for the students. It is not necessary to
focus too much effort on these dimensions, since their improvement will probably not
greatly increase students’ satisfaction. Here, the dimensions that fall in this quadrant are
interaction and breadth. Lastly, quadrant D (possible overkill) represents those dimensions
to which too much effort is being devoted given the little importance they have
for students. The teacher may pay somewhat less attention to these dimensions and
concentrate on others that are of higher priority. In this study, only assignments is located
in this quadrant.
In summary, there are two priority dimensions in which the lecturer needs to
concentrate his/her efforts. They are those that are located in quadrant A, learning
and enthusiasm.
4 Discussion
Students are the raison d’être of the educational system. As such, their voices must be
heard for assessing and improving different aspects of the system. Students’ feedback is
frequently collected in institutions of higher education for assessing the quality of teaching
at the end of the courses. However, the use made of this feedback by lecturers to
implement improvements in teaching is still limited. This study proposes a methodology
that can be used by lecturers to collect and analyse the feedback of their students about the
quality of their teaching, with the aim to obtain insights into the attributes that need more
attention. If, in addition to the opinions regarding course and lecturer performance
gathered at the end of the course, the students’ opinions regarding the importance of each
aspect of the quality of teaching are gathered at the start of the course, perceived
performance can be interpreted in relation to the importance of each attribute. This would
enable the lecturer to identify the priority aspects on which to focus his/her efforts to refine
the teaching methods for greater student satisfaction and, in turn, better student performance
(Jaafar et al. 2016; Peters and Kortecamp 2010; Sembiring et al. 2017).
This work has shown how two recognized and validated instruments, the SEEQ and
IPA, can be used together for teaching evaluation. The SEEQ is used to define the attributes
of the quality of teaching and the IPA is used for interpreting the results. Both instruments
are relatively easy to administer and interpret, making them a useful tool for lecturers to
monitor students’ evaluation of teaching and design improvement strategies. Compared to
traditional teaching evaluation instruments that gather only information about lecturer
performance at the end of the course, this methodology can provide the lecturer with more
informative feedback on the strengths and weaknesses of the course and his/her teaching,
helping him/her to identify the priority aspects in which improvements are most needed.
The SEEQ is a questionnaire originally developed to be administered at the end of
the course to collect the students’ evaluations of teaching. In this paper, this instrument
has been slightly adapted to be used at the beginning of the course too. The aim was to
gather the opinion of students regarding the importance of the different attributes of the
teaching quality.
The analysis of the survey data has shown that in this study the teaching character-
istics that require more lecturer attention are lecturer enthusiasm and the interest and
intellectual challenge of the course, since these attributes are rated with high importance
and low performance.
The analysis of previous studies reveals that there appears to be a general consensus
that student feedback helps to improve teaching and learning (Borch et al. 2020; Flodén
2017). However, the actual use of evaluation data for these purposes is low (Borch et al.
2020). This study tries to contribute to improving this situation by proposing a simple
methodology, easy to apply and interpret, to help lecturers make more active use of the data
provided by students’ evaluations and use them as pedagogical tools for the
improvement of their teaching and their students’ learning.
The number of observations used in this work can be seen as a limitation of the
study. However, it should be noted that the objective of the study is to show the
potential of IPA, together with a students’ evaluation of teaching questionnaire, such as
the SEEQ, as an instrument for lecturers to analyse their students’ feedback about the
quality of their teaching in a particular course. Therefore, the sample size is determined
by the number of students in the course.
References
Aditomo, A., & Köhler, C. (2020). Do student ratings provide reliable and valid information about teaching
quality at the school level? Evaluating measures of science teaching in PISA 2015. Educational
Assessment, Evaluation and Accountability, 1–36. https://doi.org/10.1007/s11092-020-09328-6
Alberty, S., & Mihalik, B. J. (1989). The use of importance-performance analysis as an evaluative technique in
adult education. Evaluation Review, 13(1), 33–44.
Al-Muslim, M., & Arifin, Z. (2015). The usability of SEEQ in quality evaluation of Arabic secondary
education in Malaysia. International Education Studies, 8(3), 202–211. https://doi.org/10.5539/ies.
v8n3p202.
Anderson, S., Hsu, Y.-C., & Kinney, J. (2016). Using importance-performance analysis to guide instructional
design of experiential learning activities. Online Learning, 20(4).
Attarian, A. (1996). Using importance-performance analysis to evaluate teaching effectiveness. Research
Reports.
Azzopardi, E., & Nash, R. (2013). A critical evaluation of importance-performance analysis. Tourism
Management, 35, 222–233. https://doi.org/10.1016/j.tourman.2012.07.007.
Balam, E. M., & Shannon, D. M. (2010). Student ratings of college teaching: a comparison of faculty and their
students. Assessment & Evaluation in Higher Education, 35(2), 209–221. https://doi.org/10.1080/
02602930902795901.
Blake, B. F., Schrader, L. F., & James, W. L. (1978). New tools for marketing research: the action grid.
Feedstuffs, 50(19), 38–39.
Boley, B. B., McGehee, N. G., & Tom Hammett, A. L. (2017). Importance-performance analysis (IPA) of
sustainable tourism initiatives: the resident perspective. Tourism Management, 58, 66–77. https://doi.org/
10.1016/j.tourman.2016.10.002.
Borch, I., Sandvoll, R., & Risør, T. (2020). Discrepancies in purposes of student course evaluations: what does
it mean to be “satisfied”? Educational Assessment, Evaluation and Accountability, 32(1), 83–102. https://
doi.org/10.1007/s11092-020-09315-x.
Borman, G. D., & Kimball, S. (2005). Teacher quality and educational equality: do teachers with higher
standards-based evaluation ratings close student achievement gaps? Elementary School Journal, 106, 3–
20.
Brennan, J., & Williams, R. (2004). Collecting and using student feedbacks. A guide to good practice.
Learning and Teaching Support Network.
Casey, R. J., Gentile, P., & Bigger, S. W. (1997). Teaching appraisal in higher education: an Australian
perspective. Higher Education, 34(4), 459–482. https://doi.org/10.1023/A:1003042830109.
Chen, Y. C. (2018). Applying importance-performance analysis to assess student employability in Taiwan.
Journal of Applied Research in Higher Education, 10(1), 76–86. https://doi.org/10.1108/JARHE-10-
2017-0118.
Cladera, M. (2020). Let's ask our students what really matters to them. Journal of Applied Research in Higher
Education, ahead-of-print. https://doi.org/10.1108/JARHE-07-2019-0195.
Coffey, M., & Gibbs, G. (2001). The evaluation of the student evaluation of educational quality questionnaire
(SEEQ) in UK higher education. Assessment & Evaluation in Higher Education, 26(1), 89–93. https://
doi.org/10.1080/02602930020022318.
Feldman, K. A. (1976). The superior college teacher from the students’ view. Research in Higher Education,
5, 243–288. https://doi.org/10.2307/40195219.
Fincher, C. (1985). Learning theory and research. In J. C. Smart (Ed.), Higher education: Handbook of theory
and research (pp. 63–96). New York: Agathon Press.
Flodén, J. (2017). The impact of student feedback on teaching in higher education. Assessment & Evaluation
in Higher Education, 42(7), 1054–1068. https://doi.org/10.1080/02602938.2016.1224997.
Gaertner, H., & Brunner, M. (2018). Once good teaching, always good teaching? The differential stability of
student perceptions of teaching quality. Educational Assessment, Evaluation and Accountability, 30(2),
159–182. https://doi.org/10.1007/s11092-018-9277-5.
Ghedin, E., & Aquario, D. (2008). Moving towards multidimensional evaluation of teaching in higher
education: a study across four faculties. Higher Education, 56(5), 583–597. https://doi.org/10.1007/
s10734-008-9112-x.
Gill, J. K., Bowker, J. M., Bergstrom, J. C., & Zarnoch, S. J. (2010). Accounting for trip frequency in
importance-performance analysis. Journal of Park and Recreation Administration, 28(1), 16–35.
Gonçalves, J. R., Pinto, A., Batista, M. J., Pereira, A. C., & Bovi Ambrosano, G. M. (2014). Importance-
performance analysis: revisiting a tool for the evaluation of clinical services. Health, 06(05), 285–291.
https://doi.org/10.4236/health.2014.65041.
Grammatikopoulos, V., Linardakis, M., Gregoriadis, A., & Oikonomidis, V. (2015). Assessing the students’
evaluations of educational quality (SEEQ) questionnaire in Greek higher education. Higher Education,
70(3), 395–408. https://doi.org/10.1007/s10734-014-9837-7.
Gursoy, D., & Umbreit, W. T. (2005). Exploring students’ evaluations of teaching effectiveness: what factors
are important? Journal of Hospitality and Tourism Research, 29(1), 91–109. https://doi.org/10.1177/
1096348004268197.
Hammonds, F., Mariano, G. J., Ammons, G., & Chambers, S. (2017). Student evaluations of teaching:
improving teaching quality in higher education. Perspectives: Policy and Practice in Higher Education,
21(1), 26–33. https://doi.org/10.1080/13603108.2016.1227388.
Jaafar, N. A. N., Noor, Z. M., & Mohamed, M. (2016). Student ratings of teaching effectiveness: an
importance - performance analysis (IPA). Journal of Educational and Social Research, 6(3), 33–44.
Joseph, M., Yakhou, M., & Stone, G. (2005). An educational institution’s quest for service quality: customers’
perspective. Quality Assurance in Education, 13(1), 66–82. Retrieved from. https://doi.org/10.1108/
09684880510578669.
Kanchana, R., & Triwanapong, S. (2011). Identifying the key quality improvement of undergraduate
engineering education - using importance-performance analysis. In The 9th International and National
Conference on Engineering Education (INCEE9). Thailand.
Keong, W. E. Y. (2017). Importance-performance analysis of e-learning technologies in Malaysian higher
education. In Proceedings - 2017 International Symposium on Educational Technology, ISET 2017 (pp.
24–28). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ISET.2017.14.
Marsh, H. W. (1982). SEEQ: a reliable, valid, and useful instrument for collecting students’ evaluations of
university teaching. British Journal of Educational Psychology, 52(1), 77–95. https://doi.org/10.1111/j.
2044-8279.1982.tb02505.x.
Marsh, H. W. (1986). Applicability paradigm: students’ evaluations of teaching effectiveness in different
countries. Journal of Educational Psychology, 78(6), 465–473. https://doi.org/10.1037/0022-0663.78.6.
465.
Marsh, H. W. (1987). Students’ evaluation of university teaching, research findings, methodological issues,
and directions for future research. International Journal of Educational Research, 11, 253–388.
Marsh, H. W. W., & Dunkin, M. J. J. (1997). Students’ evaluations of university teaching: a multidimensional
perspective. In R. P. Perry & J. C. Smart (Eds.), Effective teaching in higher education: Research and
practice (pp. 241–320). New York: Agathon Press.
Marsh, H. W., & Hocevar, D. (1991). The multidimensionality of students’ evaluations of teaching effective-
ness: the generality of factor structures across academic discipline, instructor level, and course level.
Teaching and Teacher Education, 7(1), 9–18.
Marsh, H. W., & Roche, L. A. (1997). Making students’ evaluations of teaching effectiveness effective: The
critical issues of validity, bias, and utility. American Psychologist, 52(11), 1187–1197.
Martilla, J. A., & James, J. C. (1977). Importance-performance analysis. Journal of Marketing, 41(1), 77.
https://doi.org/10.2307/1250495.
McKillip, J. (2001). Case studies in job analysis and training evaluation. International Journal of Training and
Development, 5(4), 283–289. https://doi.org/10.1111/1468-2419.00140.
McLeay, F., Robson, A., & Yusoff, M. (2017). New applications for importance-performance analysis (IPA)
in higher education. Journal of Management Development, 36, 780–800. https://doi.org/10.1108/jmd-10-
2016-0187.
Mourkani, G. S., & Shohoodi, M. (2013). Quality assurance in higher education: combining internal
evaluation and importance-performance analysis models. Middle-East Journal of Scientific Research,
15(5), 643–651. https://doi.org/10.5829/idosi.mejsr.2013.15.5.217.
Nale, R. D., Rauch, D. A., Wathen, S. A., & Barr, P. B. (2000). An exploratory look at the use of importance-
performance analysis as a curricular assessment tool in a school of business. Journal of Workplace
Learning, 12(4), 139–145. https://doi.org/10.1108/13665620010332048.
O’Neill, M. A., & Palmer, A. (2004). Importance-performance analysis: a useful tool for directing continuous
quality improvement in higher education. Quality Assurance in Education, 12(1), 39–52.
Ortinau, D. J., Bush, A. J., Bush, R. P., & Twible, J. L. (1989). The use of importance-performance analysis
for improving the quality of marketing education: interpreting faculty-course evaluations. Journal of
Marketing Education, 11(2), 78–86. https://doi.org/10.1177/027347538901100213.
Peters, M. L., & Kortecamp, K. (2010). Rethinking undergraduate mathematics education: the importance of
classroom climate and self - efficacy on mathematics achievement. Current Issues in Education, 13(4),
34.
Rial, A., Rial, J., Varela, J., & Real, E. (2008). An application of importance-performance analysis (IPA) to the
management of sport centres. Managing Leisure, 13(3–4), 179–188. https://doi.org/10.1080/
13606710802200878.
Richardson, J. T. E. (2005). Instruments for obtaining student feedback: a review of the literature. Assessment
& Evaluation in Higher Education, 30(4), 387–415. https://doi.org/10.1080/02602930500099193.
Riviezzo, A., de Nisco, A., & Rosaria Napolitano, M. (2009). Importance-performance analysis as a tool in
evaluating town centre management effectiveness. International Journal of Retail & Distribution
Management, 37(9), 748–764. https://doi.org/10.1108/09590550910975808.
Sembiring, P., Sembiring, S., Tarigan, G., & Sembiring, O. (2017). Analysis of student satisfaction in the
process of teaching and learning using importance performance analysis. Journal of Physics: Conference
Series, 930(1), 012039. https://doi.org/10.1088/1742-6596/930/1/012039.
Silva, F., & Fernandes, P. (2010). Using importance-performance analysis in evaluating institutions of higher
education: a case study. ICEMT 2010–2010 International Conference on Education and Management
Technology, Proceedings, 121–123. https://doi.org/10.1109/ICEMT.2010.5657689.
Silva, F., & Fernandes, P. O. (2011). Importance-performance analysis as a tool in evaluating higher education
service quality: the empirical results of ESTiG (IPB). In The 17th International Business Information
Management Association Conference (pp. 306–315). University of Pavia, Milan, Italy.
Siniscalchi, J., Beale, E., & Fortuna, A. (2008). Using importance-performance analysis to evaluate training.
Performance Improvement, 47(10), 30–35.
Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching. Review
of Educational Research, 83(4), 598–642. https://doi.org/10.3102/0034654313496870.
Su, C.-S. (2013). An importance-performance analysis of dining attributes: a comparison of individual and
packaged tourists in Taiwan. Asia Pacific Journal of Tourism Research, 18(6), 573–597. https://doi.org/
10.1080/10941665.2012.695281.
Su, F., & Wood, M. (2012). What makes a good university lecturer? Students’ perceptions of teaching
excellence. Journal of Applied Research in Higher Education, 4(2), 142–155. https://doi.org/10.1108/
17581181211273110.
Tóth, Z. E., Jónás, T., Bérces, R., & Bedzsula, B. (2013). Course evaluation by importance-performance
analysis and improving actions at the Budapest University of Technology and Economics. International
Journal of Quality and Service Sciences, 5(1), 66–85. https://doi.org/10.1108/17566691311316257.
Venkatraman, S. (2007). A framework for implementing TQM in higher education programs. Quality
Assurance in Education, 15(1), 92–112. https://doi.org/10.1108/09684880710723052.
Wang, R., & Tseng, M.-L. (2011). Evaluation of international student satisfaction using fuzzy importance-
performance analysis. Procedia - Social and Behavioral Sciences, 25, 438–446. https://doi.org/10.1016/J.
SBSPRO.2012.02.055.
Watkins, D., & Thomas, B. (1991). Assessing teaching effectiveness: an Indian perspective. Assessment &
Evaluation in Higher Education, 16(3), 185–198. https://doi.org/10.1080/0260293910160302.
Yu, Y. T., & Ming, S. H. (2012). Analysis of the efficiency of teaching methods: using the variance-based
method as an example. The Journal of Human Resource and Adult Learning, 8(1), 99–104.
Yusoff, M. (2012). Evaluating business student satisfaction in the Malaysian private educational environ-
ment. Doctoral thesis, Northumbria University. This version was downloaded from Northumbria
Research Link: http://nrl.northumbria.ac.uk/7991/.
Zabaleta, F. (2007). The use and misuse of student evaluations of teaching. Teaching in Higher Education,
12(1), 55–76. https://doi.org/10.1080/13562510601102131.
Zeithaml, V. A., Parasuraman, A., & Berry, L. L. (1990). Delivering quality service: balancing customer
perceptions and expectations. New York: The Free Press.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.