Organizational Effectiveness in
Higher Education
Managerial Implications of a Hong Kong Study
James Pounder
Introduction
THIS PAPER DESCRIBES a study which examined the organizational effectiveness of higher educational institutions and should be seen as one response to the worldwide pressure on higher educational establishments to provide evidence of effective performance (Pounder, 1997). The study took place in Hong Kong which, along with many of the developed higher educational systems around the world, needs to demonstrate that its higher educational institutions are providing value for money (Pounder, 1997). Organizational effectiveness self-rating scales for institutions of higher education were developed. The self-rating scales were based on the effectiveness criteria, or dimensions, of the Competing Values Framework or Model of Organizational Effectiveness (Quinn and Rohrbaugh, 1981, 1983), for which the authors have claimed general paradigm status. Their claim rests on the fact that the model makes explicit the major perspectives on organizational effectiveness taken by acknowledged experts in the field. Further, the extent to which the model has been used in organizational and management studies appears to support Quinn and Rohrbaugh’s assertion, since it has provided an analytical framework for over 40 studies (Pounder, 1997). However, prior to the Hong Kong study, the applicability of the Competing Values Model (Quinn and Rohrbaugh, 1981, 1983) to higher educational organizations had never been tested. The framework is illustrated in Figure 1.
The framework contained in Figure 1 depicts Quinn and Rohrbaugh’s multidimensional scaling analysis, involving a total of 52 organizational theorists.¹ Reference to the figure shows that the lower right quadrant contains the Rational Goal Model, which emphasizes control and external focus and stresses planning and goal setting (as means) and productivity and efficiency (as ends). In sharp contrast to the Rational Goal Model is the Human Relations Model in the upper left quadrant, with its emphasis on flexibility and internal focus, stressing cohesion and morale (as means) and human resources development (as ends). The lower left quadrant contains the Internal Process Model, with an emphasis on control and internal focus, stressing the criteria of information management and communication (as means) and stability and control (as ends). Juxtaposed in the upper right quadrant is the Open System Model, based on flexibility and external focus and emphasizing adaptability and readiness (as means) and growth and resource acquisition (as ends). Quinn and Rohrbaugh’s analysis revealed that the quality dimension did not fit into one particular model of organizational effectiveness, and the authors concluded that quality ‘may be an important element of any or all (of the models)’ (1983: 371). Hence, quality has been placed in the centre of Figure 1.
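For readers who prefer a concrete representation, the quadrant structure just described can be captured in a few lines of code. The sketch below is not part of the original study; the variable names are invented for illustration, and the structure, focus, means and ends are simply those stated in the text, with quality kept outside the four models as in Figure 1.

```python
# Illustrative sketch (not from the original study): the four Competing Values
# models described above, encoded as a simple lookup table. Labels for
# structure, focus, means and ends are taken from the text; the names
# CVF_MODELS and SHARED_CRITERION are invented for this example.
CVF_MODELS = {
    "Rational Goal": {
        "structure": "control", "focus": "external",
        "means": ["planning", "goal setting"],
        "ends": ["productivity", "efficiency"],
    },
    "Human Relations": {
        "structure": "flexibility", "focus": "internal",
        "means": ["cohesion", "morale"],
        "ends": ["human resources development"],
    },
    "Internal Process": {
        "structure": "control", "focus": "internal",
        "means": ["information management", "communication"],
        "ends": ["stability", "control"],
    },
    "Open System": {
        "structure": "flexibility", "focus": "external",
        "means": ["adaptability", "readiness"],
        "ends": ["growth", "resource acquisition"],
    },
}

# Quality did not fit any single model, so it sits at the centre of the figure.
SHARED_CRITERION = "quality"

if __name__ == "__main__":
    for model, spec in CVF_MODELS.items():
        print(f"{model}: {spec['structure']}/{spec['focus']}; "
              f"means={spec['means']}, ends={spec['ends']}")
```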
The Hong Kong study was completed in August 1997 and data collection took place at a time when the Hong Kong higher educational system comprised only nine accredited higher educational institutions. This facilitated a high degree of involvement, with seven of the nine institutions actually participating in the scale development procedure. Thus, Hong Kong was viewed as a suitable base for an initial examination of the applicability of the Competing Values Model of Organizational Effectiveness to higher education. The instrument development method was based on the procedure for developing Behaviourally Anchored Rating Scales (BARS) (Smith and Kendall, 1963), modified to produce scales capable of the valid and reliable rating of organizational effectiveness. These modifications were also designed to test the applicability of the Competing Values Model (Quinn and Rohrbaugh, 1981, 1983) to higher educational institutions in Hong Kong.
Method
What follows is a brief synopsis of the method² employed in the study to develop the rating scales. All nine accredited higher educational institutions were invited to participate in developing organizational effectiveness self-rating scales in each of the nine effectiveness dimensions (i.e. productivity–efficiency, quality, cohesion, adaptability–readiness, information management–communication, growth, planning–goal setting, human resources development, stability–control) contained in the Competing Values Framework (Quinn and Rohrbaugh, 1981, 1983). Definitions of the effectiveness dimensions are contained in Appendix A. These definitions took the descriptions employed in an earlier study by Edwards (1986), conducted in a commercial context, and modified them slightly where necessary for a higher educational setting.
Seven of the institutions accepted the invitation to participate in the study. The number of academic and administrative staff surveyed in developing the scales varied with the stage of the scale development procedure, ranging from as many as 700 across all seven institutions to as few as four in one institution. Scale development began with a group of senior academics and administrators in Hong Kong higher education providing 592 examples of good, average and poor institutional behaviour covering all nine effectiveness dimensions in the Competing Values Framework (Quinn and Rohrbaugh, 1981, 1983). At each subsequent stage of the scale development procedure, examples were eliminated in order to arrive at a final set of scale anchors that were widely accepted by academic and administrative staff drawn from all seven participating institutions and representing various levels in those institutions’ hierarchies. This acceptance was based on the fact that the final set of scale anchors had withstood stringent tests of relevance to, and positioning on, the nine effectiveness scales. Equally, only those examples considered to be well phrased in the Hong Kong higher educational context were retained in the study.
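To make the elimination logic more concrete, the following sketch shows one way a retranslation-style screen of this general kind could be implemented. It is illustrative only: the thresholds (`AGREEMENT_THRESHOLD`, `MAX_POSITION_SPREAD`) and the helper `screen_example` are assumptions introduced here for the example, not the criteria actually applied in the Hong Kong study.

```python
# A minimal sketch of a retranslation-style screen, assuming hypothetical
# retention rules: keep a behavioural example only if a clear majority of
# judges assign it to its intended effectiveness dimension and the spread of
# their 1-5 positioning ratings is small. The thresholds are illustrative and
# do not reproduce the Hong Kong study's actual criteria.
from statistics import mean, stdev

AGREEMENT_THRESHOLD = 0.60   # assumed: at least 60% of judges agree on the dimension
MAX_POSITION_SPREAD = 1.0    # assumed: standard deviation of ratings no more than 1.0

def screen_example(intended_dimension, judged_dimensions, position_ratings):
    """Return (keep, anchor_value): whether to retain the example and, if kept,
    the scale position (mean rating) it would anchor."""
    agreement = judged_dimensions.count(intended_dimension) / len(judged_dimensions)
    spread = stdev(position_ratings) if len(position_ratings) > 1 else 0.0
    keep = agreement >= AGREEMENT_THRESHOLD and spread <= MAX_POSITION_SPREAD
    return keep, round(mean(position_ratings), 1) if keep else None

# Hypothetical example: an anchor most judges place on 'cohesion' at about 4 on a 1-5 scale.
keep, anchor = screen_example(
    "cohesion",
    judged_dimensions=["cohesion"] * 8 + ["human resources development"] * 2,
    position_ratings=[4, 4, 4.5, 4, 4, 4, 4, 4.5, 4, 4],
)
print(keep, anchor)   # True 4.1
```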
As stated above, the BARS procedure employed in the study was modified to maximize the capacity of the resulting scales to produce valid and reliable ratings. Notable modifications to the basic BARS formula were multidimensional scaling, for verification of the validity of scale anchor ordering, and multitrait-multirater analysis (Campbell and Fiske, 1959), for examination of each scale’s construct validity (unidimensionality). Research prior to the Hong Kong study had demonstrated the value of multidimensional scaling (Edwards, 1986) and multitrait-multirater analysis (Campbell and Fiske, 1959; Henderson et al., 1995; Kinicki et al., 1985; Lascu et al., 1995; Schriesheim and Eisenbach, 1995; Spreitzer, 1995; Sullivan, 1996) in enhancing the validity of rating scales. Additionally, inter-rater and test–retest reliabilities were ascertained for each scale against a minimum criterion recommended by Nunnally and Bernstein (1994). Any scale that failed to meet these stringent validity and reliability requirements was eliminated from the analysis.
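The reliability screen can be illustrated with a short sketch. The 0.70 cut-off used below is the figure commonly associated with Nunnally and Bernstein (1994) and is treated here as an assumption, since the study does not quote the exact value; the function names and data are likewise hypothetical.

```python
# A minimal sketch of the kind of reliability screen described above, assuming
# a 0.70 minimum criterion (an assumption; the paper does not state the exact
# figure). Scales failing either the test-retest or the inter-rater check
# would be dropped from the analysis.
from itertools import combinations
from statistics import mean

RELIABILITY_MINIMUM = 0.70  # assumed criterion

def pearson(x, y):
    """Plain Pearson correlation between two equal-length lists of scale scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def interrater_reliability(ratings_by_rater):
    """Average pairwise correlation across raters (each rater supplies one list of scores)."""
    return mean(pearson(a, b) for a, b in combinations(ratings_by_rater, 2))

def scale_passes(test_scores, retest_scores, ratings_by_rater):
    return (pearson(test_scores, retest_scores) >= RELIABILITY_MINIMUM
            and interrater_reliability(ratings_by_rater) >= RELIABILITY_MINIMUM)

# Hypothetical data: one scale rated for five institutions.
test = [3.5, 4.0, 2.5, 4.5, 3.0]
retest = [3.5, 4.0, 3.0, 4.5, 3.0]
raters = [[3.5, 4.0, 2.5, 4.5, 3.0],
          [3.0, 4.0, 2.5, 4.0, 3.5],
          [3.5, 4.5, 3.0, 4.5, 3.0]]
print(scale_passes(test, retest, raters))  # True for this illustrative data
```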
assessing the extent of institutional cohesion and morale with a high degree of validity and
reliability.
It is not suggested that the development of rating scales such as those produced in this study is a panacea for the tendency to overconcentrate on the easily measurable in higher education. What is suggested is that, in the absence of any obvious alternative means of measuring the softer dimensions of higher educational performance, the methodology presented in this paper should be considered. Failure to do so may result in higher educational establishments becoming organizations solely preoccupied with output, staffed exclusively by academics with no interest in process. The arguments presented in this paper should not be viewed as being solely applicable to Hong Kong higher education. This author’s experience of interacting with academics from a variety of countries, gained over a 14-year career in Hong Kong academic institutions, has revealed that Hong Kong higher education is far from unique in the issues it is facing.
Finally, from a theoretical perspective, the Hong Kong study may be considered in the context of Quinn and Cameron’s (1983) thesis, which held effectiveness dimensions appropriate to specific organizations to be related to various stages in these organizations’ life cycles. The Hong Kong study found a core set of effectiveness dimensions to be applicable across the higher educational institutions participating in the final analysis, despite the fact that the institutions in question were very different in terms of their stages of development. For example, one participating institution, Lingnan College, achieved Hong Kong government recognition as a degree-awarding institution as late as 1991. This fairly recent recognition necessitated the development of degree programmes in all faculties and the introduction of a research orientation at a pace unprecedented amongst the recognized higher educational institutions. At the time of the study, Lingnan College was still in the process of establishing an environment of scholarship appropriate to the delivery of degree programmes. In complete contrast to Lingnan College was one of the other participating institutions, the City University of Hong Kong, which was established as recently as 1984 but was able to recruit research-oriented staff from the outset and was permitted to introduce undergraduate and graduate programmes at a far more leisurely pace than Lingnan College. While both institutions were offering undergraduate and graduate programmes at the time of the study, it would be erroneous to argue that the institutions were at a similar stage of maturity as degree-awarding institutions (Pounder, 1997).
Thus, contrary to Quinn and Cameron’s (1983) thesis, the findings of the Hong Kong study indicated that organizations can share a common set of effectiveness dimensions despite differences in those organizations’ maturity. Repeating the scale development method in other higher educational systems, and moving research towards a general model of effectiveness in higher education, would establish whether these findings are unique to the Hong Kong study or represent a serious challenge to Quinn and Cameron’s hypothesis.
Appendix A
• Productivity–Efficiency: This aspect of an organization’s performance has to do with behaviour that reflects the extent to which it is concerned with the quantity or volume of what it produces and the cost of operation.
• Quality: This aspect of an organization’s performance has to do with behaviour that reflects the extent to which it is concerned with the quality of what it produces.
• Cohesion: This aspect of an organization’s performance has to do with behaviour that reflects the extent to which it is concerned with staff morale, interpersonal relationships, teamwork and sense of belonging.
• Adaptability–Readiness: This aspect of an organization’s performance has to do with behaviour that reflects the extent of its ability to readily alter or adapt its structure, programmes, courses etc., in response to changing demands. In other words, the extent of the organization’s readiness to adapt to change.
• Information Management–Communication: This aspect of an organization’s performance has to do with behaviour that reflects the extent of its ability to distribute timely and accurate information needed by its members to do their jobs.
• Growth: This aspect of an organization’s performance has to do with behaviour that reflects the extent of its ability to secure external support, acquire resources and increase its capabilities.
• Planning–Goal Setting: This aspect of an organization’s performance has to do with behaviour that reflects the extent of its ability to set goals and objectives and systematically plan for the future.
• Human Resource Development: This aspect of an organization’s performance has to do with behaviour that reflects the extent to which it is responsive to the individual needs of its staff. It also has to do with the extent to which the institution facilitates participation in decision making. Additionally, this aspect is concerned with behaviour relating to the hiring, training and development of staff.
• Stability–Control: This aspect of an organization’s performance has to do with behaviour that reflects the extent of its ability to control the flow of work, to direct the behaviour of its members and to maintain the organization’s continuity, particularly under periods of pressure or threat.
Appendix B (1)
Performance Dimension: Information Management–Communication
Definition: This aspect of an organization’s performance has to do with behaviour that reflects the extent of its ability to distribute timely and accurate information needed by members to do their jobs.

5    Typically, in this organization, one would expect mechanisms to have been introduced for the express purpose of cascading information systematically from top to bottom of the organizational hierarchy.
4.5  Typically, in this organization, one would expect a management information provision unit to have been established which is constantly consulting information users on their present and future needs.
4    Typically, in this organization, one would expect minutes of governing body meetings (e.g. Board of Governors/Council/Academic Board) to be made available to all staff, and actively circulated to those who need them.
3.5  Typically, in this organization, one would expect information bulletins from management, and meetings, to focus primarily on developments that have already taken place, as opposed to developments in the pipeline.
3    Typically, in this organization, one would expect there to be both formal and informal channels for information but information provision to be not always timely.
     Typically, in this organization, one would expect there to be widespread use of the ‘confidential’ and ‘restricted’ stamp on documents and reports.
2.5  Typically, in this organization, one would expect information produced centrally to be consigned to the waste bin frequently by recipients because it is thought to serve no useful purpose.
2    Typically, in this organization, one would expect information provision to be ‘ad hoc’ in the sense of being provided when requested if one happens to know that it is available and the relevant party to contact.
1.5  Typically, in this organization, one would expect little or no publicity to be given to major developments such as the setting up of a new unit or the introduction of a new facility.

The space below is provided for raters to write down their own example (optional – see rater’s instructions):
_________________________________________________________________________
Numerical Rating: _______
Appendix B (2)
Performance Dimension: Planning–Goal Setting
Definition: This aspect of an organization’s performance has to do with behaviour that reflects the extent of its ability to set goals and objectives and systematically plan for the future.

5    Typically, in this organization, one would expect the institutional plan to be coherent, in the sense of moving from mission statement to broad aims and then to specific objectives, and action oriented, in the sense of providing a framework which allows each lower level unit to define its own objectives and action plans.
4.5  Typically, in this organization, one would expect long-term goals to have been established that are consistent with the institution’s traditions and the likely future of society, and mechanisms for regular review to have been set up.
4    Typically, in this organization, one would expect an institutional plan, comprising mission, goals and strategies, to be revised and updated each year and circulated widely amongst members of staff.
     Typically, in this organization, one would expect academic and administrative unit heads to prepare their own budgets which reflect their plans for the coming year.
3.5  Typically, in this organization, one would expect a planning committee to have been established, comprising members of the senior management team, which regularly reviews the institution’s mission, objectives and strategies.
3    Typically, in this organization, one would expect there to be an approximate plan or view on future direction but with implementation not fully worked out.
2.5  Typically, in this organization, one would expect management to exhibit a reluctance to deal with open-ended aspects of planning, preferring to plan within a framework which limits strategic options.
2    Typically, in this organization, one would expect management not to have given direction on priority areas for the present and future.
1.5  Typically, in this organization, one would expect powerful members of the senior management team to display little understanding of, or interest in, long-term development. Consequently, one would expect such committees as are established to deal with change to have no obvious goals or functions.
1    Typically, in this organization, one would expect different committees to deal with different planning-related activities in an uncoordinated way.

The space below is provided for raters to write down their own example (optional – see rater’s instructions):
_________________________________________________________________________
Numerical Rating: _______
Appendix B (3)
Performance Dimension: Productivity–Efficiency
Definition: This aspect of an organization’s performance has to do with behaviour which reflects the extent to which it is concerned with the quantity or volume of what it produces and the cost of operation.

5    Typically, in this organization, one would expect there to be regular meetings at all levels devoted to improving productivity and efficiency.
4.5  Typically, in this organization, one would expect that rationalization (e.g. grouping of departments into faculties or schools) and establishment of budget centres have been carried out with a view to improving productivity and efficiency.
4    Typically, in this organization, one would expect departments and individuals to have been provided with incentives to use resources efficiently.
     Typically, in this organization, one would expect periodic reviews of support units to be made with a view to establishing the extent to which the units give ‘value for money’. One would also expect appropriate action to be taken in cases where ‘value for money’ is not established.
3.5  Typically, in this organization, one would expect that in new programme/course design, a great deal of emphasis would be placed on demand for the programme/course and economy in the use of resources.
     Typically, in this organization, one would expect it to have been made clear to all unit heads and above that productivity–efficiency is a major criterion used to judge the worth of their decisions.
3    Typically, in this organization, one would expect evening and extension programmes/courses to be approved on the understanding that they cover costs.
     Typically, in this organization, one would expect there to be a constant drive to achieve low unit costs.
2.5  Typically, in this organization, one would expect resources to be underutilized in the evenings or weekends.
2    Typically, in this organization, one would expect that, so long as externally established norms are met, little attention is paid to efficiency.
1.5  Typically, in this organization, one would expect there to be a notable absence of mechanisms for assessing productivity and efficiency.

The space below is provided for raters to write down their own example (optional – see rater’s instructions):
_________________________________________________________________________
Numerical Rating: _______
Appendix B (4)
Performance Dimension: Cohesion
Definition: This aspect of an organization’s performance has to do with behaviour which reflects the extent to which it is concerned with staff morale, interpersonal relationships, teamwork and sense of belonging.

5    Typically, in this organization, one would expect senior management to have taken positive steps to create a climate in which employees at all levels are made to feel valuable members of the organization.
4.5  Typically, in this organization, one would expect staff to regularly refer to their sense of commitment to the institution.
4    Typically, in this organization, one would expect mechanisms to exist for staff to share problems and to work together.
     Typically, in this organization, one would expect the senior management team to visit academic and administrative units regularly and talk freely and informally with members of staff.
3.5  Typically, in this organization, one would expect there to have been an effort either to foster allegiance to parts of the organization in the face of growth or to limit expansion so as to maintain a sense of belonging.
3    Typically, in this organization, one would expect staff to demonstrate greater allegiance to the parent faculty or department than to the institution as a whole.
2.5  Typically, in this organization, one would expect senior management to express a commitment to the maintenance of staff morale but not to do sufficient to ensure that people at all levels have a sense of belonging.
     Typically, in this organization, one would expect there to be frequent conflicts between the centre and subunits, between academic and administrative units, between one academic or administrative unit and another, and between individuals.
2    Typically, in this organization, one would expect feelings of loyalty and sense of belonging to be undermined by an approach to human relations issues (e.g. to contract renewal) which generates insecurity.
1.5  Typically, in this organization, one would expect there to be a general lack of informal contact amongst staff members, demonstrated by inadequate staff common room life and too many closed office doors.

The space below is provided for raters to write down their own example (optional – see rater’s instructions):
_________________________________________________________________________
Numerical Rating: _______
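As an illustration of how scales such as those in Appendix B might be handled in practice, the sketch below represents one dimension as a set of numeric anchors with abbreviated behavioural statements and records an institution’s self-rating against it. The class, helper and data are invented for this example and do not reproduce the study’s instrument or any scoring software used in it.

```python
# Illustrative sketch only: one way to represent a behaviourally anchored scale
# such as Appendix B(1) and collect self-ratings into an institutional profile.
# Anchor texts are abbreviated and all names here are invented for the example.
from dataclasses import dataclass, field

@dataclass
class BARSScale:
    dimension: str
    definition: str
    # Mapping of scale position (1-5) to an abbreviated behavioural anchor.
    anchors: dict = field(default_factory=dict)

    def describe(self, rating: float) -> str:
        """Return the anchor closest to the rating, as a rough interpretation aid."""
        closest = min(self.anchors, key=lambda pos: abs(pos - rating))
        return self.anchors[closest]

info_comm = BARSScale(
    dimension="Information Management-Communication",
    definition="Ability to distribute timely and accurate information needed "
               "by members to do their jobs.",
    anchors={
        5.0: "Mechanisms cascade information systematically through the hierarchy.",
        4.0: "Governing body minutes made available and circulated to those who need them.",
        3.0: "Formal and informal channels exist but provision is not always timely.",
        2.0: "Information provided ad hoc, only if one knows it exists and whom to ask.",
        1.5: "Little or no publicity given to major developments.",
    },
)

# An institutional profile is simply the set of ratings across the retained scales.
profile = {"Information Management-Communication": 3.5}
for dimension, rating in profile.items():
    print(f"{dimension}: {rating} -> nearest anchor: {info_comm.describe(rating)}")
```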
Notes
1. Details of the methodology used to generate the Competing Values Framework can be found in
Quinn and Rohrbaugh (1981, 1983).
2. Full details of the method can be obtained from the author of this paper.
References
Cameron, K.S. (1986) ‘Effectiveness as Paradox: Consensus and Conflict in Conceptions of Organisational Effectiveness’, Management Science 32(5): 539–53.
Campbell, D.T. and Fiske, D.W. (1959) ‘Convergent and Discriminant Validation by the Multitrait–Multimethod Matrix’, Psychological Bulletin 56: 81–105.
Edwards, R.L. (1986) ‘Using Multidimensional Scaling to Test the Validity of Behaviorally
Anchored Rating Scales: An Organisational Example Involving the Competing Values
Framework’, PhD diss., State University of New York at Albany.
Henderson, F., Anderson, N. and Rick, S. (1995) ‘Future Competency Profiling: Validating and
Redesigning the ICL Graduate Assessment Centre’, Personnel Review 24(3): 19–31.
Kinicki, A., Bannister, B., Hom, P. and DeNisi, A. (1985) ‘Behaviorally Anchored Rating Scales vs
Summated Rating Scales: Psychometric Properties and Susceptibility to Rating Bias’, Educational
and Psychological Measurement 45: 535–49.
Lascu, D.N., Ashworth, N., Giese, T. and Omar, M. (1995) ‘The User Information Satisfaction
Scale: International Applications and Implications for Management and Marketing’, Multinational
Business Review 3(2): 107–15.
Nunnally, J.C. and Bernstein, I.H. (1994) Psychometric Theory. New York: McGraw-Hill.
Pounder, J.S. (1997) ‘Measuring the Performance of Institutions of Higher Education in Hong Kong: An Organisational Effectiveness Approach’, PhD diss., Brunel University/Henley Management College.
Quinn, R.E. and Cameron, K.S. (1983) ‘Organisational Life Cycles and Shifting Criteria of
Effectiveness: Some Preliminary Evidence’, Management Science 29: 33–51.
Quinn, R.E. and Cameron, K.S. (1988) Paradox and Transformation: Towards a Theory of Change
in Organisation and Management. Cambridge, MA: Ballinger.
Quinn, R.E. and Rohrbaugh, J. (1981) ‘A Competing Values Approach to Organisational
Effectiveness’, Public Productivity Review 5(2): 122–40.
Quinn, R.E. and Rohrbaugh, J. (1983) ‘A Spatial Model of Effectiveness Criteria: Towards a
Competing Values Approach to Organisational Analysis’, Management Science 29(3): 363–77.
Schriesheim, C.A. and Eisenbach, R.J. (1995) ‘An Exploratory and Confirmatory Factor-Analytic Investigation of Item Wording Effects on the Obtained Factor Structures of Survey Questionnaire Measures’, Journal of Management 21(6): 1177–93.
Smith, P.C. and Kendall, L.M. (1963) ‘Retranslation of Expectations: An Approach to the
Construction of Unambiguous Anchors for Rating Scales’, Journal of Applied Psychology 47:
149–55.
Spreitzer, G.M. (1995) ‘Psychological Empowerment in the Workplace: Dimensions, Measurement,
and Validation’, Academy of Management Journal 38(5): 1442–65.
Sullivan, D. (1996) ‘Measuring the Internationalization of a Firm: A Reply’, Journal of
International Business Studies 27(1): 179–92.
Correspondence to:
DR J. POUNDER, Lingnan College, Department of Management, Tuen Mun, New Territories, Hong Kong. [pounder@ln.edu.hk]