
8 Paper 2 Task 1

The text for this task is reproduced on pages 3 and 4. It is being used in the following situation:

M is a manager in a tourist hotel, with customers mainly from the UK. Her job involves
dealing with bookings and correspondence with hotel customers by email. Her overall
level is low intermediate (CEFR B1). Her company has sent her on a 2-week intensive
one-to-one course, focusing on her professional writing needs. At the end of the course
the company informs the teacher that it would like a report. In order to provide more
information for the report for the company, the teacher decides to administer this two-part
writing test from a public exam.

Using your knowledge of relevant testing concepts, evaluate how effectively this task fulfils its purpose
for this learner in this situation.

Make a total of six points. You must include both positive and negative points.

8.1 Guideline Answer

Positive

• Level The level of the tasks is suitable for a low intermediate student.

• Direct test It is a direct test of writing / it is clearly a writing test and so has face validity.

• Integrative test It tests M's ability to use both language and writing sub-skills.

• Context/content The test is clearly work and business orientated, increasing face validity.

• Appropriateness of text type/task A letter to an unknown person uses similar language/organisation as a business email / is similar to a business email.

• Writing sub-skills The writing will test M's ability to use writing sub-skills such as paragraphing, logical progression of ideas, layout, salutations etc.

• Functional language The functions required in both tasks (explaining, acknowledging, offering, requesting, giving directions) are appropriate.

• Style Both tasks require neutral/formal language.

• Instructions There are clear instructions / information about content to include.

Positive Applications

• Motivation M should find the test motivating to do.

• Relevance M/employer will see the relevance of the test in terms of M's needs.

• Usefulness The functional language tested is useful for M in terms of her work needs.

• Appropriateness of task types M/employer will feel able to trust the outcome of the test.

• Data from the test The teacher should be able to write a useful report on the basis of the writing.

• Evidence for M/her employer The test should provide evidence of M's overall writing ability in terms of organisation / ability to select relevant content / use of fixed phrases / cohesion etc.

• Appropriate style M will see the test as valid as she needs to write in a neutral/formal style at work.

• Format M will not be tested on her ability to think of ideas but rather on her language/writing skills / she'll be able to show her true abilities / she'll know what to do.

• Predictive validity The test will show M/employer how she will perform at work / what she needs to continue working on after the course has finished.

Negative

• Appropriateness of memo A memo to a colleague lacks content validity because it is unlikely that M would do this in English.

• Lack of email The test does not test M's ability to write emails.

• Lexis/Topic The vocabulary/topic required, e.g. job applications, is not relevant to M's situation/hotel work.

• Subjective marking There is not one correct answer and the marker will have to use their judgement.

Negative Applications

• Needs/lack of relevance M/her employer will not feel that the topics/task types are relevant to M's needs at work.

• Motivation M may not be motivated to perform well.

• Not a full picture The results of the test will not give M/employer an indication of M's ability to use relevant lexis.

• Feedback on the course Neither the teacher nor M will be able to assess the success of the course.

• Lack of trust/reliability M may not have faith in the teacher's ability to mark the test / trust the results of the test / the marking may not be reliable.

8.2 Candidate performance


Candidate performance improved slightly from June 2010, with over 50% of the cohort gaining more than ten marks for this task. Examiners noted an overall improvement in candidates' understanding of what is required in this task. This was reflected in a more appropriate use of terminology, with fewer candidates attempting to use terms as the basis of their positive and negative points, and a more frequent and effective use of applications. The most frequent positive points identified by candidates were that it was a direct test at an appropriate level with clear instructions, and the most frequent negative ones were that there was no test of the learner's ability to write an email, the lexis was not relevant and the marking of the test was subjective. Less common points were that it was an integrative test set in a work and business context in an appropriate neutral/formal style, and that writing a memo was not appropriate for this learner. The least common points to be made were that a letter to an unknown person is similar to a business email and that it tests the learner's ability to use a range of writing sub-skills and functional language. These omissions reflect the fact that these were the points that candidates needed to be able to identify in terms of this particular test. Examiners commented on the fact that candidates are still over-relying on pre-learnt generic points taken from previous GLAs rather than developing their ability to evaluate the appropriateness of a test with a particular learner in mind. Poor layout continued to be a problem, with many answers suffering from cramped or confusing layout. Rather than using columns, better answers were laid out under the headings of Point and Application, which meant that candidates did not forget to include both elements.

Weaker answers:
• forgot to state the overall purpose of the test, i.e. achievement, or they described it as being a progress or diagnostic test, or they avoided the issue by not saying what kind of test it was
• repeated the same point, particularly that it was a direct test, or repeated the same application, particularly that the test would be relevant or not relevant to the learner
• linked to the above point, did not use a wide enough range of criteria with which to evaluate the test
• still continued to use testing terms such as content validity as the headings for their points, which resulted in the terminology not being related sufficiently to the point being made. As a result, it was unclear whether the candidates understood the terms or not and they lost marks
• did not refer to relevant testing concepts in terms of this particular test, or used terminology inaccurately to describe the test. For example, a high number of candidates used backwash to describe an achievement test, showing yet again that they did not understand that this term refers to the effect that the test has on a course rather than an effect in general. Candidates clearly did not understand that backwash could not be used to describe an end-of-course achievement test as the course had already been completed. Some candidates also described the test as being an example of an indirect rather than a direct test
• repeated pre-learnt points that they had seen in previous Guideline answers, particularly with reference to fresh starts, which was not relevant to this particular test
• identified key points but then lost marks because they did not include applications
• repeated the same application for different points (an application is only credited once), or included more than one application for a point, which resulted in repetition of applications over the whole answer
• did not refer explicitly to the learner and their stated needs/goals and how the test met or did not meet these needs and goals

Stronger answers (did the opposite of the above PLUS):
• applied the use of terminology to the test as appropriate, so that it was clear that they understood the meaning of the terms
• used testing terms judiciously, i.e. they did not use them for the sake of including them in their answer but used them where appropriate
• outlined a range of points in terms of the content, purpose, writing sub-skills, text type, (functional) language required and the marking system of the test
• combined the points and applications well, so that it was clear that they were evaluating the effectiveness of the test with this particular learner in mind

Candidates are recommended to:
• explicitly state what kind of test it is, which will help them to use the correct terminology relevant to that type of testing
• avoid approaching their evaluation through assessing the test against testing concepts such as validity, reliability, backwash etc.
• read the situation in the rubric carefully, seeing how each part of it can be relevant to the test and to the specified learner
• make sure their answers are specifically about the particular test
• make sure they always show how the points they make about the test's effectiveness apply to the particular learner
• cover a wide range of points relating to the test's effectiveness in their answers
• use terminology only when relevant and use it accurately
• avoid repeating the same application to the learner under different points
• use clear layout that shows which points are intended as positive and which as negative
• make sure they make a total of six points, including both positive and negative ones

8.3 Sample Answers

8.3.1 The following sample answer gained a high number of the marks available for this task
POSITIVE

This is a writing test and a direct test of the writing skill, and this is what the learner wanted to work on in the course. Therefore the test has face validity for the learner.

It is also an integrative test – it will test her knowledge of discourse features – especially in relation to
correspondence – and it tests lexis and grammar appropriate for the general genre of correspondence
too. The style in which the learner must write is formal in part 2 so this has content validity for the
learner who needs to correspond with clients (formal style).

The test has construct validity – the rubric is very specific & the learner will not have to invent
information (this is relevant to her needs because at work she presumably wouldn’t have to invent
information either.) It tests the candidate’s ability to write and not her imagination.

The test, by being divided into 2 parts, allows for "fresh tests" – so that if the learner is nervous at the start, she has a chance to settle in and can be more relaxed for the second part of the exam.

It’s a reliable test – it should give the teacher a good idea of the student’s overall writing ability in
controlled situations and under pressure. It will enable the teacher to make valid points on the report
about the student’s level, progress & achievement in the course.

NEGATIVE

There is a problem with content validity because the learner specifically wants to write emails to clients and neither parts of the test deal with email (Part 1: memo, Part 2: letter – so the genre is slightly wrong).

There is another problem with the topics of the correspondence – neither is related to hotels or tourism
– this gives the test rather low content validity and possibly low face validity as the learner may
question what relevance the topics have to her needs.

The one-to-one course that M did may have addressed her specific writing needs and may not have
prepared her for this type of test where she has to write a very formal letter in reply to a person who is
applying for a job (part 2) – responding accurately to this task requires a good knowledge of
genre, discourse conventions and appropriate lexis and grammar – M may not have studied these
things on the course and therefore the test could be unreliable – it may not be a good indicator of her
actual ability to write emails for her job.

Examiner’s comments on sample answer


The candidate identifies four positive aspects of the test with different applications for the learner and
two negative aspects, one with an application and one without. In terms of the point without an
application, the candidate writes that the learner specifically wants to write emails to clients and
neither parts of the test deal with email but does not say what the learner’s reaction would be, e.g. she
could be demotivated or the test would not provide a full picture of her abilities etc. It is worth noting
that whilst this is a good answer, the candidate gives no thought to the appropriateness of the
functions or range of writing sub-skills tested, which would have moved her answer to a higher level of
sophistication. Furthermore, the candidate's answer is also overlong and contains three other points which could not be credited because she had already outlined six valid points, which is the maximum required for this task. In addition, these points are not valid, i.e. that the test contains two fresh starts (a test with fresh starts would contain more prompts), it's a reliable test (this is a general comment with
no specific reference to this test), and speculating on what might have been included in the course,
which is outside the remit of the task. On a positive note, the candidate integrates her use of testing
terminology well into her answer, thereby showing that she fully understands the terms and can apply
them to this particular test.

8.3.2 The following sample answer gained half of the marks available for this task

Positive Points

1. The two tasks are examples of direct testing in that they can be applied to everyday language use. As a result, the test has some content validity.

2. The test covers different skills in writing (e.g. note taking, acknowledging, requesting, etc.) which are integrative skills and are thus valid for M.

3. The test items are appropriate for M's level (Low-Intermediate). They should not be seen by M as beyond her level and should therefore be seen as containing a high level of construct validity.

4. The tests are practical and easy to administer, requiring little expertise on the part of the tester.

Negative Points

1. The test does not test M's use of emails which is her principle use of English at work. As a result, the test lacks validity.

2. The test is administered by the teacher (and presumably corrected by him/her as well). As a result, the test may lack some reliability as it is not totally objective.

3. It is a summative test and presumably does not reflect the 2-week course content as M had expressed a desire or a course for her needs. As a result, it may not be seen as motivating and creates a negative backwash effect for M.

Examiner’s comments on sample answer


This answer contains three positive points and one negative point but the candidate loses marks
because there is a limited amount of reference to the learner. In terms of the positive points, there is
only one clearly stated application related to a point, i.e. the test covers different writing skills which is
relevant for the learner. The other two points of it being a direct test and the test being appropriate to
the learner’s level were not supported by applications to the learner, e.g. that the teacher would be
able to write a useful report on the learner’s control of writing sub-skills based on the data generated
by the test and that she would be able to show her true abilities because the test is at the right level.
The examiners noted that it would have been better if the candidate had laid out her answer under the main headings of Positive Points and Negative Points, as she does, but then with the sub-headings of Point and Application, so that she ensures that she includes a different application for each point that she makes. They also noted that there is a fourth positive point which could not be credited because it
was generic, i.e. that the tests are practical and easy to administer. In terms of the negative points, the
fact that the test will be marked subjectively is supported by the application to the learner that this
could make the marking unreliable but the other negative point that the test does not test the learner’s
ability to write emails which is her principal use of English at work is not accompanied by a clearly
stated application, e.g. that she will feel that the test is not relevant to her needs at work. The third
negative point could not be credited because the candidate is making an assumption about the course
content which is outside the remit of the task. It is positive that she integrates some testing terms into
her answer although she misuses backwash.

8.3.3 The following sample answer gained only a few of the marks available for this task

(+) The task requires the candidate to use a variety of language structures.
e.g. to talk about the future
to give instructions
Therefore, it is more valid than, for example, a gap-fill activity.

(+) It tests what has been taught.


i.e The course was a writing course, focusing on professional needs.
The test is a writing test, aimed at a professional context.

(+) The test would meet a learner’s expectations of what a test should look like i.e. It has face validity.
e.g. – quite formal structure
– bullet points.
– specifies appropriate word count.

(-) The level of the test may not be suitable for a low – intermediate learner.

Part 2, for example, requires topic lexis and grammatical structures which a student of this level might
struggle with.
e.g. acknowledging a letter in a formal letter requires quite complex structures
i.e I gratefully received your letter.

(-) The test is not direct.

We are told that she mainly deals with UK customers by email.

In neither question is she tested on her ability to communicate with customers or her ability to write
emails. It is therefore not an accurate gauge of her English in relation to her purpose in learning the
language.

(-) Both task types are very similar. This may be de-motivating to the candidate.

Examiner’s comments on sample answer


The candidate makes six points but only identifies one valid positive point (with no application to the
learner) that the test is set in a professional context and one valid negative point (with an application)
that the test does not test her ability to write emails. The first strength shows some potential in that the
candidate correctly identifies the fact that the test requires the learner to give instructions but this
could not be credited because she then goes on to say that this is more valid than … a gap-fill activity
which is not relevant. The third strength is a general point and descriptive. This also applies to the
third negative point, whilst the first negative point is inaccurate as the test is appropriate for the level of
the learner. This answer did not receive any marks for use of testing terms because face validity
occurred in a point which was not credited and direct test was used inaccurately as is shown when the
candidate writes that We are told that she mainly deals with UK customers by email. Again, this
candidate would benefit from thinking more carefully about layout with the use of the sub-headings
Point and Application to guide her answer. Overall, the lack of range and accuracy in this answer is
typical of weaker candidates who are not familiar / confident with evaluating the effectiveness of a test
with a specific learner / situation in mind.

