LANGUAGE TESTING
Types of Test:
There are many kinds of tests, each with a specific purpose and a particular criterion to be measured. Language tests differ with respect to how they are designed (test method) and what they are for (test purpose). Tests can be categorized according to the types of information they provide, and this categorization proves useful in deciding whether an existing test is suitable for a particular purpose.
Proficiency Tests:
Proficiency tests are designed to measure people’s ability in a language, regardless of
any training they may have had in that language. In the case of some
proficiency tests, ‘proficient’ means
having sufficient command of the language for a particular purpose.
A proficiency test is concerned with assessing what a student has learnt; it may relate to a known or unknown syllabus. The aim of a proficiency test is to assess the learner’s ability to apply in actual situations what he has learnt. The content of a proficiency test, therefore, is not based on the content or objectives of language courses which the learners have already followed. Rather, it is based on a specification of what candidates have to be able to do in the language in order to be considered proficient.
There are several well-known standard proficiency tests in English, including the Test of English as a Foreign Language (TOEFL) and the Cambridge Proficiency Exams.
Achievement Tests:
In contrast to proficiency tests, achievement tests are directly related to language courses, their purpose being to establish how successful individual students, groups of students, or the courses themselves have been in achieving objectives. They relate to a known syllabus and attempt to examine a learner’s achievement with specific reference to the objectives of a particular course. The university examination at the end of a BA degree course, for example, is an achievement test. The uses of an achievement test are limited: it can only indicate how much of the syllabus has been learnt; it cannot, however, predict the future performance of the learner.
There are two kinds: Final Achievement
Tests and Progress Achievement Tests.
Final Achievement Tests are those
administered at the end of the course of study. The content of the final achievement
test should be based directly on a detailed course syllabus or on the books and
other materials used. This has been referred to as the ‘syllabus content
approach’. The alternative approach is to base the test content directly on the
objectives of the course.
Progress Achievement Tests, as their name
suggests, are intended to measure the progress that students are making. They
contribute to formative assessment. Since ‘progress’ is towards the achievement
of course objectives, these tests too, should relate to objectives.
Aptitude Tests:
Aptitude tests are used to predict future performance. If, for instance, one wants to become an airline pilot, one should possess certain qualities such as physical dexterity, sensitivity to lights, etc. If one does not have the aptitude, he/she will not be allowed to enroll for training as a pilot.
In language learning, aptitude tests measure the learner’s probable performance in learning a new language. They show whether the learner has any special aptitude for learning a new language. For example, a particular learner may have difficulty in identifying the sounds of a new language.
Diagnostic Tests:
Diagnostic tests are used to identify a learner’s strengths and weaknesses. They are intended primarily to ascertain what further learning needs to take place. A diagnostic test, as its name suggests, enables a teacher to diagnose a learner’s strengths and weaknesses so that she can plan a suitable remedial program.
Diagnostic tests also provide immediate feedback to the learner. If he/she scores well, he/she will be motivated to learn further. If, however, he/she scores badly, he/she gets a clear indication of his/her shortcomings and can do some useful revision.
Placement Tests:
Placement tests are intended to provide information that will help to place students at the stage of the teaching program most appropriate to their abilities. Typically, they are used to assign students to classes at different levels.
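In practice, such a placement decision is often just a mapping from score bands to class levels. The sketch below illustrates the idea in Python; the band boundaries, level names, and scores are invented for the example, not taken from any real placement test.

```python
# Hypothetical placement bands: boundaries and level names are
# invented for illustration only.
def place(score):
    """Map a placement-test score (0-100) to a class level."""
    if score < 40:
        return "Elementary"
    elif score < 70:
        return "Intermediate"
    return "Advanced"

# Invented scores for three students.
scores = {"Asha": 35, "Ben": 55, "Chen": 82}
classes = {name: place(s) for name, s in scores.items()}
print(classes)  # {'Asha': 'Elementary', 'Ben': 'Intermediate', 'Chen': 'Advanced'}
```

A real program would of course set its bands empirically, from trials of the test on students whose levels are already known.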
Direct vs Indirect Testing:
Testing is said to be direct when it requires the candidates to perform precisely the skill that is to be measured; this is particularly easy to arrange with the productive skills of speaking and writing. A beneficial backwash effect is likely, since practice for the test involves practice of the skills themselves.
Indirect testing attempts to measure the abilities that underlie the skills in which we are interested. For instance, Lado proposed testing pronunciation ability with a paper-and-pencil test in which the candidate has to identify pairs of words which rhyme with each other.
Discrete Point vs Integrative Testing:
Discrete point testing refers to the testing of one element at a time, item by item (e.g. a grammar item). Integrative testing, by contrast, requires the candidate to combine many language elements in the completion of a task (e.g. writing a composition).
TESTING: BASIC CONCEPTS: BASIC TERMINOLOGY
by Anthony Bynom, Ph.D., December 2001
Most
teachers are involved in testing in some form, either invigilating, marking or
actually writing tests. This article is aimed at teachers who may be either
assessing test material or writing their own tests. The basic question to begin with is: why do we test?
I hope there
are not many of you who will follow the example of a former colleague. This
person would use tests as a punishment. If his class became too boisterous he
would announce a test. Then he would retire to his desk and set the most difficult
grammar test he could find in order, as he put it, ‘to teach the blighters a
lesson.’ In this instance testing was used as a means of classroom management.
The more conventional answer to the question of why we test is to get
information. The type of information required will dictate the type of test
needed. Types of information needed could come under the following headings.
· SELECTION
This is where testees are being selected for some future course or type of employment. You are trying to find out if the people concerned have the right attributes to benefit from the course.
· PLACEMENT
This is when you want to place testees at the correct level for their abilities.
· RANK ORDERING OF TESTEES
This familiar process places testees in order: who is first, second, etc.
· APTITUDE
This attempts to predict likely future performance.
· DIAGNOSIS OF
PROBLEM AREAS
An attempt to find out why things
are happening.
· VALIDATION OF CURRICULA, PEDAGOGICAL PRACTICES AND MATERIALS
Is the curriculum working? Do your
classroom practices succeed? Are the materials you are using appropriate?
· FEEDBACK
This is used to amend
procedures, if necessary.
· EXPERIMENTATION
When you want to try something new
or different.
TYPES OF TESTING
Testing may initially be divided into two types.
1. Norm Referenced Tests
Norm referenced tests answer such questions as: how does student ‘A’ compare with student ‘B’? Attainment or achievement tests should be specifically designed to answer this question.
2. Criterion Referenced Tests
Criterion referenced tests answer such questions as: how much has student ‘Y’ learnt? Or, how much does student ‘X’ know? Proficiency tests should be designed to answer such questions.
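The contrast can be illustrated with a small Python sketch; the scores, the pass mark, and the student labels are invented for the example. A norm referenced view ranks testees against each other, while a criterion referenced view compares each testee with a fixed standard set in advance.

```python
# Invented scores for four students.
scores = {"A": 72, "B": 58, "Y": 85, "X": 64}

# Norm referenced view: rank testees against each other.
ranking = sorted(scores, key=scores.get, reverse=True)
print("Rank order:", ranking)  # ['Y', 'A', 'X', 'B']

# Criterion referenced view: compare each score with a fixed standard.
CRITERION = 60  # pass mark fixed in advance, independent of the group
results = {name: ("meets criterion" if s >= CRITERION else "below criterion")
           for name, s in scores.items()}
print(results)
```

Note that the ranking says nothing about whether anyone has reached the standard, and the criterion check says nothing about who came first: the two views answer different questions.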
In all the other test areas you should always bear in mind the purpose of your test. For example:
Aptitude tests should be
designed to provide information to assist prediction of future learning
success.
Diagnostic tests
should obviously provide information on areas of difficulty.
Performance tests
should be designed to provide information for the evaluation of a specific
skill or task.
RELIABILITY AND VALIDITY
We now move on to the two key issues for any test,
reliability and validity.
These two concepts and their relationship to testing form the most fundamental issue in current thinking on testing. Although they are absolutely basic, they very often appear to be incompatible: some tests that are known to be reliable are seen as falling short in validity, while criticism of a test’s validity is often based on the question of how reliable it is. In fact, it might truly be said that the whole art and science of testing lies in attempting to find ways of harmonising these two important qualities. To help with this, let’s look at their meanings.
Reliability. This means the consistency of the test’s
judgement and results. It is about producing precise and repeatable
measurements on a clear scale of measurement units. Such tests give consistent
results across a wide range of situations. This is achieved by carefully
piloted trials of the test. Sometimes, several versions of the test may be used
on a controlled population of testees. The outcomes of these trials are
carefully analysed in order to establish consistency. When a consistent set of
figures is achieved the test may be deemed reliable.
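One common way of checking such consistency is test-retest reliability: the same testees sit the test twice and the two sets of scores are correlated. The sketch below computes a Pearson correlation in Python; the scores are invented for the example, and operational test analysis would of course use more data and further statistics.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores: the same five testees sitting the same test twice.
first_sitting = [55, 62, 70, 48, 81]
second_sitting = [57, 60, 73, 50, 79]
r = pearson_r(first_sitting, second_sitting)
print(round(r, 2))  # a value close to 1.0 suggests consistent results
```

A correlation near 1.0 means the test ranked the testees almost identically on both occasions, which is the kind of consistency the piloting described above is trying to establish.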
Validity. This means the truth of the test in relation to what it is supposed to evaluate. It concerns the relevance and usefulness of what you are measuring. The difficulty in setting such tests lies in the problem of how sure you can be about what is actually being measured. Is it consistent with the worthwhile quality you think you’re measuring?
To help with this you should consider the
following:
· Content validity. Have satisfactory samples of language and language skills been selected for testing?
· Construct validity. Is the test based on the best available theory of language and language use?
· Predictive validity. Does the test successfully predict future outcomes?
· Concurrent validity. Does the test correlate with other existing measures, usually a similar test?
There are other ways one can look at
the subject of validity but the above are the main ones and give you the basic
idea.
DISCRETE POINT AND INTEGRATIVE TESTING
You may see or hear these words when
being asked to assess or even write a test so let’s see what they mean.
Discrete point tests are based on an analytical view of language. This is where language is divided up so that components of it may be tested. Discrete point tests aim to achieve a high reliability factor by testing a large number of discrete items. From these separated parts, you can form an opinion which is then applied to language as an entity. You may recognise some of the following discrete point tests:
1. Phoneme recognition.
2. Yes/No, True/False answers.
3. Spelling.
4. Word completion.
5. Grammar items.
6. Most multiple choice tests.
Such tests have a downside in that they take language out of context and usually bear no relationship to the concept or use of whole language.
Integrative tests
In order to overcome the above defect, you
should consider Integrative tests. Such tests usually require the
testees to demonstrate simultaneous control over several aspects of language,
just as they would in real language use situations. Examples of Integrative
tests that you may be familiar with include:
1. Cloze tests
2. Dictation
3. Translation
4. Essays and other coherent writing tasks
5. Oral interviews and conversation
6. Reading, or other extended samples of real text
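Of these, the cloze test is the easiest to illustrate mechanically: every nth word of a passage is deleted and the testee must restore it, which requires grammatical, lexical, and discourse knowledge at once. The sketch below shows the classical fixed-ratio deletion in Python; the passage and the deletion interval are arbitrary choices for the example.

```python
def make_cloze(text, n=5):
    """Blank out every nth word; return the gapped text and an answer key."""
    words = text.split()
    answers = {}
    for i in range(n - 1, len(words), n):
        answers[i] = words[i]       # remember the deleted word by position
        words[i] = "_____"          # replace it with a gap
    return " ".join(words), answers

passage = ("Tests can be categorized according to the types of "
           "information they provide about the learner")
gapped, key = make_cloze(passage, n=5)
print(gapped)  # Tests can be categorized _____ to the types of _____ ...
print(key)     # {4: 'according', 9: 'information', 14: 'learner'}
```

In practice, test writers often adjust the mechanical deletions by hand, since some gaps turn out to be unrestorable even for proficient speakers.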
OTHER ISSUES
Should you aim for direct or indirect testing? To
help with this decision you may find the following helpful:
Indirect testing
makes no attempt to measure the way language is used in real life, but proceeds
by means of analogy. Some examples that you may have used are:
·
Most, if not all, of the discrete point tests mentioned above.
·
Cloze tests
·
Dictation (unless on a specific office skills course)
Indirect tests have the big advantage of being very ‘test-like’. They are popular with some teachers and most administrators because they can be easily administered and scored; they also produce measurable results and have a high degree of reliability.
Direct tests, on
the other hand, try to introduce authentic tasks, which model the student’s
real life future use of language. Such tests include:
·
Role-playing.
·
Information gap tasks.
·
Reading authentic texts, listening to authentic texts.
·
Writing letters, reports, form filling and note taking.
·
Summarising.
Direct tests are task oriented rather than test oriented. They require the ability to use language in real situations, and they should therefore have a good formative effect on your future teaching methods and help you with curriculum writing. However, they do call for skill and judgement on the part of the teacher.
COMMUNICATIVE LANGUAGE TESTING
Since the late 1970s and early 1980s
the Communicative approach to language teaching has gained dominance. What is
actually meant by ‘Communicative ability’ is still a matter of academic
interest and research. Broadly speaking communicative ability should encompass
the following skills:
·
Grammatical competence. How grammar rules are actually applied in
written and oral real life language situations.
·
Sociolinguistic competence. Knowing the rules of language use, ‘Turn
taking’ during conversation discourse, etc. or using appropriate language for a
given situation.
·
Strategic competence. Being able to use appropriate verbal and
non-verbal communication strategies.
Communicative tests are concerned not only with these different aspects of knowledge but also with the testees’ ability to demonstrate them in actual situations. So, how should you go about setting a communicative test?
Firstly, you should attempt to
replicate real life situations. Within these situations communicative ability
can be tested as representatively as possible. There is a strong emphasis on
the purpose of the test. The importance of context is recognised. There
should be both authenticity of task and genuineness of texts. Tasks ought
to be as direct as possible. When engaged in oral assessment you should
attempt to reflect the interactive nature of normal speech and also assess
pragmatic skills being used.
Communicative tests are both direct
and integrative. They attempt to focus on the expression and
understanding of the functional use of language rather than on the more limited
mastery of language form found in discrete point tests.
The theoretical status of communicative testing is still subject to criticism in some quarters, yet as language teachers see the positive benefits accruing from such testing, it is becoming more and more accepted. Communicative tests will not only help you to develop communicative classroom competence but also bridge the gap between teaching, testing and real life. They are useful tools in the areas of curriculum development and in the assessment of future needs, as they aim to reflect real life situations. For participating teachers and students this can only be beneficial.