The GRE General Test is offered in two formats: paper-based and computer-adaptive. To assess all GRE candidates fairly, ETS built section-level adaptivity into the computer-based format. Section-level adaptation means the computer measures a test-taker's performance on the first section and uses it to set the difficulty of the second operational section. The first section is a fixed set of 20 questions; the questions in the second section vary with the examinee's performance on the first. An easier second section may sound like a relief, but it is not ideal for achieving the desired results. A harder second section provides examinees with a higher score ceiling and allows them to reach a higher threshold. This article discusses how computer-adaptive tests generate accurate scores.
Adaptive testing measures an examinee's performance in the first phase of the test and then serves questions whose difficulty matches their measured ability. The number of difficult questions they receive is crucial to reaching a high score. Easy questions in the second phase signal poor earlier performance, so examinees must correctly answer as many of the average first-section questions as possible. Even if two examinees answer the same number of questions correctly, the one who answered harder questions in the second section will earn the higher final score.
Computer-adaptive testing sets the difficulty of the questions in the second phase according to the level of knowledge measured during the first phase, estimated from the number of questions the examinee answers correctly. The test starts with questions of average difficulty, and from that point the test-taker's success or failure determines how heavily their answers weigh in the final score. Despite this variation in question sets, ability scores remain comparable across examinees thanks to item response theory (IRT), a psychometric method for producing equitable scores from different sets of items.
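To illustrate how IRT can make scores from different item sets comparable, here is a minimal sketch of a two-parameter logistic (2PL) model with a grid-search maximum-likelihood ability estimate. The function names, item parameters, and grid are invented for illustration; ETS's actual scoring model is not public.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability that an examinee with ability theta answers
    an item of difficulty b and discrimination a correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items, grid=None):
    """Maximum-likelihood ability estimate over a coarse grid.
    responses: list of 0/1 answers; items: list of (a, b) parameters."""
    if grid is None:
        grid = [x / 10 for x in range(-40, 41)]  # theta from -4.0 to 4.0
    def log_lik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if r else math.log(1 - p)
        return ll
    return max(grid, key=log_lik)
```

Because the likelihood accounts for each item's difficulty, an examinee who answers hard items correctly gets a higher ability estimate than one who answers the same number of easy items, which is exactly how different second-section pools remain comparable.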
Each measure has two sections whose questions are fixed once presented. The first section consists of average-difficulty questions that do not change regardless of whether the test-taker answers correctly. Although the second section's questions vary with performance on the first, they do not adapt within the section itself, and test-takers may encounter the same problems if they take the test more than once. Moreover, every question is weighted equally, and questions appear in no particular order of difficulty.
The algorithm behind the GRE adaptive test selects the questions the examinee will encounter in the second section based on the level of knowledge measured in the first. This computer-based format uses the test-taker's demonstrated performance to secure the precision of the results. The adaptivity applies only on a section-by-section basis: performance on one GRE measure has no effect on the others, and questions within a section do not change regardless of the accuracy of the answers.
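The section-level routing described above can be sketched as a simple lookup: the raw score on the fixed first section selects the difficulty pool for the second. The cutoffs and function name below are illustrative assumptions, not ETS's actual thresholds.

```python
def pick_second_section(first_section_correct, hard_cut=15, medium_cut=8):
    """Route to a second-section pool from the first-section raw score
    (out of 20). Cutoff values are invented for illustration."""
    if first_section_correct >= hard_cut:
        return "hard"
    if first_section_correct >= medium_cut:
        return "medium"
    return "easy"
```

The key design point is that routing happens once, after the whole first section, rather than after every answer.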
The old GRE adaptive test adjusted the level of difficulty of each succeeding question based on the examinee's previous answer. Examinees were initially assumed to have average knowledge and so began with questions of medium difficulty; the difficulty then increased with every correct answer.
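The old question-by-question adaptation can be sketched as a simple walk over a difficulty scale, stepping up after a correct answer and down after a wrong one. The 0-to-1 scale, step size, and function name are assumptions for illustration only.

```python
def question_adaptive_walk(answers, start=0.5, step=0.1):
    """Old question-adaptive model: difficulty rises after each correct
    answer and falls after each wrong one (illustrative scale 0..1)."""
    difficulty = start
    path = [difficulty]
    for correct in answers:
        difficulty += step if correct else -step
        difficulty = min(1.0, max(0.0, difficulty))  # clamp to the scale
        path.append(round(difficulty, 2))
    return path
```

Contrast this with the section-adaptive design, where difficulty changes only once, between sections.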
ETS launched the GRE section-adaptive test in August 2011. The level of difficulty of the questions was no longer adjusted after every answer; instead, the adjustment occurs only after the examinee completes the full set of questions in the first section.
The computer-based test challenges the attention span of test-takers. Compared to the paper-based test, reading exam questions on a bright monitor tires the eyes more quickly. To cope with concentration issues, test-takers can run practice tests on the computer and reserve study sessions for traditional methods: drills and reading can be done with books and flashcards instead.
Before taking the actual computer-based test, test takers must be familiar with the following features:
- Preview and review options within a section
- Marking questions for review so you can move on to easier questions and return later
- Changing or editing answers within a section
- An on-screen calculator for the Quantitative Reasoning section
- Basic word-processing features for the Analytical Writing section
These features offer test-takers more convenience: they save time, make answers easy to review, and simplify calculations.
Instead of worrying about the adaptive portion, test takers may focus on the following strategies:
- Score strategy
- Time management
In this way, they can allocate their time efficiently and develop a good plan to garner the desired results on the test.
For the Verbal Reasoning and Quantitative Reasoning measures, the raw score is the number of questions answered correctly, and each measure is reported on a scale from 130 to 170. The raw score for each measure is converted into a scaled score through equating, which reflects the examinee's level of performance regardless of which second section they received; the equating process accounts for variations in difficulty across test editions. The final score of the Analytical Writing measure, by contrast, is based on points given by human examiners together with a computer algorithm. Each essay is scored by a trained human rater and by the e-rater, a computerized program that evaluates writing proficiency from essay features. If the human and e-rater scores agree closely, their average becomes the final score. If they disagree, a second human rater scores the essay, and the average of the two human scores becomes the final score.
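The two-rater resolution just described can be sketched as follows. The agreement threshold `tol` and the function name are assumptions, since ETS does not publish the exact rule; the flow of the logic comes from the paragraph above.

```python
def essay_final_score(human, e_rater, second_human=None, tol=0.5):
    """Analytical Writing score resolution (illustrative):
    - if the human and e-rater scores are close, average them;
    - otherwise a second human score is averaged with the first."""
    if abs(human - e_rater) <= tol:
        return (human + e_rater) / 2
    if second_human is None:
        raise ValueError("disagreement requires a second human score")
    return (human + second_human) / 2
```

For example, human 4.0 and e-rater 4.5 would average to 4.25, while human 4.0 against e-rater 6.0 would trigger a second human rating.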
The only way to get a harder second section is to answer more questions correctly. The difficulty of the questions you receive reflects the mastery you demonstrate in answering them, so the stronger your responses, the tougher your challenges will be down the line. Test-takers should therefore invest time in studying, develop an effective strategy, and do their best to score as high as possible. Remember not to dwell on any single question; move on to the easier ones and return to the harder ones with your remaining time.
Scaled scores vary with the test-taker's performance in the first phase of the test: an outstanding first section earns challenging questions in the second section, while a poor first section yields easy ones. Correct answers in the challenging section therefore carry the most weight toward the final score. The GRE measures are independent of each other, so an examinee may earn different results on each.
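The path-dependent weighting above can be illustrated with raw-to-scaled conversion tables: the same raw score maps to a higher scaled score on the hard path than on the easy one. The numbers here are invented, since the real equating tables are not published.

```python
# Illustrative conversion tables keyed by second-section path.
CONVERSION = {
    "hard": {20: 155, 30: 165, 40: 170},
    "easy": {20: 140, 30: 150, 40: 158},
}

def scaled_score(path, raw):
    """Look up a scaled score from an (invented) equating table,
    falling back to the nearest listed raw score at or below `raw`."""
    table = CONVERSION[path]
    key = max(k for k in table if k <= raw)
    return table[key]
```

Under these made-up tables, a raw score of 30 yields 165 on the hard path but only 150 on the easy path, which is the sense in which the harder second section "weighs" more.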