Frequently Asked Questions about Curriculum-Based Measurement

FAQs In Mathematics
FAQs In Reading

In 2004, the National Center on Student Progress Monitoring held its first Summer Institute. The focus of the Institute was on CBM in Reading. Here, you will find an extensive compilation of the questions asked at the Institute, as well as some others that we thought would be helpful.

Reading Sub-topics
Page last updated on 9/9/05
General Curriculum-Based Measurement (CBM) Questions
Curriculum-Based Measurement (CBM) versus Other Tools
Benchmarks
Letter Sound Fluency (LSF)
Word Identification Fluency Measure (WIF)
Passage Reading Fluency (PRF)
Where do I go from here?
If there are questions from the new Reading FAQ that you would like to ask or there is something you want to discuss, visit our Discussion Board.

General Curriculum-Based Measurement (CBM) Questions
What is the purpose of the “timing”, measuring what the child has/can master within a certain timeframe?
These measures are called curriculum-based, but are they based on our curriculum? Does that matter?
How is the minimum # of assessments required determined? How do these requirements relate to NCLB? (Are there requirements stated by law?)
Doesn’t the child get distracted by the timer and the teacher marking on a piece of paper while they are trying to recite?
For students who have difficulty reading how do you deal with anxiety/self-esteem issues connected with reading aloud for one minute?
What sorts of adjustments are made for students whose native language is not English?
How can assessments be valid and reliable if teachers can grade tests differently?
Do you have any resources/references for research correlating CBM results with performance on high-stakes testing?
Curriculum-Based Measurement (CBM) versus Other Tools
We already use the MCAs and other standardized achievement tests to assess students. How is this different from CBM?
How is CBM different from running records? Or informal reading inventories?
What are the similarities and differences between Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and Curriculum-Based Measurement (CBM)?
Can you use DIBELS materials and apply the decision making framework and goal-setting criteria used with CBM?
Benchmarks
What is the rationale for conducting benchmarking three times a year, especially for those students who are performing at or above the benchmark?
How are benchmarks determined?
Does the Tukey method use the middle point or median number?
Letter Sound Fluency (LSF)
What is the purpose of the Letter Sound Fluency Test?
How is the Letter Sound Fluency Test administered?
How do you grade the responses of a child with speech or dialect issues?
According to the directions in the Letter Sound Fluency Test, students are asked to tell the sounds the letter makes. If a letter has two sounds (e.g., hard and soft c and g), which response is correct?
On the Letter Sound Fluency test, why are some letters capitalized and some lower case?
Is it important to monitor the progress of phonemic awareness?
Word Identification Fluency Measure (WIF)
What is the Word Identification Fluency measure and what purpose does it serve?
How is the Word Identification Fluency measure administered?
How should I score the student’s response if he/she sounds out each sound in the word but doesn’t say the entire word fluently?
How should I score a student’s response if he/she corrects a word that was first mispronounced? How should I score a response if the student repeats the word and the second pronunciation is incorrect—even if the original pronunciation was correct?
If a student hesitates for two seconds on a word, why doesn’t the teacher tell them the word before instructing them to “Go on?”
A list of 50 words may seem overwhelming to some of my first graders. Can I put the words on individual flash cards instead?
How is the student’s score prorated (i.e., adjusted) if a student finishes reading a Word Identification Fluency list in less than 1 minute?
How are words selected for the Word Identification Fluency measures? Why are these words chosen? Shouldn’t I just use special vocabulary words from the student’s language arts program for the word lists?
Can Word Identification Fluency measures be reused?
How should I score a student’s response if the student has an articulation problem or mispronounces a word but always pronounces the word that same way in his/her own speech?
Passage Reading Fluency (PRF)
Passage Reading Fluency seems similar to some other measures (e.g., Running Records, Reading Miscue Inventory, and other types of Reading Inventories). How is Passage Reading Fluency different from these other measures?
How is Passage Reading Fluency administered?
If the child stammers and then says the word, does that count as a mistake?
In Passage Reading Fluency, if the student hesitates on a word, why do you tell them the word before they move on?
Since we use “looking at the picture” as one of our reading strategies, does the passage reading section for lower grades (i.e., 1st and 2nd) include pictures?
What are PRF benchmarks for 4th grade and up?
How does one address prosody (reading expression) within reading fluency?
When scoring a reading passage is there a difference between proper vs. common nouns? If they miss a proper noun one time, do you continue to count it as incorrect throughout the passage?
All of my students’ oral reading scores go up on one passage and all go down on another passage. Do these passages have different levels of difficulty even though they are supposed to be at the same level?
If the passages have different levels of difficulty, what should we do? Are they still useful for measuring?
Should I have my students practice reading passages out loud for one minute?
Should I count mispronounced words wrong for ELL students? Even if the student mispronounces a word due to an accent? Should I count mispronounced words wrong for students who speak with a different regional dialect?
For the “maze” test – is it always multiple choice? Is it ever just a straight cloze procedure?
Where do I go from here?
You say to keep the child at the same “level” all year long. What if they make great progress and get 100% accuracy for several trials? Do you still keep them for a handful of months until the year is over?
Some of my students’ scores are going down instead of up. Does this mean that they are not learning or that they are actually becoming worse?
Some of my students are making progress but they are still not meeting their goal. Should I lower their goal?
After a student has demonstrated a need for a different intervention, where can I find these interventions? Do these interventions increase in intensity as one is discarded as ineffective and another is tried?
General Curriculum-Based Measurement Questions
What is the purpose of the “timing”, measuring what the child has/can master within a certain timeframe?
The purpose of timing during CBM is to determine the child’s fluency with literacy-related skills. If a child is able to read 100 of 150 words correctly but takes five minutes to do so, then the child is reading at a rate that indicates some degree of reading difficulty. Additionally, one criterion to be met by any CBM measure is that it must be efficient. One way to maximize efficiency is to minimize the amount of time a teacher must take to implement the assessment.


These measures are called curriculum-based, but are they based on our curriculum? Does that matter?
Research has shown that it doesn’t matter whether passages are based on a particular school’s curriculum. What’s important is whether the passages used for monitoring are constant (or at a similar level of difficulty) from one measurement period to the next. We would like to be able to detect whether students are growing in their overall reading proficiency.


How is the minimum # of assessments required determined? How do these requirements relate to NCLB? (Are there requirements stated by law?)
In terms of NCLB, each State establishes its assessment program. While CBM can be used within a framework of Adequate Yearly Progress, and can certainly provide information as to the progress of individual students toward meeting state standards, NCLB has no requirements related to the number of assessments for instructional decision making.


Doesn’t the child get distracted by the timer and the teacher marking on a piece of paper while they are trying to recite?
Student distractions can be kept to a minimum if the assessor does not sit right next to the child during the assessment and uses a clipboard held in a position where the child cannot see when responses are being recorded on the protocol. In addition, place the timer where the student cannot see the digits. Many assessors clip the timer to the top or side of the clipboard for convenient and easy viewing. Traditional kitchen timers with a dial that “ticks” should not be used because they are not accurate; only digital timers or stopwatches should be used.


For students who have difficulty reading how do you deal with anxiety/self-esteem issues connected with reading aloud for one minute?
Self-esteem and anxiety issues can be addressed successfully by the way the teacher sets the stage for CBM in his/her classroom. If the teacher makes CBM seem like a punitive activity linked directly to other things students consider negative (e.g., grades or class ranking), then students will feel insecure and nervous. If the teacher explains CBM as a method for helping a student see the gains he/she is making in reading and a method for individual goal-setting, then students are less likely to be insecure or nervous.


What sorts of adjustments are made for students whose native language is not English?
There are no research-based adjustments for ELL students. The only adjustment supported by CBM research is purposeful ignoring of errors attributable to dialectal differences.


How can assessments be valid and reliable if teachers can grade tests differently?
Assessments used for progress monitoring can be valid and reliable at the level of the individual teacher if that teacher scores consistently. However, comparisons between teachers may not be valid and reliable if tests are scored differently.


Do you have any resources/references for research correlating CBM results with performance on high-stakes testing?
See:

Good, R.H. III, Simmons, D.C., & Kameenui, E.J. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading, 5, 257-288.

(We are currently seeking permission to post this article on our site.)


CBM versus Other Tools
We already use the MCAs and other standardized achievement tests to assess students. How is this different from CBM?
Standardized tests of achievement, like the MCAs, the Northwest Achievement Levels Tests, and the Iowa Tests of Basic Skills, are typically given once a year and provide an indication of student performance relative to peers at the state or national-level. Conversely, curriculum-based measures are an efficient means of monitoring student performance on an ongoing basis. With CBM, we are able to detect whether students are, in fact, making progress toward an end goal and to monitor the effects of instructional modifications aimed at helping the student reach this goal.


How is CBM different from running records? Or informal reading inventories?
Running records and informal reading inventories (IRIs) focus on specific skills, whereas curriculum-based measures are indicators of overall reading proficiency. In addition, a large body of research has shown that one-minute samples of the number of words read correctly from reading passages are sensitive, reliable, and valid measures of reading proficiency; there is little research to support the use of running records and IRIs. If teachers find them useful, running records and IRIs may be used in conjunction with weekly progress monitoring to help inform changes to students’ instructional programs.


What are the similarities and differences between Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and Curriculum-Based Measurement (CBM)?
CBM was developed over 25 years ago at the University of Minnesota and focused on reading, spelling, writing, and math for students in grades 1 through 8. CBM was designed to be efficient, reliable, and valid; to inform instruction; to monitor student growth; and to be tied to the curriculum. DIBELS was developed over 10 years ago at the University of Oregon and extended the procedures used in CBM down to early literacy skills in kindergarten through grade 3. The DIBELS procedures used for Oral Reading Fluency are the same as those for Passage Reading Fluency in CBM.


Can you use DIBELS materials and apply the decision making framework and goal-setting criteria used with CBM?
The oral reading fluency passages in DIBELS could be used as the materials for CBM passage reading fluency. The same criteria for setting goals and using the data in a decision making framework would apply. The additional DIBELS materials — Initial Sounds Fluency, Phoneme Segmentation Fluency, Nonsense Word Fluency, Word Use Fluency, Retell Fluency — are not identical to the other CBM reading materials (Letter-Sounds Fluency, Word-Identification Fluency, or Mazes) and therefore should not be substituted for CBM measures when setting goals and using the data in a decision making framework.


Benchmarks
What is the rationale for conducting benchmarking three times a year, especially for those students who are performing at or above the benchmark?
The rationale for conducting benchmarking three times a year with ALL students, even those students who have met or exceeded the benchmark, is that there is no guarantee students will continue to learn the skills being taught. Benchmarking ALL students three times a year provides educators with an efficient and accurate indicator of current skill performance. It also allows educators to monitor which students are on track for making AYP without having to wait until the end of the year testing.


How are benchmarks determined?
The benchmarks were derived from a compilation of research studies that have examined “typical” performance levels for students on these tasks. The most frequently cited source is Fuchs, Fuchs, Hamlett, Walz, and Germann (1993):

Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.

(We are currently seeking permission to post this article on our site.)


Does the Tukey method use the middle point or median number?
The Tukey method uses both the “median” and “middle” data points in establishing a trend.

Here, the “median” refers to the middle score with respect to absolute value in an array of scores, while the “middle” refers to the middle score with respect to relative position among a group of scores.

For example, suppose you have the following scores in Phase I (33, 45, 36, 42, 44) and Phase III (52, 60, 58, 64, 68) (recall that collected scores are divided into three fairly equal groups in the first step of the process). In Phase I, 42 is the median score, as it is the middle score with respect to absolute value (i.e., 33, 36, 42, 44, 45). Once the median is established, an “x” is placed in Phase I at the place where 42 on the vertical axis and 36 on the horizontal axis meet. Why 36 on the horizontal axis? Because it represents the middle score with respect to relative position among the five scores; here, absolute value is of no concern. We simply want to identify the midpoint of the data run with respect to time (recall that the horizontal axis represents time, or weeks of instruction). In this example, then, an “x” is placed above the score of 36 (because it represents the midpoint of the five weeks in which data were collected) at a point that corresponds to 42 on the vertical axis.

Once this is done in Phase I, the same is done in Phase III. In this case, 60 is the median score, and 58 corresponds with week 3 of the five weeks in Phase III in which data were collected, so the “x” is placed just above the score of 58 at a point that corresponds to 60 on the vertical axis. Once each “x” is determined, the two are connected with a straight line. This line represents the “trend line” for student performance during the entire progress monitoring period.
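The two anchor points and the resulting trend line can be sketched in a few lines of code. This is an illustrative sketch, not official scoring software; the week numbers (1–5 for Phase I, 11–15 for Phase III) are assumed for the example, since the FAQ does not specify where the phases fall in the school year.

```python
from statistics import median

def tukey_anchor(scores, weeks):
    """Return one phase's anchor point: (middle week by position, median score by value)."""
    mid_week = weeks[len(weeks) // 2]   # midpoint of the data run with respect to time
    med_score = median(scores)          # middle score with respect to absolute value
    return mid_week, med_score

# Phase I and Phase III data from the example above (week numbers assumed).
phase1 = tukey_anchor([33, 45, 36, 42, 44], weeks=[1, 2, 3, 4, 5])
phase3 = tukey_anchor([52, 60, 58, 64, 68], weeks=[11, 12, 13, 14, 15])

# Slope of the trend line connecting the two anchor points: words gained per week.
slope = (phase3[1] - phase1[1]) / (phase3[0] - phase1[0])
print(phase1, phase3, round(slope, 1))  # (3, 42) (13, 60) 1.8
```

With these assumed weeks, the trend line rises about 1.8 words correct per week, which is the kind of slope a teacher would compare against the student’s goal line.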


Letter Sound Fluency (LSF)
What is the purpose of the Letter Sound Fluency Test?
The purpose of the Letter Sound Fluency Test is to determine the student’s ability to fluently recode letters into sounds at the kindergarten level. Each of the five alternate forms of the test contains the 26 letters of the alphabet in random order. Therefore, in addition to measuring fluency with letter sounds, student responses can be used to make data-based instructional decisions.


How is the Letter Sound Fluency Test administered?
Letter Sound Fluency is administered individually. The examiner presents the student with a single page showing 26 letters in random order. The student has 1 minute to say the sounds that correspond with the 26 letters. The examiner marks student responses on a separate score sheet. The score is the number of correct letter sounds spoken in 1 minute. If the student finishes in less than 1 minute, the score is prorated.


How do you grade the responses of a child with speech or dialect issues?
Students with speech impairments or dialect issues are not penalized. For example, if the letter is c and a student with a frontal lisp says, “th-ee,” that response is correct.


According to the directions in the Letter Sound Fluency Test, students are asked to tell the sounds the letter makes. If a letter has two sounds (e.g., hard and soft c and g), which response is correct?
The most common sound is counted as the correct response. Therefore, the hard sounds of c (as in cat and cute) and g (as in gate and get) are the correct responses. The soft sounds of c (as in city and cent) and g (as in gem and gist) are incorrect. Regarding vowels, only the short sounds are counted as correct responses.


On the Letter Sound Fluency test, why are some letters capitalized and some lower case?
In the updated edition of the Letter Sound Fluency Test, all letters are lower case.


Is it important to monitor the progress of phonemic awareness?
For kindergarten students, an alternative to measuring letter-sound fluency is to measure phoneme-segmentation fluency, but it is much more difficult to administer and score.


Word Identification Fluency Measure (WIF)
What is the Word Identification Fluency measure and what purpose does it serve?
The Word Identification Fluency measure is designed to monitor the reading progress of first-grade students. Alternate forms, each consisting of 50 high-frequency words, are used to determine whether students gain fluency in recognizing and correctly pronouncing high-utility words across time.


How is the Word Identification Fluency measure administered?
The Word Identification Fluency measure is administered individually for 1 minute. Typically, a list of 50 words, divided into three columns, is provided to the student. The examiner’s copy is identical to the student copy except that it also contains a line next to each word. The examiner records a “1” on the line if the student pronounces the word correctly and a “0” if the student misses the word. If the student hesitates on a word for 2 seconds, the examiner prompts the student to “Go on.” If the student attempts to sound out a word, the examiner waits 5 seconds before prompting the student to move to the next word. When 1 minute has elapsed, the examiner says, “Stop” and circles the last word read. The total number of words the student read correctly is recorded, along with the date, on the test protocol, and the score is then charted on the student’s progress monitoring graph. If a student finishes reading the entire list of 50 words in less than 1 minute, the examiner prorates (i.e., adjusts) the student’s score.
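The proration mentioned above is simple rate arithmetic: scale the number of correct responses up to what the student would be projected to produce in a full minute. The sketch below shows the standard rate-based calculation as an illustration; the rounding behavior and the helper name `prorated_score` are assumptions, not part of the official scoring directions.

```python
def prorated_score(words_correct, seconds_used, total_seconds=60):
    """Project an early finisher's score onto the full 1-minute interval.

    Illustrative rate-based proration: correct responses per second,
    scaled to total_seconds (rounding is an assumed convention).
    """
    return round(words_correct / seconds_used * total_seconds)

# e.g., 45 words correct in 50 seconds projects to a 1-minute score of 54.
print(prorated_score(45, 50))  # 54
```

The same calculation applies to Letter Sound Fluency when a kindergartner finishes all 26 letters before the minute is up.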


How should I score the student’s response if he/she sounds out each sound in the word but doesn’t say the entire word fluently?
If the student sounds out the word correctly, even if the word is not pronounced quickly, the word should be scored as correct. For some sight words, the student may be able to use decoding skills to sound out the word. The examiner should keep in mind, however, that sounding out a word uses up testing time. Later, as the student gains fluency or learns to recognize words by sight, he/she will be able to say each word more quickly and read farther down the list, ultimately scoring more words correct. Of course, if any sound is incorrect while the student sounds out the word, the word is scored as incorrect (unless the student self-corrects).


How should I score a student’s response if he/she corrects a word that was first mispronounced? How should I score a response if the student repeats the word and the second pronunciation is incorrect—even if the original pronunciation was correct?
For self-corrections: The examiner scores the word as correct if the student self-corrects within the time period allowed. To carry the example to the extreme, suppose the student mispronounces the first word in the list; the examiner scores it as incorrect and waits for the student to pronounce the second word. If the student then hesitates for 2 seconds, the examiner marks the second word wrong and tells the student to proceed to the next (i.e., third) word. However, if during that 2-second period the student self-corrects the first word (rather than attempting the second), the examiner goes back to the first word and marks it as correct. Likewise, if the student says the first word incorrectly but then starts to sound it out, the examiner may wait 5 seconds for a possible self-correction before prompting the student to go on to the next word. If the student does self-correct, the word is scored as correct.

For incorrect repetitions: The student’s last response within the allotted time period is scored. For example, even if the student pronounced the word correctly the first time and then mispronounces the word when repeating it, the word is scored as incorrect. However, if the student just repeats a correct word, the repetition is counted as correct.


If a student hesitates for two seconds on a word, why doesn’t the teacher tell them the word before instructing them to “Go on?”
If a student hesitates for 2 seconds on a word in the Word Identification Fluency measure, the examiner provides only the prompt to “Go on” and does not tell the student the correct word. Of course, the teacher may make note of particular words that were missed and may provide instruction later on those words. Rapid recognition of particular words, that is, high-frequency words, is the critical aspect of the Word Identification Fluency measure. However, knowledge of one high-frequency word does not affect performance on other words. Therefore, the examiner does not want the student to focus any more attention on the word for which he/she hesitates; rather, the student should be focusing on the next isolated word.