At the beep, pay attention

Gerald W. Bracey

I HAVEN’T heard a lot lately about time-on-task as an important factor in explaining achievement. Maybe I’m reading the wrong journals, but maybe a lot of people have realized that the usual ways of defining time-on-task all suffer from a major weakness: they rely on observation, and observation doesn’t get inside the students’ heads.

In the late 1980s, Mihaly Csikszentmihalyi and his colleagues developed a nonobservational approach called the Experience Sampling Method. Simply put, you ask people what they’re thinking about at particular times. Since asking students what they’re thinking while they are in the classroom would be rather disruptive, Gad Yair of the Hebrew University of Jerusalem came up with a technological solution. He gave 865 students digital watches programmed to beep eight times a day between 7:30 a.m. and 10:30 p.m. for a week. (Although Yair is in Israel, the students were American, part of the Sloan Study of Youth and Development begun in 1993.) At the beep, students were supposed to answer such questions as “Where were you? What was on your mind? What was the main thing you were doing? What else were you doing? Was the main thing important to you? Were you succeeding at what you were doing?” And the list goes on. The results appear in the October 2000 issue of the Sociology of Education.

Most of the variables are self-explanatory – ethnicity, grades, etc. – but one needs to be explained. Yair constructed a “general mood” variable based on the degree to which students felt active as opposed to passive, the extent to which they felt in control, their intrinsic motivation, and their sense of accomplishment. Yair defined as “at risk” those who were one standard deviation below the mean score on general mood, who spent on average more than three hours a day hanging out with peers, and who spent less than half an hour a day on homework.
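Read conjunctively, Yair’s at-risk rule amounts to a three-part test. The sketch below is illustrative only: the function name and field names are mine, and the assumption that all three criteria must hold together is my reading of the article, not the study’s stated coding.

```python
# Hedged sketch of Yair's "at risk" classification as described in the text.
# The thresholds come from the article; everything else is illustrative.

def is_at_risk(mood_score, mood_mean, mood_sd, peer_hours, homework_hours):
    """True if a student meets all three criteria: general mood more than
    one standard deviation below the mean, over three hours a day hanging
    out with peers, and under half an hour a day on homework."""
    low_mood = mood_score < mood_mean - mood_sd
    heavy_peer_time = peer_hours > 3.0
    little_homework = homework_hours < 0.5
    return low_mood and heavy_peer_time and little_homework
```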

One of Yair’s analyses examined the percentage of time students were engaged with the instruction they were receiving. The results are instructive if not especially surprising. Overall, students were engaged with their lessons only 54% of the time. Asian and white students reported higher percentages of engagement than Hispanic and African American students; students with good grades were more engaged than students with poor grades; and at-risk students were less engaged than students not judged to be at risk.

Students were least engaged when being lectured to (54.4% of the time – a figure that is possible because much of the time the instructional method was unknown). Lectures, though, were the most common form of instruction. Students were most engaged in science and mathematics and least engaged in social sciences and English. Students were substantially more engaged during discussions, lab work, group work, and individualized instruction, and less so while watching videos or TV. From grade 6 through grade 12, there was a steady decline in the percentage of time students were engaged in school.

Students’ lives outside of school also played a role in their attention to school. “The better the overall mood at home, with friends, and at work, the more engaged students are in class and the less likely they are to be externally preoccupied. Furthermore, the more time that students spend hanging out with peers, the more likely they are to report external preoccupations.” Instruction that students found relevant to their lives, instruction that was academically challenging, and instruction that made more academic demands also increased engagement.

When Yair recoded the variables to reflect what he calls high- and low-quality instruction, an interesting quality-by-ethnicity interaction appeared. Hispanic students had the lowest rates of engagement during low-quality instruction. When they were receiving high-quality instruction, though, their engagement rates were second only to those of white students. This was true for both at-risk and not-at-risk Hispanics. At-risk Asian students showed a similar sensitivity to high-quality instruction, and while the Asians who were not at risk showed some sensitivity to instructional quality, it was smaller than for their at-risk peers. African American students who were not at risk showed the least variation in engagement under conditions of differing instructional quality.

“Overall,” writes Yair, “the results of this study suggest that American schools are far from achieving their ideal of excellent yet common schools. Rather, they provide instruction that encourages mediocre engagement rates and almost equal rates of alienation [nonengagement] from instruction…. External factors … consume students’ attention, culminating in significant losses of human capital, especially among minority and at-risk students.” By “external” Yair means anything other than instruction, including the internal dialogues and musings we all have.

A Further Word on No Excuses

A SHORT version of the critique mentioned in “Erratum” appeared in the 10th Bracey Report, and Megan Farnsworth and I exchanged views on the matter in Backtalk (January 2001). Mine was mostly a general treatment of the report. Richard Rothstein of the Economic Policy Institute dug deeper into information about the actual schools. Some of his findings appeared as his weekly column in the New York Times (3 January 2001). A longer version will be forthcoming from EPI later in the year.

Rothstein finds a number of factors that work against the assumptions of No Excuses. For instance, one of the schools with many children eligible for federally subsidized lunches had more parents with college degrees than in the nation as a whole, 30% versus 24%. Fully 12% of the students at this school have parents with graduate degrees. Says Rothstein, “Bennett-Kew may be terrific, but can schools filled with the children of high school dropouts learn much from it?” Rothstein continues:

Or consider another No Excuses school, Morse Elementary in Cambridge, Mass. It has bilingual classes in Korean and Chinese for children of graduate students and faculty members at Harvard University and the Massachusetts Institute of Technology. Graduate students may have incomes low enough for their children to get subsidized lunches, but schools with children of less literate immigrants cannot just as easily post high scores.

Some schools that emphasize phonics have declining scores in the upper grades. Such is the case at Houston’s Mabel B. Wesley School, whose former principal, Thaddeus Lott, has been lionized on national television. (He now has responsibility for a number of schools, not just the one.) Samuel Casey Carter, author of No Excuses, says this decline happens in all schools, but, as Rothstein points out, this explanation makes no sense. No Excuses reports all test scores as national percentile ranks. Since, by definition, half of all students are always below the 50th percentile, if a school’s rank falls, the students are losing ground relative to other students in the nation. “Their school may be sacrificing comprehension to excessive focus on phonics,” Rothstein writes.

Moreover, test scores are not always consistent across tests. One school has high scores on commercial tests, but the North Carolina State Department of Public Instruction designates it as “low performing” because its state scores are so low. Rothstein thinks this result might stem from an emphasis on the kinds of low-level skills tested by commercial tests, to the detriment of the more advanced thinking required by North Carolina’s state test. On the other hand, it could also have something to do with the ease with which teachers and students accommodate to a particular commercial test in contrast to a state test that has some proportion of new items in each assessment. The 10th Bracey Report also contained a section indicating that standardized test scores don’t generalize well beyond themselves, even when used in a “low-stakes” setting.

This contrast between basic skills and more advanced thinking gives one pause in deciding what is best for children in poverty. In 1978, during the arguments of Debra P. v. Turlington, I mentioned to one of Debra P.’s lawyers, Diana Pullin, that for some students, the minimum competency test at issue in the case would become the curriculum. Her reply was that that would be more than they were getting now. Maybe so. But it seems foolish for Heritage to present a case that these children are learning as well as or better than those in more middle-class schools.

Rothstein credits No Excuses for highlighting some good practices, such as teacher collaboration, parent involvement, and strong leadership from principals. “But,” he says, “it also highlights others, like near-exclusive drill in phonics and computation, that seem inspired more by ideology than results.”

‘A’ Schools Fetch ‘A’ Home Prices

Sometime in the early 1980s, I recall a Washington Post story indicating that housing values in Fairfax County, Virginia, a generally high-performing school district, varied as much as $50,000 solely as a function of SAT scores. A recent study now indicates that, in Florida at least, home values rise and fall depending on the letter grade (A through F) the state assigns to the neighborhood school. This variability is independent of test scores, which also influence value, and of the model’s other variables.

David Figlio and Maurice Lucas of the National Bureau of Economic Research looked at price changes in Gainesville, Florida. Gainesville is an ideal area for such a study, they contend, because of tightly defined neighborhoods, stable zoning boundaries, and high housing turnover (45% of the houses in the study’s neighborhoods sold twice during this 10-month study, and 10% sold three times). They looked at the prices of homes that were sold during the five months before the schools received their grades and were sold again in the five months after the grades came out.

In discussing how grades are assigned, Figlio and Lucas show that the grades are somewhat arbitrary. “It is not clear that the grades provide ‘good’ information,” they write. For instance, if grading criteria were changed slightly, some B schools would be A schools and vice versa. The grades are widely known, however.

Prices for homes in the sending areas for two A schools increased by 11.2% and 20.4%, while prices for homes assigned to the four B schools decreased for three of those schools and increased by only 2.6% for the fourth. Plugging data into a regression equation, the researchers find that getting an A increases the value of a typical home by $9,179 (or about 7%) over a B grade. They also found that a B was valued at $9,876 over a C.
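The logic of that regression finding can be sketched in miniature. With a single A-versus-B dummy variable and no other covariates, the estimated coefficient is simply the difference in mean sale prices between the two groups of homes. Figlio and Lucas’s actual model controls for test scores and other variables; the figures below are made up for illustration and are not the study’s data.

```python
# Illustrative sketch, not the authors' model: the coefficient on an
# A-vs-B dummy with no other controls equals the gap in group means.

def grade_premium(prices_a, prices_b):
    """Difference in mean sale price: A-school homes minus B-school homes."""
    mean_a = sum(prices_a) / len(prices_a)
    mean_b = sum(prices_b) / len(prices_b)
    return mean_a - mean_b

a_homes = [140_000, 150_000, 160_000]   # hypothetical A-school sale prices
b_homes = [132_000, 141_000, 147_000]   # hypothetical B-school sale prices
premium = grade_premium(a_homes, b_homes)  # on the same scale as the $9,179 estimate
```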

There is some evidence that the effect is not permanent. The effect dropped by about 10% during each month after the grade assignment was made public. However, since Figlio and Lucas analyze only the results of the first year of grading, we don’t know what the impact, if any, was in the second year, 1999-2000. Still, the results “suggest that the housing market responds significantly to the new information about schools provided by these ‘school report cards’ even when taking into consideration the test scores or other variables used to construct these same grades.” Policy makers should, they caution, be careful. An abstract of the paper is available online from the National Bureau of Economic Research at papers/w8019; there’s a $5 charge for the full text.
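If the monthly drop compounds, the premium would fade geometrically. A minimal sketch, assuming a compounding 10% decline (the article does not say whether the drop is compounding or linear, so this reading is mine):

```python
# Hedged sketch: compounding decay of the $9,179 A-over-B premium at
# roughly 10% per month. Values after month 0 are projections, not data.

def premium_after(months, initial=9_179, monthly_decay=0.10):
    """Projected premium after a given number of months."""
    return initial * (1 - monthly_decay) ** months
```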

Copyright Phi Delta Kappa Mar 2001
