Dissociation of Musical Tonality and Pitch Memory from Nonmusical Cognitive Abilities
W R Steinke
Abstract The main purposes of this study were to replicate, validate, and extend measures of sensitivity to musical pitch and to determine whether performance on tests of tonal structure and pitch memory was related to, or dissociated from, performance on tests of nonmusical cognitive skills — standardized tests of cognitive abstraction, vocabulary, and memory for digits and nonrepresentational figures. Factor analyses of data from 100 neurologically intact participants revealed a dissociation between music and nonmusic variables, both for the full data set and a set for which the possible contribution of levels of music training was statistically removed. A neurologically impaired participant, C.N., scored within the range of matched controls on nonmusic tests but much lower than controls on music tests. The study provides further evidence of a functional specificity for musical pitch abilities.
How various types of cognitive abilities are related, and further, how musical abilities are related to other abilities are questions with both a long history and current importance in cognitive psychology and the psychology of music. Music listening and performance engage a variety of processing levels – from elementary sensory-motor encoding to higher-level relational and symbolic representations. Music perception and cognition invite comparisons to other perceptual and cognitive processes, both in terms of commonalities and of differences.
For example, both music and speech are highly structured forms of communication processed by the auditory system. It has been suggested that precocious abilities in music and speech emerge from common origins (e.g., Davidson & Scripp, 1988; Lynch, Short, & Chua, 1995; Trehub & Trainor, 1993). Warren (1993) remarks that “our use of speech and our production and enjoyment of music are based on an elaboration of global organizational skills possessed by our prelinguistic ancestors” (p. 64). Bigand (1993) comments on general cognitive constraints influencing not only hierarchical organization in music and speech but symbolic processing in general.
However, the ongoing search for and description of functional music modules (e.g., Deliege, 1995) illustrates concern for the differentiation of musical abilities from one another and from other cognitive abilities. Distinct neurological processes revealed by brain electrical activity (e.g., Besson, 1997), cerebral blood flow patterns measured with positron emission tomography (e.g., Zatorre, Evans, & Meyer, 1994; Zatorre, Halpern, Perry, Meyer, & Evans, 1996), and patterns of dissociation found in neurologically compromised individuals (e.g., Patel & Peretz, 1997) indicate mental operations specific to the domain of music. Taken together, therefore, accounts of both integration and differentiation have been proposed. A further illustration, one of the earliest and most pertinent to the present study, concerns the relation of music and intelligence.
Music and intelligence
Earlier in this century, Spearman (1904, 1927) concluded that music shared the common g or general function with all other branches of intellectual activity, but allowed that a specific music factor s was operating in its own right. Within the next few decades, researchers identified a music group factor beyond g, but as Vernon (1950) noted, the factor was poor in reliability. Moreover, no consistent sub-grouping of musical factors such as pitch, rhythm, and tonal memory was found.
Subsequent studies, however, continued to provide encouragement for the notion that musical abilities were separable from general intelligence. Shuter-Dyson and Gabriel (1981) summarized a large number of studies (involving some 16,000 participants) that examined the relations between intelligence and musical abilities as measured by standard musical aptitude tests assessing a wide variety of musical skills. All reported correlations, though positive, were low. The authors concluded that, although intelligence may play a role in musical development, measures of intellectual efficiency are weak indicators of musical aptitude and ability. More recently, Howe (1990) also summarized the literature on the relation between intelligence and abilities, including music, and came to a conclusion similar to that of Shuter-Dyson and Gabriel (1981).
When empirical evidence failed to support the notion of a unitary construct of intelligence, the notion of separate intelligences was put forward. Gardner’s (1983) theory of multiple intelligences, for example, states that music intelligence is one of seven separate domains of intelligence. In a related fashion, Fodor (1983), Jackendoff (1987), and Peretz and Morais (1989) have suggested that the human cognitive system may comprise distinct “modules,” or physically separate subsystems each “endowed with a specific corpus of procedural and declarative knowledge” (Peretz & Morais, 1989, pp. 279-280).
Support for the distinctiveness of components or subskills of music has also increased. In a recent comprehensive review of the literature on human cognitive abilities, Carroll (1993) concluded that several independent musical factors within a factor called Broad Auditory Perception are suggested (Carroll, 1993, p. 393, italics in original) by current research evidence. These include discrimination of tones and sequences of tones on pitch, intensity, duration, and rhythmic dimensions, judgments of complex relations among tonal patterns, and discrimination and judgment of tonal patterns in musicality with respect to melodic, harmonic, and expressive aspects. Carroll cautions that a more definitive list awaits further research. Carroll also concluded that a higher-order factor of general intelligence dominates the Broad Auditory Perception factor as well as the musical factors listed above. Put differently, the variance associated with a musical factor may be partially accounted for by a unique component and partially accounted for by a general or shared intelligence.
The suggestions above arise from test results obtained from neurologically intact individuals with varying levels of music training and ability. Evidence that musical ability comprises distinct components of music separate from each other and from other cognitive abilities has also been obtained from neurologically compromised individuals in the form of single-patient studies. Individuals with brain injuries have demonstrated unique patterns of selective loss and sparing for musical factors such as melody recognition (Steinke, Cuddy, & Jakobson, 1996), contour processing (Peretz, 1993a), and timbre discrimination (Samson & Zatorre, 1994). Melody and rhythm may be dissociated (Peretz & Kolinsky, 1993). Brain injury and degenerative brain disease have also been shown to differentially affect musical abilities in comparison to intellectual and linguistic abilities (Peretz, 1993b; Polk & Kertesz, 1993). Finally, high degrees of musical ability have been reported for idiot savants who display exceptional skills in some limited field but are otherwise defective (Howe, 1990; Judd, 1988).
Sloboda (1985), in reviewing some of the studies on the supposed location and independence of music in the brain, cites evidence from both normal and brain-damaged individuals to conclude that “various sub-skills of music have a certain degree of neural independence. There is little evidence for a single ‘music centre’ in the brain” (p. 265). While Sloboda tentatively supports the notion of multiple modules within music, he agrees with Marin (1982) and writes that further progress is not likely to be made in this area until “the categories and distinctions between musical activities made on psychological and music-theoretic grounds are taken seriously by researchers” (p. 265).
Purposes of the Present Study
The present study had three purposes. The first purpose was to examine that category of musical activity known as a sense of tonality – to replicate, validate, and extend measures of sensitivity to tonal structure. The second was to determine whether performance on the tests of tonal structure (or a subset of the tests) was related to, or dissociated from, performance on tests of nonmusical cognitive skills. Data were collected from a large group of volunteers (n = 100) from the general community and statistically analysed in two stages corresponding to the first two purposes of the study. The third purpose was to examine the performance of a neurologically compromised individual, C.N., previously assessed as a case of atonalia (Peretz, 1993a). C.N.’s test results were expected to provide a direct assessment of dissociation that could be compared with the statistical solution obtained for the general sample.
Stages 1 and 2 – the General Sample
The purpose of Stage 1 was to test the convergent validity of a number of tonality measures. The purpose of Stage 2 was to examine the relation between sense of tonality in music and selected nonmusic abilities. The data for Stages 1 and 2 were collected from the same group of participants. The overall rationale for test selection, and the general method, will be presented followed by the results for each stage.
Stage 1 – Music Tests
Seven music tests were directed toward assessing sensitivity to tonality. An eighth test assessed memory for tonally unrelated pitches. Materials for the first four tests were based on previously available musical constructions. Materials for the remaining tests were constructed by W.R. Steinke, following rules consistent with traditional music theory. The musical validity of the rules and their application to test construction was verified by a professor of composition, C. Crawley, at the School of Music at Queen’s University.
Sensitivity to tonality. A sense of tonality, an important component of the “grammar” of music, is presumed necessary for musical understanding and enjoyment, and is therefore one of the most important and basic aspects of Western music. “Without the framework provided by the tonic (tonality in general), a note or chord is not integrated and remains merely a sound” (Handel, 1989, p. 342).
Our approach to measuring the sense of tonality was informed by both psychological theory and evidence (for reviews see Bigand, 1993; Dowling & Harwood, 1986; Frances, 1958/1988; Krumhansl, 1990a) and by music theory (Lerdahl, 1988; Lerdahl & Jackendoff, 1983; Meyer, 1956; Piston, 1987). In the Western tonal-harmonic idiom, tonality is defined in terms of the hierarchical organization of pitch relations. Hierarchical pitch relations exist at three interrelated levels, that of tone, chord, and key. Pitch relations are described in terms of stability. In any given musical context, some tones, chords, and keys are considered more stable or unstable than others.
The concept of the tonal hierarchy describes the relation among the single tones within a key. One single tone, the tonic, forms a reference point for all tones in the key. Each of the remaining tones is located in a hierarchical relation with the tonic. Similarly, the chords within a key may be described in terms of a harmonic hierarchy. The chord built on the tonic note, the tonic chord, forms a reference point for all other chords in the key. Individual tones and chords reference the tonic in an ongoing fashion as music unfolds. The listener is presumed to abstract a sense of tonality from the individual melodic and harmonic cues within the overall context of the music.
Sensitivity to the hierarchy of pitch relations, along with other mental operations thought to reflect tonality, taps the resources of a complex system. The different tests administered in the present study sought converging evidence through addressing different aspects of conventional notions of tonality. A variety of methods and contexts was employed that included both melodic and harmonic structures. Three assumptions were involved. First, it was assumed that prototypic instances of tonality could be created (Cuddy, 1991; Jones, 1981, 1982, 1991; Krumhansl, 1990a). Second, it was assumed that four or five distinct levels (at least) could be discriminated along a tonality continuum (Cuddy, Cohen, & Mewhort, 1981; see also Croonen, 1994; Dowling, 1991). The sense of tonality does not merely involve a categorical distinction between tonality and absence of tonality. Third, it was assumed that differentiation among the levels requires access to tonal knowledge.
The bulk of research findings suggests that the tonal hierarchy construct is psychologically valid. Listeners are able to abstract underlying or global aspects of music in spite of surface variations, distortions, or transformations of various kinds. No previous studies, however, sought to validate different tests of tonality against each other.
Probe-tone tests. Three tests implemented the probe-tone method (Krumhansl, 1990a; Krumhansl & Kessler, 1982; Krumhansl & Shepard, 1979) to assess recovery of the tonal hierarchy for three different contexts. Each was a key-defining context, according to traditional music theory. On each trial, the context was followed by a probe tone – one of the 12 chromatic scale tones, randomly selected. Each probe tone was rated on a 10-point scale for degree of goodness-of-fit of the probe tone to the context. The tonal hierarchy is said to be recovered if highest ratings are given to the tonic note, next highest ratings to the other notes of the tonic triad, lower ratings to other scale notes, and lowest ratings to the remaining nonscale notes. These levels are coded A, B, C, and D in Table 1.
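The recovery criterion just described can be sketched as a simple scoring rule. The following is a minimal illustration for a major-key context with tonic C; the function names and example values are ours, not the study's:

```python
# Scoring sketch for one participant's probe-tone profile. Indices are
# pitch classes 0-11 relative to the tonic; the A-D groupings follow the
# hierarchy levels described in the text (Table 1) for a major key.
LEVELS = {
    "A": [0],               # tonic
    "B": [4, 7],            # remaining tones of the tonic triad
    "C": [2, 5, 9, 11],     # other scale tones
    "D": [1, 3, 6, 8, 10],  # nonscale (chromatic) tones
}

def mean_level_ratings(ratings):
    """ratings: 12 mean probe-tone ratings indexed by pitch class
    relative to the tonic. Returns the mean rating at each level."""
    return {lvl: sum(ratings[pc] for pc in pcs) / len(pcs)
            for lvl, pcs in LEVELS.items()}

def hierarchy_recovered(ratings):
    """True if mean ratings are strictly ordered A > B > C > D."""
    m = mean_level_ratings(ratings)
    return m["A"] > m["B"] > m["C"] > m["D"]
```

For a minor-key context the level B tones would instead be those of the minor tonic triad (pitch classes 3 and 7).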
For the Probe-tone Cadences tests, contexts were major (IV-V-I) and minor (iv-V-i) chord cadences in the keys of C major and c minor. Each probe tone was presented twice, for a total of 24 presentations of the chord cadence and probe tone for each major and minor cadence. The duration of each chord in the cadence and the probe tone was 1.1 s. The cadence and the probe tone were separated by a pause of .5 s. The context for the Probe-tone Melody test was the “March of King Laois” (as transcribed and rhythmically simplified for experimental purposes by Johnston, 1985; see also Cuddy, 1993). The melody, a 16th-century Celtic tune characterized by simple elaborations of the tonic triad, was chosen because it is highly tonal according to music-theoretic descriptions and because it was not likely to be familiar to any participant. The melody contains 60 notes of equal value; the duration of each note in the melody was .2 s. The duration of the probe tone and the pause between the end of the melody and the probe tone was 1 s. See Appendix A for the music notation of the Probe-tone Melody.
Melody completion ratings. The fourth and fifth tests required participants to rate the last note of a tonal melody on how well it completed the melody. Several studies have used ratings of goodness and completeness of phrase endings to assess tonal knowledge (e.g., Abe & Hoshino, 1990; Boltz, 1989).
For the Familiar Melodies test, six melodies were selected with the restrictions that each melody be: (a) probably within the accessible cultural repertoire of so-called “familiar” melodies; (b) in a major mode; (c) in 4/4 time; (d) four bars in length; (e) typically reproduced at a “moderate” tempo; and (f) ended on the tonic note. The melodies selected were Oh Susannah, Joy to the World, Early One Morning, London Bridge, Frere Jacques, and Good Night Ladies. The melodies contained an average of 21 notes. The duration of each quarter note in each melody was .5 s.
Each melody ending was varied according to a five-level tonal-atonal continuum. The levels, from A, the most tonal to E, the most atonal, are listed in Table 1. According to Table 1, therefore, six melodies ended with level A, the tonic, and 24 variations on these melodies did not end on the tonic. Twelve of the 24 variations maintained the original contour of the tonic ending, and 12 violated the original contour. Participants were asked to rate, on a 10-point scale, how well the last note of the melody completed the melody. See Appendix A for an example of a familiar melody with five possible endings.
For the Novel Melodies test, six melodies were constructed that were similar in melodic structure to the melodies of the Familiar Melodies test. There were two stylistic differences. First, the rhythmic structure of the novel melodies was somewhat simpler than that of the familiar melodies. Second, for novel melodies, the melody ending for each of the first three levels of tonality (A, B, and C; see Table 1) was sounded equally often, on average, within the melody. The total duration of melody notes corresponding to the ending note was, on average, the same: 10.5 sixteenth-note beats, or 1.33 s. (Level D and E endings, of course, never occurred in the melody.) For familiar melodies, on the other hand, the total duration of melody notes corresponding to the ending note decreased across levels A, B, and C. In other respects, the novel melodies were similar to the familiar melodies. See Appendix A for an example of a novel melody with five possible endings.
Rating tonal structure of melodies. A sixth test involved rating the tonal structure of unfamiliar melodic sequences. Melodic changes, however, were not limited to the final note. The systematic addition of nonkey tones to a tonal sequence resulted in melodic sequences with increasingly ambiguous tonal centres. Previous studies have shown that listeners are reliably able to track the perceived degree of syntactic completeness of such melodies (Cuddy et al., 1981; Cuddy & Lyons, 1981).
For the Tonal/Atonal Melodies test, six tonal melodies were constructed with the restrictions that each melody be: (a) in a major mode; (b) in 4/4 time; and (c) four bars long. The duration of each quarter note was .5 s. Each of the six melodies was then used as a prototype and varied according to a five-level tonal-atonal continuum. The levels, A to E, are listed in Table 1. Participants were asked to rate, on a 10-point scale, how good or well-formed each melody sounded. See Appendix A for an example of five levels for one melody prototype.
Rating tonal structure of chord progressions. The seventh tonality test involved rating the perceived tonal structure of chord progressions. Studies have demonstrated that variations in the properties of chord sequences influence recognition memory (Bharucha & Krumhansl, 1983; Krumhansl, Bharucha, & Castellano, 1982; Krumhansl & Castellano, 1983), prototypicality ratings (Smith & Melara, 1990), and perceptions of modulation (Cuddy & Thompson, 1992; Krumhansl & Kessler, 1982; Thompson & Cuddy, 1989).
For the Chord Progressions test, 25 progressions of eight chords were constructed. The 25 progressions represented five levels of tonality, with five examples of each level. Chords used in the progressions were major, minor, major seventh, minor seventh, dominant seventh, augmented, diminished, and diminished seventh. The duration of each chord was 1.2 s. The levels of tonality, A to E, are listed in Table 1. Participants were asked to rate, on a 10-point scale, how well the eight chords in the sequence followed one another in an expected manner. See Appendix A for an example of each level of the chord progressions.

Test of pitch memory. Memory for pitch is considered a “basic ingredient of musical ability” (Shuter-Dyson & Gabriel, 1981, p. 239). Memory for pitch may be assessed by requiring participants to judge whether a tone was or was not part of a sequence of tones (Dewar, Cuddy, & Mewhort, 1977), to judge the relation between the first and last tones of a sequence when other tones or silence intervenes (e.g., Deutsch, 1970, 1972, 1978; Frankland & Cohen, 1996; Krumhansl, 1979), to note scalar and nonscalar changes in pairs of melodies (Bartlett & Dowling, 1980; Dowling & Bartlett, 1981), or to note changes in short melodic fragments either tested in isolation or in the context of additional preceding and following sequences of a tonal or atonal nature (Cuddy, Cohen, & Miller, 1979). The present study had participants judge whether a tone presented in isolation was part of a preceding sequence of tones.
Seventy-two trials were constructed, each consisting of a sequence of tones. The duration of each tone was .6 s. Each sequence was followed by a pause of .9 s, followed by a test tone of .6 s. The first eight trials consisted of one tone followed by a test tone, the next eight consisted of two tones followed by a test tone, the next eight consisted of three tones followed by a test tone, and so on up to eight trials of nine tones followed by a test tone.
The Pitch Memory test was constructed with a deliberate effort to avoid or violate tonal rules. It was intended to assess memory for tonally unrelated pitches and, as such, to provide a musical counterpart for the Digit Span test (below) which assessed memory for unrelated digits. Several steps were taken. First, sequences of tones were randomly selected from the 12 chromatic tones within one octave. Next, sequences which predominantly contained notes from a single major or minor key, contained major or minor triads, or contained scalar sequences were discarded. Third, the first author played and listened to the remaining sequences; those sequences that conveyed a musical impression of tonality to the author were discarded.
Finally, a key-finding algorithm (Krumhansl & Schmuckler, cited in Krumhansl, 1990a) was applied post-hoc to assess the tonal strength of the pitch distribution of the 72 sequences. Correlations were obtained between the distribution of pitches in each sequence and the standardized tonal hierarchy for each of the 24 major and minor keys. The standardized tonal hierarchies for C major and c minor were reported in Krumhansl and Kessler (1982), and the set of probe-tone values is given in Krumhansl (1990a, p. 30). Values for each of the other keys were obtained by orienting the set to each of the different tonic notes. For each sequence the highest correlation so obtained was selected to represent the tonal strength of the distribution. The average of these correlations was .54; the average for each sequence length ranged from .45 to .63, with no relation between length of sequence and size of correlation. A correlation of .66 is required for significance at the .01 level (one-tailed test).
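The key-finding computation can be sketched as follows. This is our reconstruction, not the original implementation; the profile values are the published Krumhansl-Kessler ratings, and the helper names are ours:

```python
# Sketch of the key-finding step: correlate the pitch-class distribution
# of a sequence with the Krumhansl-Kessler major and minor profiles
# transposed to all 24 keys, and take the highest correlation as the
# tonal strength of the sequence.
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
         2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
         2.54, 4.75, 3.98, 2.69, 3.34, 3.17]

def _pearson(xs, ys):
    """Pearson correlation of two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def tonal_strength(sequence):
    """sequence: pitch classes (0-11). Returns the highest correlation
    between the sequence's pitch-class counts and the 24 key profiles."""
    counts = [sequence.count(pc) for pc in range(12)]
    best = -1.0
    for profile in (MAJOR, MINOR):
        for tonic in range(12):
            # Rotate so the profile's tonic aligns with pitch class `tonic`.
            shifted = [profile[(pc - tonic) % 12] for pc in range(12)]
            best = max(best, _pearson(counts, shifted))
    return best
```

On this sketch, a C major scale yields a tonal strength well above the .66 significance cutoff quoted above, while the random chromatic sequences used in the test were constructed to fall below it.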
For four randomly chosen sequences within each group of eight trials, the test tone was one of the preceding tones; except for the one- and two-tone sequences, the test tone was never the first or last note of the sequence. For the four remaining trials, the test tone was a tone within the contour boundaries of the preceding sequence but not occurring in the sequence. A single random order within each sequence length was constructed in an attempt to model the test on the procedures for the Digit Span subtest of the WAIS-R (Wechsler, 1981).
Participants were asked to respond ‘Yes’ if the test tone following the sequence of tones was heard within the preceding sequence, and ‘No’ if the test tone was not heard within the preceding sequence.
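The trial-construction rules above can be reconstructed in outline as follows (a sketch under our assumptions; the function names, the 0-11 semitone coding, and the handling of one-tone "No" trials are ours):

```python
import random

# Reconstruction of the trial-construction rules for the Pitch Memory
# test. Tones are the 12 chromatic tones within one octave, coded 0-11.

def make_trial(length, target_present, rng=random):
    """Return (sequence, test_tone) for one trial."""
    sequence = [rng.randrange(12) for _ in range(length)]
    if target_present:
        # The test tone occurs in the sequence; for sequences of three
        # or more tones it is never the first or last note.
        pool = sequence if length <= 2 else sequence[1:-1]
        return sequence, rng.choice(pool)
    if length == 1:
        # A one-tone sequence cannot satisfy the contour rule below; as
        # an assumption, any other chromatic tone is used instead.
        return sequence, rng.choice([t for t in range(12) if t != sequence[0]])
    # The test tone lies within the contour boundaries of the sequence
    # but does not occur in it.
    lo, hi = min(sequence), max(sequence)
    candidates = [t for t in range(lo, hi + 1) if t not in sequence]
    if not candidates:  # degenerate sequence with no gap; resample
        return make_trial(length, target_present, rng)
    return sequence, rng.choice(candidates)
```

The tonality screening steps described earlier (discarding key-implying, triadic, or scalar sequences) would be applied on top of this sampling.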
Stage 2 – Nonmusic Tests and Evaluation of Factors
Nonmusic tests. Nonmusic tests were standardized psychological tests specifically designed to assess cognitive skills. The tests are listed in Table 2. They are widely available and in common use in a variety of clinical and experimental situations. The tests were selected to assess both cognitive abstraction and nonabstraction abilities (column 1 of Table 2). As well, they were selected to provide both auditory and nonauditory contexts and both linguistic and nonlinguistic contexts (columns 2 and 3 of Table 2, respectively).
The first three tests listed were developed to assess abstraction abilities. The Wisconsin Card Sorting Test (Heaton, 1981) was first introduced by Berg (1948) as an objective test of abstraction and “shift of set.” Participants are required to sort cards of various forms, colours, and numbers, according to shifting criterion principles. Abstraction ability is required to discern the correct sorting principles based on information presented on the cards and information given by the examiner as to whether each sort was correct or incorrect.
The Abstraction subtest of the Shipley Institute of Living Scale is described as requiring the participant to “induce some principle common to a given series of components and then to demonstrate [his or her] understanding of this principle by continuing the series” (Shipley, 1953, p. 752). The components in the subtest include letters, numbers, and words.
The Similarities subtest of the WAIS-R (Wechsler, 1981) consists of 14 items which assess logical abstract reasoning or concept formation. The items require test-takers to recognize the relation between two objects or ideas.

The three nonabstraction tests were tests of vocabulary, attention, and memory. The Vocabulary subtest from the Shipley Institute of Living Scale is a measure of vocabulary knowledge. In addition, the Total score, a combination of the Vocabulary and Abstraction subtest scores, may be used to assess general intellectual functioning and to detect cognitive impairment (Heaton, 1981). The Total score can also be used to obtain a reliable estimate of the WAIS-R overall IQ (Zachary, 1986).
The Digit Span subtest of the Wechsler Adult Intelligence Scale-Revised (WAIS-R) was designed as a measure of attention/concentration/freedom from distractibility and of immediate auditory memory (Wechsler, 1981; Zimmerman & Woo-Sam, 1973). Digit Span appears to be a valid measure of short-term auditory memory and attention, but is not considered a valid indicator of other types of memory skills (Zimmerman & Woo-Sam, 1973).
In contrast to the Digit Span test, the Figural Memory subtest of the Wechsler Memory Scale-Revised (Wechsler, 1987) is a nonverbal measure of short-term memory that tests ability to remember nonrepresentational designs.
Evaluation of factors. Principal component analysis, followed by model testing analyses, was conducted on the full data set (performance for each participant on eight music tests and six nonmusic tests). Various possible outcomes were evaluated. One was that if the abstraction of the tonal hierarchy required for the tonality tests shared resources with other processes of abstraction, the factor structure should then isolate abstraction abilities (music and nonmusic) from other abilities. Descriptions of tonal organization include the general cognitive principles of hierarchical ordering, categorization, classification, and prototypicality (Krumhansl, 1990a). Thus it is possible that general mechanisms are shared.
Examples of other outcomes evaluated were that the factor structure would isolate all music tests from nonmusic tests, auditory contexts from nonauditory contexts, and/or linguistic contexts from nonlinguistic contexts (see Carroll, 1993; Gardner, 1983; Shuter-Dyson & Gabriel, 1981; Sternberg & Powell, 1982; Waterhouse, 1988). Yet another possible outcome was that all tests would primarily engage general intelligence or would reflect test-taking ability. In that case, no factor structure beyond a single factor should then emerge.
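The logic of these alternatives can be illustrated with simulated data. The sketch below is not the study's analysis, and every number in it is invented: if music and nonmusic scores load on separate latent factors, within-domain correlations exceed between-domain correlations, which is the pattern a principal component analysis resolves into more than one factor.

```python
import random

# Illustration only: two independent latent factors generate 8 "music"
# and 6 "nonmusic" observed variables for 100 simulated participants.
# Loadings and noise levels are arbitrary choices.
rng = random.Random(1)
n = 100
music_f = [rng.gauss(0, 1) for _ in range(n)]
nonmusic_f = [rng.gauss(0, 1) for _ in range(n)]

def observed(factor, loading=0.8, noise=0.6):
    """One observed variable: loading * factor + Gaussian noise."""
    return [loading * f + rng.gauss(0, noise) for f in factor]

music_vars = [observed(music_f) for _ in range(8)]
nonmusic_vars = [observed(nonmusic_f) for _ in range(6)]

def _pearson(xs, ys):
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def mean_r(block_a, block_b):
    """Mean correlation across all distinct variable pairs."""
    rs = [_pearson(x, y) for x in block_a for y in block_b if x is not y]
    return sum(rs) / len(rs)

within_music = mean_r(music_vars, music_vars)
between = mean_r(music_vars, nonmusic_vars)
# With separate latent factors, within_music is high and between is
# near zero; a single shared factor would make the two comparable.
```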
One hundred adults served as voluntary participants in this study, 41 males and 59 females. They were recruited both from the university community (from campus posters and a participant pool) and the community at large (through newspaper advertisements). All were able to speak and read written English, and all claimed normal hearing.
The age range of participants was 18-40 years (mean = 26.8, SD = 6.2 years). The range of years of formal education was 7-22 years (mean = 15.6, SD = 2.7). Nineteen participants had 12 years or fewer of formal education, 66 had 13-16 years, and 15 had 18 or more years.
Sixty-one participants had little or no music training, defined as no classroom or private music lessons after elementary school and/or one year or less of secondary school band, and no current engagement in music instruction or performance activities. Twenty-two participants had moderate training, defined as classroom or private music lessons during elementary school and/or two or more years of classroom or private lessons during secondary school plus current engagement in instruction or performance activities (including choir singing, or playing and/or singing in a band or other ensemble as a hobby). Seventeen participants were highly trained, defined as achievement of a university degree or college diploma in music, or present engagement in music performance or instruction at a semi-professional or professional level.
Music Test Procedures
Stimuli for melodic sequences were synthesized musical timbres, created by a Yamaha TX81Z synthesizer. The synthesizer was controlled by an Atari 1040ST computer running “Notator” music processing software (Lengeling, Adam, & Schupp, 1990). An exception was the Probe-tone Melody test for which the synthesizer was controlled by a Zenith Z-248 computer running “DX-Score” software (Gross, 1981). The synthesizer settings were factory preset timbres, and differed among tests to provide variety. Synthesizer settings were: Probe-tone Melody – Wood Piano (A15); Familiar and Novel Melodies – Pan Floot (B12); Tonal/Atonal Melodies – Flute (B11); and Pitch Memory – New Electro (A12).
Stimuli for probe tones and harmonic sequences (cadences and chord progressions) were “circular” tones (Shepard, 1964) and “circular” chords (Krumhansl, Bharucha, & Kessler, 1982), respectively. They were created on a Yamaha TX802 synthesizer controlled by an Atari 1040ST computer running “Notator” music processing software. Circular tones and chords consisted of 6 and 15 sine-wave components, respectively, with rise and decay times of 20 ms each. The components were distributed over a six-octave range under an amplitude envelope that approached hearing threshold at the high and low ends of the range. This procedure results in tones and chords that sound organ-like and do not have a well-defined pitch height. The purpose of this method of construction is to increase the likelihood that listener judgments will be made on the basis of tone or chord function within the tonal scale rather than on pitch height.
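The construction described above can be sketched in code. The sample rate, the raised-cosine envelope shape, and the register (lowest C at 65.41 Hz) are our assumptions, not details reported in the text:

```python
import math

# Sketch of "circular" (Shepard) tone construction: sine components
# spaced an octave apart over a six-octave range, under a bell-shaped
# loudness envelope that falls toward hearing threshold at both ends,
# with 20-ms rise and decay ramps as described.
RATE = 44100  # assumed sample rate

def circular_tone(pitch_class, dur=1.1, low_c=65.41, octaves=6):
    """pitch_class: semitones above C (0-11). Returns mono float samples."""
    n = int(RATE * dur)
    components = []
    for k in range(octaves):
        freq = low_c * (2 ** (pitch_class / 12)) * (2 ** k)
        pos = (k + pitch_class / 12) / octaves   # 0..1 across the range
        amp = 0.5 * (1 - math.cos(2 * math.pi * pos))  # raised cosine
        components.append((freq, amp))
    samples = []
    for i in range(n):
        t = i / RATE
        val = sum(a * math.sin(2 * math.pi * f * t) for f, a in components)
        samples.append(val / octaves)
    # 20-ms linear rise and decay ramps, per the description above.
    ramp = int(0.02 * RATE)
    for i in range(min(ramp, n)):
        g = i / ramp
        samples[i] *= g
        samples[-1 - i] *= g
    return samples
```

Because the loudest components sit in the middle of the six-octave span and the extremes fade out, octave placement is ambiguous and judgments tend to rest on pitch class (tone or chord function) rather than pitch height, which is the stated purpose of the construction.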
All trials were recorded on Sony UX-S60 audiocassettes with an Alpine AL-35 tape recorder. The order of trials was randomized, and, except for the Pitch Memory test, three different random orders were recorded for each test. Trials were separated on the tape by a silent gap of 4.5 s (Probe-tone Cadence tests) or 3 s (all other tests). In addition, for all tests other than the probe-tone tests, each trial was assigned at random to one of the 12 major keys. Practice trials were also recorded. For the probe-tone tests, practice trials were sampled from the test trials. For the remaining tests, practice trials consisted of materials similar, but not identical to, the test trials.
Music sequences were reproduced through the speakers of a Philips AW.7690/07 portable tape player at a comfortable loudness level, as determined by each participant (about 55 to 70 dB SPL).
For each music test, participants provided written responses. The rating scales were always oriented so that “10” was the high end of the scale, “1” the low. Participants were told that there were no time limits on their ratings for each trial of each test; they were instructed to use the pause button on the tape player if necessary, or to indicate to the experimenter that more time was needed than that which was provided by the silences between trials. No feedback was given following practice trials on this test or any subsequent music test, but instructions were clarified whenever necessary.
Nonmusic Test Procedures
Each participant was tested on each test listed in Table 2. Administration followed published test protocols. For the nonmusic tests, the Shipley Institute of Living Scale specified a ten-minute time limit for each subtest. None of the other nonmusic tests had time limits, and pacing was determined by each participant.
General Testing Procedures
All participants were tested in a quiet room. They were asked to read a written description of the study and to read and sign a consent form.
Data were collected from each participant in the following order. Demographic data, including age, gender, years of formal education completed, level of music training, and self-perceived level of musicality, were collected first. The music and nonmusic tests were presented next in an alternating fashion, beginning with a music test. The order of presentation of both the music tests and the nonmusic tests was independently randomized for each participant. Each participant was randomly assigned to one of the three random orderings of stimuli for the music tests with the exception of the Pitch Memory test which was constructed in only one order. Order of presentation of Probe-tone Major and Minor Cadence tests was counterbalanced across participants.
Each participant was verbally debriefed at the conclusion of the testing. Each testing session lasted approximately two hours.
STAGE 1 – RESULTS OF THE MUSIC TESTS
For the Probe-tone Major and Minor Cadence and Probe-tone Melody tests, mean ratings for each of the 12 probe tones were computed for each participant. For the Familiar, Novel, and Tonal/Atonal Melodies tests, and for the Chord Progressions test, mean ratings for each of the five levels on the tonal/atonal continuum were computed for each participant. For the Pitch Memory test, the number of correct responses (out of eight) for each of the nine sequence lengths was calculated for each participant, as well as the total number of correct responses out of a possible total of 72.
Probe-tone tests. Mean ratings for the probe-tone tests are given in Figure 1 for the entire sample of 100 participants and for each level of music training. For the Probe-tone Major Cadence, Minor Cadence, and Melody tests, overall mean ratings were highest for the tonic note. The third and fifth scale tones were rated next most highly, followed by the remaining diatonic notes. The chromatic notes were all rated lowest. Similar results were obtained for each of the three levels of music training.
Analyses of variance for the Major Cadence revealed significant main effects for probe tones, F(11, 1067) = 81.93, MSe = 2.26, p < .001.
Results of analyses of variance for Probe-tone Minor Cadence and Probe-tone Melody tests were similar to the results described above. (Full details are available in Steinke, 1992.) The correlations between the overall mean ratings in Figure 1 and the standardized tonal hierarchies reported in Krumhansl (1990a) were .93 (Probe-tone Major Cadence and standardized C-major hierarchy), .98 (Probe-tone Minor Cadence and standardized C-minor hierarchy), and .96 (Probe-tone Melody and standardized C-major hierarchy). All correlations are significant beyond the .001 level (one-tailed t-test), df = 10.
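The one-tailed t-tests reported here follow from the standard conversion of a correlation coefficient to a t statistic. A minimal sketch (the function name is ours, not part of the original analysis):

```python
import math

def t_from_r(r, df):
    """t statistic for testing a correlation r against zero,
    with df = n - 2 (here n = 12 probe tones, so df = 10)."""
    return r * math.sqrt(df / (1.0 - r * r))

# The three reported correlations with the standardized tonal hierarchies:
for r in (0.93, 0.98, 0.96):
    print(round(t_from_r(r, 10), 1))  # 8.0, 15.6, 10.8
```

Each resulting t comfortably exceeds the one-tailed .001 criterion for 10 degrees of freedom, consistent with the significance levels reported above.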
Familiar Melodies and Novel Melodies tests. Mean ratings for Familiar and Novel Melodies tests are shown in Figure 2 (top two panels). Overall, listeners rated the Level A endings, consisting of the tonic note, the highest. The other four types of endings (3rd or 5th, other diatonic, close chromatic, distant chromatic) were rated progressively lower with Level E endings rated the lowest. This trend was also evident at each level of music training. No participant reported that any melody from the Familiar Melodies test was unfamiliar.
Analyses of variance revealed significant main effects for levels of melody ending for both Familiar Melodies, F(4, 388) = 738.81, MSe = 0.73, p < .001, and Novel Melodies.
Tonal/Atonal Melodies test. The results for Tonal/Atonal Melodies were similar to those for the Familiar and Novel Melodies and are presented in the lower left-hand panel of Figure 2. Level A melodies were rated highest and Level E melodies the lowest. Unlike the results for the previous tests mentioned, Level C melodies were rated about the same as Level B melodies. However, for participants in the highly trained group, ratings did decrease monotonically with music-theoretic levels.
Analyses of variance supported the conclusion that participants rated melodies significantly differently according to the tonality level of the melody, F(4, 388) = 262.00, MSe = 0.78, p < .001.
Pitch Memory test. Results of the Pitch Memory test are presented in Figure 3. The average number of correct identifications out of eight for each of the nine sequence lengths decreased from 7.9 for 1-note sequences to 4.4 for the 9-note sequences. Analysis of variance revealed significant effects of sequence length, F(8, 766) = 81.01, MSe = 1.35, p < .001.
The data were inspected for evidence that accuracy of identification was related to the size of the correlation between the distribution of pitches in the sequence and the standardized tonal hierarchy (see Method above). No reliable trends were found – an unsurprising result given that the test was designed to yield a range of low correlations.
STAGE 2 – RESULTS OF NONMUSIC TESTS AND EVALUATION OF FACTORS
Means, standard deviations, and ranges for all the nonmusic tests can be found on the right-hand side of Table 2.
Wisconsin Card Sorting Test. Scores on the Wisconsin Card Sorting Test represent the percentage of conceptual-level responses. The results are similar to normative scores for normal participants (Heaton, 1981).
Abstraction. Scores on the Abstraction subtest of the Shipley Institute of Living Scale are number of correct responses to twenty items, multiplied by two. Although the normative data of the Shipley Institute of Living Scale are based on a psychiatric population, the obtained means in Table 2 are similar to results obtained on normal populations of student nurses and hospital employees (Zachary, 1986).
Similarities (WAIS-R). Scores on the Similarities subtest of the WAIS-R represent the number of items answered correctly, with each item scored as 0, 1, or 2, depending on the quality of the response. These means reflect average scores based on the norms of the WAIS-R (Wechsler, 1981).
Vocabulary. Scores on the Vocabulary subtest of the Shipley Institute of Living Scale represent the total number correct of 40 vocabulary items. The obtained means are similar to results obtained on normal populations of student nurses and hospital employees (Zachary, 1986).
Digit Span (WAIS-R). Scores on the Digit Span subtest of the WAIS-R represent the total number of sequences recalled correctly. These means reflect average scores based on the norms of the WAIS-R (Wechsler, 1981).
Figural Memory (WMS-R). These scores represent the total number of figures correctly recalled, out of 10. The mean scores are slightly higher than the mean raw score reported in the standardization sample (Wechsler, 1987).
Principal Components Analysis
For the principal components analysis, each participant was assigned a single score for each music test. The score for tonality tests represented the correspondence between the participant’s ratings and the music-theoretic levels of tonality as defined in Table 1. For the probe-tone tests, rank-order correlations were calculated between obtained ratings and a quantified predictor for which levels A, B, C, and D were coded 4, 3, 2, and 1, respectively. For the remaining tests, rank-order correlations were calculated between obtained ratings and a quantified predictor for which levels A, B, C, D, and E were coded 5, 4, 3, 2, and 1, respectively. The mean rank-order correlation for each test is given in Table 3 for all participants and for each level of music training. All mean correlations are significantly different from zero (one-tailed t-test, p < .001).
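The per-participant rank-order (Spearman) scoring described above can be sketched as follows; the ratings shown and the helper names are illustrative, not the study's materials:

```python
def _ranks(values):
    """Ranks 1..n (the example data are tie-free)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def tonality_score(mean_ratings):
    """Spearman rank-order correlation between a participant's mean
    ratings (levels A, B, C, ... in order) and the quantified predictor
    (level A coded highest, descending by one per level)."""
    n = len(mean_ratings)
    predictor = list(range(n, 0, -1))  # e.g., A..E -> 5, 4, 3, 2, 1
    rx, ry = _ranks(mean_ratings), _ranks(predictor)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical participant whose ratings fall monotonically from A to E:
print(tonality_score([9.1, 7.4, 6.0, 4.2, 2.5]))  # 1.0
```

A participant whose mean ratings decrease strictly with the music-theoretic levels receives the maximum score of 1.0; departures from that ordering lower the score.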
All distributions of scores for the cognitive and music tests were checked for normality and outliers. Several tests were transformed to reduce skewness and kurtosis, and three outlying data points were brought to within three standard deviations of the mean. A square-root transformation was used on the data from the Abstraction subtest of the Shipley Institute of Living Scale, Similarities subtest of the WAIS-R, Probe-tone Melody test, Novel Melodies test, Familiar Melodies test, Tonal/Atonal Melodies test, and Chord Progressions test. A logarithmic transform was used on the data from the Wisconsin Card Sorting Test. Bivariate scatterplots were produced to check for bivariate normality.
The principal components analyses yielded one solution based on full correlations between the music and nonmusic tests and another based on partial correlations, with music training as the controlled variable. Each participant was assigned to one of three discrete levels of music training: little or none, moderate, or high (designated “1”, “2”, and “3”, respectively, and described in detail above). Correlations between the music and nonmusic tests were then obtained with music training partialled out.
A two-factor solution was chosen on the basis of factor loadings, a scree plot, and a parallel analysis procedure (Horn, 1965; Zwick & Velicer, 1986) which estimated eigenvalues for random-data correlation matrices (Longman, Cota, Holden, & Fekken, 1989). Table 4 presents the varimax rotated two-factor solution of the principal components analysis of the music and cognitive tests calculated on the full correlation matrix and with music training partialled out. Factor loadings indicate that for both full and partial correlation matrices, the music variables loaded on the first component and the cognitive variables loaded on the second component. However, the total variance accounted for was higher in the full correlation solution than in the partial correlation solution that controlled for the effects of music training.
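The parallel analysis procedure cited here (Horn, 1965) can be sketched as below, assuming NumPy is available. The percentile-based retention criterion is one common variant of the procedure, and all names are ours rather than the authors' code:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=1):
    """Horn's parallel analysis: retain components whose observed
    eigenvalues exceed the given percentile of eigenvalues obtained
    from random-normal data of the same order (n cases x p variables)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    threshold = np.percentile(random_eigs, percentile, axis=0)
    return int(np.sum(observed > threshold))
```

On synthetic data generated from two latent factors, the procedure retains two components, paralleling the two-factor decision reached for the present data.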
Model testing analyses were carried out to assess the congruence of the obtained factor loadings of the cognitive and music variables with each of eight hypothesized models. Analyses were conducted on both full and partial correlation matrices. Model testing involved: (a) extraction of as many components as indicated by a particular model; (b) orthogonal procrustean rotation of observed component loadings to a hypothesis matrix representing the model; and (c) computation of a coefficient of congruence (Harman, 1976) between each rotated component’s loadings and its corresponding model component’s loadings.
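Steps (b) and (c) can be sketched as follows, assuming NumPy. The SVD route to the orthogonal procrustean rotation and the form of the congruence coefficient are standard, but the function is our illustration, not the authors' code:

```python
import numpy as np

def procrustes_congruence(loadings, target):
    """Rotate `loadings` orthogonally toward the hypothesis matrix
    `target` (least-squares orthogonal Procrustes fit via SVD), then
    return the coefficient of congruence for each rotated component
    against its corresponding target column."""
    u, _, vt = np.linalg.svd(loadings.T @ target)
    rotated = loadings @ (u @ vt)  # best orthogonal fit to the target
    num = (rotated * target).sum(axis=0)
    den = np.sqrt((rotated ** 2).sum(axis=0) * (target ** 2).sum(axis=0))
    return rotated, num / den
```

When the observed loadings are an exact orthogonal rotation of the hypothesized pattern, each congruence coefficient equals 1; lower values index departure from the model.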
The first model tested (Model 1) was a basic “music versus nonmusic” model. This model reflected the factor loadings obtained from the principal components analysis described above. Other models were assessed to further verify the validity of the factor structure that emerged from the principal components analysis, and to address the possibility that a number of other models might provide a better account of the data.
Models 2, 3, and 4 addressed auditory, linguistic, and abstraction characteristics of the nonmusic tests, summarized in Table 1. For Model 2, “auditory versus nonauditory,” all of the music tests plus the Similarities and Digit Span tests were loaded on the auditory factor because they require auditory processing. All other tests were loaded on the other factor. Model 3 considered that the variables would split on a “linguistic/nonlinguistic” dimension. Nonlinguistic tests did not require spoken or written language and included all music tests plus two nonmusic tests, the Wisconsin Card Sorting Test and Figural Memory. Linguistic tests included Similarities, Digit Span, and the Abstraction and Vocabulary subtests of the Shipley Institute of Living Scale. Model 4, “abstraction versus nonabstraction,” considered that both sense of tonality and nonmusic abstraction involve the same underlying abstraction ability. This model therefore suggested that both the tonality and nonmusic abstraction tests would load on one factor, while the nonabstraction tests (pitch memory and nonmusic nonabstraction) would load on a second factor.
Models 5, 6, and 7 explored still other partitions of the data – “probe-tone versus nonprobe-tone versus nonmusic” (Model 5), “music versus nonmusic abstraction versus nonmusic” (Model 6); and a four-factor solution, “probe-tone music versus nonprobe-tone music versus abstraction nonmusic versus nonabstraction nonmusic” (Model 7). Finally, Model 8, “general factor,” considered that all tests would load on a single factor.
The congruence of each of these eight hypothesized target models with the obtained rotated factor loadings is presented in the left-hand columns of Table 5. Model 1, “music vs nonmusic,” achieved the highest level of congruence for the full correlation and for the partial correlation matrices. Next strongest support emerged for Model 8, the “general factor” model. To determine whether the targeted rotations had capitalized on chance, the magnitude of the congruences was evaluated by undertaking 100 parallel analyses on data matrices of equal order but comprising random normal deviates. The upper 95% confidence interval for the congruences of corresponding random data sets, shown in the right-hand columns of Table 5, indicates that the congruences for the observed data were not likely to have occurred by chance.
SUMMARY OF STAGES 1 AND 2
The first stage of this study sought convergent evidence for the validity of measures of sensitivity to tonal structure, collected from the same sample of participants. Several music tests employing different methodologies and response demands all successfully assessed participants’ sense of tonality against quantified predictors derived from music theory. The second stage of the study involved principal components analysis and model testing. It revealed that the tonality tests, along with a test of pitch memory, loaded on a different factor from tests of nonmusic cognitive skills, even when the contribution of levels of music training was statistically removed. According to the two-factor solution, the sense of tonality was not associated with the ability to abstract in several nonmusic tests, nor did the data split along auditory/nonauditory or linguistic/nonlinguistic lines. A general intelligence model was the next-best fitting model after the music/nonmusic model.
Stage 3 – The Amusic Participant
The aim of Stage 3 was to examine data from a single amusic participant, C.N., to determine whether the selective sparing and deficit in C.N.’s abilities were convergent with the music/nonmusic dissociation found for the general sample. McCloskey (1993), while cautioning about the extent to which generalizations can be made from neurologically compromised to normal functioning, nevertheless values “bring[ing] to bear data from multiple single-patient studies in formulating and evaluating theories” (p. 728). Indeed, it has been argued that neuropsychological data must be explored only with regard to an explicit or implicit model of the normal cognitive system (Peretz, 1993b).
C.N. was referred to us by Isabelle Peretz, University of Montreal. C.N. was a 40-year-old woman with a pure amusic disorder following successive brain surgeries to clip aneurysms in her right (in 1986) and left (in 1987) middle cerebral arteries. Patel, Peretz, Tramo, and Labreque (1998) provide a lesion profile of C.N., including a CT scan image that revealed bilateral temporal lobe lesions. Primary auditory cortex was spared.
C.N. was right-handed, French-speaking and had 15 years of formal education. She had no musical training, but used to sing every day to her child.
After surgery, C.N. scored within the normal range on standardised tests of intelligence, speech comprehension, and speech expression, but complained exclusively of music-related symptoms. Peretz, Kolinsky, Tramo, Labreque, Hublet, Demeurisse, and Belleville (1994) demonstrated in C.N. an auditory dissociation between music and nonmusic stimuli: perception of tunes, prosody, and voice recognition was impaired, but perception of speech and environmental sounds was preserved. Peretz and Kolinsky (1993) also demonstrated in C.N. a dissociation between the processing of melodic and rhythmic information. The amusic disorder thus appeared to be one of atonalia.
In one session, C.N. was tested on six of the music tests from Stage 1 and one nonmusic test from Stage 2. A French speaker, Isabelle Peretz, translated test instructions and recorded C.N.’s responses. The order of presentation of music tests and the single nonmusic test was randomized for C.N., and she was randomly assigned to one of the three random orders of trials for each of the tonality measures. Music sequences were reproduced by an Aiwa XK-009 Excelia cassette player through Epos speakers.
The single nonmusic test was the Wisconsin Card Sorting Test. Scores from a previous administration (in French) of the Similarities and Digit Span subtests of the WAIS-R, and the Figural Memory subtest of the Wechsler Memory Scale-Revised were also available (Peretz et al., 1994). Published test protocols were followed in administering each of these tests. The Vocabulary and Abstraction subtests of the Shipley Institute of Living Scale were not administered to C.N. because they were not available in French.
The data for six control participants were selected from the data obtained in Stages 1 and 2. These participants were matched to C.N. for age, sex, handedness, years of formal education, and level of music training. None, however, was French-speaking.
RESULTS AND DISCUSSION
C.N.’s scores on the music and nonmusic tests, and scores for the matched controls (mean and range), are presented in Table 6. Although it is difficult to interpret the results of any single test because of the problem of reliability of a single score, it is clear that there is an overall pattern in the data. For all music tests, C.N.’s scores were consistently lower than the lowest score obtained from control participants. For the nonmusic tests, however, C.N.’s scores were comparable to the scores of the control participants. These nonmusic test scores are also consistent with normative data for the tests. The results suggest that the selective loss experienced by C.N. is convergent with the statistical solution obtained for the general sample.
Three general findings address the purposes of this study. First, sensitivity to five levels of tonal structure was demonstrated for a variety of test measures and participants’ musical backgrounds. The data support Krumhansl’s (1990b) defence of the reliability and validity of the probe-tone method and, in addition, provide evidence that sensitivity to levels of tonality in each test converged with music-theoretic descriptions of levels of tonality. The data illustrate a trend for more highly trained participants to rate the more tonal stimuli higher and the less tonal stimuli lower than the other participants. For each level of music training, however, sensitivity to tonality was found for each tonality test. This finding documents the sensitivity of the musical novice to tonal structure (see also Cuddy & Badertscher, 1987) thus adding information about a population under-represented in music science (Smith, 1997). Moreover, it suggests that participants shared a common representation of tonality independent of the level of music training and the type of musical context.
Second, factor analyses and model testing of the data collected from the general sample revealed that the music variables reflecting sensitivity to tonal structure and pitch memory dissociated from variables reflecting nonmusic cognitive skills. Results implicating dissociation were found both when the contribution of levels of music training was included and when it was statistically removed from the analyses. A general intelligence model was the next best-fitting model over other candidate models in which alternative partitions of the data were evaluated.
Third, the performance of a neurologically compromised participant, C.N., revealed a pattern of selective loss with respect to controls that was consistent with the two-factor (music vs nonmusic) solution for the general sample. The results verified and extended earlier findings (Peretz et al., 1994) to new tonality tests. C.N. demonstrated a consistent loss of a sense of tonality with preservation of nonmusic cognitive abilities.
The music factor isolated in this study involves the processing of pitch and pitch relations. This result may be considered in the light of the proposal by Peretz and Morais (1989) that tonal encoding of musical pitch fulfils many of the properties of modularity as proposed by Fodor (1983). As noted at the outset, music shares many characteristics with other cognitive processes, but the processing of tonality does not appear to share features with other domains (Patel & Peretz, 1997). Evidence from the present study supports the view that tasks involving processes of pitch abstraction and categorization operate within neurally specialized subsystems. While it has been observed that categorization and classification are basic to all intellectual activities (Estes, 1994; Repp, 1991), the dissociations observed in the present study suggest that categorization ability is task-specific, and does not proceed from a more generalized ability, as proposed by Ashby (1992), or Anderson (1983). Tonal encoding, therefore, may be a task-specific example of categorization. One of the present results may further address the nature of the proposed module. Pitch memory was associated with sensitivity to tonal structure, despite the fact that the Pitch Memory test was constructed to avoid tonal conventions. Possibly, despite the efforts to construct nontonal materials, the Pitch Memory test nevertheless engaged tonal knowledge. On the other hand, the Pitch Memory test may have involved low-level categorization that assigned pitches along a continuous sensory dimension to discrete steps of the chromatic scale. In the latter case, the module isolated by the factor solution may be a perceptual mechanism at the “front-end” of tonal processing. We discuss these two possibilities in turn.
The first possibility is that, despite the nontonal construction of the pitch sequences, the Pitch Memory test did engage participants’ tonal knowledge. Participants may have attempted to assimilate the nontonal information to a tonal schema, hearing the sequences as tonal melodies with “wrong notes.” Such a possibility, however, is not strongly supported by the pitch memory studies cited earlier. When tonal and nontonal stimulus conditions are compared, large differences in performance accuracy are consistently reported with significantly poorer performance for nontonal materials. If tonal knowledge is engaged as a strategy to encode and remember nontonal materials, it is not very effective.
Participants may, however, have differed in the degree to which a tonal strategy was applied. Substantial individual differences were reported by Krumhansl, Sandell, and Sargeant (1987) in a probe-tone study evaluating responses to excerpts of serial (twelve-tone) music. Among the differences revealed by analyses of probe-tone ratings was the extent to which tonal (key) implications of the excerpt were present in the ratings. Thus, participants in the present study may have varied in their attempts to assimilate the nontonal sequences to a tonal framework. What is not yet clear, however, is that this kind of assimilation has a facilitating effect on performance.
The second possibility is that the Pitch Memory tests and the tonality tests shared low-level processes of categorization and a common sensitivity to the distribution of pitch categories. The common sensitivity reflects an attunement to the regularities in the auditory environment (Bregman, 1990, 1993). Krumhansl (1987, 1990a) has provided statistical evidence of the close correspondence between the prominence of tones in the tonal hierarchy and the frequency with which these tones are sounded in music. Oram and Cuddy (1995; see also Cuddy, 1997) demonstrated that for nontonal musical contexts, listeners use the surface properties of the pitch distribution – the frequency with which tones are sounded – to construct a hierarchical organization of pitch structure. Moreover, responsiveness to the surface properties superseded assimilation to the tonal hierarchy.
It is important to note that this second possibility does not imply that all tests merely reflected rote memory for the stimulus materials. Not all findings for tonality tests can be accounted for in terms of the surface properties of the musical context. Rather, the point is that high sensitivity to pitch distribution, which would be advantageous in a pitch memory task, would also facilitate the acquisition of the tonal knowledge and its application in the tonality tests. The more precisely one is able to preserve the distributional pitch properties of the tonal contexts, the more effectively tonal knowledge may be internalized and applied. Damage to this mechanism would lead to impairment of tonal processing, as in the case of C.N.
Finally, we briefly discuss the finding that under the two-factor solution the tonality tests dissociated from the nonmusic abstraction tests, despite the sharing of descriptive characteristics. Descriptions of tonal processing often include properties of abstraction such as prototypes, categorization, classification, and expectancy. Perhaps the description of shared characteristics is incorrect, or misleading. An alternative interpretation, however, is suggested by the model testing: The most competitive model for the two-factor music/nonmusic model was the general intelligence model. Under this general factor, music and nonmusic abstraction are associated not only with each other but also with all nonmusic cognitive skills tested. A general intelligence factor may account for how adequately each individual was able to respond to the cognitive demands of each of the tests (Carroll, 1993).
Consideration of two questions arising from our data suggests, first, a pitch-processing module of a specialized nature and, second, a general intelligence factor underwriting all test performance. The two accounts, modular and general intelligence, are compatible with the distinction between modules and thought in Fodor’s recent writing (Fodor, 1996): “Modules function to present the world to thought… But, of course, it’s really the thinking that makes our minds special. Somehow, given an appropriately parsed perceptual environment, we manage to figure out what’s going on in it and what we ought to do about it… this ‘figuring out’ is really a quite different kind of mental process from the stimulus analysis that modules perform… On my view, the phylogeny of cognition is the interpolation, first of modularized stimulus analysis and then of the mechanisms of thought…” (p. 23).
The limitations of the present study preclude conclusive statements about Fodor’s (1996) notions. The study focussed exclusively on musical pitch tests, to the exclusion of other musical domains such as rhythm, dynamics and timbre. The nonmusic tests, though selected for validity and reliability, were restricted in number. The general sample, though drawn from a population more diverse than typical studies in this area, was relatively healthy, young adult, and educated.
The idea offered here for further testing, nevertheless, is that the processing of discrete musical pitches engages a low-level, domain-specific process, one that attunes to pitch regularities. Such a proposal is compatible with Deliège (1995; see also Melen & Deliège, 1995), who suggests that a hierarchy of modules may be involved. Deliège (1995) argues that a higher-level module carrying out a process of “cue abstraction” is integral to the processes of classification and comparison necessary for music listening, and suggests that tonal encoding of pitch and rhythmic organization may be examples of cues abstracted by modules situated within such a higher-level module.
The domain-specific process we propose is not dependent on music training and may not reflect the cultural attributes of the Western, or for that matter, any particular tonal system. It therefore can be engaged panstylistically at early stages of musical apprehension. As well, however, the processing of musical pitch involves general properties of thought including those that reflect interactions with the cultural/linguistic environment. These general properties of thought share resources with multiple other skills and abilities.
The data for Stages 1 and 2 were presented in a thesis submitted in partial fulfilment of the MA degree by W.R. Steinke, under the supervision of L.L. Cuddy. Research was supported by a research grant to L.L.C. and a postgraduate fellowship to W.R.S. from the Natural Sciences and Engineering Research Council of Canada. We thank I. Peretz, University of Montreal, for support, encouragement, and the opportunity to test the late C.N., whose patience and co-operation contributed much.
We acknowledge continued support of C. L. Krumhansl and many productive discussions. Preliminary results were presented at meetings of the Acoustical Society of America (1993), the Society for Music Perception and Cognition (1993), and the International Conference for Music Perception and Cognition (1994). Correspondence may be sent to L.L. Cuddy, Department of Psychology, Queen’s University, Kingston, Ontario, K7L 3N6.
Abe, J., & Hoshino, E. (1990). Schema driven properties in melody cognition: Experiments on final tone extrapolation by music experts. Psychomusicology, 9, 161-172.
Anderson, J.R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
Ashby, F.G. (1992). Multidimensional models of categorization. In F.G. Ashby (Ed.), Multidimensional models of perception and cognition (pp. 449-483). Hillsdale, NJ: Erlbaum.
Bartlett, J.C., & Dowling, W.J. (1980). Recognition of transposed melodies: A key-distance effect in developmental perspective. Journal of Experimental Psychology: Human Perception and Performance, 6, 501-515.
Berg, E.A. (1948). A simple objective technique for measuring flexibility in thinking. Journal of General Psychology, 39, 15-22.
Besson, M. (1997). Electrophysiological studies of music processing. In I. Deliège & J. Sloboda (Eds.), Perception and cognition of music (pp. 217-250). Hove, East Sussex, UK: Taylor & Francis.
Bharucha, J.J., & Krumhansl, C.L. (1983). The representation of harmonic structure in music: Hierarchies of stability as a function of context. Cognition, 13, 63-102.
Bigand, E. (1993). Contributions of music to research on human auditory cognition. In S. McAdams & E. Bigand (Eds.), Thinking in sound: The cognitive psychology of human audition (pp. 231-277). Oxford: Clarendon Press/Oxford University Press.
Boltz, M. (1989). Perceiving the end: Effects of tonal relationships on melodic completion. Journal of Experimental Psychology: Human Perception and Performance, 15, 749-761.
Bregman, A.S. (1990). Auditory scene analysis: The perceptual organization of sound. Cambridge, MA: The MIT Press.
Bregman, A.S. (1993). Auditory scene analysis: Hearing in complex environments. In S. McAdams & E. Bigand (Eds.), Thinking in sound: The cognitive psychology of human audition (pp. 10-36). Oxford: Oxford University Press.
Carroll, J.B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge, UK: Cambridge University Press.
Croonen, W.L.M. (1994). Two ways of defining tonal strength and implications for recognition of tone series. Music Perception, 13, 109-119.
Cuddy, L.L. (1991). Melodic patterns and tonal structure: Converging evidence. Psychomusicology, 10, 107-126.
Cuddy, L.L. (1993). Melody comprehension and tonal structure. In T.J. Tighe & W.J. Dowling (Eds.), Psychology and music: The understanding of melody and rhythm (pp. 19-38). Hillsdale, NJ: Erlbaum.
Cuddy, L.L. (1997). Tonal relations. In I. Deliège & J. Sloboda (Eds.), Perception and cognition of music (pp. 329-352). Hove, East Sussex, UK: Taylor & Francis.
Cuddy, L.L., & Badertscher, B. (1987). Recovery of the tonal hierarchy: Some comparisons across age and levels of musical experience. Perception & Psychophysics, 41, 609-620.
Cuddy, L.L., Cohen, A.J., & Mewhort, D.J.K. (1981). Perception of structure in short melodic sequences. Journal of Experimental Psychology: Human Perception and Performance, 7, 869-883.
Cuddy, L.L., Cohen, A.J., & Miller, J. (1979). Melody recognition: The experimental application of musical rules. Canadian Journal of Psychology, 33, 148-157.
Cuddy, L.L., & Lyons, H. (1981). Musical pattern recognition: A comparison of listening to and studying tonal structure and tonal ambiguities. Psychomusicology, 1(2), 15-33.
Cuddy, L.L., & Thompson, W.F. (1992). Asymmetry of perceived key movement in chorale sequences: Converging evidence from a probe-tone analysis. Psychological Research/Psychologische Forschung, 54, 51-59.
Davidson, L., & Scripp, L. (1988). Young children’s musical representations: Windows on music cognition. In J.A. Sloboda (Ed.), Generative processes in music (pp. 195-230). Oxford: Clarendon Press.
Deliège, I. (1995). The two steps of the categorization process in music listening: An approach of the cue extraction mechanism as a modular system. In R. Steinberg (Ed.), Music and the mind machine (pp. 63-73). Berlin/Heidelberg: Springer-Verlag.
Deutsch, D. (1970). Tones and numbers: Specificity of interference in short-term memory. Science, 168, 1604-1605.
Deutsch, D. (1972). Effect of repetition of standard and comparison tones on recognition memory for pitch. Journal of Experimental Psychology, 93, 156-162.
Deutsch, D. (1978). Delayed pitch comparisons and the principle of proximity. Perception & Psychophysics, 23, 227-230.
Dewar, K.M., Cuddy, L.L., & Mewhort, D.J.K. (1977). Recognition memory for single tones with and without context. Journal of Experimental Psychology: Human Learning and Memory, 3, 60-67.
Dowling, W.J. (1991). Tonal strength and melody recognition after long and short delays. Perception & Psychophysics, 50, 305-313.
Dowling, W.J., & Bartlett, J.C. (1981). The importance of interval information in long-term memory for melodies. Psychomusicology, 1(1), 30-49.
Dowling, W.J., & Harwood, D.L. (1986). Music cognition. Toronto: Academic Press.
Estes, W.K. (1994). Classification and cognition. Oxford: Oxford University Press.
Fodor, J. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Fodor, J. (1996). It’s the thought that counts [Review of the book The prehistory of the mind]. London Review of Books, 18 (23), 22-23.
Francès, R. (1988). The perception of music (W.J. Dowling, Trans.). Hillsdale, NJ: Erlbaum. (First French edition published in 1958)
Frankland, B.W., & Cohen, A.J. (1996). Using the Krumhansl and Schmuckler key-finding algorithm to quantify the effects of tonality in the interpolated-tone pitch-comparison task. Music Perception, 14, 57-83.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.
Gross, R. (1981). DX-Score [Computer program]. Rochester, New York: Eastman School of Music.
Handel, S. (1989). Listening: An introduction to the perception of auditory events. Cambridge, MA: MIT Press.
Harman, H.H. (1976). Modern factor analysis (3rd edition, revised). Chicago: University of Chicago Press.
Heaton, R.K. (1981). A manual for the Wisconsin Card Sorting Test. Odessa, FL: Psychological Assessment Resources.
Horn, J.L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179-185.
Howe, M.J.A. (1990). The origins of exceptional abilities. Oxford: Basil Blackwell Ltd.
Jackendoff, R. (1987). Consciousness and the computational mind. Cambridge, MA: MIT Press.
Johnston, J.C. (1985). The perceptual salience of melody and melodic structures. Unpublished B.A. thesis, Queen’s University, Kingston, Ontario, Canada.
Jones, M.R. (1981). Music as a stimulus for psychological motion: Part I. Some determinants of expectancies. Psychomusicology, 1(2), 34-51.
Jones, M.R. (1982). Music as a stimulus for psychological motion: Part II. An expectancy model. Psychomusicology, 2 (1), 1-13.
Jones, M.R. (1991). Preface. Psychomusicology, 10, 71-72.
Judd, T. (1988). The varieties of musical talent. In L.K. Obler & D. Fein (Eds.), The exceptional brain: Neuropsychology of talent and special abilities (pp. 127-155). New York: Guilford Press.
Krumhansl, C.L. (1979). The psychological representation of musical pitch in a tonal context. Cognitive Psychology, 11, 346-374.
Krumhansl, C.L. (1987). Tonal and harmonic hierarchies. In J. Sundberg (Ed.), Harmony and tonality (pp. 13-32). Stockholm, Sweden: Royal Swedish Academy.
Krumhansl, C.L. (1990a). Cognitive foundations of musical pitch. New York: Oxford University Press.
Krumhansl, C.L. (1990b). Tonal hierarchies and rare intervals in music cognition. Music Perception, 7, 309-324.
Krumhansl, C.L., Bharucha, J.J., & Castellano, M.A. (1982). Key distance effects on perceived harmonic structure in music. Perception & Psychophysics, 32, 96-108.
Krumhansl, C.L., Bharucha, J.J., & Kessler, E.J. (1982). Perceived harmonic structure of chords in three related musical keys. Journal of Experimental Psychology: Human Perception and Performance, 8, 24-36.
Krumhansl, C.L., & Castellano, M.A. (1983). Dynamic processes in music perception. Memory & Cognition, 11, 325-334.
Krumhansl, C.L., & Kessler, E.J. (1982). Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys. Psychological Review, 89, 334-368.
Krumhansl, C.L., Sandell, G., & Sergeant, D. (1987). The perception of tone hierarchies and mirror forms in twelve-tone serial music. Music Perception, 5, 31-78.
Krumhansl, C.L., & Shepard, R.N. (1979). Quantification of the hierarchy of tonal functions within a diatonic context. Journal of Experimental Psychology: Human Perception and Performance, 5, 579-594.
Lengeling, G., Adam, C., & Schupp, R. (1990). C-LAB: Notator SL/Creator SL (Version 3.1) [Computer program]. Hamburg, Germany: C-LAB Software GmbH.
Lerdahl, F. (1988). Tonal pitch space. Music Perception, 5, 315-349.
Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge, MA: MIT Press.
Longman, R.S., Cota, A.A., Holden, R.R., & Fekken, G.C. (1989). A regression equation for the parallel analysis criterion in principal components analysis: Mean and 95th percentile eigenvalues. Multivariate Behavioral Research, 24, 59-69.
Lynch, M.P., Short, L.B., & Chua, R. (1995). Contributions of experience to the development of musical processing in infancy. Developmental Psychobiology, 28, 377-398.
Marin, O.S.M. (1982). Neurological aspects of music perception and performance. In D. Deutsch (Ed.), The psychology of music (pp. 453-477). New York: Academic Press.
McCloskey, M. (1993). Theory and evidence in cognitive neuropsychology: A “radical” response to Robertson, Knight, Rafal, and Shimamura (1993). Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 718-734.
Melen, M., & Deliège, I. (1995). Extraction of cues or underlying harmonic structure: Which guides recognition of familiar melodies? European Journal of Cognitive Psychology, 7, 81-106.
Meyer, L.B. (1956). Emotion and meaning in music. Chicago: University of Chicago Press.
Oram, N., & Cuddy, L.L. (1995). Responsiveness of Western adults to pitch-distributional information in melodic sequences. Psychological Research/Psychologische Forschung, 57, 103-118.
Patel, A., & Peretz, I. (1997). Is music autonomous from language? A neuropsychological appraisal. In I. Deliège & J. Sloboda (Eds.), Perception and cognition of music (pp. 191-215). Hove, East Sussex, UK: Taylor & Francis.
Patel, A., Peretz, I., Tramo, M., & Labreque, R. (1998). Processing prosodic and musical patterns: A neuropsychological investigation. Brain & Language, 61, 123-144.
Peretz, I. (1993a). Auditory atonalia for melodies. Cognitive Neuropsychology, 10, 21-56.
Peretz, I. (1993b). Auditory agnosia: A functional analysis. In S. McAdams & E. Bigand (Eds.), Thinking in sound: The cognitive psychology of human audition (pp. 199-230). Oxford: Clarendon Press/Oxford University Press.
Peretz, I., & Kolinsky, R. (1993). Boundaries of separability between melody and rhythm in music discrimination: A neuropsychological perspective. Quarterly Journal of Experimental Psychology, 46, 301-325.
Peretz, I., Kolinsky, R., Tramo, M., Labrecque, R., Hublet, C., Demeurisse, G., & Belleville, S. (1994). Functional dissociations following bilateral lesions of auditory cortex. Brain, 117, 1283-1301.
Peretz, I., & Morais, J. (1989). Music and modularity. Contemporary Music Review, 4, 279-294.
Piston, W. (1987). Harmony (Revised and expanded by M. DeVoto). New York: Norton.
Polk, M., & Kertesz, A. (1993). Music and language in degenerative disease of the brain. Brain & Cognition, 22, 98-117.
Repp, B.H. (1991). Some cognitive and perceptual aspects of speech and music. In J. Sundberg, L. Nord & R. Carlson (Eds.), Music, language, speech and brain (pp. 257-268). London: Macmillan.
Samson, S., & Zatorre, R.J. (1994). Contribution of the right temporal lobe to musical timbre discrimination. Neuropsychologia, 32, 231-240.
Shepard, R.N. (1964). Circularity in judgments of relative pitch. Journal of the Acoustical Society of America, 36, 2346-2353.
Shipley, W.C. (1953). Shipley-Institute of Living Scale for measuring intellectual impairment. In A. Weider (Ed.), Contributions toward medical psychology: Theory and psychodiagnostic methods (Vol. 2, pp. 751-756). New York: The Ronald Press Company.
Shuter-Dyson, R., & Gabriel, C. (1981). The psychology of musical ability (2nd edition). London: Methuen.
Sloboda, J.A. (1985). The musical mind: The cognitive psychology of music. Oxford: Clarendon Press.
Smith, J.D. (1997). The place of musical novices in music science. Music Perception, 14, 227-262.
Smith, J.D., & Melara, R.J. (1990). Aesthetic preference and syntactic prototypicality in music: ’Tis the gift to be simple. Cognition, 34, 279-298.
Spearman, C. (1904). “General intelligence,” objectively determined and measured. American Journal of Psychology, 15, 201-293.
Spearman, C. (1927). The abilities of man: Their nature and measurement. New York: Macmillan and Company Ltd. [Reprinted: New York: AMS Publishers, 1981].
Steinke, W.R. (1992). Musical abstraction and nonmusical abstraction abilities in musically trained and untrained adults. Unpublished M.A. thesis, Queen’s University, Kingston, Ontario, Canada.
Steinke, W.R., Cuddy, L.L., & Jacobson, L.S. (1996). Melody and rhythm processing following right-hemisphere stroke: A case study. In B. Pennycook & E. Costa-Giomi (Eds.), Proceedings of the 4th International Conference on Music Perception and Cognition (pp. 323-325). Montreal: McGill University.
Sternberg, R.J., & Powell, J.S. (1982). Theories of intelligence. In R.J. Sternberg (Ed.), Handbook of human intelligence (pp. 975-1006). New York: Cambridge University Press.
Thompson, W.F., & Cuddy, L.L. (1989). Sensitivity to key change in chorale sequences: A comparison of single voices and four-voice harmony. Music Perception, 7, 151-168.
Trehub, S.E., & Trainor, L.J. (1993). Listening strategies in infancy: The roots of music and language development. In S. McAdams & E. Bigand (Eds.), Thinking in sound (pp. 278-327). Oxford: Clarendon Press.
Vernon, P.E. (1950). The structure of human abilities. London: Methuen.
Warren, R.M. (1993). Perception of acoustic sequences: Global integration versus temporal resolution. In S. McAdams & E. Bigand (Eds.), Thinking in sound (pp. 37-68). Oxford: Clarendon Press.
Waterhouse, L. (1988). Speculations on the substrate of special talents. In L.K. Obler & D. Fein (Eds.), The exceptional brain: Neuropsychology of talent and special abilities (pp. 493-512). New York: Guilford Press.
Wechsler, D. (1981). Manual for the Wechsler Adult Intelligence Scale-Revised. New York: The Psychological Corporation.
Wechsler, D. (1987). Wechsler Memory Scale-Revised manual. New York: The Psychological Corporation.
Zachary, R.A. (1986). Shipley Institute of Living Scale-Revised manual. Los Angeles: Western Psychological Services.
Zatorre, R.J., Evans, A.C., & Meyer, E. (1994). Neural mechanisms underlying melodic perception and memory for pitch. Journal of Neuroscience, 14, 1908-1919.
Zatorre, R.J., Halpern, A.R., Perry, D.W., Meyer, E., & Evans, A.C. (1996). Hearing in the mind’s ear: A PET investigation of musical imagery and perception. Journal of Cognitive Neuroscience, 8, 29-46.
Zimmerman, I.L., & Woo-Sam, J.M. (1973). Clinical interpretation of the Wechsler Adult Intelligence Scale. New York: Grune & Stratton.
Zwick, W.R., & Velicer, W.F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99, 432-442.
Copyright Canadian Psychological Association Dec 1997