Lee et al. (2011)
Investigation of melodic contour processing in the brain using multivariate pattern-based fMRI

Yune-Sang Lee a,b,c,⁎, Petr Janata d, Carlton Frost a, Michael Hanke a,b,e, Richard Granger a,b

a Dept. of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA
b Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH, USA
c Neurology Department, University of Pennsylvania, Philadelphia, PA, USA
d Center for Mind and Brain, U.C. Davis, CA, USA
e Dept. of Experimental Psychology, Otto-von-Guericke University, Magdeburg, Germany

Article history: Received 23 November 2010; Revised 28 January 2011; Accepted 2 February 2011; Available online 21 February 2011.

Keywords: Auditory; Music; Melody; Contour; fMRI; MVPA; Neuroimaging; Multivariate; Spatial; Action perception; Emotion; rSTS; ACC; IPL

Abstract: Music perception generally involves processing the frequency relationships between successive pitches and extraction of the melodic contour. Previous evidence has suggested that the 'ups' and 'downs' of melodic contour are categorically and automatically processed, but knowledge of the brain regions that discriminate different types of contour is limited. Here, we examined melodic contour discrimination using multivariate pattern analysis (MVPA) of fMRI data. Twelve non-musicians were presented with various ascending and descending melodic sequences while being scanned. Whole-brain MVPA was used to identify regions in which the local pattern of activity accurately discriminated between contour categories. We identified three distinct cortical loci: the right superior temporal sulcus (rSTS), the left inferior parietal lobule (lIPL), and the anterior cingulate cortex (ACC). These results complement previous findings of melodic processing within the rSTS, and extend our understanding of the way in which abstract auditory sequences are categorized by the human brain.

Published by Elsevier Inc.
Introduction

When listening to music, we effortlessly follow a series of ups and downs between notes in a melody. Moreover, we can easily recognize a well-known musical tune regardless of the key in which it is played. One of the dominant music theories is that musical melodies are encoded via two distinct systems: contour processing, which concerns ups and downs of pitch change irrespective of its exact distance, and interval processing, which analyzes the absolute or relative distance from one note to another (Dowling, 1978; Dowling and Fujitani, 1971; Peretz, 1990; Peretz and Babai, 1992). Although interval processing is important for establishing the tonality (key) of a musical passage, considerable evidence has suggested that contour processing provides an essential basis for melody recognition (Dowling, 1978; Dowling and Fujitani, 1971; Edworthy, 1985; Hebert and Peretz, 1997; Peretz and Babai, 1992). For example, Dowling and Fujitani (1971) investigated the role of both interval and contour in melody recognition. In one condition, subjects were required to detect subtle changes in interval size between reference and target melodies that were played in either the same or different keys. The results showed that performance was worse when the target melodies were presented in different keys than when presented in the original key. In another experiment, subjects were required to detect changes in the contours of target melodies that were played in the same or different keys. For this task, subjects' performance was robust across transposition to different keys. The experiments suggest that contour is a defining feature of melodies, and thereby a potent cue to the identity of musical pieces, for instance the short note sequences that form recognizable 'hooks' and 'themes'. Further behavioral studies revealed that both infants and musically naïve adults were able to detect contour but not interval changes in unfamiliar melodies across changes of key (Trainor and Trehub, 1992; Trainor and Trehub, 1994).
Together, these studies have suggested that contour may be a more fundamental attribute for melody recognition than interval size.

NeuroImage 57 (2011) 293–300. doi:10.1016/j.neuroimage.2011.02.006
⁎ Corresponding author at: Center for Functional Neuroimaging, Neurology Department, 3 W. Gates Bldg., 3400 Spruce St., Philadelphia, PA 19104-4283, USA. Fax: +1 215 349 8260. E-mail address: yslee@mail.med.upenn.edu (Y.-S. Lee).

A melodic contour can be categorically parsed into its minimal units, namely ups and downs. An influential lesion study (Johnsrude et al., 2000) revealed that patients with a lesion in the right superior temporal lobe could reliably distinguish whether two pitches were the same or different, but were impaired when judging whether the second note was higher or lower than the first. This partial impairment clearly indicates that there exist neural substrates that provide information about the directionality of successive pitches beyond the tonotopic organization of the primary auditory cortex (A1). Patterson et al. (2002) showed this to be the case using spectrally-matched auditory stimuli. In this sophisticated fMRI study, a contrast of melodies (either random or diatonic scale) versus fixed-pitch sound revealed activity in the right superior temporal region, whereas a contrast of fixed-pitch sound versus non-pitch sound revealed activity mostly within Heschl's gyrus. This finding suggests a hierarchical organization of complex pitch processing.
Relatedly, Stewart et al. (2008) found that detection of 'local' interval violations in 4-pitch sequences primarily recruited regions of the right posterior superior temporal sulcus (STS), whereas detection of more 'global' contour violations preferentially recruited the left posterior STS.

The present fMRI study sought to better understand 'where' and 'how' melodic contour information is represented along the auditory pathway. To this end, we chose a multivariate pattern-based fMRI analysis (MVPA) approach. Although the subtraction logic of standard neuroimaging techniques has offered a way to highlight the network of areas associated with melody processing (Hyde et al., 2008, 2011; Janata et al., 2002a, 2002b; Patterson et al., 2002; Platel et al., 1997; Stewart et al., 2008; Warren et al., 2003; Warren and Griffiths, 2003; Zatorre et al., 1994, 1996), the method suffers if the objective is to differentiate stimulus categories and all of the voxels in a region are modulated by one category or the other. This limitation led us to believe that the conventional neuroimaging approach may not be suited to address our question. Instead, we expected that 'ups' and 'downs' of contour category may be found by measuring differential voxel patterns across the stimuli using MVPA. One MVPA method, the searchlight analysis developed by Kriegeskorte et al. (2006), has been proven to effectively delineate brain regions that are inherently invisible to the standard fMRI analysis method (Kriegeskorte et al., 2006; Raizada et al., 2010; for a tutorial review, see Pereira et al., 2009).
With this approach, we examined each location of the brain to identify areas that may categorize between ascending and descending melodies in a non-musician group. Additionally, the identical analysis method was applied to explore the brain regions that may be involved in mode processing by collapsing the same set of stimuli into major and minor categories. Following the fMRI experiment, two behavioral experiments were conducted 1) to construct a perceptual similarity space with regard to all the melodies and 2) to evaluate the emotional valence (e.g., happy vs. sad) aroused by each melodic sequence. These behavioral experiments would further support the neural findings and provide interpretability of the neural findings in relation to the observed behavior.

Materials and methods

Subjects

Subjects were 12 healthy right-handed volunteers (7 male; average age = 20.4; average musical training = 5.7 years), none of whom had majored in music or participated in professional or semi-professional music activities (e.g., playing in an orchestra or a rock band). No subjects had absolute pitch. Consent forms were obtained from all subjects as approved by the Committee for the Protection of Human Subjects at Dartmouth College.

Stimuli

Twenty short melodic sequences consisting of five piano tones in the middle octave range were generated using the MIDI sequence tool in Apple's GarageBand software and exported to .wav format. All stimuli were matched in duration (2.5 s, 500 ms per note), sampling rate (44.1 kHz, 16-bit, stereo), and volume intensity using Sound Forge 9.0 (Sony, Japan) and Matlab 2009b (Mathworks Inc., Natick, MA, USA). A 2×2 design was employed with Mode (major, minor) in one dimension and Contour (ascending, descending) in the other, creating four categories of stimuli, each of which contained five different melody exemplars whose slopes were systematically varied (Fig. 1).
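For illustration, the 2×2 stimulus grid just described can be enumerated as follows. This is a sketch, not the authors' stimulus-generation code: the exemplar labels follow Fig. 1, while the dictionary structure is an assumption made for clarity.

```python
from itertools import product

# 2x2 factorial design: Mode x Contour, with five slope exemplars per cell.
# Exemplar names follow Fig. 1; the data structure itself is illustrative.
modes = ["major", "minor"]
contours = ["ascending", "descending"]
exemplars = ["diatonic", "7th", "arpeggio", "5th", "wide_arpeggio"]

stimuli = [
    {"mode": m, "contour": c, "exemplar": e}
    for m, c, e in product(modes, contours, exemplars)
]

assert len(stimuli) == 20  # four categories x five exemplars
```

Collapsing the same 20 items over `mode` or over `contour` yields the two binary labelings used in the classification analyses below.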
In addition to these 20 stimuli, melodies with a third type of contour, comprising both upward and downward pitch changes, were created to be used as catch trials during the scans (Supplementary Fig. 1).

fMRI scanning

A slow event-related design was employed with an 8 s inter-stimulus interval (ISI) in eight runs (44 trials per run). Fixation crosses were displayed during runs. Scanning was conducted on a 3 T Philips Intera Achieva whole-body scanner (Philips Medical Systems, Best, the Netherlands) at the Dartmouth College Brain Imaging Center. The parameters of the standard echo-planar imaging (EPI) sequences were as follows: TR = 2000 ms, TE = 35 ms, FOV = 240 × 240 mm, 30 slices, voxel size = 3 × 3 × 3 mm, inter-slice interval = 0.5 mm, sequential axial acquisition. A high-resolution T1-weighted MPRAGE scan (voxel size = 1 × 1 × 1 mm) was acquired at the end of the scan session. Stimuli were delivered binaurally using high-fidelity MR-compatible headphones (OPTIME 1, MR confon, Germany).

Fig. 1. Staff view of the 20 melody stimuli. a. Diatonic scale, b. 7th scale, c. Arpeggio scale, d. 5th scale, e. Wide arpeggio scale. All melodies were anchored to the note C (261.63 Hz) in the middle octave range. The tempo of each melodic sequence was 120 bpm.

Experimental procedures

fMRI experiment

During the scan, subjects heard a series of melodies while fixating on the cross on the screen. Each melody was presented twice per run (a total of 40 melodies per run) every 8 s, and the order was randomized across runs. Occasionally, a catch-trial melody (Supplementary Fig. 1) was presented to monitor subjects' alertness (a total of 4 melodies), for which subjects were instructed to press a button to indicate when they perceived a change in contour within a particular melody.

Happiness rating

Following the fMRI scans, happiness ratings were measured on a separate day.
In a quiet behavioral testing room, stimuli were presented via noise-canceling headphones (QuietComfort acoustic noise-canceling headphones, Bose, USA) and subjects were instructed to report how happy each melody sounded using a Likert-type scale from 1 (very sad) to 7 (very happy).

Similarity distance measurement

For another post-fMRI experiment, similarity distance among the melodies was measured. In a quiet behavioral testing room, consecutive pairs of sequences consisting of the stimuli from the fMRI experiment were presented via noise-canceling headphones (QuietComfort acoustic noise-canceling headphones, Bose, USA), and subjects were asked to indicate how similar each pair of melodies (400 pairs, 20 × 20) sounded using a Likert-type scale from 1 (not at all similar) to 7 (exactly alike). Subjects were encouraged to use the full scale. The full list comprised the set of all possible pairings, presented over the course of two half-hour sessions.

MVPA methods

fMRI data were preprocessed using the SPM5 software package (Institute of Neurology, London, UK) and MATLAB 2009b (Mathworks Inc., Natick, MA, USA). All images were realigned to the first EPI to correct for movement artifacts, and then spatially normalized into Montreal Neurological Institute (MNI) standard stereotactic space (e.g., the ICBM152 EPI template) with their original voxel size preserved (3 mm × 3 mm × 3 mm).

Classification on the contour (ascending vs. descending)

After preprocessing, fMRI time-courses of all voxels were extracted from unsmoothed images. Subsequently, these raw signals were high-pass filtered with a 300 s cut-off to remove slow drifts caused by the scanner, and standardized across entire runs using z-scores to normalize intensity differences among runs. To guard against a confounding signal from different stimulus onsets, a signal solely generated by each stimulus (i.e., corresponding to time points 4, 6, and 8 s after the onset of the stimulus) was acquired from voxels.
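The per-run standardization and post-onset volume selection described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: function names and array shapes are invented for clarity, the 300 s high-pass step is omitted, and with TR = 2 s the time points 4, 6, and 8 s after onset correspond to the 2nd, 3rd, and 4th volumes after the onset volume. The paper says the selected signals were "vectorized"; whether they were averaged or concatenated is not specified, so averaging here is an assumption.

```python
import numpy as np

def zscore_run(run_data):
    """Z-score one run's voxel time-courses (run_data: n_volumes x n_voxels).

    High-pass filtering with a 300 s cut-off would precede this step in the
    paper's pipeline; it is omitted here for brevity.
    """
    mu = run_data.mean(axis=0)
    sd = run_data.std(axis=0)
    return (run_data - mu) / np.where(sd == 0, 1.0, sd)

def stimulus_pattern(run_data, onset_vol, tr=2.0):
    """Pool the volumes 4, 6, and 8 s after stimulus onset.

    With TR = 2 s these are volumes onset_vol + 2, + 3, and + 4. Averaging
    (rather than concatenating) the three volumes is an assumption.
    """
    offsets = [int(t / tr) for t in (4.0, 6.0, 8.0)]  # -> [2, 3, 4]
    return run_data[[onset_vol + k for k in offsets]].mean(axis=0)
```

Each trial thus yields one pattern vector per voxel neighborhood, labeled by its contour category.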
The neural signals corresponding to ascending and descending categories were vectorized to be submitted to a classifier. For the binary classifier, we used the Lagrangian Support Vector Machine algorithm (Mangasarian and Musicant, 2001). The classifier was initially trained on a subset of the datasets (training set) and applied to the remaining datasets (testing set). For the purpose of validating results, signals from six scanning runs served as a training set and two runs served as a testing set, resulting in 4-fold cross-validation. The classification was performed at approximately 50,000 searchlight spheres (each with a radius of two neighboring voxels, comprising a maximum of 33 voxels). The percent-correct result for each classification test was averaged across the four training/testing combinations and stored in each voxel (the center voxel of a sphere) of an output image for each subject. These output images of all subjects were submitted to a second-level random-effects analysis (Raizada et al., 2010; Walther et al., 2009; Stokes et al., 2009) using SPM, such that the average classification accuracy for each voxel was compared to chance (50%) and a group t-map containing the corresponding t-value for each voxel was generated.

Classification on the mode (major vs. minor)

Procedures were identical except that the corresponding neural signals were chosen based upon mode (major and minor).

Results

Behavioral results (similarity distance matrix)

Similarity data were acquired from 7 of the 12 subjects who had previously participated in the fMRI experiment, compiled in a square symmetrical matrix format, and analyzed using SPSS v. 17.0 (Chicago, IL), generating 2-dimensional Euclidean-distance plots both within and across subjects with an S-stress convergence of .001. The multidimensional scaling (MDS) structure revealed that the primary dimension of clustering among the sequences was contour (Fig.
2a). More specifically, it was observed that melodies within the same contour tended to cluster together more than melodies across different contours. The result confirmed our notion that the contour is categorically perceived. Another dimension captured the degree of slope across the stimuli, in that high-slope melodies such as wide-arpeggio and 5th melodies tended to cluster together and low-slope melodies such as diatonic and 7th melodies tended to cluster together (Fig. 2a). Together the first two dimensions explained 82% of the variance in the similarity judgments. When added, a 3rd dimension, which appears to be weakly associated with mode, increased the percentage of explained variance by only 2% (see Supplementary Figs. 2a and b).

Behavioral results (happiness rating)

A one-way repeated-measures ANOVA was performed on the average happiness ratings across the four melodic categories. The results revealed that there was a significant difference among the four categories, F(3,33) = 19.99, P < 0.05 (Fig. 2b). In line with a previous report (Collier and Hubbard, 2001), there was a main effect of contour such that ascending melodies sounded happier than descending melodies irrespective of mode, t11 = 5.58, P < 0.05. Likewise, there was a main effect of mode such that major melodies sounded happier than minor melodies irrespective of contour, t11 = 4.90, P < 0.05. There was no interaction between contour and mode (n.s.).

fMRI results

Ascending vs. descending (contour)

The searchlight analysis revealed three distinct brain regions that reliably categorized between ascending and descending melodies (p(cluster-size corrected) < 0.05 in combination with p(uncorrected) < 0.005) (Fig. 3 and Table 1).
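The per-sphere cross-validation scheme behind this searchlight map can be sketched as follows. This is an illustrative sketch, not the authors' implementation: scikit-learn's `LinearSVC` stands in for the Lagrangian SVM, the sphere is defined by Euclidean distance on a voxel grid, and all variable names are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def searchlight_accuracy(patterns, labels, runs, center, coords, radius_vox=2):
    """Cross-validated accuracy for one searchlight sphere.

    patterns: (n_trials, n_voxels) z-scored stimulus patterns
    labels:   (n_trials,) contour category (0 = ascending, 1 = descending)
    runs:     (n_trials,) run index 0..7; run pairs form the 4 folds
    coords:   (n_voxels, 3) voxel grid coordinates
    """
    # Voxels within a 2-voxel radius of the center (at most 33 voxels).
    sphere = np.linalg.norm(coords - coords[center], axis=1) <= radius_vox
    accs = []
    for fold in range(4):                  # 4-fold: 6 train runs / 2 test runs
        test = np.isin(runs, [2 * fold, 2 * fold + 1])
        clf = LinearSVC()                  # stand-in for the Lagrangian SVM
        clf.fit(patterns[~test][:, sphere], labels[~test])
        accs.append(clf.score(patterns[test][:, sphere], labels[test]))
    return float(np.mean(accs))
```

Running this at every brain voxel yields a per-subject accuracy map whose values are then tested against 50% chance at the group level.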
Among the areas, a part of the right STS (x, y, z: 51, −18, −7) exhibited the most differential neural pattern between the categories, t11 = 7.71, confirming the previous findings that melodic processing is mainly mediated by the right superior temporal region (Hyde et al., 2008; Johnsrude et al., 2000; Warrier and Zatorre, 2004; Zatorre, 1985; Zatorre et al., 1994). The IPL in the contralateral left hemisphere also displayed a categorical neural pattern in response to ascending and descending melodies. Within this region, the most robust local pattern (t11 = 5.59) was observed in the intraparietal sulcus (x, y, z: −48, −36, 39). Finally, the ACC (x, y, z: 3, 21, 28) in the frontal lobe was found to discriminate between ascending and descending melodies. Subsequently, ROIs (regions of interest) were extracted for verification of overall accuracy within each of the identified areas. The number of voxels within the ROIs was 133 (rSTS), 183 (lIPL), and 233 (ACC), and the fMRI intensity in each ROI was submitted to another set of classification tests. The overall accuracies of the classification tests were 52.1% (s.e. = 0.6, t11 = 3.5, P = 0.005), 51.7% (s.e. = 0.6, t11 = 3.1, P = 0.01), and 52.3% (s.e. = 0.4, t11 = 5.3, P = 0.0003) in the rSTS, lIPL, and ACC, respectively.

While significant, the observed percent accuracies were somewhat low. In order to validate the overall accuracy within each of those areas, a simulation using Monte Carlo shuffling was performed. To this end, fMRI signals within each area were randomly assigned to ascending or descending categories in a training set and submitted to the classifier. This classifier was then applied to predict the categories that correctly corresponded to the fMRI signals in the remaining testing set. This was done across the 4-fold cross-validation sets. The Monte Carlo shuffling with 1000 iterations was plotted and compared to the observed accuracy (Fig. 4).
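The Monte Carlo procedure just described, shuffling training labels while keeping test labels intact, can be sketched as follows. This is a hedged sketch rather than the authors' code: the `classify` callback, variable names, and fold layout are assumptions, and any classifier with the stated signature can be plugged in.

```python
import numpy as np

def permutation_null(patterns, labels, runs, classify, n_iter=1000, seed=0):
    """Monte Carlo null distribution of cross-validated accuracy.

    On each iteration, training labels are shuffled (breaking the true
    pattern-label pairing) while test labels stay intact, as in the text.
    classify: function (train_X, train_y, test_X, test_y) -> accuracy.
    """
    rng = np.random.default_rng(seed)
    null = np.empty(n_iter)
    for i in range(n_iter):
        accs = []
        for fold in range(4):              # same 4 folds as the real analysis
            test = np.isin(runs, [2 * fold, 2 * fold + 1])
            shuffled = rng.permutation(labels[~test])
            accs.append(classify(patterns[~test], shuffled,
                                 patterns[test], labels[test]))
        null[i] = np.mean(accs)
    return null

def p_value(observed, null):
    """One-tailed p: fraction of null accuracies at or above the observed one."""
    return float((null >= observed).mean())
```

Comparing the observed ROI accuracy against this 1000-sample null distribution is what Fig. 4 plots.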
It was revealed that the observed accuracy was indeed significantly above the chance level (P < 0.001) for all three ROIs, confirming that each area can distinguish between ascending and descending categories.

Major vs. minor (mode)

Although our primary focus was an investigation of contour processing, we performed a second searchlight analysis to identify the brain regions involved in implicit categorization of melodies by major/minor mode during the contour task. The analysis did not yield any significant voxels for this classification even at an extremely relaxed significance threshold (p(uncorrected) < 0.01).

Discussion

This study sought to identify neural structures underlying melodic contour processing. More specifically, we searched for sets of voxels that can distinguish between ascending and descending melodic sequences. Using a whole-brain searchlight method, we found that three distinct areas, namely, the right STS, left IPL, and ACC, produced differential neural patterns in response to ascending and descending contour categories (Fig. 3). By contrast, a subsequent searchlight analysis with respect to mode (major vs. minor) did not find any significant voxels. Given that our behavioral results showed that both mode and contour were separable by emotional valence, these null results may suggest that the categorical neural responses that were observed during the contour task may have more to do with pitch sequence processing than with higher-order emotional processing. Nonetheless, the possibility of categorization by emotion cannot be fully discounted by the results, as the contribution of mode to an emotional differentiation of the melodies could have been disregarded during the contour-detection task. Further, both the rSTS and ACC have been implicated in the emotion literature, leaving open the possibility of functional heterogeneity within those regions.

Fig. 2. a. The similarity structure in 2D projections among all pairwise (20 × 20) melody comparisons.
The horizontal axis captures the distinction in contour and the vertical axis captures the variance of the slopes among melodies. Together, they account for a total of 82% of the variance. Abbreviations: as = ascending, ds = descending, di = diatonic, arp = arpeggio, warp = wide arpeggio, maj = major, min = minor. b. Happiness ratings for the four melody categories. The x-axis depicts the 2 melodic categories by mode, lines by contour, and the y-axis depicts the rating between 1 and 7. There was a significant difference among the four melody types in their emotional content (F(3,33) = 19.99, P < 0.05).

In line with the neural findings, the similarity distance matrix revealed that ascending and descending melodies were indeed categorically divided in perceptual space (Fig. 2a). As was expected, GLM analysis yielded no significant voxels except for the melodies vs. resting period comparison, which mainly showed bilateral activation of the auditory cortices (see supplemental material). In the field of the cognitive neuroscience of music, the neural signature of contour processing has been primarily investigated in EEG (electroencephalography) studies (Fujioka et al., 2004; Paavilainen et al., 1998; Saarinen et al., 1992; Schiavetto et al., 1999; Tervaniemi et al., 1994; Trainor et al., 2002). For example, Trainor et al. (2002) found that a contour shift from ascending to descending melodies elicited an MMN (mismatch negativity) in musically untrained subjects, suggesting that melodic contour information might be categorized automatically by the brain even in the absence of attention.
Moreover, a subsequent MEG (magnetoencephalography) study (Fujioka et al., 2004) extended the previous findings by demonstrating that the MMN was more pronounced in musicians than in non-musicians when a contour change occurred, indicating that automatic contour processing can be sharpened by musical experience. While EEG (MEG) studies have been conducted in the temporal domain, neuroimaging work has made a substantial contribution to creating the spatial map of melody processing in the brain (Hyde et al., 2008, 2011; Janata et al., 2002a, 2002b; Patterson et al., 2002; Platel et al., 1997; Stewart et al., 2008; Warren et al., 2003; Warren and Griffiths, 2003; Zatorre et al., 1994, 1996). A number of neuroimaging studies have revealed that melodies tend to evoke activation in regions of the right hemisphere, including the superior temporal and inferior frontal lobes (Hyde et al., 2008, 2011; Janata et al., 2002a; Patterson et al., 2002; Zatorre et al., 1994). For example, an early PET (positron emission tomography) study by Zatorre et al. (1994) showed that the right superior temporal sulcus was more activated when listening to a melody than to a noise burst matched in amplitude envelope. Hyde et al. (2008) showed that the right planum temporale was parametrically modulated by the degree of pitch distance in melodic sequences, whereas the left planum temporale was not responsive until the pitch distance was increased up to 200 cents between adjacent notes. In a more recent study by Hyde et al. (2011), the right inferior frontal gyrus showed deactivation and reduced functional connectivity with the auditory cortex in amusics, who were impaired in pitch processing, when compared to normal subjects. Nevertheless, as was discussed in the Introduction, the conventional neuroimaging paradigm may be limited when directly comparing different melody stimuli.
Notably, our study showed that the conventional approach was blind to the brain regions that generate differential patterns in response to different contours of melodies that were matched in other physical characteristics such as tempo and duration (see supplemental material). While significant, the classification test indicated that the distinction in each area was quite subtle. This may have been due to normalizing subjects' brains into the standard MNI template, which inevitably disregards anatomical differences across individuals. Our aim, however, was to identify melodic contour modules at the group level, not to test classification results in each subject's pre-defined ROIs. We attempted to validate the group results by simulating classification accuracies that were derived from randomly shuffled data with 1000 iterations. The Monte Carlo results for each area confirmed that the observed accuracy was significantly above chance.

Fig. 3. Brain regions that distinguish between ascending and descending melodic sequences (P(cluster-size corrected) < 0.05 in combination with P(uncorrected) < 0.005). The color scale indicates the separability t-value from the second-level group analysis of the individual searchlight analyses. Top: right superior temporal sulcus; middle: left inferior parietal lobule; bottom: anterior cingulate cortex.

Table 1. Cortical loci involved in melodic contour processing.

Region name               HEM  BA  MNI coordinates (x, y, z)  T-value  Cluster size
Superior temporal sulcus  R    22  51, −18, −7                7.71     133
Inferior parietal lobule  L    40  −48, −36, 39               5.59     183
Anterior cingulate        R    32  3, 24, 28                  4.66     233

Table lists significant areas (p(cluster-size corrected) < 0.05). HEM: hemisphere; BA: approximate Brodmann area.