
Contents lists available at ScienceDirect

NeuroImage

journal homepage: www.elsevier.com/locate/neuroimage

Decoding the temporal dynamics of affective scene processing

Ke Bo a,b, Lihan Cui a, Siyang Yin a, Zhenhong Hu a, Xiangfei Hong a,c, Sungkean Kim a,d, Andreas Keil e, Mingzhou Ding a,∗
a J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
b Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
c Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, China
d Department of Human-Computer Interaction, Hanyang University, Ansan, Republic of Korea
e Department of Psychology, University of Florida, Gainesville, FL 32611, USA

ARTICLE INFO

Keywords:
Emotion, affective scenes
IAPS
Multivariate pattern analysis
EEG
fMRI
Representation similarity analysis
Visual cortex

ABSTRACT

Natural images containing affective scenes are used extensively to investigate the neural mechanisms of visual emotion processing. Functional MRI studies have shown that these images activate a large-scale distributed brain network that encompasses areas in visual, temporal, and frontal cortices. The underlying spatial and temporal dynamics, however, remain to be better characterized. We recorded simultaneous EEG-fMRI data while participants passively viewed affective images from the International Affective Picture System (IAPS). Applying multivariate pattern analysis to decode EEG data, and representational similarity analysis to fuse EEG data with simultaneously recorded fMRI data, we found that: (1) ∼80 ms after picture onset, perceptual processing of complex visual scenes began in early visual cortex, proceeding to ventral visual cortex at ∼100 ms, (2) between ∼200 and ∼300 ms (pleasant pictures: ∼200 ms; unpleasant pictures: ∼260 ms), affect-specific neural representations began to form, supported mainly by areas in occipital and temporal cortices, and (3) affect-specific neural representations were stable, lasting up to ∼2 s, and exhibited temporally generalizable activity patterns. These results suggest that affective scene representations in the brain are formed temporally in a valence-dependent manner and may be sustained by recurrent neural interactions among distributed brain areas.

1. Introduction

The visual system detects and evaluates threats and opportunities in complex visual environments to facilitate the organism's survival. In humans, to investigate the underlying neural mechanisms, we record fMRI and/or EEG data from observers viewing depictions of naturalistic scenes varying in affective content. A large body of previous fMRI work has shown that viewing emotionally engaging pictures, compared to neutral ones, heightens blood flow in limbic, frontoparietal, and higher-order visual structures (Lang et al., 1998; Phan et al., 2002; Liu et al., 2012; Bradley et al., 2015). Applying multivariate pattern analysis (MVPA) and functional connectivity techniques to fMRI data, we further reported that affective content can be decoded from voxel patterns across the entire visual hierarchy, including early retinotopic visual cortex, and that the anterior emotion-modulating structures such as the amygdala and the prefrontal cortex are the likely sources of these affective signals via the mechanism of reentry (Bo et al., 2021).

Temporal dynamics of affective scene processing remain to be better elucidated. The event-related potential (ERP), an index of average neural mass activity with millisecond temporal resolution, has been the main method for characterizing the temporal aspects of affective scene perception (Cuthbert et al., 2000; Keil et al., 2002; Hajcak et al., 2009).

∗ Corresponding author.
E-mail address: [email protected]fl.edu (M. Ding).


Univariate ERPs are sensitive to local neural processes but do not reflect the contributions of multiple neural processes taking place in distributed brain regions underlying affective scene perception. The advent of the multivariate decoding approach has begun to expand the potential of the ERPs (Bae and Luck, 2019; Sutterer et al., 2021). By going beyond univariate evaluations of condition differences, these multivariate pattern analyses take into account voltage topographies reflecting distributed neural activities and help uncover the discriminability of experimental conditions not possible with the univariate ERP method. The MVPA method can even be applied to single-trial EEG data. By going beyond mean voltages, the decoding algorithms can examine differences in single-trial EEG activity patterns across all sensors, which further complements the ERP method (Grootswagers et al., 2017; Contini et al., 2017). Conceptually, the presence of decodable information in neural patterns has been taken to index differences in neural representations (Norman et al., 2006). Thus, in the context of EEG/ERP data, the time course of decoder performance may inform on how neural representations linked to a given condition or stimulus form

https://doi.org/10.1016/j.neuroimage.2022.119532.

Received 15 February 2022; Received in revised form 1 July 2022; Accepted 1 August 2022; Available online 2 August 2022.

1053-8119/© 2022 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license.


and evolve over time (Cauchoix et al., 2014; Wolff et al., 2015; Dima et al., 2018).

The first question we considered was how long it takes for the affect-specific neural representations of affective scenes to form. For non-affective images containing objects such as faces, houses or scenes, past work has shown that the neural responses become decodable as early as ∼100 ms after stimulus onset (Cichy et al., 2014; Cauchoix et al., 2014). This latency reflects the onset time for the detection and categorization of stereotypical visual features associated with different objects in early visual cortex (Nakamura et al., 1997; Di Russo et al., 2002). For complex scenes varying in affective content, however, although mapped onto rich category-specific visual features in a multivariate fashion (Kragel et al., 2019), there are no stereotypical visual features that unambiguously separate different affective categories (e.g., unpleasant scenes vs neutral scenes). Accordingly, univariate ERP studies have reported robust voltage differences between emotional and neutral content at relatively late times, e.g., ∼170–280 ms at the level of the early posterior negativity (Schupp et al., 2006; Foti et al., 2009) and ∼300 ms at the level of the late positive potential (LPP) (Cuthbert et al., 2000; Lang and Bradley, 2010; Liu et al., 2012; Sabatinelli et al., 2013). We sought to further examine these issues by applying multimodal neuroimaging and the MVPA methodology. It is expected that perceptual processing of affective scenes would begin ∼100 ms following picture onset whereas affect-specific neural representations would emerge between ∼150 ms and ∼300 ms.

A related question is whether there are systematic timing differences in the formation of neural representations of affective scenes differing in emotional content. Specifically, it has been debated to what extent pleasant versus unpleasant contents emerge over different temporal intervals (e.g., Oya et al., 2002). The negativity bias idea suggests that aversive information receives prioritized processing in the brain and predicts that scenes containing unpleasant elements evoke faster and stronger responses compared to scenes containing pleasant or neutral elements. The ERP results to date have been equivocal (Carretié et al., 2001; Huang and Luo, 2006; Franken et al., 2008). An alternative idea is that the timing of emotional representation formation depends on the specific content of the images (e.g., erotic within the pleasant category vs mutilated bodies within the unpleasant category) rather than on the broader semantic categories such as unpleasant scenes and pleasant scenes (Weinberg and Hajcak, 2010). We sought to test these ideas by applying the MVPA approach to decode subcategories of images using EEG data. It is expected that the timing of representation formation is content-specific.

How do neural representations of affective scenes, once formed, evolve over time? For non-affective images, the neural responses are found to be transient, with the processing locus evolving dynamically from one brain structure to another (Carlson et al., 2013; Cichy et al., 2014; Kaiser et al., 2016). For affective images, in contrast, the enhanced LPP, a major ERP index of affective processing, is persistent, lasting up to several seconds, and supported by distributed brain regions including the visual cortex as well as frontal structures, suggesting sustained neural representations. To test whether neural representations of affective scenes are dynamic or sustained, we applied a MVPA method called generalization across time (GAT) (King and Dehaene, 2014), in which the MVPA classifier is trained on data at one time point and tested on data from all time points. The resulting temporal generalization matrix, when plotted on the plane spanned by the training time and the testing time, can be used to visualize the temporal stability of neural representations. For a dynamically evolving neural representation, high decoding accuracy will be concentrated along the diagonal in the plane, namely, the classifier trained at one time point can only be used to decode data from the same time point but not data from other time points. For a stable or sustained neural representation, on the other hand, high decoding accuracy extends away from the diagonal line, indicating that the classifier trained at one time point can be used to decode data from other time points. It is expected that the neural representations of affective scenes are sustained rather than dynamic, with the visual cortex playing an important role in the sustained representation.

We recorded simultaneous EEG-fMRI data from participants viewing affective images from the International Affective Picture System (IAPS) (Lang et al., 1997). MVPA was applied to EEG data to assess the formation of affect-specific representations of affective scenes in the brain and their stability. EEG and fMRI data were integrated to assess the role of visual cortex in the large-scale recurrent network interactions underlying the sustained representation of affective scenes. Fusing EEG and fMRI data via representational similarity analysis (RSA) (Kriegeskorte et al., 2008), we further tested the timing of perceptual processing of affective scenes in areas along the visual hierarchy and compared that with the formation time of affect-specific representations.

2. Materials and methods

2.1. Participants

Healthy volunteers (n = 26) with normal or corrected-to-normal vision signed informed consent and participated in the experiment. Two participants withdrew before recording. Four additional participants were excluded for excessive movements inside the scanner; EEG and fMRI data from these four participants were not considered. Data from the remaining 20 subjects were analyzed and reported here (10 women; mean age: 20.4 ± 3.1).

These data have been published before (Bo et al., 2021) to address a different set of questions. In particular, in Bo et al. (2021), we asked the question of whether affective signals can be found in visual cortex. Analyzing fMRI, an affirmative answer was found when it was shown that pleasant, unpleasant, and neutral pictures evoked highly decodable neural representations in the entire retinotopic visual hierarchy. Using the late positive potential (LPP) and effective functional connectivity as indices of neural reentry, we further argued that these affective representations are likely the results of feedback from anterior emotion-modulating structures such as the amygdala and the prefrontal cortex. In the present study we address the temporal dynamics of affective scene processing, where the focus was placed on EEG decoding.

2.2. Procedure

2.2.1. The stimuli

The stimuli included 20 pleasant, 20 neutral and 20 unpleasant pictures from the International Affective Picture System (IAPS; Lang et al., 1997). Pleasant: 4311, 4599, 4610, 4624, 4626, 4641, 4658, 4680, 4694, 4695, 2057, 2332, 2345, 8186, 8250, 2655, 4597, 4668, 4693, 8030; Neutral: 2398, 2032, 2036, 2037, 2102, 2191, 2305, 2374, 2377, 2411, 2499, 2635, 2347, 5600, 5700, 5781, 5814, 5900, 8034, 2387; Unpleasant: 1114, 1120, 1205, 1220, 1271, 1300, 1302, 1931, 3030, 3051, 3150, 6230, 6550, 9008, 9181, 9253, 9420, 9571, 3000, 3069. The pleasant pictures included sports scenes, romance, and erotic couples and had average arousal and valence ratings of 5.8 ± 0.9 and 7.0 ± 0.5, respectively. The unpleasant pictures included threat/attack scenes and bodily mutilations and had average arousal and valence ratings of 6.2 ± 0.8 and 2.8 ± 0.8, respectively. The neutral pictures were images containing landscapes, adventures, and neutral humans and had average arousal and valence ratings of 4.2 ± 1.0 and 6.3 ± 1.0, respectively. The arousal ratings for pleasant and unpleasant pictures are not significantly different (p = 0.2) but both are significantly higher than that of the neutral pictures (p < 0.001). Valence differences between unpleasant vs neutral (p < 0.001) and between pleasant vs neutral (p = 0.005) are both significant. Based on specific content, the 60 pictures can be further divided into 6 subcategories: disgust/mutilation body, attack/threat scene, erotic couple, happy people, neutral people, and adventure/nature scene. These subcategories provided an opportunity to examine the content-specificity of temporal processing of affective images.


Fig. 1. Experimental paradigm and data analysis pipeline. (A) Affective picture viewing paradigm. Each recording session lasts seven minutes. 60 IAPS pictures, including 20 pleasant, 20 unpleasant and 20 neutral pictures, were presented in each session in random order. Each picture was presented at the center of the screen for 3 s and followed by a fixation period (2.8 or 4.3 s). Participants were required to fixate on the red cross at the center of the screen throughout the session while simultaneous EEG-fMRI was recorded. (B) Analysis pipeline illustrating the methods used at different stages of the analysis (see text for more details).

Two considerations went into the selection of the 60 pictures as stimuli in this study. First, these pictures are well characterized and have been used in a body of research at the UF Center for the Study of Emotion and Attention as well as in previous work from our laboratories. The categories were not solely designated on the basis of normative ratings of valence and arousal, but also took into account the pictures' ability to engage emotional responses, as assessed by autonomic, EEG, and BOLD measures (Liu et al., 2012; Deweese et al., 2016; Thigpen et al., 2018; Tebbe et al., 2021). Second, we have used the same picture set previously in a number of studies where EEG LPPs and response times were recorded across several samples of participants (see, e.g., Thigpen et al., 2018), enabling us to benchmark the EEG data from inside the scanner against data recorded in an EEG lab outside the scanner, and to consider the impact of these pictures on modulating overt response time behavior when interpreting the results of the present study.

2.2.2. The paradigm

The experimental paradigm is illustrated in Fig. 1A. There were five sessions. Each session contained 60 trials corresponding to the presentation of 60 different pictures. The order of picture presentation was randomized across sessions. Each IAPS picture was presented on an MR-compatible monitor for 3 s, followed by a variable (2800 ms or 4300 ms) interstimulus interval. The subjects viewed the pictures via a reflective mirror placed inside the scanner. They were instructed to maintain fixation on the center of the screen. After the experiment, participants rated the hedonic valence and emotional arousal level of 12 representative pictures (4 pictures for each broad category), which are not part of the 60-picture set, based on the paper and pencil version of the self-assessment manikin (Bradley and Lang, 1994; Bo et al., 2021).

2.3. Data acquisition

2.3.1. EEG data acquisition

EEG data were recorded simultaneously with fMRI using a 32-channel MR-compatible EEG system (Brain Products GmbH). Thirty-one sintered Ag/AgCl electrodes were placed on the scalp according to the 10–20 system with the FCz electrode serving as the reference. An additional electrode was placed on the subject's upper back to monitor the electrocardiogram (ECG); the ECG data were used during data preprocessing to assist in the removal of the cardioballistic artifacts. The EEG signal was recorded with an online 0.1–250 Hz band-pass filter and digitized to 16 bits at a sampling rate of 5 kHz. To ensure the successful removal of the gradient artifacts in subsequent analyses, the EEG recording system was synchronized with the scanner's internal clock throughout recording.

2.3.2. fMRI data acquisition

Functional MRI data were collected on a 3T Philips Achieva scanner (Philips Medical Systems). The recording parameters are as follows: echo time (TE), 30 ms; repetition time (TR), 1.98 s; flip angle, 80°; slice number, 36; field of view, 224 mm; voxel size, 3.5 × 3.5 × 3.5 mm; matrix size, 64 × 64. Slices were acquired in ascending order and oriented parallel to the plane connecting the anterior and posterior commissure. T1-weighted high-resolution structural images were also obtained.

2.4. Data preprocessing

2.4.1. EEG data preprocessing

The EEG data were first preprocessed using BrainVision Analyzer 2.0 (Brain Products GmbH, Germany) to remove gradient and cardioballistic artifacts. To remove gradient artifacts, an artifact template was created by segmenting and averaging the data according to the onset of each volume and subtracted from the raw EEG data (Allen et al., 2000). To remove cardioballistic artifacts, the ECG signal was low-pass-filtered, and the R peaks were detected as heart-beat events (Allen et al., 1998). A delayed average artifact template over 21 consecutive heart-beat events was constructed using a sliding-window approach and subtracted from the original signal. After gradient and cardioballistic artifacts were removed, the EEG data were lowpass filtered with the cutoff set at 50 Hz, downsampled to 250 Hz, re-referenced to the average reference, and exported to EEGLAB (Delorme and Makeig, 2004) for further analysis. The second-order blind identification (SOBI) procedure (Belouchrani et al., 1993) was performed to further correct for eye blinking, residual cardioballistic artifacts, and movement-related artifacts. The artifact-corrected data were then lowpass filtered at 30 Hz and epoched from −300 ms to 2000 ms with 0 ms denoting picture onset. The prestimulus baseline was defined to be −300 ms to 0 ms.
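The gradient and cardioballistic corrections above were performed in BrainVision Analyzer; for readers who wish to reproduce the subsequent filtering, re-referencing, and epoching steps, a minimal sketch using MNE-Python is given below. The file name and the use of annotation-derived events are placeholders, not part of the original pipeline.

```python
import mne

# Placeholder file name; assumes gradient/cardioballistic artifacts already removed
raw = mne.io.read_raw_brainvision("sub01_corrected.vhdr", preload=True)
raw.filter(l_freq=None, h_freq=50.0)   # low-pass at 50 Hz
raw.resample(250)                      # downsample to 250 Hz
raw.set_eeg_reference("average")       # re-reference to the average reference
# (SOBI-style artifact correction would go here; omitted for brevity)
raw.filter(l_freq=None, h_freq=30.0)   # low-pass at 30 Hz after artifact correction

# Epoch from -300 ms to 2000 ms around picture onset, with a -300 to 0 ms baseline
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.3, tmax=2.0,
                    baseline=(-0.3, 0.0), preload=True)
```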

2.4.2. fMRI data preprocessing

The fMRI data were preprocessed using SPM (http://www.fil.ion.ucl.ac.uk/spm/). The first five volumes from each session were discarded to eliminate transient activity. Slice timing was corrected using interpolation to account for differences in slice acquisition time. The images were then corrected for head movements by spatially realigning them to the sixth image of each session, normalized and registered to the Montreal Neurological Institute (MNI) template, and resampled to a spatial resolution of 3 mm by 3 mm by 3 mm. The transformed images were smoothed by a Gaussian filter with a full width at half maximum of 8 mm. The low-frequency temporal drifts were removed from the functional images by applying a high-pass filter with a cutoff frequency of 1/128 Hz.

2.5. MVPA analysis: EEG data

2.5.1. EEG decoding

MVPA analysis was done using a support vector machine (SVM) implemented in the Matlab 2014 LIBSVM toolbox (Chang and Lin, 2011). To reduce noise and increase decoding robustness, 5 consecutive EEG data points (no overlap) were averaged, resulting in a smoothed EEG time series with a temporal resolution of 20 ms (50 Hz). Unpleasant vs neutral scenes and pleasant vs neutral scenes were decoded within each subject at each time point to form a decoding accuracy time series. Each trial of the EEG data (100 trials for each emotion category) was treated as a sample for the classifier. The 31 EEG channels provided 31 features for the SVM classifier. A ten-fold cross-validation approach was applied. The weight vector or weight map from the classifier was transformed according to Haufe et al. (2014) and its absolute value was visualized as a topographical map to assess the importance of each channel in terms of its contribution to the decoding performance between affective and neutral pictures.
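The decoding itself was done with LIBSVM in Matlab; the following is a minimal Python sketch of the same per-time-point scheme, assuming an epoched data array X (trials × 31 channels × samples at 250 Hz) and binary labels y.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def decode_timecourse(X, y, bin_size=5, n_folds=10):
    """Per-time-point SVM decoding. X: trials x 31 channels x samples (250 Hz),
    y: binary labels (e.g., pleasant vs neutral). Returns accuracy per 20-ms bin."""
    n_trials, n_chan, n_samp = X.shape
    n_bins = n_samp // bin_size
    # Average 5 consecutive samples (no overlap): 250 Hz -> 50 Hz (20-ms bins)
    Xb = X[:, :, :n_bins * bin_size].reshape(n_trials, n_chan, n_bins, bin_size).mean(-1)
    acc = np.empty(n_bins)
    for t in range(n_bins):
        # The 31 channel voltages at this time bin are the features
        acc[t] = cross_val_score(SVC(kernel="linear"), Xb[:, :, t], y, cv=n_folds).mean()
    return acc
```

The Haufe et al. (2014) step referred to above converts the classifier weight vector w into an interpretable activation pattern a proportional to the feature covariance times w; that transformation is omitted from the sketch, which only returns accuracies.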

2.5.2. Temporal generalization

The stability of the neural representations evoked by affective scenes was tested using a generalization across time (GAT) method (King and Dehaene, 2014). In this method, the classifier was not only tested on the data from the same time point at which it was trained, it was also tested on data from all other sample points, yielding a two-dimensional temporal generalization matrix. The decoding accuracy at a point (t_x, t_y) on this plane reflects the decoding performance at time t_x of the classifier trained at time t_y.
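A sketch of the GAT computation under the same assumptions as the previous sketch (Xb is the bin-averaged trials × channels × time-bins array); the cross-validation handling here is illustrative, since the paper does not spell out how folds were treated in the generalization analysis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold

def gat_matrix(Xb, y, n_folds=10):
    """Generalization across time: train at time bin ty, test at every bin tx.
    Returns a matrix with rows = training time, columns = testing time."""
    n_bins = Xb.shape[2]
    acc = np.zeros((n_bins, n_bins))
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(Xb[:, :, 0], y):
        for ty in range(n_bins):
            clf = SVC(kernel="linear").fit(Xb[train_idx, :, ty], y[train_idx])
            for tx in range(n_bins):
                acc[ty, tx] += clf.score(Xb[test_idx, :, tx], y[test_idx])
    return acc / n_folds
```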

2.5.3. Statistical significance testing of EEG decoding and temporal generalization

Whether the decoding accuracy was above chance was evaluated by the Wilcoxon signed-rank test. Specifically, the decoding accuracy at each time point was tested against 50% (chance level). The resulting p value was corrected for multiple comparisons by controlling the false discovery rate (FDR, p < 0.05) across the time course. A further requirement to reduce possible false positives is that the significance cluster contains at least five consecutive such sample points.
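A sketch of this significance pipeline, assuming a subjects × time-bins accuracy array; the exact FDR implementation used by the authors is not specified, so statsmodels' Benjamini-Hochberg routine stands in.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def significant_timepoints(acc, chance=0.5, min_run=5):
    """acc: subjects x time bins of decoding accuracy. Returns a boolean mask of
    bins that survive Wilcoxon signed-rank vs chance, FDR (p < 0.05), and the
    requirement of at least min_run consecutive significant bins."""
    pvals = np.array([wilcoxon(acc[:, t] - chance).pvalue for t in range(acc.shape[1])])
    sig = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]
    keep = np.zeros_like(sig)
    run = 0
    for t, s in enumerate(sig):
        run = run + 1 if s else 0
        if run >= min_run:
            keep[t - min_run + 1 : t + 1] = True  # mark the whole qualifying run
    return keep
```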

The decoding accuracy was expected to be at chance level prior to and immediately after picture onset. The time at which decoding accuracy rose above chance level was taken to be the time when the affect-specific neural representations of affective scenes formed. The statistical significance of the difference between the onset times of above-chance decoding for different decoding accuracy time series was evaluated by a bootstrap resampling procedure. Each resample consisted of randomly picking 20 sample decoding accuracy time series from 20 subjects with replacement, and the above-chance decoding onset was determined for this resample. The procedure was repeated 1000 times and the onset times from all the resamples formed a distribution. The significant difference between two such distributions was assessed by the two-sample Kolmogorov-Smirnov test.
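The bootstrap onset procedure might be sketched as follows, reusing the significance mask from the previous sketch.

```python
import numpy as np
from scipy.stats import ks_2samp

def bootstrap_onsets(acc, sig_fn, times, n_boot=1000, seed=0):
    """acc: subjects x time bins; sig_fn: function returning a boolean
    significance mask (e.g., significant_timepoints above); times: bin centers.
    Returns the bootstrap distribution of above-chance decoding onset times."""
    rng = np.random.default_rng(seed)
    n_subj = acc.shape[0]
    onsets = []
    for _ in range(n_boot):
        resample = acc[rng.integers(0, n_subj, n_subj)]  # subjects with replacement
        mask = sig_fn(resample)
        if mask.any():
            onsets.append(times[np.argmax(mask)])        # first significant bin
    return np.asarray(onsets)

# Comparing two onset distributions (e.g., pleasant vs unpleasant):
# stat, p = ks_2samp(onsets_pleasant, onsets_unpleasant)
```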

To test the statistical significance of temporal generalization, we conducted a Wilcoxon signed-rank test at each pixel in the temporal generalization map, testing the decoding accuracy against 50% (chance level). The corresponding p value was corrected for multiple comparisons according to FDR p < 0.05. Cluster size was a further control (>10 points).

2.6. MVPA analysis: fMRI data

The picture-evoked BOLD activation was estimated on a trial-by-trial basis using the beta series method (Mumford et al., 2012). In this method, the trial of interest was represented by one regressor, and all the other trials were represented by another regressor. Six motion regressors were included to account for any movement-related artifacts during scanning. Repeating the process for all the trials, we obtained the BOLD response to each picture presentation in all brain voxels. The single-trial voxel patterns evoked by pleasant, unpleasant, and neutral pictures were decoded between pleasant and neutral as well as between unpleasant and neutral using a ten-fold cross-validation procedure within the retinotopic visual cortex defined according to a recently published probabilistic visual retinotopic atlas (Wang et al., 2015). Here the retinotopic visual cortex consisted of V1v, V1d, V2v, V2d, V3v, V3d, V3a, V3b, hV4, hMT, VO1, VO2, PHC1, PHC2, LO1, LO2, and IPS. For some analyses, the voxels in all these regions were combined to form a single ROI called visual cortex, whereas for other analyses, these regions were divided into early, ventral, and dorsal visual cortex (see below).
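A schematic of the beta series estimation under simplified assumptions: make_regressor is a hypothetical helper that returns an HRF-convolved stimulus regressor for a list of onsets, and design details such as drift terms are omitted. The resulting single-trial beta maps are what feed the SVM decoder described above.

```python
import numpy as np

def beta_series(bold, onsets, make_regressor, nuisance):
    """Beta series estimation in the spirit of Mumford et al. (2012).
    bold: time points x voxels; onsets: trial onset times; nuisance: time
    points x regressors (e.g., the 6 motion parameters)."""
    betas = []
    for i in range(len(onsets)):
        x_trial = make_regressor([onsets[i]])          # trial of interest
        x_rest = make_regressor(np.delete(onsets, i))  # all other trials, one regressor
        X = np.column_stack([x_trial, x_rest, nuisance, np.ones(bold.shape[0])])
        b, *_ = np.linalg.lstsq(X, bold, rcond=None)   # ordinary least squares fit
        betas.append(b[0])                             # single-trial beta map
    return np.stack(betas)                             # trials x voxels
```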

2.7. Fusing EEG and fMRI data via RSA

Decoding between affective scenes vs neutral scenes, as described above, yields information on the formation and dynamics of affect-specific neural representations. For comparison purposes, we also obtained the onset time of perceptual or sensory processing of affective images in visual cortex, which is expected to precede the formation of affect-specific representations, by fusing EEG and fMRI data via representational similarity analysis (RSA) (Kriegeskorte et al., 2008). RSA is a multivariate method that assesses the representational similarity (e.g., using cross correlation) evoked by a set of stimuli and expresses the result as a representational dissimilarity matrix (RDM; 1 − cross-correlation matrix). Correlating the fMRI-based RDMs from different ROIs and the EEG-based RDMs from different time points, one can obtain the spatiotemporal profile of information processing in the brain.

In the current study, for each trial, the 31 channels of EEG data at a given time point provided a 31-dimensional feature vector, which was correlated with the 31-dimensional feature vector from another trial at the same time point. For all 300 trials (60 trials per session × 5 sessions) a 300 × 300 representational dissimilarity matrix (RDM) was constructed at each time point. For fMRI data, following the previous work (Bo et al., 2021), we divided the visual cortex into three ROIs: early (V1v, V1d, V2v, V2d, V3v, V3d), ventral (VO1, VO2, PHC1, PHC2), and dorsal (IPS0-5) visual cortex. For each ROI, the fMRI feature vector was extracted from each trial and correlated with the fMRI feature vector from another trial, yielding a 300 × 300 RDM for the ROI. To fuse EEG and fMRI, a correlation between the EEG-based RDM at each time point and the fMRI-based RDM from a ROI was computed, and the result was the representational similarity time course for the ROI. This procedure was carried out at the single-subject level first and then averaged across subjects.
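A sketch of the single-trial RDM construction and EEG-fMRI fusion; the paper specifies 1 − cross-correlation RDMs but not the coefficient used to compare RDMs, so Spearman correlation over the lower triangles is assumed here.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the feature vectors of every pair of trials."""
    return 1.0 - np.corrcoef(features)

def fusion_timecourse(eeg, fmri_roi):
    """eeg: trials x 31 channels x time bins; fmri_roi: trials x voxels.
    Correlates the time-resolved EEG RDM with one ROI's fMRI RDM, giving
    a representational similarity time course for that ROI."""
    n_trials, _, n_bins = eeg.shape
    tril = np.tril_indices(n_trials, k=-1)   # unique pairs only
    fmri_vec = rdm(fmri_roi)[tril]
    return np.array([
        spearmanr(rdm(eeg[:, :, t])[tril], fmri_vec)[0]
        for t in range(n_bins)
    ])
```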

We note that in our study, since EEG and fMRI were simultaneously recorded, there is trial-to-trial correspondence between EEG and fMRI, which makes single-trial RSA analysis possible. Single-trial-level RDMs, by containing more variability, may enhance the sensitivity of the RSA fusion analysis. In most previous RSA studies fusing MEG/EEG and fMRI (e.g., Cichy et al., 2014; Muukkonen et al., 2020), the single-trial-based RSA analysis is not possible, because MEG/EEG and fMRI were recorded separately and there was no trial-to-trial correspondence between the two types of recordings. In those situations, the only available option was to average trials from the same exemplar or experimental condition and construct RDM matrices whose dimension equals the number of exemplars or experimental conditions.

To assess the onset time of significant similarity between the EEG RDM and the fMRI RDM, we first computed the mean and standard deviation of the similarity measure during the baseline period (−300 ms to 0 ms). Along the representational similarity time course, similarity measures that are five standard deviations above the baseline mean were considered statistically significant (p < 0.003). To further control for multiple comparisons, clusters containing fewer than five consecutive such time points were discarded. For a given ROI, the first time point that meets the above significance criteria was considered the onset time for perceptual or sensory processing for that ROI. To statistically compare the onset times from different ROIs, we conducted a bootstrap resampling procedure. Each resample consisted of randomly picking 20 sample RDM similarity time series from the 20 subjects with replacement, and the onset time was determined for the resample. The procedure was repeated 1000 times and the onset times from all the resamples formed a distribution. The significant difference between distributions was then assessed by the two-sample Kolmogorov-Smirnov test.
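The onset criterion might be implemented as follows; times is the vector of time-bin centers in seconds, and the baseline is taken as all points before picture onset.

```python
import numpy as np

def onset_time(sim, times, n_sd=5.0, min_run=5, t0=0.0):
    """First time at which the similarity time course exceeds the baseline
    mean by n_sd baseline standard deviations for at least min_run
    consecutive points. Baseline = all time points before t0."""
    base = sim[times < t0]
    thresh = base.mean() + n_sd * base.std()
    run = 0
    for t, val in enumerate(sim):
        run = run + 1 if val > thresh else 0
        if run >= min_run:
            return times[t - min_run + 1]   # start of the qualifying cluster
    return None                             # no significant onset found
```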

3. Results

3.1. Affect-specific neural representations: formation onset time

We decoded multivariate EEG patterns evoked by pleasant, unpleasant, and neutral affective scenes and obtained the decoding accuracy time courses for pleasant-vs-neutral and unpleasant-vs-neutral. As shown in Fig. 2A, for pleasant vs neutral, above-chance-level decoding began ∼200 ms after stimulus onset, whereas for unpleasant vs neutral, the onset time of above-chance decoding was ∼260 ms. Using a bootstrap procedure, the distributions of the onset times were obtained and shown in Fig. 2B, where the difference between the two distributions was evident, with pleasant-specific representations forming significantly earlier than unpleasant-specific representations (ks value = 0.87, effect size = 1.49, two-sample Kolmogorov-Smirnov test). To examine the contribution of different electrodes to the decoding performance, Fig. 2C shows the classifier weight maps at the indicated times. These weight maps suggested that the neural activities that contributed to classifier performance were mainly located in occipital-temporal channels, in agreement with prior fMRI studies where enhanced and/or decodable BOLD activities evoked by affective scenes were observed in visual cortex and temporal structures (Sabatinelli et al., 2006; Sabatinelli et al., 2013; Bo et al., 2021).

Given that above-chance decoding started ∼200 ms post picture onset, it is unlikely that the decoding results were driven by low-level visual features, which would have entailed an earlier above-chance decoding time (e.g., ∼100 ms). To firm up this notion, we further tested if there are systematic low-level visual feature differences across emotion categories. Low-level visual features were extracted by GIST using a method from a previous publication (Khosla et al., 2012). We hypothesized that if GIST features depend on category labels, we should be able to decode between different categories based on these features. A SVM classification analysis was applied to image-based GIST features, and the decoding accuracy was at chance level: pleasant vs neutral was 49% (p = 0.9, random permutation test) and unpleasant vs neutral was 52.5% (p = 0.8, random permutation test). These results suggest that the decoding results in Fig. 2 are not likely to be driven by low-level visual features.
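A sketch of this control analysis; the GIST descriptors themselves are assumed to have been extracted already (the extraction step is not shown).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def gist_control_decoding(gist, y, n_perm=1000, seed=0):
    """Decode category labels from image GIST descriptors (gist: images x
    features) and compare the observed cross-validated accuracy against a
    random label-permutation null distribution."""
    rng = np.random.default_rng(seed)
    observed = cross_val_score(SVC(kernel="linear"), gist, y, cv=10).mean()
    null = np.array([
        cross_val_score(SVC(kernel="linear"), gist, rng.permutation(y), cv=10).mean()
        for _ in range(n_perm)
    ])
    return observed, (null >= observed).mean()   # accuracy and permutation p value
```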

Dividing the scenes into 6 subcategories (erotic couple, happy people, mutilation body/disgust, attack, nature scene/adventure, and neutral people), we further decoded multivariate EEG patterns evoked by these subcategories of images. Against neutral people, the onset times of above-chance decoding for erotic couple, attack, and mutilation body/disgust were ∼180 ms, ∼280 ms, and ∼300 ms, respectively, with happy people not significantly decoded from neutral people. The onset times were significantly different between erotic couple and attack with erotic couple being earlier (ks value = 0.81, effect size = 2.1), and between erotic couple and mutilation body/disgust with erotic couple being earlier (ks value = 0.92, effect size = 2.3). The onset times between attack and mutilated body/disgust were only weakly different with attack being earlier (ks value = 0.35, effect size = 0.34). Against natural scenes, the onset times of above-chance-level decoding for erotic couple, attack, and mutilation body/disgust were ∼240 ms, ∼300 ms, and ∼300 ms, respectively, with happy people not significantly decoded from natural scenes. The onset times were significantly different between erotic couple and attack with erotic couple being earlier (ks value = 0.7, effect size = 1.3) and between erotic couple and mutilation body/disgust with erotic couple being earlier (ks value = 0.87, effect size = 1.33); the onset timings were not significantly different between attack and mutilation body/disgust (ks value = 0.25, effect size = 0.25). Combining these data, for subcategories of affective scenes, the formation time of affect-specific neural representations appears to follow the temporal sequence: erotic couple → attack → mutilation body/disgust.

Affective pictures are characterized along two dimensions: valence and arousal. We tested to what extent these factors influenced the decoding results. Erotic (arousal: 6.30, valence: 6.87) and disgust/mutilation (arousal: 6.00, valence: 2.18) pictures have similar arousal (p = 0.76) but significantly different valence (p < 0.001). As shown in Fig. 3A, the decoding accuracy between these two subcategories rose above chance level ∼200 ms after picture onset, suggesting that the patterns evoked by affective scenes to a large extent reflect valence. In contrast, natural scenes/adventure (arousal: 5.4, valence: 7.0) and neutral people (arousal: 3.5, valence: 5.5) have significantly different arousal ratings (p = 0.05), but the two subcategories cannot be decoded, as shown in Fig. 3B, suggesting that arousal is not a very strong factor driving decodability.

3.2. Affect-specific neural representations: temporal stability

How do affect-specific neural representations, once formed, evolve over time? A serial processing model, in which neural processing progresses from one brain region to the next, would predict that the representations will evolve dynamically, resulting in a temporal generalization matrix as schematically shown in Fig. 4A, Left. In contrast, a recurrent processing model, in which the representations are undergirded by the recurrent interactions among different brain regions, would predict sustained neural representations, resulting in a temporal generalization matrix as schematically shown in Fig. 4A, Right. We applied a temporal generalization method called generalization across time (GAT) to test these possibilities. A classifier was trained on data recorded at time t_y and tested on data at time t_x. The decoding accuracy is then displayed as a color-coded two-dimensional function (called the temporal generalization matrix) on the plane spanned by t_x and t_y. As can be seen in Fig. 4B, a stable neural representation emerged ∼200 ms after picture onset and remained stable as late as 2000 ms post stimulus onset, with the peak decoding accuracy occurring within the time interval 300–800 ms.


Fig. 2. Decoding EEG data between affective and neutral scenes across time. (A) Decoding accuracy time courses. (B) Bootstrap distributions of above-chance decoding onset times. Subjects were randomly selected with replacement and onset time was computed for each bootstrap resample (a total of 1000 resamples were considered). (C) Weight maps showing the contribution of different channels to decoding performance at different times.

Although the decoding accuracy decreased after the peak time, it remained significantly above chance, as shown by the large area within the black contour. These results demonstrate that the affect-specific neural representations of affective scenes, whether pleasant or unpleasant, are stable and sustained over extended periods of time, suggesting that affective scene processing could be supported by recurrent interactions in the engaged neural circuits. Repeating the same temporal generalization analysis for emotional subcategories, as shown in Fig. 5, we observed similar stable neural representations for each emotion subcategory.

3.3. Visual cortical contributions to sustained affective representations

Weight maps in Fig. 2 suggest that occipital and temporal structures are the main neural substrate underlying affect-specific neural representations, which is in line with previous studies showing patterns of visual cortex activity encoding rich, category-specific emotion representations (Kragel et al., 2019; Bo et al., 2021). Whether these structures participate in the recurrent interactions that give rise to sustained neural representations of affective scenes was the question we considered next. Previous work, based on temporal generalization, has shown that cognitive operations such as attention, working memory, and decision-making are characterized by sustained neural representations, in which sensory cortex is an essential node in the recurrent network (Büchel and Friston, 1997; Gazzaley et al., 2004; Wimmer et al., 2015). We tested whether the same holds true in affective scene processing. It is reasonable to expect that if this is indeed the case, then the more stable and sustained the neural interactions (measured by the EEG temporal generalization), the more distinct the neural representations in visual cortex (measured by the fMRI decoding accuracy in visual cortex). Fig. 6A shows above-chance fMRI decoding accuracy for pleasant vs neutral (p < 0.001) and unpleasant vs neutral (p < 0.001) in visual cortex. We quantified the strength of the temporal generalization matrix by averaging the decoding accuracy inside the black contour (see Fig. 4B) and correlated this strength with the fMRI decoding accuracy in visual cortex. As shown in Fig. 6B, for unpleasant vs neutral decoding, there was a significant correlation between fMRI decoding accuracy in visual cortex and the strength of temporal generalization (R = 0.66, p = 0.0008), whereas for pleasant vs neutral decoding, the correlation was not as strong but still marginally significant (R = 0.32, p = 0.07).


Fig. 3. Further decoding analysis testing the influence of valence vs arousal. (A) EEG decoding between Erotic (normative valence: 6.87, arousal: 6.30) vs Disgust/Mutilation pictures (normative valence: 2.18, arousal: 6.00). Red horizontal bar indicates the period of above-chance decoding (FDR p < 0.05). (B) EEG decoding between Neutral people (normative valence: 5.5, arousal: 3.5) vs Natural scenes/adventure (normative valence: 7.0, arousal: 5.4). Above-chance-level decoding was not found.

Dividing subjects into high and low decoding accuracy groups based on their fMRI decoding accuracies in the visual cortex, the corresponding temporal generalization for each group is shown in Fig. 6C, where it is again intuitively clear that temporal generalization is stronger in subjects with higher decoding accuracy in the visual cortex. Statistically, the strength of temporal generalization for unpleasant vs neutral was significantly larger in the high decoding accuracy group (p = 0.01) than in the low accuracy group; the same was also observed for pleasant vs neutral but the statistical effect was again weaker (p = 0.065). We note that the method used here to quantify the strength of temporal generalization may be influenced by the level of decoding accuracy. In the Supplementary Materials we explored a different method of quantifying the strength of temporal generalization and obtained similar results (Fig. S5).

3.4. Onset time of perceptual processing of affective scenes

Past work has found that perceptual processing of simple visual objects begins ∼100 ms after image onset in visual cortex (Cichy et al., 2016). This time is earlier than the onset time of affect-specific neural representations (∼200 ms). Since the present study used complex visual scenes rather than simple visual objects as stimuli, it would be helpful to obtain information on the onset time of perceptual processing of these complex images, providing a reference for comparison. We fused simultaneous EEG-fMRI data using representational similarity analysis (RSA) (Cichy et al., 2016; Cichy and Teng, 2017) and computed the time at which visual processing of IAPS images began in visual cortex. Visual cortex was subdivided into early, ventral, and dorsal parts (see Methods). Their anatomical locations are shown in Fig. 7A. We found that shared variance between EEG recorded on the scalp and fMRI recorded from early visual cortex (EVC), ventral visual cortex (VVC), and dorsal visual cortex (DVC) began to exceed the statistical significance level at ∼80 ms, ∼100 ms, and ∼360 ms post picture onset, respectively, and remained significant until ∼1800 ms; see Fig. 7B. These onset times are significantly different from one another according to the KS test applied to bootstrap-generated onset time distributions: EVC < VVC (ks value = 0.21, effect size = 0.37), VVC < DVC (ks value = 0.75, effect size = 1.38), and EVC < DVC (ks value = 0.79, effect size = 1.79); see Fig. 7C.

An additional analysis was conducted to test the influence of low-level visual features on the RSA results (Groen et al., 2018; Grootswagers et al., 2020). Specifically, we computed the partial correlation between the EEG RDM and the fMRI RDM while controlling for the effect of a low-level feature RDM. Low-level features were extracted by GIST using a method from a previous publication (Khosla et al., 2012). A 300 × 300 GIST RDM was constructed in a similar way as the EEG and fMRI RDMs. If GIST is an important factor driving the similarity between the EEG RDM and the fMRI RDM, it will have a significant contribution to the EEG RDM-fMRI RDM correlation, and controlling for this contribution would reduce the EEG RDM-fMRI RDM correlation. As can be seen, the results in Fig. 7D,E, where the partial correlation results are shown, are almost the same as Fig. 7B,C, suggesting that low-level features are not an important factor driving the RSA result.
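The partial correlation could be computed on the lower-triangle RDM vectors as follows; the exact estimator is not stated in the paper, so a rank-based partial correlation is assumed.

```python
import numpy as np
from scipy.stats import pearsonr, rankdata

def partial_rank_correlation(x, y, z):
    """Rank-based partial correlation between x and y controlling for z:
    Pearson correlation of the residuals of rank(x) and rank(y) after
    regressing out rank(z). Here x, y, z would be the lower-triangle
    vectors of the EEG, fMRI, and GIST RDMs, respectively."""
    rx, ry, rz = rankdata(x), rankdata(y), rankdata(z)
    Z = np.column_stack([rz, np.ones_like(rz)])
    res_x = rx - Z @ np.linalg.lstsq(Z, rx, rcond=None)[0]
    res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
    return pearsonr(res_x, res_y)[0]
```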

Furthermore, we sought to examine if affect features are a factor driving the RSA result. A 300 × 300 emotion-category RDM was constructed. Specifically, if two trials belong to the same emotion category, the corresponding element in the RDM is coded as '0'; otherwise it is coded as '1'. Fig. 7F shows that this categorical RDM becomes correlated with the EEG RDM ∼240 ms post picture onset, which agrees with the onset time of affect-specific representations from EEG decoding, suggesting that the EEG patterns beyond ∼240 ms manifested the emotional content of affective scenes.
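Constructing the categorical model RDM is straightforward; the label vector below is a placeholder for the 300 trial-wise category codes.

```python
import numpy as np

# Placeholder: 300 trial-wise category codes (e.g., pleasant/neutral/unpleasant)
labels = np.repeat([0, 1, 2], 100)
# 0 where two trials share an emotion category, 1 otherwise
category_rdm = (labels[:, None] != labels[None, :]).astype(float)
```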

4. Discussion

We investigated the temporal dynamics of affective scene processing and reported four main observations. First, EEG patterns evoked by both pleasant and unpleasant scenes were distinct from those evoked by neutral scenes, with above-chance decoding occurring ∼200 ms post image onset. The formation of pleasant-specific neural representations led that of unpleasant-specific neural representations by about 60 ms (∼200 ms vs ∼260 ms); the peak decoding accuracies were about the same (59% vs 58%). Second, dividing affective scenes into six subcategories, the onset of above-chance decoding between affective and neutral scenes followed the sequence: erotic couple (∼210 ms) → attack (∼290 ms) → mutilation body/disgust (∼300 ms), suggesting that the speed at which neural representations form depends on specific picture content. Third, for both pleasant and unpleasant scenes, the neural representations were sustained rather than transient, and the stability of the representations was associated with the fMRI decoding accuracy in the visual cortex,


Fig. 4. Temporal generalization analysis. A classifier trained at each time point was tested on all other time points in the time series. The decoding accuracy at a point on this plane reflects the performance at time t_x of the classifier trained at time t_y. (A) Schematic temporal generalizations of dynamic or transient (Left) vs sustained or stable (Right) neural representations. (B) Temporal generalization for decoding between pleasant vs neutral (Left) and between unpleasant vs neutral (Right). A Wilcoxon signed-rank test was applied at each pixel in the temporal generalization map to test the significance of decoding accuracy against 50% (chance level). The corresponding p value was corrected for multiple comparisons according to FDR p < 0.05. Cluster size was further controlled (>10 points). Black contours enclose pixels with above-chance decoding accuracy.

suggesting, albeit indirectly, a role of visual cortex in the recurrent neural network that supports the affective representations. Fourth, applying RSA to fuse EEG and fMRI, perceptual processing of complex visual scenes was found to start in early visual cortex ∼80 ms post image onset, proceeding to ventral visual cortex at ∼100 ms.

4.1. Formation of affect-specific neural representations

The question of how long it takes for affect-specific neural representations to form has been considered in the past. An intracranial electroencephalography study reported enhancement of gamma oscillations for emotional pictures compared to neutral pictures in the occipital-temporal lobe in the time period of 200–1000 ms (Boucher et al., 2015). In our data, the ∼200 ms onset of above-chance decoding and ∼500 ms occurrence of peak decoding accuracy, with the main contribution to decoding performance coming from occipital and temporal electrodes, are consistent with the previous report. Compared to nonaffective images such as faces, houses and scenes, where decodable differences in neural representations in visual cortex started to emerge ∼100 ms post stimulus onset with peak decoding accuracy occurring at ∼150 ms (Cichy et al., 2016; Cauchoix et al., 2014), the formation times of these affect-specific representations appear to be quite late. From a theoretical point of view, this delay may be explained by the reentry hypothesis, which holds that anterior emotion regions such as the amygdala and the prefrontal cortex, upon receiving sensory input, send feedback signals to visual cortex to enhance sensory processing and facilitate motivated attention (Lang and Bradley, 2010). In a recent fMRI study (Bo et al., 2021), we found that scenes expressing different affect can be decoded from multivoxel patterns in the retinotopic visual cortex and the decoding accuracy is correlated with the effective connectivity from anterior regions to visual cortex, in agreement with the hypothesis. What has not been established is how long it takes for the reentry signals to reach visual cortex. To provide a reference time for addressing this question, we fused EEG and fMRI data via RSA and found that sensory processing of complex visual scenes such as those contained in IAPS pictures began ∼100 ms post picture onset. This gave us an estimate of the reentry time which is on the order of ∼100 ms or shorter. We caution that these estimates are somewhat speculative as our inferences are made rather indirectly.

Univariate ERP analysis, presented in the Supplementary Materials, was also carried out to provide additional insights. Four groups of electrodes centered on Oz, Cz, Pz, and Fz were chosen as ROIs. ERPs evoked by affective pictures and neutral pictures were contrasted at each ROI.


Fig. 5. Temporal generalization analysis for subcategories of affective scenes. (A) Decoding emotion subcategories against neutral people. (B) Decoding emotion subcategories against natural scenes. See Fig. 4 for explanation of notations.

At Cz, the difference ERP waves between pleasant vs neutral showed clear activation starting at ∼172 ms, whereas for unpleasant vs neutral, the activation started at ∼200 ms, both in general agreement with the timing information obtained from MVPA analysis.

The foregoing indicates that pleasant scenes evoked earlier affect-specific representations than unpleasant scenes. This positivity bias appears to be at variance with the negativity bias idea, which holds that negative events elicit more rapid and stronger responses compared to pleasant events (Rozin and Royzman, 2001; Vaish et al., 2008). While the idea has received support in behavioral data, e.g., subjects tend to locate unpleasant faces among pleasant distractors in a shorter time than the reverse (Öhman et al., 2001), the neurophysiological support is mixed. Some studies using affective picture viewing paradigms reported shorter ERP latency and larger ERP amplitude for unpleasant pictures compared to pleasant ones in the central P2 and late positive potential (LPP) (Carretié et al., 2001; Huang and Luo, 2006), but other ERP studies found that positive scene processing can be as strong and as fast as negative scene processing when examining the early posterior negativity (EPN) in occipital channels (Schupp et al., 2006; Franken et al., 2008; Weinberg and Hajcak, 2010). One possible explanation for the discrepancy might be the choice of stimuli. The inclusion of exciting and sports images, which have high valence but average arousal, as stimuli in the pleasant category weakens the pleasant ERP effects when compared against threatening scenes included in the unpleasant category, which have both low valence and high arousal (Weinberg and Hajcak, 2010). In the present work, by including images such as erotica and affiliative happy scenes in the pleasant category, which have comparable arousal ratings as images included in the unpleasant category, we were able to mitigate the possible issues associated with stimulus selection. Other explanations need to be sought.

Subdividing the images into 6 subcategories (erotic couples, happy people, mutilation body/disgust, attack scene, nature scene, and neutral people) and decoding the emotion subcategories against the neutral subcategories, we found the following temporal sequence of formation of neural representations: erotic couple (pleasant) → attack (unpleasant) → mutilation body/disgust (unpleasant), with happy people failing to be decoded from neutral images. This finding can be seen as providing neural support to previous electrodermal findings showing that erotic scenes evoked the largest responses within IAPS pictures, followed by mutilation and threat scenes (Sarlo et al., 2005), suggesting that the temporal dynamics of emotion processing depend on specific scene content. It also supports a behavioral study that found a fast discrimination of erotic pictures compared to other categories, assessed using choice and simple response time experiments, using the same pictures as used here (Thigpen et al., 2018). In a neural study of nude body processing (Alho et al., 2015), the authors reported an early 100–200 ms nude-body-sensitive response in primary visual cortex, which was maintained in a later period (200–300 ms). Their consistent occipitotemporal activation is comparable with our weight map analysis, which implicates the occipitotemporal cortex as the main neural substrate sustaining the affective representations.

The faster discrimination between erotic scenes vs neutral people compared to erotic scenes vs natural scenes is worth discussing. One possibility is that the neutral people category has lower arousal ratings (3.458) compared to natural scenes (5.42) and arousal influences decodability. In addition, comparing discrimination performance and ERPs for pictures with no people versus pictures with people, Ihssen and Keil (2013) found no evidence that affective subcategories with people were better discriminated against subcategories with objects than subcategories with people. Instead, a face/portrait category was most rapidly discriminated when using a go/no-go format for responding. Despite the similarities, the exact mechanisms underlying our decoding findings remain to be better understood.


Fig. 6. Visual cortical contribution to stable representations of affect. (A) fMRI decoding accuracy in visual cortex. The p < 0.05 threshold is indicated by the dashed line. (B) Correlation between strength of EEG temporal generalization and fMRI decoding accuracy in visual cortex. (C) Subjects were divided into two groups according to their fMRI decoding accuracy in visual cortex. Temporal generalization for unpleasant vs neutral (Upper) and pleasant vs neutral (Lower) is shown for each group (high accuracy group on the Left vs low accuracy group on the Right). Black contours outline the statistically significant pixels (p < 0.05, FDR).


Fig. 7. Representational similarity analysis (RSA). (A) Regions of interest (ROIs): early visual cortex (EVC), ventral visual cortex (VVC), and dorsal visual cortex (DVC). (B) Similarity between EEG RDM and fMRI RDM across time for the three ROIs. Similarity values larger than five baseline standard deviations for more than 5 consecutive time points are marked as statistically significant. (C) Onset time of significant similarity for each ROI in B. Small effect size. Large effect size. (D) Partial correlation between EEG RDM and fMRI RDM with GIST RDM set as a control variable. (E) Onset time of significant similarity for each ROI in D. (F) Time course of similarity between EEG RDM and emotion category RDM.


4.2. Temporal evolution of neural representations of affective scenes

Once the affect-specific neural representations form, how do these representations evolve over time? If emotion processing is sequential, namely, if it progresses from one brain region to the next as time passes, we would expect dynamically evolving neural patterns. On the other hand, if the emotional state is stable over time, undergirded by recurrent processing in distributed brain networks, we would expect a sustained neural pattern. A technique for testing these possibilities is the temporal generalization method (King and Dehaene, 2014). In this method, a classifier trained on data at one time is applied to decode data from all other times, resulting in a 2D plot of decoding accuracy called the temporal generalization matrix. Past studies decoding between non-emotional images such as neutral faces vs objects have found a transient temporal generalization pattern (Carlson et al., 2013; Cichy et al., 2014; Kaiser et al., 2016), supporting a sequential processing model for object recognition (Carlson et al., 2013). The temporal generalization results from our data revealed that the neural representations of affective scenes are stable over a wide time window (∼200 ms to 2000 ms). Such stable representations may be maintained by sustained motivational attention, triggered by affective content (Schupp et al., 2004; Hajcak et al., 2009), which could in turn be supported by recurrent interactions between sensory cortex and anterior emotion structures (Keil et al., 2009; Sabatinelli et al., 2009; Lang and Bradley, 2010). In addition, the time window in which sustained representations were found is broadly consistent with previous ERP studies where the elevated LPP lasted multiple seconds, extending even beyond the offset of the stimuli (Foti and Hajcak, 2008; Hajcak et al., 2009).

4.3. Role of visual cortex in sustained neural representations of affective scenes

The visual cortex, in addition to its role in processing perceptual information, is also expected to play an active role in sustaining affective representations, because the purpose of sustained motivational attention is to enhance vigilance towards threats or opportunities in the visual environment (Lang and Bradley, 2010). The sensory cortex's role in sustained neural computations has been shown in other cognitive paradigms, including decision-making (Mostert et al., 2015), where stable neural representations are shown to be supported by the reciprocal interactions between prefrontal decision structures and sensory cortex. In face perception and imagery, neural representations are also found to be stable and sustained by communications between high- and low-order visual cortices (Dijkstra et al., 2018). In our data, two lines of evidence appear to support a sustained role of visual cortex in emotion representation. First, over an extended time period, the weight maps obtained from EEG classifiers were comprised of channels located mainly in occipital-temporal areas. Second, if the emotion-specific neural representations in the visual cortex stem from recurrent processing within distributed networks, then the stronger and longer these interactions, the stronger and more distinct the affective representations in visual cortex. This is supported by the finding that the strength of temporal generalization is correlated with the fMRI decoding accuracy in visual cortex.

4.4. Temporal dynamics of sensory processing in the visual pathway

The temporal dynamics of sensory processing of complex visual scenes can be revealed by fusing EEG-fMRI using RSA. The results showed that visual processing of IAPS images started ∼80 ms post picture onset in early visual cortex (EVC) and proceeded to ventral visual cortex (VVC) at ∼100 ms. It is instructive to compare this timing information with a previous ERP study where it was found that during the recognition of natural scenes, the low-level features are best explained by the ERP component occurring ∼90 ms post picture onset while high-level features are best represented by the ERP component occurring ∼170 ms after picture onset (Greene and Hansen, 2020). Compared with the ∼100 ms start time of perceptual processing in visual cortex, the ∼200 ms formation onset of affect-specific neural representations likely includes the time it took for the reentry signals to travel from emotion processing structures such as the amygdala or the prefrontal cortex to the visual cortex (see below), which then give rise to the affect-specific representations seen in the occipital-temporal channels. The dorsal visual cortex (DVC), a brain region important for action and movement preparation (Wandell and Winawer, 2011), is activated at ∼360 ms, which is relatively late and may reflect the processing of action predispositions resulting from affective perceptions. This sequence of temporal activity is consistent with that established previously using the fast-fMRI method where early visual cortex activation preceded ventral visual cortex activation, which preceded dorsal visual cortex activation (Sabatinelli et al., 2014).

It is worth noting that the RSA similarity time courses in all three visual ROIs stayed highly activated for a relatively long time period, which may be taken as further evidence, along with the temporal generalization analysis, to support sustained neural representations of affective scenes. From a methodological point of view, the RSA differs from the decoding analysis in that the decoding analysis captures affect-specific distinctions between neural representations, whereas the RSA fusing of EEG-fMRI is sensitive to evoked pattern similarity shared by the EEG and fMRI imaging modalities, with early effects likely driven by sensory perceptual processing and late effects by both sensory and affective processing.

4.5. Beyond the visual cortex

The visual cortex is not the only brain region activated by affective scenes. In the Supplementary Materials, we performed a whole-brain decoding analysis of fMRI data (Fig. S1), and found above-chance decoding in many areas in prefrontal, limbic, as well as occipital-temporal cortices. Interestingly, the strongest decoding was found in the occipital-temporal areas, lending support to our focus on the visual cortex. Shedding light on the timing of these activations, a previous EEG source localization study reported that affect-related activation began to appear in visual cortex, prefrontal cortex and limbic systems ∼200 ms after stimulus onset (Costa et al., 2014), complementing our fMRI analysis and the fMRI analysis by others (Saarimäki et al., 2016). Fusing EEG and fMRI with RSA, we further tested the temporal dynamics in several emotion-modulating structures, including the amygdala, dACC, anterior insula, and fusiform cortex. As shown in Fig. S3, visual input reached the amygdala ∼100 ms post picture onset, which is comparable with the activation time of early visual cortex. A similar activation time has been reported in a previous intracranial electrophysiological study (Méndez-Bértolo et al., 2016). Early activation was also found in dACC. Despite these early arrivals of visual input, it takes longer for affect-specific signals to arise, however. Recording from single neurons, Wang et al. (2014) showed that it takes ∼250 ms for the emotional judgement signal of faces to emerge in the amygdala. It is intriguing to note that the ∼100 ms difference between the arrival of visual input and the emergence of affect-specific activity is similar to our suggested reentry time of ∼100 ms.

4.6. Limitations

This study is not without limitations. First, the suggestion that sustained affective representations are supported by recurrent neural interactions is speculative and based on indirect evidence, as we have already acknowledged above. Second, we used cross correlation to construct RDMs. A previous study has shown that a decoding-based analysis leads to more reliable RDMs (Guggenmos et al., 2018). Unfortunately, this method is not applicable to our data, because we do not have enough repetitions of each picture (five times) to permit a reliable decoding accuracy for every pair of pictures. Third, the inclusion of adventure scenes, which contain humans (small in size relative to the overall image), while providing a more relevant, interesting group of
