
Miyoung Chung


Academic year: 2023




The difference between the groups was also tested to find out whether musical ability is associated with the performance of the system. This study demonstrates the feasibility of closed-loop neurofeedback training via an EEG BCI and is expected to serve as a foundational study for the initial development of a closed-loop Music-Imagery-based BCI training system.

BACKGROUND

Possibility of Rehabilitative BCI for Musical Ability via Decoding Pitch

The importance of pitch in musical ability has been largely revealed by studies of amusia (i.e., tone deafness). Understanding the mental representation of pitch is essential to decode individual pitch information directly from brain activity, enabling an efficient strategy to find relevant brain activity features.

Possible Effect of Neurofeedback for Music BCI and Its Design

Given the mutual auditory-motor interaction, an audio-visual feedback design can also be expected to benefit a BCI system for musical skill training. Consequently, together with the aspects mentioned above, the auditory-motor interaction and the audio-visual strategy should be taken into account, with the visual stimuli also extending into the visuospatial domain.

OBJECTIVES

Overall Objectives

Objectives by Studies

We focus on the validity of the BrainCoder system (BC) by comparing it with a Random Model (RM) generated from the decoding model and the existing EEG signals. The comparison measures were accuracy and the false direction score (FDS), which represent the performance of the decoding model and of the neurofeedback, respectively. These measures were compared across subjects and across days, and the across-day trends of BC and RM were then compared by both visual inspection and trend slope, as sketched below.
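As an illustration of this comparison, the following minimal Python sketch fits a first-order trend line to hypothetical per-day accuracies of BC and RM and compares the slopes; the data, array names, and five-day layout are placeholders for illustration, not values from the study.

```python
# Minimal sketch (not the thesis code): compare across-day trends of BrainCoder (BC)
# and the Random Model (RM) by fitting a first-order trend line to each series.
import numpy as np

def trend_slope(values_by_day):
    """Fit accuracy ~ day with a first-order polynomial and return the slope."""
    days = np.arange(len(values_by_day))
    slope, _intercept = np.polyfit(days, values_by_day, 1)
    return slope

bc_accuracy = np.array([0.31, 0.34, 0.36, 0.40, 0.42])  # hypothetical BC accuracies over 5 days
rm_accuracy = np.array([0.33, 0.32, 0.34, 0.33, 0.32])  # hypothetical RM accuracies over 5 days

slope_diff = trend_slope(bc_accuracy) - trend_slope(rm_accuracy)
print(f"BC slope - RM slope = {slope_diff:.4f}")  # positive => BC improves faster than chance
```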

METHODS

  • Participants
  • Stimuli
  • Experimental Task
  • EEG acquisition
  • EEG preprocessing
  • Feature Extraction
  • Classification Algorithm

Then, each of the seven pitches was randomly presented 50 times, and subjects covertly counted the number of presentations of the target dot. The visual cue turned one of the seven keys red for 500 ms, followed by an ISI of 500 ms. On each trial, the visual cue appeared on one of the seven keys in random order, and subjects were instructed to imagine the vocal production of the cued pitch by humming.
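The following sketch illustrates how such a randomized cue schedule could be laid out. The pitch labels (C4-B4), variable names, and exact layout are assumptions for illustration; this is not the original stimulus-presentation code.

```python
# Sketch of the randomized cue schedule described above (assumed parameters):
# each of the 7 pitches is presented 50 times in random order, with a 500 ms
# visual cue followed by a 500 ms inter-stimulus interval.
import random

PITCHES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4"]
CUE_S, ISI_S = 0.5, 0.5
REPS = 50

sequence = PITCHES * REPS          # 350 cues in total
random.shuffle(sequence)           # randomize presentation order

# Build (onset_time, pitch) pairs for the presentation software.
schedule = [(i * (CUE_S + ISI_S), pitch) for i, pitch in enumerate(sequence)]
print(schedule[:3])  # e.g. [(0.0, 'E4'), (1.0, 'A4'), (2.0, 'C4')]
```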

Reference and ground electrodes were placed on the left and right mastoids, respectively. From visual inspection of the trial-averaged time courses, we observed that the time courses of the different pitches intersected at some point after stimulus onset and then diverged. The selected peaks were used to determine the time segment for feature extraction, which corresponds to the full width at half maximum (FWHM) of the gap between the negative and positive peaks (see blue lines in Figure 2.2D-E).
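One way to compute such a window is sketched below, under the assumption that the segment is taken where the divergence curve stays above the half-maximum level between its negative and positive peaks; the thesis may define the FWHM criterion slightly differently, and the sample curve is a placeholder.

```python
# Hedged sketch (not the thesis implementation): derive a time window from a
# divergence curve d(t) as the span where d(t) exceeds the half-maximum level
# between its negative and positive peaks (an FWHM-style criterion).
import numpy as np

def fwhm_segment(d, times):
    """Return (t_start, t_end) where d(t) is above half of its negative-to-positive peak gap."""
    d = np.asarray(d, dtype=float)
    neg_peak, pos_peak = d.min(), d.max()
    half_level = neg_peak + 0.5 * (pos_peak - neg_peak)   # half of the peak-to-peak gap
    above = np.where(d >= half_level)[0]
    return times[above[0]], times[above[-1]]

# Hypothetical usage: d(t) sampled every 10 ms over a 0-0.8 s epoch.
times = np.arange(0.0, 0.8, 0.01)
d = np.sin(np.pi * times / 0.8)       # placeholder divergence curve
print(fwhm_segment(d, times))
```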

The divergence of the time courses across pitches, d(t), is depicted for the time courses at F3 (D) and F4 (E), together with their difference (F). The performance of the classifiers was evaluated by test accuracy, the information transfer rate (ITR), and the diagonality of the confusion matrix. Test accuracy was calculated as the ratio of the number of correctly classified trials to the total number of trials in the test set.
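A minimal sketch of these three measures is given below. The ITR uses the standard Wolpaw bits-per-trial formula, and "diagonality" is read here as the mean of the diagonal of the row-normalized confusion matrix; both are assumptions and may differ from the exact definitions in the thesis, and the confusion matrix shown is hypothetical.

```python
# Sketch of the three evaluation measures (assumed formulas, not necessarily the
# thesis's exact definitions). ITR uses the standard Wolpaw bits-per-trial formula.
import numpy as np

def accuracy(conf):
    """Correctly classified trials divided by total trials."""
    return np.trace(conf) / conf.sum()

def itr_bits_per_trial(p, n_classes):
    """Wolpaw ITR: log2(N) + P log2 P + (1 - P) log2((1 - P) / (N - 1))."""
    if p >= 1:
        return np.log2(n_classes)
    if p <= 0:
        return 0.0
    return (np.log2(n_classes) + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n_classes - 1)))

def diagonality(conf):
    """Mean diagonal of the row-normalized confusion matrix (one possible reading)."""
    row_norm = conf / conf.sum(axis=1, keepdims=True)
    return np.trace(row_norm) / conf.shape[0]

conf = np.array([[40, 5, 5], [6, 38, 6], [4, 7, 39]])  # hypothetical 3-class confusion matrix
p = accuracy(conf)
print(p, itr_bits_per_trial(p, conf.shape[0]), diagonality(conf))
```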

Figure 2.1 RI task and EEG channel montage. (A) A pitch imagery task. The pitch imagery task included 50 trials of pitch imagery.

RESULTS

  • Feature Distribution
  • Decoding Individual Pitches
  • Decoding Groups of Pitches
  • Comparison of MT and NT group

The analysis showed no main effect of classifier (p > 0.05) but a significant main effect of feature scheme (p < 0.01). A post-hoc analysis using the Kruskal-Wallis (KW) test revealed that the IC feature yielded better performance in terms of accuracy and ITR than the DC feature (p < 0.05). For K = 2 and K = 5, the test showed a main effect of classifier, and Dunn's test showed that the LSTM yielded the highest accuracy (p < 0.05).

For K = 3, the test showed a main effect of feature scheme, and the KW test revealed higher accuracy with the IC feature (p < 0.05). Dunn test scores for the ITR of the IC (A) and DC (B) features are shown as a function of the number of classes; the corresponding figures compare the ITR across the number of classes for each feature type.
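A sketch of this kind of non-parametric comparison is shown below with hypothetical per-subject accuracies for three classifiers; the LDA and SVM labels are placeholders (only the LSTM is named in the text). It relies on scipy for the Kruskal-Wallis test and the scikit-posthocs package for Dunn's post-hoc test.

```python
# Sketch of the non-parametric comparison described above (hypothetical data):
# Kruskal-Wallis for an overall effect, then Dunn's test for pairwise post-hoc
# comparisons. Requires scipy and scikit-posthocs.
import numpy as np
from scipy.stats import kruskal
import scikit_posthocs as sp

rng = np.random.default_rng(0)
acc_lda  = rng.normal(0.40, 0.05, 20)   # hypothetical per-subject accuracies per classifier
acc_svm  = rng.normal(0.42, 0.05, 20)
acc_lstm = rng.normal(0.47, 0.05, 20)

h, p = kruskal(acc_lda, acc_svm, acc_lstm)
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

if p < 0.05:
    # Dunn's post-hoc test with Bonferroni correction; rows/columns follow input order.
    dunn = sp.posthoc_dunn([acc_lda, acc_svm, acc_lstm], p_adjust="bonferroni")
    print(dunn)
```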

The Dunn test revealed no significant difference in accuracy between the groups for any value of K (p > 0.05). There was also no significant difference in ITR between the groups for any K (p > 0.05). Finally, no significant difference in diagonality between the groups was found for any K (p > 0.05).

Figure 2.4 Feature distribution on the scalp. (A) IC feature; (B) DC feature. The figure shows the distribution of features extracted before selection by the one-way ANOVA test.

DISCUSSION

We further examined the commonality of the selected IC features across subjects by counting the number of subjects for which a feature was selected in each channel and frequency band (Figure 2.11). Note that only the IC feature score was used here to examine the whole-brain distribution of selected features. The features selected by most subjects were distributed over bilateral frontotemporal and parietal areas, especially in the lower frequency bands.
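The counting itself can be sketched as follows, assuming one Boolean selection mask per subject over a channel by frequency-band grid; the dimensions and data here are placeholders, not the study's.

```python
# Sketch (assumed data layout): count, for each channel x frequency-band cell,
# how many subjects had that IC feature selected by the one-way ANOVA screening.
import numpy as np

n_subjects, n_channels, n_bands = 15, 30, 5          # hypothetical dimensions
rng = np.random.default_rng(1)
selected = rng.random((n_subjects, n_channels, n_bands)) < 0.2  # boolean selection masks

commonality = selected.sum(axis=0)    # (n_channels, n_bands): number of subjects per cell
most_common = np.unravel_index(commonality.argmax(), commonality.shape)
print("Most commonly selected (channel index, band index):", most_common)
```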

Furthermore, features in the low-frequency bands were more frequently selected from lateral than from medial areas, suggesting pitch-information processing pathways across lateral areas. Although our experimental design relied on the ability to imagine musical pitch, no difference between the MT and NT groups was found. Perhaps pitch imagery in a melodic context (e.g., in a song) could differentiate the neural features of the MT group from those of the NT group.

There is a small possibility that the image of the piano keyboard and the horizontal movement of the pointer affected the features, depending on the chosen ASR cutoff parameter. By verifying that the seven pitches of the musical scale can be decoded from human EEG, we consider a BCI based on pitch imagery to be feasible. The similar decoding performance of the MT and NT groups suggests that such a system could be used by anyone, regardless of musical ability.

METHODS

  • Participants
  • Stimuli
  • Experimental Task
  • Real-time EEG data acquisition, Preprocessing, and the Feature Extraction
  • Parameter Adjustable Multivariate Linear Regression (MLR) Model
  • Experiment Procedure
  • BrainCoder System Performance Evaluation
    • Accuracy
    • False Direction Score (FDS)
    • Trend Line Fitting
    • Random Model (RM) Generation

The song used here was the first phrase of 'Little Star' (Figure 3.1C), and the task was to 1) perceive the previous bar, 2) imagine the next bar, and 3) listen to the result for the imagined bar produced by BrainCoder. In the SP task, however, the ISI was 0.2 s and the cue for each tone was 0.7 s long, with tone onset cued at the same time as the color indicator appeared. The first task performed with the trained MLR model, BrainCoder, was the SP task, which also served as a feasibility test of the current model.

The steepness of the trend was obtained as the first-order coefficient of the fitted trend line and was also included in the analysis. There were therefore three evaluation measures, which were compared with the random model we generated in order to verify the validity of the system. The false direction score (FDS) was derived to measure the user's effort, that is, whether they controlled the cursor in the correct direction.

The FDS for each run was then the sum of the negative scores, and the total FDS was the average FDS over 63 runs. In this study, we therefore analyze the results qualitatively rather than testing them quantitatively. When the number of trials was missing, the estimate was obtained from the RM by generating values with the random function.
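A hedged sketch of such an FDS computation is given below; the scoring rule (-1 per step against the cued direction), the variable names, and the data layout are assumptions for illustration, not the thesis's exact implementation.

```python
# Hedged sketch of a false direction score (FDS): each trial contributes a negative
# score when the cursor moved away from the target, a run's FDS is the sum of those
# negative scores, and the total FDS is the average over runs.
import numpy as np

def run_fds(cursor_steps, target_directions):
    """Sum of negative scores for one run: -1 for each step against the cued direction."""
    steps = np.sign(cursor_steps)
    targets = np.sign(target_directions)
    wrong = steps * targets < 0                 # moved opposite to the cued direction
    return -np.count_nonzero(wrong)

# Hypothetical data: 3 runs of 10 trials each, +1/-1 cursor steps and target directions.
rng = np.random.default_rng(2)
runs = [(rng.choice([-1, 1], 10), rng.choice([-1, 1], 10)) for _ in range(3)]
total_fds = np.mean([run_fds(c, t) for c, t in runs])
print("Total FDS (mean over runs):", total_fds)
```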

Figure 3.1 Tasks of the experiment. (A) RI task, (B) NFT task, and (C) SP task. Note that τ is the feedback display time.

RESULTS

  • The BrainCoder System Performance
  • Accuracy of BC and Comparison to RM
  • FDS of BC and Comparison to RM
  • SP task Result

The accuracy by day is shown in Figure 3.5 for all subjects, together with the corresponding trend lines. The accuracy for each day of each group and its trend line are plotted in Figure 3.6A. The slope difference between BC and RM, averaged within the MT and NT groups, is depicted in Figure 3.6B.

As shown in the figure, the values for each day and the trend were higher in MT than in NT, and the slope difference indicated that MT was positive while NT was negative, although only one NT subject produced a negative mean. Each subject's daily accuracy and the corresponding trend line are shown below their title. The accuracy of BC by day and the corresponding trend for each group are shown in (A), and the slope difference between BC and RM for each group is shown in (B).

The difference in slope between BC and RM did not show a remarkable pattern, however, and the comparison between MT and NT also failed to reveal clear differences. Each subject's daily FDS and the corresponding trend line are displayed under their title. The FDS of BC by day and the corresponding trend for each group are shown in (A), and the difference between the BC and RM slopes for each group in (B).

Figure 3.6 Accuracy comparison of the MT and NT groups. The accuracy of BC by day and the corresponding trend for each group are depicted in (A), and the slope difference between BC and RM for each group in (B).

DISCUSSION

Although the performance of BrainCoder was valid compared with the random model, it would still need to be improved to develop a practical BCI. To increase the day-by-day improvement, data from previous days could be taken into account when building BrainCoder at the beginning of each NFT session. Despite some limitations in performance and in the system algorithms, we verified the feasibility of a training effect for the MT group with a music-imagery-based BCI and of a neurofeedback effect for all users.

This dissertation demonstrated the feasibility of decoding imagined musical-scale pitches from EEG and of a closed-loop neurofeedback training system built on the previously derived decoding algorithm. Study 2 followed Study 1 to improve the accuracy of the decoding model and the effectiveness of the BCI. Nevertheless, decoding accuracy remains a task we must keep pursuing, and revisiting the features seems advisable regardless of the validity of the feature distribution.

Two possibilities can be posed: increasing the involvement of musical context and increasing the degree to which the Study 2 protocol reflects musical ability. By investigating the feasibility of these, this thesis has laid the cornerstone of Music-Imagery-based BCI by providing strategies for decoding pitch information and designing neurofeedback.

Confusion matrices of the IC feature classification results from all the other decoding models.


Figure A2.1. Comparison Matrix

Figures

Figure 2.1 RI task and EEG channel montage. (A) A pitch imagery task. The pitch imagery task included 50 trials of pitch imagery.
Figure 2.2 Feature extraction procedure. The time courses of the mean theta band power from the visual cue onset (0 s) to the end of an epoch (0.8 s) for each of the seven pitches (C4-B4) are depicted for EEG channels F3 (A) and F4 (B), respectively.
Figure 2.3 Channel and channel-pair averaged positive peak values of d(t) in all frequency bands.
Figure 2.5 Selected features from the one-way ANOVA test and their distribution. The IC feature distribution is shown in the upper row and the DC feature distribution in the lower row.
