3.3.4.1. Public datasets

i. DEAP and SEED
These are two publicly available datasets. In DEAP [111], the EEG and peripheral physiological signals of 32 subjects were recorded while each subject was exposed to 40 one-minute-long music videos as emotional stimuli. SEED [112, 113] is an EEG dataset acquired from 15 participants; movie clips were selected to stimulate three types of emotions: positive, neutral, and negative. In [52], the authors used both the SEED and DEAP datasets: 32-channel EEG signals, 4-channel EOG signals, and 4-channel EMG signals, along with respiration, plethysmography, galvanic skin response, and body temperature from the DEAP dataset, and only positive and negative emotion samples from the SEED dataset.
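For readers new to these corpora: the preprocessed DEAP release is distributed as one pickled file per subject, holding a trial-by-channel-by-sample array together with the self-assessment labels. The minimal sketch below (the file path and the binarization threshold are our assumptions; the array shapes follow the public DEAP documentation) shows how such a file might be loaded before fusion experiments like [52]:

```python
import pickle
import numpy as np

# Minimal sketch of reading one subject from the preprocessed DEAP release.
# The file name (s01.dat) and array shapes follow the public DEAP
# documentation; adjust the path to a local copy of the dataset.
with open("data_preprocessed_python/s01.dat", "rb") as f:
    subject = pickle.load(f, encoding="latin1")

data = subject["data"]      # (40 trials, 40 channels, 8064 samples at 128 Hz)
labels = subject["labels"]  # (40 trials, 4): valence, arousal, dominance, liking

eeg = data[:, :32, :]         # the first 32 channels are EEG
peripheral = data[:, 32:, :]  # EOG, EMG, GSR, respiration, plethysmograph, temperature

# A common binarization: high vs. low valence at the midpoint of the 1-9 scale.
valence = (labels[:, 0] > 5).astype(int)
print(eeg.shape, peripheral.shape, valence.sum())
```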
ii. The Brain Tumor Image Segmentation (BRATS) Benchmark dataset [114]
It is readily accessible via the annual Medical Image Computing and Computer-Assisted Intervention (MICCAI) Brain Tumor Segmentation competition and contains 30 MRI scans of glioma patients along with specialists’ annotations. In [39], the authors used 22 of these MRI scans in their experiments.
iii. The PTB Diagnostic ECG Database and EEG motor movement/imagery dataset (EEGMI)
The PTB dataset [115] comprises 549 ECG records from 290 individuals of various ages, with each individual represented by one to five records. Each record consists of 15 simultaneously acquired signals. EEGMI [116] consists of over one thousand five hundred one- and two-minute 64-channel EEG recordings from 109 subjects, sampled at a frequency of 160 Hz; each subject has 14 recordings. In [38], 52 subjects from these two public datasets were randomly chosen to validate the proposed system, and the results showed good improvements and good classification performance.
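Table 4 lists sum, product, and weighted-sum rules as the fusion step of [38]. A minimal sketch of such score-level fusion (the function and variable names, and the example scores, are illustrative, not taken from [38]) is:

```python
import numpy as np

def fuse_scores(eeg_scores, ecg_scores, rule="sum", w=0.6):
    """Score-level fusion of two arrays of normalized match scores.

    eeg_scores, ecg_scores: per-candidate match scores in [0, 1].
    rule: 'sum', 'product', or 'weighted' (w weights the EEG scores).
    """
    eeg_scores = np.asarray(eeg_scores, dtype=float)
    ecg_scores = np.asarray(ecg_scores, dtype=float)
    if rule == "sum":
        return eeg_scores + ecg_scores
    if rule == "product":
        return eeg_scores * ecg_scores
    if rule == "weighted":
        return w * eeg_scores + (1.0 - w) * ecg_scores
    raise ValueError(f"unknown rule: {rule}")

# Illustrative use: the identity with the highest fused score is accepted.
fused = fuse_scores([0.7, 0.2, 0.4], [0.6, 0.3, 0.5], rule="weighted")
print(int(np.argmax(fused)))  # -> 0
```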
iv. The Harvard Whole Brain Atlas and Image Fusion Organization
The Harvard dataset [117] is publicly available through a website that hosts dozens of real brain images to help researchers study the effects of diseases on the brain; it provides free access to PET and MRI scans of normal and diseased brains. The Image Fusion Organization is an online resource for research in image fusion [118]. In [102] and [104], the authors used images of size 256×256 from the Harvard website in their experiments. In [105], two datasets were used: the first comprised 30 pairs of abnormal brain images from the Harvard dataset and a clinical cases dataset [119], and the second comprised six pairs of MRI and PET images with different resolutions. In [101] and [60], the authors selected nine groups and three sets, respectively, of multi-modality images from both the Harvard Brain Atlas and the Image Fusion Organization datasets. In [99], the authors selected five groups of gray images, each containing two images of various types; the source images of group 1 came from [118] and those of groups 2-5 from [117]. In [61], data were collected from three resources: Harvard [117], the Image Fusion Organization [118], and 38 CT and MRI brain images obtained from 14 patients at the Department of Radio Diagnosis, Postgraduate Institute of Medical Education & Research (PGIMER). In [100], 65 pairs of multimodal medical images were used, comprising 14 source image pairs of CT-MR, 29 pairs of MR-SPECT, and 22 pairs of MR-PET, all taken from [117] and [118].
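The image fusion studies above report quality metrics such as entropy, MI, and QAB/F. As a hedged illustration of the first two (simple histogram-based estimates assuming 8-bit grayscale inputs, not the exact implementations of the cited papers):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (in bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Fusion MI is commonly reported as MI(source1, fused) + MI(source2, fused).
```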
3.3.4.2. Private datasets.
Private datasets are difficult to use, especially for reproducibility purposes. In [54], 25 right-handed adults with no history of psychiatric or neurological illness or psychotropic drug use participated in simultaneous EEG and fNIRS measurements. The measurements were carried out while the participants solved mathematics questions under two states (stress and control). The entire recording contained around 10 minutes of signals, and each record contained five blocks. In [107], 36 subjects (22 healthy controls and 14 schizophrenia patients) participated in the experiments, during which fMRI and EEG data were gathered while the participants listened to an auditory oddball stream and pressed a button when they heard the infrequent sound. In [108], the Insulin Self-Injection (ISI) dataset was developed but was not made public. It consists of motion data captured with a wrist sensor and video data from wearable cameras of eight subjects. Twenty-five videos were collected, comprising 175 segments. In [40], data were obtained using X-ray and bone scan medical modalities from two clinical cases. The first was a 59-year-old woman who had had right calcaneal pain for a year. The second, following a fall injury, was a 57-year-old female suffering from groin, right hip, and buttock discomfort.
Table 4 provides a summary of the papers regarding RQ3. Table 5 summarizes available survey papers regarding RQ1, RQ2, and RQ3.
Table 4. Summary of the papers regarding RQ3.

Ref. | Task | Modality | Features | Fusion / classifier | Dataset | Performance
[38] | Biometric recognition system | EEG and ECG | PSD | Sum, product, weighted sum | 52 subjects chosen from two datasets: the PTB Diagnostic ECG Database and the EEG Motor Movement/Imagery Dataset | EER = 22.97%–29.36%; AUC = 70.84–96.15
[39] | Brain tumor detection | MRI | NA | Intersection and union | BRATS | Med = 0.68; Avg = 0.62; Std = 0.211
[40] | Multimodal medical image fusion | X-ray and bone scan modalities; CT/MRI and MR-T1/MR-T2 images were fused | Salient information; activity measure based on normalized Shannon entropy | Directive contrast | Four separate collections of human brain images, plus two clinical cases: a 59-year-old female with right calcaneal pain for 12 months, and a 57-year-old female with continued right knee, pelvic, and buttock discomfort after a fall some months earlier | QMI = 2.30; QS = 0.73; QAB/F = 0.64
[52] | Emotion recognition | 32-ch EEG, 4-ch EMG, 4-ch EOG, respiration, galvanic skin response, body temperature, and plethysmography | 10 types | Significance test / sequential backward selection; SVM | DEAP and SEED datasets | DEAP: accuracy = 72%; SEED: accuracy = 89%
[53] | User satisfaction detection | Microphone and video camera | Directional derivative features from mel-frequency cepstral coefficients (MFCC) and linear predictive coding (LPC) from images | SVM | 40 male students | Accuracy = 78%
[54] | Rate of mental stress | fNIRS and EEG | NA | SVM | 25 healthy participants | EEG: accuracy = 89.8%; fNIRS: accuracy = 85.6%
[60] | Brainstem images | CT and MRI | NA | PCNN and an improved weighted fusion rule | 3 sets of CT and MRI images from [117] and [118] | AG = 8.45; MI = 6.27; Std = 64.70; QAB/F = 0.83; Corr. = 0.98; D = 4.24
[61] | Fusion of CT and MRI images | CT, MR | NA | Entropy of square; weighted sum-modified Laplacian | 38 brain CT and MRI images from 14 patients; Harvard Medical School website [117] and [118] | MI = 4.37; QAB/F = 0.77; SD = 58.06
[98] | Multimodal heartbeat detection | ECG, ABP, EEG, stroke volume (SV) | Pulse transit time (PTT) | Signal quality index based fusion | 200 recordings of up to 8 physiological signals | Sensitivity = 95.07%; positive predictive value (PPV) = 89.3%
[99] | Similarity measures of different CT scan images | MR T1–MR T2, CT–MR, MR T1–FDG, CT–MR T2, and MR T2–SPET images | NA | PCNN, QPSO-PCNN | Group 1 collected from [118]; Groups 2-5 from [117] | Group 1: STD = 65.18, MI_AF = 3.21, SF = 22.82; entropy for Groups 2-5: Group 2 = 4.53, Group 3 = 5.47, Group 4 = 5.47, Group 5 = 3.78
[100] | Diagnosis of several critical diseases and their treatment | CT, MR, and SPECT | Principal information and edge details | Sparse-representation (SR) based dictionary learning and guided filtering | 65 pairs of multimodal medical images (14 CT-MR, 29 MR-SPECT, and 22 MR-PET source image pairs), all taken from the Harvard Whole Brain Atlas [117] | Running time = 5.31 s; En = 5.19; STD = 69.51
[101] | Medical image fusion | CT-MRI, MRI-PET, and MRI-SPECT image fusion | NA | Weighted average algorithm, Gaussian filter, dictionary-learning based algorithm | Data collected from two public repositories [117], [118] | QAB/F = 0.6291; VIF = 0.3426; MI = 2.1045
[102] | Medical image fusion | CT and MRI images | NA | Sum-modified-Laplacian (SML) and LP-based fusion rule | Images extracted from the public website [117] | QAB/F = 0.669; MI = 4.249; VIF = 77.178
[103] | Multilevel medical image fusion | CT and MR | NA | Daubechies complex wavelet transform (DCxWT) | 1st set from imagefusion.org; 2nd set from a T1-weighted MR image and MRA; 3rd set from brain CT and MRI | Entropy = 6.38; QAB/F = 0.67; SD = 67.60
[104] | Comparison between traditional and hybrid fusion techniques | CT, MRI, SPECT | NA | PCA, DWT | Harvard Medical School and radiopedia.org medical image databases | STD = 5.85; QAB/F = 5.85; Entropy = 7.68
[105] | Intrinsic image decomposition | MRI and PET | NA | IIC, PCA, intensity-hue-saturation (IHS) | 30 pairs of abnormal brain images from the Harvard dataset and clinical cases | Avg. running time: IID+PCA = 0.5, IID+IHS = 0.7, IID+IIC = 0.5
[106] | Highly informative fused medical image that takes the error images into account | MRI/CT, MRI T1/MRI T2, CT/PET, MRI/SPECT | NA | Fuzzy transform | Eight datasets of the same size, 512×512, with 256 grayscale levels | Fusion factor = 5.946; FSIM = 0.85; SSIM = 0.88
[107] | Schizophrenia patients | sMRI, fMRI, and EEG | Voxels for fMRI and sMRI; time points for ERP | Joint ICA and transposed IVA | Private dataset of 36 subjects | MI = 0.59
[108] | Human action and activity recognition for health monitoring | Motion wrist sensor and Google Glass wearable camera | Video features and motion features | CNN-LSTM | ISI dataset of four female and four male participants | Accuracy = 90%
[109] | Human activity recognition | Wearable body sensors | NA | SRU, GRU | MHEALTH benchmark dataset, collected from ten subjects using body motion and vital-sign recordings of SHIMMER2 wearable sensors | Accuracy = 99.80%
4. Challenges and future research directions
The complexity of multimodal signal fusion increases along two directions: how many sensors or IoMT devices are connected to the system, and how many users or patients are using it. Fusion is therefore influenced by a vast range of factors that have not yet been adequately described and evaluated. In certain healthcare settings, most IoMTs for detecting and diagnosing a disease must be fitted to the body, and data obtained from heterogeneous sensors carry several uncertainties, including hardware failures, depleted batteries, and communication issues. Certain fundamental difficulties are natural and unavoidable. In practice, there are often unexpected failures in the use of common health sensors, including smartphones and smart wristbands. Unusual problems can result, for instance, from malfunctions, defects, or the collapse of an external system. Everyday complications may also occur, such as limited battery capacity, differences between individuals' physical features, and shifts in the environment.
Based on these issues, some key challenges remain in smart healthcare when using multimodal signals and many IoMTs. To promote the widespread acceptance of such smart healthcare applications, standardized and simpler fusion approaches should be researched. We present some challenges and their possible solutions below.
• A strong IoT network would enable system communication and functions across four layers: data acquisition, data transmission, disease diagnosis, and continuous updating of data to storage. It is important to safeguard these steps; data security is therefore key. An uninterrupted power supply is increasingly necessary to sustain basic device management and control, and it depends on various factors such as efficient algorithms, proper distribution of tasks to servers and devices, and data retrieval, encoding, and transmission. Therefore, to maximize the usage of IoMTs, an optimization algorithm is required that intelligently distributes edge and computing resources so that the service remains seamless and uninterrupted [124].
• Wearable devices have been presented for various types of biomedical signals, with the ultimate goal of helping users live long and healthy lives. They have proven particularly useful for monitoring elderly people as they age at home. As such, these instruments must be simple to use, easy to transport, and able to provide an excellent user experience. These requirements are challenging to satisfy in a compact, small, and well-structured unit: the wearable device is expected to be compact and thin, yet last for a long period. For example, research can focus on thinner batteries, bendable electronics, or stretchable sensors.
• When data are gathered by an IoMT and transferred to a mobile device or edge/cloud services, there is a likelihood of interruption and, as a result, data loss. Interruptions should be minimized to the fullest extent practicable to maintain reliable health surveillance. Some solutions include short-time buffering of data, small edge caches, and maintaining a data log [125] (a minimal buffering sketch is given after this list). An intelligent edge-computing facility can therefore be incorporated into the healthcare framework. Further steps are needed because data from different sources are characterized by varying measurement functions, unevenness, and retention of unnecessary or incomplete data; AI-based feature selection, data conversion, data synchronization, and the generation of missing values could be possible solutions [9].
• Currently, no standard protocol exists to maintain interoperability between smart devices, owing to the rapid development of IoT-based intelligent healthcare. Such a protocol must take the problem of energy consumption into consideration, as the treatment of certain diseases needs additional coordination capacity that enables the individual to track the various stages of a disease [126]. As context shifts, IoMT fusion approaches ought to cope with the adjustments, because they may have a direct effect on system characteristics such as accuracy. To help the machine familiarize itself with specific environments by accumulating and transferring data from one context to another, knowledge-sharing methods based on transfer learning may be used [127].
• Explainable AI (XAI) and interpretable machine learning structures should be deployed when fusing multimodal signals for smart healthcare. As automated disease assessment systems are only assistive tools for medical doctors, the different stages of the developed tools should be interpretable, to better provide doctors with insights that might otherwise be overlooked.
• Edge computing can be used to reduce the burden of transmitting data to the cloud. The problems with current edge computing are the lack of proper distribution of edge devices and of privacy preservation for patients’ data [128,129]. Proper edge optimization should be used to exploit the full capacity of edge computing and achieve seamless data transfer between the edge and the cloud [31,130].
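As a minimal sketch of the short-time buffering idea from the third bullet above (the class and parameter names are ours, not from [125]), a fixed-capacity ring buffer can hold recent readings during a brief uplink outage and drain them on reconnect:

```python
import time
from collections import deque

class SensorBuffer:
    """Fixed-capacity ring buffer for timestamped sensor readings.

    During an uplink outage, readings accumulate (the oldest are dropped
    once capacity is reached); on reconnect, flush() drains the backlog.
    """
    def __init__(self, capacity=1024):
        self.buf = deque(maxlen=capacity)

    def push(self, reading):
        self.buf.append((time.time(), reading))

    def flush(self, send):
        # Drain buffered readings through the provided send() callable.
        while self.buf:
            ts, reading = self.buf.popleft()
            send(ts, reading)

# Illustrative use with a stand-in transport function.
buffer = SensorBuffer(capacity=8)
for hr in (72, 74, 71):                    # readings taken while offline
    buffer.push({"heart_rate": hr})
buffer.flush(lambda ts, r: print(ts, r))   # uplink restored
```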
5. Conclusion
The fusion of multimodal signals is a highly researched field, and a vast range of literature addresses signal and/or IoMT system fusion techniques within the smart healthcare domain. However, to the best of our knowledge, there has been a shortage of comprehensive and systematic studies of state-of-the-art fusion approaches in the smart health sector. This survey aimed to fill this gap. It provides an overview of existing multimodal signal fusion and IoMT device fusion schemes and the different fusion strategies, as well as the importance of security and privacy with IoMTs. Finally, interesting research issues and potential avenues for research are given.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Author statement
We, the authors, declare that the work described has not been published, that it is not under consideration for publication elsewhere, that its publication is approved by all authors and tacitly or explicitly by the responsible authorities where the work was carried out, and that, if accepted, it will not be published elsewhere in the same form, in English or in any other language, including electronically, without the written consent of the copyright holder.
Acknowledgment
The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education, Saudi Arabia, for funding this research work through Project no. IFKSURP-158.
Table 5. Summary of survey papers regarding RQs.

Ref. | Task | Fusion type | Application area | Years covered | Publications covered
[17] | Wearable IoMT devices | No fusion; only compares IoMT devices | Motion tracking, vital sign measurement | 2013-2016 | PubMed, IEEE, Google Scholar
[20] | Human activity recognition | EEG, EMG, ECG, and EOG | Promising results on the development of emotion recognition systems | 1998-2017 | IEEE, Science Direct, PubMed, Google Scholar
[34] | 5G networks for the Internet of Things | 5G and IoT | Smart healthcare, smart home, smart city, and industry | 2000-2017 | NA
[50] | Hand rehabilitation | Sensor fusion | Hand movement, exoskeleton control, serious games | 2007-2018 | IEEE, WoS, ACM, PubMed
[55] | Fusion of brain imaging data | fMRI, sMRI | Psychopathology | 1996-2015 | IEEE, Science Direct, PubMed, Google Scholar
[76] | BSN fusion | Data-level, feature-level, decision-level | Activity recognition, emotion recognition, vital sign monitoring | 1997-2016 | IEEE, Springer Link, Science Direct, PubMed
[91] | Hybrid methods in multimodality medical images | MRI, CT, PET | Brain, heart, spine | 2006-2019 | IEEE, Science Direct, PubMed, Google Scholar
[92] | Medical image fusion characteristics | MRI-CT, MRI-PET, and MRI-SPECT image fusion | Clinical fusion applications for physicians | 2000-2016 | IEEE, Springer Link, Science Direct, PubMed
[93] | Multi-modal medical visualization | No fusion; only discusses visualization techniques and applications | Neurosurgery | 2007-2018 | EG digital library, EuroVis, VCBM and VMV, IEEE digital library, Google Scholar
[94] | Multimodal data-driven smart healthcare decision-making | Early fusion, intermediate fusion, and late fusion | Smart healthcare applications | 2013-2018 | IEEE, Springer Link, Science Direct, PubMed
[95] | Pixel-level image fusion | MRI-PET, MRI-CT, CT-PET, MRI-SPECT | Medical diagnosis, remote sensing, surveillance, and photography | 1999-2016 | NA
[96] | Tumor in the brain | CT, MRI, PET, X-ray, and ultrasound multi-modality fusion | NA | 2004-2019 | IEEE, Springer Link, Science Direct, PubMed
[121] | Obstructive sleep apnea | AI fusion of IoMTs | Remote monitoring and diagnosis | 2010-2019 | IEEE, Google Scholar
[122] | Image fusion technology | IHS, PCA, arithmetic combination, PCNN, deep learning, edge-preserving methods, fuzzy-logic-based methods, sparse representation, and compressive-sensing-based methods | Diagnosis and treatment of liver cancer | 2000-2020 | IEEE, Springer Link, Science Direct, PubMed
[123] | Physical activity recognition and measurement | Wearable computing, sensor fusion | IoT platform: devices, persons, and timeline | NA | IEEE Xplore, ACM digital library, and Science Direct
References
[1] “Global Smart Healthcare Market Size Report,” Online Report 978-1-68038-407-9, Jun. 2020. Accessed: Jul. 27, 2020. [Online]. Available: https://www.grandviewresearch.com/industry-analysis/smart-healthcare-market.
[2] A. Tiwari, T.H. Falk, Fusion of motif- and spectrum-related features for improved EEG-based emotion recognition, Computational Intelligence and Neuroscience 2019 (2019), 14 pages.
[3] Y.-D. Zhang, et al., Advances in multimodal data fusion in neuroimaging: Overview, challenges, and novel orientation, Information Fusion 64 (Dec. 2020) 149–187.
[4] B.M. Booth, et al., Multimodal Human and Environmental Sensing for Longitudinal Behavioral Studies in Naturalistic Settings: Framework for Sensor Selection, Deployment, and Management, Journal of Medical Internet Research 21 (8) (Aug. 2019) e12832.
[5] M.S. Hossain, G. Muhammad, Emotion recognition using deep learning approach from audio–visual emotional big data, Information Fusion 49 (Sep. 2019) 69–78.
[6] R. Gupta, M. Khomami Abadi, J.A. Cárdenes Cabré, F. Morreale, T.H. Falk, N. Sebe, A Quality Adaptive Multimodal Affect Recognition System for User-Centric Multimedia Indexing, in: Proceedings of the 2016 ACM International Conference on Multimedia Retrieval (ICMR ’16), 2016, pp. 317–320.
[7] F. Alshehri, G. Muhammad, A Comprehensive Survey of the Internet of Things (IoT) and AI-Based Smart Healthcare, IEEE Access 9 (2021) 3660–3678.
[8] K. Petersen, R. Feldt, S. Mujtaba, M. Mattsson, Systematic Mapping Studies in Software Engineering, in: Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering (EASE), Jun. 2008, pp. 68–77.
[9] M. Masud, et al., A Lightweight and Robust Secure Key Establishment Protocol for Internet of Medical Things in COVID-19 Patients Care, IEEE Internet of Things Journal (2020), https://doi.org/10.1109/JIOT.2020.3047662.
[10] G. Muhammad, M.S. Hossain, COVID-19 and Non-COVID-19 Classification using Multi-layers Fusion From Lung Ultrasound Images, Information Fusion 72 (2021) 80–88.
[11] M.A. Rahman, M.S. Hossain, M.S. Islam, et al., Secure and Provenance Enhanced Internet of Health Things Framework: A Blockchain Managed Federated Learning Approach, IEEE Access 8 (Dec. 2020) 205071–205087.
[12] F. Firouzi, et al., Internet-of-Things and big data for smarter healthcare: From device to architecture, applications and analytics, Future Generation Computer Systems 78 (Jan. 2018) 583–586.
[13] Y. Zhang, Y. Zhang, X. Zhao, Z. Zhang, H. Chen, Design and Data Analysis of Sports Information Acquisition System Based on Internet of Medical Things, IEEE Access 8 (2020) 84792–84805.
[14] M.S. Hossain, G. Muhammad, Emotion-Aware Connected Healthcare Big Data Towards 5G, IEEE Internet of Things Journal 5 (4) (Aug. 2018) 2399–2406.
[15] T. Hussain, et al., Intelligent Baby Behavior Monitoring using Embedded Vision in IoT for Smart Healthcare Centers, Journal of Artificial Intelligence and Systems 1 (1) (Nov. 2019) 110–124.
[16] J.H. Abawajy, M.M. Hassan, Federated Internet of Things and Cloud Computing Pervasive Patient Health Monitoring System, IEEE Commun. Mag. 55 (1) (Jan. 2017) 48–53.
[17] M. Haghi, K. Thurow, R. Stoll, Wearable Devices in Medical Internet of Things: Scientific Research and Commercially Available Devices, Healthcare Informatics Research 23 (Jan. 2017).
[18] D.V. Dimitrov, Medical Internet of Things and Big Data in Healthcare, Healthcare Informatics Research 22 (3) (Jul. 2016) 156–163.
[19] Y. Liu, H. Wang, W. Zhao, M. Zhang, H. Qin, Y. Xie, Flexible, Stretchable Sensors for Wearable Health Monitoring: Sensing Mechanisms, Materials, Fabrication Strategies and Features, Sensors 18 (2) (Feb. 2018), Art. no. 2.
[20] P. Kumari, L. Mathew, P. Syal, Increasing trend of wearables and multimodal interface for human activity monitoring: A review, Biosens. Bioelectron. 90 (Apr. 2017) 298–307.
[21] M. Chen, J. Yang, J. Zhou, Y. Hao, J. Zhang, C.-H. Youn, 5G-Smart Diabetes: Toward Personalized Diabetes Diagnosis with Healthcare Big Data Clouds, IEEE Commun. Mag. 56 (4) (Apr. 2018) 16–23.
[22] D. Naranjo-Hernández, et al., Smart Vest for Respiratory Rate Monitoring of COPD Patients Based on Non-Contact Capacitive Sensing, Sensors 18 (7) (Jul. 2018), Art. no. 7.
[23] T. Van Steenkiste, D. Deschrijver, T. Dhaene, Sensor Fusion using Backward Shortcut Connections for Sleep Apnea Detection in Multi-Modal Data, arXiv:1912.06879 [cs.LG], 2020.
[24] W. Qi, A. Aliverti, A Multimodal Wearable System for Continuous and Real-time Breathing Pattern Monitoring During Daily Activity, IEEE Journal of Biomedical and Health Informatics 24 (8) (2019).
[25] B. Farahani, F. Firouzi, V. Chang, M. Badaroglu, N. Constant, K. Mankodiya, Towards fog-driven IoT eHealth: Promises and challenges of IoT in medicine and healthcare, Future Generation Computer Systems 78 (Jan. 2018) 659–676.
[26] H. Zhang, J. Li, B. Wen, Y. Xun, J. Liu, Connecting Intelligent Things in Smart Hospitals Using NB-IoT, IEEE Internet of Things Journal 5 (3) (Jun. 2018) 1550–1560.
[27] P. Dong, et al., Edge Computing Based Healthcare Systems: Enabling Decentralized Health Monitoring in Internet of Medical Things, IEEE Network 34 (5) (Sep./Oct. 2020) 254–261.
[28] Z. Ning, et al., Mobile Edge Computing Enabled 5G Health Monitoring for Internet of Medical Things: A Decentralized Game Theoretic Approach, IEEE J. Sel. Areas Commun. 39 (2) (Feb. 2021) 463–478.
[29] H. Fouad, A.S. Hassanein, A.M. Soliman, H. Al-Feel, Internet of Medical Things (IoMT) Assisted Vertebral Tumor Prediction Using Heuristic Hock Transformation Based Gautschi Model: A Numerical Approach, IEEE Access 8 (2020) 17299–17309.
[30] L. Sun, X. Jiang, H. Ren, Y. Guo, Edge-Cloud Computing and Artificial Intelligence in Internet of Medical Things: Architecture, Technology and Application, IEEE Access 8 (2020) 101079–101092.
[31] G. Muhammad, S.M.M. Rahman, A. Alelaiwi, A. Alamri, Smart Health Solution Integrating IoT and Cloud: A Case Study of Voice Pathology Monitoring, IEEE Commun. Mag. 55 (1) (Jan. 2017) 69–73.
[32] M. Alhussein, G. Muhammad, M.S. Hossain, S.U. Amin, Cognitive IoT-Cloud Integration for Smart Healthcare: Case Study for Epileptic Seizure Detection and Monitoring, Mobile Netw. Appl. 23 (6) (Dec. 2018) 1624–1635.
[33] M. Asif-Ur-Rahman, et al., Toward a Heterogeneous Mist, Fog, and Cloud-Based Framework for the Internet of Healthcare Things, IEEE Internet of Things Journal 6 (3) (Jun. 2019) 4049–4062.
[34] G.A. Akpakwu, B.J. Silva, G.P. Hancke, A.M. Abu-Mahfouz, A Survey on 5G Networks for the Internet of Things: Communication Technologies and Challenges, IEEE Access 6 (2018) 3619–3647.
[35] H. Ullah, N. Gopalakrishnan Nair, A. Moore, C. Nugent, P. Muschamp, M. Cuevas, 5G Communication: An Overview of Vehicle-to-Everything, Drones, and Healthcare Use-Cases, IEEE Access 7 (2019) 37251–37268.
[36] G. Muhammad, M.S. Hossain, Emotion Recognition for Cognitive Edge Computing Using Deep Learning, IEEE Internet of Things Journal (2021), https://doi.org/10.1109/JIOT.2021.3058587.
[37] G.J. Joyia, R.M. Liaqat, A. Farooq, S. Rehman, Internet of Medical Things (IoMT): Applications, Benefits and Future Challenges in Healthcare Domain, Journal of Communications 12 (4) (2017).
[38] S. Barra, A. Casanova, M. Fraschini, M. Nappi, Fusion of physiological measures for multimodal biometric systems, Multimed. Tools Appl. 76 (4) (Feb. 2017) 4835–4847.
[39] I. Cabria, I. Gondra, MRI segmentation fusion for brain tumor detection, Information Fusion 36 (Jul. 2017) 1–9, https://doi.org/10.1016/j.inffus.2016.10.003.
[40] G. Bhatnagar, Q.M.J. Wu, Z. Liu, A new contrast based multimodal medical image fusion framework, Neurocomputing 157 (Jun. 2015) 143–152.
[41] A. Limaye, T. Adegbija, HERMIT: A Benchmark Suite for the Internet of Medical Things, IEEE Internet of Things Journal 5 (5) (Oct. 2018) 4212–4222.
[42] M. Mahmud, M.S. Kaiser, A. Hussain, S. Vassanelli, Applications of Deep Learning and Reinforcement Learning to Biological Data, IEEE Transactions on Neural Networks and Learning Systems 29 (6) (Jun. 2018) 2063–2079.
[43] S. Swayamsiddha, C. Mohanty, Application of cognitive Internet of Medical Things for COVID-19 pandemic, Diabetes & Metabolic Syndrome: Clinical Research & Reviews 14 (5) (Sep. 2020) 911–915.
[44] R. Pratap Singh, M. Javaid, A. Haleem, R. Vaishya, S. Ali, Internet of Medical Things (IoMT) for orthopaedic in COVID-19 pandemic: Roles, challenges, and applications, Journal of Clinical Orthopaedics and Trauma 11 (4) (Jul.–Aug. 2020) 713–717.
[45] R.P. Singh, M. Javaid, A. Haleem, R. Suman, Internet of things (IoT) applications to fight against COVID-19 pandemic, Diabetes & Metabolic Syndrome: Clinical Research & Reviews 14 (4) (Jul. 2020) 521–524.
[46] U. Khan, et al., Internet of Medical Things-based decision system for automated classification of Alzheimer's using three-dimensional views of magnetic resonance imaging scans, Int. J. Distrib. Sens. Netw. 15 (3) (Mar. 2019).
[47] R.J. Oskouei, Z. MousaviLou, Z. Bakhtiari, K.B. Jalbani, IoT-Based Healthcare Support System for Alzheimer's Patients, Wireless Communications and Mobile Computing (2020), Article ID 8822598, 15 pages.
[48] M. Muzammal, R. Talat, A.H. Sodhro, S. Pirbhulal, A multi-sensor data fusion enabled ensemble approach for medical data from body sensor networks, Information Fusion 53 (Jan. 2020) 155–164.
[49] V. Nathan, R. Jafari, Particle Filtering and Sensor Fusion for Robust Heart Rate Monitoring Using Wearable Sensors, IEEE Journal of Biomedical and Health Informatics 22 (6) (Nov. 2018) 1834–1846.
[50] I. Herrera-Luna, E.J. Rechy-Ramirez, H.V. Rios-Figueroa, A. Marin-Hernandez, Sensor Fusion Used in Applications for Hand Rehabilitation: A Systematic Review, IEEE Sens. J. 19 (10) (May 2019) 3581–3592.
[51] A. Passon, T. Schauer, T. Seel, Hybrid Inertial-Robotic Motion Tracking for Posture Biofeedback in Upper Limb Rehabilitation, in: 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob), Enschede, Netherlands, Aug. 2018, pp. 1163–1168.
[52] F. Yang, X. Zhao, W. Jiang, P. Gao, G. Liu, Multi-method Fusion of Cross-Subject Emotion Recognition Based on High-Dimensional EEG Features, Front. Comput. Neurosci. 13 (Aug. 2019) 1–11.
[53] A. Alamri, Monitoring System for Patients Using Multimedia for Smart Healthcare, IEEE Access 6 (2018) 23271–23276.
[54] F. Al-Shargie, T. Tang, M. Kiguchi, Assessment of mental stress effects on prefrontal cortical activities using canonical correlation analysis: An fNIRS-EEG study, Biomedical Optics Express 8 (5) (2017) 2583–2598.
[55] V.D. Calhoun, J. Sui, Multimodal Fusion of Brain Imaging Data: A Key to Finding the Missing Link(s) in Complex Mental Illness, Biological Psychiatry: Cognitive Neuroscience and Neuroimaging 1 (3) (May 2016) 230–244.