Background: In contrast to the high level of business interest in Artificial Intelligence (AI), actual AI adoption remains much lower. A lack of consumer trust has been found to adversely influence consumers' evaluations of information provided by AI; hence the need for explanations of model results.
Methods: This is especially the case in clinical practice and judicial enforcement, where improvements in both prediction and interpretation are crucial. Bio-signal analysis, such as EEG diagnosis, usually involves complex learning models that are difficult to explain, so an explanatory module is imperative if the results are to be released to the general public. This research presents a systematic review of explainable artificial intelligence (XAI) advances in the research community, surveying recent XAI efforts in bio-signal analysis. Owing to the popularity of deep-learning models in many use cases, the reviewed explanatory models favor the interpretable-model approach.
Results: The verification and validation of explanatory models appear to be one of the crucial gaps in XAI bio-signal research. Currently, evaluation by human experts is the easiest validation approach. Although the bio-signal community places high trust in this human-directed approach, it suffers from personal and social bias.
Conclusion: Hence, future research should investigate more objective evaluation measures toward achieving inclusiveness, reliability, transparency, and consistency in the XAI framework.
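One family of objective evaluation measures discussed in the XAI literature is perturbation-based faithfulness: occlude the features an explanation marks as most important and measure the drop in the model's output. The sketch below is purely illustrative, not a method from the reviewed work; the `faithfulness_score` function, the toy linear model, and all parameter names are assumptions introduced for this example.

```python
import numpy as np

def faithfulness_score(model, x, attribution, baseline=0.0, top_k=5):
    """Perturbation-based faithfulness (illustrative sketch).

    Occludes the top-k attributed features and returns the drop in the
    model's output. A larger drop means the attribution pointed at
    features the model actually relied on (higher faithfulness).
    """
    original = model(x)
    # Indices of the k features with the largest absolute attribution.
    top = np.argsort(np.abs(attribution))[::-1][:top_k]
    perturbed = x.copy()
    perturbed[top] = baseline  # occlude the "important" features
    return original - model(perturbed)

# Toy linear "model" on a 10-feature signal: output is a weighted sum,
# so features 0, 3, and 6 are the genuinely important ones.
weights = np.array([5.0, 0.1, 0.1, 4.0, 0.1, 0.1, 3.0, 0.1, 0.1, 0.1])
model = lambda x: float(weights @ x)
x = np.ones(10)

good_attr = weights.copy()            # matches the true importance
bad_attr = np.zeros(10)
bad_attr[[1, 2, 4]] = 1.0             # highlights unimportant features

print(faithfulness_score(model, x, good_attr, top_k=3))  # large drop
print(faithfulness_score(model, x, bad_attr, top_k=3))   # small drop
```

Unlike human expert ratings, a score like this is reproducible and free of rater bias, though it introduces its own design choices (baseline value, number of features occluded) that would themselves need validation.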