A machine learning model is said to be interpretable if it can explain why it made a particular prediction. Traditional performance measures such as area under the curve (AUC), precision, and recall may not be sufficient in domains where users must be able to trust the predictions of machine learning systems. Sentiment analysis is the process of recognizing, comprehending, and responding to the human emotions expressed in written language; by combining natural language processing with sentiment analysis, data scientists have developed algorithms capable of inferring human emotion from text. At the same time, artificial intelligence (AI) systems can commit errors that may have severe consequences for patients or create other difficulties within the healthcare system. The benefits of an emerging technology cannot simply be assumed, and there is always a risk of unforeseen impacts. Future research must therefore address both the positive advances and the potentially harmful effects of these technologies.
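To make the two ideas in the abstract concrete, the sketch below shows a deliberately simple, interpretable sentiment classifier (a transparent lexicon count, so every prediction can be explained by the words that triggered it) evaluated with the precision and recall measures the abstract names. The word lists and example texts are illustrative assumptions, not data from the paper.

```python
# Minimal lexicon-based sentiment classifier with precision/recall evaluation.
# The tiny word lists and the example texts are hypothetical illustrations.

POSITIVE = {"good", "great", "improved", "relief", "happy", "better"}
NEGATIVE = {"bad", "worse", "pain", "sad", "tired", "anxious"}

def sentiment(text: str) -> int:
    """Return 1 (positive) or 0 (negative) by counting lexicon hits.

    Because the score is a plain word count, the prediction can be
    explained by listing exactly which words contributed to it.
    """
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return 1 if score >= 0 else 0

def precision_recall(y_true, y_pred):
    """Precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

texts = [
    "feeling much better and happy today",  # positive
    "pain is worse and I am tired",         # negative
    "great relief after treatment",         # positive
    "sad and anxious about results",        # negative
]
labels = [1, 0, 1, 0]
preds = [sentiment(t) for t in texts]
p, r = precision_recall(labels, preds)
```

As the abstract argues, high precision and recall alone would not tell a clinician *why* a text was flagged; the lexicon design makes that reason inspectable, which is the trade-off interpretable models aim for.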
Keywords: AI, Machine Learning, Sentiment Analysis, Healthcare Monitoring
IRE Journals:
Dr. A. Jayanthila Devi, Dr. P. S. Aithal, "Interpretable Machine Learning and Artificial Intelligence for Sustainable Healthcare Monitoring", Iconic Research And Engineering Journals, Volume 6, Issue 5, 2022, Page 48-53
IEEE:
A. J. Devi and P. S. Aithal, "Interpretable Machine Learning and Artificial Intelligence for Sustainable Healthcare Monitoring," Iconic Research And Engineering Journals, vol. 6, no. 5, pp. 48-53, 2022.