In this post, you will learn about some of the key challenges of implementing Telemedicine / Telehealth. If you work in the field of data science / machine learning, you may want to go through these challenges, primarily AI-related, which have arisen in the Telemedicine domain due to the upsurge in demand for reliable Telemedicine services.
Here are the slides I recently presented at the Digital Data Science Conclave hosted by KIIT University. The primary focus is to ensure appropriate controls are in place for the responsible use of AI (Responsible AI).
Here are the top 8 challenges that need to be addressed to take full advantage of AI, RPA, and cloud computing while delivering Telemedicine services:
Augmented AI or Autonomous AI?
AI application solution design will be key to deciding whether the predictions served by machine learning models can be used to automate the workflow without doctors' intervention (autonomous AI) or should instead assist doctors in making the final decision (augmented AI). Let's say a deep learning model is used to predict whether a person is suffering from a disease. The solution design must specify whether the decision-making can be automated or whether doctors are still asked to make the final call based on the prediction.
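As an illustration, the augmented-vs-autonomous decision can be encoded as a simple routing rule on the model's predicted probability. This is a minimal sketch; the threshold value and function names are assumptions for illustration, not part of any specific telemedicine system.

```python
# Sketch: route a disease prediction either to an automated workflow
# (autonomous AI) or to a doctor's review queue (augmented AI).
# The 0.95 threshold is an illustrative assumption.

AUTONOMY_THRESHOLD = 0.95

def route_prediction(disease_probability: float) -> str:
    """Return which workflow a prediction should follow."""
    confident_positive = disease_probability >= AUTONOMY_THRESHOLD
    confident_negative = disease_probability <= 1 - AUTONOMY_THRESHOLD
    if confident_positive or confident_negative:
        return "autonomous"      # model is confident: workflow is automated
    return "doctor_review"       # uncertain: doctor makes the final decision
```

In practice the threshold itself is a clinical and regulatory decision, not purely a data science one.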
Doctors will want to know the values of the attributes on which predictions are based. The AI solution design will therefore need to make a trade-off: use a complex algorithm whose predictions are accurate but not explainable, or use an algorithm with lower model performance whose predictions can be explained.
Apart from explainability at the individual prediction level, AI explainability also includes the selection of appropriate metrics that represent model performance vis-a-vis solution outcomes.
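To make the explainable side of the trade-off concrete, here is a minimal sketch of a linear risk model whose per-feature contributions can be shown to a doctor alongside the prediction. The weights, bias, and feature names are hypothetical.

```python
import math

# Hypothetical weights for an interpretable linear risk model.
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "bmi": 0.05}
BIAS = -6.0

def predict_with_explanation(features: dict):
    """Return (probability, per-feature contributions to the risk score)."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions
```

A deep model would typically need a post-hoc technique (e.g. feature attribution) to produce a comparable breakdown, usually at extra cost in fidelity.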
Ethical AI challenges include some of the following, which need to be considered when designing a Telemedicine AI application solution:
- Traceability risks: In case of incorrect predictions resulting in possible conflict, who will be held accountable? Will it be the AI application, the doctors, or the hospital?
- Normative risks: In case of incorrect predictions, downstream applications may behave differently, leading to possible conflicts. One would need to watch out for related risks.
- Epistemic risks: One would want to ensure the model is as close to optimal performance as possible so that inconclusive outcomes are avoided in the first place.
Given the need for models that are highly performant at all points in time, strong AI governance practices need to be put in place, including the following:
- Test, Test, Test: The models need to be tested with different kinds of data, including adversarial datasets, to assess model performance at regular intervals. Datasets on which the model does not perform well would need to be included when retraining the models.
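A minimal sketch of such slice-based testing: evaluate accuracy on named test slices (e.g. an adversarial or under-represented cohort) and flag the slices that fall below a threshold as candidates for the retraining set. The slice names and the 0.9 threshold are illustrative assumptions.

```python
def accuracy(pairs):
    """pairs: list of (true_label, predicted_label) tuples."""
    return sum(1 for y, y_hat in pairs if y == y_hat) / len(pairs)

def failing_slices(slices: dict, threshold: float = 0.9) -> list:
    """Return names of test slices whose accuracy falls below threshold."""
    return [name for name, pairs in slices.items()
            if accuracy(pairs) < threshold]
```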
- Model monitoring: Model performance needs to be monitored at regular intervals (daily, weekly, or monthly) based on the inflow of data, data distribution, etc.
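One common monitoring signal is distribution drift between the training data and live data. Below is a minimal sketch using the Population Stability Index (PSI) over pre-binned proportions; the binning step itself is omitted for brevity.

```python
import math

def psi(expected: list, actual: list, eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions.

    expected/actual are per-bin proportions that each sum to 1;
    eps guards against log(0) for empty bins.
    """
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))
```

A PSI above roughly 0.2 is often treated as significant drift, but the alerting threshold is a judgment call for each deployment.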
- Model retraining: Based on the model performance, the model would need to be retrained, in which case some of the following can happen:
- One or more new features may get included
- One or more new models may get included
- Machine learning algorithm may change
- Hyper-parameters may get tuned
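The decision to kick off retraining can itself be codified as a simple governance rule combining performance drop and drift signals. The thresholds below are illustrative assumptions, not recommended values.

```python
def should_retrain(baseline_auc: float, current_auc: float,
                   drift_score: float,
                   max_auc_drop: float = 0.05,
                   max_drift: float = 0.2) -> bool:
    """Trigger retraining on a significant performance drop or data drift."""
    performance_degraded = (baseline_auc - current_auc) > max_auc_drop
    data_drifted = drift_score > max_drift
    return performance_degraded or data_drifted
```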
Data preparation is going to be key when building models to meet telemedicine requirements. This includes some of the following aspects:
- Data gathering
- Data cleansing
- Data annotation
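As a small sketch of the cleansing step above, the record fields, required vitals, and plausibility range below are hypothetical:

```python
def cleanse(records, required=("age", "heart_rate")):
    """Drop records with missing required vitals or implausible values."""
    cleaned = []
    for rec in records:
        if any(rec.get(field) is None for field in required):
            continue                      # missing required field: exclude
        if not (0 < rec["age"] < 120):
            continue                      # implausible age: exclude
        cleaned.append(rec)
    return cleaned
```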
Data security is going to be one of the most important challenges when building models for healthcare requirements. Patient data is critical, and compliance requirements and regulations are in place for the safety of patient data. Some of the following data security controls would need to be put in place:
- Controlled access to data to internal stakeholders including data scientists
- No access to data by external sources unless compliance requirements are met.
- Data security requirements for data at rest and in transit need to be met.
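One control of the first kind, limiting what internal stakeholders such as data scientists can see, can be sketched as keyed pseudonymization of patient identifiers before data reaches analytics datasets. In a real system the key would come from a managed secret store, never from source code as shown here.

```python
import hashlib
import hmac

# Assumption: in production this key is fetched from a secrets manager.
SECRET_KEY = b"replace-with-a-vault-managed-secret"

def pseudonymize(patient_id: str) -> str:
    """Keyed hash so raw patient IDs never reach analytics datasets."""
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

The mapping is deterministic, so records can still be joined across datasets without exposing the original identifier.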
Compliance & Regulations
One of the key compliance-related issues when dealing with machine learning models is change control. When new models are ready to be moved into production, compliance / regulatory requirements dictate that several aspects of the change be documented and approved by a change / risk control board. Doing this for machine learning models can be tricky, as they are a different beast from regular software.
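A lightweight way to make model changes auditable is to record an immutable fingerprint of each model artifact together with the evidence a change / risk board would review. This is a minimal sketch; the field names are illustrative, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def change_record(model_bytes: bytes, version: str, metrics: dict) -> str:
    """Serialize an auditable change record for a new model artifact."""
    record = {
        "version": version,
        # Fingerprint ties the approval to one exact artifact.
        "artifact_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "evaluation_metrics": metrics,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)
```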
Finally, in order to meet telemedicine requirements, one would need to adopt a cloud-native design for telemedicine applications, supporting the need to have parts of the application deployed in the cloud and other parts deployed on-premise. The idea is that the solution design needs to support a hybrid-cloud architecture for both applications and data.