According to the WHO, one in every eight people in the world lives with a mental disorder, a clinically significant disturbance in an individual’s cognition, emotional regulation, or behaviour. What impact do recent advances in artificial intelligence have on mental health? Experts on the topic share their insights.
CLAIRE, the largest European network for research and innovation in the field of Artificial Intelligence (AI), recently hosted two live events at which experts from various fields discussed the intersection of AI-powered tools and mental health. What were their key insights?
“A global issue should be addressed globally”
AI can play an important role in mental health, including in low- and middle-income countries where adequate mental health services are lacking. Africa is beginning to see a rise in mental health cases, noted Kutoma Wakunuma, Associate Professor at De Montfort University (UK). She pointed out that AI could support mental health workers, were it not for a critical problem: AI systems are often trained on datasets that do not represent the realities of the African continent. This can lead to inaccurate diagnoses and exacerbate the divide between countries. Dr. Wakunuma therefore emphasized the importance of inclusive, sustainable and representative data: with such data and proper AI governance, AI has the potential to revolutionize mental healthcare by improving access to care and personalizing treatments.
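To make this risk concrete, below is a minimal, hypothetical sketch of a subgroup performance audit. The synthetic data, the “region” attribute, and the logistic-regression model are illustrative assumptions, not material from the event; the point is that a model trained mostly on one population can score well overall while quietly failing an under-represented group.

```python
# Minimal sketch of a subgroup performance audit (all data is synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical dataset: 90% of samples from region A, 10% from region B,
# with a different feature-label relationship in the minority group.
n_a, n_b = 900, 100
X_a = rng.normal(size=(n_a, 5))
X_b = rng.normal(loc=0.5, size=(n_b, 5))
y_a = (X_a[:, 0] > 0).astype(int)
y_b = (X_b[:, 1] > 0.5).astype(int)  # different signal in region B

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
region = np.array(["A"] * n_a + ["B"] * n_b)

X_tr, X_te, y_tr, y_te, r_tr, r_te = train_test_split(
    X, y, region, test_size=0.3, random_state=0, stratify=region
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# The model mostly learns region A's pattern, so it can look fine
# overall while underperforming on the under-represented region B.
for group in ("A", "B"):
    mask = r_te == group
    print(f"region {group}: accuracy = {accuracy_score(y_te[mask], pred[mask]):.2f}")
print(f"overall:  accuracy = {accuracy_score(y_te, pred):.2f}")
```

Reporting per-group metrics alongside the overall score is one simple way to surface such gaps before a system is deployed in a new context.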
“AI is changing psychiatry for the better”
How exactly can AI be used in psychiatry nowadays? Dr. Giovanni Briganti, Chair of AI and Digital Medicine at UMons (Belgium), shed light on the process of data collection and the use of AI in mental healthcare. AI models are developed using a range of data sources, including behavioural and biological data, medication dosages, responses to tests, and electrophysiology. These models are useful for studying the complexity of conditions such as depression and constructs such as empathy, and for examining whether and how symptoms connect and emerge together. AI is also used to study uncertainty in the evidence and its temporal dependencies, to improve existing tools, and to detect communities of symptoms across different disorders. Another exciting application of AI is the generation of hypotheses that can help advance the field. However, the use of AI is not without ethical concerns, particularly in areas such as facial recognition. It is essential to ensure that AI models are trustworthy and ethically sound, to maximize the benefits while minimizing the risks.
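To illustrate what “communities of symptoms” means in practice, here is a minimal sketch of the kind of symptom-network analysis this research builds on: item scores are correlated, the correlations are turned into a graph, and the graph is partitioned into communities of symptoms that cluster together. The symptom names, the synthetic scores, and the 0.3 edge threshold are illustrative assumptions, not details given by the speakers.

```python
# Minimal sketch of symptom-network community detection (illustrative only;
# real studies use validated clinical instruments, not synthetic scores).
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)

# Hypothetical item scores: 200 respondents x 6 symptoms.
symptoms = ["sad_mood", "anhedonia", "fatigue", "worry", "irritability", "insomnia"]
scores = rng.normal(size=(200, len(symptoms)))
scores[:, 1] += 0.8 * scores[:, 0]  # make the mood-related items correlate
scores[:, 2] += 0.6 * scores[:, 1]
scores[:, 4] += 0.8 * scores[:, 3]  # make the anxiety-related items correlate
scores[:, 5] += 0.6 * scores[:, 3]

corr = np.corrcoef(scores, rowvar=False)  # pairwise symptom correlations

# Keep only edges above an (arbitrary) threshold to form the network.
G = nx.Graph()
G.add_nodes_from(symptoms)
for i in range(len(symptoms)):
    for j in range(i + 1, len(symptoms)):
        if abs(corr[i, j]) > 0.3:
            G.add_edge(symptoms[i], symptoms[j], weight=abs(corr[i, j]))

# Communities = groups of symptoms that hang together more than with the rest.
for k, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"community {k}: {sorted(community)}")
```

On synthetic data like this, the detected communities simply recover the two induced clusters; on real clinical data, such communities can cut across traditional diagnostic boundaries.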
“Validate your AI system!”
The ethical implications of AI in mental health cannot be overlooked, and experts like Ana Maiques, CEO of Neuroelectrics (Spain), are concerned about how AI systems are built and delivered to patients and the wider population. According to Maiques, “mental health requires a different level of careful thought”, as it is a complex issue and we are dealing with the brain and biomarkers. Simon Van Eyndhoven, Imaging Deep Learning Expert at ICOMETRIX (Belgium), agreed that studies in the field of mental health need regulatory oversight and should prove their validity quantitatively: “There are several ways to benchmark measurements, so they are reliable and objective. Unfortunately, the vast majority of AI studies in the literature are not replicated.” Giovanni Briganti urged scientists and developers: “Validate your AI systems and perform replication studies to move the field forward.”
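As a concrete, deliberately simplified illustration of what proving validity quantitatively can look like, the sketch below estimates out-of-sample performance with stratified cross-validation. The synthetic features, the binary label, and the logistic-regression model are assumptions made for the example; none of it describes the panelists’ own systems.

```python
# Minimal sketch of quantitative validation via cross-validation.
# All data here is synthetic; a real study would use clinical data
# and, ideally, a replication on an independent cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(42)

# Hypothetical features (stand-ins for questionnaire and physiological
# measurements) and a binary clinical label.
X = rng.normal(size=(300, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Report the mean and spread of out-of-sample AUC rather than a single
# in-sample figure, which is almost always over-optimistic.
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Cross-validation only establishes internal validity; external validation and independent replication, as Briganti urged, require new data collected by other teams.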
“Therapy” apps without clinical or ethical standards
Beyond research studies, many apps on the market are also not validated. According to Briganti, some do not even reflect the reality of patients’ needs: “Most therapy apps are not trained on data that make sense from a clinical standpoint”. He suggested that app developers should “spend (time) working in a clinical department or include mental health professionals as founders.” While users may believe that these “therapy” apps are trustworthy, there are currently no binding ethical guidelines for companies to follow. CEO Ana Maiques argued that the industry needs to incorporate ethical assessments, and that more ethical questions need to be posed before clinical trials, in order to ensure that mental health apps are safe and effective for users.
“TikTok brain”, polarization and manipulation
Beyond their positive and negative impact on mental health research and applications, AI systems can also directly influence users’ mental health and wellbeing. Research has shown that AI can do harm by reinforcing patterns that make users more dependent on a product (Carl Morch, adjunct professor at UQÀM, Canada). Some users of social media platforms like TikTok “may develop a problematic use pattern manifested by addiction-like undesired behaviors”, write Su et al. (2021). Besides being addictive, recommendation algorithms can also polarize people by serving content aligned with a single ideology, creating a parallel reality. There can be a positive impact on mental health too, for example when people form meaningful relationships with others and nurture them outside the virtual world as well. However, users need to be aware of the risks, and companies should be more transparent and develop ethical tools.
AI can help us better understand and treat mental disorders, but only if data collection and the use of AI models are transparent, inclusive, scientifically validated, and ethical. Challenges remain, such as ensuring data privacy and dealing with the complexity of human emotions and contexts that an AI might not fully comprehend. And while AI can provide valuable support, it does not replace the irreplaceable: the human touch, the empathetic ear, the nuanced understanding of human emotions. AI can do good or harm, depending on how we use it.
More information:
- CLAIRE YouTube Video: AI as a Tool for Mental Health, European AI Week 2023
- CLAIRE YouTube Video: “Impact of AI on Mental Wellbeing”
- Article “How does AI impact mental health?” (in French)
- Ienca et al., “Towards a governance framework for brain data”, Neuroethics 15(2), 2022: https://doi.org/10.21256/zhaw-25339
- Su et al., “Viewing personalized video clips recommended by TikTok activates default mode network and ventral tegmental area”, NeuroImage 237, 2021: https://doi.org/10.1016/j.neuroimage.2021.118136