Language technology and AI can aid mental health during COVID-19 and beyond

Towards sensing moment to moment changes in mental health conditions

Thursday 21 May 2020

This Mental Health Awareness Week occurs while many parts of the world are on lockdown in response to the coronavirus pandemic. In this blog, Turing AI Fellow Maria Liakata looks at how the crisis impacts on our mental health and how AI could help when human support is limited or non-existent during this extraordinary time.


The COVID-19 pandemic has brought about rapid and dramatic changes to our daily lives, with long-term impacts that we are yet to fully understand and reckon with. One change that has affected us all in an unprecedented way has been the requirement for social distancing and what it means for the whole fabric of society as we know it.

An important implication of social distancing, and one with worrisome effects, is the reduced access to vital services such as healthcare. Mental health has only recently started receiving more attention as part of this larger picture. The United Nations just warned of a looming mental health crisis due to the impact of social isolation wrought by the pandemic, “as millions worldwide are surrounded by death and disease and forced into isolation, poverty and anxiety.”

A recent article in The Lancet Psychiatry from 15 April [Holmes et al, 2020] calls for urgent action to increase research and data collection on how the pandemic is affecting mental health, highlighting the need for regular monitoring and reporting of conditions such as anxiety and depression. It argues that such monitoring should go beyond linking information in NHS records to “capturing real incidence in the community” and that “techniques assessing moment to moment changes in psychological risk factors should be embraced.”

Socially distanced mental health monitoring and reporting with AI

There is great potential for using language technology and AI to help with monitoring moment to moment changes in mental health conditions on the basis of language and other digital data that we as individuals produce in our daily lives. This data may be posts on social media or online platforms, messages we exchange or any spoken or written text that we produce (e.g. on our phones or computers).

It can also include non-language data such as digital interactions and preferences, images and video or data such as movement and location information from fitness devices. I refer to all this data as “user generated content” (UGC).

Research at the intersection of mental health and AI has been exploring methods for making use of UGC to supplement medical evidence for different diagnoses (e.g. depression, PTSD, schizophrenia) or mental health risks and indicators (e.g. suicidal ideation). As a first step, aggregate statistics over UGC can be used to identify trends in the wider population.

For example, experts have teamed up with multiple digital services providers to capture insights from millions around the world on the scale of the mental health consequences of the ongoing pandemic and lockdowns [Inkster et al., 2020].

In a number of studies, machine learning models have been trained on properties of the linguistic content of social media posts (e.g. words or phrases) to make mental health assessments about the nature of the posts (e.g. indicative of depression) [De Choudhury, Counts & Horvitz, 2013; Balani & De Choudhury, 2015] or, given a set of posts, about the individuals who have authored them (e.g. having PTSD) [Coppersmith, Harman & Dredze, 2014; Hovy, Mitchell & Benton, 2017].
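To make this concrete, here is a minimal, illustrative sketch of this kind of post-level classifier, built from off-the-shelf scikit-learn components. The posts and labels are toy placeholders, and this is not the actual model from any of the studies cited above.

```python
# A minimal, illustrative sketch (not the models from the cited papers):
# classify individual posts as indicative of depression or not, using
# simple word/phrase features. Posts and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "had a lovely walk in the park today",
    "can't sleep again, everything feels pointless",
    "great catch-up call with friends",
    "no energy to get out of bed this week",
]
labels = [0, 1, 0, 1]  # 1 = indicative of depression (toy annotation)

# Word and bigram features, TF-IDF weighted, fed to a linear classifier
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

print(model.predict(["feeling exhausted and hopeless lately"]))
```

In practice such models are trained on far larger, carefully annotated datasets, and the choice of features and labels is itself a substantial research question.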

Mental health in real-time: Can we sense moment to moment changes to improve care?

Most of the work on characterising mental health from UGC, even when it involves a collection of social media posts by a single individual, does not consider how that individual’s mental state changes over time. Exceptions include work that considers whether an individual may change their posting behaviour on social media between two separate chronological periods [De Choudhury et al., 2016] and work predicting psychological distress scores at different points in life from school essays [CLPsych 2018].

Another limiting factor of such methods is that they usually only consider a single type of UGC, most often text. What happens, for example, when someone stops posting? Work by [Tsakalidis et al, 2016] was the first to combine language data and mobile phone usage information to make predictions about users’ mental well-being over time.
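As a rough illustration of combining heterogeneous UGC (and not a reproduction of the method in [Tsakalidis et al, 2016]), one could concatenate simple text features with phone-usage features for each user-day and fit a regressor against a self-reported well-being score:

```python
# A toy sketch of early fusion of heterogeneous UGC: per-day text features
# concatenated with (hypothetical) phone-usage features, regressed against
# a self-reported well-being score. All data here is made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

texts = ["slept badly, skipped lunch", "fun evening with family", "quiet day"]
usage = np.array([
    [3.2, 42],  # hours of screen time, messages sent (illustrative features)
    [1.1, 15],
    [2.0, 8],
])
wellbeing = np.array([3.0, 8.0, 6.0])  # self-reported score, e.g. on a 0-10 scale

text_features = TfidfVectorizer().fit_transform(texts).toarray()
features = np.hstack([text_features, usage])  # simple early fusion

model = Ridge().fit(features, wellbeing)
print(model.predict(features[:1]))
```

Even this naive setup raises the questions discussed below: how should the non-text signals be chosen and scaled, and how should such a model be evaluated over time and across different people?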

Subsequent work by [Tsakalidis et al, 2018] showed that popular evaluation settings within the research community fail to penalise systems that cannot generalise well to individuals whose data they encounter for the first time. There is therefore a general need for rigorous standards for evaluating such well-being applications and their forecasting ability before they can be used in the real world.
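This evaluation pitfall is easy to reproduce on synthetic data: if days from the same person appear in both the training and test sets, a model can effectively “recognise” the person rather than learn anything that generalises, which inflates its scores compared with an evaluation that holds out whole individuals (e.g. scikit-learn’s GroupKFold). The data and model below are purely illustrative.

```python
# Synthetic demonstration: a random split rewards memorising who the person
# is, while a split by user (GroupKFold) does not.
import numpy as np
from sklearn.model_selection import GroupKFold, KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n_users, days, dim = 20, 30, 5
user_ids = np.repeat(np.arange(n_users), days)

user_style = rng.normal(size=(n_users, dim))           # stable per-user "style"
X = user_style[user_ids] + 0.1 * rng.normal(size=(n_users * days, dim))
y = np.repeat(rng.normal(size=n_users), days)          # per-user well-being level

model = KNeighborsRegressor(n_neighbors=5)
random_cv = KFold(5, shuffle=True, random_state=0)
user_cv = GroupKFold(5)

print("random split R^2:", cross_val_score(model, X, y, cv=random_cv).mean())
print("unseen-user R^2:", cross_val_score(model, X, y, cv=user_cv, groups=user_ids).mean())
```

On this synthetic data the random split scores far higher than the unseen-user split, which is exactly the gap such evaluations can hide.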

As part of my Turing AI fellowship on “Creating time sensitive sensors from language and heterogeneous user-generated content”, my colleagues and I are working on addressing some of these challenges. One of our primary objectives is the identification of moments of change in the mental state of an individual, especially in rapidly evolving situations such as COVID-19. These are defined as points or periods in time signalling a change in a user’s digital behaviour (e.g. their mood shifts, or something happens in their life which they talk about).
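As a deliberately simplified sketch of what a moment-of-change detector might look for (real systems would use richer signals and proper change-point models), consider flagging days on which the average of a per-day signal derived from UGC, such as a sentiment score, shifts abruptly:

```python
# A naive change-point sketch on a synthetic daily sentiment score:
# flag days where the mean over the next week differs sharply from the
# mean over the previous week. Signals and thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(1)
# 60 days of a daily sentiment score, with a drop after day 40
sentiment = np.concatenate([
    rng.normal(0.3, 0.1, 40),
    rng.normal(-0.2, 0.1, 20),
])

window = 7

def change_scores(x, w):
    # |mean of the next w days - mean of the previous w days| at each day t
    return np.array([
        abs(x[t:t + w].mean() - x[t - w:t].mean())
        for t in range(w, len(x) - w)
    ])

scores = change_scores(sentiment, window)
candidates = np.nonzero(scores > 0.3)[0] + window  # approximate day indices
print("candidate moments of change around days:", candidates)
```

A real detector would need to cope with missing days, multiple interacting signals and individual baselines, which is precisely why the richer, jointly learned representations discussed below are needed.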

Moments of change will allow for a more robust, independent and explainable ground truth (the known information that a model anchors its learning to) than a flat categorical value (e.g. depressed or not depressed). They can also complement psychological scale scores, which can be difficult to obtain at scale as ground truth, particularly in times of major crises such as the COVID-19 pandemic.

We are also working on how to jointly represent the different types and modalities (e.g. language, voice, digital interactions) of UGC produced by an individual at different points in time, so that the combination of UGC can adequately describe the state of an individual in a dynamic way. We also plan to subsequently align these dynamic user representations with known moments of change in a user’s mental health condition.
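One possible shape such dynamic representations could take, sketched here purely for illustration rather than as our project’s actual architecture, is to fuse per-day modality vectors and run a recurrent encoder over a user’s timeline:

```python
# An illustrative (not project-specific) timeline encoder: per-day text and
# usage vectors are fused and passed through a GRU, yielding one dynamic
# representation of the user per day. Dimensions are placeholders.
import torch
import torch.nn as nn

class UserTimelineEncoder(nn.Module):
    def __init__(self, text_dim=32, usage_dim=8, hidden_dim=64):
        super().__init__()
        self.fuse = nn.Linear(text_dim + usage_dim, hidden_dim)
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, text_feats, usage_feats):
        # text_feats: (batch, days, text_dim); usage_feats: (batch, days, usage_dim)
        fused = torch.relu(self.fuse(torch.cat([text_feats, usage_feats], dim=-1)))
        states, _ = self.rnn(fused)
        return states  # one representation per user per day

encoder = UserTimelineEncoder()
daily_states = encoder(torch.randn(1, 30, 32), torch.randn(1, 30, 8))
print(daily_states.shape)  # torch.Size([1, 30, 64])
```

Aligning such per-day representations with annotated moments of change would then require a suitable prediction head and training objectives, which is where our planned work comes in.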

Understanding why the model decides there is a moment of change, and then automatically summarising moments of change across time along with the potential reasons behind them, will be very important for depicting the trajectory of a person’s well-being and for helping experts and individuals with moment by moment monitoring of mental health. Such summaries can also help us better understand common patterns of behaviour across different users.

Datasets with reliable and frequent ground truth values are crucial to the applicability of this type of real-world mental health monitoring and related work, which would be extremely useful in situations such as the current pandemic. Collaboration with digital health providers as well as health services will therefore be essential.

During the process of mental health monitoring from UGC we have to ensure that important ethical implications are addressed: for example, who will have access to the mental health sensors, and how do we make sure they are only used to increase our understanding of the complexities surrounding mental health conditions and to benefit those affected?

We are using Turing resources to help us address such issues and handle sensitive data with great care. We welcome input from stakeholders including the general public and health practitioners through membership of our stakeholder committee. For details on how to join our project as a stakeholder please contact [email protected].


Note: The author would like to thank her collaborators Adam Tsakalidis, Becky Inkster, and Federico Nanni for their input to this blog.

Further reading

[Holmes et al, 2020] Holmes, E. A., O'Connor, R. C., Perry, V. H., Tracey, I., Wessely, S., Arseneault, L., ... & Ford, T. (2020). Multidisciplinary research priorities for the COVID-19 pandemic: a call for action for mental health science. The Lancet Psychiatry.
https://www.thelancet.com/journals/lanpsy/article/PIIS2215-0366(20)30168-1/fulltext

[Inkster et al., 2020] Inkster B, O’Brien R, Niederhoffer K, Bidargaddi N, McIntyre RS, Bowden-Jones H, Torous J, Insel T. Early warning signs of a mental health tsunami: Initial data insights from digital services providers during COVID-19. https://www.beckyinkster.com/covid19

[De Choudhury, Counts & Horvitz, 2013] Munmun De Choudhury, Scott Counts, and Eric Horvitz. 2013. Social media as a measurement tool of depression in populations. In Proceedings of the 5th Annual ACM Web Science Conference (WebSci ’13). Association for Computing Machinery, New York, NY, USA, 47–56. DOI: https://doi.org/10.1145/2464464.2464480

[Balani & De Choudhury, 2015] Sairam Balani and Munmun De Choudhury. 2015. Detecting and Characterizing Mental Health Related Self-Disclosure in Social Media. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15). Association for Computing Machinery, New York, NY, USA, 1373–1378. DOI:https://doi.org/10.1145/2702613.2732733

[Coppersmith, Harman & Dredze, 2014] Coppersmith, G., Harman, C., & Dredze, M. (2014, May). Measuring post traumatic stress disorder in Twitter. In Eighth international AAAI conference on weblogs and social media.

[Hovy, Mitchell & Benton, 2017] Hovy, D., Mitchell, M., & Benton, A. (2017). Multitask Learning for Mental Health Conditions with Limited Social Media Data. In EACL (1) (pp. 152-162).

[De Choudhury et al., 2016] Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Discovering Shifts to Suicidal Ideation from Mental Health Content in Social Media. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). Association for Computing Machinery, New York, NY, USA, 2098–2110. DOI:https://doi.org/10.1145/2858036.2858207

[CLPsych 2018] Lynn, V., Goodman, A., Niederhoffer, K., Loveys, K., Resnik, P., & Schwartz, H. A. (2018, June). CLPsych 2018 shared task: Predicting current and future psychological health from childhood essays. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic (pp. 37-46).

[Tsakalidis et al, 2016] Tsakalidis, A., Liakata, M., Damoulas, T., Jellinek, B., Guo, W., & Cristea, A. (2016, December). Combining heterogeneous user generated data to sense well-being. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers (pp. 3007-3018).

[Tsakalidis et al, 2018] Tsakalidis, A., Liakata, M., Damoulas, T., & Cristea, A. I. (2018, September). Can we assess mental health through social media and smart devices? Addressing bias in methodology and evaluation. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 407-423). Springer, Cham.