Is Behavioral Activation the Killer App for Mobile Sensing?

A presentation at the Center for Behavioral Intervention Technologies (CBITs), Northwestern University Feinberg School of Medicine.

Transcript of Video Presentation

VR: An Example

I wanted to start off by showing you this picture of my lovely wife and here she is with a VR headset on and she’s currently exercising. I think she might be playing Supernatural or Beat Saber or some product like that, and she absolutely loves it. The thing that’s interesting about this, from my point of view, is that she’s not a particularly “tech forward” person. She tends to be a late adopter, so the fact that she is using this device, the Oculus, and using it for exercise is interesting.

I think that it relates to this question. The reason I mention this is because sometimes we don’t really know what the killer app will be for a particular product.

With virtual reality, you can see this report from 2018 in the tech press speculating that VR did not have a killer app that convinces us to buy it. Then this year, another article looked at the idea that the best thing to do in VR is to work out.

Here’s a quote from Chris Milk, who is the CEO of the company that produces one of the platforms for VR workouts: “Fitness is the killer use case for VR; it will be the first driving force for mass adoption.” Importantly he says, “What we’re seeing is a 50/50 split between women and men. 60 percent of our users are over 40. This is not what a typical VR demographic looks like.”

The point is that when we very first introduced technology like VR, there was a strong interest in its application, for example, in gaming.

There was some disappointment with the development of that application. We really do need to look at these technologies once they’re in the hands of people and find out what the real killer application might be: the one that people will really love and that will drive them to use the technology.

Passive Mobile Sensing

What I want to do is look at this with respect to passive mobile sensing or personal sensing. There are various phrases for it, but the idea is that we can collect data from what’s generated as people use their mobile and wearable devices in their day-to-day activities naturalistically.

There’s been a lot of interest in this. I’m very interested in it. I know that the CBITs group and many others have been interested in this topic. And you can see why this is such an attractive proposition to people who are interested in behavioral research or who are interested in behavioral interventions.

The first thing is that it’s objective: we can collect these data based on objective variables rather than relying purely on self-report. It’s ecological, so you’re collecting data in the person’s natural environment. It can be collected in an unobtrusive way, so unlike self-report rating scales and the like, the person doesn’t have to interrupt their ongoing behavior in order to provide data. It provides a very rich, individual data set which can be used to understand changes from a person’s normal functioning, within that individual, rather than comparing them to other people. It creates the opportunity for real-time analysis of those data, and therefore for real-time intervention. And because it’s based on a device that most people already own (the smartphone), it’s highly scalable.

This matters because I’m very interested in teenagers and young adults, youth generally, as a critical group for mental health support and intervention. In this group, the proportion of people who own their own smartphone and use it extensively has been rising dramatically, and would have risen even further since 2018 (when these data were collected). We can also see that not only are people using these devices, but their centrality and importance in people’s day-to-day lives is increasing, as indicated by the growing role of social media, video, and texting in how people communicate within their personal relationships.

Those of us in mental health looked at the emergence of this new technology and thought it offers some exciting and unique opportunities, particularly in the area of measurement of mental health, because it’s a unique proposition that we haven’t had before.

The Current State of Mental Health Practice

First, I want to step back and look at where we are currently in mental health practice and the fact is that we do have standardized questionnaires and interviews, but only a small percentage of working clinicians out in the field use these measures, even the old-fashioned paper and pencil ones.

Gilbody, S. M., House, A. O., & Sheldon, T. A. (2002). Psychiatrists in the UK do not use outcomes measures. British Journal of Psychiatry, 180(2), 101-103. doi:10.1192/bjp.180.2.101

Hatfield, D. R., & Ogles, B. M. (2004). The Use of Outcome Measures by Psychologists in Clinical Practice. Professional Psychology: Research and Practice, 35(5), 485–491. https://doi.org/10.1037/0735-7028.35.5.485

About 10 percent of psychiatrists, and less than 40 percent of clinical psychologists, routinely use these measures in their clinical practice. Yet we know from meta-analytic reviews and other authoritative sources that when you do conduct routine outcome monitoring, it has a positive impact on a range of critical clinical processes, including diagnosis, treatment, and communication between the client and the practitioner. We do want people to use these methods more. And of course, it’s one thing to say, “Well, clinicians don’t use these methods, but surely in research we’re doing a better job.”

All research does involve systematic collection of data, but one of the things that’s very notable is the absence of objective measures.

When we look at the methods that are used in depression treatment trials, in an analysis that we did recently, you can see that the vast majority of them are using self-report only, or clinician-observed or interview report, which of course is still based on self-report. A tiny sliver are using some kind of objective measurement, the most common of which is an objective measure of medication use and compliance.

The question is: what’s wrong with relying so extensively on self-report? It makes a certain amount of sense, especially with mental health conditions like depression and anxiety, because the core symptoms are essentially phenomenological, so obviously we do want people to tell us how they’re feeling in this regard.

However, one of the important things we know is that in a range of different areas of research, when we have objective measures as well as subjective measures, they often don’t correlate very highly. Sometimes they don’t correlate at all. Examples include studies comparing self-report versus objective measures of condom use. Sleep is an area that I’ve done some research in, where we look at, for example, wrist actigraphy or polysomnographic measurement of sleep versus self-report.

Substance use is another, where you can compare, say, urine screens with self-reports of substance use. In every one of these areas we find that although self-report is a valuable data point, it does not tell us the whole story.

Having that objective data is valuable to round out the picture of what’s happening with people in terms of this area of functioning.

When it comes to mobile sensing and all these tantalizing possibilities, the first killer app that people very quickly gravitated towards, and I would say this includes myself, is the idea of digital biomarkers for symptoms and diagnosis. You can see some of these press reports, of which there have been many, and we’re still quite regularly seeing this intriguing idea that your smartphone might know when you’re depressed in some way that you don’t. It just makes great journalistic copy.

The idea that this new kind of data, continuous, passive, and collecting multiple streams of objective behavior, could improve our diagnosis is an intriguing one. But I do think there are some important caveats that we need to bear in mind.

One of them would be the very long and expensive effort that has been put into the area of biological psychiatry. I’ve been a participant in this to some extent: I’ve done extensive research on neuroimaging and a little bit of research on genetics.

Once again, the hope was that a biomarker would emerge from this work, but this statement from a recent paper in the Annual Review of Neuroscience really sums up what many people in the field are saying:

“As our understanding of the neurobiological and cognitive correlates of mental health and mental illness has grown through decades of research, one thing has become clear: Things are more complicated than we might have hoped. The notion of one-to-one mappings between abnormalities in specific brain areas or cognitive markers and individual categories from the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (APA 2013) has all but been abandoned.” – Gillan & Rutledge, Annual Review of Neuroscience, 2021

Gillan, C. M., & Rutledge, R. B. (2021). Smartphones and the Neuroscience of Mental Health. Annual Review of Neuroscience, 44(1), 129-151. doi:10.1146/annurev-neuro-101220-014053

So, this is a cautionary tale that we in digital measurement of behavior should bear in mind and take very seriously.

Why are biomarkers so hard?

One question is: why are biomarkers so hard? A lot of it has to do with, and I’m certainly not unique in saying this, our gold standard of measurement.

Our gold standard measurement is usually a diagnosis or a symptom rating scale, generally a self-report rating scale, sometimes a clinician rating scale.

The criteria that we’re using there are fundamentally descriptive and not mechanistic, especially when they’re based on systems like the DSM and the ICD.

The problem we have is: if we don’t know what the ultimate underlying mechanism is that determines the presence or absence of the condition, then how can you be sure that you’re really measuring the right thing?

This is what comes to mind from my graduate training. When we were studying intelligence, we were told the old quote that “Intelligence is what intelligence tests test.” To a certain extent, depression is what depression tests test, when you don’t have this underlying mechanistic understanding.

Depression is Highly Heterogeneous in its Presentation

The second issue that makes biomarkers so hard is that depression is highly heterogeneous in its presentation.

Here is some data from a study of people who were diagnosed with moderate to severe depression in the STAR*D study, one of the very significant, large-scale multi-center treatment studies that’s been conducted. You can see that even in this group, where everyone has moderate to severe depression, there’s enormous variability in the specific symptoms associated with those presentations, and this makes the task of predicting this ultimate criterion very difficult.

Fried, E. I., & Nesse, R. M. (2015). Depression is not a consistent syndrome: An investigation of unique symptom patterns in the STAR*D study. Journal of Affective Disorders, 172, 96-102. doi:10.1016/j.jad.2014.10.010

Multifinality and Equifinality

Finally, one other important consideration is the pair of concepts of multifinality and equifinality, and what they mean.

Multifinality means that one risk factor or underlying mechanism can lead to multiple psychopathological or mental health outcomes.

Equifinality means that there are multiple different risk factors or risk processes that can result in a similar clinical presentation.

Davey, C. G., Yücel, M., & Allen, N. B. (2008). The emergence of depression in adolescence: Development of the prefrontal cortex and the representation of reward. Neuroscience & Biobehavioral Reviews, 32(1), 1-19. doi:10.1016/j.neubiorev.2007.04.016

This means that finding patterns of behavior, even in a very rich data set like the one we have in digital phenotyping or mobile sensing, and using them to predict symptoms or diagnoses, is always going to be challenging because of the heterogeneity of the presentation, but also the heterogeneity of the etiology, of the causes.

Finally, there’s the question of: Is automated diagnosis really what clinicians want?

There are some reasons to suspect that it may not be. The first thing is that what clinicians really care about are the insights that drive treatment planning.

There’s this question of diagnostic versus transdiagnostic treatment planning. What that essentially means is that just simply knowing someone has depression or they have a certain anxiety disorder is usually not enough, especially in behavioral treatments, to really define the treatment planning. We need to take those extra steps to make an individual case conceptualization or behavioral analysis to then drive the treatment planning process.

You can see in the figure here a whole range of these transdiagnostic processes (and there are many others) which can be targeted across different diagnoses. Knowing which ones are going to be relevant to which case is not well answered simply by being told that someone has a certain diagnosis.

The other thing that clinicians really want to know is understanding functioning and recovery in vivo. What’s happening in the person’s real day-to-day life as opposed to, for example, how they might look during a clinical consultation?

The other thing that clinicians really care about, we have found, is feedback to adjust their therapeutic strategy: knowing what’s working and what’s not working in terms of the therapeutic strategy that they’re pursuing. We’ve seen some confirmation of this from a recent report on a Google X project looking at the use of EEG to diagnose depression. As part of the project they did some user research, some customer discovery; they went and talked to clinicians, and here is what they concluded.

They said: “Our initial hypothesis was that clinicians might use a “brainwave test” as a diagnostic aid. However, this concept got lukewarm reception.” In many ways because clinicians think they’re not too bad at diagnosing the problem. “By contrast there was very strong interest in using technology as a tool for ongoing monitoring, capturing the changes in mental health state over time–to learn what happens between visits.” – Google X Blog, Project Amber, 2020

This suggests that this initial application of mobile sensing may not have been the one that is really addressing the primary clinical concern.

Challenges in Mental Health Care

I want to talk about some of the challenges that we have in mental health care and why we need innovation, including but not limited to digital innovation.

This is a diagram from a recent report on mental health and digital technologies. It points out that for every 10 people in the community with a mental health problem, because of various kinds of processes, we only see about half of one person (1 in 20) receiving full benefit from mental health care. This is, of course, in high-income countries like the United States.

The current mental health care system is not effective for 90 percent of the people with the disorder in the community. This means that we’ve got a range of challenges that we need to address with our innovation.

  • One is prevention: How do we stop people getting to the point where they have a mental health disorder?
  • The second one is access: How do we make sure that more people can get access to treatment?
  • Then there’s quality: Making sure that they get access to good quality care that is consistent with evidence-based protocols.
  • And finally, effectiveness: Discovering new and more effective methods of intervention.

The approach that most people in the digital area have taken is to say that what we really need to do is improve access. Digital is highly scalable; it can really improve access and solve the access problem, and that’s a very important part of the puzzle.

However, we do have some indications that simply solving the access problem by itself is not enough.

For example, this is a paper from a colleague of mine in Australia, Tony Jorm, and collaborators, where they looked at trends over time in access to mental health care in high-income countries like the USA, Canada, the UK, and Australia. What they found is that access to treatment had increased quite notably in all these contexts.

Jorm, A. F., Patten, S. B., Brugha, T. S., & Mojtabai, R. (2017). Has increased provision of treatment reduced the prevalence of common mental disorders? Review of the evidence from four countries. World Psychiatry, 16(1), 90-99. doi:10.1002/wps.20388

Yet there was no evidence of a reduction in the prevalence of symptoms or disorders over the same period. If anything, particularly in the case of young people, there are indications that things may have headed in the opposite direction.

What this tells us is that although clearly the access problem is fundamental, if we don’t solve the quality and effectiveness problems as well, then increasing access is not going to get us to where we want to be, where we’re really bending the curve in terms of the overall burden of mental health problems in the community.

I think one of the things that’s important for improving this quality is understanding that although we’ve invented new treatments and new approaches, we’ve really continued to deliver them primarily through a system that we’ve used for over 100 years, which is this system of office-based clinical consultations.

One of the things that I’d point out is that we know that this particular pattern of delivering services is a very poor match for what we know about the fundamentals of behavior change.

Principles of Behavior Change

For example, the principles of behavior change that are well established in pretty much all areas of behavior change and skill acquisition say that you must have something like these elements:

  • First, a clear description of the new skills to be learned along with modeling of those skills.
  • Second, the opportunity for practice with timely feedback.
  • And then, specific procedures for generalizing these into real life circumstances.

This is true no matter whether you’re trying to learn to be more interpersonally assertive, or if you’re trying to do some exposure work, or if you’re trying to learn to throw a football or drive a car.

Yet what happens with the way we currently deliver mental health services is a little bit like a football coach who says to the players: “I’m not going to come to the game. I’m not even going to look at any tape. I’m just going to have you come and talk to me once a week and tell me how you think you played, and then I’m going to tell you how I think you should play better next week. But I’m not going to come to the next game. I’m just going to wait for you to come and talk to me next week and tell me how you think you played in that game.”

I put it to you that this approach to sports coaching would be considered quite absurd, and yet in many ways that’s what we are proposing when we limit our interaction with clients to these office-based consultations and hope that the kind of hard behavior change that we’re asking them to do will somehow generalize.

Mobile Sensing

Turning to mobile sensing: as you can probably tell from my title, I think this is where mobile sensing may have an important contribution to make.

I want to tell you a little bit about some of the work that we’ve done with mobile sensing. We have developed a research platform called the EARS platform, the Effortless Assessment Research System.

Our approach essentially combines the collection of mobile sensing data and ecological momentary assessment data, and it has some particular emphases.

We really have emphasized a phone-only approach. We do not use wearables at this stage, because we want this approach to be highly scalable with people’s personal smartphones.

We also don’t ask people to do anything unusual with their smartphone; they use it naturalistically. The system collects data continuously, which allows us to look at rhythms and patterns over time. The data streams we collect include the accelerometer, GPS, the keyboard, and a range of measures of screen touches, phone use, and app usage. From these we extract features relating to language and cognition (particularly from the keyboard), sleep, physical activity, mobility, facial expressions in people’s selfies, music choice, circadian patterning, and many other things.
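To make that pipeline concrete, here is a minimal sketch, assuming a hypothetical schema, of how raw streams like screen events and keyboard entries might be aggregated into a daily feature table; the column names, app categories, and values are illustrative and are not the actual EARS implementation.

```python
import pandas as pd

# Hypothetical raw streams: screen-on timestamps and keyboard entries.
# Column names and categories are illustrative, not the actual EARS schema.
screen = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2021-05-01 07:30", "2021-05-01 12:10", "2021-05-01 23:40",
        "2021-05-02 08:05", "2021-05-02 21:15",
    ]),
})
keyboard = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2021-05-01 12:11", "2021-05-01 12:12", "2021-05-02 21:16",
    ]),
    "app_category": ["social_media", "sms", "social_media"],
    "word_count": [14, 6, 22],
})

# Aggregate each stream to one row per day, then join into a daily feature table.
screen["date"] = screen["timestamp"].dt.date
keyboard["date"] = keyboard["timestamp"].dt.date
daily = pd.DataFrame({
    "screen_unlocks": screen.groupby("date").size(),
    "words_typed": keyboard.groupby("date")["word_count"].sum(),
    "social_media_words": (
        keyboard[keyboard["app_category"] == "social_media"]
        .groupby("date")["word_count"].sum()
    ),
}).fillna(0)
print(daily)
```

Each day then becomes one row of features that can be related to that day’s self-report or other external validators.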

Perhaps you’ve heard people in this series speak about a number of these measures. One that we’ve done quite a bit of work on, and that you might be interested to hear a bit more about, is the language data.

I’ll emphasize that in what I’m going to present next.

Language Data

One of the things that we do see with the language data is that there is high variability, as you might expect, in how much language people type into their phone daily.

This is an average daily word count from a group of adolescent participants, and you can see that social media accounts for a huge portion of where those keystrokes are entered, with SMS and instant messaging being the second most common, and email being a very unlikely use for this group. We have done a number of studies now looking at the association of different patterns of language use and keyboard use with important external validators.

Using Mobile Sensing Data to Assess Stress

So, for example, this is a paper that’s just recently been published in Digital Health. The lead author is Michelle Byrne, my former postdoc, who’s now working back in Australia. She and Monika Lind, co-lead author, examined the associations of various aspects of the patterns of language collected from keyboard behavior with various measures of stress and inflammation. We had measures of inflammatory factors as well as measures of stress and psychopathology.

Byrne, M. L., Lind, M. N., Horn, S. R., Mills, K. L., Nelson, B. W., Barnes, M. L., . . . Allen, N. B. (2020). Using Mobile Sensing Data to Assess Stress: Associations with Perceived and Lifetime Stress, Mental Health, Sleep, and Inflammation. doi:10.3123

  • One thing that we see that’s very interesting, from a lot of these heavy lines, is that the strongest, or most consistent, associations with the language variables are not so much with the psychopathology measures as with the stress measures. That’s important because stress is one of those transdiagnostic processes that is relevant across diagnoses and that would then drive a particular transdiagnostic treatment approach.
  • The second thing is that cross-method correlations, especially between the language measures and the salivary measures of inflammatory factors, are harder to demonstrate than correlations between the language measures and self-report.

Adolescent Smartphone Language, Internalizing Symptoms, and Mood

A second paper, currently under review, that I’ll tell you about was first-authored by my graduate student Elizabeth McNeilly. It’s a study that we did with a group of 13-year-old adolescents, collecting these messages over a number of weeks.

The final data set had 21,000 individual messages and we subjected those to linguistic analysis of their content.

McNeilly, E. A., Mills, K. L., Kahn, L. E., Crowley, R., Pfeifer, J. H., & Allen, N. B. (2021). Adolescent Social Communication through Smartphones: Linguistic Features of Internalizing Symptoms and Daily Mood. doi:10.31234/osf.io/6gdkf

Number of Words Being Used

One of the key findings from this study, consistent with what was seen in the previous study, is that there was an association between the number of words being used and someone’s well-being or mood on a given day. This was particularly strong for those who generally have low levels of well-being, indexed by the red regression line shown.

What we see is that for those people, on days when they’re experiencing low well-being, you see them typing more words into the system.

The other aspect of this that’s interesting is the content of the language.

First-Person Pronouns

One of the features that we’ve found is that the use of first-person pronouns is often associated with higher levels of psychological distress. The heavy line represents the association between how someone said they felt on a given day and their likelihood of using first-person pronouns.

You can see here that on days when they have lower well-being or lower mood, they’ve got a higher likelihood of using those first-person pronouns. But what’s interesting here is all these individual lines, which represent the individual participants. As you can see, for the vast majority of them, there is some kind of negative association that is reliable within individuals as well as between individuals, suggesting that this is in fact a good way of tracking these day-to-day fluctuations.
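For those curious how within-person and between-person associations like these are typically estimated, here is a hedged sketch using a linear mixed-effects model with a random slope for each participant; the simulated data and variable names are purely illustrative and are not the study’s actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate illustrative daily observations: each participant contributes a month of
# daily mood scores and first-person pronoun rates, with a person-specific slope.
rng = np.random.default_rng(0)
rows = []
for pid in range(30):
    slope = rng.normal(-0.5, 0.2)      # most people: lower mood -> more "I/me/my"
    for day in range(28):
        mood = rng.normal(0, 1)
        pronoun_rate = 5 + slope * mood + rng.normal(0, 0.5)
        rows.append({"participant": pid, "mood": mood, "pronoun_rate": pronoun_rate})
data = pd.DataFrame(rows)

# Random intercept and random slope of mood per participant, so the model captures
# the average (between-person) association and the person-specific (within-person) ones.
model = smf.mixedlm("pronoun_rate ~ mood", data,
                    groups=data["participant"], re_formula="~mood")
print(model.fit().summary())
```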

Use of Present-Tense Verbs

Another linguistic marker that was found to be interesting in this study was the use of present-tense verbs. Once again, the same idea: when people had lower levels of well-being, there was a higher likelihood of them using present-tense verbs, as opposed to other tenses.

Geographic Mobility

Other variables that we’ve looked at include measures of geographic mobility. One variable that I know has been important in some meta-analyses and systematic reviews of mobile sensing and mental health is the proportion of time people spend at home every day.

Now, this is some preliminary data from a study we’re doing of young people at risk for suicide. We’ve got a group of nine young people who had a suicide risk event leading up to this endpoint of the data, and we’re looking at the proportion of time they were spending at home. You can see the rise in that variable that occurs prior to the risk state. This, of course, needs to be validated with larger samples, which is part of an ongoing study that we’re doing now.

I will point out that these data are prior to the COVID pandemic, so the effects of stay-at-home protocols are not reflected in these data.
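As a rough illustration of how a home-stay feature like this can be computed, here is a minimal sketch that infers a “home” location from overnight GPS fixes and then computes the daily proportion of fixes near it; the 100-metre radius, the midnight-to-6am window, and the column names are illustrative assumptions rather than the actual EARS algorithm.

```python
import numpy as np
import pandas as pd

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * np.arcsin(np.sqrt(a))

def home_stay_proportion(gps: pd.DataFrame, radius_m: float = 100.0) -> pd.Series:
    """gps has columns 'timestamp' (datetime), 'lat', 'lon', sampled at regular intervals.
    Returns the proportion of each day's fixes that fall within radius_m of 'home'."""
    gps = gps.copy()
    # Estimate "home" as the median location of overnight (midnight to 6am) fixes.
    night = gps[gps["timestamp"].dt.hour < 6]
    home_lat, home_lon = night["lat"].median(), night["lon"].median()
    gps["at_home"] = haversine_m(gps["lat"], gps["lon"], home_lat, home_lon) <= radius_m
    return gps.groupby(gps["timestamp"].dt.date)["at_home"].mean()
```

A rising value of that daily series, relative to a person’s own baseline, is the kind of signal described above.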

Sleep

Finally, another important variable that we’ve been looking at a lot is sleep and the validity of detecting sleep periods with a phone-only approach.

This is just using naturalistic phone behavior. Here you can see data from a series of individuals where the phone-only approach was used to estimate bedtimes and rise times over time. You can see that there is some consistency in those variables, as well as some misbehavior in the estimates that we get here. However, we do see that the overall phone-only approach provides a pretty good, quite sensible distribution of these variables.

Perhaps more importantly, when we compare the phone-only data to sleep diary data, we see that it shows a good level of accuracy.

This lower set of figures represents the accuracy of the EARS phone-only approach compared to the accuracy of a wrist actigraph (a research-grade wrist actigraph), and in every case you want these distributions to be as close as possible. You can see that the phone-only approach is doing as well as, if not a little better than, the wrist actigraph.
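For a sense of how a phone-only sleep estimate can work, here is a deliberately simplified sketch that treats the longest nightly gap between screen-on events as the estimated sleep period; this is an illustrative assumption about the general approach, not the actual EARS algorithm.

```python
import pandas as pd

def estimate_sleep_period(screen_on: pd.Series):
    """screen_on: datetime Series of screen-on timestamps within one noon-to-noon window.
    Returns (estimated_bedtime, estimated_risetime) based on the longest usage gap."""
    events = screen_on.sort_values().reset_index(drop=True)
    gaps = events.diff()                  # time between consecutive screen-on events
    end_of_gap = gaps.idxmax()            # index of the event that ends the longest gap
    return events[end_of_gap - 1], events[end_of_gap]

# Example with invented timestamps for one night.
events = pd.Series(pd.to_datetime([
    "2021-05-01 21:40", "2021-05-01 23:05", "2021-05-01 23:30",
    "2021-05-02 07:10", "2021-05-02 07:45",
]))
print(estimate_sleep_period(events))      # ~23:30 bedtime, ~07:10 rise time
```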

We have some very promising markers of important psychological processes here.

Why Behavioral Activation?

Now I want to talk a little bit about why behavioral activation might be the killer app: the approach that is really going to have the greatest clinical utility for this way of measuring behavior.

I’ll start out with a brief description of what behavioral activation is.

Behavioral Activation Defined

Behavioral activation is a very strongly evidence-based approach to the treatment of depression and anxiety and to increasing well-being in non-clinical groups. There’s also a small amount of data showing some utility in reducing risk for suicide. It’s a very straightforward, very simple approach to psychological intervention that basically says what we want to do is change a vicious cycle that characterizes people who are experiencing depression. For example, a person is feeling low, so they stop doing things that really matter to them: valued actions, the things that give them a sense of enjoyment, importance, self-esteem, or pleasure. They therefore get less out of life, which makes them feel lower, and this vicious cycle keeps going.

The therapy says one point of intervention is here, at this behavioral point, and so we change that by understanding how the person can do more of what matters, to promote them getting more out of life, with the goal of improving their mood, which then of course makes it easier for them to do more of what matters, and you get into a virtuous cycle.

Behavioral Activation Components

Behavioral activation broadly has a couple of key components to it:

One is activity monitoring, so this is the tried-and-true method.

It looks a bit daunting, as I’m sure it does to many clients who are given it for the first time, but basically we get people to record what they do on an hour-to-hour basis, and to rate everything they do in terms of enjoyment or pleasure, and mastery or accomplishment. From that you get an individualized pattern of which activities promote good mood and well-being for this individual, and that provides a map for treatment planning. I want to make it clear that although this is a very long-standing approach to psychological therapy, it is inherently a personalized therapy in this sense. A lot of the talk that we have now about personalization can leverage this kind of methodology. Once we understand which activities support good mood and well-being for this person, then of course you want to prescribe more of those activities and problem-solve to allow the person to engage in them more often.
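As a toy illustration of that individualized pattern, here is a hedged sketch of an hour-by-hour activity log with pleasure and mastery ratings, summarized per activity; the activities and ratings are invented for illustration and are not drawn from any client record.

```python
import pandas as pd

# Invented one-day activity log; each entry is rated 0-10 for pleasure and mastery.
log = pd.DataFrame([
    {"hour": "08:00", "activity": "walk with dog",   "pleasure": 7, "mastery": 5},
    {"hour": "09:00", "activity": "scrolling phone", "pleasure": 3, "mastery": 1},
    {"hour": "10:00", "activity": "work emails",     "pleasure": 2, "mastery": 6},
    {"hour": "18:00", "activity": "call a friend",   "pleasure": 8, "mastery": 4},
    {"hour": "20:00", "activity": "scrolling phone", "pleasure": 2, "mastery": 1},
])

# Average ratings per activity highlight which activities support mood and accomplishment
# for this individual, providing a simple map for scheduling more of the high-value ones.
summary = (log.groupby("activity")[["pleasure", "mastery"]]
              .mean()
              .sort_values("pleasure", ascending=False))
print(summary)
```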

These often progress from graded task assignments, where you break the activity down into small steps, to assignments where you ask the person to tackle several steps at once to get to the goal. That’s pretty much it. It’s a very straightforward approach.

Behavioral Activation Measurement

The reason this is particularly suitable for mobile sensing is that mobile sensing can passively measure some of the critical aspects of behavior that we’re interested in: physical activity, social connection, mobility, and, although it’s not always core to behavioral activation, sleep, which is often a key component of behavioral treatment of depression. And there is a strong evidence base for behavioral activation’s effectiveness.

There have been multiple meta-analyses showing that this is an effective approach, and it’s also very simple to administer.

Cost and Outcome of Behavioural Activation versus Cognitive Behavioural Therapy for Depression (COBRA): a randomised, controlled, non-inferiority trial. Richards et al., The Lancet, 2016

I’ll quickly describe this complex graph. This is from an important study that was published in 2016 comparing cognitive behavior therapy to behavioral activation. One of the things that they looked at was not only the effectiveness, but also the cost effectiveness, of the intervention.

This diagram represents the trade-offs between effectiveness (behavioral activation being more effective or less effective than cognitive behavior therapy) and cost (more expensive versus less expensive). This simulation of different outcomes based on the clinical trial data suggests that the majority of the time, 66% of the time, the outcome would be that behavioral activation is both more effective and less expensive to administer.
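To illustrate the kind of simulation being described, here is a hedged sketch of a bootstrap over a cost-effectiveness plane that counts how often the resampled difference lands in the “more effective and less costly” quadrant; the per-patient numbers are invented and this is not the COBRA trial’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented per-patient outcomes (symptom improvement) and costs for two trial arms.
ba_effect, ba_cost = rng.normal(9.0, 4.0, 200), rng.normal(900, 250, 200)
cbt_effect, cbt_cost = rng.normal(8.5, 4.0, 200), rng.normal(1100, 250, 200)

dominant, n_boot = 0, 5000
for _ in range(n_boot):
    ba_idx = rng.integers(0, len(ba_effect), len(ba_effect))
    cbt_idx = rng.integers(0, len(cbt_effect), len(cbt_effect))
    d_effect = ba_effect[ba_idx].mean() - cbt_effect[cbt_idx].mean()
    d_cost = ba_cost[ba_idx].mean() - cbt_cost[cbt_idx].mean()
    if d_effect > 0 and d_cost < 0:   # BA more effective AND cheaper than CBT
        dominant += 1

print(f"Behavioral activation dominant in {dominant / n_boot:.0%} of bootstrap replicates")
```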

Another thing that we know about behavioral activation is that it may be, in fact, the most effective component of internet delivered cognitive behavior therapy.

Dismantling, personalising and optimising internet cognitive-behavioural therapy for depression: a study protocol for individual participant data component network meta-analysis. Furukawa et al., The Lancet Psychiatry, 2021

This is another interesting paper that was published just this year in The Lancet Psychiatry by Furukawa and colleagues. They did a component network meta-analysis of a wide number of randomized trials of internet-delivered cognitive behavior therapy. What they found was that behavioral activation was probably the strongest predictor of positive outcomes, along with a few other key components. Interestingly, relaxation was in fact the one factor that they found was a negative predictor of outcomes.

The Problem of Engagement with Consumer Facing Digital Mental Health Apps

The final piece of the puzzle that I want to talk about, which I know people at your center are very familiar with because you’ve been thinking and writing about this extensively, is how we encourage engagement with consumer-facing digital mental health apps. These curves will be very familiar to folks: there is a very rapid drop-off of usage. This is true of all applications on phone operating systems, but it is also true of mental health apps. You can see that the one group of apps that seem to do slightly better are those involved in peer support, although there are relatively few observations contributing to that.

Baumel, A., Muench, F., Edan, S., & Kane, J. M. (2019). Objective User Engagement With Mental Health Apps: Systematic Search and Panel-Based Usage Analysis. Journal of Medical Internet Research, 21(9). doi:10.2196/14567

Despite the appeal of the scalability associated with a patient- or consumer-facing app alone, we feel there is still very strong importance in keeping a human in the loop in some way. This is supported by a range of studies; one example is Pim Cuijpers’ meta-analysis of different formats for delivering CBT for depression.

Cuijpers, P., Noma, H., Karyotaki, E., Cipriani, A., & Furukawa, T. A. (2019). Effectiveness and Acceptability of Cognitive Behavior Therapy Delivery Formats in Adults With Depression. JAMA Psychiatry, 76(7), 700. doi:10.1001/jamapsychiatry.2019.0268

You can see that the group, telephone, individual, and guided self-help formats all have similar effectiveness, with unguided self-help, without a human in the loop, being the one that didn’t have a significant effect size associated with it.

Pain Points of Behavioral Activation

So, what are some of the particular pain points that people have in delivering this promising technique, behavioral activation? Anyone who’s done clinical work will be very familiar with these; I certainly am.

  • One is the completion of monitoring. This is a huge bugbear for anyone who’s doing behavioral interventions. Getting people to complete assessments between appointments that give you a real sense of what’s happening in between is often the exception rather than the rule. Not surprisingly, because you’re working with people who are really struggling, in general, in their lives. Therefore, adding the extra burden of that assessment is often pretty tough for them.
  • There’s the accuracy of the monitoring: Am I really getting accurate insight?
  • There’s the completion of behavioral assignments once they’ve been set,
  • and then monitoring the effectiveness of these behavioral assignments.

Improving Mental Health through Personalized Data

We have developed, largely through our company Ksana Health I should point out, an approach to digital intervention that tries to bring together these various streams, to leverage the advantages of mobile sensing for behavioral activation and clinical practice that I’ve just pointed out.

In the mobile app, we just ask people the same two questions every day that are asked by the standard monitoring scale used in behavioral activation.

One is, “How much did you enjoy yesterday?” and the other is “How much did you grow yesterday?” We recently changed this language to “How much did you accomplish yesterday?” which we feel is a better descriptor of the concept we’re trying to get at.

Then in the background, all the mobile sensing data is collected and quantified and turned into features, such as features of sleep, physical activity, social connection, patterns of language and social communication, and geographic mobility. The individual does not have to effortfully log or journal to achieve that data collection.

What we get from that, ultimately, is that after the person has answered those questions for around 10 days, we can give them feedback on which activities are, for them individually, associated with good mood and well-being, based on their objectively measured mobile sensing behavior.

Of course, that then provides a series of intervention targets that are personalized to that individual, so in this case, more sleep might be a particular intervention target.
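As a rough sketch of the kind of analysis that could generate this feedback, here is a hypothetical example that correlates each daily sensing feature with the daily enjoyment rating over a ten-day window and surfaces the strongest positive associations; the feature names, data, and simple correlation approach are illustrative assumptions, not the Vira algorithm.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
days = 10

# Hypothetical daily features from mobile sensing, plus the daily enjoyment rating.
features = pd.DataFrame({
    "sleep_hours": rng.normal(7, 1, days),
    "steps": rng.normal(6000, 2000, days),
    "minutes_messaging_friends": rng.normal(30, 15, days),
    "home_stay_proportion": rng.uniform(0.4, 0.9, days),
})
enjoyment = (0.8 * (features["sleep_hours"] - 7)
             + 0.0004 * (features["steps"] - 6000)
             + rng.normal(5, 0.5, days))

# Rank features by their correlation with enjoyment; the strongest positive ones
# become candidate personalized intervention targets (e.g., "more sleep").
print(features.corrwith(enjoyment).sort_values(ascending=False))
```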

Vira Pro Clinical Dashboard

These data can be used in a self-care mode, but they can also be shared with a practitioner through a web portal that we’ve built. This allows the practitioner to see these data, use them for treatment planning, and build out a series of notifications or nudges that then go out to the person in their day-to-day life to support the behavior change plan they’re trying to follow through with. In this respect, we’re trying to solve some of those pain points that I described in the delivery of behavioral activation. The collection of the data is much less burdensome. It’s continuous. It’s objective and ecological, so hopefully the accuracy is stronger. The system also provides support for positive behavior change that isn’t just limited to the clinical consultation and the advice delivered then, but is actually pushed out in real time, at times of the therapist’s design. This is why we are proposing that this might be a more productive way of leveraging mobile sensing in clinical practice, rather than using it primarily for automating diagnosis, which has some inherent challenges associated with it.

Automated diagnosis also seems to be an area that is less in demand from clinicians. Instead, we collect the personal sensing data, use it to understand modifiable behavior patterns that can be targeted for behavior change, and thereby produce a clinical outcome.

Just-in-time Adaptive Interventions

Ultimately, in the future, we’re really interested in the concept of just-in-time adaptive interventions. The idea of a just-in-time adaptive intervention is that the intervention is designed to provide just-in-time support by adapting to the dynamics of the individual’s internal state and context, the critical point being that these are measured continuously. Mobile sensing provides a potential solution to that.

Nahum-Shani, I., Smith, S. N., Spring, B. J., Collins, L. M., Witkiewitz, K., Tewari, A., & Murphy, S. A. (2017). Just-in-Time Adaptive Interventions (JITAIs) in Mobile Health: Key Components and Design Principles for Ongoing Health Behavior Support. Annals of Behavioral Medicine.
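As a deliberately simplified illustration of a just-in-time decision rule, here is a hedged sketch that compares today’s sensed state against a person’s own recent baseline and triggers a supportive nudge when it drifts; the thresholds, features, and message are invented and do not represent a validated JITAI policy.

```python
from dataclasses import dataclass
from statistics import mean, stdev
from typing import List, Optional

@dataclass
class DailyState:
    home_stay_proportion: float   # fraction of the day spent at home
    sleep_hours: float

def maybe_nudge(history: List[DailyState], today: DailyState) -> Optional[str]:
    """Nudge only when today drifts meaningfully from the person's own baseline
    (home-stay more than 1.5 SD above their mean) and sleep is short."""
    baseline = [d.home_stay_proportion for d in history]
    threshold = mean(baseline) + 1.5 * stdev(baseline)
    if today.home_stay_proportion > threshold and today.sleep_hours < 6:
        return "You've been home a lot and slept little; how about a short walk today?"
    return None

# Usage with invented data: two weeks of baseline, then a day that looks concerning.
history = [DailyState(0.55 + (i % 5) * 0.02, 7.5) for i in range(14)]
print(maybe_nudge(history, DailyState(home_stay_proportion=0.92, sleep_hours=5.0)))
```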

Digitally Enhanced Mental Health Care

Finally, our ultimate vision is to provide, through the combined use of the smartphone app (with mobile sensing, smart nudges, and communication tools) and the practitioner dashboard, an ability for the person to move up and down different levels of care without the friction that characterizes these transitions now. The person can move from self-care with the app, to getting some automated nudges, to a text-based coaching approach, to telehealth or face-to-face in-person therapy, all through an application that gives them access to the level of care they need at a particular time. This also has a very good value proposition for health care systems, because by using this kind of stepped-care approach, they’re able to maximize the resources available for mental health care and focus the more expensive and time-intensive resources on those who need them most.

So that’s the ultimate vision. I can’t say we’re there yet, but it’s always good to know what the north star is. With that, I will thank you very much for your attention and for the kind invitation to speak to you, and I’m very happy to answer any questions or have any discussion that people might like to have.

Q&A

Question

Speaking from primary care practice, I worry this is like other solutions I’ve seen for cardiac, exercise, functional status, cognitive, and other monitoring. We have no capacity to receive and act on more data. We’re already awash in data that we cannot consume. The killer app is actually going to have to deliver less, better data.

Answer

That is not the first time I’ve heard that. We are aware that busy practitioners, particularly in primary care settings, need data that comes to them in an extremely usable way. You’ve got short consultations and it needs to be very pragmatic. I do want to emphasize that the system I just briefly showed you is one that is designed for specialist mental health care, and particularly for people who are delivering interventions like cognitive behavior therapy or behavioral activation.

However, it’s entirely configurable for these other forms of practice. The other thing we ultimately want to be able to do (this is not where we are right now) is to build a system that can intelligently nudge not only the service user but also the clinician, to automatically highlight aspects of the data that they need to pay attention to, a little bit like when you’re looking at a blood screen and you see those out-of-range values very quickly, so that you can make a judgment about whether they have clinical significance for the care of your patient.

I totally hear what you’re saying. It’s something that we’re working on, and we’re keen to work with people in primary care to understand the user experience, so that we can build a system that people really want to use and that doesn’t make them feel awash in data.

Question

It makes sense that you’re starting with a behavioral activation framework, because geolocation data are some of the more reliable features that we have. But I’m also thinking that when we detect different kinds of things that we haven’t had access to, it opens the opportunity to think about new ways of intervening with people. I’m particularly intrigued with what you’re seeing from the keyboard data. It’s similar to what we’re seeing through, not keyboard data, but sentiment analysis that we’re getting from many of the same kinds of features. We’re starting to think about: now that we can detect how people are speaking, and that that’s an indicator of well-being, how can we leverage that to begin to intervene with people? Is how people speak a reflection of how they’re doing, or can changing how people speak change how they’re doing? If you speak more positively to people, you may develop a more positive attitude. I wonder if you’ve thought at all about how to leverage that data stream?

Answer

We are finding that the language data is fascinating and of course language is the stuff of psychotherapy. It’s the core data of the traditional approaches. I think you put your finger on an interesting point, which is: “Is the language primarily a sort of an informative assessment or is it actually a modifiable factor, something that you would target for modification in and of itself?”

I think we don’t know the answer to that question. There is certainly literature showing that when you speak differently, in a different mode, it changes your mood. I think the intriguing possibility is, for example, with something like first-person pronoun use: if it reflects an underlying concept like excessive self-focus, then those data could suggest that cognitive methods aimed at reducing self-focus, which we know is a correlate of emotional distress, are an appropriate approach with this individual. The other thing is that you could use it as a target of intervention. You say, “Okay, this is the skill we want you to practice. When you’re interacting with other people, we want you to practice focusing on the other person a bit more,” something that we often do with people who have social anxiety, for example. Or being more positive. Then the system could give you feedback about that, which also gets back to one of those principles of behavior change: if you give people feedback in a timely way about how they’re doing, that’s going to be more effective than if they must wait for the appointment to get it.

Question

As folks learn the skills from the digital mental health tools and begin to remit, and may not need digital tools as much, do you think this partially explains engagement drop-off we typically see in the use of digital tools? I’m wondering if we need to rethink our engagement standards.

Answer

There are two ways to think about this: One is that the drop-off is a big problem because behavior change is hard and it’s going to take some time and we want people to engage with these processes over a longer period to actually produce meaningful behavior change. That’s one perspective.

The second perspective is that we should design the interventions so that they realistically fit into the usage patterns that people have. I think that’s also a very compelling idea.

I think the open question for me is what is the minimum dose that we can provide that produces meaningful behavior change. There are people playing around with single session and very brief interventions. I think we’ll learn a lot more about that over the next period of time. For me, I’m still perhaps a little conservative and I’m in the camp that behavior change is hard, and it takes time. I would rather try and do what we can to maximize the length of engagement. My hypothesis is that that’s what we’re going to need for significant clinical effects. But I would love to be proven wrong, and to know that we could intervene more briefly and still have really good effects.

Question

Do you include cultural and value factors in behavioral activation?

Answer

The short answer: absolutely. This is one of the things that’s tricky about behavioral activation and why it is good to have a therapist or a human in the loop in some way. Something like accomplishment, mastery, or growth, that dimension, is not just about doing a lot of work. It’s more about values-based action that is meaningful to you in terms of your personal goals. It may be something that’s got nothing to do with your career. It may be to do with caregiving, with volunteering, or with community engagement. Understanding at an individual level, and getting people to reflect on, what are the things that give them a sense of accomplishment in the broadest sense of the term is a core feature of behavioral activation. I think that’s a very important part where cultural and personal values are critical.

Question

Mobile sensing data are quite noisy. In your opinion, what is the state of the art in using passive data to generate automated nudges, and what is the impact of false positives or false negatives of these nudges on trust in digital biomarkers?

Answer

This is another area where I’m more comfortable building systems that keep a human in the loop in some way. I’ll explain why: even in traditional delivery of evidence-based therapies, like cognitive therapy and behavioral activation, every intervention you suggest is a hypothesis. You really don’t know fully. It’s a hypothesis, hopefully, that’s based on careful assessment and case conceptualization, but you need to try it to see if it has clinical benefit. I think this is important in this nudging as well: we need to not assume that what’s emerging from the data is accurate. The ultimate test of its accuracy is in fact its utility, its utility in terms of the clinical treatment. If the sensing data suggests something, and you try it out and it results in clinical improvement, then I think the case is proven. That makes it even more important that we continuously assess and validate that the interventions we’re trying with service users are resulting in benefits for them.

Nick Allen

7 September 2021
