Artificial Intelligence’s Role in the Future of Mental Illness

Kevin Haube
10 min read · Dec 17, 2020

Where are we at?

It’s no secret we’re in the midst of the next great evolution in our age of technological advancement; Artificial Intelligence (AI) and Machine Learning (ML) are everywhere, whether they’re deciding which advertisements reach an end user on Instagram or piloting a Tesla on I-405 during rush hour. While machines hold the computational advantage over humans when learning from preprocessed arithmetic data, the human brain far outshines the machine at processing organic sensory data, performing over one hundred trillion calculations in unison (du Toit, 2019). Little wonder, then, that only 3.79% of psychiatrists surveyed believe their jobs will become fully obsolete with the advent of AI (Doraiswamy et al., 2020), and they are right to be skeptical. AI acts as a mirror, learning only from the data it is shown, and in recent years the public has become aware of a multitude of biases that could plague autonomous AI without proper precautions. These machines are highly literal; in the words of Aylin Caliskan, a professor of Computer Science at George Washington University, “…machines are trained on human data, and humans are biased” (Resnick, 2019). What we should be asking ourselves, then, is this: what role can, and should, AI play in research on and therapies for the human brain? To fully address this question, one must be willing to separate technical capabilities from their ethical boundaries and understand that possibility does not always denote human benefit. Still, given the advanced modeling capabilities of Machine Learning, it’s not hard to imagine that AI will play a large role in accurately diagnosing the most common yet mystifying neurological disorders and mental illnesses, in choosing the best form of treatment for the individuals who suffer from them, and in providing new and innovative ways to address the most life-threatening situations those illnesses create.

Machine Learning uses linear algebra to build increasingly accurate models capable of data analysis and pattern detection at levels humans cannot match, and can thus help demystify some of psychology and neuroscience’s most intriguing topics while providing more accurate diagnoses of disorders and illnesses. In fact, preliminary work is already being done in neuroscience and psychiatry to model Alzheimer’s disease. Through exploratory research utilizing machine learning, researchers have found a distinct link between beta-amyloid, the protein that forms plaques in the brains of Alzheimer’s patients, and the neurodegenerative process. This will likely spark further analysis, hypotheses, and research leading not only to earlier detection of neurodegenerative diseases, but also to insight into other cognitive dysfunctions and pharmacological methods to treat them (Tai et al., 2019). In other studies, machine learning algorithms have analyzed MRI data to accurately distinguish scans of healthy brains from those of schizophrenic brains, even accurately predicting the patient’s age (Tai et al., 2019). Both exploratory and descriptive research are important for furthering our understanding of these poorly understood illnesses. Through extensive exploratory research utilizing AI and ML, we can not only reach conclusions otherwise inaccessible to the human mind, but also use those newfound conclusions as the cornerstones of better-developed hypotheses for descriptive research to confirm or deny.
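To make the idea concrete, here is a minimal sketch of the kind of supervised scan classification described above, written in Python with scikit-learn. It is not the pipeline from Tai et al. (2019); the “regional volume” feature layout and the synthetic data are stand-in assumptions for illustration only.

```python
# A minimal sketch of supervised classification of brain-scan features.
# NOT the actual pipeline from Tai et al. (2019); the features below are
# synthetic stand-ins for preprocessed MRI measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Pretend each row is a vector of regional gray-matter volumes
# extracted from one MRI scan (hypothetical feature layout).
n_scans, n_regions = 200, 50
X_healthy = rng.normal(loc=1.0, scale=0.1, size=(n_scans, n_regions))
X_patient = rng.normal(loc=0.9, scale=0.1, size=(n_scans, n_regions))
X = np.vstack([X_healthy, X_patient])
y = np.array([0] * n_scans + [1] * n_scans)  # 0 = healthy, 1 = patient

# Hold out a test set so the reported accuracy reflects unseen scans.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

Real studies layer far more on top of this, from scan preprocessing to cross-validation, but the core pattern of learning a decision boundary over extracted features is the same.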

For a real-world therapeutic application, consider Andy Blackwell’s work at the intersection of AI and mental health treatment. Blackwell poses the question in his March 2020 TED Talk: “Wouldn’t it be nice if we could get it right the first time?” (Blackwell, 2020). Comparing psychotherapy to pharmacological treatment, which itself has variable success rates on a first, second, or even third prescription, he notes that we know the active ingredients in a prescribed pill, and that the pill remains consistent day after day and week after week; by contrast, we have no prior indication of the effective “ingredient” in the more than one hundred different psychotherapeutic methods used by licensed therapists across the world. What is more, those methods’ effectiveness rates are far less substantial, averaging a 50% likelihood of recovery when psychotherapy is used to treat depression (Blackwell, 2020). The approach Blackwell and his colleagues took is exemplary of how AI can aid psychiatrists today: they used a combination of symptom tracking and recorded therapeutic sessions to create a unique fingerprint for both patient and therapist, effectively matching a patient with the therapist and therapeutic method best fit to their unique needs, and saw success rates improve from 52% to 60% between 2015 and 2019 (Blackwell, 2020). Gone are the days when we should assume a one-size-fits-all approach, especially when such a diverse, unique organ, and the life of a human being, lie in the balance.
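As an illustration of the fingerprint idea, the sketch below represents each patient and therapist as a feature vector and pairs the patient with the most similar therapist via cosine similarity. The feature layout, the therapist names, and the similarity metric are hypothetical assumptions, not Blackwell’s published method.

```python
# A hedged sketch of "fingerprint" matching: pair a patient vector with
# the most similar therapist vector. Features and metric are illustrative
# assumptions, not Blackwell and colleagues' actual system.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two fingerprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(patient: np.ndarray, therapists: dict) -> str:
    """Return the therapist whose fingerprint best matches the patient's."""
    return max(therapists, key=lambda name: cosine_similarity(patient, therapists[name]))

# Hypothetical fingerprint dimensions: e.g. symptom profile, session-language
# markers, and weights over preferred therapeutic modalities.
patient = np.array([0.8, 0.1, 0.6, 0.3])
therapists = {
    "therapist_a": np.array([0.7, 0.2, 0.5, 0.4]),
    "therapist_b": np.array([0.1, 0.9, 0.2, 0.8]),
}
print(best_match(patient, therapists))  # -> therapist_a
```

The interesting design work lies in what goes into those vectors; the matching step itself can be this simple.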

Photo by Jesse Chan on Unsplash

The Ethics of Where We’re Going

Artificial Intelligence exists in three subcategories: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence (du Toit, 2019). Blackwell and colleagues have employed Artificial Narrow Intelligence (ANI), an AI built to perform one specific task rather than mimic the true intelligence of humans, in their efforts to better match a patient to a therapist and psychotherapeutic method. While this use of ANI has generally been perceived as a success, there is no shortage of ethical concerns surrounding its use in healthcare. Common concerns tend to center on discrimination, lack of privacy, and lack of transparency about how the systems are built and operate, and despite actions being taken to prepare for these understandable concerns, enforcing them is often an entirely different struggle altogether (Mörch et al., 2020).

An approach taken by many has been to develop checklists. The Canada Protocol, an ethical checklist for the use of artificial intelligence in suicide prevention and mental healthcare, is one such list. The approach: build a team of experts, ethicists, and professors across the psychological and computer sciences to develop questions, extract evidence, and draft recommendations, then have them reviewed by a panel of peers for further revision. The result was a list of 44 items, divided by type of concern: “Description of the Autonomous Intelligence System”, “Privacy and Transparency”, “Security”, “Health-Related Risks”, and “Biases” (Mörch et al., 2020). This ethical checklist outlines the actions necessary to provide a safe product to the public, including transparency surrounding data collection, manipulation, and sharing, as well as the system’s levels of autonomy and moderation. To an end user, this type of information would typically be presented in the form of an End User License Agreement. While these ethical concerns are constantly being weighed and addressed, it is imperative to note that legal precedent has yet to be established for the misuse and inevitable shortcomings of Artificial Intelligence in healthcare, especially in high-stakes use cases such as suicide prevention.

In accordance with the Canada Protocol, developers and doctors can ideate new and innovative uses for ANI systems within mental health research and treatment while remaining ethically sound. One such use, inspired by the 2013 film Her, which stars Joaquin Phoenix as a lonely individual who falls in love with his AI-based virtual assistant, voiced by Scarlett Johansson, could be a virtual assistant serving as a biometric-driven emergency responder for individuals experiencing panic attacks, anxiety attacks, various forms of depressive episodes, and more. Wearable technologies such as the Apple Watch or Fitbit Charge could provide the input a trained machine learning algorithm needs to accurately predict whether an individual is suffering a mental health emergency or simply enjoying a jog through the neighborhood. For example, a heart rate monitor with additional capabilities such as GPS tracking and the ability to tell whether or not you’re standing up could transmit those three inputs to an artificially intelligent system to determine whether a raised heart rate is due to high-intensity activity or a panic attack. In a similar vein, researchers in Korea achieved success rates over 78.4% in predicting whether an individual suffers from panic disorder or some other form of anxiety disorder using a logistic regression machine learning algorithm and one singular input: heart rate variability (Na et al., 2021). Utilizing the same “fingerprint” method Blackwell and colleagues used to match a patient to a clinician and methodology, this program could likewise match a user to a trained personality. Whether a virtual assistant would make a patient feel more secure with a male or female voice, which words would likely calm a panic-stricken individual, and whether a sufferer is capable of responding vocally are all within the realm of possibility for properly trained artificially intelligent systems.
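To ground the wearable scenario, here is a minimal sketch of a logistic regression classifier that separates “panic episode” windows from “exercise” windows. The three inputs (heart rate, heart rate variability, step rate) and the synthetic training data are illustrative assumptions; this is not the Na et al. (2021) model or dataset.

```python
# A minimal sketch: classify a window of wearable readings as a panic
# episode vs. exercise. Inputs and data are illustrative assumptions,
# not the Na et al. (2021) study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 300

# Exercise: high heart rate, moderate HRV, high step rate.
exercise = np.column_stack([
    rng.normal(150, 15, n),   # heart rate (bpm)
    rng.normal(40, 10, n),    # heart rate variability (ms)
    rng.normal(160, 20, n),   # steps per minute
])
# Panic episode: high heart rate, low HRV, little movement.
panic = np.column_stack([
    rng.normal(140, 15, n),
    rng.normal(15, 5, n),
    rng.normal(5, 3, n),
])

X = np.vstack([exercise, panic])
y = np.array([0] * n + [1] * n)  # 0 = exercise, 1 = panic

model = LogisticRegression(max_iter=1000).fit(X, y)

# A racing heart while nearly stationary should flag as likely panic.
reading = np.array([[145.0, 14.0, 4.0]])
print("panic probability:", model.predict_proba(reading)[0, 1])
```

Any real deployment would of course demand clinically validated training data and the Canada Protocol’s transparency obligations discussed below.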

To abide by the Canada Protocol, we must remain fully transparent in the development of our theoretical system. First, our objective: to mitigate the damage of triggered responses to panic disorders, anxiety disorders, and mood disorders. Additionally, we will need written, verbal, or virtual content outlining our funding, any potential conflicts of interest, credentials, target population, any evidence supporting our marketing claims, our testing protocol, and any complaints expressed about our system. From there, an End User License Agreement would need to be created and presented in an ethical way; in short, it should not be a long legal document with an easy-to-click “Agree” button in our software, but rather a bulleted outline showcasing key information such as legal responsibility, data collection practices, accessibility, consent and withdrawal of consent, access to one’s data, the right to be forgotten, and data collection from minors, among other security-related practices. Lastly, messaging surrounding health risks and potential biases must be presented in an easily digestible manner. Considering the 56% rise in suicide rates for people ages 10–24 between 2007 and 2017 (Tanzi, 2019), these communications should be tailored toward younger audiences, potentially through social media and online entertainment hubs like YouTube.

This thought exercise was meant to develop a safe and ethical business plan surrounding the release of a theoretical AI-based mental health platform. However, this technology would still exist within the realm of Artificial Narrow Intelligence, the lowest of the three tiers of Artificial Intelligence. In a survey conducted by Oracle, 80% of respondents were willing to receive therapeutic services from a robot (Hickins, 2020). While modern-day robots are capable of mimicking human behavior, the role of counselor and therapist requires emotional cognition and advanced sentiment analysis that ANI is simply not capable of; this level of mental health treatment by Artificial Intelligence certainly requires Artificial General Intelligence. Artificial General Intelligence is where machines transcend single-function uses and begin to mimic human intelligence, even going as far as to become sentient, driven by emotional desires and capable of abstract thought (du Toit, 2019). This level of AI has yet to be achieved, which gives the human species the perfect window to address its use cases and ethical concerns the same way we have with Artificial Narrow Intelligence.

Though they are driven by similar technology, ANI and AGI differ drastically in their ethical concerns; we shift away from the comparatively simple privacy and security concerns of the present day into far more complex questions like the value of an AI life, cohabitation of our species with another fully sentient lifeform, and even “playing God”. Humanity’s desire to transcend its God-given right to rule over His creation and enter a realm where it is itself the creator has been documented through a multitude of historic events and religious texts (du Toit, 2019). If we are to create machines in our likeness, designed with brains that function as ours do and give rise to consciousness, what is stopping those machines from becoming as fallible as humans in the eyes of their creator? To better raise and address the concerns of Artificial General Intelligence, the human species must make drastic strides in understanding its own consciousness and the value and rights associated with its own lives, and come to an agreement on a baseline moral and ethical code.

In conclusion, Artificial Intelligence can and should play an assisting role in the fields of mental health, with proper precautionary measures taken to ensure the safe and ethical implementation of autonomous systems that rely on such sensitive data. As the technology evolves over the coming centuries, however, the discussion surrounding ethical approaches to development and implementation must continue. In preparation for the potential sentient evolution of artificially intelligent systems, humans must keep making strides in neuroscience and the behavioral sciences so that we are prepared to raise the right concerns the way we have with Artificial Narrow Intelligence, and the human species must agree on a bedrock moral and ethical code before creating another sentient race through Artificial General Intelligence. For now, though, we should continue to evolve our uses and ethical understanding of Artificial Narrow Intelligence in healthcare in an effort to save lives and understand the illnesses that plague the human brain.

Works Cited:

Blackwell, A. (2020). Artificial Intelligence Meets Mental Health Treatment [TED Talk]. Retrieved from https://www.ted.com/talks/andy_blackwell_artificial_intelligence_meets_mental_health_therapy

Doraiswamy, P. M., Blease, C., & Bodner, K. (2020). Artificial intelligence and the future of psychiatry: Insights from a global physician survey. Artificial Intelligence in Medicine, 102. https://doi-org.ezproxy.umgc.edu/10.1016/j.artmed.2019.101753

du Toit, C. W. (2019). Artificial intelligence and the question of being. Hervormde Teologiese Studies, 75(1), 1–10. https://doi-org.ezproxy.umgc.edu/10.4102/hts.v75i1.5311

Hickins, M. (2020). Why Most People Trust Robots Over Other People for Mental Health. eWeek. https://search-ebscohost-com.ezproxy.umgc.edu/login.aspx?direct=true&db=iih&AN=146348039&site=eds-live&scope=site

Mörch, C.-M., Gupta, A., & Mishara, B. L. (2020). Canada protocol: An ethical checklist for the use of artificial intelligence in suicide prevention and mental health. Artificial Intelligence in Medicine, 108. https://doi-org.ezproxy.umgc.edu/10.1016/j.artmed.2020.101934

Na, K.-S., Cho, S.-E., & Cho, S.-J. (2021). Machine learning-based discrimination of panic disorder from other anxiety disorders. Journal of Affective Disorders, 278, 1–4. https://doi-org.ezproxy.umgc.edu/10.1016/j.jad.2020.09.027

Resnick, B. (2019, January 24). Alexandria Ocasio-Cortez says AI can be biased. She’s right. Vox. https://www.vox.com/science-and-health/2019/1/23/18194717/alexandria-ocasio-cortez-ai-bias

Tai, A. M. Y., Albuquerque, A., Carmona, N. E., Subramanieapillai, M., Cha, D. S., Sheko, M., Lee, Y., Mansur, R., & McIntyre, R. S. (2019). Machine learning and big data: Implications for disease modeling and therapeutic discovery in psychiatry. Artificial Intelligence In Medicine, 99. https://doi-org.ezproxy.umgc.edu/10.1016/j.artmed.2019.101704

Tanzi, A. (2019). Suicide Rates for U.S. Teens and Young Adults on the Rise. Bloomberg.com. https://search-ebscohost-com.ezproxy.umgc.edu/login.aspx?direct=true&db=heh&AN=139183166&site=eds-live&scope=site
