What 100 suicide notes taught us about creating more empathetic chatbots
AI needs to learn more about the language of despair to better address sensitive conditions
Negative sentiment and constrictive thinking
As one would expect, many phrases in the notes we analyzed expressed negative sentiment such as:
There was also language that pointed to constrictive thinking. For example:
The phenomenon of constrictive thoughts and language is well documented. Constrictive thinking frames a prolonged source of distress in absolute terms.
For the author in question, there is no compromise. The language that manifests as a result often contains terms such as either/or, always, never, forever, nothing, totally, all and only.
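To make this concrete, below is a minimal sketch of how a chatbot might flag constrictive language using the absolute terms listed above. The word list and threshold are illustrative assumptions, not a validated clinical instrument.

```python
import re

# Absolute terms associated with constrictive thinking (listed above).
# Both the list and the flagging threshold are illustrative assumptions.
CONSTRICTIVE_TERMS = {
    "either/or", "always", "never", "forever",
    "nothing", "totally", "all", "only",
}

def constrictive_score(text: str) -> float:
    """Return the fraction of words in `text` that are absolute terms."""
    words = re.findall(r"[a-z/]+", text.lower())
    if not words:
        return 0.0
    return sum(w in CONSTRICTIVE_TERMS for w in words) / len(words)

message = "Nothing will ever change. It will always be this way."
if constrictive_score(message) > 0.1:  # illustrative threshold
    print("Possible constrictive language detected.")
```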
Language idioms
Idioms such as “the grass is greener on the other side” were also common — although not directly linked to suicidal ideation. Idioms are often colloquial and culturally derived, with the real meaning being vastly different from the literal interpretation.
Such idioms are problematic for chatbots to understand. Unless a bot has been programmed with the intended meaning, it will operate under the assumption of a literal meaning.
Chatbots can make disastrous mistakes if they’re not encoded with knowledge of the real meaning behind certain idioms. In one example, a more suitable response from Siri would have been to redirect the user to a crisis hotline.
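A simple safeguard is to check input against a table of known idioms before falling back on the literal reading. The entries and crisis response below are hypothetical placeholders; a real system would need curated, culturally aware coverage.

```python
# Hypothetical idiom table mapping surface phrases to intended meanings and,
# where relevant, a safety action. All entries are illustrative assumptions.
IDIOMS = {
    "the grass is greener on the other side": {
        "meaning": "other situations seem better than one's own",
        "action": None,
    },
    "i can't go on": {
        "meaning": "possible expression of despair, not of physical movement",
        "action": "crisis_referral",
    },
}

# Lifeline is Australia's 24-hour crisis support line.
CRISIS_RESPONSE = ("It sounds like you are going through a very difficult time. "
                   "Please consider calling a crisis hotline such as Lifeline on 13 11 14.")

def handle_literally(user_text: str) -> str:
    # Stand-in stub for the bot's normal (literal) language pipeline.
    return "Sorry, I didn't catch that."

def respond(user_text: str) -> str:
    entry = IDIOMS.get(user_text.lower().strip())
    if entry and entry["action"] == "crisis_referral":
        return CRISIS_RESPONSE
    if entry:
        # Interpret figuratively rather than literally.
        return f"I understand that to mean: {entry['meaning']}."
    return handle_literally(user_text)
```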
The fallacies in reasoning
Words such as therefore, ought and their various synonyms require special attention from chatbots. That’s because these are often bridge words between a thought and an action. Behind them is some logic consisting of a premise that reaches a conclusion, such as:
This closely resembles a common fallacy (an example of faulty reasoning) called affirming the consequent. Below is a more pathological example of this, which has been called catastrophic logic:
This is an example of a semantic fallacy (and constrictive thinking) concerning the meaning of “I”, which changes between the two clauses that make up the second sentence.
This fallacy occurs when the author expresses that they will experience feelings such as happiness or success after completing suicide, which is what “this” refers to in the note above. This kind of “autopilot” mode was often described by people who gave psychological accounts in interviews after attempting suicide.
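In classical notation, affirming the consequent (mentioned above) has the form:

$$(P \rightarrow Q) \land Q \;\therefore\; P$$

The inference is invalid because $Q$ can be true for reasons other than $P$; spotting this pattern in free text is part of what a fallacy-detecting chatbot would have to do.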
Preparing future chatbots
The good news is that detecting negative sentiment and constrictive language can be achieved with off-the-shelf algorithms and publicly available data. Chatbot developers can (and should) implement these algorithms.
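As one illustration, here is a minimal sketch using the VADER sentiment analyzer that ships with NLTK, an off-the-shelf tool built on publicly available data. The cut-off value is an assumption for demonstration, not a clinically validated threshold.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off download of the public lexicon

sia = SentimentIntensityAnalyzer()

def is_strongly_negative(text: str, threshold: float = -0.6) -> bool:
    """Flag text whose compound VADER score falls below an illustrative threshold."""
    scores = sia.polarity_scores(text)  # keys: neg, neu, pos, compound
    return scores["compound"] <= threshold

print(is_strongly_negative("I feel hopeless and alone."))  # likely True
```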
Generally speaking, a bot’s performance and detection accuracy will depend on the quality and size of its training data. Because any single model is limited by its data, there should never be just one algorithm responsible for detecting language related to poor mental health.
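One simple way to respect that principle is to combine independent detectors and escalate when any of them fires; for instance, the two sketches above could be pooled like this (the combination rule is, again, an illustrative choice):

```python
def needs_escalation(text: str) -> bool:
    """Combine independent detectors; any single positive signal escalates.
    Both checks refer to the illustrative sketches defined earlier."""
    return any([
        is_strongly_negative(text),      # VADER-based sentiment check
        constrictive_score(text) > 0.1,  # absolute-term density check
    ])
```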
Detecting logical reasoning styles is a new and promising area of research. Formal logic is well established in mathematics and computer science, but building a machine logic for commonsense reasoning that can detect such fallacies is no small feat.
Here’s an example of our system thinking about a brief conversation that included a semantic fallacy mentioned earlier. Notice it first hypothesizes what “this” could refer to, based on its interactions with the user.
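As a rough illustration of that first step, hypothesizing what “this” refers to, below is a crude stand-in (not the system described above) that simply collects noun phrases from recent conversation turns, via spaCy, as candidate referents ranked by recency; real coreference resolution is far more involved.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small, publicly available English model

def candidate_referents(turns: list[str]) -> list[str]:
    """Collect noun phrases from prior turns as candidate referents for 'this',
    most recent turn first. A crude stand-in for true coreference resolution."""
    candidates: list[str] = []
    for turn in reversed(turns):
        candidates.extend(
            chunk.text for chunk in nlp(turn).noun_chunks
            if chunk.root.pos_ != "PRON"  # skip bare pronouns
        )
    return candidates

history = ["I've written my note.", "After this, I'll finally be happy."]
print(candidate_referents(history))  # e.g. ['my note']
```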
Although this technology still requires further research and development, it provides machines with a necessary, albeit primitive, understanding of how words can relate to complex real-world scenarios (which is basically what semantics is about).
And machines will need this capability if they are to ultimately address sensitive human affairs — first by detecting warning signs, and then delivering the appropriate response.
This article by David Ireland, Senior Research Scientist at the Australian e-Health Research Centre, CSIRO, and Dana Kai Bradford, Principal Research Scientist, Australian e-Health Research Centre, CSIRO, is republished from The Conversation under a Creative Commons license. Read the original article.
Story byThe Conversation
An independent news and commentary website produced by academics and journalists.