Rafael Calvo: "The challenge that we have in AI is to design technologies that support psychological well-being"

October 10, 2018 | Author: Patricia Morén / neuromimeTICs.org

Photo: neuromimeTICs

Dr. Rafael Calvo is a professor and director of the Wellbeing Technology Lab at the University of Sydney and an ARC Future Fellow. Dr. Calvo, co-author with designer Dr. Dorian Peters of Positive Computing: Technology for Wellbeing and Human Potential, is a committed advocate of considering ethics from the very beginning of technology design. In this regard, he points out that much attention has been paid to the ergonomic design of technology, which seeks the physical well-being of users, but far less to design that seeks their psychological well-being. As a leader in the field of Human-Computer Interaction, Dr. Calvo believes that engineers themselves should not neglect this ethical vision in the design of each new technological project. Dr. Calvo addressed these issues, as well as his main projects linking artificial intelligence and mental health, in an interview given to neuromimeTICs at the second edition of the AI for Good Global Summit, organized by the International Telecommunication Union (ITU) and held in Geneva from May 15 to 17, 2018.

One of these projects relates to World Suicide Prevention Day, observed on September 10, since it uses AI to help prevent suicide among young people and adolescents, and to World Mental Health Day, on October 10, which this year is also dedicated to young people.


What research lines is your group currently working on that link artificial intelligence with mental health?

Our group works on Human-Computer Interaction and Positive Computing, looking for design techniques that help build technologies that support psychological well-being. We draw on psychological models that promote health and psychological well-being, and on ideas very similar to those that inspire trust. We use human-centred design: we work with the participation of the audience and we evaluate the whole process. As an audience, participants do not focus so much on the technology itself as on how the mental health problem is presented, which matters because of the very serious stigma attached to it. How can we bring this into the design, through the language and the images, so that people are drawn in? Something very important we have seen in our experience in Australia is that when the language is very medical it tends to increase the stigma. Using language that promotes well-being is more effective; people do not feel the stigma so heavily.

This conference has focused a lot on ethics and its relationship with AI. How is ethics intertwined with the new technologies being developed?

Ethics should be present from the very beginning of the design of a new technology. In my opinion, ethics is the study and practice of promoting models of the good life, which is what underlies many ethical decisions. Different cultures have different ethical models and different interpretations of how society and human psychology work.

Secondly, I have to say that, in the field of engineering, a significant number of engineers have realized that technology carries the values of those who design it. As designers we must be aware of our own values, as well as of the values of other people.

 

A significant number of engineers have realized that technology has the values of those who design it

 

Perhaps there are many things related to AI evolving in parallel, and very fast, that make decision-making difficult from an ethical point of view.

There is a major debate about who decides which ethical models to follow. We think that design must be focused on the users, on the people. Meanwhile, there is no regulation and companies decide for themselves how to proceed; in other words, they set the rules of the game. There are two things to consider in AI regulation: who regulates whom, and how to make sure the rules are good. The problem we have is that technology companies are global. Regulation in Europe may have an impact in the United States, for example, or the lack of regulation may have an impact in Europe or Asia.

Many of these AI technologies are growing very fast in Asia, China and Russia.

Through the collection of certain data, language, voice and what an individual explains about his or her life, we can infer a person's mental health, based on approximations

Also consider the United States, with Google and Facebook, and yes, of course, China. The Chinese government is investing more money in AI than the American or the European governments. I think there is a revolution in the development of AI, driven not by government funds but by four or five large companies. Think of the Partnership on AI. Among the Chinese companies, Tencent, one of the largest and a heavy investor in AI, is having an impact on more than one billion people.

What is the current state of the art of machine learning in mental health?

Information about an individual can improve the predictions behind putting labels on people and, thus, can identify certain people with certain symptoms. Through the collection of certain data, language, voice and what an individual explains about his or her life, we can infer a person's mental health, based on approximations. It is similar to what one does in epidemiology; it is like doing epidemiology in real time, with more information than comes together when asking a person questions. But it is a subject in which we have to be very careful, because the labels also reflect a structure of power and of cultural interpretation that research in general does not consider. If a person is labeled as depressed or anxious, that label gives a position of power over the person being labeled, and that is risky.

What do you mean? Can you be more precise?

Facebook and other social media have algorithms that, when a risk or a mental health problem is detected, for example if a person talks about suicide, can activate different protocols so that the person can be followed up and contacted by the police or healthcare services.


Dr. Calvo at the second edition of the AI for Good Global Summit
Photo: neuromimeTICs

How does the system detect it?

From what is written on Facebook. In the United States, when someone asks Google questions about mental health, relevant information is shown to help him or her.

To help them not to do it?

Of course, to help the person find medical help. It is always difficult, because there is always someone with a webpage on how to kill oneself, and we would have to work out how to get it removed. As I was saying, so-called natural language processing (NLP) techniques make it possible to infer people's mental health from what they write on Facebook, Twitter or other social media, in order to direct health information and online assistance to these people and even to generate personalized interventions. Recently, together with other authors, I published a review of the literature on this in Natural Language Engineering, where we show that research so far has been very dispersed, and we try to find a common language among NLP, Human-Computer Interaction and mental health.
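To give a sense of what this kind of NLP inference looks like in practice, here is a minimal, hypothetical sketch of a text classifier that flags posts possibly indicating distress. It uses scikit-learn with invented example posts and labels; it is not the pipeline described by Dr. Calvo and his co-authors.

```python
# Illustrative sketch only: a toy text classifier of the kind NLP research
# uses to flag posts that may indicate distress. The posts and labels below
# are invented; this is NOT the pipeline described in the interview.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = post suggests the author may need support.
posts = [
    "had a great day with friends",
    "feeling hopeful about the new job",
    "I can't see a way out of this anymore",
    "everything feels pointless lately",
]
labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Probability that a new post should be surfaced to a human moderator.
print(model.predict_proba(["I don't know who to talk to"])[0][1])
```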

What does the application your group has developed to prevent mental health risk consist of?

There are many psychology tools that use statistical models to predict mental health risk based on psychosocial parameters. These models predict risk according to family, work context, socioeconomic situation, age, sex and so on. The application, called HeadGear, from the University of Sydney, calculates the risk of mental illness based on the risk associated with these different factors. This calculation is done by a statistical algorithm; we are not talking about machine learning here.
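As an illustration of how a purely statistical (non-machine-learning) risk calculator of this general kind can work, the following sketch applies a logistic model to a few psychosocial factors. The factor names, weights and intercept are invented placeholders and do not reproduce the actual HeadGear algorithm.

```python
import math

# Illustrative sketch only: a simple logistic risk score over psychosocial
# factors. Weights and factor names are invented, NOT HeadGear's model.
WEIGHTS = {
    "job_strain": 0.8,
    "financial_stress": 0.6,
    "social_support": -0.7,   # protective factor
    "recent_life_event": 0.5,
}
INTERCEPT = -1.5

def risk_probability(answers: dict) -> float:
    """Map questionnaire answers (scaled 0-1) to a probability via a logistic model."""
    z = INTERCEPT + sum(WEIGHTS[k] * answers.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

print(risk_probability({"job_strain": 1.0, "financial_stress": 0.7, "social_support": 0.2}))
```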

What other advances have been made using AI algorithms in favor of mental health?

There is an NGO, a community in Australia with 1.8 million users, built around discussion forums. Young people access it and ask questions. They write comments like "I'm gay and I do not know how to tell my father and my mother." They express themselves and support each other; they provide peer support. Many messages come in and the moderators have to prioritize them. Algorithms automatically triage them: the system divides them into green (not urgent), yellow (needs an answer), red (must be answered) and super-red, with an exclamation mark, when the user is talking about suicide risk and may hurt himself or other people. In that case, you have to act faster, in real time. Forum moderators can then prioritize their time and respond to these emergencies. As I was saying, ReachOut Australia has a huge community of young users and a crucial functionality, with support material to help young people with their problems related to sexuality, communication, drugs, alcohol or substance abuse in general.
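A simplified, hypothetical sketch of how such a triage step might map a model's risk score and a crisis keyword list onto the green/yellow/red/super-red tiers described above; the thresholds and keywords are invented and are not ReachOut's actual rules.

```python
# Illustrative sketch only: mapping a risk score to moderation urgency tiers.
# Thresholds and the crisis keyword list are invented placeholders.
CRISIS_TERMS = {"suicide", "kill myself", "end it all"}

def triage(message: str, risk_score: float) -> str:
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS) or risk_score > 0.9:
        return "super-red"   # possible self-harm: respond in real time
    if risk_score > 0.7:
        return "red"         # must be answered
    if risk_score > 0.4:
        return "yellow"      # needs an answer
    return "green"           # not urgent

print(triage("I've been thinking about suicide", risk_score=0.55))
```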

And what does the project that your group has developed to help psychiatrists in training consist of?

It is EQClinic, an online platform to help medicine and psychiatry students improve their communication skills and empathy with their patients. It is not intended for psychiatrists, but for medical students. It works like this: an actor who simulates being a patient interacts by teleconference with the medical student; the simulated patient presents the characteristics of a disease and the doctor in training must respond. There is also a system that automatically recognizes the facial expressions of the doctor, whether he sketches a smile, and so on, and allows an analysis of the doctor's non-verbal behavior through computer-vision algorithms, among other things.
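As a rough illustration of this kind of automatic non-verbal analysis, the sketch below counts the frames of a recorded consultation in which a smile is detected, using OpenCV's stock Haar cascades. It is an assumption-laden toy, not the EQClinic pipeline, and the video filename is a placeholder.

```python
# Illustrative sketch only: frame-level smile detection with OpenCV's bundled
# Haar cascades. "consultation.mp4" is a placeholder filename.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture("consultation.mp4")
smiling_frames, total_frames = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    total_frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        if len(smile_cascade.detectMultiScale(roi, 1.7, 20)) > 0:
            smiling_frames += 1
            break
cap.release()
print(f"Smiled in {smiling_frames} of {total_frames} frames")
```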

In conclusion, what do you think the challenge is in the design of new AI technologies?

The challenge in AI is how to design technologies that support psychological well-being, which is a question of ethics. These definitions change with the culture. But, from the point of view of technology, they must be broad enough for technology to act as a catalyst for well-being, with tools that can work for different social groups, including minority groups, within the same country. When it comes to supporting psychological well-being, it has often not been considered that in the same locality there may be different social groups, such as people with little economic power or racial minorities.

And how can this challenge be achieved?

By detecting and correcting bias in the algorithms. There are many examples, such as in the United States, where the black community did not receive advertising about life insurance. In these cases, the chances of receiving it would have to be increased, or the situation repaired, since otherwise these people would be less likely to take out insurance. It has also been seen that women are less likely to receive an ad encouraging them to study engineering, among other examples.

Five keys to designing technologies for psychological well-being

Designing technology with the psychological well-being of users in mind is not an abstract ideal but a feasible option, using a model called METUX. The model is based on five keys and was presented by Dr. Rafael Calvo and colleagues in an article published in Frontiers in Psychology in May 2018. METUX (Motivation, Engagement and Thriving in User Experience) provides a framework, grounded in psychological research, to give researchers and practitioners in Human-Computer Interaction useful and applicable ideas for discerning which technologies support, and which undermine, the basic psychological needs of users.

The objective is to increase users' motivation and engagement with a technology and, ultimately, to improve their well-being. To achieve this, Dr. Calvo and his colleagues define in their article five key spheres of experience in which the psychological needs of users must be considered and analyzed:

  • At the point of technology adoption
  • During interaction with the interface
  • As a result of engagement with the technology-specific tasks
  • As part of the technology-supported behaviour
  • As part of an individual’s life overall

Finally, the authors point out that these five spheres of user experience should be considered within a sixth sphere, society, which includes both the direct and collateral effects of technology use as well as the experience of non-users.

Source: frontiersin.org/articles/10.3389/fpsyg.2018.00797/full
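For teams who want to apply the model, here is a minimal sketch of the six METUX spheres expressed as a design-review checklist; the prompt questions are paraphrases for illustration, not the paper's wording.

```python
# Illustrative sketch only: the six METUX spheres as a simple review checklist.
# The prompt questions are paraphrased placeholders, not quotes from the paper.
METUX_SPHERES = {
    "Adoption":  "Why does the user take up the technology in the first place?",
    "Interface": "Does interacting with the interface itself support or frustrate psychological needs?",
    "Task":      "Do the specific tasks the technology enables support those needs?",
    "Behaviour": "Does the broader behaviour the technology supports do so?",
    "Life":      "What is the effect on the person's life overall?",
    "Society":   "What are the effects on non-users and on society at large?",
}

def unreviewed(answers: dict) -> list:
    """Return the spheres a design team has not yet considered."""
    return [sphere for sphere in METUX_SPHERES if sphere not in answers]

print(unreviewed({"Adoption": "voluntary, peer-recommended", "Interface": "needs review"}))
```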