—Jessica Hamzelou

This week, I’ve been working on a piece about an AI-based tool that could help guide end-of-life care. We’re talking about the kinds of life-and-death decisions that come up for very unwell people.

Often, the patient isn’t able to make these decisions—instead, the task falls to a surrogate. It can be an extremely difficult and distressing experience.  

A group of ethicists has an idea for an AI tool that they believe could make things easier. The tool would be trained on information about the person, drawn from sources like emails, social media activity, and browsing history. From those data, it could predict what the patient might choose. The team describes the tool, which has not yet been built, as a “digital psychological twin.”

There are lots of questions that need to be answered before anything like this is introduced into hospitals or care settings. We don’t know how accurate it would be, or how misuse could be prevented. But perhaps the biggest question is: Would anyone want to use it? Read the full story.

This story first appeared in The Checkup, our weekly newsletter giving you the inside track on all things health and biotech. Sign up to receive it in your inbox every Thursday.

If you’re interested in AI and human mortality, why not check out:

+ The messy morality of letting AI make life-and-death decisions. Automation can help us make hard choices, but it can’t do it alone. Read the full story.

+ …but AI systems reflect the humans who build them, and they are riddled with biases. So we should carefully question how much decision-making we really want to turn over to machines.
