Death and technology: it’s the news from Britain

A British court ruled that a will was valid even though it was written on the back of bits of cardboard that started out in life as packaging for Mr. Young’s frozen fish and Mr. Kipling’s mince pies. As a result of the ruling, a diabetes charity will inherit £180,000.

Yes, I do hear the irony there–mince pies; diabetes–but relatives explained that diabetes runs in the family, so the pies aren’t necessarily responsible for the death. 

The will ended up in court not because of the unorthodox stationery but because the details of who got what were written on the frozen fish box and the witness’s signature was on the pie box, leaving the court to decide whether they were really part of the same document or if, maybe, some fundraiser for the diabetes charity hadn’t snuck in through a window, destroyed the packaging from four Yorkshire puddings, and scribbled out a new, more favorable version of the will on the fish box. But no: the court held that the same pen was used, hinting that they were written at the same time.

The family wasn’t challenging the will. It only ended up in court because–oh, you know. Overloaded court system. Frozen fish. It had to happen.

Irrelevant photo: rhododendron

*

Since we’re talking about wills, let’s push a little further into the topic and talk about what happens to us after we die. Not as in heaven, hell, reincarnation, the underworld, all that sort of speculation, but as in whether AI will keep a virtual version of us going after the original goes the way of that Yorkshire puddings box. 

On the current evidence, it just might, but only if we pay enough money. For $199, one company will let you upload videos, voice messages, photos, whatever you’ve got, and then its algorithm will put them all in a blender, whizz them around a bit, and produce a version of you that the living can call on the phone or get text messages from. So twenty years after you’re dead, you can still say, “Am I the only person around here who knows how to wash a dish?” and your family will say, in unison, “Aww, that is so sweet.” 

If you want to go as high as $50,000 plus maintenance fees, you can have yourself made into a 3D avatar, holding up a greasy dish to illustrate your point.

The possibilities don’t end there, though. Bots can now generate content, so your ghost may not be stuck repeating the weary old lines you wrote for it. It could potentially come up with its own content, which it will deliver in your voice. Or what it’s decided is your voice. 

What could possibly go wrong? 

A few words from the Department of Things that Could Possibly Go Wrong

To answer this question, we have to leave the UK and head for the US, where the following story is the least of what’s going wrong. 

A tech entrepreneur got trapped in a self-driving cab in–oh, I think it was December of last year. (Sorry–I’m not a newspaper. I get around to these things when I get around to them.) The cab got him as far as the airport, then began circling a cement island in the parking lot while he (let’s assume frantically) called the company and the voice on the other end told him to open his app because she didn’t have a way to shut the thing down.

After eight loops someone managed to shut the thing down and he emerged, dizzy and late for his flight–which was delayed so he caught it. He still doesn’t know if the voice on the other end was human or bottish.  

*

That gives us a nice segue into technology.

A widely quoted psychologist and sex advisor from the University of Oxford, Barbara Santini, may not exist. The University of Oxford (a.k.a. Oxford University) is real enough, as is psychology. Sex advisor, though? Not a real job title, and just to make sure I’m right about that I checked with Lord Google. He knew of nothing between sex therapists on one end of the spectrum and brothels and call girl services on the other.

I’m going to be seeing some really annoying ads for a while here. 

In spite of working in a field that doesn’t exist, Santini’s been quoted in Vogue, Cosmopolitan, the i, the Guardian, the Express, Hello, the Telegraph, the Daily Mail, the Sun, BBC.com, and other publications, both impressive and unimpressive, talking about everything from Covid to vitamin D to playing darts to improve your health. A lot of her quotes link back to an online sex toy shop. 

Neither the shop nor Santini was responding to journalists trying to confirm her existence, and articles quoting her are disappearing from the internet as fast as dog food at feeding time. 

Cue a great deal of journalistic soul-searching about how to verify their sources’ credentials in the age of AI, which has put pressure on journalists to work faster and made it fast, easy, and cheap to crank out an article on any topic you could dream up. 

Impressively, at least two of the publications that fell for the trick have published articles about it.

*

Meanwhile, Amazon’s selling books written by AI

How do we know the authors aren’t human? Samples were run through an AI detection program and scored 100%. 

It costs next to nothing to throw a book together using AI, and hey, somebody’ll buy it. It would be bad enough if these were novels (I’m a writer, so that worries me) but these were self-help books. One on living with ADHD noted, helpfully, that friends and family “don’t forgive the emotional damage you inflict.” 

The one on foraging for mushrooms, though, wins the red-flag award for dangerous publishing. It advocated tasting–presumably to make sure they’re safe. 

AI is known for not being able to tell dangerous advice from common sense. It’s trained on solid science books but also on complete wack-a-doodlery, and it can’t tell the difference.

*

Britain’s Ministry of Justice is–I think we need to tuck the word allegedly in here–developing a program to predict who is most likely to kill someone. The program was originally called the Homicide Prediction Project, but its name was toned down and it’s now called Sharing Data to Improve Risk Assessment. By the time anyone works their way through the new name, they’ll have dozed off.

You saw the movie, now live the full-on experience.

The Ministry of Justice says the project “is being conducted for research purposes only.” The prison and probation services already use risk assessment tools–I believe those are called algorithms–and the ministry says this is only an experiment to see if adding new data sources makes them more effective. So it’s all okay. 

*

I admit I’m stretching the topic to shoehorn this in, but a university student had to be rescued from Mount Fuji (that’s in Japan, which is not, as you may be aware, anywhere close to Britain) not once but twice. The second time was because he’d gone back to find his phone.

22 thoughts on “Death and technology: it’s the news from Britain”

      • It used to be that any invention had the potential for harm in the wrong hands, even if it was intended for good, but we could ensure that government and the law could minimise the risks to an acceptable level. My understanding (and I admit I’m old and not exactly brilliant with tech) is that AI is being developed by people who think it can be trusted and is self-replicating and self-developing. Call me paranoid but that seems like a recipe for disaster.
        As for the idea of “engaging” with the dead, based on an algorithm, this throws up so many existential questions about the definition of humanity, the existence of the soul, the nature of memory, that I can’t understand why anyone would consider it for a second. I speak as one who has lost precious relatives and friends and would sacrifice anything for their genuine return but that is not the way.

        • Even if an algorithm got the person right–the nuance, the humor, the attitude, all the little bits and pieces that made them who they were–I’d still find it creepy to have them seem to be around. I’m not sure if it would be worse if they really seemed to be themselves or if they didn’t. As for AI, I hope you’re overstating the dangers but I’m not at all sure you are.

  1. Sweet irony that it was a tech entrepreneur who got stuck in the cab, rather than, say, an innocent blogger.

    The questionable Ms. Santini is probably on her way right now to become the newest member of Trump’s cabinet. If she doesn’t get trapped in a cab.

    As for the AI mushroom article – it sounds like AI is self-aware enough to begin plotting to eliminate humans.

    One of the true-crime shows I watch on cable had a police official talking about the programs that had been developed to predict possible serial killers – including the use of brain scans. One alarming profile showed up, so officials unsealed the records to learn the identity of the test subject: it was the officer in charge. He was as shocked as anyone, and no, he had no hidden killing sprees in his background or any episodes indicating he even had a mean streak.

    • AI eliminating humans is kind of like billionaires eliminating all the jobs. Who do the billionaires think is going to buy the crap that makes them rich? And who will AI manipulate if it eliminates us all?

      Interesting story about the brain scan. We could learn from that, although I’m not convinced we will. And I’m glad not to have been the blogger caught in the cab, because if who- or whatever was on the other end of the call told me to open the app they could easily, if I was in a panic, have lost me right there.

  2. Ellen, an interesting post on AI! It can be pretty scary!

    We have all been using some forms of AI. For example, I am grateful to Grammarly for catching typos and misspelled words, but I resist their attempts to remove color and creativity from speech. It can’t tell the difference between casual and formal speech and has zero sense of humor!

    • I’m not sure where the line between AI and plain old programs lies. Is Grammarly AI? No idea. I’m told by people who use AI that it’s good at summarizing documents, but that’s where they draw the line. I’m more than willing to believe that it has no sense of humor.

  3. I wonder how long it will be before they figure out a way for the avatar of the deceased to continue to write AI articles. Different thought: I wonder if someone could come up with an AI politician. The human ones we have right now are awful.

  4. Al scored four touchdowns …
    … okay, I show meself the way out, thanks …

    Murder prediction by brain scan – yeah. Looks like a murderer, sounds like a murderer, must be a murderer. Reminds me a bit of Lombroso and Co – the born criminal etc. Maybe there is the murder gene? Who knows?
