A British court ruled that a will was valid even though it was written on the backs of bits of cardboard that started out in life as packaging for Mr. Young’s frozen fish and Mr. Kipling’s mince pies. As a result of the ruling, a diabetes charity will inherit £180,000.
Yes, I do hear the irony there–mince pies; diabetes–but relatives explained that diabetes runs in the family, so the pies aren’t necessarily responsible for the death.
The will ended up in court not because of the unorthodox stationery but because the details of who got what were written on the frozen fish box and the witness’s signature was on the pie box, leaving the court to decide whether they were really part of the same document or if, maybe, some fundraiser for the diabetes charity hadn’t snuck in through a window, destroyed the packaging from four Yorkshire puddings, and scribbled out a new, more favorable version of the will on the fish box. But no: the court held that the same pen was used, hinting that they were written at the same time.
The family wasn’t challenging the will. It only ended up in court because–oh, you know. Overloaded court system. Frozen fish. It had to happen.
*
Since we’re talking about wills, let’s push a little further into the topic and talk about what happens to us after we die. Not as in heaven, hell, reincarnation, the underworld, all that sort of speculation, but as in whether AI will keep a virtual version of us going after the original goes the way of that Yorkshire puddings box.
On the current evidence, it just might, but only if we pay enough money. For $199, one company will let you upload videos, voice messages, photos, whatever you’ve got, and then its algorithm will put them all in a blender, whizz them around a bit, and produce a version of you that the living can call on the phone or get text messages from. So twenty years after you’re dead, you can still say, “Am I the only person around here who knows how to wash a dish?” and your family will say, in unison, “Aww, that is so sweet.”
If you want to go as high as $50,000 plus maintenance fees, you can have yourself made into a 3D avatar, holding up a greasy dish to illustrate your point.
The possibilities don’t end there, though. Bots can now generate content, so your ghost may not be stuck repeating the weary old lines you wrote for it. It could potentially come up with its own content, which it will deliver in your voice. Or what it’s decided is your voice.
What could possibly go wrong?
A few words from the Department of Things that Could Possibly Go Wrong
To answer this question, we have to leave the UK and head for the US, where the following story is the least of what’s going wrong.
A tech entrepreneur got trapped in a self-driving cab in–oh, I think it was December of last year. (Sorry–I’m not a newspaper. I get around to these things when I get around to them.) The cab got him as far as the airport, then began circling a cement island in the parking lot while he (let’s assume frantically) called the company and the voice on the other end told him to open his app because she didn’t have a way to shut the thing down.
After eight loops someone managed to shut the thing down and he emerged, dizzy and late for his flight–which was delayed, so he caught it anyway. He still doesn’t know if the voice on the other end was human or bottish.
*
That gives us a nice segue into technology.
A widely quoted psychologist and sex advisor from the University of Oxford, Barbara Santini, may not exist. The University of Oxford (a.k.a. Oxford University) is real enough, as is psychology. Sex advisor, though? Not a real job title, and just to make sure I’m right about that I checked with Lord Google. He knew of nothing between sex therapists on one end of the spectrum and brothels and call girl services on the other.
I’m going to be seeing some really annoying ads for a while here.
In spite of working in a field that doesn’t exist, Santini’s been quoted in Vogue, Cosmopolitan, the i, the Guardian, the Express, Hello, the Telegraph, the Daily Mail, the Sun, BBC.com, and other publications, both impressive and unimpressive, talking about everything from Covid to vitamin D to playing darts to improve your health. A lot of her quotes link back to an online sex toy shop.
Neither the shop nor Santini has responded to journalists trying to confirm her existence, and articles quoting her are disappearing from the internet as fast as dog food at feeding time.
Cue a great deal of journalistic soul-searching about how to verify sources’ credentials in the age of AI, which has both put pressure on journalists to work faster and made it fast, easy, and cheap to crank out an article on any topic you could dream up.
Impressively, at least two of the publications that fell for the trick have published articles about it.
*
Meanwhile, Amazon’s selling books written by AI.
How do we know the authors aren’t human? Samples were run through an AI detection program and scored 100%.
It costs next to nothing to throw a book together using AI, and hey, somebody’ll buy it. It would be bad enough if these were novels (I’m a writer, so that worries me) but these were self-help books. One on living with ADHD noted, helpfully, that friends and family “don’t forgive the emotional damage you inflict.”
The one on foraging for mushrooms, though, wins the red-flag award for dangerous publishing. It advocated tasting–presumably to make sure they’re safe.
AI is known for not being able to tell dangerous advice from common sense. It’s trained on solid science books but also on complete wack-a-doodlery, and it can’t tell the difference.
*
Britain’s Ministry of Justice is–I think we need to tuck the word allegedly in here–developing a program to predict who is most likely to kill someone. The program was originally called the Homicide Prediction Project, but its name was toned down and it’s now called Sharing Data to Improve Risk Assessment. By the time anyone works their way through the new name, they’ll have dozed off.
You saw the movie, now live the full-on experience.
The Ministry of Justice says the project “is being conducted for research purposes only.” The prison and probation services already use risk assessment tools–I believe those are called algorithms–and the ministry says this is only an experiment to see if adding new data sources makes them more effective. So it’s all okay.
*
I admit I’m stretching the topic to shoehorn this in, but a university student had to be rescued from Mount Fuji (that’s in Japan, which is not, as you may be aware, anywhere close to Britain) not once but twice. The second time was because he’d gone back to find his phone.
