Of chatbots and culture wars and imaginary incidents

One of Britain’s reputable papers (and with five words, I’ve already eliminated several) had an incident involving chatbots, and the tale’s worth retelling because it tells us a lot about the age we’re stumbling cluelessly into. Or maybe that’s the drain we’re being washed down. Or–well, it’s Supply Your Own Metaphor Week here at Notes, so I’ll leave you to come up with your own while I waddle onward.

One of the Guardian’s reporters got an email asking about an article that ChatGPT had cited but that wasn’t showing up on the paper’s website. The email’s writer wanted to know what had happened to it, and the journalist went hunting. It was on a topic they reported on, so it sounded likely enough, although they couldn’t remember the specific article or find it anywhere. So they asked other people in the office to turn the paper’s electronic pockets inside out and see if it fell out. Maybe it was in there with the shredded kleenex and the linty mint.

Irrelevant photo: camellia

It wasn’t. Because it had never been written. It turns out that AI not only invents facts–something I trust you’ve heard by now–but also invents sources, and it can be convincing when it does. The nonexistent article was a good enough invention that the journalist couldn’t say, “No, I never wrote that.” They easily could have written it.

If you think it’s scary living in a world where a lot of people feel entitled to curate their own selections of alternative facts to back up their pre-existing worldviews–well, it’s about to get a whole lot weirder. And, I expect, scarier.

 

Imaginary drag queen teaches hallucinatory sex ed class

Did anyone mention alternative facts? The Daily Mail, GB News, and Fox News all reported that a drag queen appeared as a guest speaker at an Isle of Man school and told “11-year-olds there are 73 genders–and made a child who said there are ‘only two’ leave the class.”

Seventy-three? Stop it, guys. I can’t count that high. If this goes on, I’ll have to give up my leadership position in the Gender Hyperawareness and Conservative Freakout Society.  

The story went on to say that “one teacher is also said to have had to teach pupils in Year 7 and 8 how to masturbate.”  

How old are kids in years seven and eight? Eleven to thirteen. Since it’s been a long time since I was anywhere close to that age, I asked Lord Google how old kids are when they begin to masturbate. The top-ranked answer was from the National Institutes of Health (that’s in the US) and said two years old. The next one said three. In fact, most of the articles I found were geared toward calming the parents of toddlers and preschoolers, saying, essentially, It’s okay. Kids that young discover that there’s something interesting where their legs come together, and they’re not shy about exploring it.

That wasn’t what I’d been looking for, but it did back up my hunch that kids don’t really need to be taught how to masturbate, although by the time they’re eleven to thirteen they may need reassurance that what they’re doing–or at least imagining–isn’t so different from what other people do and imagine.

But that’s not the point. The point is that although the article I quoted is real and can still be found on the Daily Mail’s website, the facts were invented. The flap the reporting caused led to an investigation, which found that the incident never happened.

But who waits for that? As soon as the story went public, people working at the school were deluged with threats and demands for staff to be fired, arrested, and executed–not necessarily in that order. 

What triggered the story? A man who does occasionally do drag spoke to kids about “gender neutral language and the concept of gender in the LGBTQ+ environment.” He wasn’t in drag, though. So the question is, if a person has done drag, can they be allowed out in public in non-drag, or do they have to be freeze-dried, vacuum packed, and kept in storage until the political winds shift? For the safety, you understand, of all 73 genders of our children.

As for the kid who said there were only two genders, the closest I’ve found to the incident was one kid who was taken out of the room by a teacher over some sort of behavior issue. 

 

The problem of defining drag in Britain

Cranking up the British about men in drag is going to be harder than cranking up Americans, because drag has a solid mainstream history here. Every Christmas brings panto season–and pantos are shows for kids–with the lead female role always (over)played by a man and the lead male role almost always played by a woman. It’s a thing. Among straight people. Is that drag, or is it only drag if a man (over)dresses like a woman outside of a panto?

What, while we’re at it, does a woman dress like? I’m wearing jeans, a turtleneck, and an old sweater.

On our first visit to Britain, we watched a race where a lot of the runners were in costume. It’s a thing here. Give people a chance to run five miles dressed as bananas or phone booths and they’ll, ahem, run with it. So in among all the bananas and phone booths and chickens were men dressed as ballerinas and nurses. Not the contemporary kind of nurses who wear practical uniforms, but the old-fashioned ones in white dresses and caps, who (I gather) inhabit the fantasies of some unspecified number of non-nurses. My gaydar insisted that the runners in nurses’ uniforms were straight. But even if my gaydar was off–it was tuned in a different country, after all–no one much cared. It was just another race through the streets of an English city. Enjoy the show, everyone.

So where do pantos and dress-up end and drag begin? 

I don’t know, dear. You tell me.

 

The problem of defining copyright and privacy

Now that artificial intelligence scrapes information out of every corner of the internet so that it can tell you, in perfectly grammatical prose, that the pope is made of custard, defining copyright and privacy is going to be as problematic as defining drag. Or more so.

Copyrighted material is probably being used to train AI systems. The word probably is part of that sentence because AI’s neural networks aren’t available for your average gawker–or even your non-average one–to examine, so no one knows what they’ve been reading. But a couple of AI systems have, embarrassingly, hacked up copyrighted photos from Getty Images, complete with the watermark Getty prints over its photos so that users have to pay for a clean copy.

Yes, there’s a lawsuit involved, but it’s about the smallest edge of the problem. Still to be discussed is the amount of personal data that’s being collected–and potentially disclosed–without people’s consent and the use of copyrighted material to train chatbots.

 

But speaking of privacy

Teslas have an in-car camera that Tesla assures the world “is designed from the ground up to protect your privacy.” Because customer privacy “is and will always be enormously important to us.” 

So important that from 2019 to 2022 Tesla employees were sending each other clips of, oh, you know, interesting stuff in people’s garages, road incidents, a man walking up to his car naked–you know, ordinary, everyday stuff that would embarrass no one.

What are the camera’s limits? I’m not sure, but I’ve read that a Tesla parked in the right spot outside someone’s house could, potentially, film whatever’s going on inside through the window. 

One owner is suing Tesla. Some Chinese government compounds and residential neighborhoods have banned the cars. 

The moral of this story is that if someone goes out of their way to tell you how carefully they’re protecting your privacy, they’re calling your attention to a problem.

47 thoughts on “Of chatbots and culture wars and imaginary incidents”

  1. Reblogged this on Wibble and commented:
    I’d heard about the ’73 genders’ thing; call me gullible but it never crossed my mind that the facts might have been fabricated† simply to create a sensationalised headline. Silly me.

    It’s long been a maxim that you can’t trust information you get from the Internet. Seems to me the time when ‘from the Internet’ can be removed is fast approaching. Perhaps it’s all a Cunning Plan à la Baldrick: how better to create chaos than to get us all to distrust everything we read, see, hear… and think we know?

    † According to‡ Education Minister Julie Edge, the number was actually 72 not 73; however, the minister may simply have misspoken; after all, in the same audio clip she (ironically, for an ‘Education Minister’) demonstrates her lack of expertise in the English language when she says, “[…] and should not be tolerated in the online world neither.” 🙄

    ‡ information that is itself ‘according to’ manxradio.com (linked from Ellen’s article). How deep does the rabbit hole go… and at this point, does anyone care any more?


    • Thanks for the additional information. I was about to say clarification, but in this murk I’m not sure that’s the right word. I do appreciate dropping one of the seventy-some genders–anything helps–but we still haven’t gotten it down to where it’s manageable for someone as allergic to numbers as I am. I wonder if negotiations are possible, and who exactly to negotiate with.


        • E.g. the fact that people reacted so intensely to a story from a faraway place they knew nothing about, rather than to what their own children reported from their own school?

          If I were a parent of a child who told me that *per* teacher was discussing controversial news stories in the classroom instead of teaching facts and skills, I’d have something to say.

          If it’s merely reported from some other school, oh well, someone there just wanted to get into the newspapers, but that’s not my problem.


          • These days, when something incredible is reported, we might all want to ask whether it is, in fact, credible, and before we run screaming in all directions, do some checking to see if there’s any reality to it. But a lot of people seem to enjoy exercising their outrage muscles. Why check in with reality and spoil the fun?


  2. The interesting/scary thing that I’ve noticed about the writing from ChatGPT and other bots is that the writing is decent, but not really good. Unfortunately, the quality of human writing in the news has been declining in a lot of places, so there’s a bit of a convergence.


    • I can’t complain about the level of writing in the papers I read, but that’s hardly a fair survey. Having taught–or tried to teach–writing on a college level, I was struck by how hard a lot of people find it to make a coherent argument–to link one idea to another; to supply information that backs up their argument; that sort of thing. And that was–oh, good lord, probably 30 or so years ago, so I can’t say it’s a recent trend. Whether I’d have noticed if they slipped in a chatbot I’m not sure. Probably not. They don’t seem to be particularly good at linking or developing ideas either.


        • I wonder if such a thing is possible–not the teaching, but observing impartially. With a newspaper, by way of example, you have to choose what to cover, and even that early in the process you stop being impartial. Accuracy is possible, and an effort at impartiality (as in, not burying the facts you don’t like), but any process of selection is, I think, partisan.

          I think. I’d be happy to be talked out of that idea.


  3. Kristine Kathryn Rusch had an interesting take on the subject recently, where she basically said it’s here to stay (barring an EMP event that would take out all communications) and once this initial wave of hysteria dies down we’ll get used to it and use it like any other tool … which sounded alright on the surface, but something kept niggling at me … we now have one, maybe two, generations who’ve known nothing but the culture of dis/misinformation (via mass media at least), who don’t even understand the concept of critical (critique, not criticise) thinking, and combined with the fact that the things the ‘bots’ are making up have actually happened (at differing times, places, circumstances, etc.), we have a mass population that literally can’t distinguish what reality they’re indulging in.
    Which is great for those who have an investment in perpetuating that illusion, but for the rest of us, not so much.
    And so, the great divide between the sane and the insane grows. :) … isn’t it bees who, when a colony shows signs of insanity, kill off the insane ones (either eating them or chucking them out of the hive, I can’t remember which) while the sane ones get on with it? Mother Nature really does like the direct approach, doesn’t she? :)
    Anyhow, those are some thoughts I’ve thunked on this gloriously sunny and chilly April morn.
    Hope you and yours are keeping well. :)


  4. I for one am not surprised that AI is trying to gaslight us. Being gaslit seems to be par for the course these days and I am sure the AI is subsuming that culture of behaviour into its programming. I am honestly not a Luddite but I really think we need to cool our heels when it comes to AI. The bumpers and barriers are not in place and there needs to be robust consideration about all manner of ethics. Too many people seem to be so enchanted by its possibilities that they have not stopped to even ponder the consequences.

    As for the whole drag queen school debacle, it again doesn’t surprise me in the least that people were whipped up into an irate fervor over something completely non-existent. That, again, seems to be the way of things these days. The plan seems to be to distract us all with culture wars being waged against communities who are not remotely the source of any of our problems – and who are in fact among the most vulnerable people in society – so that we don’t focus on the real problems we need to tackle, such as gun violence and the erosion of democracy. My assumption is that the purpose of culture wars is to sow division among people who, if they have the opportunity to unite, will actually start to disrupt and dismantle the white supremacist patriarchy that is the actual root cause of so many of our societal problems.


    • I couldn’t agree more on the culture wars: find an enemy, crank people up about them, and steal the poor fools blind.

      It does seem like the people programming the chatbots could’ve programmed in something about accuracy, but then I’m not a programmer and have no idea what’s involved.


  5. I thought it might be interesting to try chatGPT, but I took a look at the privacy statement and they can go and get stuffed. This is the other end of the rabbit hole: as well as getting fed garbage out of the magical Cloud, we’re also the source of it, and at both ends the big corporations are raking in the profits from our data. It is, of course, very attractive, and “everyone” is going to be using it, like “everyone” seems to think they need a smart speaker (also feeding on their data), an e-doorbell (ditto) and a wifi-enabled washing machine (I mean, how did we manage before?).

    It is very worrying, not because the robots are going to kill all the human beings to make efficiency savings – that’s a silly idea – but because the rich human beings who own the robots might. You have to program a dangerous AI to be dangerous, but that’s not comforting, because they’re already doing that.

    To put this in context – we are already dominated by vastly wealthy individuals, and their corporations (and not really by states: I suspect democracy is, in fact, dead). Capitalism demands competition and merger. The cats are getting fatter and fewer in number, while the rest of us (rats?) are given just enough sustenance and entertainment to mollify and distract us to keep the game going. This may reach a critical point where neo-Luddism puts its foot down, but whether it still has a leg to stand on is anyone’s guess.


    • The modern version of the Roman bread and circuses.

      That’s interesting about the privacy statement. I admire you for actually reading the damn thing. I’ve given up. I can read that sort of drivel if I get paid for it (I used to work as an editor; it’s amazing what you can make yourself wade through) but it’s written with an eye toward making humans roll over and go to sleep long before they take in what they’re agreeing to.

      I agree with you about corporations and states. The corporations are larger and more powerful than states these days, and I’m no longer sure (if I ever was) what it’s going to take to bring them under control. Let’s not even get into the people who want to pare government back even further so it’ll interfere even less with the money that’s out there to be made.

      Oh, hell, I feel a rant coming on. Think I’ll go make myself a cup of tea and spare you.


      • Oh no, sorry to incite a rant, Ellen! Although all this is frightening and sickening, I just have to remember how horrified I was when, as a child, I realised the USA and USSR were poised to exterminate everyone on the planet with something called nuclear bombs, and yet we muddled through for another fifty years without annihilating each other.

        I am a bit spectrummy when it comes to reading terms of service, particularly privacy statements, before I join anything. I can understand the view that the legalese is too soporific, and I used to pay little attention myself, but (a) I realised it’s actually fairly easy to scan through to the relevant passages, where they tell you they share or don’t share your data with others, (b) we only have to do it once, and (c) we really can’t complain that some faceless entity is stealing our data and selling it and serving us adverts while we take advantage of their services (usually for “free”) if we don’t bother to check the legal contract we’re entering into.

        Anyway, you have stimulated a very interesting conversation here. Audrey Driscoll’s comment above reminded me of another bit on the ChatGPT introduction page, under the heading “Limitations”:

        “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”
        https://openai.com/blog/chatgpt

        (1) is a fundamental philosophical condition, as far as I understand it. It is discussed in the philosophy of science. All “facts” are interpreted according to earlier “knowledge”, which likewise isn’t objective but is based on other axioms, assumptions we have to make just to think or perceive anything. On the other hand, we collectively devise agreement (mostly) about how we verify facts in science, as we do in journalism, and quoting valid sources is an important backbone of either. And presumably the programmers can easily stop the AI just making up fake references when it feels like it.

        It’s a great excuse, though, if I ever get caught telling porkies: “Well, you see, there’s currently no source of truth.”


        • Journalists–at least those who work for serious news outlets–have managed to establish reasonably effective ways to verify the accuracy (I’m avoiding the word truth, as if it had suddenly become controversial, and as if I avoided controversy) of their stories. A programmer who was interested in accuracy, I think, could sit down with some journalists, some scientists, some assorted other people, and figure out how to teach AI to spot reliable and unreliable sources. Which is, of course, more or less what you said but–oh, you know how it is: sometimes we just can’t believe something’s been said until we say it ourselves.

          I completely agree with you about reading terms of service. I only wish I actually did it. But yes, it has been an interesting conversation–and may yet continue to be. We’ll see where it goes from here.


  6. Over here there was a kerfuffle over reports that some schools were providing litter boxes for students who identified as cats. It was seriously looked into and, like the 73 genders, found to be complete bovine excrement. Barnum was right (though it turns out it was actually H. L. Mencken who said it): “No one as far as I know has ever lost money by underestimating the intelligence of the great masses…”

    Not to upset anybody, but in my career as a Middle School teacher we discussed controversial stories…The National Guard shooting students at Kent State University, which was in a town 30 miles away where some of their siblings were enrolled (and from which I had graduated 3 years before)…the death of a popular teacher who collapsed and died at a track meet…the explosion of the Space Shuttle Challenger the same day our area was hit with a minor earthquake…(I had retired before 9/11/2001.)


    • That makes sense to me, that you’d almost have to discuss major events with your students. I never taught young kids, but I was teaching a college-level writing class on 9/11, before anyone understood what was going on. All we knew was that something had happened. I had no idea what to say, but I did at least have to acknowledge it. People wanted to talk. Since almost no information was available, I didn’t want to open it up to an information-free discussion, but you can’t act as if nothing’s happening.

      I always heard the Barnum/Mencken quote as, “Nobody ever went broke underestimating the taste of the American public,” but since the source is questionable, I don’t see why the wording shouldn’t be fluid.


  7. My first view of a few paragraphs written by Chat GPT pretty much disillusioned me with it (before I even managed to form an illusion). It mentioned a study, and when I pointed out in a comment that it hadn’t cited the title or author(s) of said study, the person who had displayed the AI’s creation said that didn’t matter, because there was no such study. Chat GPT had just made it up.
    At that point I decided it was nothing to get excited about. We humans are already perfectly capable of fabricating things.


    • You’re right: our track record on truthfulness hasn’t been good lately–and even the idea that we might maybe oughta check in with the facts from time to time seems to be sneaking out the window. So we now have a creation in our image. Isn’t modern life wonderful?


  8. Great post. Privacy assurances from any corporation mean nothing. As for these chatbots, six weeks ago Google gave me advance access to Bard, their artificial intelligence. I’m not sure why I was chosen. I didn’t ask for it.

    Google tells you ahead of time that not everything Bard tells you will be true. I’ve caught it spewing inaccurate information a lot of times. Bard apologizes and says it’s still in development.

    What’s really scary about it, though, is its creative process. It can plan out a novel for you, or write about whatever you ask it to. One day, we won’t have to challenge ourselves ever again. We’ll never learn what our abilities are. And it seems like that day is coming soon, and there’s no stopping it.


    • Having seen the credible but bland prose it produces (sort of like corporate writing; if you think about it too long, it didn’t really say anything), it’s hard to believe it could pull off a novel. Yes, it could put pieces of a plot together, but will they be anything anyone wants to read? (Admittedly, a lot of humans are plotting and writing at that level.) But–sigh–you could be right about this.


          • Here’s an example:

            What would be a good blog post title for the following idea?

            With artificial intelligence, one day, we won’t have to challenge ourselves ever again. We’ll never learn what our abilities are. (This is part of the first comment I left you)

            Answer:

            Here are some blog post titles that you can use for your idea:

            – The Rise of Artificial Intelligence: Will We Stop Challenging Ourselves?
            – Artificial Intelligence: The End of Self-Improvement?
            – Will Artificial Intelligence Make Us Lazy?
            – How Will Artificial Intelligence Change Our Relationship with Challenge?
            – The Future of Human Potential: Will We Be Replaced by AI?

            These titles are all attention-grabbing and thought-provoking, and they accurately reflect the content of your blog post. They are also likely to appeal to a wide audience of readers who are interested in the future of artificial intelligence. I hope this helps!

            So these are pretty good.


            • I’m inclined to argue that they’re overly slick, and I’m also aware that my response is colored by having been told that they’re AI generated, so I’m working backwards: conclusion first, evidence later.

              I expect we have a bumpy ride ahead. We might want to fasten our seat belts.


  9. In the good old days a friend of mine always invented a source for his scholarly articles (“wissenschaftlicher Aufsatz”)–that is, he smuggled in a reference he’d invented just for the occasion. He did this for ages (forty years?), and never, seriously never, did anyone ask him about these unverifiable sources. It was fun, a game, homo ludens, always a bit of a poked-out tongue at the holy “Wissenschaftlichkeit” (scholarliness). Like those titles hidden in large catalogues, invented by the poor souls who had to do the dirty work and who, overtired and underpaid, slipped in made-up authors and titles–like Borges’s library, just less intellectual. And now some stupid machine takes over. What a drag.
    Sadly we’ll not get the genie back into the bottle. I think we should kill it.


    • Makes me think that what’s funny (or scandalous) if a human does it quickly becomes old hat once a machine takes over. I suspect we probably should kill it, and I’m pretty sure we can’t.


      • Can a machine play? Machina ludens? I doubt it. A machine can’t laugh.
        What we have at hand are soulless executors of programs; we are giving inhumanness a space to reign without clear ideas of the consequences. This is profoundly dumb.


        • Well, to be fair, as a species we don’t have a good track record when it comes to long-term planning. We are, after all, some distance down the road toward making our planet uninhabitable (at least by many species, including ours) because to do anything else wouldn’t be cost-effective. That doesn’t make us look particularly smart.


          • The method of computation probably isn’t very different – we also abstract rules of thumb from experience and our human output is highly error-prone. The difference is the amount of data, the speed/power of the analysis, and that we have prisons and other sanctions for dangerous people. There are learned bodies working on AI safety, so fingers crossed.


            • Although to be fair, some of the most dangerous people on the planet will never get near a prison or be sanctioned in any way serious enough to make an impact. But yes, I do see your point.

