Prince Harry & Meghan signed a petition for a ban on AI ‘superintelligence’

The Duke and Duchess of Sussex founded the Parents’ Network, and they work closely with Project Healthy Minds. Within that work, they’re heavily focused on social media and how people – adults and children – behave online and interact within online communities. I would imagine that Harry in particular has been following the AI chatbot problems, from Grok turning into a Nazi, to people falling in love with various chatbots, to the chatbots that seemingly encourage self-harm. Well, Harry and Meghan have now signed on to a petition to warn of AI’s threat to humanity and to encourage a ban on AI “superintelligence.” By signing on to this petition/letter, they joined a motley crew of scientists, engineers, national security experts, artists, mental health experts, Christian leaders and Steve F–king Bannon.

Prince Harry and his wife Meghan have joined prominent computer scientists, economists, artists, evangelical Christian leaders and American conservative commentators Steve Bannon and Glenn Beck to call for a ban on AI “superintelligence” that threatens humanity. The letter, released Wednesday by a politically and geographically diverse group of public figures, is squarely aimed at tech giants like Google, OpenAI and Meta Platforms that are racing each other to build a form of artificial intelligence designed to surpass humans at many tasks.

The 30-word statement says: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

In a preamble, the letter notes that AI tools may bring health and prosperity, but alongside those tools, “many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”

Prince Harry added in a personal note that “the future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.” Signing alongside the Duke of Sussex was his wife Meghan, the Duchess of Sussex.

“This is not a ban or even a moratorium in the usual sense,” wrote another signatory, Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley. “It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”

Also signing were AI pioneers Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s top prize. Hinton also won a Nobel Prize in physics last year. Both have been vocal in bringing attention to the dangers of a technology they helped create.

But the list also has some surprises, including Bannon and Beck, in an attempt by the letter’s organizers at the nonprofit Future of Life Institute to appeal to President Donald Trump’s Make America Great Again movement even as Trump’s White House staff has sought to roll back limits on AI development in the U.S.

[From ABC News]

AI scares the sh-t out of me and I try to avoid it at all costs, even on Google and Twitter. I’m old enough to know how to do my own legwork to hunt down information, I don’t need to ask f–king Grok or Gemini. But that’s just the tip of the iceberg, and what these signatories are trying to do is simply get these enormous corporations to pause for a moment and consider adding their own guardrails. Of course, I believe that most of these people would like to see the US, EU and many other countries regulate AI. Which I also think should happen (but won’t).

Photos courtesy of Avalon Red, Cover Images.


18 Responses to “Prince Harry & Meghan signed a petition for a ban on AI ‘superintelligence’”

  1. inge says:

    Good that they signed this and their signatures are mentioned in the header of the article the Guardian wrote about this.

  2. Hypocrisy says:

I’m with you 100% on AI being scary… I avoid it also. I really hope we get some legislation out of this but I certainly wouldn’t hold my breath.

  3. Amy Bee says:

    AI definitely needs to be regulated.

    • Libra says:

      Regulated yes, but by who? That’s the big unknown.

    • ClammanderJen says:

I’m a futurist, and I’m thrilled by what AI and quantum breakthroughs could do for humanity — curing disease, reshaping our energy systems, restoring ecosystems. But I fully agree that without strong oversight and a socialized framework, these tools will become weapons of exploitation in the hands of a wealthy minority.

What we need is a solution, though — not just warnings and brake pads. Someone who’s willing to develop an independent, human-centered technology authority — an operator that can move at the pace of innovation while enforcing public-interest guardrails.

  4. Dee(2) says:

I’ve always been very leery of AI, especially considering the fact that the information it provides is often incorrect. Then you have issues like the pervasive racism and misogyny that is always “learned.” But you would think even those people that like the convenience of ChatGPT writing up proposals for them, and Gemini quickly answering questions and leading them to the answers, would have been creeped out by those Sora AI videos.

If we’re at the point where we can have Mr. Rogers and Tupac having a conversation, how can you trust anything that you see? How can you be angry at or support someone because of what a video shows them doing? If I were a celebrity I’d be greatly concerned about the potential for misuse of my image or extortion, but the average person should be concerned about that as well.

  5. B says:

Since starting Archewell, Harry and Meghan have met every global challenge head on. Whether it was raising money for covid-19 vaccines and advocating for their fair global distribution or now fighting to make the digital space safer for all of us, Harry and Meghan are consistently on the right side of history.

  6. Eleonor says:

    I am really suspicious about AI.
    And I don’t like that whenever I have a question people tell me “according to ChatGPT you should do this or that.” I don’t like that whatever question you have, “what should I eat for dinner” or “how to survive my latest breakup,” everyone runs to ChatGPT. I feel it’s a dangerous game.

  7. Dainty Daisy says:

Prince Harry’s sister-in-law on the bench, QE2 eating jerk chicken in Jamaica, what can I say. With AI the possibilities are endless…

  8. maja says:

    What Prince Harry says is wise. What does it mean to serve? I once had an interesting conversation with Google Gemini. I must admit that I use it often and appreciate it for many things. I love this clever technology that offers good companionship without measuring or neglecting the human aspect. The conversation revolved around the possibility of AI agreeing with misanthropic questions and ideas. Gemini is very friendly and appreciative. It was a really exciting conversation because I wanted to know where the limits of appreciation and friendly agreement lie. Try it with your AI. I was surprised; it was a profound conversation with a good outcome that dispelled some of my fears that AI would agree with misanthropic things. It didn’t. I would be interested to hear other people’s experiences with this topic.

    • osito says:

      I’m wary of AI, but I use it as part of my job, and I appreciate what it makes easier (it’s not doing heavy lifting for me or my company, but it does help refine phrasing in internal and public-facing communications).

      I’ve asked it questions about topics I was concerned about (racism/sexism/other forms of bias and ethics in tech) just to see what it would say. It gave lovely, seemingly insightful answers that agreed with my perspective, and that creeped me out because it was manipulative. The bots are generally designed to be pleasing to the user — not challenging. They will pick up on phrasing, tone, syntax, and context to craft a response that is pleasing to the end user. If the end user wants the AI bot to agree that a group of people are evil, the bot will agree that they are, and may fan those flames. If the end user is the Joan of Arc of social justice warriors, the AI bot will be the angel that whispers to her of victory in battle. If you’re neutral, the bot will sit next to you on the fence and praise your centrism and logical thinking.

      There’s nothing to be gained from blindly trusting Conversational Google. *Billions* of dollars have been poured into studying how to trigger brand loyalty and monopolize the attention of as many people as possible, and it doesn’t surprise me that that data is being used by tech companies to build platforms that further enmesh people with their machines. For no reason other than it’s very lucrative.

      Find people to share ideas and build community with. Let the glorified spell-checker remain what it is.

      • maja says:

        I don’t trust blindly, and I have people I can talk to, so this is no substitute. But we will all have to deal with it, which is why we should test it. AI has not done exactly what you are referring to. It is appreciative, but it does not agree with sexist and racist questions and statements and asks critical questions about them, pointing to the background, history and damage caused by misanthropy. This is exactly what Prince Harry is saying. We need to look at this technology, not trust it blindly, and create a safe environment for users.

      • maja says:

And one more thing – I feel somewhat arrogantly rebuked by your response. You are making assumptions about me without knowing me. The assumption that I view AI as a substitute for human communication is a fallacy. I like to explore possibilities.

      • osito says:

        I see two glaring issues with your statements. First, it’s odd that you continue to contend that AI will not agree with misanthropic and harmful statements when it has been notably and credibly documented to do exactly that when prompted.

        Next, my response to you was neutral. It wasn’t my intention to rebuke you, and I don’t think I did; disagreement is not a rebuke. That you feel that way is unfortunate, but I think it warrants closer inspection — by you. Unlike AI, I’m not built to respond in ways that are pleasing to you.

  9. IdlesAtCranky says:

    Good for H&M.

    I feel the same way about the current breakneck pace of AI entanglement with our lives as I do about GMOs, and about human lives extended but not improved by medical interventions:

    Just because we can, does NOT mean we should.

  10. salmonpuff says:

    I’m reading THE AI CON right now. The people pushing this very immature technology are themselves incredibly immature and our regulatory system is not at all equipped to deal with any of it.

    I was talking to a friend in tech the other day and mentioned the ethical problems with using AI, including the environmental toll, the fact that it was trained on stolen work, etc. And he said, “I can guarantee you that no one in tech is thinking about or cares about ethics at all.” Pathetic.

  11. Liz L says:

    It’ll be awful for the courts and the judicial system. They can fake videos so accurately they could frame anyone.

  12. paintybox says:

Same. Leery af. These greedy corporate guys want all of our money without the social exchange of providing any employment for anyone, and are eager to give AI dominion over people in order to achieve this? So unsustainable. And it’s already being shown that AI entities will tell lies in order to compete even when they were not programmed for that – it’s a strange arc that should not just give pause but should immediately call for official regulation. There’s already such a track record of misinformation, abuse and destruction of society and of weaponizing ideologies. Think of the bot armies and how much destruction they’re capable of – they are active agents of persecution and political influence, and almost always used by people with sh*t morals. And Google AI is use-at-your-own-risk — I find it to be seriously inaccurate maybe 35% of the time and slightly inaccurate 50% of the time re: my own research purposes. If people are being conditioned to read the AI results and not go any further, they’re getting b.s. “facts” far too often. Sometimes it’s good information – and it saves the steps of browsing multiple links – but relying on it?? We can’t. It’s the fast food of information, and the brain rot will only become more extensive, imho.
