
I was trudging into the office this week and greeted a colleague with the customary “Morning, how are you?” Now, usually I’m the grumpy, not-fully-awake one. But this particular morning, my colleague out-gloomed me and responded “I think it’s gonna be a rough day; ChatGPT is down.” Yes, reader, I chortled. Oh the horror, no reliably unreliable shortcut to turn to! And it’s not just coming from me, but from the big bosses inflicting the technology upon us. Sundar Pichai is the CEO of Google’s parent company, Alphabet. He was there with all the other ~~traitors~~ tech giants at the felon’s inauguration, pictured standing next to Jeff Bezos (and Lauren Sanchez’s winter bustier). Anyway, Pichai just gave an exclusive interview to BBC News in which he advised users not to “blindly trust” AI because of how prone it is to errors. What, the same AI technology that attributed false quotes to real film critics and gave kids instructions on how to light a match? You don’t say.
AI is here to help… but fact-check everything it says: “This is why people also use Google search, and we have other products that are more grounded in providing accurate information.” … While AI tools were helpful “if you want to creatively write something”, Mr Pichai said people “have to learn to use these tools for what they’re good at, not blindly trust everything they say”. He told the BBC: “We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors.”
An expert dissents: “We know these systems make up answers, and they make up answers to please us — and that’s a problem,” Gina Neff, professor of responsible AI at Queen Mary University of London, told BBC Radio 4’s Today programme. “It’s okay if I’m asking ‘what movie should I see next’, it’s quite different if I’m asking really sensitive questions about my health, mental wellbeing, about science, about news,” she said. She also urged Google to take more responsibility over its AI products and their accuracy, rather than passing that on to consumers. “The company now is asking to mark their own exam paper while they’re burning down the school,” she said.
Gemini 3 is coming soon: The company unveiled the model on Tuesday, claiming it would unleash “a new era of intelligence” at the heart of its own products such as its search engine. In a blog post, it said Gemini 3 boasted industry-leading performance across understanding and responding to different modes of input, such as photo, audio and video, as well as “state-of-the-art” reasoning capabilities. In May this year, Google began introducing a new “AI Mode” into its search, integrating its Gemini chatbot, which is aimed at giving users the experience of talking to an expert. At the time, Mr Pichai said the integration of Gemini with search signalled a “new phase of the AI platform shift”. The move is also part of the tech giant’s bid to remain competitive against AI services such as ChatGPT, which have threatened Google’s online search dominance.
Tech is in a race, truth is the cost: Mr Pichai said there was some tension between how fast the technology was being developed and how mitigations were built in to prevent potential harmful effects. For Alphabet, Mr Pichai said managing that tension means being “bold and responsible at the same time”. “So we are moving fast through this moment. I think our consumers are demanding it,” he said. The tech giant has also increased its investment in AI security in proportion with its investment in AI, Mr Pichai added. “For example, we are open-sourcing technology which will allow you to detect whether an image is generated by AI,” he said.
Just to recap, Google now puts its AI Overview results at the top of every Google search page, and the head of Google is telling us to double check everything the AI Overview says. Sundar Pichai is worth an estimated $1.1 billion (but that’s according to AI Overview, so double check it!). I can make an equally artful, if not more artful, nonsensical statement — where are my billions?!! Like I’ve been grumbling all along, AI is a “solution” to a human-made, ultimately unnecessary problem. It’s just that the “problem” is sweat equity, good old-fashioned labor and research. In the context of Pichai’s statements here, not only is AI stealing human work for its training, but it’s also so unreliable and inconsistent that humans still have to look elsewhere to fact-check what it says. What kind of a shortcut is that, I ask you! Also, I love how Pichai basically cuts to the chase at the end, admitting that all the tech giants are rushing to get competitive products out, which is why they’re floundering on the whole truth/accuracy side of things. For what it’s worth, after typing in your search on Google, you can click on the “Web” tab and it removes the AI results. I picked up that and other helpful AI-deactivation tips from Consumer Reports.
photos credit: Pawel Supernak/PAP/Avalon, Xavier Collin/Image Press Agency/Avalon, Getty

I’ve been listening to a lot of speeches about AI at my job recently. One thing all of the speakers have said repeatedly is that you have to teach AI, you have to continually feed it accurate information or it doesn’t work. It’s also case and space sensitive. I think a lot of people are about to mess up their businesses because they don’t understand how AI works. You still need humans to teach it.
So if humans have to continually teach AI to be correct, then what’s the point of AI?
Great question, @SusanCollins! And yet AI is “inevitable”.
Another great post. Thank you for addressing the multi-faceted concerns that people *should* have about AI. So many people have jumped on the AI bandwagon without understanding or even considering the very real costs and risks associated with it. Kismet and Celebitchy, I want you to know how deeply I appreciate your coverage of AI and tech issues!