Why Google Isn’t Hurrying to Implement AI Chatbots: OpenAI’s ChatGPT and Stability AI’s Stable Diffusion are just two examples of this year’s successful new generative AI models, which have set the stage for an AI technology explosion in 2023. Google, which has its own powerful (but not sentient) LaMDA AI chatbot, says it has no plans to rush its models out to the public.
According to CNBC, employees at an all-hands meeting asked whether the company was losing its competitive edge, prompting executives to explain why they were taking such a cautious tack.
Google CEO Sundar Pichai and head of AI division Jeff Dean have warned that the company’s missteps in generative AI could damage the company’s reputation and turn off customers who rely on the company’s other, more established offerings.
Would people still trust Google’s search results if Google made LaMDA public and it immediately began spreading hateful misinformation? According to CNBC, Dean said, “We are looking to get these things out into real products and into things that are more prominently featuring the language model. But it is crucial that we get this right.”
Dean remarked that search-based AI systems need extra care to account for unintended consequences that could cause bias, toxicity, or safety problems. The AI chief also made some oblique references to rival companies that have already released their models to the public, suggesting that some of them will “make stuff up” when they are unsure of an answer.
As for Pichai, he reportedly tried to reassure workers, telling them that “a lot” was being planned for AI at Google. The company can also point to numerous instances where other, more ambitious tech firms have prematurely released AI systems.
Perhaps most notably, Microsoft’s “Tay” chatbot, released on Twitter in 2016 to learn from users’ conversations, quickly morphed into a racist a-hole that expressed sympathy for Hitler within 24 hours of its debut. Earlier this year, another chatbot trained on 4chan content amassed over 15,000 racist posts in just one day.
Google held the all-hands meeting about a week after OpenAI’s ChatGPT swept the tech world. The public release of that system, built on OpenAI’s GPT-3 language model, prompted a flurry of screenshots showing users instructing it to generate everything from parody Bible verses and poetry to a less-than-convincing Gizmodo article.
Some more advanced users have even figured out how to get the model to draft lines of code and answer specific queries, leading some to speculate that it could one day threaten Google’s search business.
Opening a browser and typing out a query can indeed feel archaic compared to the prospect of a near-future world where everyone has a Siri-like personal assistant on their phone backed by a model like OpenAI’s. In theory, generative AI could replace a page of links with coherent, written-out paragraphs.
Despite the executives’ calls for caution, the company has taken some steps this year to further expand access to LaMDA in particular. Back in August, Google made the chatbot available to users in a series of supervised experiments via the AI Test Kitchen app.
However, Google has opted to present LaMDA to users through a series of structured scenarios rather than making the bot available in an open-ended format. And Google isn’t focusing solely on LaMDA; it has several other generative AI models in development.
Gizmodo was invited to the company’s New York headquarters for an AI event last month, where Google demonstrated some early but impressive examples of text-to-image and text-to-video systems. Previous generative AI systems for video have typically produced incoherent blobs of meaningless imagery, but Google’s demo generated a consistently coherent 45-second story.