* This article is reprinted from DIGIDAY+, the paid subscription service of Digiday [Japanese version], a media brand for next-generation leaders responsible for branding.

“Why don’t scientists trust atoms? Because they make up everything.”
Greg Brockman, co-founder and president of OpenAI, recently announced the latest version of the AI language model developed by the company: GPT-4 (Generative Pre-trained Transformer 4). GPT-4 is a fourth-generation autoregressive language model that uses deep learning to generate text that reads as if it were written by a human, and it serves as the technical basis for the AI chatbot ChatGPT. In the product demo he gave at the announcement, he had GPT-4 build a working website from an image of handwritten notes.
In the demo, he also typed in the prompt “tell me a funny joke,” and the answer GPT-4 came up with was, reportedly, the joke that opens this article. The capabilities of generative AI are certainly astonishing and fascinating, but they also raise big questions about reliability and fabrication.
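It helps to see what “autoregressive” means in practice: the model repeatedly picks a plausible next token given everything it has generated so far. The toy vocabulary and probabilities in the Python sketch below are invented purely for illustration; a real model such as GPT-4 learns its next-token probabilities from training data using billions of parameters. Note what the loop optimizes for: plausibility, never truth, which is exactly where “hallucination” enters.

```python
import random

# Toy next-token probabilities standing in for a trained network. This
# hand-written table is invented for illustration only; a real model
# learns these probabilities from data rather than having them listed.
NEXT_TOKEN = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.5},
    "a":       {"cat": 0.5, "dog": 0.5},
    "cat":     {"sleeps": 0.7, "<end>": 0.3},
    "dog":     {"barks": 0.7, "<end>": 0.3},
    "sleeps":  {"<end>": 1.0},
    "barks":   {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> str:
    """Autoregressive loop: sample one token at a time, feeding each
    choice back in as the context for the next prediction."""
    token, output = "<start>", []
    for _ in range(max_tokens):
        candidates = NEXT_TOKEN[token]
        token = random.choices(list(candidates),
                               weights=list(candidates.values()))[0]
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "the cat sleeps" -- fluent, but never fact-checked
```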
The risk of “hallucinations” brought about by AI
“Many executives are fascinated by ChatGPT,” said David Shrier, a professor of AI and innovation at Imperial College London. Given that this AI chatbot can build websites in an instant, help develop games and pioneering pharmaceuticals, and produce passing answers to bar exams, it is no wonder interest is growing.
Such spectacular feats can cloud the judgment of business leaders, said Shrier, a futurist who has written about emerging technologies. Companies and individual users who blindly worship ChatGPT are “ignoring the danger that the AI is overconfident and gives wrong answers.” He warns of the huge risks companies face in rushing to adopt ChatGPT without recognizing the pitfalls of this kind of tool.
The latest version of ChatGPT is an AI tool built on a “large language model” that OpenAI trained on more than 300 billion words. In a paper published in 2018, Google researchers used neural machine translation as an example, arguing that such systems can “generate anomalous translations that deviate from the original,” and warned that AI can produce “hallucinations”: fluent output with no grounding in fact.
AI language models can sometimes go astray and produce delusions, just like the human brain. Verifying the facts in their output must therefore never be neglected.
Not a content production tool, just a production support tool
“If companies do not catch and deal with the ‘hallucinations’ in AI output before it is published, they will be serving readers false information. In the worst case, they risk damaging their corporate image,” warns Greg Bortkiewicz, a digital marketing consultant at Magenta Associates, a consultancy specializing in integrated communications.
In an article posted on its official blog alongside the GPT-4 announcement, OpenAI said: “[GPT-4 is] safer [than previous versions], more aligned with human values, and 40% more likely to generate fact-based responses.”
That is reassuring on its face, but it should be taken with a grain of salt. Even OpenAI co-founder Sam Altman admits that GPT-4’s capabilities are “still flawed and limited.” “People who use GPT-4 for the first time are very impressed, but that impression seems to fade with repeated use,” Altman said.
Bortkiewicz also said: “GPT-4 can generate hallucinated information and inappropriate content that does not exist, and it knows nothing about events that happened after September 2021, when its training data ends.” As with previous versions, he advises companies to treat GPT-4, under human oversight, as “a production support tool, not a content production tool.” A minimal sketch of what that workflow can look like follows below.
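As a concrete illustration of “production support, not content production,” here is a minimal Python sketch of a publication workflow with a hard human gate. The generate_draft function is a hypothetical stand-in for a language-model API call, not any specific product’s interface; the point is only that nothing reaches readers without a human fact-check.

```python
def generate_draft(prompt: str) -> str:
    """Hypothetical stand-in for a language-model API call; a real
    integration would send the prompt to a service such as GPT-4."""
    return f"[AI-generated draft responding to: {prompt}]"

def publish(text: str) -> None:
    print("Published:", text)

def produce_content(prompt: str) -> None:
    draft = generate_draft(prompt)
    print("--- AI DRAFT (unverified) ---")
    print(draft)
    # Hard gate: nothing is published without explicit human sign-off,
    # so hallucinated "facts" are caught before they reach readers.
    answer = input("Fact-checked and approved for publication? [y/N] ")
    if answer.strip().lower() == "y":
        publish(draft)
    else:
        print("Draft held back for human editing.")

produce_content("Summarise our Q3 results for the company blog")
```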
AI is inherently “stupid”
A similar sentiment was echoed in an essay that UK-based technologist James Bridle wrote for The Guardian’s website. In the piece, titled “The stupidity of AI” and published, as it happened, two days after the GPT-4 announcement, he argued that AI is inherently stupid: “AI has read most of the information on the internet and knows what human language is supposed to sound like, but it has no relation to reality.”
His advice to corporate executives was to look at ChatGPT, and AI in general, from both its positive and negative sides. “Believing that this kind of AI is actually knowledgeable or meaningful is dangerous. It risks poisoning the well of collective thought.”
In other words, reaching for AI as a shortcut to results may encourage corporate negligence, and if stakeholders find out, the brand’s image may suffer.
What we need is research funding and a mechanism for checks and balances
A concrete example: in February 2023, a few days after the shooting at Michigan State University, administrators at Vanderbilt University in Tennessee used ChatGPT to compose a condolence email and sent it to the school community. The email in question was signed by two staff members, but the truth came out because a line in small print at the end noted that it was “quoted from ChatGPT.”
The university apologized the next day for its lapse in judgment. Of the condolence message, Bridle said that many people must have felt it was “morally wrong, deceitful, or creepy.”
At the same time, training machine learning systems on real-world data can itself lead AI astray. Victor Botev, co-founder and chief technology officer of Iris.ai, a Norway-headquartered startup specializing in AI research and development, raises the question:
“ChatGPT’s training data is a huge amount of text automatically collected and processed from the internet. How much misinformation does that text contain, and how much of the data is being used correctly?”
Securing funding for focused AI research in those areas is important, he argues. Beyond that, a mechanism is needed to maintain appropriate checks and balances on the use of AI.
A shift from quantity to quality
To accelerate the evolution of AI technology, Microsoft announced in January 2023 that it would “invest additional billions of dollars (hundreds of billions of yen) over the next few years” in OpenAI. Yet the company recently laid off the ethics team in its AI division. Is that not a worrying combination?
Botev emphasized the importance of building and strengthening security guardrails for AI and improving the robustness of data structures. On top of that, he sees a need for a shift from quantity to quality, in people’s mindsets and in data alike. “Large language models have limitless potential, but their recent rise has highlighted open problems around factual accuracy, the validation of knowledge, and fidelity to the underlying message.”
He also points to problems in the AI strategies of major companies such as Microsoft. Large companies have the resources and technology to handle common, search-related problems, but those solutions are “not one-size-fits-all,” he said.
He added: “Using AI to solve problems in niche fields and business applications requires considerable investment, and without that funding, all you can build are language models that are not fit for the purpose. Ultimately, to build AI whose decision-making process can be relied on, we must make it a priority to ensure the explainability and transparency that underpin the decisions AI makes.”
(Text: Oliver Pickup, Translation: SI Japan, Editing: Ryohei Shimada)