What risks lurk in companies and users who blindly worship AI? “They are so confident that they ignore the danger of it giving wrong answers.”

* This article is reprinted from DIGIDAY+, the paid service of DIGIDAY [Japanese edition], a media outlet for next-generation leaders responsible for branding.

“Why don’t scientists trust atoms? Because they make up everything.”

Greg Brockman, co-founder and president of OpenAI, recently announced “GPT-4” (Generative Pre-trained Transformer 4), the latest version of the AI language model developed by the company. GPT-4 is a fourth-generation autoregressive language model that uses deep learning to generate text that reads as if it were written by a human, and it serves as the technical basis for the AI chatbot ChatGPT. In the product demo accompanying the announcement, Brockman had GPT-4 build a working website from an image of handwritten notes.

In the same demo, he typed the prompt “tell me a funny joke,” and the joke GPT-4 came up with was, apparently, the one that opens this article. The capabilities of generative AI are certainly impressive and fascinating, but they also raise big questions about reliability and fabrication.

The risk of “hallucinations” brought about by AI

“Many executives are fascinated by ChatGPT,” said David Shrier, a professor of AI and innovation at Imperial College London. Given that this AI chatbot can instantly build websites, develop games, pioneer new pharmaceuticals, and produce passing answers on bar exams, it is no wonder interest is growing.

Such spectacular achievements can cloud the judgment of business leaders, said Shrier, a futurist who has written about emerging technologies. Companies and individual users who blindly worship ChatGPT are “ignoring the danger that the AI is overconfident and gives wrong answers.” He warns of the huge risks companies face when they rush to adopt ChatGPT without recognizing the pitfalls of this kind of tool.

The latest version of ChatGPT is built on a “large language model” that OpenAI trained on more than 300 billion words. In a paper published in 2018, Google researchers used neural machine translation as an example, showing that such models can “generate anomalous translations that deviate from the source,” and warned that AI can “hallucinate” information in this way.

Like the human brain, AI language models can sometimes go astray and produce delusions. Any output they deliver must therefore be fact-checked before it is used.
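To make that advice concrete, here is a minimal Python sketch of one possible verification habit: ask the model a question, then run a second pass asking it to flag claims that should be checked against a primary source. The pattern, the example question, and the prompts are illustrative assumptions of mine, not anything from the article or an OpenAI-endorsed method; the sketch assumes the openai Python package (version 1.0 or later) and an OPENAI_API_KEY environment variable.

```python
# Sketch of a two-pass "answer, then self-review" pattern.
# This reduces, but does not eliminate, hallucinations; anything that
# matters still needs confirmation by a human or an authoritative source.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single-turn prompt to GPT-4 and return the text reply."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


question = "In what year was the first iPhone released?"
answer = ask(question)

# Second pass: ask the model to list claims in its own answer that
# could be wrong and deserve a check against a primary source.
review = ask(
    f"Question: {question}\nAnswer: {answer}\n"
    "List any claims in the answer that could be wrong and should be "
    "verified against a primary source."
)

print("Answer:", answer)
print("Needs checking:", review)
```

Even this self-review step only surfaces candidates for checking; the final confirmation still has to come from a person or an authoritative source, which is exactly the point Shrier is making.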

Not a generation tool, but a generation-support tool


Source: BusinessInsider
