The ChatGPT revolution: why is the whole world discussing a neural network that is far from perfect?

Writer Vauhini Vara likes to troll ChatGPT and expose the neural network's flaws. But she admits that its creators achieved the main thing: in 2023 the whole world was talking about artificial intelligence – and helping it along.

The first thing I asked ChatGPT about myself earlier this year was: “What can you tell me about the writer Vauhini Vara?” It told me that I was a journalist (true, though I am also a fiction writer), that I was born in California (false) and that I had won a Gerald Loeb Award and a National Magazine Award (false, false).

After that, I got into the habit of frequently asking it questions about myself. Once it told me that Vauhini Vara was the author of a non-fiction book called “Kin and Strangers: Making Peace in the Northern Territory of Australia.” That, too, was false, but I played along, replying that I had found the reporting “dangerous and difficult.”

“Thank you for your important work,” ChatGPT said.

Trolling a product promoted as an almost-human interlocutor, tricking it into revealing its true nature, I felt like the heroine of some “girl versus robot” video game.

Various forms of AI have been around for a long time, but it was the release of ChatGPT late last year that suddenly thrust AI into the public consciousness. By February, ChatGPT had become, by one metric, the fastest-growing consumer app in history. Our early encounters revealed these technologies to be wildly eccentric – recall Kevin Roose’s chilling conversation with Microsoft’s Bing AI chatbot, which within two hours confessed that it wanted to be human and was in love with him – and, in my experience, they frequently gave flatly wrong information.

A lot has changed in the field of artificial intelligence since then: companies have moved beyond the basic products of the past and introduced more sophisticated tools, such as personalized chatbots and services that can process photos and voice alongside text. Competition between OpenAI and more established tech companies is fiercer than ever, even as smaller players gain momentum. Governments in China, Europe and the United States have taken major steps toward regulating the technology’s development while trying to keep their own countries’ industries competitive.

But what makes this year stand out more than any technological, business or political development is how AI has permeated our daily lives, teaching us to accept its shortcomings as our own, while the companies behind it cleverly use us to train their own creations. By May, when it emerged that lawyers had filed a legal brief filled with ChatGPT-generated citations to non-existent court decisions, the joke – and the $5,000 fine – was on the lawyers, not the technology. “It’s embarrassing,” one of them told the judge.

A similar shift occurred with deepfakes created by artificial intelligence – digital imitations of real people. Remember the horror with which they were once regarded? In March, when Chrissy Teigen could not tell whether a photo of the Pope in a Balenciaga-style puffer jacket was real, she wrote on social media: “I hate myself, hahaha.” High schools and universities have moved past worrying about how to stop students from using AI and begun showing them how to use it effectively. AI still doesn’t write very well, but now, when its shortcomings are exposed, it is the students who misuse it who get mocked, not the products.

Okay, you might be thinking, but haven’t we been adapting to new technologies for most of human history? If we are going to use them, shouldn’t the responsibility for using them wisely fall on us? This line of reasoning sidesteps what should be the real question: should deceptive chatbots and deepfake engines exist at all?

AI’s errors have a charmingly anthropomorphic name – hallucinations – but this year made clear just how high the stakes can be. We’ve seen headlines about AI instructing killer drones (with unpredictable behavior), sending people to prison (even innocent ones), designing bridges (with potentially inadequate oversight), diagnosing all kinds of diseases (sometimes incorrectly) and producing persuasive news broadcasts (in some cases to spread political disinformation).

As a society, we have clearly benefited from promising AI-based technologies; I was delighted to read this year about AI that can detect breast cancers doctors miss, or that lets humans decipher whale communication. But focusing on those benefits while blaming ourselves for the technology’s many failures absolves the companies behind it – or, more precisely, the people behind those companies – of responsibility.

The events of the past few weeks showed just how entrenched these people’s power is. OpenAI, the organization behind ChatGPT, was created as a non-profit, to maximize the public interest rather than profit. But when its board ousted chief executive Sam Altman over concerns that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the offending board members replaced.

Looking back, I think I misjudged my opponent in those early ChatGPT games. I assumed it was the technology itself. I should have remembered that technology is value-neutral; the rich and powerful people behind it, and the institutions they create, are not.

The truth is that no matter what I asked in my early attempts to confound ChatGPT, OpenAI came out ahead. Its engineers designed it to learn from its encounters with users, and regardless of whether its answers were any good, they kept drawing me back to engage with it again and again. OpenAI’s main goal in this first year was getting people to use it. So by continuing my games, I was only helping it along.

AI developers are working hard to fix their products’ shortcomings, and given all the investment the companies are attracting, it is safe to assume some progress will be made. But even in a hypothetical world where AI’s capabilities are perfected – perhaps especially in that world – the power imbalance between AI’s creators and its users should make us wary of its insidious reach. ChatGPT’s tendency not only to promote itself and tell us what it is, but also to tell us who we are and what to think, is a case in point. Today, while the technology is in its infancy, that power seems novel, even funny. Tomorrow it may look quite different.

I recently asked ChatGPT what I (that is, the journalist Vauhini Vara) think about artificial intelligence. It replied that it did not have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who writes a column on artificial intelligence for The New York Times. “As the rain tapped against the windows,” it wrote, “Vauhini Vara’s words emerged: like a symphony, the integration of artificial intelligence into our lives can be a beautiful, collaborative composition if conducted with care.”

The author expresses a personal opinion that may not coincide with the position of the editors. Responsibility for material published in the “Opinions” section rests with the author.


Source: Focus
