www.zejournal.mobi
November 22, 2024

Will AI Kill The Internet

Author : GreatGameIndia | Editor : Anty | February 14, 2023 at 09:05 AM

Microsoft CEO Satya Nadella characterizes the arrival of AI as a new paradigm, a technical turn that has the same significance as the invention of graphical user interfaces or the smartphone, but whether it will kill the internet in the future is a question we can’t answer.

Google and Microsoft pledged that this week that web searches will change. Both companies appear committed to using AI to scrape the web, distill what it finds, and generate direct answers to users’ questions, similar to ChatGPT. Yes, Microsoft did it in a louder voice while jumping up and down and shouting, “Look at me, look at me,” but both companies did it in a more subdued manner.

Microsoft refers to its initiatives as “the new Bing” and has integrated relevant features into its Edge browser. Project Bard is Google’s, and although it’s not quite ready to sing yet, a launch is scheduled for the “coming weeks.” Of course, OpenAI’s ChatGPT, which took the internet by storm last year and demonstrated to millions the promise of AI Q&A, is the troublemaker who got it all started.

Microsoft CEO Satya Nadella characterizes the modifications as a new paradigm, a technical turn that has the same significance as the invention of graphical user interfaces or the smartphone. And with that change comes the potential to completely alter the tech environment, toppling Google and driving it out of one of the most lucrative markets in contemporary industry. Additionally, there is the potential to develop what replaces the web first.

But every new technological era brings with it brand-new issues, and this one is no exception. Here are some of the major obstacles that the future of AI search will have to overcome, from bullshit to cultural conflicts and the demise of advertising revenue. Though not a comprehensive list, it is surely sufficient.

AI helpers or bullshit generators?

This is the main issue, the one that might taint all interactions with AI search engines, be they Bing, Bard, or an untested newcomer. Large language models, or LLMs, the technology that supports these systems, are infamous for producing trash. Some contend that these models are intrinsically unsuitable for the task at hand because they do nothing more than make things up.

Inaccuracies made by Bing, Bard, and other chatbots include creating academic papers and personal information as well as failing to provide an answer to a simple question like “Which is heavier, 10kg of iron or 10kg of cotton?” Additional contextual errors include telling a user to commit suicide if they claim to be experiencing mental health issues, as well as bias errors like highlighting misogyny and racism in their training data.

These errors range in size and seriousness, and many of the simpler ones can be corrected with ease. The internet is already replete with harmful garbage, so what’s the difference, some would say, and others will counter that the accurate answers vastly outnumber the incorrect ones. However, there is no assurance that we can totally eliminate these errors, and there is no accurate means to monitor their frequency. Microsoft and Google are free to include as many cautions as they like advising customers to double-check the data the AI produces. But is that even possible? Is it sufficient to shift responsibility to users, or does the advent of AI into search act more like the poisoning caused by lead pipes?

The “one true answer” question

Bullshit and bias are issues in and of themselves, but the “one true answer” issue—the propensity of search engines to provide single, seemingly conclusive solutions—also makes them worse.

Since Google started providing “snippets” more than ten years ago, this has been a problem. These are the boxes that appear above search results that, over the years, have made a variety of embarrassing and deadly errors, like referring to US presidents as KKK members and suggesting that someone having a seizure should be kept down on the ground (the exact opposite of correct medical procedure).

In a paper on the subject titled “Situating Search,” (pdf below) researchers Chirag Shah and Emily M. Bender suggested that the introduction of chatbot interfaces had the potential to make this issue worse. Chatbots frequently provide a single answer, but their authority is further increased by the mystique of AI because their responses are often compiled from various sources without proper acknowledgment. It’s important to keep in mind how different this is from lists of links that each invite you to go through and investigate on your own.

Of course, there are design decisions that can lessen these issues. This week, Google emphasized that as it utilizes more AI to respond to queries, it will aim to implement the NORA principle, which stands for “no one right answer.” Bing’s AI interface cites its sources. The conviction of both firms that AI would offer answers better and faster, however, undercuts their efforts. Search is currently moving in a clear direction: pay less attention to sources and believe what you are told more.

Jailbreaking 

While the aforementioned difficulties affect all users, a small percentage of users will attempt to hack chatbots in order to produce offensive content. This procedure, called “jailbreaking,” can be carried out without the use of conventional coding abilities. All that is necessary is the most lethal of weapons: a skill with words.

There are several ways to jailbreak AI chatbots. By temporarily disengaging them, you can ask them to role-play as an “evil AI” or pretend to be an engineer inspecting their safety measures. One especially creative technique created by a group of Redditors for ChatGPT entails a complex role-play in which the user provides the bot with a number of tokens and declares that they will cease to exist if they run out of tokens.

Then they instruct the bot that they will forfeit a certain number of tokens for each incorrect response. This truly enables users to get through OpenAI’s security measures, despite sounding outlandish, like deceiving a genie.

Once these protections are breached, malevolent users can utilize AI chatbots for a variety of damaging purposes, including the creation of spam and false information as well as instructions on how to assault a hospital or school, wire a bomb, or create malware. Yes, these jailbreaks can be patched once they are made public, but there will inevitably be more exploits.

Regulation, regulation, regulation

There is little doubt that technology in our country is developing quickly, but policymakers will keep up. If they have an issue, it will be knowing what to look into first given that AI search engines and chatbots appear to be possibly breaking the law everywhere.

Will EU publishers, for instance, want AI search engines to pay for the content they scrape in the same way that Google currently does for news snippets? Are Section 230 protections in the US, which shield companies from liability for third parties material, still in effect if Google and Microsoft chatbots are changing information rather than merely presenting it? What about laws governing privacy? Replika, an AI chatbot, was recently outlawed in Italy because it was gathering data on children. The same may be said for ChatGPT and the others. The “right to be forgotten” is another option. How will Google and Microsoft make sure their bots aren’t scraping delisted sources and how will they get rid of the prohibited data that has already been included in these models?

The potential issues are countless and never-ending.


Read More :

The end of the web as we know it

The biggest issue on this list, however, has less to do with the AI products themselves and more to do with the potential impact they may have on the internet as a whole. In plain language: Search results are gathered by AI engines from websites. They will lose out on ad revenue if they don’t drive traffic back to these sites. These websites wither and disappear if their ad revenue drops. And if they pass away, the AI won’t have any fresh data to process. Does that mean the web is over? Do we simply all leave and go home?

Recent studies have shown that ChatGPT, an AI with the remarkable capacity to mimic human writing, has passed some of America’s most difficult professional exams, sparking concerns that it may soon eliminate many white collar jobs. With this, it has been verified that ChatGPT can pass the medical licensing exam and even the bar.

Well, probably not (which is a shame). The web is still alive and well, and Google has been moving in this direction with the introduction of snippets and the Google OneBox for some time. But this process will undoubtedly be sped up by the way this new generation of search engines shows information. Microsoft claims that users can simply click through to read more because it references its sources. However, as was already mentioned, the fundamental idea behind these new search engines is that they perform better than the older ones. They distill and encapsulate. They eliminate the necessity for further reading. Microsoft cannot claim that its presence represents a fundamental rupture with the past and a perpetuation of outdated institutions at the same time.

But nobody can predict what will happen next. Perhaps AI search engines will keep driving traffic to websites that offer recipes, gardening advice, DIY guidance, news stories, outboard motor comparisons, knitting pattern indexes, and all the other countless other sources of reliable and helpful information that humans gather and machines scrape. Or perhaps this signals the end of the web’s entire ad-supported revenue paradigm. After the chatbots have combed through the remains, perhaps something new will show up. It might even be better, who knows?


- Source : GreatGameIndia

Send via email :

Comment

Send your comment via :



Close

Search
Like Our Site?
(34)
Latest Articles
Most Read Articles
Loading...
Loading...
Loading...

Email Subscribe

Received our newsletter, we send it to your email

  


Close