We Need To Talk About How Advanced Artificial Intelligence Is Getting

Over the past decade, researchers have made significant strides across many areas of artificial intelligence (AI); some in the field have begun to call it a “golden decade.”

Over the past several days, I have been experimenting with DALL-E 2, a program created by the San Francisco firm OpenAI that can transform textual descriptions into incredibly lifelike visuals.

When OpenAI asked me to beta-test DALL-E 2 (a play on Pixar's WALL-E and the artist Salvador Dalí), I couldn't resist. I devoted a good chunk of my time to dreaming up bizarre, humorous, and abstract prompts for the AI to work with, such as “a 3D rendering of a suburban home shaped like a croissant,” “a daguerreotype portrait of Kermit the Frog from the 1850s,” and “a charcoal sketch of two penguins drinking wine in a Parisian bistro.” Within seconds, DALL-E 2 would spit out a handful of images matching my request, and they were frequently, eerily lifelike.

What’s Impressive About DALL-E 2

What's impressive about DALL-E 2 isn't just the art it generates; it's how it generates it. These images aren't stitched together from existing photos found around the internet. They are the product of a sophisticated AI technique called “diffusion,” which starts with a completely random set of pixels and iteratively refines them until the result matches a given text description.
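
To make the idea concrete, here is a toy sketch in Python of that kind of iterative refinement. It is only an illustration: the “denoiser” below is a made-up stand-in, whereas DALL-E 2 relies on large neural networks conditioned on the text prompt.

```python
import numpy as np

# Toy illustration of the "diffusion" idea: start from arbitrary pixels and
# repeatedly refine them toward a target. The target here is a random array
# standing in for "what the text prompt describes"; real systems learn this.

def toy_denoise_step(image, target, strength=0.1):
    # Hypothetical denoiser: nudge the noisy image a small step toward the target.
    return image + strength * (target - image)

rng = np.random.default_rng(seed=0)
target = rng.random((8, 8))        # stand-in for the desired image
image = rng.normal(size=(8, 8))    # begin with a completely arbitrary set of pixels

for _ in range(50):                # iteratively refine the result
    image = toy_denoise_step(image, target)

print(np.abs(image - target).mean())  # the gap shrinks with each refinement step
```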

Progress is coming fast; DALL-E 2 produces images four times as detailed as those from the original DALL-E, which was released just last year. The announcement of DALL-E 2 this year drew a lot of attention, and for good reason.

It's cutting-edge tech that might revolutionize the lives of artists, graphic designers, photographers, and anyone else who earns a living with visual media. It also raises the question of what purpose all this AI-generated art will serve, and whether we should be concerned about an uptick in synthetic propaganda, hyper-realistic deepfakes, or even nonconsensual erotica.

However, AI has also made significant advances in fields outside of the arts.

There has been a wave of progress in many areas of AI research over the past 10 years, fueled by the rise of techniques like deep learning and the advent of specialized hardware for running huge, computationally intensive AI models.

Some AI researchers have begun to refer to this period as a “golden decade.” Much of that progress has been gradual and steady, with larger models, more data, and more processing power yielding marginally better results.

Occasionally, however, it seems like a switch has been flipped, and hitherto unthinkable feats of magic are now within reach.

AlphaGo, a deep learning model developed by Google DeepMind that could defeat the world's top humans at the board game Go, was the most incredible story in the AI world just five years ago.

While it was impressive that an AI could be trained to win at Go, that isn't the kind of breakthrough that touches most people's daily lives.

However, DeepMind's AlphaFold, an AI system descended from its Go-playing work, accomplished something genuinely revolutionary last year. Using a deep neural network trained to predict a protein's three-dimensional shape from its one-dimensional amino acid sequence, it effectively solved the “protein-folding problem,” which had stumped molecular biologists for decades.

This summer, DeepMind reported that its AI, AlphaFold, had produced predictions for nearly all of the 200 million proteins known to exist, yielding a goldmine of information that will aid the development of new medications and vaccinations for years to come.

The journal Science recognized AlphaFold's importance last year, naming it the year's biggest scientific breakthrough.

Or Look At What’s Happening With AI-Generated Text

Not very long ago, AI chatbots struggled with even the most basic conversations, let alone more complex language-based tasks.

OpenAI's GPT-3 is one example of a large language model now being used in non-academic contexts, including scriptwriting, email marketing, and game design. (Last year, I even used GPT-3 to write a book review for this publication; had I not warned my editors, it's unlikely they would have noticed.)
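
For readers curious what “using GPT-3” looks like in practice, here is a minimal sketch with OpenAI's Python library as it worked around the time of writing. The model name, prompt, and placeholder API key are illustrative, and the library's interface has changed since.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # an OpenAI account and API key are required

# Ask a GPT-3 model to draft a short piece of marketing copy.
response = openai.Completion.create(
    model="text-davinci-002",  # one of the GPT-3 models available at the time
    prompt="Write a two-sentence marketing email announcing a new coffee subscription.",
    max_tokens=80,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```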

More than a million people have signed up to use GitHub's Copilot since it launched last year to help programmers save time by automatically completing their code snippets. It is just one example of how AI is finding its way into everyday programming work.
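
To give a sense of what that assistance looks like, here is a hypothetical illustration: a programmer types only the comment and the function signature, and a tool like Copilot suggests a body along these lines (the completion shown is my own sketch, not actual Copilot output).

```python
# The programmer writes the comment and signature; the assistant proposes the body.

# Check whether a string is a palindrome, ignoring case, spaces, and punctuation.
def is_palindrome(text: str) -> bool:
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]


print(is_palindrome("A man, a plan, a canal: Panama"))  # True
```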

Then there's Google's LaMDA, an AI model that made news a few months ago when a Google engineer named Blake Lemoine was fired after claiming that LaMDA had become sentient.

Google denied Lemoine's claims, and many AI researchers have disputed his conclusions. But set aside the question of sentience, and a weaker version of his argument, that LaMDA and other cutting-edge language models are becoming uncannily good at humanlike text conversation, would be much harder to dispute.

Indeed, many professionals would tell you that AI is improving in many ways today, including in areas like language and reasoning, where humans were once thought to have the upper hand.

Racist chatbots and malfunctioning autonomous vehicles are only two examples of the terrible, broken AI still present today. And even when AI advances rapidly, it can take some time for those advancements to trickle down into consumer-facing goods and services. Recent advancements in artificial intelligence at Google or OpenAI won't immediately translate to novel-writing Roombas.

The debate in Silicon Valley is changing, though, because the top AI systems are now improving at such a fast rate. Many experts now believe that massive changes are just around the corner, for better or worse, and fewer confidently predict that we have years or even decades to prepare for a wave of AI that will drastically alter our world.

Two years ago, Ajeya Cotra, a senior analyst with Open Philanthropy who studies AI risk, estimated a 15% chance of “transformational AI” emerging by 2036. Cotra and others define “transformational AI” as AI good enough to usher in large-scale economic and societal changes, such as eliminating most white-collar knowledge jobs.

However, citing the rapid development of technologies like GPT-3, Cotra has raised that probability to 35% in a recent post.

“AI systems can go from charming and useless toys to solid products in a short amount of time,” Cotra told me. It would be wise to take seriously the possibility that AI will soon alter society in unsettling ways.

To be fair, plenty of doubters argue that advances in AI have been exaggerated. They will tell you that artificial intelligence has a long way to go before it can achieve sentience or replace humans in most occupations.

They argue that current AI models such as GPT-3 and LaMDA are nothing more than glorified parrots that merely regurgitate their training data and that real AGI (artificial general intelligence) that can “think for itself” is still decades away.

Some tech industry people are optimistic about AI's future and think it's progressing quickly. They argue that accelerating the development of AI will provide humanity with new means to treat illness, expand into space, and prevent ecological catastrophes.

You don't have to take a side in this debate on my account. My point is simply that we should all be paying closer attention to the real, concrete developments fueling it.

After all, AI that works doesn't stay in a lab; it gets baked into the apps we use every day. Facebook's news feed ranking algorithm, YouTube's recommendations, and TikTok's “For You” page are all examples.

It finds its way into military hardware and into the educational software children use. AI already informs decisions in many contexts, from vetting loan applicants to assisting criminal investigations.

It's simple to see how systems like GPT-3, LaMDA, and DALL-E 2 could become a significant force in society, even if the naysayers are right and AI doesn't acquire human-level consciousness for many years.

Before long, much of the media we consume online, including photographs, videos, and text, may be produced by computers. Our online interactions may grow stranger and more stressful as we struggle to tell whether we're talking with a human or a convincing machine.

Tech-savvy propagandists could use the technology to churn out personalized disinformation at scale, influencing the democratic process in ways we can't predict.

In AI circles, it has become a cliché to say that “we need to have a societal conversation about AI risk.” There are already plenty of Davos panels, TED talks, think tanks, and AI ethics committees sketching out contingency plans for a dystopian future.

What's lacking is a standardized, value-free language for discussing what modern AI systems are capable of and the unique dangers and opportunities that come with those capabilities.

I Think Three Things Could Help Here

First, regulators and politicians need to get up to speed.

Few public officials have direct experience with AI systems like GPT-3 or DALL-E 2, and even fewer have a firm comprehension of the rapid pace of progress being made at the AI frontier because of the relative novelty of these systems.

Some efforts are being made to close that gap (Stanford's Institute for Human-Centered Artificial Intelligence, for example, recently held a three-day “AI boot camp” for congressional staff members), but we need many more legislators and regulators to take an interest in the technology.

And no, I don't mean they should start spreading panic about the impending AI Armageddon in the vein of Andrew Yang. A small step forward would be reading The Alignment Problem by Brian Christian or learning the basics of a model like GPT-3.

Otherwise, we risk a replay of what happened with social media companies after the 2016 election, when Silicon Valley's power collided with Washington's ignorance, producing nothing but gridlock and contentious hearings.

Second

The world's Googles, Metas, and OpenAIs need to better communicate their work in artificial intelligence research without glossing over or downplaying the hazards involved.

Many of the most advanced AI models are developed behind closed doors, using proprietary data, and tested only internally. When information about them finally does reach the public, it is often either watered down by corporate PR or buried in impenetrable scientific papers.

In the short term, tech companies may find it prudent to downplay AI's risks to avoid a backlash. But in the long term, that strategy won't serve them if they come to be seen as pursuing a hidden AI agenda that conflicts with the public interest.

And if these businesses don't want to be transparent, AI engineers should bypass their superiors and speak with lawmakers and reporters on their own time.

Third

The media needs to do a better job of explaining AI developments to laypeople. Journalists, myself included, too often lean on old sci-fi shorthand when trying to describe what's happening in AI.

Large language models are sometimes compared to Skynet and HAL 9000, and exciting advances in machine learning are often reduced to scare tactics like “the robots are coming!” in the name of grabbing headlines.

Sometimes we betray our lack of understanding by illustrating stories about software-based AI models with photos of hardware-based factory robots, which makes about as much sense as putting a photo of a BMW on an article about bicycles.

And too often we think about AI only in terms of how it will affect us personally (Will it take my job? Is it better than I am at Skill X or Task Y?) rather than trying to grasp the full scope of how the technology is developing and what its consequences might be.

For my part, I plan to keep describing AI in all its messiness and complexity, without Hollywood clichés or exaggeration. But all of us need to start adjusting our mental models to make room for these remarkable new machines.