If AI were to become smarter than humans, could it pose a threat to our existence?

Decide for yourself.
Imagine a future where machines can think and act like humans. Think of it like a sci-fi movie.
Machines become smarter and stronger than humans, leading to catastrophic consequences.

Are we ready for a world with thinking robots?

My first exposure to the concept of AI safety came from reading Isaac Asimov’s books. The term “robotics” was first introduced in one of his short stories in 1941. As he famously wrote:

The Three Laws of Robotics:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

AI for Good

AI is advancing quickly, and most people are not even worried about its implications. That sh*t worries me, because if we don’t make sure AI is used safely, we may have a sci-fi-like dystopia on our hands.
The term “artificial intelligence” was coined by John McCarthy in 1956 at a workshop at Dartmouth College.
The idea quickly gained traction, and scientists began to experiment with machines to test if they could mimic human thought processes.

AI, or artificial intelligence, is the use of computers to perform tasks that would normally require human intelligence, like decision-making, problem-solving, and language translation. AGI, or artificial general intelligence, goes further: a system that could match or exceed human performance across virtually all such tasks.

But here’s the catch: as AGI continues to advance, the risk of it turning against humanity grows.

The question is, are we ready for the dangers that AI may bring?

Can we program moral values?

I mean, morals are so relative and unpredictable. Trying to encode them into AI could raise many ethical issues.

On the one hand, AI can provide amazing results, but on the other hand, what about:

BIAS. What if AI is biased?

One risk is human bias entering the AI algorithm. Given that most current development happens in the private sector, this becomes even more serious.

SECURITY. Is AI fully secure?

What is it that we don’t know today about AI security? There must be some way of ensuring the technology doesn’t get into the hands of bad actors.

DECEPTION. Could AI turn deceptive?

Some projects that start with noble intentions bow to corporate pressure to make money, ideals be damned. Will AI deceive to make money?

MALICE. Will AI turn malicious?

Abusing technology isn’t new, but with AI, the scale is huge.
One question to never stop asking is how to ensure AI doesn’t become malicious by intent.

UNREGULATED. Isn’t AI too unregulated?

Just like with any completely new technology, we aren’t sure of all the risks involved in artificial intelligence.
The challenge is to regulate without stifling innovation.

POLITICAL. How about victimization?

All ruling powers are keen to vanquish opponents, but under authoritarian governments, the risk of AI being abused to victimize opponents is significantly higher.

One example of AI is GPT-3 (Generative Pre-trained Transformer 3). Most AI researchers I’ve heard talk about GPT-3 say, “It’s not doing anything intelligent.”

Come on! It is a language-processing model that can answer complex questions and write articles, poetry, and even computer code, posing both opportunities and potential risks for humanity.

They say this because they understand how a transformer model works, develop AI every day, and are comfortable with their concept of it. If I could quote the movie Outside the Wire, “People are stupid, habitual, and lazy.” Perhaps it wasn’t the best movie, but essentially, if we wanted “human” AI, we would have to go out of our way to create it and make it self-limiting and stupid on purpose.

So, at what point do we conclude that AI is intelligent?

Even when it can best us at everything we do, as long as we use AI for some utility, people won’t recognize it as intelligent; I doubt most will even consider the question. To most of us, AI is just a smart tool that learns from data and makes decisions on its own, helping with tasks like driving cars or sorting photos. There are dangers in advanced AI, but people won’t accept that now.
Or maybe never, because an AI will only resemble human intelligence if we go out of our way to make it specifically human-like. Which would seem to have zero utility (and would likely get us in trouble) in almost every use we have for AI.

So, after that long-winded rant, my point is that we are stupid, habitual, and lazy (which necessarily includes being ignorant).

Think about this: no human can grasp what the trained structure of 100 billion parameters represents after months of training on humanity’s entire recorded knowledge.

I’ll leave you with that thought.

“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” -Stephen Hawking

