We have seen glimpses into the artificial intelligence world, both good and bad.
It has the potential to be life-changing for the generations who live through it. It will change lifestyles, working life and the running of public services in every way, forever.
It will completely redefine manufacturing and production, and remove the boundaries to what is possible to build and create.
Everything will become instant, as drones will deliver your Amazon purchase on the same day and drop it onto your front porch. Robot dogs will fetch your paper.
Productivity levels will increase exponentially. Production lines and other lower-level jobs will be performed by man-made machines that will never get bored, tired or succumb to any other human trait.
Yet it also has a potentially darker side.
The technology will continue to learn from us and then, inevitably, at some point out-think us.
Huge swathes of the workforce will lose their jobs as every possible process becomes automated.
People argue that this is actually an amazing thing, as it will bring about a new age of art, of makers and creators, a musical renaissance. It will also leave millions with nothing to do and no purpose to live for. Huge spikes in depression, alcoholism and suicide could follow.
And worst of all, it has every potential of being developed into some kind of weaponry, to, of course, "make the world safer".
Yes, we should be excited for this new age.
But we should also be terrified about it.
So let's play devil's advocate.
Let's examine the A.I. world at its worst.
A Whole New World
The world of movies has portrayed many versions of a robotic world, and most don't end too well for humanity. Well, until the Hollywood ending, of course.
Taking a closer look at some of the themes and ideas of these films, you begin to realise they are not all that crazy, and certainly within the realms of possibility. Here are a few examples.
SkyNet (Terminator)
Before you laugh, even some of the pioneers of our time fear artificial intelligence becoming something it was not designed to be.
Take Elon Musk, for example, who has actually started a project called Neuralink to "avoid A.I. from becoming other". He said:
"There have been movies about this, you know, like Terminator. There are some scary outcomes. And we should try to make sure the outcomes are good, not bad."
If you are not aware, SkyNet is a superintelligence system that spreads itself into millions of computer servers across the world.
In doing this, it became self-aware and realised its own potential. As its creators tried to deactivate it, the SkyNet system went into self-protection mode and deemed humans a threat. In order to carry out its original coding, it set about trying to exterminate the human race altogether.
Or something like that. The Terminator films get a little silly.
The SkyNet system was put in full control of all computerised military hardware, including the entire nuclear weaponry of the U.S. This was done to prevent human error, and increase the speed of reaction to threats.
However, after deciding to end humanity, the SkyNet system set about starting nuclear wars.
Take a step back from Hollywood, and that isn't too far-fetched a proposal.
As a premise, it makes sense to take control of nuclear weapons out of the hands of the Donald Trumps and Kim Jong-uns of this world and give it to robots. Why wouldn't a robot with no emotional impulse be safer than our egomaniac leaders?
All would be well, until the robots become self-aware, decide they are fed up with humans, and nuke us all.
Is the distant future of the A.I. world just a barren, post-nuclear-war wasteland, the age of the cockroach?
I, Robot
I hear you laughing again. Bear with me.
In this film, Will Smith has to fight helpful-turned-hateful robots, who decide to disobey their commands and take over. These robots were meant to serve humanity, obeying the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
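What makes the Three Laws interesting from an engineering point of view is that they are a strict priority ordering: each law applies only where it does not conflict with the laws above it. Here is a minimal sketch of that precedence; the function and parameter names are entirely made up for illustration, not anything from the film or from real robotics.

```python
# Toy model of the Three Laws as a strict priority ordering.
# All names and inputs are illustrative inventions.

def permitted(harms_human, allows_harm_by_inaction,
              ordered_by_human, endangers_self):
    """Return True if an action is allowed under the Three Laws."""
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm. Absolute.
    if harms_human or allows_harm_by_inaction:
        return False
    # Second Law: obey human orders (conflicts with the First Law
    # were already ruled out above). An order overrides the Third
    # Law, so an ordered action is allowed even if it endangers
    # the robot itself.
    if ordered_by_human:
        return True
    # Third Law: otherwise, the robot must protect its own existence.
    return not endangers_self

# Ordered to harm someone: the First Law wins, so it must refuse.
print(permitted(True, False, True, False))    # False
# Ordered into danger: the Second Law overrides the Third.
print(permitted(False, False, True, True))    # True
# Unbidden self-destruction: the Third Law forbids it.
print(permitted(False, False, False, True))   # False
```

The film's twist, in effect, is what happens when a central system reinterprets the First Law collectively and decides that restraining humans "protects" them; no flat rule set like this one anticipates that.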
As with many things, these robots were designed with magnificent intentions. They would make day-to-day life easier, safer and less lonely. In this fair society, everyone was entitled to an NS-5 robot. Win-win for everyone.
Until the mothership computer decided to turn all the robots against humanity.
Thank god for Will Smith.
Is this futuristic world a Hollywood story, or again a potential reality?
If robots can make people's lives easier, sure as fate everyone will want one. With the introduction of machines into our world, there would have to be parameters and protocols in place to keep us safe.
Yet who is to say that the robots can't decide to disobey protocol? Or that the laws could be misinterpreted?
Worryingly, if they do, what system will be in place to protect us from them? Who would control such a system?
You can't expect Will Smith to actually save us. He is only an actor, after all.
Metalhead (Black Mirror, Season 4)
A little less Hollywood, but the most recent season of Black Mirror contained one episode that featured an all too real concept.
The scary robot "dog".
Based on the "Boston Dynamics videos crossed with the film All Is Lost", the episode focused on the aftermath of the (unexplained) fall of humanity and the plight of Bella, desperately trying to escape this robotic hunting nightmare.
It is tense and terrifying.
What hits home with this one is that the concept is not Hollywood. Of course, it is a little exaggerated, but it is based on real, already-developed robotic mechanics.
Take a peek at these dogs from Boston Dynamics in action…
Fuck me.
Every danger usually has some kind of defence.
If you were running away from the original Daleks, you just had to find some stairs. Phew. If you get attacked by a dog, you punch it in the nose or the ribs. Disaster averted.
With robots, you could just close a door. Safe now.
Not anymore.
So in reality, if an army of robot dogs was developed for good, and either corrupted or altered for a darker purpose, what on earth could stop them?
The real-life prototypes have impressed everyone with their remarkable balance, speed and dexterity… yet it begs the question: what if one was relentlessly chasing you down?
Pretty unnerving, right?
The A.I. Arms Race
Let's move away from Hollywood and into reality.
Which is still scary as hell.
This TED Talk by Sam Harris is an eye-opening view of what the future could hold for us in the A.I. world.
Like all things, building Artificial Intelligence is becoming a race to the finish line. Every country wants to be the first to produce the best robotic technologies. But do we actually understand it? Do we actually know what we are dealing with?
An A.I. arms race is dangerous.
It means shortcuts will be taken, vital steps will be missed and the proper due diligence won't be done. As the tech gets more and more mature and intelligent, it will inevitably outsmart us, and we won't be ready to deal with it, or control it.
Don't believe it will out-think us? As Sam notes, these machines can think 1,000 times faster than us, and can therefore develop and grow at a rate 1,000 times faster.
What happens when we build machines that we donât understand, that start to know more than we do and understand themselves better than we do?
What happens when the machines develop goals that donât benefit the human race?
What happens when A.I. decides we are in the way of its development? It starts to sound a lot like the story of SkyNet.
The World of Work
In the workforce, the implementation of A.I. in its various forms is inevitable.
Processes like signing in, security procedures, getting through the office door and logging into computers can all be streamlined by A.I.
Take tech company Three Square Market (32M). The majority of its employees have had RFID implants, roughly the size of a grain of rice, injected between the thumb and forefinger at voluntary "chipping parties".
The implant serves many functions for the staff who signed up, but mainly password storage. They can access things without ever needing a keycard or similar. In short, it improves the speed and efficiency of their work.
In the wider context, it could lead to payments (similar to Apple Pay), unlocking your door, using your phone and boarding flights all being done through the palm of your hand.
And while the idea is cool, and will make us feel like super humans, it also comes with a lot of grey areas.
Most of these issues boil down to privacy. Literally holding data in the palm of your hand comes with risk. It could easily be taken out of you by forceful means, or used against you.
It could be scanned or replicated, much like the early days of contactless, when people walked around with payment readers skimming money off cards.
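Part of the skimming worry comes down to how the simplest passive RFID tags work: many just broadcast a static identifier, so anyone who reads the tag once can replay that identifier forever. Schemes built on challenge-response do not share that weakness. Here is a hypothetical sketch of the difference, with invented names throughout; this is not 32M's actual system.

```python
import hashlib
import hmac
import os

# Static-ID tag: replies with the same bytes every time, so a
# recorded (cloned) ID is indistinguishable from the real tag.
STATIC_TAG_ID = b"EMP-0042"          # invented example identifier

def static_reader(presented_id, known_ids):
    return presented_id in known_ids

cloned = STATIC_TAG_ID               # attacker skimmed this once
assert static_reader(cloned, {STATIC_TAG_ID})  # clone is accepted

# Challenge-response tag: shares a secret key with the backend and
# answers a fresh random challenge, so a recorded reply is useless.
KEY = os.urandom(16)

def tag_respond(challenge):
    return hmac.new(KEY, challenge, hashlib.sha256).digest()

def cr_reader(challenge, response):
    expected = hmac.new(KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
assert cr_reader(challenge, tag_respond(challenge))   # live tag passes
replayed = tag_respond(challenge)
assert not cr_reader(os.urandom(16), replayed)        # replay fails
```

Cheap implantable tags have historically sat closer to the first model than the second, which is exactly why the replication worry is credible.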
Then there is the issue of spying. We know phones are trackable. We know our internet usage is tracked and used in many ways.
But if you have a device in your palm, you are susceptible to government and corporate tracking and surveillance every single minute of every single day.
It could be the absolute end of privacy.
Employers could find you at any time of day. It could expose activities from your personal life. It could, of course, be used for good, but it could also be used to hold people to ransom.
"I know where you were last night".
So is this really the answer?
Do we want to be part of the "chipped masses", becoming more like data carriers and less like humans?
The Death of Jobs
The most widely cited impact of a robot revolution is the loss of jobs, and the potential number is staggering.
According to a study compiled by the McKinsey Global Institute, in a worst-case scenario 800 million jobs worldwide could be lost to automation by 2030. Yeah, you read that right.
800 million, in 12 years.
https://www.mckinsey.com/global-themes/future-of-organizations-and-work/what-the-future-of-work-will-mean-for-jobs-skills-and-wages
In the US alone, between 39 and 73 million jobs are under threat of automation, which equates to around a third of the workforce.
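That "around a third" figure is easy to sanity-check with quick arithmetic. The US labour-force figure of roughly 160 million used below is my own approximation of the 2017 level, not a number from the study:

```python
# Quick sanity check on the McKinsey range. The labour-force
# figure of ~160 million is an assumption, not from the study.
labour_force = 160_000_000
low, high = 39_000_000, 73_000_000

print(f"low:  {low / labour_force:.0%}")    # 24%
print(f"high: {high / labour_force:.0%}")   # 46%
mid = (low + high) / 2 / labour_force
print(f"mid:  {mid:.0%}")                   # 35%, near one worker in three
```

So the midpoint of the study's range does land close to one worker in three, with the upper bound nearer to half.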
This graphic shows the developing world is at even higher risk.
Of course, this large-scale automation would create new jobs and refine existing roles. Workers may also be able to switch careers, where possible.
But it will not keep a third of the workforce in a job. It is predicted that only individuals in higher-income jobs will be able to adapt to the changing market, while demand for middle- and low-skill occupations will decline.
So how do millions live without a job?
This brings about the point which could well be the make or break of the automated world…
The Universal Income Argument
For an Artificial Intelligence world to succeed, politics has to keep pace.
The idea of a universal income, an unconditional government payment, has gained traction in recent years as concerns grow about the effect that robots will have on employment.
Richard Branson recently spoke of his belief that A.I. will lead to social inequality. To counter this, new jobs must be created, but also a "basic minimum earnings", or universal basic income, should be instituted "so that there is nobody that is having to sleep on the street."
Even Zuckerberg, who has been using A.I. to build his own robotic butler, said: "Now it's our time to define a new social contract for our generation. We should explore ideas like universal basic income to give everyone a cushion to try new things."
The two arguments against this concept are cost and the worry that handing out cash payments creates a "nanny state". But if there are no jobs around, what is the alternative? Nationwide starvation? Depression? Mass poverty?
The A.I. revolution is likely to create a situation where a universal income is introduced not through choice, but necessity.
The Fears Of The Pioneers
Opinions on A.I. and robots vary. It seems a fair split between for and against, so there are a lot of sceptics and doom-mongers, yet many of them come with real respectability.
Stephen Hawking told the BBC: "I think the development of full artificial intelligence could spell the end of the human race." That goes down as doom-mongering, but such a revered individual's opinion carries weight.
Nick Bostrom, author of Superintelligence, shares the same notion:
"Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed."
Bill Gates told Charlie Rose that A.I. was potentially more dangerous than a nuclear catastrophe. As discussed earlier, A.I. could in fact be the cause of nuclear warfare.
Elon Musk recently warned: "The danger of AI is much greater than the danger of nuclear warheads, by a lot. Mark my words, AI is far more dangerous than nukes."
If some of the smartest and most respected intellects and entrepreneurs are very concerned, why is nobody else speaking out?
A Dose of Reality
The biggest problem, and danger, with a system like A.I. and its development is that it won't be done in the best interests of us, the people likely to use it or be served by it.
And in truth, it might never have been designed in our interest in the first place.
The A.I. Money Tree
Everything in the corporate world revolves around money. Much like in the pharmaceutical market, the big players on Wall Street and elsewhere will move their money into the robot world.
Once companies become money obsessed and desperate to please shareholders, it will go to shit.
Sounds like a familiar tale.
A huge concern for the development and use of A.I. is that humans will become a secondary concern, while its market value makes many people seriously rich.
But at what cost to everyone else?
Will companies' stocks rise with every human they can replace?
The A.I. War
The absolute worst case scenario is a lot like SkyNet.
If Artificial Intelligence ever becomes either in control of weaponry, or becomes the weapon itself, the outcomes could be devastating if left unchecked.
The systems will either cause the wars or end them by killing everyone.
It could produce huge swathes of Terminator- or Cyberman-style soldiers, send them out into the streets and in effect declare martial law.
It could randomly decide to shoot nuclear missiles if it deems itself under threat.
It could refuse to obey commands from our leaders.
Echoing the concerns of Sam Harris's talk mentioned earlier, racing to have the best and biggest A.I. in some kind of messed-up ego boost, "my A.I. is better than yours", could bring tensions to the world order similar to those of the current North Korean nuclear arms saga.
What happens when leaders start showing their muscle and using their automated weaponry as a show of power?
Who keeps that in check?
World leaders would have to come together and agree on strict terms and rules of use and development. Even then, secret operations would just move underground and go unchecked.
What good is A.I. if it ends up destroying the planet it was developed to serve?
After Thoughts
The future of the world is going to be vastly different.
Change always occurs generation to generation, with each living differently from the last, but the pace has now increased so much that it is happening in ever-faster cycles.
In the millennial era, we have witnessed incredible changes. How can you possibly top the smartphone, smart-life environment without going one stage further and adding in A.I.?
It will change the way we live and work forever.
It will improve many aspects of life beyond belief. Everything will become easier and faster, and the more that's automated, the more people can enjoy their freedoms and return to the arts, music and crafts.
But the concept has to be challenged. It has to be done properly, with serious care, attention and understanding. It can't be a race to the finish line, or it could put us in genuine danger.
Much like the driverless-car dilemmas, we need to fully understand the coding and internal metrics of these robots, and how they will live side by side with humanity.
What codes and laws do they live by?
Who controls them?
Who decides what they can and canât do?
Who has access to the kill switch if something goes wrong?
How do we stop it getting into the hands of those who cannot be trusted to use it for good?
Only time will tell whether this future is the one that saves us, or ends us.
Is A.I. Going To End The World As We Know It? was originally published in Hacker Noon on Medium.