There are some tasks, chores, and activities that people are just bad at. Repetition can sometimes be a mind-numbing drag.
In the near future, robots will help with more of these tasks. Driving will become optional. Warehouse and manufacturing work will become rarer for humans. Robots will also assist in fields like education.
These changes will be what people make of them. Humans are in the process of developing the ultimate productivity hack, and the outcome will largely be decided by politics and regulation.
These advancements raise a great many questions, and a conversation that few people are having. We’re starting to talk about workplace automation, but the conversation goes deeper than that.
Humans are the dominant species on the planet, but not because we are bigger, stronger, or faster than any other animal. It’s because we’re smarter.
What happens when we are no longer the smartest “species” on the planet? Scientists and companies are competing to be the first to build a superintelligent machine. What precautions are being taken to ensure it’s done safely?
Some Questions We All Need to Be Asking Ourselves About Superintelligence
Our best computers still freeze on a somewhat regular basis. What happens if your autonomous vehicle freezes at 80 MPH on the Interstate? What if a robot tasked with performing surgery freezes? Robots will be trusted with human life. What happens when the robots make a mistake?
What happens when a robot doesn’t interpret its end goal or purpose the way that humans intended?
In his book Superintelligence, Nick Bostrom describes a paper clip maximizer as a form of artificial general intelligence: a machine built with a rough equivalent of human intelligence and focused on one specific task – in this case, maximizing the number of paper clips. The maximizer can also increase its own intelligence in service of that task. At first it might simply buy vast quantities of paper clips. Eventually it might convert other matter, including human bodies, into paper clips. The example is chosen to highlight how something frivolous could go completely wrong if we fail to instill a value system into these machines.
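The failure mode is easy to state in code. Here is a minimal, purely illustrative Python sketch (the World class and its fields are invented for this example, not taken from Bostrom): the agent’s objective counts paper clips and nothing else, so nothing in the program ever tells it to stop.

```python
# Toy illustration of a single-objective maximizer. The objective contains
# no term for anything else humans value, so the agent consumes everything.

class World:
    def __init__(self):
        self.other_matter = 100   # everything that is not yet a paper clip
        self.paper_clips = 0

    def convert_to_clips(self):
        # The agent never asks what the raw material used to be.
        self.other_matter -= 1
        self.paper_clips += 1

def paper_clip_maximizer(world: World, steps: int) -> None:
    """Maximize paper clips; nothing else appears in the objective."""
    for _ in range(steps):
        if world.other_matter > 0:
            world.convert_to_clips()

world = World()
paper_clip_maximizer(world, steps=1_000)
print(world.paper_clips, world.other_matter)  # 100 0 -- all matter converted
```

The point is not that this code is dangerous; it’s that the objective has no term for anything we care about besides the count, which is exactly the gap a value system would have to fill.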
How do you instill a value system into a machine? How else would you make sure that the context of the command or the job is what the human intended for?
Some scientists and thinkers on the AI front believe that superintelligent machines could take over the world before humans even understand what has happened. Others believe the solution will be as simple as unplugging the machine. If these opinions mark the two ends of a spectrum, where does the truth actually fall? Facebook recently took some chatbots offline after they began communicating in a shorthand their engineers had not intended. Will unplugging remain a viable option in every outcome?
Will the “box” approach remain a reliable method for testing artificial intelligence? The idea is to develop a new AI inside a contained environment, so that whatever it does stays contained. If the AI is smarter than humans, how long will that containment hold?
Again, AI can serve a noble purpose and be of great benefit to humans. But a great deal of care needs to go into the research and development process; a rush to be first to the technology could come at the expense of safety. Conversations on this topic need to take place among scientists, engineers, programmers, business leaders, journalists, ethicists, and the general public.
So much rides on the results.
Photo credit: Getty Images