
Keeping AI safe

Science fiction stories about robots (artificial intelligence, or AI) destroying humanity have made millions for the entertainment industry. Comics, movies, games, and even science have used artificial intelligence in many different ways. Still, we can’t stop wondering whether these sci-fi stories could become reality and whether artificial intelligence might indeed start to think on its own and become destructive to the human race.

Before answering these questions, we should say a few words about how far artificial intelligence has come. AI is evolving quickly, producing technologies that imitate human behavior. For instance, Mark Zuckerberg has created his own personal smart home assistant, designed to help him run his house. Called JARVIS (Just A Rather Very Intelligent System), it was inspired by “Iron Man”. Also, researchers from the Technological University of Singapore have developed an AI receptionist that at first sight looks like a human and can display simple social interactions such as making eye contact, shaking hands, or responding to uncomplicated requests.

A team from Virginia Tech developed a machine learning algorithm that can identify and analyze the funny parts of a scene in an image. The technology then attempts to make the picture unfunny, a goal it achieved in 95 percent of cases. This is a big step toward unlocking emotional intelligence, which would enhance AI’s ability to relate to humans.

The answer might be yes

Even if AI can boost productivity and handle repetitive, mechanical office tasks, freeing employees to work on more complex problems, there is a downside, and that is security. One day, online criminals may use AI algorithms to discover new vulnerabilities and build systems that attack automatically. Unlike a human, AI can do all of this with machine efficiency, making today’s time-consuming hacks a thing of the past.

Worst-case scenarios

Researchers agree that AI is unlikely to exhibit human emotions like hate or love, so there is no reason to fear that it will become intentionally malevolent. Instead, they think two scenarios are the most likely:

The AI is programmed to do something devastating: artificial intelligence systems can become autonomous weapons when they are programmed to harm. In the wrong hands, such weapons could easily cause mass destruction. To avoid simply being shut down by humans, these systems would likely be designed to be extremely difficult to turn off.

The AI is programmed to do something good, but it develops a destructive method to achieve the goal: this scenario can happen whenever people fail to align the AI’s goals with their own. For instance, if you ask an intelligent car to take you to the airport as fast as possible, it may break traffic laws to get there, as the sketch below illustrates.
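To make the alignment point concrete, here is a rough, hypothetical sketch in Python. The routes, timings, and scoring functions are all invented for illustration, not taken from any real system: when the objective only rewards speed, the optimizer happily picks the rule-breaking route; encoding “don’t break the law” into the goal changes the choice.

    # Toy sketch of goal misalignment (all routes and numbers are made up for illustration).
    routes = [
        {"name": "highway, legal speed", "minutes": 35, "breaks_law": False},
        {"name": "highway, 150 km/h",    "minutes": 22, "breaks_law": True},
        {"name": "back roads",           "minutes": 50, "breaks_law": False},
    ]

    def misaligned_score(route):
        # Goal as stated: "as fast as possible" -- nothing else matters.
        return -route["minutes"]

    def aligned_score(route):
        # Goal as intended: fast, but never at the cost of breaking the law.
        if route["breaks_law"]:
            return float("-inf")
        return -route["minutes"]

    print(max(routes, key=misaligned_score)["name"])  # picks the illegal 150 km/h route
    print(max(routes, key=aligned_score)["name"])     # picks "highway, legal speed"

The point is not the code itself but the pattern: a system pursues exactly the goal it is given, so the constraints we actually care about have to be written into that goal.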

What are we doing about this?

This is a serious problem that could do a lot of harm to companies and ordinary people alike. Five influential businesses are trying to create a set of standards around the development of artificial intelligence. While most researchers and science-fiction enthusiasts have focused on what threats AI could pose to humans, scientists at Google, Amazon, Facebook, IBM, and Microsoft are focusing on something more tangible: the impact of AI on jobs, warfare, and transportation.

The specific actions this industry group will take aren’t well defined yet, but the underlying intention is quite clear: to make sure that AI research focuses on benefiting people, not on intentionally hurting them.

Moreover, there are discussions about AI all over the world. The Asilomar Conference is one of the places where AI researchers and leaders in economics, law, ethics, and philosophy dedicate five days to debating AI topics and concerns. Year after year they have pointed out the risks of AI development, and they have created a set of 23 AI principles that should guide researchers in the future.

Making AI more secure

Of course, specialists have been discussing this problem, which is why they have also come up with some solutions for AI’s security gaps. Consider the following:

  • Secure the code: it should be designed to prevent unauthorized access. Machine learning can be adapted, so the code can be written to reduce risk;
  • Secure the environment: by using a secure infrastructure where data and access are locked down, the system can be developed more safely;
  • Understand the danger: comprehending the possible threats enables people to design and implement changes that secure the application;
  • Anticipate and detect problems: the steps above allow you to monitor activity, then find and eliminate problems;
  • Encryption: the ability to encrypt data at rest and in motion keeps applications more secure (see the sketch after this list).
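As a minimal sketch of the encryption point, the snippet below encrypts a record before it is stored and decrypts it on read. It assumes the third-party Python `cryptography` package; the record contents and variable names are invented for illustration. Encryption in motion is typically handled at the transport layer (TLS) rather than in application code like this.

    # Minimal example of encrypting data at rest with symmetric encryption.
    # Assumes the third-party "cryptography" package: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice, load this from a secrets manager, not the source
    fernet = Fernet(key)

    record = b"user-id=42;score=0.97"  # hypothetical application data
    stored = fernet.encrypt(record)    # what actually gets written to disk or a database

    # Later, on read:
    assert fernet.decrypt(stored) == record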

Is artificial intelligence safe? Could it become a liability? Well, anything is possible, but we do have the necessary tools, systems, and human intelligence to make AI work for us and not against us. If you have any thoughts on the subject, please share them!

Photo source: pexels.com
