Is artificial intelligence dangerous?

Many prominent figures, including Stephen Hawking, Bill Gates and Elon Musk, have repeatedly warned that the further development of artificial intelligence carries many potential risks. Exponential growth has produced extremely advanced algorithms much earlier than expected. At the same time, the technology penetrates ever deeper into our lives, becoming responsible for the stable operation of many applications and even physical infrastructure. In this article, we look at the main global threats associated with advancing AI.

Under

Today, developers have managed to get several artificial intelligence systems to interact with one another, but so far only for narrow tasks such as face recognition, natural language processing or online search. Ultimately, specialists in the field plan to move toward fully autonomous AI whose algorithms can cope with any intellectual task a human can perform, and most likely outperform us at all of them.

In one of his comments, Elon Musk noted the incredibly rapid pace of AI development in the broad sense of the term. According to him, those who are not in contact with the leading developers of machine learning systems and neural networks do not even realize that progress in this area is close to exponential. Therefore, in his view, something truly dangerous is highly likely to happen within the next 5-10 years.

There are plenty of AI-based applications that make our daily lives more comfortable and efficient. Yet it is precisely safety that Elon Musk, Stephen Hawking, Bill Gates and others had in mind when they voiced their doubts about the technology's development. For example, if an AI is responsible for keeping the power grid running and we lose control over it, or an adversary hacks it, the result could be enormous damage.

Although humanity has not yet created machines superior to ourselves, it is worth addressing the complex and large-scale legal, political, social, financial and regulatory questions in advance in order to ensure our safety. Yet even in its current form, artificial intelligence can pose a potential danger.

One of the most dangerous threats is autonomous weapons systems that are programmed to kill and pose a real risk to life. The nuclear arms race will most likely be replaced by a global rivalry in the development of autonomous military systems. Last year, Russian President Vladimir Putin said:

«Artificial intelligence is the future, not only of Russia but of all mankind. It brings colossal opportunities, but also threats that are difficult to predict today. Whoever becomes the leader in this sphere will become the ruler of the world.»

Beyond the danger that advanced weapons will acquire a «mind of their own», great concern is caused by the possibility of autonomous military systems being controlled by an individual or a government that does not care about human lives. Once such weapons are unleashed, they will be extremely difficult to fight and neutralize.

Social media, with the help of their autonomous algorithms, are extremely effective at targeted marketing. They know who we are and what we like, and they understand remarkably well what we think. Investigations are still under way into the attempts to use the data of 50 million Facebook users to influence the outcome of the 2016 presidential election in the United States and the UK referendum on leaving the EU. If the accusations prove true, this illustrates the enormous potential of AI for social manipulation.

By spreading propaganda aimed at specific people identified in advance through algorithms and personal data, artificial intelligence can steer their moods and deliver information in whatever format they find most convincing.

It is now possible to track and analyze every step a user takes online, as well as how they spend their time on daily activities. Cameras are almost everywhere, and face recognition algorithms identify us with ease.

In effect, this resembles the social credit system in China, which is expected to assign each of its 1.4 billion citizens a personal score based on how they behave.

When Big Brother watches us and then makes decisions based on that information, it is not only an invasion of privacy but also begins to grow into social oppression.

AI is valuable to us primarily because of its performance and efficiency. However, if we formulate a task for an autonomous system imprecisely, its «optimal» execution of that task can have dangerous consequences.

For example, a person instructs
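A toy sketch of this failure mode (all task names, plans and numbers below are invented for illustration, not taken from the article): an agent told only to «get there as fast as possible» optimizes travel time alone, because nothing in the stated objective mentions safety or traffic rules.

```python
# Hypothetical illustration: the objective encodes only "minimize travel time",
# so the optimizer happily picks the plan with the worst side effects.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    travel_minutes: float
    traffic_rules_broken: int   # a side effect the stated objective ignores

PLANS = [
    Plan("obey every rule",      travel_minutes=45, traffic_rules_broken=0),
    Plan("speed on the highway", travel_minutes=30, traffic_rules_broken=3),
    Plan("cut across the park",  travel_minutes=22, traffic_rules_broken=7),
]

def objective(plan: Plan) -> float:
    # The task exactly as stated: faster is better, nothing else matters.
    return -plan.travel_minutes

best = max(PLANS, key=objective)
print(best.name)  # "cut across the park" -- fastest, and also the most reckless
```

Adding the missing constraints to the objective (for example, penalizing broken rules) changes the chosen plan, which is the whole point: the system does exactly what it is told, not what is meant.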

Since machines can collect, track and analyze vast amounts of information about people, they can potentially use that data against us. One can imagine an insurance company refusing to issue a policy because cameras repeatedly recorded the applicant talking on the phone while driving. Another example is an employer who rejects a candidate solely because of a low «social credit» score.

Any powerful technology can be used to cause harm. Today artificial intelligence is applied to many good purposes.

Unfortunately, as AI's capabilities expand, we will increasingly see it used for dangerous or malicious purposes. The exponential growth of the technology makes discussing the best path for its further development a priority, and the earlier that discussion begins, the less destructive the impact will be.

A solution can also be