AI won’t get the nuclear launch codes — but that’s not certain

Russia, China, the US and Israel are not keen on limiting the use of AI in weapons.

According to Wired, the US State Department has outlined its vision of how far artificial intelligence should be allowed to reach into the nuclear arsenals of the countries that possess such weapons. The State Department has called for a categorical refusal to let AI into an area as dangerous as nuclear weapons, but some states may disagree, writes Wired’s Will Knight.

The US State Department has drafted a political declaration on the responsible military use of AI and urged all nuclear powers not to introduce AI into an area so dangerous that it could lead to the death of all humanity. Only humans should make the final decision on the use of nuclear weapons; otherwise, the plot of the movie “Terminator” could come true.

Knight notes, however, that the document does not legally bind the United States or any other nuclear country. In his view, such countries should work out a comprehensive agreement that clearly sets standards for the use of artificial intelligence in weapons development and rules out introducing AI into the nuclear sphere altogether.

At the same time, Knight rightly points out that neural networks are very useful in situations where even a trained operator cannot react quickly enough. For example, air-defense crews are unlikely to cope on their own with an entire swarm of drones, switching instantly from target to target, but artificial intelligence can.

Organizations such as the International Committee of the Red Cross and Stop Killer Robots are pushing not only to limit the use of artificial intelligence in nuclear weapons but to ban its use in any weapon at all. Campaigners believe that autonomous weapons would kill far more people than accidental shootings by soldiers or stray missile strikes.

But countries such as the US, Russia, Israel, South Korea, and Australia are doing everything they can to block the activists’ attempts to push this idea through the UN. One reason is that these states see a growing role for artificial intelligence in their military programs and are already actively using such weapons.

Knight believes that if the development of AI in the military-industrial complex cannot be banned, it is essential for all countries to jointly draw up clear rules and mutual obligations setting standards for how such systems are ultimately deployed. Without this, no one can be sure that some country will not hand the “nuclear briefcase” to artificial intelligence tomorrow.

Focus previously wrote about why the first summit dedicated to killer robots ended in failure.

Source: Focus
