Publisher's Synopsis
In simple terms, artificial intelligence (AI) can become the basis of a scientific weapons platform. One day it may be applied to fly warplanes into other countries to attack enemies, or built into human-like machines that replace soldiers, carrying guns and other weapons into foreign territory. It is therefore possible that future combat aircraft will be flown not by human pilots but by AI-controlled automatic weapons, which could attack enemy countries with ease. If AI-controlled warplanes or AI machine soldiers are ever successfully invented, they will pose a horrifying threat to human life around the globe.

When AI is applied to weapons platforms, that is, to the structures that launch weapons such as jets, ships, and vehicles, the AI platform, the weapon, and the software architecture components together form a single AI weapons system. Humanity will then face AI's benefits and its risks (threats) at the same time. If we can predict when an AI weapon system will be successfully manufactured or invented, we can reduce its risks: by deterring AI scientists from continuing to develop as-yet-undiscovered AI weapons, we may prevent the first AI weapons war from ever occurring.

The risk of an AI weapon system lies in its autonomy: once such a system is manufactured, it has the ability to solve the problems of technological war, the power to act, and the ability to create new goals of its own. Reducing that risk therefore means asking how the system could be disabled, how its manufacturing process could be stopped, and how the minds of its inventors and scientists could be changed, so that AI tools are turned away from attack goals and toward positive ones. For humans cannot know a priori what an autonomous AI weapon system will do.