In this report, we review the risks that must be taken into account to govern AI effectively and discuss how the European Regulation can be implemented in practice. For ease of analysis, we classify the risks associated with AI as either adversarial or structural. The first group comprises risks in which there is a direct relationship between the harm and the agent that causes it. Specifically, we identify two potential source vectors: malicious actors who intend to misuse AI, and AI systems themselves, which may pursue goals autonomously and contrary to human interests. The latter stands out as an unprecedented risk vector that will require innovative solutions. Regarding the specific threats associated with this group of risks, we focus on three: (i) cyber-attacks and other unauthorized access, (ii) the development of strategic technologies, and (iii) user manipulation. Cyber-attacks and other unauthorized access involve the use of AI to execute cyber offensives aimed at obtaining certain resources; the development of strategic technologies involves the misuse of AI to gain competitive advantages in the military or civilian sphere; and user manipulation involves the use of persuasion techniques, or the presentation of biased or false information, to condition human behavior.