Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood [...] There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.
Artificial Intelligence has the potential to revolutionize numerous aspects of society, from healthcare to transportation to scientific research. In the previous chapters you have seen AI's ability to defeat world champions at Go, generate photorealistic images from text descriptions, and even discover new antibiotics. However, these developments also raise significant challenges and risks, including job displacement, privacy infringements, and the potential for AI systems to make consequential mistakes or be misused (see Chapter 2 on Risks for the full spectrum). Technical AI safety research is necessary to ensure that AI behaves reliably and aligns with human values, especially as it becomes more capable and autonomous. Yet technical research alone is not sufficient to address the full range of challenges posed by advanced AI systems.
The scope of AI governance is broad, so this chapter will primarily focus on large-scale risks associated with frontier AI. As a reminder, frontier AI systems are highly capable models that could possess dangerous capabilities sufficient to pose severe risks to public safety (Anderljung et al., 2023). Although many recent state-of-the-art advances have been driven by LLMs and foundation models, the term frontier AI is not limited to these types of models. We will examine why governance is necessary, how it complements technical AI safety efforts, and the key challenges and opportunities in this rapidly evolving field. We focus on the governance of commercial and civil AI applications, as military AI governance involves a distinct set of issues that are beyond the scope of this chapter.