The Alignment Problem is the central problem of Artificial Intelligence (AI): how do we develop AI so that it respects human values? The problem has gained weight recently, primarily due to prevailing skepticism surrounding Large Language Models (LLMs) such as ChatGPT. The remarkable and frequently misconstrued capabilities of these LLMs have prompted AI researchers to advocate either a temporary halt to AI advancement (see "Pause Giant AI Experiments: An Open Letter") or an immediate evaluation of the existential risk AI poses to humanity (see "Statement on AI Risk").
I am not a believer in or a signatory to either of these calls. In my view, these two influential appeals cast an unduly pessimistic light on the present state of AI, detrimentally impacting its acceptance and future development. LLMs, undeniably powerful as they are, are still nothing but tools, and it remains fully at our discretion whether to deploy them in constructive or destructive pursuits. Therefore, I believe our focus should center on improving the quality of human jobs with AI, rather than entertaining the notion of their substitution by AI.
I have published two series of books that seemingly have nothing in common at first glance. The first series is the Artificial Intelligence, Dreams and Fears of a Blue Dot series; it consists of two books. They are an adaptation of the aibluedot.com website to book form, with some editing done to account for feedback I received, mostly on social media.
The second series is the Formal Software Development series; it also consists of two books. These two books are wrappers around a program of study I proposed way back in 2011.
How are they related? There is a need for AI regulatory action in the US Congress, as well as in the EU, China, and the rest of the world. Far less work, however, has been done on how we will verify that AI systems actually comply with those regulations. Software verification is fundamentally a mathematical issue, so it must be based on proofs, not on English (or any other natural-language) text.
If we worry about AI extinguishing us humans, we should make sure we can prove that it cannot do that. In other words, core parts of AI systems will eventually have to be specified formally (i.e., in a formal language), and those core software parts proven to satisfy their formal specifications. From a more practical, personal perspective, in my work at sd-ai.org there will be a long-term need for such formal development when it comes to writing and verifying AI specifications. I put more details about this need in the SD-AI library.
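To make the idea concrete, here is a minimal sketch in Lean 4 of what "specified formally and proven to satisfy the specification" means. The `clamp` function and its safety property are hypothetical toy examples of my own, not part of any real AI system:

```lean
-- Toy component: an output limiter that caps a value at a limit.
def clamp (limit n : Nat) : Nat :=
  if n ≤ limit then n else limit

-- Formal specification: the limiter's output never exceeds the limit.
-- The theorem is machine-checked; no natural-language argument is needed.
theorem clamp_le_limit (limit n : Nat) : clamp limit n ≤ limit := by
  unfold clamp
  split
  · assumption          -- case n ≤ limit: the output is n itself
  · exact Nat.le_refl limit  -- case n > limit: the output is the limit
```

The point is the division of labor: the `theorem` statement is the formal specification, the proof script is the evidence of compliance, and a proof checker (here, Lean) verifies that evidence mechanically. Verifying a real AI system would involve far richer specifications, but the same shape of guarantee.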