Some companies are sincere when they announce that they are pursuing superintelligence, even while acknowledging the risk that doing so could lead to the extinction of humanity. Things are moving fast; many people struggle to keep up with the frantic pace of progress and to understand where it is leading…
Therefore, the developing AI situation needs to be made legible and accessible to outsiders if society is to walk a potentially quite narrow path towards a good future. Istarion Research believes that experts have a responsibility here, and we would seek to encourage, facilitate, and enable them in fulfilling it. In addition to making expert conclusions legible, we aim to help non-experts grasp some of the key things that experts understand, by turning both specific research products and general understanding of important issues (currently scattered or not yet in existence) into improved sense-making among the broader academic community, the policy community, and the general public. Activities of this kind that we would pursue include:
- Make disparate information, and areas of consensus among experts, visible and legible, perhaps through executive summaries, illustrations, interactive websites, podcast-style discussions, long-form blog posts, and similar formats.
- Engage with the broader academic community, the policy community, and the general public, address the actual objections people have, and produce compelling content informed by this engagement.
- Remain grounded and focused on meaningful questions by holding many conversations with a wide range of stakeholders, and guide the institute's long-term investigations so that they bring sense and clarity to such discussions.
- Understand and publicly address the actual intuitions and reasons people have for what they believe and do.
Many questions remain open. There is currently nothing remotely approaching a consensus on a technical pathway that would lead to safe superintelligence (or to a good future). If funding permits, we would also perform fundamental research into the assumptions underlying various proposed technical pathways and turn the resulting research products into improved sense-making among experts, the broader academic community, the policy community, and the general public.
Donors
Matthew MacDermott