We aim to provide a reliable and accessible source of information about existential risk from AI. This resource serves all audiences, whether they are new to the topic, looking to explore it in more depth, seeking answers to their objections, or hoping to get involved with research or other projects. Your donations will allow us to continue running distillation fellowships, in which a team of remote writers and editors collaborates on expanding and improving our written content.
To answer the “long tail” of less common questions, we’re also building an automated distiller: a chatbot that searches a database of alignment literature and summarizes the results, citing its sources. We also plan to redesign and improve the front end, using A/B testing to figure out which ways of presenting information make it easiest to take in and share. Some groups have already reached out to us about strategic partnerships, and we aim to develop an API that lets external websites embed our search function. With larger amounts of funding, we may hire software developers to continue working on these projects, and a CEO to direct them.
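For readers curious about how the distiller works, here is a minimal sketch of the retrieve-then-summarize pattern it is built on. Everything in it is illustrative: the two-document corpus, the toy keyword scorer, and the URLs are assumptions made for the example, and a production system would instead use embedding-based search over the full alignment literature plus a language model to write the summary.

```python
# Toy sketch of a retrieve-then-summarize pipeline (illustrative only).
# The corpus, scoring function, and URLs are placeholders, not our real data.

from dataclasses import dataclass


@dataclass
class Doc:
    title: str
    url: str
    text: str


# A stand-in for the database of alignment literature.
CORPUS = [
    Doc("Instrumental convergence", "https://example.org/ic",
        "Many final goals imply similar subgoals, such as self-preservation."),
    Doc("Orthogonality thesis", "https://example.org/ot",
        "Intelligence and final goals can vary independently of each other."),
]


def score(query: str, doc: Doc) -> int:
    """Toy relevance score: how many query words appear in the document.
    A real system would compare embedding vectors instead."""
    words = set(query.lower().split())
    return sum(w in doc.text.lower() for w in words)


def retrieve(query: str, k: int = 2) -> list[Doc]:
    """Return the k highest-scoring documents for the query."""
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]


def answer(query: str) -> str:
    """Stitch retrieved passages into a cited answer. A real distiller
    would hand these passages to a language model to generate the summary."""
    hits = retrieve(query)
    return "\n".join(f"{d.text} [{d.title}]({d.url})" for d in hits)


if __name__ == "__main__":
    print(answer("Can goals and intelligence vary independently?"))
```

The key design point is the separation of retrieval from summarization: because the answer is assembled from retrieved passages, each claim can carry a citation back to its source rather than relying on a model's unsourced memory.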