77. If I Were Emperor of New AI Safety Researcher Training...
(Epistemic status: Opinions, but justifiable ones. For RD, and to a lesser extent RK and DN/LT; with thanks to JM and PR.)

...then what would I make absolutely sure that the new blood read, played, or otherwise interacted with? And why? This list is not meant to be exhaustive, but I've tried pretty hard to cover a lot of ground very fast. You may assume that this is in addition to classics like "A List of Lethalities", excerpts from Bostrom, and "Ten Levels of AI Alignment Difficulty". Accordingly, these are the things I would personally add to that curriculum, or perhaps swap in for some of its more marginal items.

It's aimed all over the spectrum of what "new AI safety researcher" means: some entries are for totally new people, some are for people who already have a sense of what subfield they want to attack, and some could benefit literally everyone, established researchers included. I've tried to pick things that are specifically underutilized and a re...