#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can
www.aaronbergman.net
Listen on Spotify or Apple Podcasts. Be sure to check out and follow Holly's Substack and org Pause AI.

Blurb and summary from Clong

Blurb

Holly and Aaron had a wide-ranging discussion touching on effective altruism, AI alignment, genetic conflict, wild animal welfare, and the importance of public advocacy in the AI safety space. Holly spoke about her background in evolutionary biology and how she became involved in effective altruism. She discussed her reservations around wild animal welfare and her perspective on the challenges of AI alignment. They talked about the value of public opinion polls, the psychology of AI researchers, and whether certain AI labs like OpenAI might be net positive actors. Holly argued for the strategic importance of public advocacy and pushing the Overton window within EA on AI safety issues.