"Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential." 🤣 no you aren't openai.com/blog/planning-…

Aha, so the way to put the AGI genie (which they themselves say could end humanity and upend society) back in the bottle is to switch it off. Killswitch operator, such an important job. This sophisticated thinking combined with amazing governance fills me with confidence, how about you?

Full interview with other such delightful nuggets here: youtu.be/540vzMlf-54 Have heard they are actively lobbying against open source, despite the fact it is essential to national security - open, auditable, interpretable systems are essential for private and regulated data.

@elonmusk @EMostaque The problem with AI is that the very people who determine its safety are the ones who would be most shielded from its negative effects if they were to manifest.