things we need for a good AGI future: 1) the technical ability to align a superintelligence 2) sufficient coordination among most of the leading AGI efforts 3) an effective global regulatory framework including democratic governance
@sama #3 seems to be quite the moonshot given the incentive structures for 'participant nations'
@sama The last thing we need is coordination and a global regulatory framework.
@sama Unfortunately, #3 will be a pain point across the world. Unification is not humanity's strong suit. There needs to be alignment, for sure, though.
@sama Is it interested in chicken tendies or good boy points?
@sama So in other words, it's a fool's errand and requires a total ban.
@sama Asimov's 3 Laws of Robotics?
@sama Human beings cannot agree on the good; how do we align a greater intelligence with the good?