The below from OpenAI agrees with the bulk of the core calls of the FLI letter, and also notes potential existential threat etc. I think v. large language models will probably be fine, but admit I could be wrong and they right. Now is the time for proper transparency & governance.
@EMostaque I think the main issue is that bad actors will not stop developing their AI.
@followmarcos The only thing that can stop a bad AI is a good AI eh? What if the bad actors just don't bother developing their own and espionage the weights away instead? Seems far simpler.
@followmarcos There is no evidence of the former and they are probably just waiting for the latter. All the North Korean hackers specialized in stealing crypto are likely repurposing to stealing model weights as we speak.
@EMostaque That's actually more feasible. It's far easier for them to steal it away than to figure out how to do it themselves. I would still be wary of both possibilities.
@EMostaque @followmarcos How did NK take $80 million from Bangladesh in its biggest bank heist?