The below from OpenAI agrees with the bulk of the core calls of the FLI letter and also notes the potential existential threat etc. I think very large language models will probably be fine, but I admit I could be wrong and they could be right. Now is the time for proper transparency & governance.
@EMostaque I think the main issue is that bad actors will not stop developing their AI.
@EMostaque So long as open-source community developers DO NOT have to drink this Kool-Aid, I don't mind what the big companies and universities sign up for. But OpenAI is more concerned with squashing the competition and maintaining its monopoly than with preventing rogue AI from causing harm.
@EMostaque I think we should make an LLM that is instructed to self-iterate, gorges on code, and can write addendums to itself. Make sure it has some sort of off switch built in, and let it go... just see what happens.
@EMostaque What do they want safety checks on? The code? The training data, which may already be subject to laws that may already have been broken? The use?
@EMostaque It'd help if OpenAI were actually OPEN about what their GPT-4 model was, not just trained on, but how they built it. You can't govern something if you have no information on how it was built.
@EMostaque Nooooooo. These amazing and creative tools are incredibly useful. Frustrated that many of these "tech leaders", with their immense wealth, have decided to go in this luddite/pessimistic direction.
@EMostaque meanwhile in an off-site basement somewhere, machines go Brrrrr.......
@EMostaque What did you think of this petition? Have you signed it? openpetition.eu/petition/onlin…