It's fascinating that some folks who are probably great science fiction writers are somehow considering their writing about AI risk as scientific research. A thread on imagined existential risks, creativity & a lack of it, history, LLMs, safety, regulation and anti-hype hypers.
@PabloBernardoTW @RichardSocher In what field? It’s one thing to be a technical expert capable of advancing a particular technology, it’s another to predict its impacts.
@PabloBernardoTW @RichardSocher My problem with @EMostaque signing it is that I’m quite confident they are working on LLMs too. Which makes me feel it can also be a “can you wait a bit while we catch up?”
@PabloBernardoTW @Ugo_alves @RichardSocher We are not and do not want to train very large language models, as our focus is on swarm, not general intelligence. Even if we were, most of the letter is basically in line with what OpenAI themselves recently called for. Good time for transparent governance x.com/emostaque/stat…
@PabloBernardoTW @Ugo_alves @RichardSocher I personally think even sci-fi agentic AI will be fine, but now is the time to have the public discussion, as I could be wrong and a whole host of real-world impacts from what we have today will be hitting in the coming months.
@EMostaque @PabloBernardoTW @Ugo_alves @RichardSocher As I'm sharing with others multiple times a day: I'm not worried about sentient beings as much as I am about 45% unemployment.