Every serious AI person, including @sama and @demishassabis , is either agnostic or very mindful of the same concerns and advises cautious progress. Very few actually advise reversing or banning progress like @ESYudkowsky. But randos always gotta be variance-maxxing.

This is an absolutely incredible video. Hinton: "That's an issue, right. We have to think hard about how to control that." Reporter: "Can we?" Hinton: "We don't know. We haven't been there yet. But we can try!" Reporter: "That seems kind of concerning?" Hinton: "Uh, yes!"

Pretty much echoes my stance tbh. Experts mostly agree there is potential existential risk & we don't know how to mitigate it. Plenty of risks below the existential level too. No transparent governance or insight into those building the models that could lead to this risk.

@EMostaque How close are we to seeing AI display any sort of originality, or the ability to solve problems for which they lack prior data?

@EMostaque ......................................................

@EMostaque 100% - so three layers of research investment are needed: 1. Existential (low likelihood) 2. Systemic (high likelihood) 3. Specific (high likelihood)

@EMostaque @emad - if you were going to get cold feet, you should have done it last June. NOW what you should be doing is spearheading the push to develop safety, not partnering with an organization (Future of Life) whose entire stated reason for existence is to get rid of AI.

@EMostaque 100% unfortunately the discourse cannot be tamed.

@EMostaque Everyone knows there's risk, but the risk is people. AI just enables us to weaponize ourselves against each other like never before. There's a super simple solution that doesn't even take AI away.