Let's not just focus on whether #GPT4 will do more harm or good on the job market, but also on whether its coding skills will hasten the arrival of #superintelligence – for which AI safety researchers have so far failed to discover any safety guarantees: youtube.com/watch?v=3Om9ss…

@tegmark Maximum truth-seeking is my best guess for AI safety

@elonmusk @tegmark It vibes well with culture wars, but it has nothing to do with safety. ASIs converge on killing everything in their path as an instrumental goal for most endeavors, including "truth seeking".

@elonmusk @tegmark How? For example, cigarettes were "physician tested & approved" from the 1930s to the 1950s; how will AI know these "research studies" and others were rigged?

@GailAlfarATX @elonmusk @tegmark Yesterday I was talking to someone about how cigarettes were once advertised as part of a healthy lifestyle! That smoking cigarettes is even still a thing nowadays, with all the information available, blows my mind.

@elonmusk @tegmark How do you define truth in the age of alternative facts?

@elonmusk @tegmark I hope like hell that you are distinguishing "AI safety" from "AGI notkilleveryoneism", because what you're describing may be one component of a solution to the Prude Corporatespeak syndrome in chatbots, but not to extremely smart AGIs killing everyone.

@elonmusk @tegmark Or do you mean maximum truth-seeking in the people thinking about and building AGI? If so, I'd ask whether you very carefully and neutrally evaluated exactly how much expected outcome shift could be produced that way, which is what truth-seeking looks like in humans.

@elonmusk @tegmark Stopping AI dead in its tracks is my best guess for AI safety.

@primalpoly @elonmusk @tegmark There is no path that stops it. Even if we managed to stop it in the US, other countries would continue to develop and deploy it. AI is here, and Kurzweil's singularity has probably already passed.