We’re about to enter a world where we will have no way to know if any image or video is real or not. And we seem to have no plan for how to deal with that.
Personally (and I know this is very controversial), I think the potential for harm here is so massive that serious restrictions on further development and use of these products (until we’ve come up with adequate safeguards) should be considered.
@iamharaldur Been working for five years at @witnessorg on 'prepare, don't panic' around deepfakes, AI-generated media, authenticity and trust. Lots of plans and work you could throw your weight and expertise behind!
@iamharaldur We have never had a plan for anything. No plan for industrialization. No plan for oil. No plan for nukes. No plan for AI. We just keep blundering and fumbling forward. Hopefully we don't blow ourselves up, but the way we keep failing to plan...
@iamharaldur We're rapidly approaching a point where we have no way to know if the PEOPLE we are supposedly interacting with are real or not. Looking at Twitter, we may be beyond that point.
@iamharaldur Honestly I don’t know what the people developing these products are thinking… like they should have kept it private until safeguards were in place.
@iamharaldur Nothing’s too hard for @CorridorDigital… send your videos to them and they’ll debunk it
@iamharaldur You do realize blockchain offers immutable signatures on any digital action a person takes? Not going after you at all with this statement, but people who act like blockchain has no utility literally just don’t understand blockchain at all.
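The mechanism this reply is gesturing at is content fingerprinting: hash the media at capture time and anchor that hash somewhere append-only, so later tampering is detectable. A minimal sketch, using only Python's standard-library `hashlib` (no actual blockchain involved; the on-chain anchoring step and the `fingerprint` helper name are assumptions for illustration):

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 hex digest that uniquely identifies the content.

    Recording this digest in an append-only log (a blockchain being one
    example) would let anyone later check the image hasn't been altered.
    """
    return hashlib.sha256(image_bytes).hexdigest()

original = b"...raw image bytes..."
digest = fingerprint(original)  # hypothetically recorded at capture time

# Later: any change to the bytes yields a completely different digest.
tampered = original + b"\x00"
print(fingerprint(original) == digest)   # True  -> content unchanged
print(fingerprint(tampered) == digest)   # False -> content was modified
```

Note that this only proves the bytes match what was originally recorded; it says nothing about whether the original capture itself was authentic, which is why provenance efforts pair signing with capture-time attestation.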
@iamharaldur With filters, photoshopping, and Instagram, many of the photos we see are already “not real”. AI is obviously next level, but people’s sense of reality is already so diluted that it’s not registering as dangerous to most. IQ is down and AI is a shiny object.