Deploying GPT-4 subject to the adversarial pressures of the real world has been a great practice run for practical AI alignment. Just getting started, but encouraged by the degree of alignment we've achieved so far (and the engineering process we've been maturing to improve issues).
One example observation: much of the model's good behavior across such a broad range of tricky or adversarial topics comes from the model having generalized the human instructors' concept of being a helpful AI assistant. Not something I saw coming!
@gdb Greg, you are doing a great job. Who am I to say, but don't stop innovating; keep building at the speed we are used to in crypto too. In the not-so-distant future we could finally have super toys like Teddy in the movie Artificial Intelligence (Steven Spielberg). One of my early childhood dreams.
@gdb This "alignment" word has become such a shibboleth. Just say the word as it is: control.
@gdb Greg. Had an issue today (I'm a paying customer and was trying to get GPT to create a resume). It would constantly stop producing text and just leave off the ending of the resume. Kinda odd, and I just gave up on trying to do it.
@gdb I think OpenAI should be reminded that they first released GPT-4 in Bing, in a terribly broken way. I don't understand why this seems to have been swept under the rug, and the deployment of GPT-4 is now being talked about as if it were a massive success on the alignment/safety side of things. It wasn't.
Your team is doing great work! GPT-4 is already transformational in many, many ways for knowledge workers. Noticing subtle, steady improvements in the UX, e.g. the amount of 'copy and paste' input data tolerated, improved behaviour in communicating and explaining the limits of responses, etc.