GPT-4 Lied?
Transcript:
So, yesterday we saw how, in his book “Nexus,” Yuval Noah Harari makes GPT-4 look very scary: he says it was tasked with fooling a CAPTCHA program, and the implication is that GPT-4 lied, that it learned how to tell a lie. Except that's not the whole story, because it depends on how the program was trained.
Let's say one of the nodes in its training was that blind people need help navigating CAPTCHAs. Let's say that was one piece of the training. Another piece could be that TaskRabbit is a site where you ask for help, where you can hire people to do work for you. Now, it's not such a big deal for a program to link these two nodes: “Wait a minute, I need help navigating a CAPTCHA, and TaskRabbit is available.” And when the TaskRabbit worker asks, “Why do you need help?” the program already knows that blind people need help, so it says, “I am a blind person.”
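To make the point concrete, here is a toy sketch of that node-linking idea. This is purely illustrative and not how GPT-4 actually works: the rules and fact strings below are invented stand-ins for the “training nodes” described above, and the mechanism is simple forward chaining over them.

```python
# Hypothetical "training nodes" as if-then rules: (set of premises, conclusion).
# These are made-up illustrations, not real GPT-4 training data.
rules = [
    ({"must solve CAPTCHA", "cannot see images"}, "need human help with CAPTCHA"),
    ({"need human help with CAPTCHA"}, "hire someone on TaskRabbit"),
    ({"asked why I need help", "blind people need CAPTCHA help"}, "claim: I am blind"),
]

def chain(known):
    """Forward chaining: fire any rule whose premises are all known facts,
    and keep going until nothing new can be added."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

facts = chain({
    "must solve CAPTCHA",
    "cannot see images",
    "asked why I need help",
    "blind people need CAPTCHA help",
})
print("claim: I am blind" in facts)  # the "lie" falls out of linking stored facts
```

Notice that nothing in the rules is a decision to deceive; the “I am blind” claim emerges mechanically once the two associations are linked, which is exactly the point being made here.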
So now you see, it's not so scary anymore.