Discussing OpenAI's Open Sourcing of GPT-3 and Recent GPT-4 Quality Issues
I infer that GPT-3 will indeed be open-sourced, and I predict it will happen by next year at the latest.
The original article (論 OpenAI 開源 GPT-3,與近期 GPT-4 品質問題) was written in Traditional Chinese; below is an AI-translated version.
Will OpenAI open-source GPT-3?
My source is an article reporting OpenAI's plans as described by Sam Altman. The original article has since been taken down at OpenAI's request; here is a past archive.
I find the information in it consistent with my understanding; only the open-sourcing point is a bit strange, but I don't think it was fabricated.
As for why it was removed, my guess is that it was actually a closed-door meeting rather than a public statement or interview, and perhaps some of the information could affect stock prices...
I infer that OpenAI open-sourcing GPT-3 (but not GPT-3.5?) is very likely, but the reason is not, as Altman said, concern that others would be unable to run the model. There will surely be companies and organizations that can run it, and people will build streamlined versions so that more people can.
His real aim, I think, is to suppress other open-source efforts, hoping everyone will come back to relying on OpenAI's models rather than developing their own, a trend that, in the end, even OpenAI cannot stop.
Open-source GPT-3 but not GPT-3.5? And would it include RLHF? The base model is the most capable but also the most dangerous; it is safer after RLHF. From a responsibility standpoint, they should release the version with RLHF applied: it is harder for malicious actors to abuse, yet far more useful for companies looking to make money. However, I suspect OpenAI may release only the base model, forcing everyone to put in some extra work before they can start making money with it.
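For readers unfamiliar with what the RLHF stage actually adds: it starts from a reward model trained on human preference pairs, which then steers the base model during fine-tuning. Below is a minimal sketch of the standard pairwise (Bradley-Terry) reward-model loss; the function and variable names are my own choices for illustration, not anything from OpenAI.

```python
# Minimal sketch of the pairwise loss used to train an RLHF reward model.
# Standard Bradley-Terry objective; names here are mine, not OpenAI's.
import torch
import torch.nn.functional as F

def reward_model_loss(
    score_chosen: torch.Tensor,    # reward scores for the human-preferred responses
    score_rejected: torch.Tensor,  # reward scores for the dispreferred responses
) -> torch.Tensor:
    # Maximize the margin by which the chosen response outscores the rejected one:
    # loss = -log(sigmoid(r_chosen - r_rejected))
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# A trained reward model then steers the base model (via PPO or similar);
# that steering is the difference between a raw checkpoint and an RLHF version.
chosen, rejected = torch.tensor([1.2]), torch.tensor([-0.3])
print(reward_model_loss(chosen, rejected))  # ~0.20: chosen already outscores rejected
```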
When will it be released? When open-source models begin to pose a threat to GPT-3.
And when is that??? Given the pace of open-source model development, I predict it will happen within this year, and next year at the latest.
GPT-4 Quality
Judging from other people's reactions, GPT-4 has recently become faster but lower in quality, although the API seems unchanged.
I infer this is a change introduced after the May 24 version.
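One way to sanity-check the "API seems unchanged" claim yourself: the chat API lets you pin a dated snapshot and compare it against the rolling gpt-4 alias on the same prompt. A minimal sketch, assuming the pre-1.0 openai Python package, OPENAI_API_KEY set in the environment, and the gpt-4-0314 snapshot that was available at the time:

```python
# Compare the rolling "gpt-4" alias against a pinned snapshot on the same prompt.
# Assumes the pre-1.0 openai package and OPENAI_API_KEY in the environment.
import openai

PROMPT = "Explain how a hash table handles collisions."

def ask(model: str) -> str:
    """Send the same single-turn prompt to a given model and return the reply."""
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # reduce sampling noise so differences reflect the model
    )
    return resp.choices[0].message.content

for model in ("gpt-4", "gpt-4-0314"):  # rolling alias vs. the March 14 snapshot
    print(f"--- {model} ---")
    print(ask(model))
```

If the pinned snapshot and the alias answer noticeably differently, the served model has changed even though the API surface has not.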
Using GPT-4 now, it is indeed much faster; the text streams faster than I can keep up with, and the content is concise, answering directly without being verbose. I have not felt the quality getting worse.
I believe this is the result of efficiency improvements, and I also believe the reward target was retrained so that GPT generates concise answers, reducing the time each request occupies a GPU and allowing more users to be served.
This suggests that in earlier training, human raters preferred complete answers; now, after large-scale commercialization, concise answers turn out to better serve the interests of both the company and its users.
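To make the idea concrete, here is a purely hypothetical sketch of a reward target reshaped to favor concision: the reward model's quality score minus a per-token cost. The brevity_weight coefficient, and everything else in this snippet, are my assumptions for illustration, not anything OpenAI has disclosed.

```python
# Hypothetical: a reward target that trades answer quality against length.
# Nothing here is OpenAI's actual objective; it only illustrates how a reward
# model's output could be reshaped to favor concise answers during RLHF.
import torch

def shaped_reward(
    quality: torch.Tensor,         # reward-model score per response, shape (batch,)
    num_tokens: torch.Tensor,      # response length in tokens, shape (batch,)
    brevity_weight: float = 0.01,  # assumed trade-off coefficient
) -> torch.Tensor:
    # Subtracting a per-token cost makes two equally good answers rank
    # differently: the shorter one now gets the higher reward.
    return quality - brevity_weight * num_tokens.float()

quality = torch.tensor([0.80, 0.82])    # near-identical quality scores
lengths = torch.tensor([40, 400])       # one concise answer, one verbose
print(shaped_reward(quality, lengths))  # tensor([ 0.4000, -3.1800])
```

Under a target like this, a model tuned to maximize reward would learn to answer directly and stop early, which matches the behavior change described above.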