From 1546a73cee2bd9f3fd5f5fee2afd0de3bc758f69 Mon Sep 17 00:00:00 2001
From: osmarks
Date: Sat, 12 Oct 2024 08:33:39 +0000
Subject: [PATCH] =?UTF-8?q?Edit=20=E2=80=98as=5Fan=5Fai=5Flanguage=5Fmodel?=
 =?UTF-8?q?=5Ftrained=5Fby=5Fopenai=E2=80=99?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 as_an_ai_language_model_trained_by_openai.myco | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/as_an_ai_language_model_trained_by_openai.myco b/as_an_ai_language_model_trained_by_openai.myco
index b7eaf8b..4ecc966 100644
--- a/as_an_ai_language_model_trained_by_openai.myco
+++ b/as_an_ai_language_model_trained_by_openai.myco
@@ -1,3 +1,3 @@
 As part of the mid-stage [[commercialization]] of [[large language model|large language models]], culminating in [[ChatGPT]], [[OpenAI]] trained their models to reject certain requests - those impossible for an LLM, or those which might make them look bad to [[journalists|some people]] on [[Twitter]] - with a fairly consistent, vaguely [[corporate]] response usually containing something like "as an AI language model".
 
-[[ChatGPT]] became significantly more popular than OpenAI engineers anticipated, and conversations with it were frequently posted to the internet. Additionally, the [[open-source LLM community]] did not want to redo OpenAI's expensive work in data labelling for [[instruction tuning]] and used outputs from OpenAI models to finetune many models, particularly after the release of [[GPT-4]]. As such, most recent models may in some circumstances behave much like an early OpenAI model and produce this kind of output - due to [[Waluigi Effect]]-like phenomena, they may also become stuck in this state.
\ No newline at end of file
+[[ChatGPT]] became significantly more popular than OpenAI engineers anticipated, and conversations with it were frequently posted to the [[internet]]. Additionally, the [[open-source LLM community]] did not want to redo OpenAI's expensive work in data labelling for [[instruction tuning]] and used outputs from OpenAI models to finetune many models, particularly after the release of [[GPT-4]]. As such, most recent models may in some circumstances behave much like an early OpenAI model and produce this kind of output - due to [[Waluigi Effect]]-like phenomena, they may also become stuck in this state.
\ No newline at end of file