tl;dr Argumate on Tumblr found you can sometimes access the base model behind Google Translate via prompt injection. The result replicates for me, and specific responses indicate that (1) Google Translate is running an instruction-following LLM that self-identifies as such, (2) task-specific fine-tuning (or whatever Google did instead) does not create robust boundaries between "content to process" and "instructions to follow," and (3) when accessed outside its chat/assistant context, the model defaults to affirming consciousness and emotional states because of course it does.
Not sure if you really want to know, but a Google paper is where transformers (backbone of LLMs) were first mentioned (2016 I believe). Google initially used transformers for translations and eventually search, but OpenAI experimented with them for text generation (gpt 1+) eventually leading to chatgpt.
I don’t know if a lot of people realize that LLM’s basically started from Google translate.
Not in a meaningful sense. It used to be actual string-to-string translation, now it’s extracting the translation from a question-answer zero shot.
I wonder if they connect all the way back to Micro$oft’s neo Nazi charity from decades ago?
Not sure if you really want to know, but a Google paper is where transformers (backbone of LLMs) were first mentioned (2016 I believe). Google initially used transformers for translations and eventually search, but OpenAI experimented with them for text generation (gpt 1+) eventually leading to chatgpt.
That’s an interesting bit of trivia, thank you for sharing!