This group has been studying dolphins since 1985, using a non-invasive approach to track a specific community of Atlantic spotted dolphins. The WDP creates video and audio recordings of the dolphins, along with corresponding notes on their behavior.
Just the 40 years, for that specific group of dolphins.
So suppose the language model can produce grammatically correct and semantically meaningful dolphin language: how does it translate that into a human language?
The reason LLMs can do this for human languages is that we have an enormous corpus of Rosetta stones, parallel texts for every language pair, which lets the model correlate concepts across languages. The training data for human-to-dolphin is going to be just these “behavioural notes.”
So the outcome is that the bullshitting machine will convince the scientists it knows what the dolphins are saying when it’s actually just making stuff up.
It’s a big problem with LLMs that they very rarely answer, “I don’t know.”
LLMs use a tokenizer stage to convert input data into neural-network inputs, then a de-tokenizer stage at the output.
Those tokens are not limited to “human language”; they can just as well be positions, orientations, directions, movements, etc. “Body language”, or the flight pattern of a bee, is as tokenizable as any other input data.
The concepts a dolphin language may have, whatever they are, could then be described in a human language, or matched to human words that express the same thing.
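To make the tokenization point concrete, here is a minimal sketch in Python. The bin sizes, vocabulary layout, and the idea of encoding (heading, speed) samples are all made up for illustration; this is not how the WDP data or DolphinGemma actually works, just an example of how non-text signals can be mapped to and from discrete token ids.

```python
import math

# Hypothetical toy vocabulary: each token id encodes a (heading bin, speed bin) pair.
HEADING_BINS = 8   # 45-degree sectors
SPEED_BINS = 4     # still / slow / cruise / burst

def tokenize_movement(dx: float, dy: float, speed: float) -> int:
    """Map one movement sample to a single integer token id (0..31)."""
    heading = math.atan2(dy, dx) % (2 * math.pi)
    heading_bin = int(heading / (2 * math.pi) * HEADING_BINS) % HEADING_BINS
    speed_bin = max(0, min(int(speed), SPEED_BINS - 1))   # crude clipping into bins
    return heading_bin * SPEED_BINS + speed_bin

def detokenize(token_id: int) -> tuple[float, float]:
    """Recover an approximate (heading, speed bin) from a token id."""
    heading_bin, speed_bin = divmod(token_id, SPEED_BINS)
    heading = (heading_bin + 0.5) * 2 * math.pi / HEADING_BINS
    return heading, float(speed_bin)

if __name__ == "__main__":
    # A short "sentence" of movement samples becomes a sequence of token ids,
    # which is exactly the kind of input a sequence model is trained on.
    track = [(1.0, 0.0, 0.5), (0.7, 0.7, 2.0), (0.0, 1.0, 3.5)]
    tokens = [tokenize_movement(*sample) for sample in track]
    print(tokens)                          # [0, 6, 11]
    print([detokenize(t) for t in tokens])  # approximate reconstruction
```

The point is only that anything you can discretize can be fed to a sequence model; whether the resulting token sequences carry translatable meaning is a separate question entirely.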
It’s just going to hallucinate bullshit. Because we have so much training data of conversations between humans and dolphins, don’t we?