• utopiah@lemmy.ml
    6 months ago

    Interesting video based on “No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance” https://arxiv.org/abs/2404.04125, which basically says (my interpretation) that contemporary techniques, i.e. not just LLMs but LMMs, are statistical models built on large datasets, and they don’t, and can’t, except at a ridiculously (basically impractically) high cost, cover the long tail, namely whatever is not quite popular.
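
    To make the “exponential data” point concrete, here’s a rough sketch (not the paper’s actual fit; the coefficients `a` and `b` are made up for illustration) of the log-linear relationship the paper reports between a concept’s pretraining frequency and zero-shot performance. If accuracy grows with the log of frequency, then each fixed accuracy gain costs a constant multiple of extra data, which is exactly why rare, long-tail concepts become impractically expensive:

    ```python
    import math

    # Hypothetical coefficients for a log-linear fit: acc(f) = a*log10(f) + b.
    # The shape (log-linear) matches the paper's claim; the numbers do not.
    a, b = 0.1, 0.2

    def acc(freq):
        """Illustrative zero-shot accuracy as a function of concept frequency."""
        return a * math.log10(freq) + b

    # Each +0.1 step in accuracy requires 10x more examples of the concept:
    for freq in (10**2, 10**3, 10**4, 10**5):
        print(f"{freq:>7} examples -> accuracy ~ {acc(freq):.2f}")
    ```

    So going from “okay” to “good” on a rare concept isn’t a matter of a bit more data; it’s another order of magnitude each time.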