- cross-posted to:
- futurology@futurology.today
Did nobody really question the usability of language models in designing war strategies?
To be fair, they’re not accidentally good enough: they’re intentionally good enough.
That’s where all the salary money went: hiring people who could make them good on purpose.
GPT-2 was just a bullshit generator. It was like a politician trying to explain something they know nothing about.
GPT-3 was just a bigger version of GPT-2: the same architecture, but with more parameters and training data, as far as I followed the research. Yet it could suddenly do a lot more than the previous version, seemingly by accident. And then the AI scene exploded.
So the architecture just needed more data and scale to generate useful answers. I don’t think that was an accident.