- cross-posted to:
- futurology@futurology.today
Did nobody really question the usability of language models in designing war strategies?
Whenever there's a disruptive technological advancement, DARPA looks at it to see if it can be applied to military action, and this has been true with generative AI, with LLMs and with sophisticated learning systems. They're still working on all of these.
They also generate clickbait news whenever one of their test subjects does something wacky, like killing its own commander in order to expedite completing the mission parameters (in a simulation, not in the field). The whole point is to learn how to train smart weapons not to do funny things like that.
So yes, that means on a strategic level, we're getting into the nitty-gritty of what we try to do with the tools we have. Generals typically look to minimize casualties (and to weigh other factors against the expenditure of living troops), knowing that every dead soldier is a grieving family, is rhetoric against the war effort, is pressure against recruitment and so on. When we train our neural nets, we give casualties (and the risk thereof) a certain weight, to inform how much an objective needs to be worth before we throw more troops at taking it.
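A minimal sketch of what that kind of weighting looks like in practice, assuming a toy planner that scores candidate plans; all the names and weight values here are hypothetical, not from any real system:

```python
# Hypothetical illustration: score a plan by weighing an objective's value
# against expected casualties and estimated risk to troops.
CASUALTY_WEIGHT = 10.0   # assumed penalty per predicted casualty
RISK_WEIGHT = 4.0        # assumed penalty per unit of estimated risk

def score_plan(objective_value: float,
               expected_casualties: float,
               risk_estimate: float) -> float:
    """A plan is only attractive if the objective's worth outweighs
    the weighted cost in casualties and risk."""
    return (objective_value
            - CASUALTY_WEIGHT * expected_casualties
            - RISK_WEIGHT * risk_estimate)

# An objective worth 50 "points" with 3 expected casualties and moderate
# risk scores below zero, so this toy planner would not commit troops.
print(score_plan(objective_value=50.0, expected_casualties=3.0, risk_estimate=2.0))
```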
Fortunately, AI generals will be advisory to human generals long before they command armies themselves, or at least I'd hope so: among our DARPA scientists, military think tanks and plutocrats are a few madmen who'd gladly take over the world if they could muster a perfectly loyal robot army smart enough to fight human opponents determined to learn and exploit any weaknesses in its logic.