Do human programmers not fail?
I don’t want to hype AI, but you’re basically comparing a high-school-graduate AI (broad general knowledge, no specialization) with a flawless senior dev. That’s not a fair comparison.
As soon as an AI performs better than the average developer in a given area, it will start replacing them in that area. Simple as that.
Of course it will make errors, but the real question is whether the extra errors, compared to a human, are worth the savings.
Just a quick example: say you’d need 10 devs at 100k a year, and they produce errors worth 200k a year. That’s 1.2 million a year in total.
If an AI costs 100k a year in licenses, replaces 5 of those devs, and adds, say, another 200k in errors on top, you’re at 500k in salaries + 200k in human errors + 100k in licenses + 200k in AI errors = 1 million a year — still less than before.
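The back-of-the-envelope math above can be sketched in a few lines of Python (all figures are the hypothetical numbers from this example, in thousands per year — not real data):

```python
# Annual cost model: salaries + cost of errors + AI licenses.
# All figures hypothetical, in thousands per year.

def total_cost(devs, salary, human_errors, licenses=0, ai_errors=0):
    """Total annual cost of a team, optionally augmented by an AI."""
    return devs * salary + human_errors + licenses + ai_errors

# 10 devs at 100k each, producing 200k worth of errors:
humans_only = total_cost(devs=10, salary=100, human_errors=200)

# 5 devs remain, same human error rate, plus 100k in AI licenses
# and 200k in additional AI-caused errors:
with_ai = total_cost(devs=5, salary=100, human_errors=200,
                     licenses=100, ai_errors=200)

print(humans_only)  # 1200  (= 1.2 million a year)
print(with_ai)      # 1000  (= 1.0 million a year)
```

Whether the comparison holds in practice hinges entirely on that `ai_errors` term, which is the hardest number to estimate up front.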