Always remember that it will only get better, never worse.
They said “computers will never do x” and now x is assumed.
There’s a difference between “this is AI that could be better!” and “this could one day turn into AI.”
Everyone is calling their algorithms AI because it’s a buzzword that trends well.
Shit as dumb as decision trees are considered AI. As long as there’s an if-statement somewhere in the app, they can slap the label AI on it, and it’s technically correct.
That’s not technically correct unless the thresholds in those if-statements are updated based on information learned from the data.
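To make that distinction concrete, here's a minimal sketch (using scikit-learn, with made-up toy data) of the difference: a decision tree's if-statement threshold is inferred from the data during fitting, rather than hard-coded by a programmer.

```python
# A decision tree "learns" its if-statement threshold from data,
# which is what separates it from a hand-written if-statement.
from sklearn.tree import DecisionTreeClassifier

# Toy data: the label flips from 0 to 1 somewhere between 3 and 6.
X = [[1], [2], [3], [6], [7], [8]]
y = [0, 0, 0, 1, 1, 1]

# A depth-1 tree is literally one if-statement, but its split point
# is chosen from the data, not written by hand.
clf = DecisionTreeClassifier(max_depth=1).fit(X, y)

# The learned threshold lands between the two classes (around 4.5 here).
print(clf.tree_.threshold[0])
```

A plain if-statement with a hard-coded `x > 5` would behave the same on this toy data, but it would never adapt if the data changed; the fitted tree re-derives its threshold on every fit, which is the minimal sense in which it "learns."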
It usually also gets worse while it gets better.
But I take your point. This stuff will continue to advance.
But the important argument today isn’t over what it can be, it’s an attempt to clarify for confused people.
While the current LLMs are an important and exciting step, they’re also largely just a math trick, and they are not a sign that thinking machines are almost here.
Some people are being fooled into thinking general artificial intelligence has already arrived.
If we give these unthinking LLMs human rights today, we expand corporate control over us all.
These LLMs can’t yet take a useful ethical stand, so we need to not rely on them that way if we don’t want things to go really badly.