Controversy… What controversy? It sounds more like blatant journalistic malpractice
When I suggested he be fired on another thread I received several responses saying “he made a mistake” and “he was sick”, and many downvotes in return.
I did not downvote you—my instance does not allow or show downvotes, which is really nice!—but he was sick, and he did make a mistake, and him being fired does not make either of those things false.
Also, a ton of people were piling on him in that thread, so you had plenty of company in calling for him to be fired.
The comments here around this were so… off. I guess nothing was certain, but we were supposed to believe that the author was too sick to write an article, yet somehow well enough to be writing one with an AI “tool” at the same time.
Hindsight is 20/20, but popular defenses at the time were:
He wrote the article himself; he just got mixed up when experimenting with using an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure to check the quotes he was copying from his notes to make sure that he got them right… but an important bit of context is that he had COVID while doing all this.
If he had Covid, then why was he working?
You know that the writer himself is quoted in the OP article, right?
Amazing. Just great.
Imagine being confronted for lying and just going “hey it was an accident okay I didn’t MEAN to deceive people, I just used the machine known for deceiving people and willingly put my name on its deceptions and it deceived people!” and having people defend you.
That’s why he was fired
The article says “controversy” as if this is some cancel culture crap.
A few years ago, blatant journalistic malpractice was a controversy.
Obviously the use of an LLM was a terrible decision, but I think in this context we can also blame some country’s lack of sick pay.
I’m not taking all the credit but I do hope those people who didn’t believe me in the past could rightfully take this comment, print it, pull down their pants and shove it up their ass.
It’s time to hold journalism to a higher standard, and this idea that “well they do alright” and “it was only once” is bullshit sliding into madness.
Just the facts, folks.
and “it was only once” is bullshit
They checked and then fired the author. I don’t see how this is a case of “it was only once,” implying nothing changed and it will happen again. Isn’t firing the author already “holding journalism to a higher standard,” which is what you asked for?
Main character moment.
The problem with your attitude towards this is that these companies are forcing “AI” down everyone’s throat. It’s a requirement now to churn out more bullshit than humanly possible.
This person was simply fired because they didn’t catch the false information, not because they used the tools forced upon them.
To be fair to Ars Technica, that doesn’t sound like the case to me.
The “journalist” in question seems to be suggesting that this was their own bad judgment to use AI to “find relevant quotes” from the source material.
Having said that, there’s also a senior editor on the byline who hasn’t been held accountable for clearly failing to do their job, which, as I understand it, is to read, edit, and verify the contents of the article. So in a way Ars seems to have a problem with quality whether or not the use of AI was mandated.
Ars is owned by Conde Nast, where multiple whistleblowers have said AI is being forced on them. Think that’s kind of relevant.
Is there any evidence this is happening at Ars Technica? They’re pretty transparent about their methods, and obviously tech-savvy. Just because it happened at Teen Vogue doesn’t mean it’s happening at Ars. Conde Nast publications seem to be run pretty independently. Take The New Yorker: their content remains amazing and seems fully independent.
Most companies force AI on their people, either directly or indirectly (“you need to double your output, AI can help…” kind of thing).
I don’t work at Ars, and maybe you know something I don’t, but I have seen nothing to suggest that they’re one of the companies doing that. It seems like they are pretty open about how they do not allow AI to be used in the process. Have they said something to indicate otherwise and I just missed it?
Sifting through information to find out what’s true and what’s not, before presenting it to the public, is a pretty crucial task and ability for an actual journalist though. It is probably one of the most important parts of their job to verify the correctness of their sources and what they write.
Then maybe they shouldn’t be using these tools in the first place. Other Conde Nast employees have already been blowing the whistle about this, which is funny because they sued all the AI companies for stealing content.
Whether there is a news article about it or not, these shitty tools are being shoved down everyone’s throats, from developers to authors.
Then maybe they shouldn’t be using these tools in the first place
I absolutely agree, they should not write articles with LLMs. I’m just saying they’re not absolved of basic journalistic responsibility because they’re instructed to use LLM tools.
As they should
Why are we blaming AI here instead of the journalist?
I mean they fired the guy, and the guy took full responsibility for the errors. If that’s not blaming the journalist, I don’t know what is.
Tbf, I didn’t read the article. But the title mentions “controversy.” Also are people so lazy they can’t make up their own fake quotes? Was AI really needed here?
Tbf, I didn’t read the article. But the title…
Say no more. Please
Are people so lazy they can’t even bother to read the headline? Maybe an AI would’ve been useful here to generate its own defense.
Being too lazy to read is one thing, not being too lazy to then comment is a whole other kind of existence.
AI - damned if you do and damned if you don’t. And it’s not just journalism affected.
Or, you know, double-check that the quotes given to you by the experimental AI “quote extractor” tool are accurate?
He is (was) their go-to AI reporter. It’s not like they handed the assignment to an intern and said “go nuts.”
And the article was about AI fabricating an attack on a developer that rejected its PR.
In this case it was very much NOT “damned if you do, damned if you don’t”–it’s just don’t.
As a journalist it’s your whole fucking job to do the research and report things accurately and truthfully. There’s no reason at all the “journalist” in question here should have had an AI generate anything for his shitty article.
The fact that this was a story on AI misuse in the first place only adds insult to injury.
And yet, if you don’t, you will be undercut by grossly subsidized AI and be out of a job: either individually, if your management leans AI, or as a whole enterprise if they don’t, replaced by the AI slop factories.
Yeah. But there’s always the risk of being undercut by someone or something cheaper if you’re operating in a workplace with zero standards. After all, you could write a lot of articles if you didn’t give a rat’s ass about the veracity or quality of the information within.
Good newsrooms are supposed to have standards–that’s what makes them good.
If the people at Ars had done their jobs to a high standard, the article in question wouldn’t have been written like that in the first place, let alone edited and published as is. Ars wants to blame the writer, and the writer wants to blame being sick, but the fact remains that the publishing of that article reveals a systemic problem with how Ars are operating, and a total lack of editorial standards.
The elite don’t need the masses to be informed; they need them placated and oblivious or confused about what is happening, so that they support what is contrary to their interests and idolize the elite. Good newsrooms don’t serve the purposes of those that own them. AI producing slop with embedded propaganda serves them. It has only just begun. Watch young people on TikTok, sopping up the numbing propaganda. It is the future, now controlled by US elites. Like programmers who know their code, accountants who know their books, and so many other professionals who pride themselves on the quality of their work, journalists who do their jobs to a high standard are being replaced. It will be very good for a few: those who can afford quality, free from slop and misinformation. But that’s not the audience of Ars.
What was the “damned if you don’t” in this scenario? Seems more like damned if you do, best if you don’t in this situation.
I don’t know, but the implication the other poster is making is: “a human can write 2 articles, an AI can write 5; I’m being asked for 5, which is impossible. I can use AI and risk trusting it, or not meet my required output and also get fired.”
I made up those numbers, but that’s the accusation. You are damned if you use the AI to meet your goals, and damned if you don’t meet your goals.
There’s an assumption there that the workload requested of them has increased, which I have no reason to believe. That person has been a writer for them for years, and since they don’t use AI as a rule, I don’t know why they would have increased the expected output from their staff. I’m not saying that never happens; I just don’t believe that’s what happened in this case, as there is no evidence to suggest it. I appreciate you explaining that comment, though.
My wife is an accountant. She went to a seminar today where they were told to start using AI or get out of the way. They were shown an AI that can produce consolidated annual accounts and financial statements in a few minutes, that it takes her and the auditors a month to produce. And they look very good! The company is unlikely to pay her and wait for the quality reports she has been producing for years. She’s on notice: start prompting the AI or move on. The AI promoters are going to run her and me and probably you into the ground and walk over us all, as they move on to their glorious future.
The AI promoters are going to run her and me and probably you into the ground and walk over us all, as they move on to their glorious future
LOL there’s no “glorious future”, they’re just going to rat fuck themselves, because those accounts are going to be riddled with errors.
Did they actually check if the generated stuff was correct? I’m betting it isn’t
What company does she work for so I can stay clear of that impending hallucinatory clusterfuck?
Her company has been good, though a recent restructuring is worrying. The advice came to an assembly of CFOs, and the problem is much bigger than her company: this was the second piece of professional-development guidance promoting AI that she has received in the past month. I give her examples of unreliability and advise caution. At the session, they advised that no one should study programming or accounting any more. My advice was that they should study how to audit, and that the use of AI will make effective auditing much harder than it has been, but also more necessary. The clusterfuck is going to affect everyone, unfortunately. You can’t avoid it by avoiding her company.
Ouch! Tell her I’m sorry, and I’m sorry for you too. All the accountants I worked with did a lot more than just reports. Not to mention that sounds great until the AI says 2+4 = 2*4 and now the company owes 20 billion in taxes…
Plus, in a lot of cases people don’t submit records in an identical format; the number of Excel workbooks I’ve seen where the data was on “sheet 2” for some unknown reason…
Maybe it’s just me, but I always provided raw data on sheet 1, analyzed data on sheet 2, and, if needed, complicated formulas on sheet 3. I would be willing to bet their AI would break on that format.
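For what it’s worth, that kind of layout drift doesn’t need AI at all; a few lines of ordinary code handle it. A minimal sketch in Python with pandas (the column names here are hypothetical, not from any real workbook) that scans every sheet for the expected headers instead of assuming the data lives on sheet 1:

```python
import pandas as pd

# Hypothetical header names; real submissions would define their own.
EXPECTED_COLUMNS = {"Date", "Amount"}

def find_data_sheet(path: str) -> pd.DataFrame:
    """Return the first sheet whose columns include the expected headers."""
    # sheet_name=None loads every sheet as a {sheet name: DataFrame} dict.
    all_sheets = pd.read_excel(path, sheet_name=None)
    for name, frame in all_sheets.items():
        if EXPECTED_COLUMNS.issubset(frame.columns):
            return frame
    raise ValueError(f"no sheet in {path} contains {EXPECTED_COLUMNS}")
```

Whether the data sits on sheet 1, 2, or 3, that finds it deterministically, which is exactly what an LLM ingesting the workbook can’t promise.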
Best if you don’t, if quality is more important than financial viability, but no one can compete financially with the flood of AI/LLM output being given away for free or, at most, far below actual cost. It’s not good for anyone but the billionaires, but have you noticed how much wealth they have accumulated in the past few years? It’s very, very good for them.
I get where you’re coming from, but I think it’s important that Ars has held this person accountable. They have a journalistic standard they are sticking to, which is that there should be no AI use, and there are repercussions for people who don’t abide by it. There’s not an extremely large cohort willing to spend more to avoid AI, but I am certainly part of it, and seeing Ars hold this person accountable helps me know that I can trust and patronize them ethically. There are businesses out there unwilling to acquiesce to an AI-first narrative, and I’m just worried that elements of doomerism are going to make people unwilling to believe those companies when they have every reason to believe them.
I have yet to see a field where LLMs are a net positive. At best, scammers can dupe people more easily and faster than ever; across writing, programming, etc., the average productivity gain is typically negligible at best for achieving work of similar quality with or without LLMs.
Maybe they heard @latenightlinux@mastodon.social