I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in the news, in PR statements, etc.
Are there folks who work at companies – I'm especially interested in those in tech – that have a reasonable handle on AI's practical uses and its limitations?
Where I work, there’s:
- a dashboard of AI usage by team and individual, which will definitely not affect performance review in any way
- a mandate last month to use one AI tool, and a new mandate this month to abandon that tool and adopt a different one
- quarterly goals where almost every one has some amount of “with AI” in it
- letters from the CEO asking which teams are using AI to implement features straight from ticket descriptions, or (inspired by the news) using flocks of agents – asking for positives, with no mention of negatives
- a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
- teammates writing code and designs with AI and sending them for review without verifying functionality or pruning irrelevant portions, despite a stated policy that everyone is responsible for reviewing AI output
Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?


Not in tech, but LLMs have been great for my safety and compliance consulting business.
Before LLMs, I would spend quite a bit of my regular workday on creating safety plans and coming up with systems to improve conditions and ensure compliance.
Now, with the power of LLMs, management can generate those plans themselves. So instead of me spending my normal workday on it, I get to bill my emergency rate when the hallucinated slop gets rejected and they need something at the last minute.
I can honestly say LLMs have made me thousands of euros.
I sometimes have to get involved with writing safety protocols. Not my favourite task, but I've always been super nervous about using AI to assist, because it's such a specific, rigid, and important thing that needs to be expressed as simply as possible – all of which AI is bad at. Care to share how you use it?
They don’t, they said their thing is charging emergency rates to bail out other idiots who do use it and trust the output blindly.
That's on me for not reading. Thanks. I gotta learn that pre-coffee commenting should be double-checked.
You had me in the first half
urge to downvote rising… rising…
…calm
rising… rising… falling… rising
AI slop clean up is the new highest paying job.
oh, got it! going to found a startup for AI slop cleanup. we could use LLM to automate…
Job security
And probably a lot of meh-paying ones too, eventually, when the bubble bursts and people realise they'll never actually be able to trust LLMs.