I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in news, in PR statements, etc.
Are there folks who work at companies – I'm especially interested in those in tech – that have a reasonable handle on AI's practical uses and its limitations?
Where I work, there’s:
- a dashboard of AI usage by team and individual, which will definitely not affect performance review in any way
- a mandate last month to use one AI tool, and a new mandate this month to abandon that tool and adopt a different one
- quarterly goals where almost every one includes some amount of "with AI"
- letters from the CEO asking which teams are using AI to implement features from ticket descriptions, or (inspired by the news) to run flocks of agents, soliciting positives while never asking about negatives
- a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
- teammates writing code and designs and sending them for review without verifying functionality or pruning irrelevant portions, despite a statement that everyone is responsible for reviewing AI output
Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?


Medical device industry here. Some of our software and electrical engineers are using Claude as a sounding board for ideas, or as a starting point to find possible paths forward when they get stuck with a hard problem. Nobody trusts the model to give an accurate answer. At the end of the day, all work committed to a project is done by real humans with the normal review processes.
Management is cautiously looking at potential uses for AI in our products, but there is a healthy dose of skepticism all around. If your machine is displaying diagnostic data to a doctor, there cannot be any question as to whether the machine is hallucinating.
Honestly, this is probably the best use case for LLMs.
Tom Scott did something similar 2-3 years ago: he fed a bunch of his video titles into an LLM and had it come up with 100 new titles in a similar style. Most of the output sucked, a handful were videos he had already made, and a few more sounded plausible but didn't exist. But he got 8-10 that he could have turned into actual videos (doing all the work himself), and he even did so for a couple.
The hallucination of AI can be used to help a human (artist, programmer, designer, scientist, etc.) make a new connection they couldn't make before, and they can then use that connection to implement their new idea. But LLMs generally suck for anything more than that, and over-reliance on them slowly erodes people's ability to think and create over time.