I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in news, in PR statements, etc.
Are there folks who work at companies – I’m especially interested in tech companies – who have a reasonable handle on AI’s practical uses and its limitations?
Where I work, there’s:
- a dashboard of AI usage by team and individual, which will definitely not affect performance review in any way
- a mandate last month to use one AI tool, and a new mandate this month to abandon that tool and adopt a different one
- quarterly goals, almost every one of which has some amount of “with AI” in it
- letters from the CEO asking which teams are using AI to implement features straight from ticket descriptions, or (inspired by the news) to use flocks of agents – asking only for positives, with no mention of negatives
- a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
- teammates writing code and designs and sending them for review without verifying functionality or pruning irrelevant portions, despite a statement that everyone is responsible for reviewing AI output
Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?


Not a tech company, but a petroleum exploration company, which involves a lot of tech. The petroleum industry in general is extremely conservative in terms of tech, in that older and proven technologies tend to stick around. For example, I often write data to magnetic tape.
However, the industry doesn’t shy away from newer technologies where they do make sense. There is some AI at play, but it is limited in scope and only deployed where it makes sense. Most of it is done on the processing side, so I don’t know much about it, but I get the impression it’s used in a similar manner to those headlines you see from time to time about AI predicting rectal cancer with 99% accuracy. Interpreting seismic survey data involves some geophysical wizardry that I’ve never quite understood – I just make sure the production servers offshore work.
For the size of data that oil exploration requires, tapes still make a lot of sense.
They have higher density, and they are more shock-resistant. When you need to move masses of data around the world, writing it to tape and then sticking it on a plane is still (probably – this may have changed, I guess) the fastest way to move it.
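A quick back-of-envelope calculation shows why the plane often wins. The figures below are illustrative assumptions, not numbers from this thread: a 1 PB dataset, a dedicated 10 Gbit/s link, and roughly two days for door-to-door courier shipping.

```python
# Back-of-envelope comparison: shipping tapes vs. a sustained network link.
# All figures are assumed for illustration (1 PB, 10 Gbit/s, 48 h courier).

PETABYTE_BITS = 8 * 10**15   # 1 PB (decimal) expressed in bits
LINK_BPS = 10 * 10**9        # 10 Gbit/s sustained throughput
SHIPPING_HOURS = 48          # assumed packing + flight + delivery time

# Time to push the whole dataset through the link, in hours
network_hours = PETABYTE_BITS / LINK_BPS / 3600
print(f"network: {network_hours:.0f} h, shipping: {SHIPPING_HOURS} h")
```

Under these assumptions the link needs over 200 hours, so the tapes arrive long before the transfer would finish – and the gap widens as datasets grow while flight time stays constant.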
Yup, I 100% agree. Tapes are often viewed as obsolete, but there is no more cost-effective or safer way to store petabytes of data than tape.
Hell, at work I have a few live storage clusters measured in petabytes, and being responsible for them can be pretty stressful at times. Data loss isn’t just bad – it is fucking terrifying when it’s data that costs hundreds of thousands of dollars per day to collect.
I have yet to experience data loss, but I breathe a sigh of relief for every batch of data that has been confirmed written to tape. Because once it is, I know that it is safe and no longer my responsibility.
It’s written to two sets of tape at a time, both of which are read back to confirm data integrity, and once that’s done, I know my live copy officially no longer has to serve as the backup.
One set of tapes is stored on board in case something stupid happens with the other set during transport to a literal mountain for storage. There it is re-read and checksummed, confirming that the other set of tapes can be rewritten with the next dataset. (Yes, every dataset is written to tape twice.)
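The write-then-verify step described above can be sketched in a few lines. This is a minimal illustration, assuming each tape copy is exposed as a file-like path; the function names, chunk size, and SHA-256 choice are my assumptions, not details of the actual pipeline.

```python
# Sketch of read-back verification: a copy only counts as safe once its
# digest, computed from a fresh read, matches the source's digest.
import hashlib

CHUNK = 1024 * 1024  # stream in 1 MiB chunks to keep memory use flat


def sha256_of(path):
    """Stream a file from disk (or tape) and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_copies(source, copies):
    """True only if every copy's read-back digest matches the source."""
    want = sha256_of(source)
    return all(sha256_of(c) == want for c in copies)
```

The key design point is that the digest is computed from a re-read of the written media, not from the buffer that was written – which is exactly why the workflow above only trusts a tape after it has been read back.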
Seems like large-scale data analysis and mathematics are the strong points of AI, if I understand the tools correctly – less ambiguity and less room for hallucinations.
Do people agree?
“Artificial Intelligence” is a very broad term that, within computer science, covers a range of techniques and tools for the study of human-like behavior and impersonation. Before the current fad of calling LLMs “AI”, the term was most often used in video games, covering techniques for pathfinding, decision making, reacting, appearing to speak, etc. Before that – pre-90s, basically – “AI” had already undergone a few boom-and-bust cycles of hype with chess-playing machines and, as always, chat bots.
In many fields, many of these same techniques and their descendants are being used to model, simulate, and predict. All of them have trade-offs and limitations – that’s what computer science is all about.
I do remember talking to chatbots on AIM back in the day, so I think I had a leg up on other people in already understanding that the technology has existed for decades, which made me more cautious about the claims.
They made such a big leap so quickly, though. I remember even in 2018 thinking no bot would ever pass the Turing test.
Great point – they have come far, but my interactions have led me to believe they have come super far at faking it, not at actually understanding what is being done.
Maybe they have come further than I realize, but based on how easily they get tripped up on simple things and tie themselves into knots, the general models haven’t come much further since.
Yeah, I think so. When you have a huge dataset with low signal to noise, AI tools seem pretty great.