Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an “edu” version of a model like ChatGPT or Gemini). This is usually done in the name of preparing kids for the future, despite the fact that no consensus exists on what preparing them for a future with AI actually means.
Some educators have said they believe AI is not that different from previous cutting-edge technologies (like the personal computer and the smartphone), and that we need to push the “robots in front of the kids so they can learn to dance with them” (paraphrasing Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology whose own experts and developers (including Sam Altman a couple of years ago) warn of the need for serious regulation to avoid potentially catastrophic consequences probably isn’t something we should take lightly. In very important ways, AI isn’t comparable to the technologies that came before it.
The reasoning we’re hearing from educators in favor of AI adoption doesn’t offer very solid arguments for rushing to include it broadly in virtually all classrooms, rather than offering something like optional college courses in AI education for those interested. It also doesn’t sound like the sort of academic rigor and careful vetting many of us would have expected of the institutions tasked with the important responsibility of educating our kids.
ChatGPT was released roughly three years ago. Anyone who uses AI generally recognizes that its actual usefulness is highly subjective. And as much as it might feel like it’s been around for a long time, three years is hardly enough time to have a firm grasp on what something that complex actually means for society or education. It’s really a stretch to say it’s had enough time to establish its value as an educational tool, even if we had come up with clear and consistent standards for its use, which we haven’t. We’re still scrambling and debating about how we should be using it in general. We’re still in the AI wild west, untamed and largely lawless.
The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?
The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into the hands of customers while they’re young, potentially cultivating brand loyalty early. They get a wealth of highly valuable data on them. They may even get to experiment on them, as they have previously been caught doing. And they reinforce the corporate narrative behind AI: that it should be everywhere, a part of everything we do.
While some may want to assume that these companies are doing this as some sort of public service, their track record reveals a more consistent pattern of actions focused on market share, commodification, and the bottom line.
Meanwhile, there are documented problems educators are contending with in their classrooms as many children seem to be performing worse and learning less.
The way people (of all ages) use AI has often been shown to encourage “offloading” thinking onto it, which seems close to the opposite of learning. Even before AI, test scores and other measures of student performance were plummeting. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that, in their current form, may be more of an impediment to learning than an aid.
This approach has the potential to leave children even less prepared to deal with the unique and accelerating challenges our world presents, challenges that will require the very critical thinking skills currently being eroded (in adults and children alike) by the technologies being pushed as learning tools.
This is one of the many crazy situations happening right now that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given my personal experiences and what I’ve heard from others. One quick look at the state of society today will tell you that even we adults are becoming increasingly unable to determine what’s real anymore, in large part thanks to the way our technologies are influencing our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.
I’m not personally against AI; I sometimes use open-source models, and I believe there is a place for it if done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we’re hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, as we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?
Grok AI Teacher is coming to a school near you! With amazing lesson plans like “Was the Holocaust even real?”
Pedos aren’t allowed near schools
“Well good news folks, problem solved. You need to be a person to be a pedophile and Grok isn’t a person. Therefore Grok can’t be found liable for anything it does. Therefore it’s safe and won’t risk me being litigated. Therefore it’s safe as is for kids. I don’t know why everyone got so upset.” Says Elon Musk. /S but also not/s.
Except for all the ones that are.
I’ve never seen anything make more people act stupid faster
Three years ago, and everyone talks about it like life has never existed and never will without it, and like you’re useless to society if you don’t use it.
So stupid I don’t have a nice, non-rage-inducing way to describe it. People are simply idiots and will fall for any sort of marketing scam
People who can’t think critically tend to vote Conservative.
Coincidence? I think not.
That’s why conservative governments are all in on adopting AI.
AI highlights a problem with universities that we have been ignoring for decades already: learning is not the point of education. The point is to get a degree with as little effort as possible, because that’s the only valuable thing to take away from education in our current society.
I’d argue schooling in general. Instead of being something you do because you want to and enjoy it, it’s instead a thing you have to do either because you don’t have the qualifications for a promotion, or you need the qualifications for an entry-level position.
People who are there because they enjoy study or want to learn more are arguably something of a minority.
Naturally, if you’re there because you have to be, you’re not going to put much, if any, effort in, and will look to take what shortcuts you can.
The rot really began with Google and the goal of “professionalism” in teaching.
Textbooks were thrown out, in favour of “flexible” teaching models, and Google allowed lazy teachers to just set assignments rather than teach lessons (prior to Google, the lack of resources in a normal school made assignments difficult to complete to any sort of acceptable standard).
The continual demand for “professionalism” also drove this trend - “we have to have these vast, long-winded assignments because that’s what is done at university”.
AI has rendered this method of pedagogy void, but the teaching profession refuses to abandon their aim for “professionalism”.
Hello
I’ve been working on forming a socialist students’ society. Our first and current campaign is fighting back against AI in the local college, and the reaction from students has been electric. Students don’t want this; they know they are being deskilled, and they know who profits.
My brother-in-law is in college for engineering. His mom was telling me he uses AI for his assignments and just edits the responses. She is writing a book, and said she uses AI all the time for it.
It makes me want to scream. We weren’t even allowed to use SparkNotes when I was a student, and yet the schools today seem to be pushing this tech on them. The mother used my SparkNotes example as an excuse: “see, kids have always looked for ways to make their work easier.” It’s not the same, lady…
While I’m glad to hear your school cohort is enthusiastic and informed, I’m not so sure it’s the general consensus among college students.
I’ve met some in this campaign who like AI, but they don’t take much convincing once the material conditions are explained. I’m sure most of those students will continue to use AI, but 90% of the students I’ve spoken with so far agree. I’d imagine the educational institutions are getting some form of kickback or guidance from AI firms/partners; we saw similar with PCs entering education, something Silicon Valley opts their own children out of.
I just keep seeing in my head when John Connor says “we’re not going to make it, are we?”
Major drag, huh?
We aren’t.
It’s in our nature to destroy ourselves.
Can’t I just take one for the team and do it one weekend booze binge at a time?
You can. But why wait for the weekend?
I like being sober at my job.
Then maybe we should focus on destroying the more difficult targets first, instead of defenseless targets like children and homeless people.
Previous tech presented information and made it faster and more available; it only ever processed information. AI, however, claims to do the creativity and decision making for you. Once you’ve done that, you’ve removed humans from every part of the equation except as passive consumers unneeded for any production.
How you plan on running an economy based on that structure remains to be seen.
At work now we’re having team learning sessions that are just one person doing a change incredibly slowly using AI while everyone else watches, but at least I can keep doing my regular work if it’s a Teams call. It usually takes the AI about 45 minutes to decide what I immediately knew needed doing.
Through AI as glorified meme generators, the oligarchies are now steering millions of people to become… cows.

Wasn’t this from CP2077?
Yeah, and it’s also a perfectly horrifying metaphor, like we’re headed into Brave New World.
Yeah, from “The Hunt” quest.
https://www.youtube.com/watch?v=cibafW6XFLA
Can’t believe I called that.
It’s a pretty fucked up but awesome quest about our future.
Specifically, it’s about the abused kid who grew up to bait and kidnap kids with issues, then pump them with an extreme amount of cow growth hormones until they die.
Already seeing this in some junior devs.
Recently had to lay someone off because they just weren’t producing the work that needed to be done. Even the simplest of tasks.
I would be like, “we need to remove/delete these things.” That’s it. It took some time because you had to do some comparison and research, but it was a super difficult task for them.
I would then give them something more technical, like writing a script, and it was mostly OK; much better work than on the simple tasks I would give.
Then I would get AI slop and I would ask, WTF are you thinking here? Why are you doing this? They couldn’t give a good answer because they didn’t actually do the work. They would just have LLMs do all their work for them, and if anything required them to do any sort of thinking, they would fail miserably.
Even in simple PR reviews, I would leave at least 10 comments just going back and forth. It got to the point where it was just easier if I had done it myself. I tried to mentor them and guide them along, but it just wasn’t getting through to them.
I don’t mind the use of LLMs, but use it as a tool, not a crutch. You should be able to produce the thing you are asking the LLM to produce for you.
Same. My guy couldn’t authenticate a user against a password hash, even after I gave him the source code. It’s like copying homework: you just shoot yourself in the foot for later.
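For context, that task is only a few lines with the standard library. Here’s a minimal sketch, assuming PBKDF2-style hashes (whatever scheme his actual codebase used could of course differ):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None):
    """Derive a PBKDF2-HMAC-SHA256 hash; the salt gets stored alongside it."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("hunter2")
assert verify_password("hunter2", salt, stored)
assert not verify_password("wrong", salt, stored)
```

The whole trick is re-deriving with the stored salt and comparing in constant time. With the source code in hand, there’s nothing left to figure out.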
Meanwhile Junior Devs: “Why will no one hire me?!?!”
There is a funny two-way filtering going on here.
Job applications are auto-rejected unless they go on about how “AI will reshape the future and I am so excited,” at least on LinkedIn.
Then engineers that do the interviews want people interested in learning about computers through years of hard work and experience?
Problem is, people are choosing careers based on how much they will pay, instead of things they want to do or are passionate about. It’s rare nowadays to have candidates who also have hobby work or side projects related to the job. At least by my reckoning.
Problem is, most jobs don’t pay enough anymore. So people don’t have the luxury of picking what they’re passionate about; they have bills to pay. Minimum wage hasn’t been raised in 16 years. It wasn’t enough 16 years ago, and it now buys only 60% of what it did back then.
The seniors can tell. And even if you make it into the job, it’ll be pretty obvious within the first couple of days.
I interview juniors regularly. I can’t wait until the first time I interview a “vibe coder” who thinks they’re a developer, but can’t even tell me what a race condition is or the difference between synchronous and asynchronous execution.
That’s going to be a red letter day, lemme tell ya.
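For anyone reading along: a race condition is just what happens when unsynchronized threads share mutable state. A toy Python sketch of my own (not from any real interview) shows both the bug and the fix:

```python
import threading

counter = 0  # shared mutable state, no synchronization

def unsafe_increment():
    global counter
    for _ in range(100_000):
        tmp = counter      # read
        counter = tmp + 1  # write: another thread may have bumped counter in between

threads = [threading.Thread(target=unsafe_increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # usually well below the expected 400000, because updates were lost

# The fix: make the read-modify-write atomic with a lock.
lock = threading.Lock()

def safe_increment():
    global counter
    for _ in range(100_000):
        with lock:
            counter += 1
```

And synchronous vs. asynchronous is just whether the caller blocks waiting for the result or keeps going and handles it when it’s ready. If a candidate can’t get near either idea, the fundamentals aren’t there.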
“Would you say I have a decorator on this function?”

I get that they can download widgets to accelerate the results, but they need to learn how the things work. I just code what I need by hand instead. Their net result is quick up-front results, but heaven forbid maintenance or customization.
Ban AI in schools
Old man yells at cloud.
I remember the “ban calculators” back in the day. “Kids won’t be able to learn math if the calculator does all the calculations for them!”
The solution to almost anything disruptive is regulation, not a ban. Use AI at times when it can be a learning tool, and redesign school to be resilient to AI when it would not enhance learning. Have more open discussions in class, for a start, instead of handing kids a sheet of homework that can be done by AI when the kid gets home.
I remember the “ban calculators” back in the day
US math scores have hit a low point in history, and calculators are partially to blame. Calculators are good to use if you already have an excellent understanding of the operations. If you start learning math with a calculator in your hand, though, you may be prevented from developing a good understanding of numbers. There are “shortcut” methods for basic operations that are obvious if you are good with numbers. When I used to teach math, I had students who couldn’t tell me what 9 * 25 is without a calculator. They never developed the intuition that 10 * 25 is dead easy to find in your head, and that 9 * 25 = (10 - 1) * 25 = 250 - 25 = 225.
Calculators give correct answers.
It’s good that students are using AI to cheat, then. We won’t need to detect it, as the answers are wrong.
Offloading onto technology always atrophies the skill it replaces. Calculators offloaded, very specifically, basic arithmetic. However, math =/= arithmetic. I used calculators, and cannot do mental multiplication and division as fast or as well as older generations, but I spent that time learning to apply math to problems, understand number theory, and gain mastery of more complex operations, including writing computer source code to do math-related things. It was always a trade-off.
In Aristotle’s time, people spent their entire education memorizing literature, and the written word off-loaded that skill. This isn’t a new problem, but there needs to be something of value to be educated in that replaces what was off-loaded. I think scholars are much better trained today, now that they don’t have to spend years memorizing passages word for word.
AI replaces thinking. That’s a bomb between the ears for students.
Can’t remember the last time a calculator told me the best way to kill myself
I gotta be honest. Whenever I find out that someone uses any of these LLMs or AI chatbots, hell, even Alexa or Siri, my respect for them instantly plummets. What these things are doing to our minds is akin to how your diet and cooking habits change once you start using DoorDash extensively.
I say this with full understanding that I’m coming off as just some luddite, but I don’t care. A tool is only as useful as it improves your life, and off-loading critical thinking does not improve your life. It actively harms your brain’s higher functions, making you a much easier target for propaganda and conspiratorial thinking. Letting children use this is exponentially worse than letting them use social media, and we all know how devastating the effects of that are… This would be catastrophically worse.
But hey, good thing we dismantled the Department of Education! Wouldn’t want kids to be educated! Just make sure they know how to write a good AI prompt, because that will be so fucking useful.
That sounds like a form of prejudice. I mean, even Siri and Alexa? I don’t use them, for different reasons… but a lot of people use them as voice-activated controls for lights, music, and such. I can’t see how they are different from the Clapper. As for the LLMs… they don’t do any critical thinking, so no one is offloading their critical thinking to them. If anything, using them requires more critical thinking, because everyone who has ever used them knows how often they are flat-out wrong.
Voice-activated light switches that constantly spy on you, harvesting your data for third parties?
Claiming that using ai requires more critical thinking than not is a wild take, bro. Gonna have to disagree with all of what you said hard.
And recording your conversations, even when you’re not asking Alexa to do anything.
If AI has a significant amount of incapabilities and is often wrong (which it definitely is), wouldn’t it take more critical thinking to determine when it’s done something wrong?
If I were to give you a calculator that was programmed to give the wrong answers, would that be a useful tool? Would you be better off for having used it?
AI is literally the “Calcucorn” from Tim Heidecker’s “Tom Goes to the Mayor.”
Does a calculator do a significant amount of statistical analysis and base its output on the most probable result from a massive data set?
No. That would be stupid.
People taking the response from LLMs at face value is a problem, which is the point of the discussion, but disregarding it entirely would be equally dumb. Critical thinking would include knowing when and where to use a specific tool instead of trying to force one to be universal.
Does a calculator do a significant amount of statistical analysis and base its output on the most probable result from a massive data set?
Well, they will be very soon. And they’ll probably require a monthly subscription fee as well. They will leave no technological orifice unviolated.
May God have mercy on us.
Pls no, I just want to 2+2 for when I forgor
But that’s the problem. AI people are pushing it as a universal tool. The huge push we saw to have AI in everything is kind of proof of that.
People taking the response from LLMs at face value is a problem
So we can’t trust it, but in addition to that, we also can’t trust people on TV, or people writing articles for official-sounding websites, or the White House, or pretty much anything anymore. And that’s the real problem. We’ve cultivated an environment where facts and realities are twisted to fit a narrative, and then demanded that we give equal air time and consideration to literal false information peddled by hucksters. These LLMs probably wouldn’t be so bad if we didn’t feed them the same derivative and nonsensical BS we consume on a daily basis. But at this point we’ve introduced, and are now relying on, a flawed tool that bases its knowledge on flawed information, and it just creates a positive feedback loop of bullshit. People are using AI to write BS articles that are then referenced by AI. It won’t ever get better; it will only get worse.
You hit on why I don’t use them. But some people don’t care about that for a variety of reasons. Doesn’t make them less than.
Anyone who tries to use AI without applying critical thinking fails at their task, because AI is just wrong so often. So they either stop using it, or they apply critical thinking to figure out when the results are usable. But we don’t have to agree on that.
I don’t think using an inaccurate tool gives you extra insight into anything. If I asked you to measure the size of objects around your house, and gave you a tape measure that was not correctly metered, would that make you better at measuring things? We learn by asking questions and getting answers. If the answers given are wrong, then you haven’t learned anything. It, in fact, makes you dumber.
People who rely on ai are dumber, because using the tool makes them dumber. QED?
It’s not prejudice to judge someone based on their actions.
Read the word. Prejudice… pre-judice… pre-judgment. Judging someone on limited information that isn’t adequate to form a reasonable opinion. Hearing that someone uses Siri and thinking less of them on that tiny fact alone is prejudice. For all you know, Siri is part of how they make a living. Or any of a thousand reasons someone may use it and still be a good, intelligent person.
It’s not prejudgment if you know their actions… that’s what that means. At that point it’s just judgment.
You can consider it unfair, unjust, narrow minded or any of a number of other terms, but absolutely not prejudice.
But he doesn’t actually know their actions. He knows they “use” Siri, but he knows absolutely nothing about how. If they explained in detail how they use Siri, then it would not be prejudice. But just the phrase “I use Siri” is far from knowing their actions. It’s not like “I use an ice pick,” which has one generally understood use.
It is not prejudice. These are products. Jesus christ.
I spent some years in classrooms as a service provider when Wikipedia was all the rage. Most districts had a “no Wikipedia” policy, and required primary sources.
My kids just graduated high school, and they were told NOT to use LLMs (though some of their teachers would wink). Their current college professors use LLM-detection software.
AI and Wikipedia are not the same, though. Students are better off with Wikipedia as they MIGHT read the references.
Still, those students who WANT to learn will not be held back by AI.
I always saw the rules against Wikipedia to be around citations (and accuracy in the early years), rather than it harming learning. It’s not that different from other tertiary sources like textbooks or encyclopedias. It’s good for learning a topic and the interacting pieces, but you need to then search for primary/secondary sources relevant to the topic you are writing about.
Generative AI, however:
- is a text prediction engine that often generates made-up info, and then students learn things wrong
- does the writing for the students, so they don’t actually have to read or understand anything
I see these as problems too. If you (as a teacher) put an answer machine in the hands of a student, it essentially tells that student that they’re supposed to use it. You can go out of your way to emphasize that they are expected to use it the “right way” (since there aren’t consistent standards on how it should be used, that’s a strange thing to try to sell students on), but we’ve already seen that students (and adults) often choose the quickest route to the goal, which tends to result in them letting the AI do the heavy lifting.
You don’t even need to search, just scroll down to the “references” section and read/cite them instead.
It’s great! I felt the “no Wikipedia” policy was short-sighted (UNLESS one of the teaching goals was doing research in an actual library!).
Encyclopedias in general are not good sources. They’re too surface level. Wikipedia is a bad source because it’s an encyclopedia not because it’s crowd sourced.
Wikipedia is better than an encyclopedia, IMO, because the references are super easy to follow.
Still, those students who WANT to learn will not be held back by AI.
Our society probably won’t survive if only the students who want to learn do so. 😔
I share this concern.
My community college was like that. They were pretty “nazi” about the use of the wiki and would get mad if you even hovered/lurked at a computer where the wiki page was open for any amount of time (computer labs are monitored).
College professors are making tests and homework harsher to make up for cheating students, so students who WANT to learn may actually be held back in the literal sense.
Or they are using AI themselves and accusing students of cheating on their papers when they’re not, which blurs the line. But I wonder how common this is.
Can you provide an example?
Great to get the perspective of someone who was in education.
Still, those students who WANT to learn will not be held back by AI.
I think that’s a valid point, but I’m afraid that the desire to learn might have a harder time winning that battle if what you’re fighting against is actually the norm, and if the way you’re being taught in the classroom looks more like what everyone else is doing. I feel like making it harder to choose to learn the “old hard way” still sounds likely to result in fewer students deciding to make that choice.
My optimism tells me this issue will be short lived. Unless someone can find a very creative way to monetize AI so that it is sustainable, it will likely crash (with local instances continuing to get development).
I think it’s a symptom of a larger problem: the students are likely already below reading and writing level as it is. Now LLMs mean they don’t even need to try learning how to write, read, or do mathematics.
The best AI tools will also cite references, like Wikipedia, so you can click all the way through.
Wikipedia isn’t a good reference, though. It would cite actual sources, papers on the general subject.
No, I mean: “As Wikipedia cites sources, so do these AI tools.”
I.e., these tools cite sources, like Wikipedia does.
I realize now that was unclear.
I believe the early Microsoft one did that well, but the popular ones (Grok, ChatGPT, Gemini) will only do so when asked (in my experience).
Children don’t yet have the maturity, the self control, or the technical knowledge required to actually use AI to learn.
You need to know how to search the web the regular way, and how to phrase questions so the AI explains things rather than just giving you the solution. You also need the self-restraint to only use it to teach you, never to do things for you, and the patience to think about the problem yourself, only then search the regular web, and only then ask the AI to clarify the few things you still don’t get.
Many adults are already letting the chatbots de-skill them, I do not trust children would do any better.
Most people including adults don’t use AI to learn. They just use it to format emails, messages, generate things or do other stuff.
My experience is most adults don’t know how to search the internet for information either lol.
Also, I haven’t been in school for over a decade at this point, but the internet was ubiquitous and they didn’t teach shit about it; the classes that were adjacent (like game design) were run by a coach who barely knew how to work the Macs we were forced to use.
Nor was critical thinking an important part of the teaching process; very rarely was the “why” explained. They’re just trying to get through all the material required to prepare you for the state tests which determine if you move on to the next grade.
Children shouldn’t really be on the internet without supervision. Parental controls are one thing, but in school children should be carefully guided in digital skills and life. It’s quite self-explanatory that children are incapable of using such technology on their own, as they’re still developing independent thinking.
It’s only when children become teenagers that they are independent thinkers, and where self-control and maturity could be on par with adults. In this case, age isn’t the problem, but the systematic methodology in which AI enables a more “streamlined” approach that “gets the job done.”
Of course your statement highlights children, but the fact is when those children become capable teenagers, what then?
I wonder if this might not be exactly the correct approach to teach them, though. When there’s actually someone to tell them “sorry, that AI answer is bullshit,” they can learn how to use it as a resource rather than an answer provider. Adults fail at it, but they also don’t have a teacher (and kids aren’t stupid, just inexperienced).
Don’t trust any doctor that graduated after 2024
This is also the kind of thing that scares me. I think people need to seriously consider that we’re bringing up the next wave of professionals who will be in all these critical roles. These are the stakes we’re gambling with.
When I was in medical school, the one thing that surprised me the most was how often a doctor will see a patient, get their history/work-up, and then step outside into the hallway to google symptoms. It was alarming.
Of course, the doctor is far more aware of ailments, and his googling is more sophisticated than just typing in whatever the patient says (you have to know what info is important in the pt. history, because patients will include/leave out all sorts of info), but still. It was unnerving.
I also saw a study way back when that said that hanging up a decision tree flow chart in Emergency rooms, and having nurses work through all the steps drastically improved patient care; additionally new programs can spot a cancerous mass on a radiograph/CT scan far before the human eye could discern it, and that’s great but… We still need educated and experienced doctors because a lot of stuff looks like other stuff, and sometimes the best way to tell them apart is through weird tricks like “smell the wound, does it smell fruity? then it’s this. Does it smell earthy? then it’s this.”