

Last time I was looking for a job, I just looked up companies in my field and sent them an email. I sent two emails and got one interview. Didn’t get the position, though, so I just employed myself instead.
A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.
Why do you need to be such a mean jerk about it? I’m familiar with the saying - I just misunderstood you at first, and I already acknowledged my mistake. What more do you want?
I believe that, in reality, wolves domesticated themselves. They started hanging around humans because it was a mutually beneficial arrangement.
Dogs and wolves are the same species - just a different subspecies. A Chihuahua could breed with a wolf.
Fair enough. “This is gonna twist so many incel knives” just made it sound like that’s what you were referring to.
Incel violence isn’t really the epidemic you’re making it out to be. There have even been papers written about the lack of it.
I’m not 100% sure, but I don’t see why not if that’s the name you gave them when registering as a customer. They’re all listed on my ID as well.
I’ve only broken up with my ex-partners.
Does this help?
You’re not hoping anything; you’re just trying to look clever by pretending to be worried about phrasing no one actually misunderstood.
Concern trolling / weaponized empathy - Pretending to care as a disguise for judgment or hostility.
I have 3 first names and I’m legally allowed to use any of them.
Ironically, I had to use AI to figure out what this is supposed to mean.
Here’s the intended meaning:
The author is critiquing the misapplication of AI—specifically, the way people adopt a flashy new tool (AI, in this case) and start using it for everything, even when it’s not the right tool for the job.
Hammers vs. screwdrivers: A hammer is great for nails, but terrible for screws. If people start hammering screws just because hammers are faster and cheaper, they’re clearly missing the point of why screws exist and what screwdrivers are for.
Applied to AI: People are now using large language models (like ChatGPT) or generative AI for tasks they were never meant to do—data analysis, logical reasoning, legal interpretation, even mission-critical decision-making—just because it’s easy, fast, and feels impressive.
So the post is a cautionary parable: just because a tool is powerful or trendy (like generative AI) doesn’t mean it’s suited to every task. And blindly replacing well-understood, purpose-built tools (like rule-based systems, structured code, or human experts) with something flashy but poorly matched is a mistake.
It’s not anti-AI—it’s anti-overuse or misuse of AI. And the tone suggests the writer thinks that’s already happening.
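To make the hammer-and-screwdriver point concrete, here’s a minimal sketch (my own illustration, not from the original post) contrasting a purpose-built, deterministic check with the LLM route. The invoice-ID format and the commented-out `llm.complete` call are hypothetical:

```python
import re

def is_valid_invoice_id(s: str) -> bool:
    """Rule-based check: cheap, instant, deterministic, auditable."""
    # Hypothetical format: "INV-" followed by exactly six digits.
    return re.fullmatch(r"INV-\d{6}", s) is not None

print(is_valid_invoice_id("INV-123456"))  # True
print(is_valid_invoice_id("INV-12AB56"))  # False

# The "hammer for screws" alternative: asking a language model the same
# question (hypothetical client shown only for contrast). It's slower,
# costs money per call, and may confidently return the wrong answer.
# answer = llm.complete(f"Is '{s}' a valid invoice ID? Answer yes or no.")
```

The point isn’t that an LLM couldn’t answer this - it’s that a screwdriver already exists for this screw.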
I don’t feel like their wealth changes the equation that much. I don’t expect them to hand me money just because I’m their biological child - and since I’m doing fine on my own anyway, I wouldn’t really need them to.
A self-aware or conscious AI system is most likely also generally intelligent - but general intelligence itself doesn’t imply consciousness. It’s likely that consciousness would come along with it, but it doesn’t have to. An unconscious AGI is a perfectly coherent concept.
What about the graph do you not agree with?
No, I didn’t.
No, it generates natural-sounding language. That’s all it does.
The models definitely have some level of consciousness.
Depends on what one means by consciousness. The way I hear the term used most often - and how I use it myself - is to describe the fact of subjective experience. That it feels like something to be.
While I can’t definitively argue that none of our current AI systems are conscious to any degree, I’d still say that’s the case with extremely high probability. There’s just no reason to assume it feels like anything to be one of these systems, based on what we know about how they function under the hood.
LLM “hallucinations” are only errors from a user expectations perspective. The actual purpose of these models is to generate natural-sounding language, not to provide factual answers. We often forget that - they were never designed as knowledge engines or reasoning tools.
The fact that they often get things right isn’t because they “know” anything - it’s a side effect of being trained on data that contains a lot of correct information. So when they get things wrong, it’s not a bug in the traditional sense - it’s just the model doing what it was designed to do: predict likely word sequences, not truth. Calling that a “hallucination” isn’t marketing spin - it’s a useful way to describe confident output that isn’t grounded in reality.
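Here’s a minimal sketch of that “predict likely word sequences, not truth” point, assuming the Hugging Face transformers and torch packages and the small GPT-2 checkpoint (the prompt is just an example of mine):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small pretrained language model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits for the single token that would come next after the prompt.
    next_token_logits = model(**inputs).logits[0, -1]

# The model ranks continuations purely by how likely they are given its
# training data - it has no separate notion of "true". If the data pairs
# "Australia" with "Sydney" often enough, a wrong answer can outrank the
# right one. That's the whole "hallucination" in miniature.
probs = next_token_logits.softmax(dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r:>12}  p={p.item():.3f}")
```

Nothing in that loop checks facts; it only ranks token probabilities, which is exactly the behavior being described.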
Plumber by training, but these days I work as a self-employed general contractor / handyman.
My thinking is that companies looking for employees get flooded with nearly identical applications, so it’s hard to stand out. I’d rather just email, call, or even show up in person and ask for work - whether they’re actively hiring or not. It shows initiative.
Honestly, I didn’t even want the position - I only applied to keep my unemployment payments going. I spent maybe five minutes writing the application and still got the interview.