For most use cases, web search engines are fine, but I'm wondering if there are alternative ways of finding information. There's also the enshittification of Google, and tbh most (free) search engines just give you Google's results anyway.
Obviously, the most straightforward route is just asking other people, whether in person or online, in general forums or specialised communities.
Libraries are a good source, but for those who don't have access to physical libraries, there are free online public libraries (I will post links to the ones I found below).
Books in general are useful too; a lot of them have references to outside materials.
So, I've been experimenting with an AI chatbot (Le Chat), partially as a life coach of sorts and partially as a fine-tuned web search engine. To cut to the chase, it's bad. When it's not just listing Google's top results, it lists tools that are long gone or just makes shit up. I was hoping for a fine-tuned search engine, because Google only works if what you want is in the top 10 websites; otherwise you're on your own.
So yeah, those are all the routes I can think of for finding information, and probably all there are, but maybe I missed one.


No, it can't do that. It's an LLM; it can only generate the next word in a sequence.
Also, this doesn't solve OP's problem at all. If it's in the top 10 results on a major search engine, then anyone can find it in minimal time.
Fucking AI bros being like "I remade looking at 10 Google links, but this time it burns down a forest and tells me what a genius I am for asking."
Explain to me which forest burns down when I run an AI on my local computer that uses the same power (or less) as running a video game?
AI / LLMs aren’t evil or unethical or immoral - commercializing them into enormous behemoths that eat resources 24x7 is.
Even local models are trained on stolen art and content. That’s the immoral part.
No one seems to get this part.
There are many models that use open-source training sets and weights. You can choose those.
AI models aren’t trained on anything “stolen”. When you steal something, the original owner doesn’t have it anymore. That’s not being pedantic, it’s the truth.
Also, if you actually understand how AI training works, you wouldn’t even use this sort of analogy in the first place. It’s so wrong it’s like describing a Flintstones car and saying that’s how automobiles work.
Let's say you wrote a book and I used it as part of my AI model (LLM) training set. As my code processes your novel, token-by-token (not word-by-word!), it'll nudge floating-point values up or down by something like 0.001 each. That's it. That's all that's happening.
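To make that concrete, here's a deliberately toy sketch in Python. None of this is how a real transformer is trained (real training backpropagates through billions of weights), but it shows what a "training step" actually is: nudging floats by a tiny amount.

```python
# Toy sketch (NOT a real transformer): one "training step" is just
# nudging floating-point weights a tiny bit toward the observed next token.
import random

vocab = ["the", "cat", "sat", "on", "mat"]
# One weight per (token, next_token) pair, randomly initialized.
weights = {(a, b): random.uniform(-0.01, 0.01) for a in vocab for b in vocab}

def train_step(token, next_token, lr=0.001):
    """Nudge weights: the observed pair goes up, the alternatives go down."""
    for b in vocab:
        target = 1.0 if b == next_token else 0.0
        # The "gradient" here is just (target - prediction); real training
        # is far more involved, but the update is still a tiny float nudge.
        weights[(token, b)] += lr * (target - weights[(token, b)])

# Processing a text token-by-token = millions of these tiny nudges.
text = ["the", "cat", "sat", "on", "the", "mat"]
for a, b in zip(text, text[1:]):
    train_step(a, b)
```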
To a layman, that makes no sense whatsoever, but it's the truth. How can a huge list of floating-point values be used to generate semi-intelligent text? That's the actually really fucking complicated part.
Before you can even use a model, you need to tokenize the prompt and then run an inference step, which gets processed a zillion ways before that .safetensors file (which holds the model's weights) gets used at all.
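As a minimal sketch of that pipeline, using the Hugging Face transformers library (the model name "gpt2" is just a stand-in; any causal LM works):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; its weights ship as a .safetensors file
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Step 1: tokenize. Text becomes integer token IDs (tokens, not words!).
inputs = tokenizer("The quick brown fox", return_tensors="pt")
print(inputs["input_ids"])  # a tensor of integer IDs, one per token

# Step 2: inference. One forward pass through all those floating-point
# weights yields logits: a score for every vocabulary token at each position.
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```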
When an AI model is outputting text, it's using a random number generator in conjunction with a token-prediction algorithm that's based on the floating-point values inside the model. It doesn't even "copy" anything. It's literally built on the back of an RNG!
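Here's roughly what that RNG step looks like, as a self-contained Python sketch (the logits are made up; in reality they come out of the model's forward pass):

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick the next token ID by weighted random choice over the model's scores."""
    # Softmax: turn raw scores into probabilities.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # The RNG part: even the most likely token is not guaranteed to be picked.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits for a 5-token vocabulary:
print(sample_next_token([2.0, 1.0, 0.5, -1.0, -3.0]))
```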
If an LLM successfully copies something via its model, that's just random chance. The more copies of something that went into its training, the higher the chance of it happening (and that's considered a bug, not a feature).
There’s also a problem that can occur on the opposite end: When a single set of tokens gets associated with just one tiny bit of the training set. That’s how you can get it to output the same thing relatively consistently when given the same prompt (associated with that set of tokens). This is also considered a bug and AI researchers are always trying to find ways to prevent this sort of thing from happening.
As much as I understand your hate for LLMs, this is wrong.
Your knowledge is out of date, friend. These days you can configure an LLM to run tools like curl, nmap, or ping, or even write and then execute shell scripts and Python (though in a sandbox, for security). Some tools that help you manage models come preconfigured to make it easy for the model to search the web on your behalf. I wouldn't be surprised if a whole ecosystem of AI tools just for searching the web emerges soon.
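A hypothetical sketch of what such a tool-calling loop looks like under the hood; every runtime (Ollama, OpenAI-compatible servers, etc.) has its own wire format, but the shape is the same: the model asks for a tool, you run it, you feed the output back:

```python
import json
import subprocess

ALLOWED_TOOLS = {"ping", "curl"}  # whitelist; never let the model run arbitrary commands

def run_tool(name, args):
    if name not in ALLOWED_TOOLS:
        return f"error: tool {name!r} not allowed"
    # In practice you'd run this inside a sandbox or VM, with tight timeouts.
    result = subprocess.run([name, *args], capture_output=True, text=True, timeout=10)
    return result.stdout or result.stderr

def chat_with_tools(prompt, llm):
    """llm() is a placeholder for whatever local-model call you actually use."""
    reply = llm(prompt)
    # Assume the model emits JSON like {"tool": "ping", "args": ["-c", "1", "example.com"]}
    if reply.strip().startswith("{"):
        call = json.loads(reply)
        output = run_tool(call["tool"], call.get("args", []))
        # Hand the tool output back so the model can finish its answer.
        return llm(prompt + "\n[tool output]\n" + output)
    return reply
```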
What Mozilla is implementing in Firefox will likely start with cloud-based services, but eventually it'll just use local models running on your PC. Then all those specialized AI search tools will become less popular as Firefox's built-in features end up being "good enough".
It does way more than that.
I have it write scripts in 30 seconds that would take me 2 days to write and verify.
I can quickly parse through what it writes (takes me about a minute) to verify it hasn’t done anything wonky, then test it in a VM I use for testing my own scripts.
It does this because the question I ask is very clear and explicit: exact script language/version, exact input, exact output, how the script should flow, what the commenting should look like. It takes me about a minute to write a good question like this; something like the example below.
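For example, a question in that style might look like this (the specifics are made up):

```
Write a Bash 5 script, using only coreutils, that:
- Input: a directory path as the first argument
- Output: CSV on stdout with columns filename,size_bytes,sha256
- Flow: validate the argument, list the directory non-recursively,
  skip symlinks, print the CSV header first, then one row per file
- Commenting: one short comment line above each logical block
```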