I have different technical questions I want to ask. I went to Stack Overflow and they blocked registration with a VPN. It’s really fucking annoying. I can buy a residential IP to bypass this, but I’d rather just not use these enshittified platforms that are so hostile to VPNs.
Is there any decent alternative to Stack Overflow? I have tried getting AI answers to my technical questions, but they are not good.
And no, I can’t just create a GitHub ID over a VPN to log in with; they block GitHub logins based on IP too.
There soon will be. They are still trying to scrub all of the helpful information out there and, once done, all you need is an AI subscription to get you moving again. /s
Depending on your question, maybe there’s a lemmy group (sublemmy?) for the tech you are working with.
Or you can try asking here.
Myself and others can try to help you.
Generally people just call them communities here
I still refer to them as “subs” for brevity
Comms works for that. I like to leave the Reddit baggage behind
I could roll with that
I would love a strong moderated community of people to answer technical questions.
Like a “no stupid questions” style of community with the opposite of StackOverflow’s style of communication.
SO has a pretty great standard for communication. I know it can be frustrating if you don’t know the rules at first, but if one asks a clear question, states what they have attempted, and provides a minimal reproducible example, with code as text rather than an image, the support is pretty damned good.
The VPN block is probably an attempt to cut down on bot scraping, to salvage what’s left of their husk of relevance.
Could I have a more detailed view of what that would look like? When I want to explore a problem as a layman I go to Wikipedia first, and later on to SO if I still can’t find the answer in relevant literature. Making my question precise is often enough for me to accept an answer.
I am trying to run an OCR program for handwriting to process some large PDFs of old journals that were scanned into PDF. Doing it by hand would take a very long time. I have an AMD GPU and have ROCm installed. I tried to configure pip with ROCm and failed. I was considering pulling a Docker image of PyTorch and then configuring Gradio in it, then trying to get Gradio to run TrOCR. I have never run Gradio. I have “easier” LLM programs like LM Studio and Ollama, but I don’t know if they can run TrOCR. There is AMD documentation on running OCR (https://rocm.docs.amd.com/projects/ai-developer-hub/en/latest/notebooks/inference/ocr_vllm.html) but it’s not clear if it works well with handwriting. TrOCR is trained specifically for handwriting. It’s also on Hugging Face, which I don’t know how to use that well.
Ok excellent.
Let’s go step by step.
You say you tried to configure pip but failed.
What was the error? Any logs? Did you follow the steps from the link you provided?
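While you dig those up, a quick sanity check is worth running to confirm PyTorch can see your GPU at all. A minimal sketch, assuming a ROCm build of PyTorch (ROCm builds expose AMD GPUs through the CUDA API, so `torch.cuda` is the right thing to query):

```python
import torch

# ROCm builds of PyTorch report a version string like "2.x.x+rocmX.Y"
print(torch.__version__)

# On ROCm, AMD GPUs are exposed through the CUDA API
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```

If `is_available()` prints False, any script that picks `"cuda" if torch.cuda.is_available() else "cpu"` will silently fall back to the CPU.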
I don’t remember exactly, but I have ROCm 7.2 installed, and there was something I was trying to install with pip for ROCm and it just wouldn’t work; it was like ROCm 7.2 wasn’t out yet or the link didn’t work. The LLM tried multiple suggestions and they all failed, then I gave up. When I said “inside” pip, I don’t know if that’s accurate. I am very new to pip; I’m decent at Linux but only know a small amount of coding and lack Python familiarity.
So if you follow the instructions from the link again, can you make it work?
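For reference, ROCm builds of PyTorch usually come from PyTorch’s own wheel index rather than plain PyPI. A sketch of the install, with the caveat that the rocmX.Y segment in the URL has to match an index PyTorch actually publishes, which can lag behind your installed ROCm version:

```
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2
```

If pip says it can’t find a matching distribution, that usually means no wheel exists yet for that ROCm/Python combination.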
that’s not for TrOCR, it’s just for OCR, which may not work for handwriting
I did try some of the GPT steps:
```
pip install --upgrade transformers pillow pdf2image
```

getting some errors:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╺━━━━━━━━━ 3/4 [transformers]
WARNING: The scripts transformers and transformers-cli are installed in '/home/user/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
mistral-common 1.5.2 requires pillow<11.0.0,>=10.3.0, but you have pillow 12.1.0 which is incompatible.
moviepy 2.1.2 requires pillow<11.0,>=9.2.0, but you have pillow 12.1.0 which is incompatible.
```

this is what GPT said to run, but it makes no sense because I don’t have TrOCR even downloaded or running at all:

Install packages: `pip install --upgrade transformers pillow pdf2image`
Ensure poppler is installed:
- Ubuntu/Debian: `sudo apt install -y poppler-utils`
- macOS: `brew install poppler`
Execute: `python3 trocr_pdf.py input.pdf output.txt`

That’s the script to save and run:

```python
#!/usr/bin/env python3
import sys

from pdf2image import convert_from_path
from PIL import Image
import torch
from transformers import TrOCRProcessor, VisionEncoderDecoderModel


def main(pdf_path, out_path="output.txt", dpi=300):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model_name = "microsoft/trocr-base-handwritten"
    processor = TrOCRProcessor.from_pretrained(model_name)
    model = VisionEncoderDecoderModel.from_pretrained(model_name).to(device)

    pages = convert_from_path(pdf_path, dpi=dpi)
    results = []
    for i, page in enumerate(pages, 1):
        page = page.convert("RGB")
        # downscale if very large to avoid OOM
        max_dim = 1600
        if max(page.width, page.height) > max_dim:
            scale = max_dim / max(page.width, page.height)
            page = page.resize(
                (int(page.width * scale), int(page.height * scale)),
                Image.Resampling.LANCZOS,
            )
        pixel_values = processor(images=page, return_tensors="pt").pixel_values.to(device)
        generated_ids = model.generate(pixel_values, max_length=512)
        text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
        results.append(f"--- Page {i} ---\n{text.strip()}\n")

    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(results))
    print(f"Saved OCR text to {out_path}")


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python3 trocr_pdf.py input.pdf [output.txt]")
        sys.exit(1)
    pdf_path = sys.argv[1]
    out_path = sys.argv[2] if len(sys.argv) > 2 else "output.txt"
    main(pdf_path, out_path)
```

Ok, so from the error, you have a version of pillow that is incompatible.
You have to downgrade pillow to version 11.
That’s the first step.
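Something like this should work; the lower bound is just there to keep mistral-common (from the same error message) satisfied:

```
pip install "pillow>=10.3,<11"
```

Then rerun the script; pip will pick the newest 10.x release that fits both constraints.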
I don’t trust big tech to not extract data and metadata and save it. Many companies get served with government requests to save data and keep it secret. Even if handwritingocr.com doesn’t have such an agreement, it could run on AWS and that has an agreement. I would much rather do this locally. Some of the writings are confidential. Handwritingocr.com says data is encrypted in transit and at rest, but it’s not open source and even if it were I can’t verify the server code.
also Tesseract is CPU only, right? It will be so slow.
Terminal error after running GPT code:
```
$ python3 trocr_pdf.py small.pdf output.txt
Traceback (most recent call last):
  File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 479, in cached_files
    hf_hub_download(
  File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1007, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1124, in _hf_hub_download_to_cache_dir
    os.makedirs(os.path.dirname(blob_path), exist_ok=True)
  File "/usr/lib/python3.10/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/usr/lib/python3.10/os.py", line 225, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/home/user/.cache/huggingface/hub/models--microsoft--trocr-base-handwritten'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/Documents/trocr_pdf.py", line 39, in <module>
    main(pdf_path, out_path)
  File "/home/user/Documents/trocr_pdf.py", line 11, in main
    processor = TrOCRProcessor.from_pretrained(model_name)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/processing_utils.py", line 1394, in from_pretrained
    args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/processing_utils.py", line 1453, in _get_arguments_from_pretrained
    args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py", line 489, in from_pretrained
    raise initial_exception
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py", line 476, in from_pretrained
    config_dict, _ = ImageProcessingMixin.get_image_processor_dict(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/image_processing_base.py", line 333, in get_image_processor_dict
    resolved_image_processor_files = [
  File "/home/user/.local/lib/python3.10/site-packages/transformers/image_processing_base.py", line 337, in <listcomp>
    resolved_file := cached_file(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 322, in cached_file
    file = cached_files(path_or_repo_id=path_or_repo_id, filenames=[filename], **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 524, in cached_files
    raise OSError(
OSError: PermissionError at /home/user/.cache/huggingface/hub/models--microsoft--trocr-base-handwritten when downloading microsoft/trocr-base-handwritten. Check cache directory permissions. Common causes: 1) another user is downloading the same model (please wait); 2) a previous download was canceled and the lock file needs manual removal.
```
This error looks like it is saying a previous attempt aborted, and it needs you to clean up some file that was only partly downloaded.
Edit: The “please wait” makes me think I would try again in a couple hours.
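If waiting doesn’t fix it, it’s worth checking whether the cache directory got created as root at some point (e.g. from an earlier `sudo` run). A sketch of the cleanup, assuming the default cache path from your traceback:

```
# who owns the cache? if it's root, that's the problem
ls -ld ~/.cache/huggingface ~/.cache/huggingface/hub

# reclaim ownership of the whole cache tree
sudo chown -R "$USER":"$USER" ~/.cache/huggingface

# clear any stale lock files left behind by the aborted download
find ~/.cache/huggingface/hub -name "*.lock" -delete
```

After that, rerunning the script should let the model download cleanly.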
Piefed, iirc, has question-and-answer support like Stack Overflow, where you can choose an accepted answer. So you could replicate Stack Overflow with a Piefed community, and something like that might already exist. I’m not aware of it if it does, though.
Man, I need to get to switching to Piefed soon, it seems nice. Any cons vs Lemmy?
which community? i did look and didn’t see one.
You could try codidact. I’m not sure what kind of gatekeeping they have around account creation, but the project basically started up with “Stackoverflow sucks now, let’s make our own!”
Wow, I had never heard of that before and thought it was a typo.
https://codidact.com/ also links to https://topanswers.xyz/ to check for communities they might not have.
Hard to tell how active they are but interesting to check out for sure.
I don’t have an answer for you, I’m sorry.
I think I asked a question on S/O once, like 10 years ago, and I got the usual “this question has been asked before” bullshit, so I just never tried again.
These days I mostly just search for technical answers; yes, often the answers are on S/O, but often they’re elsewhere.
I also use a self-hosted instance of openwebui to chat with LLMs hosted by Hugging Face. You’re correct that the technical answers are often insufficient, but often it’s helpful. For example, today I was trying to measure the contrast between 2 colors. The chatbot gave me theory, an explanation, and a working spreadsheet formula in a few seconds. It didn’t complain about how I’d formatted my question or that someone else may have asked the same question.
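For the curious, the usual definition here is the WCAG 2.x contrast ratio; a minimal sketch in Python (my reconstruction, not the exact formula the bot produced):

```python
# WCAG 2.x contrast ratio between two 8-bit sRGB colors

def _linear(c8: int) -> float:
    """Convert an 8-bit sRGB channel to linear light."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2) -> float:
    """Ratio from 1:1 (identical colors) up to 21:1 (black on white)."""
    hi, lo = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

print(contrast_ratio((255, 255, 255), (0, 0, 0)))  # 21.0
```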
i got that a lot when i first started using linux-based distros. “why aren’t you typing man?” or “just type sudo rmdir /”
Why not turn off the VPN to make the account then use the VPN like normal afterwards?
hell fucking no. I do not need some huge corporation taking a) my browser fingerprint and b) my real IP address and ping time, and then c) selling them to every data broker that exists. Fuck no. Buying a residential IP costs less than a dollar. Big data does not need to associate my online activities with who I am IRL.
It’s also a terrible idea because a lot of my online protection comes from no one knowing my initial ping time. I would be handing big data information like “here’s this user with browser hash ########### and they have a ping time of X ms and origination IP of X.X.X.X”. After that, even if I protected my origination IP later, they could guess who I am based on hops, ping time, and browser fingerprint, because even privacy browsers have a browser fingerprint. Fuuuuuuuuuuuuuuck no.
What VPN provider are you using? Because I guarantee you are leaking far more data to them than to anywhere else.
How do you know your VPN is not just doing the same things? Anything that you think is secure is only so temporarily, because it becomes a target for state and private security firms. E.g. a private security company from Sweden could infiltrate a VPN provider, then that security firm sells the data to states and other private firms. These firms are owned by the same conglomerates that own everything else, and they consolidate the data. And that is giving the VPN provider the benefit of the doubt, that they aren’t just lying to you.
Stack Overflow thrives on participation. If you do not want to participate in any way then SO is probably not for you.
I am happy to participate. I do not want my IP and ping data sold to data brokers to serve targeted ads, track me, and go to the police surveillance state. I’m sure you are a good citizen who always keeps location on and feels like life would be easier if everyone just complied.
I like your disdain for mass surveillance, but don’t call me a tech bro
Are these questions that an LLM can’t answer? For better or for worse, SO has basically died because LLMs can answer a great majority of SO-type questions. So I guess my question to you is: what is the nature of your queries? Something some bullshit LLM can’t answer? Well, by golly, then let’s see if we can’t answer them here!
GPT 5 gave me a lot of code that returned errors. I really need help with the specific terminal code or knowing if I am even approaching the problem right.
No, sorry. Never. I believe in mind training and development, human discussions, attribution, contribution, and discoveries for human work, imagination, achievements, and scientific miracles.
You? You do you with your LLM.
It’s a shame it’s not around anymore. You sound like you would have fit in nicely.






