• irelephant [he/him]@lemmy.dbzer0.com (OP) · 1 day ago

    The snippet that does this is:

    if site.enable_chan_image_filter:
        # Do not allow fascist meme content
        try:
            if '.avif' in uploaded_file.filename:
                import pillow_avif  # NOQA
            image_text = pytesseract.image_to_string(Image.open(BytesIO(uploaded_file.read())).convert('L'))
        except FileNotFoundError:
            image_text = ''
        except UnidentifiedImageError:
            image_text = ''

        if 'Anonymous' in image_text and (
                'No.' in image_text or ' N0' in image_text):  # chan posts usually contain the text 'Anonymous' and ' No.12345'
            self.image_file.errors.append(
                "This image is an invalid file type.")  # deliberately misleading error message
            current_user.reputation -= 1
            db.session.commit()
            return False

    (Link in the post body)
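
    If you want to poke at this locally, the whole heuristic is just Tesseract OCR plus a substring test. A minimal standalone sketch (the function name and the broad error handling are my own, not PieFed's; it needs the tesseract binary plus the pytesseract and Pillow packages):

    from io import BytesIO

    import pytesseract
    from PIL import Image

    def looks_like_chan_screenshot(image_bytes: bytes) -> bool:
        # Greyscale the image and OCR it, same as the snippet above
        try:
            image_text = pytesseract.image_to_string(
                Image.open(BytesIO(image_bytes)).convert('L'), timeout=30)
        except Exception:
            # OCR failed: tesseract not installed, unreadable image, etc.
            return False
        # chan posts usually contain 'Anonymous' and a post number like ' No.12345'
        return 'Anonymous' in image_text and ('No.' in image_text or ' N0' in image_text)

    # e.g. looks_like_chan_screenshot(open('screenshot.png', 'rb').read())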

    • flamingos-cant (hopepunk arc)@feddit.uk · 19 hours ago (edited)

      I was curious to see how they handle this on the fedi side, because they obviously can’t stop you from uploading images to other instances, so I decided to do some digging myself.

      The fedi code for this is here and looks like this:

      # Alert regarding fascist meme content
      if site.enable_chan_image_filter and toxic_community and img_width < 2000:  # images > 2000px tend to be real photos instead of 4chan screenshots.
          if os.environ.get('ALLOW_4CHAN', None) is None:
              try:
                  image_text = pytesseract.image_to_string(
                      Image.open(BytesIO(source_image)).convert('L'), timeout=30)
              except Exception:
                  image_text = ''
              if 'Anonymous' in image_text and (
                      'No.' in image_text or ' N0' in image_text):  # chan posts usually contain the text 'Anonymous' and ' No.12345'
                  post = session.query(Post).filter_by(image_id=file.id).first()
                  targets_data = {'gen': '0',
                                  'post_id': post.id,
                                  'orig_post_title': post.title,
                                  'orig_post_body': post.body
                                  }
                  notification = Notification(title='Review this',
                                              user_id=1,
                                              author_id=post.user_id,
                                              url=post.slug,
                                              notif_type=NOTIF_REPORT,
                                              subtype='post_with_suspicious_image',
                                              targets=targets_data)
                  session.add(notification)
                  session.commit()

      The curious thing here, apart from there being both an environment variable and a site setting for this, is the toxic_community variable. It seems to be a renaming of the low_quality field Piefed applies to communities, which are just communities with either 'memes' or 'shitpost' in their name (sketched below).
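
      If low_quality really is just a name match, the community gate reduces to something like this (my own reconstruction of the heuristic, not Piefed's actual code):

      def is_toxic_community(name: str) -> bool:
          # hypothetical sketch: flag communities with 'memes' or 'shitpost' in the name
          name = name.lower()
          return 'memes' in name or 'shitpost' in name

      # is_toxic_community('worldnews') -> False; is_toxic_community('shitposting') -> True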

      You also don’t get social credits docked for this.