“It is a sign that people aren’t happy within the U.S. government, clearly. The shooting [of Good] was the last straw for many people,” Dominick Skinner, ICE list founder, told The Beast.
The new leak includes around “1,800 on-the-ground agents and 150 supervisors. Early analysis by the organization suggests that around 80 per cent of the staff identified remain employed by DHS,” according to The Beast.


Save the list, make local copies. Every single one of those fuckers needs to be held accountable
I have been trying to access the site for over an hour. I don’t know if it’s authentic traffic or a DDoS, but only the front page loads. The wiki subdomain (where the list is) returns a 503 server error every time.
Worked on my VPN to Mexico City. Slow but loading. Trying to export all the data.
If you get any of it, would you mind sharing what you can?
We might be able to patch something together given multiple efforts. No telling how long the attack will last, but we can hold steady with slow progress.
it’s like half up. here’s some of it. the Special:Export is kinda working, but i’m very new at ripping wiki sites
https://archive.is/ipv5h
Any chance you know Python, or maybe you can have ChatGPT write a script that uses bs4 to parse through each of the name hyperlinks in the html? A simple loop should be able to visit each page, cache the results on disk, and finish up eventually.
The link doesn’t work where I am, still.
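Roughly the kind of loop I have in mind, written blind since I can’t load the site myself. The base URL, the index page, and the /wiki/ link filter are all placeholder guesses you’d have to swap for the real ones:

```python
# Untested sketch: assumes the list of agents links out from a single index
# page and that internal links are ordinary /wiki/ hrefs. BASE and INDEX are
# placeholders for the real wiki subdomain.
import os
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE = "https://wiki.example.org"           # placeholder
INDEX = BASE + "/wiki/Main_Page"            # placeholder index page
OUTDIR = "pages"

os.makedirs(OUTDIR, exist_ok=True)

# Grab the index page and pull out every internal link.
soup = BeautifulSoup(requests.get(INDEX, timeout=30).text, "html.parser")
links = {urljoin(BASE, a["href"]) for a in soup.find_all("a", href=True)
         if a["href"].startswith("/wiki/")}

for url in sorted(links):
    # Cache each page to disk so nothing gets fetched twice.
    name = url.rsplit("/", 1)[-1].replace(":", "_") + ".html"
    path = os.path.join(OUTDIR, name)
    if os.path.exists(path):
        continue
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
    except requests.RequestException as exc:
        print(f"skipped {url}: {exc}")
        continue
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(resp.text)
    print(f"saved {path}")
    time.sleep(1)  # the server is struggling already, go slow
```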
it’s very slow for me. i’m looking into python and all the other stuff. if you have one specifically that will scrape the whole site, i’m down to run it. i can use any OS except mac. i can’t copy/paste each agent page, i’m looking to scrape the whole site at once
edit!
i think i have it scraping it all to html files. it’s taking time, just trying to make sure it gets the agents and incidents at least.
I see the edit, sweet!
I wouldn’t be able to write the script myself because I can’t load the site at all. So I can’t analyze the html and determine how to loop through it and extract all the additional links for scraping. Unless you want to send me the html. Then I can work with that, but without testing on the live site.
If you have something working though, great! Let’s just start there.
Check by opening one of the html files with your browser, or VSCode or something, and see if it has the data you were expecting.
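If you’d rather check them all at once instead of clicking through, something like this would print each saved file’s page title; it assumes the scraper dumps *.html into a folder called pages, so adjust the glob to wherever yours writes:

```python
# Quick sanity check on whatever has been scraped so far: print each saved
# file's <title> so you can eyeball that agent and incident pages made it in.
# Assumes the HTML files live in ./pages (adjust the glob if not).
import glob

from bs4 import BeautifulSoup

for path in sorted(glob.glob("pages/*.html")):
    with open(path, encoding="utf-8") as fh:
        soup = BeautifulSoup(fh.read(), "html.parser")
    title = soup.title.string.strip() if soup.title and soup.title.string else "(no title)"
    print(f"{path}: {title}")
```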
the files look correct, just html versions of the pages, just not sure if it’s going to crawl and get every page. i don’t even care if it’s just a folder of html files at this point, as long as all the agents are retrieved. will keep testing.
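In case yours only grabs the pages it already knows about, here’s a blind sketch of a crawl that follows links off every page it saves instead of just the index, so it should eventually reach everything reachable. The start URL and the /wiki/ filter are guesses, and I obviously can’t test it against the live site:

```python
# Untested breadth-first crawl: start from one page, follow every internal
# /wiki/ link not seen before, and dump each page's HTML to disk. BASE and
# START are placeholders for the real wiki subdomain.
import os
from collections import deque
from urllib.parse import urljoin, urldefrag

import requests
from bs4 import BeautifulSoup

BASE = "https://wiki.example.org"         # placeholder
START = BASE + "/wiki/Main_Page"          # placeholder start page
OUTDIR = "site_dump"
os.makedirs(OUTDIR, exist_ok=True)

seen = set()
queue = deque([START])

while queue:
    url = queue.popleft()
    if url in seen:
        continue
    seen.add(url)
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
    except requests.RequestException as exc:
        print(f"failed {url}: {exc}")
        continue

    # Save the raw HTML under a filesystem-safe name.
    name = url.rsplit("/", 1)[-1].replace(":", "_") or "index"
    with open(os.path.join(OUTDIR, name + ".html"), "w", encoding="utf-8") as fh:
        fh.write(resp.text)

    # Queue every internal article link found on this page.
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.find_all("a", href=True):
        link = urldefrag(urljoin(url, a["href"]))[0]
        if link.startswith(BASE + "/wiki/") and link not in seen:
            queue.append(link)

print(f"done, {len(seen)} pages visited")
```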
I’ve been having issues for a while too.
It had a burst and actually returned some data eventually, but I think it’s just a victim of its own success. People really want that list
I can get a page to load sometimes, but it’s not going great. We need to coordinate a scrape of the site and torrent it out before it’s shut down.
Dude’s doing good work but they will absolutely kill him. Grab that data.
Hopefully somebody who has it gets a torrent up and seeding
Unverified archive link:
https://archive.is/YEqkE
Found it on a reddit post
And the neat part is that, since the wiki subdomain is running MediaWiki, you can export the pages on the website itself using “Special:Export”. And you can also access all pages on the subdomain.
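For anyone going the Special:Export route, here’s a rough sketch of automating it, assuming the wiki’s stock MediaWiki API is enabled; the base URL and the api.php path are guesses (some installs put it at /w/api.php):

```python
# Hedged sketch of a full export through stock MediaWiki machinery:
# list every page via the API, then pull each one's XML from Special:Export.
# BASE and the api.php path are placeholders; adjust for the real wiki.
import os
import time
from urllib.parse import quote

import requests

BASE = "https://wiki.example.org"     # placeholder for the wiki subdomain
API = BASE + "/api.php"               # sometimes /w/api.php instead

# 1. Enumerate every page title with list=allpages (follows continuation).
titles = []
params = {"action": "query", "list": "allpages", "aplimit": "max",
          "format": "json", "continue": ""}
while True:
    data = requests.get(API, params=params, timeout=30).json()
    titles += [p["title"] for p in data["query"]["allpages"]]
    if "continue" not in data:
        break
    params.update(data["continue"])
print(f"{len(titles)} pages listed")

# 2. Fetch each page's export XML (current revision wikitext).
os.makedirs("export", exist_ok=True)
for title in titles:
    url = BASE + "/wiki/Special:Export/" + quote(title.replace(" ", "_"))
    fname = title.replace(" ", "_").replace("/", "_").replace(":", "_") + ".xml"
    with open(os.path.join("export", fname), "w", encoding="utf-8") as fh:
        fh.write(requests.get(url, timeout=30).text)
    time.sleep(1)   # keep the load light, the site is barely up as it is
```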
Error 403, access forbidden. Feds really must not want anyone seeing this.