An internal spreadsheet obtained by Business Insider shows which websites Surge AI gig workers were told to mine — and which to avoid — while fine-tuning Anthropic’s AI to make it sound more “helpful, honest, and harmless.”
The spreadsheet allows sources like Bloomberg, Harvard University, and the New England Journal of Medicine while blacklisting others like The New York Times and Reddit.
Anthropic said it wasn’t aware of the spreadsheet and that it was created by a third-party vendor, the data-labeling startup Surge AI, which declined to comment on this point.
“This document was created by a third-party vendor without our involvement,” an Anthropic spokesperson said. “We were unaware of its existence until today and cannot validate the contents of the specific document since we had no role in its creation.”
Frontier AI companies mine the internet for content and often work with startups like Surge, which employ thousands of human contractors, to refine their AI models.
In this case, project documents show Surge worked to make Anthropic’s AI sound more human, avoid “offensive” statements, and cite documents more accurately.
Many of the whitelisted sources copyright or otherwise restrict their content. The Mayo Clinic, Cornell University, and Morningstar, whose main websites were all listed as “sites you can use,” told BI they have no agreements with Anthropic to use their data for training AI models.
Surge left a trove of materials detailing its work for Anthropic, including the spreadsheet, accessible to anyone with the link on Google Drive. Surge locked down the documents shortly after BI reached out for comment.
“We take data security seriously, and documents are restricted by project and access level where possible,” a Surge spokesperson said. “We are looking closely into the matter to ensure all materials are protected.”
It’s the latest incident in which a data-labeling startup used public Google Docs to pass around sensitive AI training instructions. Surge’s competitor, Scale AI, also exposed internal data in this manner, locking the documents down after BI revealed the issue.
A Google Cloud spokesperson told BI that its default setting prevents a company’s files from being shared outside the organization; changing this setting is a “choice that a customer explicitly makes,” the spokesperson said.
Surge hit $1 billion in revenue last year and is raising funds at a $15 billion valuation, Reuters reported. Anthropic was most recently valued at $61.5 billion, and its Claude chatbot is widely considered a leading competitor to ChatGPT.
What’s allowed — and what’s not
Google Sheet data showed the spreadsheet was created in November 2024, and other documents left public by Surge reference it in updates as recent as May 2025.
The list functions as a “guide” for what online sources Surge’s gig workers can and can’t use on the Anthropic project.
The list includes over 120 permitted websites from a wide range of fields, including academia, healthcare, law, and finance. Among them are 10 US universities, including Harvard, Yale, Northwestern, and the University of Chicago.
It also lists popular business news sources, such as Bloomberg, PitchBook, Crunchbase, Seeking Alpha, Investing.com, and PR Newswire.
Medical information sources, such as the New England Journal of Medicine, and government sources, such as a list of UN treaties and the US National Archives, are also on the whitelist. So are university publishers like Cambridge University Press.
Here’s the full list of who’s allowed, which says it is “not exhaustive.” And here’s the list of who is banned: more than 50 “common sources” that are “now disallowed,” as the spreadsheet puts it.
The blacklist mostly consists of media outlets, such as The New York Times and The Wall Street Journal. It also includes other types of sources, like Reddit, Stanford University, the academic publisher Wiley, and the Harvard Business Review.
The spreadsheet doesn’t explain why some sources are permitted and others are not.
The blacklist could reflect websites that made direct demands to AI companies to stop using their content, said Edward Lee, a law professor at Santa Clara University. That can happen through written requests or through an automated method like robots.txt.
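The automated route works through the Robots Exclusion Protocol: a site publishes a robots.txt file naming crawler user agents and the paths they may not fetch. As a rough illustration (the exact rules publishers use vary), a site blocking Anthropic’s crawler, which identifies itself as ClaudeBot, could serve something like:

    User-agent: ClaudeBot
    Disallow: /

    User-agent: *
    Allow: /

Compliance is voluntary; crawlers that honor the protocol skip the disallowed paths, which is one reason some publishers also send written demands or sue.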
Some sources in the blacklist have taken legal stances against AI companies using their content. Reddit, for example, sued Anthropic this year, saying the AI company accessed its site without permission. Anthropic has denied these claims. The New York Times sued OpenAI, and The Wall Street Journal’s parent, Dow Jones, sued Perplexity, for similar reasons.
“The Times has objected to Anthropic’s unlicensed use of Times content for AI purposes and has taken steps to block their access as part of our ongoing IP protection and enforcement efforts,” the Times spokesperson Charlie Stadtlander told BI.
“As the law and our terms of service make clear, scraping or using the Times’s content is prohibited without our prior written permission, such as a licensing agreement.”
Surge workers used the list for RLHF
Surge contractors were told to use the list for a later, but crucial, stage of AI model training in which humans rate an existing chatbot’s responses to improve them. That process is called “reinforcement learning from human feedback,” or RLHF.
The Surge contractors working for Anthropic did tasks like copying and pasting text from the internet, asking the AI to summarize it, and choosing the best summary. In another case, workers were asked to “find at least 5-10 PDFs” from the web and quiz Anthropic’s AI about the documents’ content to improve its citation skills.
That doesn’t involve feeding web data directly into the model for it to regurgitate later, the better-known process called pre-training.
Courts haven’t addressed whether there’s a clear distinction between the two processes when it comes to copyright law. There’s a good chance both would be viewed as crucial to building a state-of-the-art AI model, Lee, the law professor, said.
It is “probably not going to make a material difference in terms of fair use,” Lee said.
Have a tip? Contact this reporter via email at crollet@insider.com or Signal and WhatsApp at 628-282-2811. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.