The Internet Archive discovers and captures web pages through many different web crawls. At any given time several distinct crawls are running, some for months at a stretch and some repeating every day or at longer intervals. View the web archive through the Wayback Machine.
Topic: webwidecrawl
Wide crawls of the Internet conducted by Internet Archive. Please visit the Wayback Machine to explore archived web sites. Since September 10th, 2010, the Internet Archive has been running Worldwide Web Crawls of the global web, capturing web elements, pages, sites and parts of sites. Each Worldwide Web Crawl was initiated from one or more lists of URLs that are known as "Seed Lists". Descriptions of the Seed Lists associated with each crawl may be provided as part of the metadata for...
Formed in 2009, the Archive Team (not to be confused with the archive.org Archive-It Team) is a rogue archivist collective dedicated to saving copies of rapidly dying or deleted websites for the sake of history and digital heritage. The group is 100% composed of volunteers and interested parties, and has expanded into a large number of related projects for saving online and digital history. History is littered with hundreds of conflicts over the future of a community, group, location or...
Archive-It is a subscription web archiving service of the Internet Archive that helps organizations harvest, build, and preserve collections of digital content. Partners create domain-specific collections of web captures that can be searched on Archive-It. Content is hosted and stored at the Internet Archive data centers. Archive-It works with more than 400 partner organizations in 48 U.S. states and 16 countries worldwide, including: College and University Libraries, State Archives, Libraries,...
Topic: Colleges, Universities, Libraries, Archives, NGOs, Museums
Archive-It, a service of the Internet Archive, is the leading web archiving service for collecting and accessing cultural heritage on the web. It is used by libraries, archives, governments, non-profits, and other organizations to build collections of web materials.
Topic: TK
Starting in 1996, Alexa Internet has been donating their crawl data to the Internet Archive. Flowing in every day, these data are added to the Wayback Machine after an embargo period.
Topics: web crawl, Alexa
Content crawled via the Wayback Machine Live Proxy, mostly by the Save Page Now feature on web.archive.org. The liveweb proxy is a component of the Internet Archive's Wayback Machine project. It captures the content of a web page in real time, archives it into an ARC or WARC file, and returns the ARC/WARC record to the Wayback Machine for processing. The recorded ARC/WARC file becomes part of the Wayback Machine in due course.
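As an illustration of the capture step described above, the short Python sketch below fetches a single page and writes it as a WARC response record using the open-source warcio library. It is a minimal example of WARC writing in general, not the Archive's own liveweb proxy code; the URL and output filename are placeholders.

import requests
from warcio.warcwriter import WARCWriter
from warcio.statusandheaders import StatusAndHeaders

def save_page(url, warc_path):
    # Fetch without transfer decoding so the payload matches the recorded headers
    resp = requests.get(url, headers={'Accept-Encoding': 'identity'}, stream=True)
    with open(warc_path, 'wb') as output:
        writer = WARCWriter(output, gzip=True)
        status_line = '{} {}'.format(resp.status_code, resp.reason)
        http_headers = StatusAndHeaders(status_line, resp.raw.headers.items(),
                                        protocol='HTTP/1.1')
        # Build a WARC "response" record holding the raw HTTP response
        record = writer.create_warc_record(url, 'response',
                                           payload=resp.raw,
                                           http_headers=http_headers)
        writer.write_record(record)

save_page('http://example.com/', 'example.warc.gz')  # placeholder URL and filename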
6.3B items · Nov 4, 2011 · by Internet Archive
Focused crawls are collections of frequently-updated webcrawl data from narrow (as opposed to broad or wide) web crawls, often focused on a single domain or subdomain.
Topic: webcrawl
Survey crawls are run about twice a year, on average, and attempt to capture the content of the front page of every web host ever seen by the Internet Archive since 1996.
Topic: survey crawls
1.6B items · Nov 7, 2020 · by Archive Team
A daily collection of thousands of the most popular web sites, according to Alexa.com's top sites rankings.
Topics: daily, popular sites, Alexa
Web crawl data from Common Crawl.
These crawls are part of an effort to archive pages as they are created and to archive the pages that they refer to. That way, as the referenced pages are changed or removed from the web, a link to the version that was live when the page was written will be preserved. The Internet Archive hopes that references to these archived pages will be put in place of links that would otherwise be broken, or as companion links that allow people to see what was originally intended by a page's...
Collections of Wiki data
Topics: crawls, data, wiki
Crawl of outlinks from wikipedia.org. These files are currently not publicly accessible. From Wikipedia: Wikipedia is a multilingual, web-based, free-content encyclopedia project operated by the Wikimedia Foundation and based on an openly editable model. The name "Wikipedia" is a portmanteau of the words wiki (a technology for creating collaborative websites, from the Hawaiian word wiki, meaning "quick") and encyclopedia. Wikipedia's articles provide links to guide the...
ArchiveBot is an IRC bot designed to automate the archival of smaller websites (e.g. up to a few hundred thousand URLs). You give it a URL to start at, and it grabs all content under that URL, records it in a WARC, and then uploads that WARC to ArchiveTeam servers for eventual injection into the Internet Archive (or other archive sites). To use ArchiveBot, drop by #archivebot on EFNet. To interact with ArchiveBot, you issue commands by typing them into the channel. Note that you will need channel...
Topics: archiveteam, archivebot, webcrawl, robot, love
This is a set of web collections curated by Mark Graham using the Archive-It service of the Internet Archive. They include web captures of the ISKME.org website as well as captures from sites hosted by IGC.org. These web captures are available to the general public. For more information about this collection, please contact Mark Graham.
Topic: Mark Graham, ISKME, IGC.org
2.4B items · Apr 8, 2011 · by Internet Archive
Large-scale web harvests and national domain crawls performed for National Libraries, National Archives, preservation partners, research initiatives, and as part of special projects and custom crawling and research services.
Topic: ccs
A daily collection of hundreds of the world's top news sites.
Topics: daily, news
The seed for Wide00014 was:
- Slash pages from every domain on the web:
  -- a list of domains using Survey crawl seeds
  -- a list of domains using Wide00012 web graph
  -- a list of domains using Wide00013 web graph
- Top ranked pages (up to a max of 100) from every linked-to domain using the Wide00012 inter-domain navigational link graph:
  -- a ranking of all URLs that have more than one incoming inter-domain link (rank was determined by number of incoming links using Wide00012 inter-domain links)...
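The ranking idea described above can be pictured with a small Python sketch: given (source URL, target URL) link pairs, it counts incoming inter-domain links per target and keeps targets with more than one such link, ordered by link count. This is a hypothetical illustration of the concept, not the actual Wide00012 graph processing.

from collections import Counter
from urllib.parse import urlparse

def rank_by_inter_domain_links(link_pairs):
    """link_pairs: iterable of (source_url, target_url) tuples."""
    counts = Counter()
    for source, target in link_pairs:
        src_host = urlparse(source).hostname
        tgt_host = urlparse(target).hostname
        # only count links whose source and target domains differ
        if src_host and tgt_host and src_host != tgt_host:
            counts[target] += 1
    # keep URLs with more than one incoming inter-domain link, highest-ranked first
    return [url for url, n in counts.most_common() if n > 1]

# toy example with placeholder URLs
links = [
    ("http://a.example/page", "http://b.example/"),
    ("http://c.example/page", "http://b.example/"),
    ("http://a.example/page", "http://a.example/about"),  # intra-domain, ignored
]
print(rank_by_inter_domain_links(links))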
Wide17 was seeded with the "Total Domains" list of 256,796,456 URLs provided by Domains Index on June 26th, and crawled with max-hops set to "3" and de-duplication set to "on".
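A toy sketch of what "max-hops set to 3 with de-duplication on" means in practice: a breadth-first fetch that follows links up to three hops out from the seeds and never fetches the same URL twice. This is an assumption-laden illustration, not Heritrix or the Archive's actual crawler configuration.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
import requests

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_hops=3):
    seen = set(seeds)                         # de-duplication: fetch each URL at most once
    queue = deque((url, 0) for url in seeds)  # breadth-first frontier of (url, hop count)
    while queue:
        url, hops = queue.popleft()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        # ... a real crawler would write resp into a WARC here ...
        if hops >= max_hops:
            continue
        parser = LinkExtractor()
        parser.feed(resp.text)
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute not in seen:
                seen.add(absolute)
                queue.append((absolute, hops + 1))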
This is a Collection of URLs (and Outlinked URLs) extracted from a random feed of 1% of all Tweets.
This is a collection of web page captures from links added to, or changed on, Wikipedia pages. The idea is to bring reliability to Wikipedia outlinks so that if the pages referenced by Wikipedia articles are changed, or go away, a reader can permanently find what was originally referred to. This is part of the Internet Archive's attempt to rid the web of broken links.
Topics: Wikipedia, Wikimedia
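One way to see where such Wikipedia "outlinks" come from is the public MediaWiki API, which can list the external links currently present on an article (prop=extlinks). The sketch below is illustrative only and is not the pipeline the Archive uses to receive link additions and changes; the article title is a placeholder.

import requests

API = "https://en.wikipedia.org/w/api.php"

def external_links(title):
    params = {
        "action": "query",
        "prop": "extlinks",       # external links on the page
        "titles": title,
        "ellimit": "max",
        "format": "json",
        "formatversion": "2",
    }
    resp = requests.get(API, params=params, timeout=10)
    resp.raise_for_status()
    links = []
    for page in resp.json()["query"]["pages"]:
        for link in page.get("extlinks", []):
            # formatversion=2 uses "url"; older responses use "*"
            links.append(link.get("url") or link.get("*"))
    return links

print(external_links("Internet Archive")[:5])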
Web wide crawl number 16. The seed list for Wide00016 was made from the join of the top 1 million domains from CISCO and the top 1 million domains from Alexa.
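A minimal sketch of such a seed-list join, reading "join" as a set union of the two domain lists. The rank,domain CSV layout and the file names are assumptions about the inputs, not a description of the actual Wide00016 build.

import csv

def load_domains(path):
    # assumes each row looks like "rank,domain"
    with open(path, newline="") as fh:
        return {row[1].strip().lower() for row in csv.reader(fh) if len(row) > 1}

def build_seed_list(cisco_csv, alexa_csv):
    domains = load_domains(cisco_csv) | load_domains(alexa_csv)
    return sorted("http://" + d + "/" for d in domains)

seeds = build_seed_list("cisco-top-1m.csv", "alexa-top-1m.csv")  # placeholder file names
print(len(seeds))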
This is a collection of pages and embedded objects from WordPress blogs and the external pages they link to. Captures of these pages are made on a continuous basis, seeded from a feed of new or changed pages hosted on WordPress.com or on sites running a properly configured Jetpack WordPress plugin.
Topics: Wordpress.com, blogs, jetpack
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
305.4M items · Oct 14, 2016 · by Archive Team
Google has been planning to shut down the panoramic photo-sharing site Panoramio since September 2014. The initial plan was to merge it with Google Views, which was a similar product. However, due to feedback from the Panoramio community they held off that move. Frank did an in-depth post about this in June 2015. Since then Google Views itself was merged into Street View. Google has now announced that they are finally shutting down Panoramio for good. As of November 4th, 2016, they will stop...
A daily crawl of more than 200,000 home pages of news sites, including the pages linked from those home pages. Site list provided by The GDELT Project
Topics: GDELT, News
A longitudinal web archival collection based on URIs from the daily feed of Media Cloud that maps news media coverage of current events.
Web wide crawl with initial seedlist and crawler configuration from January 2015.
The seeds for this crawl came from:
- 251 million domains that had at least one link from a different domain in the Wayback Machine, across all time
- ~300 million domains that we had in the Wayback Machine, across all time
- 55,945,067 domains from https://archive.org/details/wide00016
This crawl was run with a Heritrix setting of "maxHops=0" (URLs including their embeds). The WARC files associated with this crawl are not currently available to the general public.
miscellaneous data
Topic: brad tofel
Web wide crawl with initial seedlist and crawler configuration from April 2013.
Web wide crawl with initial seedlist and crawler configuration from June 2014.
This data is currently not publicly accessible.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Crawl of outlinks from wikipedia.org started in March 2016. These files are currently not publicly accessible. Properties of this collection: it has been several years since the last time we did this. For this collection, several things were done: 1. Duplicate detection was turned off. This collection will be complete, as there is a good chance we will share the data, and sharing data with pointers to random other collections is a complex problem. 2. For the first time, we did all the different wikis...
This "Survey" crawl was started on Feb. 24, 2018. This crawl was run with a Heritrix setting of "maxHops=0" (URLs including their embeds) Survey 7 is based on a seed list of 339,249,218 URLs which is all the URLs in the Wayback Machine that we saw a 200 response code from in 2017 based on a query we ran on Feb. 1st, 2018. The WARC files associated with this crawl are not currently available to the general public.
Wayback indexes. This data is currently not publicly accessible.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
863.7M items · Jan 21, 2016 · by Archive Team
Archive Team now searches many, many news sites, including extensive worldwide and obscure sources, to capture unique news stories for history.
Web wide crawl with initial seedlist and crawler configuration from August 2013.
Crawls performed by Internet Archive on behalf of the National Library of Australia. This data is currently not publicly accessible.
This data is currently not publicly accessible.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from January 2012 using HQ software.
Archive-It Partner 126: Library of Congress
Web wide crawl with initial seedlist and crawler configuration from April 2012.
Web wide crawl with initial seedlist and crawler configuration from February 2014.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
This data is currently not publicly accessible.
Web wide crawl with initial seedlist and crawler configuration from October 2010.
Crawls of International News Sites
This data is currently not publicly accessible.
Screen captures of hosts discovered during wide crawls. This data is currently not publicly accessible.
150.3M items · Dec 19, 2017 · by Internet Archive Web Group
A series of open web crawls targeting journal articles, technical memos, essays, datasets, and other research publications. This collection contains WARC and CDX files that end up in the Wayback Machine (https://web.archive.org). See also the bibliographic metadata corpora at https://archive.org/details/ia_biblio_metadata
Wide crawls of the Internet conducted by Internet Archive. Access to content is restricted. Please visit the Wayback Machine to explore archived web sites.
Web wide crawl with initial seedlist and crawler configuration from March 2011 using HQ software.
This collection includes web crawls of the Federal Executive, Legislative, and Judicial branches of government performed at the end of US presidential terms of office.
Topics: web, end of term, US, federal government
Web wide crawl with initial seedlist and crawler configuration from September 2012.
Survey crawl of .com domains started January 2011.
Topic: webcrawl
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Library and Archives Canada (LAC) combines the holdings, services and staff of both the former National Library of Canada and the National Archives of Canada. As outlined in the Preamble to the Library and Archives of Canada Act, LAC's mandate is as follows:
• to preserve the documentary heritage of Canada for the benefit of present and future generations;
• to be a source of enduring knowledge accessible to all, contributing to the cultural, social and economic advancement of Canada as a...