Run a high-fidelity browser-based web archiving crawler in a single Docker container https://crawler.docs.browsertrix.com
Ilya Kreymer 56053534c5
SAX-based sitemap parser (#497)
Adds a new SAX-based sitemap parser, inspired by:
https://www.npmjs.com/package/sitemap-stream-parser

Supports:
- recursively parsing sitemap indexes, using p-queue to process N at a
time (currently 5)
- `fromDate` and `toDate` date filters, to include only URLs between the given
dates, also applied to nested sitemap lists
- async parsing: continues parsing in the background after the first 100 URLs
- timeout for the initial fetch / first 100 URLs set to 30 seconds, to avoid
slowing down the crawl
- save/load state integration: marks in Redis whether sitemaps have already
been parsed, serialized to the saved state, to avoid reparsing. (Will
reparse if parsing did not fully finish)
- aware of `pageLimit`: doesn't add URLs past the page limit, and interrupts
further parsing once the limit is reached
- robots.txt `sitemap:` parsing, checking URL extension and MIME type
- automatic detection of sitemaps for a seed URL if no sitemap URL is provided:
first checks robots.txt, then /sitemap.xml
- tests: full sitemap autodetection, sitemap with a limit, and sitemap from a
specific URL
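The features above might be combined in a crawl invocation along these lines. This is an illustrative sketch only: the option names `--sitemap`, `--sitemapFromDate`, and `--pageLimit` are assumptions drawn from the feature list, not verified against the released CLI, so check `crawl --help` for the exact flags in your version.

```shell
# Hypothetical example: crawl a seed URL, autodetecting its sitemap
# (robots.txt first, then /sitemap.xml), including only sitemap URLs
# dated on or after 2023-01-01, and stopping at the page limit.
# Flag names are assumptions; verify with `crawl --help`.
docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ \
  --sitemap \
  --sitemapFromDate 2023-01-01 \
  --pageLimit 100
```

Because sitemap parsing continues asynchronously in the background after the first batch of URLs, a command like this starts crawling quickly even for very large sitemaps.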

Fixes #496 

---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2024-03-18 19:14:07 -07:00

Browsertrix Crawler 1.x

Browsertrix Crawler is a standalone, browser-based, high-fidelity crawling system, designed to run a complex, customizable browser-based crawl in a single Docker container. Browsertrix Crawler uses Puppeteer to control one or more Brave browser windows in parallel. Data is captured through the Chrome DevTools Protocol (CDP) in the browser.
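A minimal single-container crawl might look like the following. This is a sketch of typical usage, not an authoritative reference: the `--generateWACZ` and `--collection` options are assumptions based on common invocations, so consult the hosted documentation for the current set of flags.

```shell
# Run a crawl of one seed URL in a single Docker container.
# Output is written to ./crawls on the host via the volume mount.
docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ --generateWACZ --collection example
```

Everything the crawl needs, including the browser windows driven over CDP, runs inside the one container; only the output directory lives on the host.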

For information on how to use and develop Browsertrix Crawler, see the hosted Browsertrix Crawler documentation at https://crawler.docs.browsertrix.com.

For information on how to build the docs locally, see the docs page.

Support

Initial support for the 0.x version of Browsertrix Crawler was provided by Kiwix. The initial functionality for Browsertrix Crawler was developed to support the zimit project in a collaboration between Webrecorder and Kiwix, and this project has since been split off from Zimit into a core component of Webrecorder.

Additional support for Browsertrix Crawler, including for the development of the 0.4.x version, has been provided by Portico.

License

AGPLv3 or later, see LICENSE for more details.