Run a high-fidelity browser-based web archiving crawler in a single Docker container https://crawler.docs.browsertrix.com
Tessa Walsh 1d15a155f2
Add option to respect robots.txt disallows (#888)
Fixes #631 
- Adds a --robots flag which enables checking robots.txt for each host, for each page, before the page is queued for further crawling.
- Supports a --robotsAgent flag which configures the agent to check in robots.txt, in addition to '*'. Defaults to 'Browsertrix/1.x'.
- Robots.txt bodies are parsed and checked for page allow/disallow status using the https://github.com/samclarke/robots-parser library, which is the most active and well-maintained implementation I could find with TypeScript types (see the first sketch after this list).
- Fetched robots.txt bodies are cached by their URL in Redis using an LRU, retaining the last 100 robots.txt entries, each up to 100K.
- Non-200 responses are treated as an empty robots.txt, and an empty robots.txt is treated as 'allow all'.
- Multiple requests for the same robots.txt are batched to perform only one fetch, waiting up to 10 seconds per fetch (see the second sketch below).
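
The bullets above amount to a fetch, cache, and check flow. Here is a minimal sketch of that flow using the robots-parser library mentioned above; the function and constant names are illustrative, not the crawler's actual code, and a plain in-memory Map stands in for the Redis-backed LRU described in the bullets:

```ts
import robotsParser from "robots-parser";

const MAX_ENTRIES = 100; // retain the last 100 robots.txt bodies
const MAX_BODY = 100_000; // cap each cached body at ~100K

// A Map's insertion order doubles as recency order for a simple LRU.
const cache = new Map<string, string>();

function lruGet(key: string): string | undefined {
  const val = cache.get(key);
  if (val !== undefined) {
    cache.delete(key);
    cache.set(key, val); // refresh to most-recently-used position
  }
  return val;
}

function lruSet(key: string, val: string) {
  cache.delete(key);
  cache.set(key, val);
  if (cache.size > MAX_ENTRIES) {
    cache.delete(cache.keys().next().value!); // evict least-recently-used
  }
}

export async function isPageAllowed(
  pageUrl: string,
  agent = "Browsertrix/1.x",
): Promise<boolean> {
  const robotsUrl = new URL("/robots.txt", pageUrl).href;

  let body = lruGet(robotsUrl);
  if (body === undefined) {
    const resp = await fetch(robotsUrl);
    // Non-200 responses are treated as an empty robots.txt ("allow all").
    body = resp.ok ? (await resp.text()).slice(0, MAX_BODY) : "";
    lruSet(robotsUrl, body);
  }

  // robots-parser returns undefined when no rule matches; default to allow.
  return robotsParser(robotsUrl, body).isAllowed(pageUrl, agent) ?? true;
}
```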

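The batching in the last bullet boils down to coalescing concurrent fetches of the same robots.txt URL into a single in-flight promise. A hedged sketch of that pattern, again with illustrative names, assuming a runtime with a global fetch and AbortSignal.timeout:

```ts
// Map of robots.txt URL -> in-flight fetch, so concurrent callers share one request.
const inflight = new Map<string, Promise<string>>();

export function fetchRobotsOnce(robotsUrl: string): Promise<string> {
  let pending = inflight.get(robotsUrl);
  if (!pending) {
    pending = (async () => {
      try {
        // Give up after 10 seconds per fetch.
        const resp = await fetch(robotsUrl, {
          signal: AbortSignal.timeout(10_000),
        });
        return resp.ok ? await resp.text() : "";
      } catch {
        return ""; // timeout or network error: treat as empty ("allow all")
      } finally {
        inflight.delete(robotsUrl); // allow a later re-fetch once settled
      }
    })();
    inflight.set(robotsUrl, pending);
  }
  return pending;
}
```
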
---------

Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
2025-11-26 19:00:06 -08:00
.github/workflows ci: fixes to deploy ci workflow 2025-04-03 23:36:49 -07:00
.husky Add MKDocs documentation site for Browsertrix Crawler 1.0.0 (#494) 2024-03-16 14:59:32 -07:00
config/policies brave: update policies to disable new brave services (#914) 2025-11-14 20:00:58 -08:00
docs Add option to respect robots.txt disallows (#888) 2025-11-26 19:00:06 -08:00
html Dynamically adjust reported aspect ratio based on GEOMETRY (#794) 2025-04-01 18:26:12 -07:00
src Add option to respect robots.txt disallows (#888) 2025-11-26 19:00:06 -08:00
tests Add option to respect robots.txt disallows (#888) 2025-11-26 19:00:06 -08:00
.dockerignore Add ad blocking via request interception (#173) 2022-11-15 18:30:27 -08:00
.eslintignore follow-up to #428: update ignore files (#431) 2023-11-09 17:13:53 -08:00
.eslintrc.cjs eslint: add strict await checking: (#684) 2024-09-06 16:24:18 -07:00
.gitignore Gracefully handle non-absolute path for create-login-profile --filename (#521) 2024-03-29 13:46:54 -07:00
.pre-commit-config.yaml Add Prettier to the repo, and format all the files! (#428) 2023-11-09 16:11:11 -08:00
.prettierignore follow-up to #428: update ignore files (#431) 2023-11-09 17:13:53 -08:00
.prettierrc Dynamically adjust reported aspect ratio based on GEOMETRY (#794) 2025-04-01 18:26:12 -07:00
CHANGES.md Add Prettier to the repo, and format all the files! (#428) 2023-11-09 16:11:11 -08:00
docker-compose.yml Add Prettier to the repo, and format all the files! (#428) 2023-11-09 16:11:11 -08:00
docker-entrypoint.sh clear out core dumps to avoid using up volume space: (#740) 2025-01-16 15:50:59 -08:00
Dockerfile remove --disable-component-update flag, fixes shields not working (#915) 2025-11-14 20:30:42 -08:00
LICENSE initial commit after split from zimit 2020-10-31 13:16:37 -07:00
NOTICE initial commit after split from zimit 2020-10-31 13:16:37 -07:00
package.json Add option to respect robots.txt disallows (#888) 2025-11-26 19:00:06 -08:00
README.md Add MKDocs documentation site for Browsertrix Crawler 1.0.0 (#494) 2024-03-16 14:59:32 -07:00
requirements.txt Separate writing pages to pages.jsonl + extraPages.jsonl to use with new py-wacz (#535) 2024-04-11 13:55:52 -07:00
test-setup.js Fix disk utilization computation errors (#338) 2023-07-05 21:58:28 -07:00
tsconfig.eslint.json eslint: add strict await checking: (#684) 2024-09-06 16:24:18 -07:00
tsconfig.json Support option to fail crawl on content check (#861) 2025-07-08 13:08:52 -07:00
yarn.lock better failure detection, allow update support for captcha detection via behaviors (#917) 2025-11-19 15:49:49 -08:00

Browsertrix Crawler 1.x

Browsertrix Crawler is a standalone, browser-based, high-fidelity crawling system, designed to run a complex, customizable browser-based crawl in a single Docker container. Browsertrix Crawler uses Puppeteer to control one or more Brave browser windows in parallel. Data is captured through the Chrome DevTools Protocol (CDP) in the browser.
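
As a rough illustration of this architecture (not the crawler's actual code), the sketch below drives a browser with Puppeteer and observes network traffic over a raw CDP session; the Brave executable path is an assumption:

```ts
import puppeteer from "puppeteer-core";

async function main() {
  const browser = await puppeteer.launch({
    // Assumed path to a local Brave binary; adjust for your system.
    executablePath: "/usr/bin/brave-browser",
  });
  const page = await browser.newPage();

  // Open a raw CDP session and subscribe to network events.
  const cdp = await page.createCDPSession();
  await cdp.send("Network.enable");
  cdp.on("Network.responseReceived", (event) => {
    console.log(event.response.status, event.response.url);
  });

  await page.goto("https://example.com/");
  await browser.close();
}

main().catch(console.error);
```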

For information on how to use and develop Browsertrix Crawler, see the hosted Browsertrix Crawler documentation at https://crawler.docs.browsertrix.com.

For information on how to build the docs locally, see the docs page.

Support

Initial support for the 0.x version of Browsertrix Crawler was provided by Kiwix. The initial functionality for Browsertrix Crawler was developed to support the zimit project in a collaboration between Webrecorder and Kiwix, and this project has since been split off from Zimit into a core component of Webrecorder.

Additional support for Browsertrix Crawler, including for the development of the 0.4.x version, has been provided by Portico.

License

AGPLv3 or later, see LICENSE for more details.