Run a high-fidelity browser-based web archiving crawler in a single Docker container: https://crawler.docs.browsertrix.com
Latest commit: Fix 206 response + general video handling (#646), by Ilya Kreymer (88a2fbd0a0), 2024-07-17

Refactors handling of 206 responses (a sketch of the resulting decision logic follows the list below):

- If a 206 response is encountered and it is actually the full range, convert it to a 200 and rewrite the range and content-range headers to x-range and x-orig-range. This is to support rewriting of 206 responses for DASH manifests.
- If a partial 206 response starts with `0-`, do a full async fetch separately.
- If a partial 206 response does not start with `0-`, ignore it (it is very likely a duplicate picked up while handling the `0-` response).
- Don't stream content types that can be rewritten, since streaming prevents rewriting. This fixes rewriting of DASH/HLS manifests that have no content-length and were not being properly rewritten.
- Overall, adds the missing rewriting of DASH/HLS manifests that have no content-length and are served as 206 responses.
- Update to the latest wabac.js, which fixes rewriting of DASH manifests to avoid a duplicate '<?xml' prefix (webrecorder/wabac.js#192).
- Fixes #645
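
The following is a minimal TypeScript sketch of the 206 decision logic described in the list above. The types and helper names (RangeAction, handle206) are illustrative assumptions, not the crawler's actual internals.

```typescript
// A minimal sketch (not the actual crawler code) of the 206 handling rules
// described above. Names and types here are illustrative assumptions.

type RangeAction =
  | { kind: "convert-to-200"; headers: Record<string, string> }
  | { kind: "full-async-fetch" }
  | { kind: "ignore" };

function handle206(headers: Record<string, string>): RangeAction {
  // Content-Range has the form "bytes <start>-<end>/<total>"
  const contentRange = headers["content-range"] ?? "";
  const m = contentRange.match(/bytes (\d+)-(\d+)\/(\d+)/);
  if (!m) {
    return { kind: "ignore" };
  }
  const start = Number(m[1]);
  const end = Number(m[2]);
  const total = Number(m[3]);

  if (start === 0 && end + 1 === total) {
    // The 206 actually covers the full resource: treat it as a 200, and move
    // the range headers to x-range / x-orig-range so the body (e.g. a DASH
    // manifest) can still be rewritten.
    const rewritten = { ...headers };
    delete rewritten["content-range"];
    rewritten["x-orig-range"] = contentRange;
    if (rewritten["range"]) {
      rewritten["x-range"] = rewritten["range"];
      delete rewritten["range"];
    }
    return { kind: "convert-to-200", headers: rewritten };
  }

  if (start === 0) {
    // Truly partial, but starting at byte 0: fetch the full resource
    // asynchronously in a separate request.
    return { kind: "full-async-fetch" };
  }

  // Partial range not starting at 0: almost certainly a duplicate of data
  // already captured via the 0- response, so skip it.
  return { kind: "ignore" };
}
```

A caller would then dispatch on the returned kind: rewrite and record the response as a 200, schedule a separate full fetch, or skip the response entirely.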

Browsertrix Crawler 1.x

Browsertrix Crawler is a standalone, browser-based, high-fidelity crawling system designed to run a complex, customizable browser-based crawl in a single Docker container. Browsertrix Crawler uses Puppeteer to control one or more Brave Browser windows in parallel, and data is captured through the Chrome DevTools Protocol (CDP) in the browser.
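
As a rough illustration of that architecture, the following sketch uses Puppeteer with a raw CDP session to observe network responses while a page loads. It is not the crawler's actual capture code, and the executablePath parameter is only an assumption for pointing at a local Brave binary.

```typescript
// Illustrative sketch only: drive a browser via Puppeteer and observe
// responses over the Chrome DevTools Protocol (CDP).
import puppeteer from "puppeteer-core";

async function sketchCapture(url: string, executablePath: string) {
  // executablePath would point at a local Brave (or Chromium) binary.
  const browser = await puppeteer.launch({ executablePath });
  const page = await browser.newPage();

  // Open a raw CDP session and listen for network responses.
  const cdp = await page.createCDPSession();
  await cdp.send("Network.enable");
  cdp.on("Network.responseReceived", (event) => {
    console.log(event.response.status, event.response.url);
  });

  await page.goto(url, { waitUntil: "networkidle2" });
  await browser.close();
}
```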

For information on how to use and develop Browsertrix Crawler, see the hosted Browsertrix Crawler documentation at https://crawler.docs.browsertrix.com.

For information on how to build the docs locally, see the docs page.

Support

Initial support for the 0.x versions of Browsertrix Crawler was provided by Kiwix. The initial functionality for Browsertrix Crawler was developed to support the zimit project in a collaboration between Webrecorder and Kiwix, and this project has since been split off from Zimit into a core component of Webrecorder.

Additional support for Browsertrix Crawler, including for the development of the 0.4.x versions, has been provided by Portico.

License

AGPLv3 or later; see LICENSE for more details.