Run a high-fidelity browser-based web archiving crawler in a single Docker container https://crawler.docs.browsertrix.com
Ilya Kreymer 85a07aff18
Streaming in-place WACZ creation + CDXJ indexing (#673)
Fixes #674 

This PR supersedes #505. Instead of using js-wacz for optimized WACZ creation, it:
- generates an 'in-place' or 'streaming' WACZ in the crawler, without having to copy the data again.
- streams WACZ contents to a remote upload (or to disk) from existing files on disk.
- writes CDXJ indices per WARC to the 'warc-cdx' directory first, then merges them with the Linux 'sort' command, and compresses the result to ZipNum if >50K (or always, if using --generateCDX); see the sketch after this list.
- writes and reads all data in the WARCs only once.
- should result in significant speed / disk usage improvements: previously, a WARC was written once, then read again (for CDXJ indexing), read again (for adding to the new WACZ ZIP), written to disk (into the new WACZ ZIP), and read again (if uploading to a remote endpoint). Now, WARCs are written once along with the per-WARC CDXJ; only the CDXJ is reread, sorted, and merged on disk, and all data is read once to either generate the WACZ on disk or upload it to a remote endpoint.
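
For illustration, here is a minimal sketch of the merge step described above, assuming per-WARC CDXJ files have been written to a 'warc-cdx' directory (the output path is hypothetical; the crawler drives this internally rather than via a shell script):

```sh
# Sketch only: merge the small per-WARC CDXJ indices into one sorted
# index. Only the CDXJ files are reread here; the WARC data itself is
# never touched again.
# LC_ALL=C gives byte-order sorting, the usual convention for CDXJ.
LC_ALL=C sort warc-cdx/*.cdxj > indexes/index.cdxj
```

For large crawls, the merged index can then be compressed to ZipNum, which stores the sorted CDXJ in gzipped blocks alongside a secondary index, so lookups can still binary-search without decompressing the whole index.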

---------

Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2024-08-29 13:21:20 -07:00

Browsertrix Crawler 1.x

Browsertrix Crawler is a standalone, browser-based, high-fidelity crawling system, designed to run a complex, customizable browser-based crawl in a single Docker container. Browsertrix Crawler uses Puppeteer to control one or more Brave browser windows in parallel. Data is captured through the Chrome DevTools Protocol (CDP) in the browser.
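
As a quick orientation, a typical invocation looks something like the following (the URL and collection name are placeholders; consult the hosted documentation for the full set of options):

```sh
# Run a crawl in a single container, writing output (including the
# generated WACZ) to ./crawls on the host.
docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ --generateWACZ --collection example
```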

For information on how to use and develop Browsertrix Crawler, see the hosted Browsertrix Crawler documentation at https://crawler.docs.browsertrix.com.

For information on how to build the docs locally, see the docs page.

Support

Initial support for the 0.x version of Browsertrix Crawler was provided by Kiwix. The initial functionality for Browsertrix Crawler was developed to support the Zimit project in a collaboration between Webrecorder and Kiwix, and this project has since been split off from Zimit into a core component of Webrecorder.

Additional support for Browsertrix Crawler, including for the development of the 0.4.x version, has been provided by Portico.

License

AGPLv3 or later; see LICENSE for more details.