Fixes #533
Fixes rollover in WARCWriter, with a rollover size separate from the combined WARC rollover size:
- check rolloverSize and close previous WARCs when size exceeds
- add timestamp to resource WARC filenames to support rollover, eg.
screenshots-{ts}.warc.gz
- use append mode for all write streams, just in case
- tests: add test for rollover of individual WARCs with 500K size limit
- tests: update screenshot tests to account for WARCs now being named
screenshots-{ts}.warc.gz instead of just screenshots.warc.gz
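The rollover logic above could be sketched like this (hypothetical `RolloverWriter`, not the crawler's actual WARCWriter class):

```typescript
// Sketch of per-WARC rollover, assuming illustrative names: the writer
// tracks bytes written and, once rolloverSize would be exceeded, closes the
// current WARC and opens a new timestamped file, eg. screenshots-{ts}.warc.gz
class RolloverWriter {
  filenames: string[] = [];
  private bytesWritten = 0;

  constructor(private prefix: string, private rolloverSize: number) {
    this.newFile();
  }

  private newFile() {
    // timestamp in the filename lets each rollover produce a distinct file
    const ts = new Date().toISOString().replace(/[^\d]/g, "");
    this.filenames.push(`${this.prefix}-${ts}.warc.gz`);
    this.bytesWritten = 0;
  }

  writeRecord(record: Uint8Array) {
    // close the previous WARC when the size limit would be exceeded
    if (this.bytesWritten > 0 && this.bytesWritten + record.length > this.rolloverSize) {
      this.newFile();
    }
    this.bytesWritten += record.length;
  }
}
```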
WARCWriter operations result in a write promise being put on a queue and
handled one-at-a-time. This change wraps that promise in an async function
that awaits the actual write and logs any rejections.
- If additional log details are provided, successful writes are also
logged for now, including success logging for resource records (text,
screenshot, pageinfo)
- screenshot / text / pageinfo use the appropriate log context for the resource, for better log filtering
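The wrapping described above could look roughly like this (a minimal sketch with an illustrative `logError`, not the crawler's logger):

```typescript
// Sketch: chain a write onto the queue so writes happen one-at-a-time,
// awaiting the actual write inside an async wrapper that logs rejections
// instead of letting them go unhandled.
type WriteOp = () => Promise<void>;

const errors: string[] = [];

function logError(msg: string) {
  // stand-in for the crawler's structured logger
  errors.push(msg);
}

function queueWrite(queue: Promise<void>, op: WriteOp): Promise<void> {
  return queue.then(async () => {
    try {
      await op();
    } catch (e) {
      logError(`WARC write failed: ${(e as Error).message}`);
    }
  });
}
```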
Now that RWP 2.0.0 with adblock support has been released
(webrecorder/replayweb.page#307), this enables adblock on the QA mode
RWP embed, to get more accurate screenshots.
Fetches the adblock.gz directly from RWP (though could also fetch it
separately from Easylist)
Updates to 1.1.0-beta.5
Cherry-picked from the use-js-wacz branch, now implementing separate
writing of pages.jsonl / extraPages.jsonl to be used with py-wacz and
new `--copy-page-files` flag.
Dependent on py-wacz 0.5.0 (via webrecorder/py-wacz#43)
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
- use frame.load() to load RWP frame directly instead of waiting for
navigation messages
- retry loading RWP if replay frame is missing
- support --postLoadDelay in replay crawl
- support --include / --exclude options in replay crawler, allow
excluding and including pages to QA via regex
- improve --qaDebugImageDiff debug image saving, save images to same
dir, using ${counter}-${workerid}-${pageid}-{crawl,replay,vdiff}.png for
better sorting
- when running QA crawl, check and use QA_ARGS instead of CRAWL_ARGS if
provided
- ensure empty string text from page is treated differently from an error (undefined)
- ensure info.warc.gz is closed in closeFiles()
misc:
- fix typo in --postLoadDelay check!
- enable 'startEarly' mode for behaviors (autofetch, autoplay)
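The --include / --exclude page filtering for QA could be sketched as follows (illustrative helper, not the crawler's actual API):

```typescript
// Sketch: a page is QA'd if it matches no exclude regex and matches an
// include regex (or no include regexes are given).
function shouldQAPage(url: string, include: RegExp[], exclude: RegExp[]): boolean {
  if (exclude.some((rx) => rx.test(url))) {
    return false;
  }
  return include.length === 0 || include.some((rx) => rx.test(url));
}
```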
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
Currently, only the recorder's WARCWriter writes records through a
queue, resulting in other WARCs potentially suffering from concurrent
write attempts. This fixes that by:
- adding the concurrent queue to WARCWriter itself
- all writeRecord, writeRecordPair, writeNewResourceRecord calls are
first added to the PQueue, which ensures writes happen in order and
one-at-a-time
- flush() also ensures queue is empty/idle
- should avoid any issues with concurrent writes to any WARC
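The serialization contract above could be sketched with a minimal queue standing in for the p-queue package the crawler actually uses:

```typescript
// Minimal serial queue sketch: tasks added via add() run in order,
// one-at-a-time; onIdle() resolves once everything queued so far is done
// (as flush() awaiting queue idle does, per the description).
class SerialQueue {
  private tail: Promise<void> = Promise.resolve();

  add<T>(task: () => Promise<T>): Promise<T> {
    const result = this.tail.then(task);
    // keep the chain alive even if a task rejects
    this.tail = result.then(() => undefined, () => undefined);
    return result;
  }

  onIdle(): Promise<void> {
    return this.tail;
  }
}
```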
refactor handling of max size for html/js/css (copy of #525)
- due to a typo (and lack of type-checking!), matchFetchSize was
incorrectly passed in instead of maxFetchSize, resulting in text/css/js
over 5MB (instead of over 25MB) not being properly streamed back to the browser
- add type checking to AsyncFetcherOptions to avoid this in the future.
- refactor to avoid checking size altogether for 'essential resources',
html(document), js and css, instead always fetch them fully and
continue in the browser. Only apply rewriting if <25MB.
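A typed options interface catches exactly this kind of typo at compile time; a minimal sketch, with illustrative names and the 5MB/25MB constants from the description (not the real AsyncFetcherOptions):

```typescript
// With untyped options, passing { matchFetchSize: ... } where maxFetchSize
// was expected compiles silently; with the interface below, TypeScript
// rejects the misspelled property.
const MAX_FETCH_SIZE = 25 * 1024 * 1024;   // 25MB rewriting limit
const MATCH_FETCH_SIZE = 5 * 1024 * 1024;  // 5MB streaming threshold

interface AsyncFetcherOptions {
  url: string;
  maxFetchSize: number;
}

function shouldRewrite(opts: AsyncFetcherOptions, size: number): boolean {
  // only apply rewriting if the resource is under the limit
  return size < opts.maxFetchSize;
}
```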
fixes #522
Fixes #513
If an absolute path isn't provided to the `create-login-profile`
entrypoint's `--filename` option, resolve the value given within
`/crawls/profiles`.
Also updates the docs cli-options section to include the
`create-login-profile` entrypoint and adjusts the script to
automatically generate this page accordingly.
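The resolution rule could be sketched like this (illustrative helper, using posix semantics to match the container filesystem):

```typescript
import path from "node:path";

// Sketch of resolving --filename for the create-login-profile entrypoint:
// relative paths are resolved within /crawls/profiles, absolute paths are
// kept as-is.
function resolveProfileFilename(filename: string): string {
  return path.posix.isAbsolute(filename)
    ? filename
    : path.posix.resolve("/crawls/profiles", filename);
}
```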
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
but before running link extraction, text extraction, screenshots and
behaviors.
Useful for sites that load quickly but perform async loading / init
afterwards; fixes #519
A simple workaround for when it's tricky to detect when a page has
actually fully loaded. Useful for sites such as Instagram.
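The delay itself is just a pause between page load and the post-load steps; a minimal sketch with illustrative names:

```typescript
// Sketch of a --postLoadDelay pause: after page load is considered done,
// wait the given number of seconds before link/text extraction, screenshots
// and behaviors run.
function sleep(seconds: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, seconds * 1000));
}

async function afterPageLoad(postLoadDelay: number, runPostLoadSteps: () => Promise<void>) {
  if (postLoadDelay > 0) {
    await sleep(postLoadDelay);
  }
  await runPostLoadSteps();
}
```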
- filter out 'other' / no url targets from puppeteer attachment
- disable '--disable-site-isolation-trials' for profiles
- workaround for #446 with profiles
- also fixes `pageExtraDelay` not working for non-200 responses - may be
useful for detecting captcha blocked pages.
- connect VNC right away instead of waiting for page to fully finish
loading, hopefully resulting in faster profile start-up time.
Using latest puppeteer-core to keep up with latest browsers, mostly
minor syntax changes
Due to change in puppeteer hiding the executionContextId, need to create
a frameId->executionContextId mapping and track it ourselves to support
the custom evaluateWithCLI() function
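The mapping could be maintained from the CDP Runtime.executionContextCreated / executionContextDestroyed events; a sketch with illustrative handler names:

```typescript
// Sketch: track the default execution context per frame ourselves, since
// newer puppeteer hides executionContextId. evaluateWithCLI-style calls
// would look up the context id for a frame in this map.
const frameIdToContextId = new Map<string, number>();

interface ExecutionContextDescription {
  id: number;
  auxData?: { frameId?: string; isDefault?: boolean };
}

function onContextCreated(ctx: ExecutionContextDescription) {
  const frameId = ctx.auxData?.frameId;
  if (frameId && ctx.auxData?.isDefault) {
    frameIdToContextId.set(frameId, ctx.id);
  }
}

function onContextDestroyed(contextId: number) {
  for (const [frameId, id] of frameIdToContextId.entries()) {
    if (id === contextId) {
      frameIdToContextId.delete(frameId);
    }
  }
}
```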
Previously, there was the main WARCWriter as well as utility
WARCResourceWriter that was used for screenshots, text, pageinfo and
only generated resource records. This separate WARC writing path did not
generate CDX, but used appendFile() to append new WARC records to an
existing WARC.
This change removes WARCResourceWriter and ensures all WARC writing is done through a single WARCWriter, which uses a writable stream to append records, and can also generate CDX on the fly. This change is a
pre-requisite to the js-wacz conversion (#484) since all WARCs need to
have generated CDX.
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
sitemap improvements: gz support + application/xml + extraHops fix #511
- follow up to
https://github.com/webrecorder/browsertrix-crawler/issues/496
- support parsing sitemap urls that end in .gz with gzip decompression
- support both `application/xml` and `text/xml` as valid sitemap
content-types (add test for both)
- ignore extraHops for sitemap-found URLs by setting them past the
extraHops limit (otherwise, all sitemap URLs would be treated as links
from the seed page)
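The .gz and content-type handling could be sketched as follows (illustrative helpers, not the crawler's parser internals):

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

// Sketch: sitemap URLs ending in .gz are gzip-decompressed before XML
// parsing, and both valid sitemap content-types are accepted.
const SITEMAP_CONTENT_TYPES = ["application/xml", "text/xml"];

function isSitemapContentType(ct: string): boolean {
  return SITEMAP_CONTENT_TYPES.includes(ct.split(";")[0].trim());
}

function maybeDecompress(url: string, body: Buffer): string {
  return url.endsWith(".gz") ? gunzipSync(body).toString("utf8") : body.toString("utf8");
}
```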
fixes redirected seeds (from #476) being counted against the page limit: #509
- subtract extraSeeds when computing limit
- don't include redirect seeds in seen list when serializing
- tests: adjust saved-state-test to also check total pages when crawl is
done
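One way to read the fix: extra seeds created by redirects are subtracted before comparing against the limit. A sketch with illustrative names (the real computation may differ):

```typescript
// Sketch: redirect-added seeds (extraSeeds) should not count against
// --pageLimit, so subtract them from the seen count before the comparison.
// A pageLimit of 0 means no limit.
function isAtPageLimit(seenCount: number, extraSeeds: number, pageLimit: number): boolean {
  return pageLimit > 0 && seenCount - extraSeeds >= pageLimit;
}
```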
fixes #508
Initial (beta) support for QA/replay crawling!
- Supports running a crawl over a given WACZ / list of WACZ (multi WACZ) input, hosted in ReplayWeb.page
- Runs local http server with full-page, ui-less ReplayWeb.page embed
- ReplayWeb.page release version configured in the Dockerfile, pinned ui.js and sw.js fetched directly from cdnjs
Can be deployed with `webrecorder/browsertrix-crawler qa` entrypoint.
- Requires `--qaSource`, pointing to a WACZ or multi-WACZ JSON that will be replayed/QA'd
- Also supports `--qaRedisKey` where QA comparison data will be pushed, if specified.
- Supports `--qaDebugImageDiff` for outputting crawl / replay / diff
images.
- If using --writePagesToRedis, a `comparison` key is added to existing page data where:
```
comparison: {
  screenshotMatch?: number;
  textMatch?: number;
  resourceCounts: {
    crawlGood?: number;
    crawlBad?: number;
    replayGood?: number;
    replayBad?: number;
  };
};
```
- bump version to 1.1.0-beta.2
Due to issues with capturing top-level pages, make bypassing service
workers the default for now. Previously, it was only disabled when using
profiles. (This is also consistent with ArchiveWeb.page behavior).
Includes:
- add --serviceWorker option which can be `disabled`,
disabled-if-profile (previous default) and `enabled`
- ensure page timestamp is set for direct fetch
- warn if page timestamp is missing on serialization, then set to now
before serializing
bump version to 1.0.2
The intent is for even non-graceful interruption (duplicate Ctrl+C) to
still result in valid WARC records, even if page is unfinished:
- immediately exit the browser, and call closeWorkers()
- finalize() recorder, finish active WARC records but don't fetch
anything else
- flush() existing open writer, mark as done, don't write anything else
- possible fix to additional issues raised in #487
Docs: Update docs on different interrupt options, eg. single SIGINT/SIGTERM, multiple SIGINT/SIGTERM (as handled here) vs SIGKILL
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
This PR provides improved support for running crawler as non-root,
matching the user to the uid/gid of the crawl volume.
This fixes #502, an initial regression from 0.12.4, where `chmod u+x` was
used instead of `chmod a+x` on the node binary files.
However, that was not enough to fully support equivalent signal handling
/ graceful shutdown as when running with the same user. To make the
running as different user path work the same way:
- need to switch to `gosu` instead of `su` (added in Brave 1.64.109
image)
- run all child processes as detached (redis-server, socat, wacz, etc..)
to avoid them automatically being killed via SIGINT/SIGTERM
- running detached is controlled via `DETACHED_CHILD_PROC=1` env
variable, set to 1 by default in the Dockerfile (to allow for overrides
just in case)
A test has been added which runs one of the tests with a non-root
`test-crawls` directory to test the different user path. The test
(saved-state.test.js) includes sending interrupt signals and graceful
shutdown and allows testing of those features for a non-root gosu
execution.
Also bumping crawler version to 1.0.1
Adds a new SAX-based sitemap parser, inspired by:
https://www.npmjs.com/package/sitemap-stream-parser
Supports:
- recursively parsing sitemap indexes, using p-queue to process N at a
time (currently 5)
- `fromDate` and `toDate` filter dates, to only include URLs between the given
dates; nested sitemap lists are filtered as well
- async parsing, continue parsing in the background after 100 URLs
- timeout for initial fetch / first 100 URLs set to 30 seconds to avoid
slowing down the crawl
- save/load state integration: mark if sitemaps have already been parsed
in redis, serialize to save state, to avoid reparsing again. (Will
reparse if parsing did not fully finish)
- Aware of `pageLimit`: don't add URLs past the page limit, and interrupt
further parsing when at the limit.
- robots.txt `sitemap:` parsing, check URL extension and mime type
- automatic detection of sitemaps for a seed URL if no sitemap url provided - first check robots.txt,
then /sitemap.xml
- tests: test for full sitemap autodetect, sitemap with limit, and sitemap from specific URL.
Fixes #496
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
Fixes #493
This PR updates the documentation for Browsertrix Crawler 1.0.0 and
moves it from the project README to an MkDocs site.
Initial docs site set to https://crawler.docs.browsertrix.com/
Many thanks to @Shrinks99 for help setting this up!
---------
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
- Fixes state serialization, which was missing the done list. Instead,
adds a 'finished' list computed from the seen list, minus failed and
queued URLs.
- Also adds serialization support for 'extraSeeds', seeds added
dynamically from a redirect (via #475). Extra seeds are added to Redis
and also included in the serialization.
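Computing the 'finished' list as described above could be sketched like this (illustrative helper):

```typescript
// Sketch: the 'finished' list for state serialization is the seen set,
// minus URLs that failed and URLs still queued.
function computeFinished(seen: Set<string>, queued: Set<string>, failed: Set<string>): string[] {
  return [...seen].filter((url) => !queued.has(url) && !failed.has(url));
}
```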
Fixes #491
- add --logExcludeContext for log contexts that should be excluded
(while --logContext specifies which are to be included)
- enable 'recorderNetwork' logging for debugging CDP network
- create default log context exclude list (containing: screencast,
recorderNetwork, jsErrors), customizable via --logExcludeContext
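The include/exclude interaction could be sketched as follows (a simplified model; the crawler's actual precedence rules may differ):

```typescript
// Sketch: --logContext selects contexts to include; otherwise the default
// exclude list (per the description) suppresses noisy contexts.
const DEFAULT_EXCLUDE_CONTEXTS = ["screencast", "recorderNetwork", "jsErrors"];

function shouldLogContext(context: string, include: string[], exclude: string[]): boolean {
  if (include.length > 0) {
    return include.includes(context);
  }
  return !exclude.includes(context);
}
```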
recorder: Track failed requests and include in pageinfo records with
status code 0
- cleanup cdp handler methods
- intercept requestWillBeSent to track requests that started (but may
not complete)
- fix shouldSkip() still working if no url is provided (eg. check only
headers)
- set status to 0 for async fetch failures
- remove responseServedFromCache interception, as response data
generally not available then, and responseReceived is still called
- pageinfo: include page requests that failed with status code 0, also
include 'error' status if available.
- ensure page is closed on failure
- ensure pageinfo still written even if nothing else is crawled for a
page
- track cached responses, add to debug logging (can also add to pageinfo
later if needed)
tests: add pageinfo test for crawling invalid URL, which should still
result in pageinfo record with status code 0
bump to 1.0.0-beta.7
Add fail on status code option, --failOnInvalidStatus to treat non-200
responses as failures. Can be useful especially when combined with
--failOnFailedSeed or --failOnFailedLimit
requeue: ensure requeued urls are requeued with same depth/priority, not
0
Requires webrecorder/browsertrix-behaviors#69 / browsertrix-behaviors
0.5.3, which will add support for behaviors to add links.
Simplify adding links by adding them directly, instead of batching into
groups of 500. Errors are already logged if queueing a new URL fails.
don't treat non-200 pages as errors; still extract text, take
screenshots, and run behaviors
only consider actual page load errors, eg. a chrome-error:// page URL, as
errors
- if a seed page redirects (page response != seed url), then add the
final url as a new seed with same scope
- add newScopeSeed() to ScopedSeed to duplicate seed with different URL,
store original includes / excludes
- also add check for 'chrome-error://' URLs for the page, and ensure
page is marked as failed if page.url() starts with chrome-error://
- fixes #475
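Duplicating a seed for its redirect target could be sketched like this (field names are illustrative, not the real ScopedSeed):

```typescript
// Sketch: the redirected final URL becomes a new seed with the original
// seed's scope (includes/excludes) retained.
interface SeedScope {
  url: string;
  include: RegExp[];
  exclude: RegExp[];
}

function newScopeSeed(seed: SeedScope, redirectedUrl: string): SeedScope {
  return { url: redirectedUrl, include: seed.include, exclude: seed.exclude };
}
```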