When loading a PDF as a page, the browser returns a 'false positive'
net::ERR_ABORTED even though the PDF is loaded.
- this is already handled, but the status code was still being cleared;
ensure the status code is not reset to 0 on response
- ensure page status and mime type are also recorded if this failure is
ignored (in shouldIgnoreAbort), as sketched below
- tests: add test for PDF capture
fixes #570
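A rough sketch of the intended handling, in TypeScript; the handler, callback, and field names here are illustrative stand-ins rather than the crawler's actual API:

```ts
// Sketch: when a PDF is loaded as a top-level page, Chrome reports
// net::ERR_ABORTED even though the response was fully received. If the
// abort is ignored, keep the already-recorded status + mime rather than
// resetting them.
type RequestResponseInfo = {
  url: string;
  status: number;
  mimeType: string;
};

function handleLoadingFailed(
  reqresp: RequestResponseInfo,
  errorText: string,
  shouldIgnoreAbort: (r: RequestResponseInfo) => boolean,
  recordPageInfo: (url: string, status: number, mime: string) => void,
) {
  if (errorText === "net::ERR_ABORTED" && shouldIgnoreAbort(reqresp)) {
    // false positive: the PDF actually loaded -- don't reset the status to 0,
    // and still record page status + mime for the pageinfo record
    recordPageInfo(reqresp.url, reqresp.status, reqresp.mimeType);
    return;
  }
  // genuine failure: record with status code 0
  reqresp.status = 0;
  recordPageInfo(reqresp.url, 0, reqresp.mimeType);
}
```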
On sites with regular workers, requests from workers were being skipped
as there was no match for the worker frameId.
Add recorder.hasFrame(frameId) to match not just service-worker
frameIds but also other frame ids already tracked in the frameIdToExecId
map.
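A minimal sketch of what hasFrame() checks, assuming the recorder keeps a set of service-worker frame ids alongside the frameIdToExecId map; member names other than hasFrame and frameIdToExecId are illustrative:

```ts
class Recorder {
  // frame ids belonging to service workers attached to this recorder
  private serviceWorkerFrameIds = new Set<string>();
  // regular frames already tracked for evaluateWithCLI()
  private frameIdToExecId = new Map<string, number>();

  // match worker requests against any frame we know about, not just
  // service-worker frame ids
  hasFrame(frameId: string): boolean {
    return (
      this.serviceWorkerFrameIds.has(frameId) ||
      this.frameIdToExecId.has(frameId)
    );
  }
}
```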
WARCWriter operations result in a write promise being put on a queue
and handled one at a time. This change wraps that promise in an async
function that awaits the actual write and logs any rejections.
- if additional log details are provided, successful writes are also
logged for now, including success logging for resource records (text,
screenshot, pageinfo)
- screenshot / text / pageinfo use the appropriate log context for the
resource, for better log filtering
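A sketch of the wrapping described above; the logger stub and log context name stand in for the crawler's actual logging API:

```ts
// stand-in for the crawler's structured logger
const logger = {
  debug: (msg: string, details: object, ctx: string) => console.debug(msg, details, ctx),
  error: (msg: string, details: object, ctx: string) => console.error(msg, details, ctx),
};

// Wrap a queued write so rejections are awaited and logged instead of
// being dropped; successes are only logged when extra details are given
// (e.g. for text / screenshot / pageinfo resource records).
function wrapWrite(
  writePromise: Promise<void>,
  logDetails?: Record<string, unknown>,
  logContext = "writer",
): () => Promise<void> {
  return async () => {
    try {
      await writePromise;
      if (logDetails) {
        logger.debug("WARC record written", logDetails, logContext);
      }
    } catch (e) {
      logger.error("WARC write failed", { error: String(e), ...logDetails }, logContext);
    }
  };
}
```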
Currently, only the recorder's WARCWriter writes records through a
queue, resulting in other WARCs potentially suffering from concurrent
write attempts. This fixes that by:
- adding the concurrent queue to WARCWriter itself
- all writeRecord, writeRecordPair, writeNewResourceRecord calls are
first added to the PQueue, which ensures writes happen in order and
one-at-a-time
- flush() also ensures the queue is empty/idle
- this should avoid any issues with concurrent writes to any WARC
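A sketch of the queued writer, assuming the p-queue package; the method names follow the list above, while the bodies are placeholders:

```ts
import PQueue from "p-queue";

class WARCWriter {
  // all record writes are serialized through this queue
  private queue = new PQueue({ concurrency: 1 });

  writeRecord(record: unknown): Promise<void> {
    return this.addToQueue(() => this.doWrite(record));
  }

  writeRecordPair(request: unknown, response: unknown): Promise<void> {
    return this.addToQueue(async () => {
      await this.doWrite(response);
      await this.doWrite(request);
    });
  }

  writeNewResourceRecord(resource: unknown): Promise<void> {
    return this.addToQueue(() => this.doWrite(resource));
  }

  async flush(): Promise<void> {
    // wait until every queued write has completed
    await this.queue.onIdle();
  }

  private addToQueue(fn: () => Promise<void>): Promise<void> {
    return this.queue.add(fn) as Promise<void>;
  }

  private async doWrite(record: unknown): Promise<void> {
    // actual serialization to the underlying WARC stream happens here
    void record;
  }
}
```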
refactor handling of max size for html/js/css (copy of #525)
- due to a typo (and lack of type-checking!), matchFetchSize was
incorrectly passed in instead of maxFetchSize, resulting in text/css/js
over 5MB (instead of over 25MB) not being properly streamed back to the
browser
- add type checking to AsyncFetcherOptions to avoid this in the future
(see the sketch below)
- refactor to avoid checking size altogether for 'essential resources'
(the HTML document, JS, and CSS): always fetch them fully and continue
in the browser, and only apply rewriting if <25MB
fixes #522
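A sketch of the kind of type-checking that catches the mix-up; only the option name maxFetchSize comes from the description above, the interface shape and size constants are illustrative:

```ts
// With an explicit options interface, passing matchFetchSize where
// maxFetchSize is expected becomes a compile-time error rather than a
// silent behavior change.
interface AsyncFetcherOptions {
  url: string;
  // responses larger than this are streamed to disk instead of held in memory
  maxFetchSize: number;
}

const MAX_BROWSER_DEFAULT_FETCH_SIZE = 5_000_000; // 5MB, illustrative
const MAX_BROWSER_TEXT_FETCH_SIZE = 25_000_000;   // 25MB, illustrative

function createFetcher(opts: AsyncFetcherOptions): AsyncFetcherOptions {
  return opts;
}

// essential resources (HTML / JS / CSS) always get the larger limit
createFetcher({
  url: "https://example.com/app.js",
  maxFetchSize: MAX_BROWSER_TEXT_FETCH_SIZE,
});
```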
Update to the latest puppeteer-core to keep up with the latest browsers;
mostly minor syntax changes.
Due to a change in puppeteer hiding the executionContextId, we need to
create a frameId -> executionContextId mapping and track it ourselves to
support the custom evaluateWithCLI() function (see the sketch below).
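A sketch of tracking the mapping ourselves from CDP Runtime events; the event and method names are standard CDP, while the surrounding structure is illustrative:

```ts
import { CDPSession } from "puppeteer-core";

// frameId -> executionContextId, maintained by us since newer puppeteer
// no longer exposes the execution context id directly
const frameIdToExecId = new Map<string, number>();

async function trackExecutionContexts(cdp: CDPSession) {
  cdp.on("Runtime.executionContextCreated", (params) => {
    const { id, auxData } = params.context;
    if (auxData && auxData.frameId) {
      frameIdToExecId.set(auxData.frameId, id);
    }
  });

  cdp.on("Runtime.executionContextDestroyed", (params) => {
    for (const [frameId, execId] of frameIdToExecId.entries()) {
      if (execId === params.executionContextId) {
        frameIdToExecId.delete(frameId);
      }
    }
  });

  await cdp.send("Runtime.enable");
}

// evaluateWithCLI() can then look up the context id for a frame and call
// Runtime.evaluate with { contextId, includeCommandLineAPI: true }
```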
Previously, there was the main WARCWriter as well as a utility
WARCResourceWriter that was used for screenshots, text, and pageinfo,
and only generated resource records. This separate WARC writing path did
not generate CDX, but used appendFile() to append new WARC records to an
existing WARC.
This change removes WARCResourceWriter and ensures all WARC writing is
done through a single WARCWriter, which uses a writable stream to append
records and can also generate CDX on the fly. This change is a
pre-requisite to the js-wacz conversion (#484), since all WARCs need to
have generated CDX.
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
The intent is for even non-graceful interruption (duplicate Ctrl+C) to
still result in valid WARC records, even if a page is unfinished:
- immediately exit the browser, and call closeWorkers()
- finalize() recorder, finish active WARC records but don't fetch
anything else
- flush() existing open writer, mark as done, don't write anything else
- possible fix to additional issues raised in #487
Docs: update docs on the different interrupt options, e.g. a single
SIGINT/SIGTERM, multiple SIGINT/SIGTERM (as handled here), vs. SIGKILL
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
- add --logExcludeContext for log contexts that should be excluded
(while --logContext specifies which are to be included)
- enable 'recorderNetwork' logging for debugging CDP network
- create default log context exclude list (containing: screencast,
recorderNetwork, jsErrors), customizable via --logExcludeContext
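A sketch of how the include and exclude lists could combine; the default exclude list is from the description above, the function itself is illustrative:

```ts
const DEFAULT_EXCLUDE_CONTEXTS = ["screencast", "recorderNetwork", "jsErrors"];

function shouldLogContext(
  context: string,
  includeContexts: string[],                              // from --logContext
  excludeContexts: string[] = DEFAULT_EXCLUDE_CONTEXTS,   // from --logExcludeContext
): boolean {
  if (includeContexts.length && !includeContexts.includes(context)) {
    return false;
  }
  return !excludeContexts.includes(context);
}
```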
recorder: Track failed requests and include in pageinfo records with
status code 0
- cleanup cdp handler methods
- intercept requestWillBeSent to track requests that started (but may
not complete), as sketched below
- fix shouldSkip() still working if no url is provided (e.g. check only
headers)
- set status to 0 for async fetch failures
- remove responseServedFromCache interception, as response data is
generally not available then, and responseReceived is still called
- pageinfo: include page requests that failed with status code 0, also
include 'error' status if available.
- ensure page is closed on failure
- ensure pageinfo still written even if nothing else is crawled for a
page
- track cached responses, add to debug logging (can also add to pageinfo
later if needed)
tests: add pageinfo test for crawling an invalid URL, which should still
result in a pageinfo record with status code 0
bump to 1.0.0-beta.7
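A minimal sketch of the failed-request tracking described above (requestWillBeSent / responseReceived / loadingFailed); the handler and field names are illustrative:

```ts
interface PageResource {
  url: string;
  status: number;
  error?: string;
}

// keyed by CDP requestId
const pageResources = new Map<string, PageResource>();

function onRequestWillBeSent(requestId: string, url: string) {
  // only track http/https resources
  if (!url.startsWith("http:") && !url.startsWith("https:")) {
    return;
  }
  pageResources.set(requestId, { url, status: 0 });
}

function onResponseReceived(requestId: string, status: number) {
  const res = pageResources.get(requestId);
  if (res) {
    res.status = status;
  }
}

function onLoadingFailed(requestId: string, errorText: string) {
  const res = pageResources.get(requestId);
  if (res) {
    res.status = 0;
    res.error = errorText; // surfaced in pageinfo if available
  }
}
```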
The `:pageinfo:<url>` record now includes the mime type + resource type
(from Chrome) along with status code for each resource, for better
filtering / comparison.
- recorder: don't attempt to record response with mime type
`text/event-stream` (will not terminate).
- resources: don't track non http/https resources.
- resources: store the page timestamp on the first resource URL match, in
case multiple responses for the same page are encountered.
Ensure cached resources (that are not written to the WARC) are still
included in the `urn:pageinfo:...` records. This will make it easier to
track which resources are actually *loaded* from a given page.
Tests: add test to ensure pageinfo records for webrecorder.net and
webrecorder.net/about include cached resources
Fixes #462
Add --writePagesToRedis arg, for use in conjunction with QA features in
Browsertrix Cloud, to add pages to the database for each crawl.
Ensure timestamp (as ISO date) is added to pages when they are serialized
(both to pages.jsonl and Redis)
Also include timestamp (as ISO date) in `pageinfo:` records
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
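A sketch of the Redis write, assuming ioredis; the key name and entry fields are illustrative, not the actual schema:

```ts
import Redis from "ioredis";

interface PageEntry {
  id: string;
  url: string;
  title?: string;
  ts: string; // ISO date, also written to pages.jsonl and pageinfo records
}

async function writePageToRedis(
  redis: Redis,
  crawlId: string,
  page: Omit<PageEntry, "ts">,
): Promise<void> {
  const entry: PageEntry = { ...page, ts: new Date().toISOString() };
  // hypothetical key name for the per-crawl pages list
  await redis.rpush(`${crawlId}:pages`, JSON.stringify(entry));
}
```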
Generate records for each page, containing a list of resources and their
status codes, to aid in future diffing/comparison.
Generates a `urn:pageinfo:<page url>` record for each page
- Adds POST / non-GET request canonicalization from warcio to handle
non-GET requests
- Adds `writeSingleRecord` to WARCWriter
Fixes #457
Support for rollover size and custom WARC prefix templates:
- re-enable --rolloverSize (defaulting to 1GB) for when a new WARC is
created
- support custom WARC prefix via --warcPrefix, prepended to new WARC
filename, test via basic_crawl.test.js
- filename template for new files is:
`${prefix}-${crawlId}-$ts-${this.workerid}.warc${this.gzip ? ".gz" : ""}`
with `$ts` replaced at new file creation time with the current timestamp
(see the sketch below)
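A sketch of expanding that template at file-creation time; the timestamp format is an assumption:

```ts
function newWarcFilename(
  prefix: string,
  crawlId: string,
  workerid: number,
  gzip: boolean,
): string {
  // template as described above, with $ts substituted at creation time
  const template = `${prefix}-${crawlId}-$ts-${workerid}.warc${gzip ? ".gz" : ""}`;
  const ts = new Date().toISOString().replace(/[^\d]/g, "");
  return template.replace("$ts", ts);
}

// e.g. newWarcFilename("mycrawl", "abc123", 0, true)
//   -> "mycrawl-abc123-20240101123045123-0.warc.gz"
```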
Improved support for long (non-terminating) responses, such as from
live-streaming:
- add a size to CDP takeStream to ensure data is streamed in fixed
chunks, defaulting to 64k (see the sketch below)
- change shutdown order: first close browser, then finish writing all
WARCs to ensure any truncated responses can be captured.
- ensure WARC is not rewritten after it is done; skip writing records if
the stream has already been flushed
- add a timeout to final fetch tasks to avoid hanging forever on finish
- fix adding the `WARC-Truncated` header: it needs to be set after the
stream is finished to determine whether the response has been truncated
- move temp download `tmp-dl` dir to the main temp folder, outside of the
collection (no need for it to be there).
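A sketch of reading a CDP stream in fixed-size chunks via IO.read; the generator shape is illustrative:

```ts
import { CDPSession } from "puppeteer-core";

// read the stream behind `handle` in fixed-size chunks (64KB by default)
// so long / non-terminating responses are streamed instead of buffered
async function* readStream(
  cdp: CDPSession,
  handle: string,
  size = 64 * 1024,
): AsyncGenerator<Uint8Array> {
  let eof = false;
  while (!eof) {
    const result = await cdp.send("IO.read", { handle, size });
    eof = result.eof;
    if (result.data.length) {
      yield result.base64Encoded
        ? Uint8Array.from(Buffer.from(result.data, "base64"))
        : new TextEncoder().encode(result.data);
    }
  }
  await cdp.send("IO.close", { handle });
}
```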
- add LogContext type and enumerate all log contexts
- also add LOG_CONTEXT_TYPES array to validate --context arg
- rename errJSON -> formatErr, convert unknown (likely Error) to dict
- make logger info/error/debug accept unknown as well, to avoid explicit 'any' typing in all catch handlers
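A sketch of what formatErr amounts to; the exact fields are illustrative:

```ts
// convert an unknown caught value (usually an Error) into a plain dict
// suitable for structured logging
function formatErr(e: unknown): Record<string, string> {
  if (e instanceof Error) {
    return { type: "exception", message: e.message, stack: e.stack || "" };
  }
  return { message: String(e) };
}

// since logger methods accept `unknown`, catch handlers need no explicit typing:
// try { ... } catch (e) { logger.error("operation failed", formatErr(e), "general"); }
```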
When calling directFetchCapture, and aborting the response via an
exception, throw `new Error("response-filtered-out");`
so that it can be ignored. This exception is only used for direct
capture and should not be logged as an error; rethrow and handle it in
the calling function to indicate the direct fetch is skipped.
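A sketch of the throw/catch flow; the filtering condition shown here is illustrative:

```ts
async function directFetchCapture(url: string): Promise<boolean> {
  const resp = await fetch(url);
  if (!resp.headers.get("content-type")?.startsWith("text/html")) {
    // non-HTML: capture the body directly
    return true;
  }
  // HTML should be loaded via the browser instead -- abort the direct capture
  throw new Error("response-filtered-out");
}

async function tryDirectFetch(url: string): Promise<boolean> {
  try {
    return await directFetchCapture(url);
  } catch (e) {
    if (e instanceof Error && e.message === "response-filtered-out") {
      // not a real error: direct fetch is simply skipped, page loads in browser
      return false;
    }
    throw e;
  }
}
```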
Previously, responses >2MB were streamed to disk and an empty response
returned to the browser, to avoid holding large responses in memory.
This limit was too small, as some HTML pages may be >2MB, resulting in no
content loaded.
This PR sets different limits:
- HTML, as well as other JS necessary for the page to load: 25MB
- All other content: 5MB
Also includes some more type fixing
This adds prettier to the repo, and sets up the pre-commit hook to
auto-format as well as lint.
Also updates ignore files to exclude crawls, test-crawls, scratch, and dist as needed.
Follows #424. Converts the upcoming 1.0.0 branch, based on native browser-based traffic capture and recording, to TypeScript. Fixes #426
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
Co-authored-by: emma <hi@emma.cafe>