Commit graph

163 commits

Author SHA1 Message Date
Ilya Kreymer
868cd7ab48 remove pywb dependency
- only keep py-wacz
- use cdxj-indexer for --generateCDX
2023-11-07 20:01:42 -08:00
Ilya Kreymer
e7a850c380
Apply suggestions from code review, remove commented out code
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2023-11-07 17:20:08 -08:00
Ilya Kreymer
53cfd39416 Merge branch 'main' (0.12.1 release) into recorder-work 2023-11-03 18:31:18 -07:00
Ilya Kreymer
dd7b926d87
Exclusion Optimizations: follow-up to (#423)
Follow-up to #408 - optimized exclusion filtering:
- use zscan with default count instead of ordered scan to remove
- use glob match when possible (non-regex as determined by string check)
- move isInScope() check to worker to avoid creating and then closing
a page for every excluded URL
- tests: update saved-state test to be more resilient to delays

args: also support '--text false' for backwards compatibility, fixes
webrecorder/browsertrix-cloud#1334

bump to 0.12.1
2023-11-03 15:15:09 -07:00
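The exclusion-optimization commit above distinguishes plain glob patterns (matchable server-side, e.g. via redis ZSCAN MATCH) from true regexes. A minimal sketch of that string check and the resulting match, with hypothetical helper names — the actual crawler code may differ:

```javascript
// Assumption: a pattern counts as a "plain glob" if it contains no regex
// metacharacters other than '*' (and '.', which is literal in a glob).
function isPlainGlob(pattern) {
  return !/[\\^$|?+()[\]{}]/.test(pattern);
}

// Match a URL against either a glob or a regex exclusion pattern.
function isExcluded(url, pattern) {
  if (isPlainGlob(pattern)) {
    // translate the glob into an anchored regex: escape literals, '*' -> '.*'
    const re = new RegExp(
      "^" +
        pattern
          .split("*")
          .map((s) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"))
          .join(".*") +
        "$"
    );
    return re.test(url);
  }
  return new RegExp(pattern).test(url);
}
```

The glob fast path avoids compiling a regex per queued URL when the pattern is a simple prefix/wildcard string.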
Ilya Kreymer
ccff712fb6 Merge branch 'main' into recorder-work 2023-10-31 23:11:35 -07:00
Ilya Kreymer
2aeda56d40
improved text extraction: (addresses #403) (#404)
- use DOMSnapshot.captureSnapshot instead of older DOM.getDocument to
get the snapshot (consistent with ArchiveWeb.page) - should be slightly
more performant
- keep option to use DOM.getDocument
- refactor warc resource writing to separate class, used by text
extraction and screenshots
- write extracted text to WARC files as 'urn:text:<url>' after page
loads, similar to screenshots
- also store final text to WARC as 'urn:textFinal:<url>' if it is
different
- cli options: update `--text` to take one or more comma-separated
string options `--text to-warc,to-pages,final-to-warc`. For backwards
compatibility, support `--text` and `--text true` to be equivalent to
`--text to-pages`.

---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2023-10-31 23:05:30 -07:00
benoit74
bc730a0d37
Return User-Agent on all code path to set headers appropriately (#420)
Fixes #419
2023-10-25 12:32:10 -04:00
Ilya Kreymer
8c92901889
load saved state fixes + redis tests (#415)
- set done key correctly, just an int now
- also check if array for old-style save states (for backwards
compatibility)
- fixes #411
- tests: includes tests using redis: tests save state + dynamically
adding exclusions (follow up to #408)
- adds `--debugAccessRedis` flag to allow accessing local redis outside
container
2023-10-23 09:36:10 -07:00
Ilya Kreymer
60cf313f50 reenable HEAD check + direct (non-browser) fetch of non-HTML pages:
- if HEAD succeeds, do a direct fetch of non-HTML resource
- add filter to AsyncFetcher: reject if non-200 response or response sets cookies
- set loadState to 'full page loaded' (2) for direct-fetched pages
- also set mime type to better differentiate non-HTML pages, and lower loadState
- AsyncFetcher dupe handling: load() returns "fetched", "dupe", or "notfetched" to differentiate dupe vs failed loading
- response async loading: if 'dupe', don't attempt to load again
- direct fetch: add ignoreDupe to ignore dupe check: if loading as page, always load again, even if previously loaded as a non-page resource
2023-10-21 10:21:35 -07:00
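The direct-fetch commit above lists the conditions under which a resource is fetched outside the browser. A sketch of that decision logic (function and parameter names are illustrative, not the crawler's actual API):

```javascript
// Assumption: after a successful HEAD request, a resource is fetched
// directly (without the browser) only if it returned 200, sets no
// cookies, and is not HTML -- HTML pages still load in the browser.
function shouldDirectFetch({ status, contentType, setsCookies }) {
  if (status !== 200) return false; // reject non-200 responses
  if (setsCookies) return false;    // reject if response sets cookies
  return !(contentType || "").startsWith("text/html");
}
```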
Ilya Kreymer
1a273abc20
remove tracking execution time here (handled in browsertrix cloud app instead) (#406)
- don't set start / end time in redis
- rename setEndTimeAndExit to setStatusAndExit

add 'fast cancel' option:
- add isCrawlCanceled() to state, which checks redis canceled key
- on interrupt, if canceled, immediately exit with status 0
- on fatal, exit with code 0 if restartsOnError is set
- no longer keeping track of start/end time in crawler itself
2023-10-09 12:28:58 -07:00
Ilya Kreymer
6f073779dc Merge branch 'main' to 'recorder-work', switching to Brave 2023-10-03 20:40:12 -07:00
Ilya Kreymer
8533f6ccf9
additional failure logic: (#402)
- logger.fatal() also sets crawl status to 'failed' and adds endTime before exiting
- add 'failOnFailedLimit' to set crawl status to 'failed' if number of failed pages exceeds limit, refactored from #393 to now use logger.fatal() to end crawl.
2023-10-03 20:21:30 -07:00
Tessa Walsh
a23f840318
Store crawler start and end times in Redis lists (#397)
* Store crawler start and end times in Redis lists

* end time tweaks:
- set end time for logger.fatal()
- set missing start time into setEndTime()

---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
2023-10-02 17:55:52 -07:00
Ilya Kreymer
bc30d5aaa8 Merge branch 'main' into recorder-work 2023-09-20 23:56:23 -05:00
Ilya Kreymer
165a9787af
logging and behaviors improvements (#389)
- run behaviors: check if behaviors object exists before trying to run behaviors to avoid failure message
- skip behaviors if frame no longer attached / has empty URL
2023-09-20 15:02:37 -04:00
Ilya Kreymer
7b0de11c2a Merge branch 'main' into recorder-work 2023-09-19 01:05:47 -05:00
Ilya Kreymer
c4287c7ed9
Error handling fixes to avoid crawler getting stuck. (#385)
* error handling fixes:
- listen to correct event for page crashes, 'error' instead of 'crash', may fix #371, #351
- more removal of duplicate logging for status-related errors, eg. if page crashed, don't log worker exception
- detect browser 'disconnected' event, interrupt crawl (but allow post-crawl tasks, such as waiting for pending requests to run), set browser to null to avoid trying to use again.

worker
- bump new page timeout to 20
- if loading page from new domain, always use new page

logger:
- log timestamp first for better sorting
2023-09-18 15:24:33 -07:00
Ilya Kreymer
0c88eb78af
favicon: use 127.0.0.1 instead of localhost (#384)
catch exception in fetch
bump to 0.11.1
2023-09-17 12:50:39 -07:00
Ilya Kreymer
e5b0c4ec1b
optimize link extraction: (fixes #376) (#380)
* optimize link extraction: (fixes #376)
- dedup urls in browser first
- don't return entire list of URLs, process one-at-a-time via callback
- add exposeFunction per page in setupPage, then register 'addLink' callback for each pages' handler
- optimize addqueue: atomically check if already at max urls and if url already seen in one redis call
- add QueueState enum to indicate possible states: url added, limit hit, or dupe url
- better logging: log rejected promises for link extraction
- tests: add test for exact page limit being reached
2023-09-15 10:12:08 -07:00
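The addqueue optimization above folds the max-URL check and the seen-URL check into one atomic redis call and reports the result via a QueueState enum. An in-memory model of that behavior, for illustration only — the real change performs both checks in a single redis operation:

```javascript
// Possible states for a URL offered to the queue, mirroring the
// QueueState enum described above (values are illustrative).
const QueueState = { ADDED: 0, LIMIT_HIT: 1, DUPE_URL: 2 };

// In-memory stand-in for the redis-backed queue: a URL is enqueued only
// if it has not been seen and the crawl is still under its page limit.
function makeQueue(maxUrls) {
  const seen = new Set();
  const queue = [];
  return {
    add(url) {
      if (seen.has(url)) return QueueState.DUPE_URL;
      if (seen.size >= maxUrls) return QueueState.LIMIT_HIT;
      seen.add(url);
      queue.push(url);
      return QueueState.ADDED;
    },
  };
}
```

Returning a distinct state per outcome lets the caller log rejected URLs precisely instead of guessing why an add was a no-op.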
Ilya Kreymer
3c9be514d3
behavior logging tweaks, add netIdle (#381)
* behavior logging tweaks, add netIdle
* fix shouldIncludeFrame() check: was actually erroring out and never accepting any iframes!
now used not only for link extraction but also to run behaviors
* add logging if iframe check fails
* Dockerfile: add commented out line to use local behaviors.js
* bump behaviors to 0.5.2
2023-09-14 19:48:41 -07:00
benoit74
d72443ced3
Add option to output stats file live, i.e. after each page crawled (#374)
* Add option to output stats file live, i.e. after each page crawled

* Always output stat files after each page crawled (+ test)

* Fix inversion between expected and test value
2023-09-14 15:16:19 -07:00
Ilya Kreymer
afecec01bd
status: fix typo setting status to log message (#379)
status should be set to 'done'!
2023-09-13 22:54:55 -07:00
Ilya Kreymer
a3cfc55c38
various fixes regarding state restart: (#370)
* additional fixes:
- use distinct exit code for subsequent interrupt (13) and fatal interrupt (17)
- if crawl has been stopped, mark for final exit for post crawl tasks
- stopped takes precedence over interrupted: if both, still exit with 0 (and marked for final exit)
- if no warcs found, crawl stopped, but previous pages found, don't consider failed!
- cleanup: remove unused code, rename to gracefulFinishOnInterrupt, separate from graceful finish via crawl stopped
2023-09-13 10:48:21 -07:00
Graham Hukill
1eeee2c215
Surface lastmod option for sitemap parser (#367)
* Surface lastmod option for sitemap parser
- Add --sitemapFromDate to use along with --useSitemap, which filters the
sitemap to URLs modified on or after the specified ISO date.

The library used to parse sitemaps for URLs added an optional
"lastmod" argument in v3.2.5 that allows filtering URLs returned
by a "last_modified" element present in sitemap XMLs.  This
surfaces that argument to the browsertrix-crawler CLI runtime
parameters.

This can be useful for orienting a crawl around a list of seeds
known to contain sitemaps, when only URLs modified on or after a
given date should be included.

---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
2023-09-13 10:20:41 -07:00
Ilya Kreymer
f8508a85ab
logging fixes: (#377)
- avoid duplicate logging for same error, if logging more specific message and rethrowing exception,
set e.detail to "logged" and worker exception handler will not log same error again
- add option to log timeouts as warnings instead of errors
- remove unneeded async method in browser, get headers directly
- fix logging in screenshots to include page
2023-09-13 10:05:05 -07:00
Ilya Kreymer
283fa00299
logging: resolve confusion with 'crawl done' not being written to log… (#375)
* logging: resolve confusion with 'crawl done' not being written to log, because the log is itself stored in the WACZ: (fixes #365)
- keep log file open until end, even if it's being written to WACZ, close before exit
- add logging of 'crawling done' when crawling is done (writing to WACZ or not)
- add debug logging of 'end of log file' to indicate log file is being added to WACZ and nothing else will be added there in the WACZ.
2023-09-13 10:04:09 -07:00
Anish Lakhwara
1c486ea1f3
Capture Favicon (#362)
- get favicon from CDP debug page, if available, log warning if not
- store in favIconUrl in pages.jsonl
- test: add test for favIcon and additional multi-page crawls
2023-09-10 11:29:35 -07:00
Ilya Kreymer
b95c535821
misc exit features: (#366)
- if interrupted (via signal or due to limits) and not finished, return error code 11 to indicate interruption
- allow stopping single instances with hset '<crawlid>:stopone' uid (similar to status)
- deliberate stop via redis not considered interruption (exit 0)
2023-09-06 11:14:18 -04:00
Ilya Kreymer
3c2f5f8934
link extraction optimization: for scopeType page, set depth == extraHops to avoid getting links (#364)
if we know no additional links will be used
2023-08-31 13:42:14 -07:00
Ilya Kreymer
60aec17a7c Merge branch 'main' into recorder-work 2023-08-22 17:49:59 -07:00
Ilya Kreymer
cf404efa13
improve crawl stopped check with unified isCrawlRunning() check, which checks both interrupted + redis-based state (#356)
- handle browser crash -- if getting new page fails after 5 tries, assume browser crashed and exit
- check if timedRun() returns a non-null value before expanding
- update timedRun() to rethrow any non-timeout exception, instead of just logging 'unknown exception', as it should be handled downstream.
2023-08-22 09:16:00 -07:00
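The commit above changes timedRun() to rethrow non-timeout exceptions rather than logging them as unknown. A minimal sketch of that pattern, assuming a Promise.race-based implementation (the crawler's actual signature may differ):

```javascript
// Sketch: race a task against a timeout. On timeout, resolve to null
// (callers must null-check the result, per the commit above); on any
// other error, rethrow so it is handled downstream instead of being
// swallowed as an "unknown exception".
function timedRun(promise, timeoutMs) {
  let timer;
  const timeout = new Promise((resolve) => {
    timer = setTimeout(() => resolve(null), timeoutMs);
  });
  return Promise.race([promise, timeout]).then(
    (result) => {
      clearTimeout(timer);
      return result;
    },
    (err) => {
      clearTimeout(timer);
      throw err; // rethrow non-timeout errors
    }
  );
}
```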
Ilya Kreymer
212bff0a27
mark for upload-and-delete when crawl is interrupted for any limit: total size, total time, or disk limit (#354) 2023-08-15 11:34:39 -07:00
Ilya Kreymer
2420896fc6 Merge branch 'main' into recorder-work 2023-08-08 23:05:18 -07:00
Ilya Kreymer
69fc1819d1
sizeLimit fix: (#347)
- only delete local data if uploading and uploaded succeeded, not after every sizeLimit interruption
- fixes #344
2023-08-01 00:04:10 -07:00
Ilya Kreymer
cf53a51cef Merge branch 'main' into recorder-work 2023-07-27 08:04:29 -07:00
Amani
442f4486d3
feat: Add custom behavior injection (#285)
* support loading custom behaviors from a specified directory via --customBehaviors
* call load() for each behavior incrementally, then call selectMainBehavior() (available in browsertrix-behaviors 0.5.1)
* tests: add tests for multiple custom behaviors

---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
2023-07-06 13:09:48 -07:00
wvengen
de2b4512b6
Allow configuration of deduplication policy (#331) (#332) 2023-07-06 14:54:35 -04:00
Tessa Walsh
254da95a44
Fix disk utilization computation errors (#338)
* Check size of /crawls by default to fix disk utilization check

* Refactor calculating percentage used and add unit tests

* add tests using df output with disk usage above and below threshold

---------

Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
2023-07-05 21:58:28 -07:00
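The disk-utilization fix above refactors the percentage calculation and tests it against sample df output. An illustrative sketch of that parsing and calculation (helper names hypothetical):

```javascript
// Percentage of disk used, from used/available block counts.
function calculatePercentageUsed(used, available) {
  return Math.round((used / (used + available)) * 100);
}

// Parse the first data row of `df`-style output (e.g. for /crawls)
// into used/available blocks and a percent-used figure.
function parseDfRow(dfOutput) {
  // skip the header line, split the first data row on whitespace
  const cols = dfOutput.trim().split("\n")[1].split(/\s+/);
  const used = Number(cols[2]);
  const available = Number(cols[3]);
  return { used, available, percentUsed: calculatePercentageUsed(used, available) };
}
```

Checking the /crawls mount specifically (rather than the container root) is what makes the utilization check reflect the actual crawl storage volume.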
Ilya Kreymer
e332ee4bec Merge branch 'main' into recorder-work 2023-06-16 22:02:12 -07:00
Ilya Kreymer
f51154facb
Chrome 112 + new headless mode + consistent viewport tweaks (#316)
* base: update to chrome 112
headless: switch to using new headless mode available in 112 which is more in sync with headful mode
viewport: use fixed viewport matching screen dimensions for headless and headful mode (if GEOMETRY is set)
profiles: fix catching new window message, reopening page in current window
versions: bump to pywb 2.7.4, update puppeteer-core to (20.2.1)
bump to 0.10.0-beta.4

* profile: force reopen in current window only for headless mode (currently breaks otherwise), remove logging messages
2023-05-22 16:24:39 -07:00
Ilya Kreymer
77f0a935aa
stopping: if crawl is marked as stopping, and no warcs found, mark state as failed also (to avoid loop in cloud when (#314)
crawler is restarted)
2023-05-19 07:38:16 -07:00
Tessa Walsh
f3c64b2b07
Consolidate wacz error loglines (#306)
* Print WACZ and reindexing errors/stacktraces on single line

* Log full stderr as single line if debug is enabled
2023-05-07 13:00:56 -07:00
Ilya Kreymer
f4c4203381
crawl stopping / additional states: (#303)
* crawl stopping / additional states:
- adds check for 'isCrawlStopped()' which checks redis key to see if crawl has been stopped externally, and interrupts work
loop and prevents crawl from starting on load
- additional crawl states: 'generate-wacz', 'generate-cdx', 'generate-warc', 'uploading-wacz', and 'pending-wait' to indicate
when crawl is no longer running but crawler performing work
- addresses part of webrecorder/browsertrix-cloud#263, webrecorder/browsertrix-cloud#637
2023-05-03 16:25:59 -07:00
Tessa Walsh
d4bc9e80b9
Catch 4xx and 5xx page.goto() responses to mark invalid URLs as failed (#300)
* Catch 400 pywb errors on page load and mark page failed

* Add --failOnFailedSeed option to fail crawl with exit code 1 if seed doesn't load, resolves #207

* Handle 4xx or 5xx page.goto responses as page load errors
2023-04-26 16:49:32 -07:00
Ilya Kreymer
361f765ae9 Merge branch 'main' into recorder-work 2023-04-26 15:44:01 -07:00
Ilya Kreymer
71b618fe94
Switch back to Puppeteer from Playwright (#301)
- reduced memory usage, avoids memory leak issues caused by using playwright (see #298) 
- browser: split Browser into Browser and BaseBrowser
- browser: puppeteer-specific functions added to Browser for additional flexibility if need to change again later
- browser: use defaultArgs from playwright
- browser: attempt to recover if initial target is gone
- logging: add debug logging from process.memoryUsage() after every page
- request interception: use priorities for cooperative request interception
- request interception: move to setupPage() to run once per page, enable if any of blockrules, adblockrules or originOverrides are used
- request interception: fix originOverrides enabled check, fix to work with catch-all request interception
- default args: set --waitUntil back to 'load,networkidle2'
- Update README with changes for puppeteer
- tests: fix extra hops depth test to ensure more than one page crawled

---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2023-04-26 15:41:35 -07:00
Ilya Kreymer
d4e222fab2
merge regression fixes from 0.9.1: full page screenshot + allow service workers if no profile used (#297)
* browser: just pass profileUrl and track if custom profile is used
browser: don't disable service workers always (accidentally added as part of playwright migration)
only disable if using profile, same as 0.8.x behavior
fix for #288

* Fix full page screenshot (#296)
---------

Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2023-04-24 10:26:56 -07:00
Ilya Kreymer
3aad61ab58 Merge branch 'main' into recorder-work 2023-04-19 23:42:46 -07:00
Ilya Kreymer
3d8e21ea59
origin override: add --originOverride source=dest to allow routing where https://src-host:src-port/path/page.html -> http://dest-host:dest-port/path/page.html where source=https://src-host:src-port and dest=http://dest-host:dest-port (#281) 2023-04-19 19:17:15 -07:00
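The --originOverride routing above maps a source origin to a destination origin while preserving the rest of the URL. A sketch of that rewrite using the WHATWG URL API (function name illustrative):

```javascript
// If the request URL's origin matches the override source, swap in the
// destination origin, keeping path and query intact; otherwise pass
// the URL through unchanged.
function applyOriginOverride(url, source, dest) {
  const u = new URL(url);
  if (u.origin === new URL(source).origin) {
    return new URL(dest).origin + u.pathname + u.search;
  }
  return url;
}
```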
Tessa Walsh
4143ebbd02
Store archive dir size in Redis (#291) 2023-04-19 18:10:02 -07:00