#! /usr/bin/env python

# Original code by Guido van Rossum; extensive changes by Sam Bayer,
# including code to check URL fragments.

"""Web tree checker.

This utility is handy to check a subweb of the world-wide web for
errors.  A subweb is specified by giving one or more ``root URLs''; a
page belongs to the subweb if one of the root URLs is an initial
prefix of it.

File URL extension:

In order to ease the checking of subwebs via the local file system,
the interpretation of ``file:'' URLs is extended to mimic the behavior
of your average HTTP daemon: if a directory pathname is given, the
file index.html in that directory is returned if it exists, otherwise
a directory listing is returned.  Now, you can point webchecker to the
document tree in the local file system of your HTTP daemon, and have
most of it checked.  In fact the default works this way if your local
web tree is located at /usr/local/etc/httpd/htdocs (the default for
the NCSA HTTP daemon and probably others).
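
For example, to check such a local tree directly (the path is simply
the default root and may differ on your system):

    webchecker.py file:/usr/local/etc/httpd/htdocs/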

Report printed:

When done, it reports pages with bad links within the subweb.  When
interrupted, it reports on the pages that it has checked already.

In verbose mode, additional messages are printed during the
information gathering phase.  By default, it prints a summary of its
work status every 50 URLs (adjustable with the -r option), and it
reports errors as they are encountered.  Use the -q option to disable
this output.

Checkpoint feature:

Whether interrupted or not, it dumps its state (a Python pickle) to a
checkpoint file, and the -R option allows it to restart from the
checkpoint (assuming that the pages on the subweb that were already
processed haven't changed).  Even when it has run till completion, -R
can still be useful -- it will print the reports again, and -Rq prints
the errors only.  In this case, the checkpoint file is not written
again.  The checkpoint file can be set with the -d option.

The checkpoint file is written as a Python pickle.  Remember that
Python's pickle module is currently quite slow.  Give it the time it
needs to load and save the checkpoint file.  When interrupted while
writing the checkpoint file, the old checkpoint file is not
overwritten, but all work done in the current run is lost.
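
For example (the URL is illustrative), a long run can be interrupted
and later resumed from the saved checkpoint:

    webchecker.py http://www.example.com/    # interrupt with ^C
    webchecker.py -R                         # resume from @webchecker.pickle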

Miscellaneous:

- You may find the (Tk-based) GUI version easier to use.  See wcgui.py.

- Webchecker honors the "robots.txt" convention.  Thanks to Skip
Montanaro for his robotparser.py module (included in this directory)!
The agent name is hardwired to "webchecker".  URLs that are disallowed
by the robots.txt file are reported as external URLs.

- Because the SGML parser is a bit slow, very large SGML files are
skipped.  The size limit can be set with the -m option.

- When the server or protocol does not tell us a file's type, we guess
it based on the URL's suffix.  The mimetypes.py module (also in this
directory) has a built-in table mapping most currently known suffixes,
and in addition attempts to read the mime.types configuration files in
the default locations of Netscape and the NCSA HTTP daemon.

- We follow links indicated by <A>, <FRAME> and <IMG> tags.  We also
honor the <BASE> tag.

- We now check internal NAME anchor links, as well as toplevel links.

- Checking external links is now done by default; use -x to *disable*
this feature.  External links are now checked during normal
processing.  (XXX The status of a checked link could be categorized
better.  Later...)

- If external links are not checked, you can use the -t flag to
provide specific overrides to -x.
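
For example (URLs illustrative), skip external links in general but
still treat a second tree on the same server as internal:

    webchecker.py -x -t http://www.example.com/other/ http://www.example.com/docs/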

Usage: webchecker.py [option] ... [rooturl] ...

Options:

-R        -- restart from checkpoint file
-d file   -- checkpoint filename (default %(DUMPFILE)s)
-m bytes  -- skip HTML pages larger than this size (default %(MAXPAGE)d)
-n        -- reports only, no checking (use with -R)
-q        -- quiet operation (also suppresses external links report)
-r number -- number of links processed per round (default %(ROUNDSIZE)d)
-t root   -- specify root dir which should be treated as internal (can repeat)
-v        -- verbose operation; repeating -v will increase verbosity
-x        -- don't check external links (these are often slow to check)
-a        -- don't check name anchors

Arguments:

rooturl   -- URL to start checking
             (default %(DEFROOT)s)

"""


__version__ = "$Revision$"


import sys
import os
from types import *
import StringIO
import getopt
import pickle

import urllib
import urlparse
import sgmllib
import cgi

import mimetypes
import robotparser

# Extract real version number if necessary
if __version__[0] == '$':
    _v = __version__.split()
    if len(_v) == 3:
        __version__ = _v[1]


# Tunable parameters
DEFROOT = "file:/usr/local/etc/httpd/htdocs/"   # Default root URL
CHECKEXT = 1                            # Check external references (1 deep)
VERBOSE = 1                             # Verbosity level (0-3)
MAXPAGE = 150000                        # Ignore files bigger than this
ROUNDSIZE = 50                          # Number of links processed per round
DUMPFILE = "@webchecker.pickle"         # Pickled checkpoint
AGENTNAME = "webchecker"                # Agent name for robots.txt parser
NONAMES = 0                             # Skip name anchor checking if true


# Global variables


def main():
    checkext = CHECKEXT
    verbose = VERBOSE
    maxpage = MAXPAGE
    roundsize = ROUNDSIZE
    dumpfile = DUMPFILE
    restart = 0
    norun = 0

    try:
        opts, args = getopt.getopt(sys.argv[1:], 'Rd:m:nqr:t:vxa')
    except getopt.error, msg:
        sys.stdout = sys.stderr
        print msg
        print __doc__%globals()
        sys.exit(2)

    # The extra_roots list collects the -t arguments.
    extra_roots = []
    nonames = NONAMES

    for o, a in opts:
        if o == '-R':
            restart = 1
        if o == '-d':
            dumpfile = a
        if o == '-m':
            maxpage = int(a)
        if o == '-n':
            norun = 1
        if o == '-q':
            verbose = 0
        if o == '-r':
            roundsize = int(a)
        if o == '-t':
            extra_roots.append(a)
        if o == '-a':
            nonames = not nonames
        if o == '-v':
            verbose = verbose + 1
        if o == '-x':
            checkext = not checkext

    if verbose > 0:
        print AGENTNAME, "version", __version__

    if restart:
        c = load_pickle(dumpfile=dumpfile, verbose=verbose)
    else:
        c = Checker()

    c.setflags(checkext=checkext, verbose=verbose,
               maxpage=maxpage, roundsize=roundsize,
               nonames=nonames
               )

    if not restart and not args:
        args.append(DEFROOT)

    for arg in args:
        c.addroot(arg)

    # The -t flag is only needed if external links are not to be
    # checked. So -t values are ignored unless -x was specified.
    if not checkext:
        for root in extra_roots:
            # Make sure it's terminated by a slash,
            # so that addroot doesn't discard the last
            # directory component.
            if root[-1] != "/":
                root = root + "/"
            c.addroot(root, add_to_do = 0)

    try:

        if not norun:
            try:
                c.run()
            except KeyboardInterrupt:
                if verbose > 0:
                    print "[run interrupted]"

        try:
            c.report()
        except KeyboardInterrupt:
            if verbose > 0:
                print "[report interrupted]"

    finally:
        if c.save_pickle(dumpfile):
            if dumpfile == DUMPFILE:
                print "Use ``%s -R'' to restart." % sys.argv[0]
            else:
                print "Use ``%s -R -d %s'' to restart." % (sys.argv[0],
                                                           dumpfile)


def load_pickle(dumpfile=DUMPFILE, verbose=VERBOSE):
    if verbose > 0:
        print "Loading checkpoint from %s ..." % dumpfile
    f = open(dumpfile, "rb")
    c = pickle.load(f)
    f.close()
    if verbose > 0:
        print "Done."
        print "Root:", "\n      ".join(c.roots)
    return c
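
# A minimal sketch of driving the Checker class below programmatically,
# instead of going through main(); the root URL is purely illustrative:
#
#     c = Checker()
#     c.setflags(checkext=0, verbose=2)
#     c.addroot("http://www.example.com/docs/")
#     c.run()
#     c.report()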
					
						
class Checker:

    checkext = CHECKEXT
    verbose = VERBOSE
    maxpage = MAXPAGE
    roundsize = ROUNDSIZE
    nonames = NONAMES

    validflags = tuple(dir())

    def __init__(self):
        self.reset()

    def setflags(self, **kw):
        for key in kw.keys():
            if key not in self.validflags:
                raise NameError, "invalid keyword argument: %s" % str(key)
        for key, value in kw.items():
            setattr(self, key, value)

    def reset(self):
        self.roots = []
        self.todo = {}
        self.done = {}
        self.bad = {}

        # Add a name table, so that the name URLs can be checked. Also
        # serves as an implicit cache for which URLs are done.
        self.name_table = {}

        self.round = 0
        # The following are not pickled:
        self.robots = {}
        self.errors = {}
        self.urlopener = MyURLopener()
        self.changed = 0

    def note(self, level, format, *args):
        if self.verbose > level:
            if args:
                format = format%args
            self.message(format)

    def message(self, format, *args):
        if args:
            format = format%args
        print format

    def __getstate__(self):
        return (self.roots, self.todo, self.done, self.bad, self.round)

    def __setstate__(self, state):
        self.reset()
        (self.roots, self.todo, self.done, self.bad, self.round) = state
        for root in self.roots:
            self.addrobot(root)
        for url in self.bad.keys():
            self.markerror(url)

    def addroot(self, root, add_to_do = 1):
        if root not in self.roots:
            troot = root
            scheme, netloc, path, params, query, fragment = \
                    urlparse.urlparse(root)
            i = path.rfind("/") + 1
            if 0 < i < len(path):
                path = path[:i]
                troot = urlparse.urlunparse((scheme, netloc, path,
                                             params, query, fragment))
            self.roots.append(troot)
            self.addrobot(root)
            if add_to_do:
                self.newlink((root, ""), ("<root>", root))
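
    # Note: as a hypothetical example, addroot("http://www.example.com/docs/index.html")
    # records the trimmed prefix "http://www.example.com/docs/" in self.roots
    # (the last path component is dropped), while the full URL itself is
    # queued for checking.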

    def addrobot(self, root):
        root = urlparse.urljoin(root, "/")
        if self.robots.has_key(root): return
        url = urlparse.urljoin(root, "/robots.txt")
        self.robots[root] = rp = robotparser.RobotFileParser()
        self.note(2, "Parsing %s", url)
        rp.debug = self.verbose > 3
        rp.set_url(url)
        try:
            rp.read()
        except (OSError, IOError), msg:
            self.note(1, "I/O error parsing %s: %s", url, msg)

    def run(self):
        while self.todo:
            self.round = self.round + 1
            self.note(0, "\nRound %d (%s)\n", self.round, self.status())
            urls = self.todo.keys()
            urls.sort()
            del urls[self.roundsize:]
            for url in urls:
                self.dopage(url)

    def status(self):
        return "%d total, %d to do, %d done, %d bad" % (
            len(self.todo)+len(self.done),
            len(self.todo), len(self.done),
            len(self.bad))

    def report(self):
        self.message("")
        if not self.todo: s = "Final"
        else: s = "Interim"
        self.message("%s Report (%s)", s, self.status())
        self.report_errors()

    def report_errors(self):
        if not self.bad:
            self.message("\nNo errors")
            return
        self.message("\nError Report:")
        sources = self.errors.keys()
        sources.sort()
        for source in sources:
            triples = self.errors[source]
            self.message("")
            if len(triples) > 1:
                self.message("%d Errors in %s", len(triples), source)
            else:
                self.message("Error in %s", source)
            # Call self.format_url() instead of referring
            # to the URL directly, since the URLs in these
            # triples are now (URL, fragment) pairs.  The value
            # of the "source" variable comes from the list of
            # origins, and is a URL, not a pair.
            for url, rawlink, msg in triples:
                if rawlink != self.format_url(url): s = " (%s)" % rawlink
                else: s = ""
                self.message("  HREF %s%s\n    msg %s",
                             self.format_url(url), s, msg)

    def dopage(self, url_pair):

        # All printing of URLs uses format_url(); argument changed to
        # url_pair for clarity.
        if self.verbose > 1:
            if self.verbose > 2:
                self.show("Check ", self.format_url(url_pair),
                          "  from", self.todo[url_pair])
            else:
                self.message("Check %s", self.format_url(url_pair))
        url, local_fragment = url_pair
        if local_fragment and self.nonames:
            self.markdone(url_pair)
            return
        try:
            page = self.getpage(url_pair)
        except sgmllib.SGMLParseError, msg:
            msg = self.sanitize(msg)
            self.note(0, "Error parsing %s: %s",
                          self.format_url(url_pair), msg)
            # Don't actually mark the URL as bad - it exists, we just
            # can't parse it!
            page = None
        if page:
            # Store the page which corresponds to this URL.
            self.name_table[url] = page
            # If there is a fragment in this url_pair, and it's not
            # in the list of names for the page, call setbad(), since
            # it's a missing anchor.
            if local_fragment and local_fragment not in page.getnames():
                self.setbad(url_pair, ("Missing name anchor `%s'" % local_fragment))
            for info in page.getlinkinfos():
                # getlinkinfos() now returns the fragment as well,
                # and we store that fragment here in the "todo" dictionary.
                link, rawlink, fragment = info
                # However, we don't want the fragment as the origin, since
                # the origin is logically a page.
                origin = url, rawlink
                self.newlink((link, fragment), origin)
        else:
            # If no page has been created yet, we want to
            # record that fact.
            self.name_table[url_pair[0]] = None
        self.markdone(url_pair)

    def newlink(self, url, origin):
        if self.done.has_key(url):
            self.newdonelink(url, origin)
        else:
            self.newtodolink(url, origin)

    def newdonelink(self, url, origin):
        if origin not in self.done[url]:
            self.done[url].append(origin)

        # Call self.format_url(), since the URL here
        # is now a (URL, fragment) pair.
        self.note(3, "  Done link %s", self.format_url(url))

        # Make sure that if it's bad, that the origin gets added.
        if self.bad.has_key(url):
            source, rawlink = origin
            triple = url, rawlink, self.bad[url]
            self.seterror(source, triple)

    def newtodolink(self, url, origin):
        # Call self.format_url(), since the URL here
        # is now a (URL, fragment) pair.
        if self.todo.has_key(url):
            if origin not in self.todo[url]:
                self.todo[url].append(origin)
            self.note(3, "  Seen todo link %s", self.format_url(url))
        else:
            self.todo[url] = [origin]
            self.note(3, "  New todo link %s", self.format_url(url))
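
    # For illustration (hypothetical data): if a page at
    # "http://www.example.com/a.html" links to "b.html#sec1", newtodolink()
    # above ends up with self.todo mapping the pair
    # ("http://www.example.com/b.html", "sec1") to the origin list
    # [("http://www.example.com/a.html", "b.html#sec1")].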
 | 
					
						
							| 
									
										
										
										
											2004-07-18 06:16:08 +00:00
										 |  |  |     def format_url(self, url): | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00
										 |  |  |         link, fragment = url | 
					
						
							|  |  |  |         if fragment: return link + "#" + fragment | 
					
						
							|  |  |  |         else: return link | 
					
						
							| 
									
										
										
										
											1997-01-31 14:43:15 +00:00
										 |  |  | 
 | 
					
						
							|  |  |  |     def markdone(self, url): | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |         self.done[url] = self.todo[url] | 
					
						
							|  |  |  |         del self.todo[url] | 
					
						
							|  |  |  |         self.changed = 1 | 
					
						
							| 
									
										
										
										
											1997-01-30 02:44:48 +00:00
										 |  |  | 
 | 
					
						
							|  |  |  |     def inroots(self, url): | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |         for root in self.roots: | 
					
						
							|  |  |  |             if url[:len(root)] == root: | 
					
						
							| 
									
										
										
										
											1998-07-08 03:04:39 +00:00
										 |  |  |                 return self.isallowed(root, url) | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |         return 0 | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00
										 |  |  | 
 | 
					
						
							| 
									
										
										
										
											1998-07-08 03:04:39 +00:00
										 |  |  |     def isallowed(self, root, url): | 
					
						
							|  |  |  |         root = urlparse.urljoin(root, "/") | 
					
						
							|  |  |  |         return self.robots[root].can_fetch(AGENTNAME, url) | 
					
						
							| 
									
										
										
										
											1997-01-30 02:44:48 +00:00
										 |  |  | 
 | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00
										 |  |  |     def getpage(self, url_pair): | 
					
						
							|  |  |  |         # Incoming argument name is a (URL, fragment) pair. | 
					
						
							|  |  |  |         # The page may have been cached in the name_table variable. | 
					
						
							|  |  |  |         url, fragment = url_pair | 
					
						
							|  |  |  |         if self.name_table.has_key(url): | 
					
						
							|  |  |  |             return self.name_table[url] | 
					
						
							|  |  |  | 
 | 
					
						
							| 
									
										
										
										
											2002-03-08 17:19:10 +00:00
										 |  |  |         scheme, path = urllib.splittype(url) | 
					
						
							| 
									
										
										
										
											2001-04-04 17:47:25 +00:00
										 |  |  |         if scheme in ('mailto', 'news', 'javascript', 'telnet'): | 
					
						
							|  |  |  |             self.note(1, " Not checking %s URL" % scheme) | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |             return None | 
					
						
							|  |  |  |         isint = self.inroots(url) | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00
										 |  |  | 
 | 
					
						
							|  |  |  |         # Ensure that openpage gets the URL pair to | 
					
						
							|  |  |  |         # print out its error message and record the error pair | 
					
						
							|  |  |  |         # correctly. | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |         if not isint: | 
					
						
							|  |  |  |             if not self.checkext: | 
					
						
							| 
									
										
										
										
											1998-07-08 03:04:39 +00:00
										 |  |  |                 self.note(1, " Not checking ext link") | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |                 return None | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00
										 |  |  |             f = self.openpage(url_pair) | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |             if f: | 
					
						
							|  |  |  |                 self.safeclose(f) | 
					
						
							|  |  |  |             return None | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00
										 |  |  |         text, nurl = self.readhtml(url_pair) | 
					
						
							|  |  |  | 
 | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |         if nurl != url: | 
					
						
							| 
									
										
										
										
											1998-07-08 03:04:39 +00:00
										 |  |  |             self.note(1, " Redirected to %s", nurl) | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |             url = nurl | 
					
						
							|  |  |  |         if text: | 
					
						
							| 
									
										
										
										
											1998-07-08 03:04:39 +00:00
										 |  |  |             return Page(text, url, maxpage=self.maxpage, checker=self) | 
					
						
							| 
									
										
										
										
											1998-02-21 20:02:09 +00:00
										 |  |  | 
 | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00
										 |  |  |     # These next three functions take (URL, fragment) pairs as | 
					
						
							|  |  |  |     # arguments, so that openpage() receives the appropriate tuple to | 
					
						
							|  |  |  |     # record error messages. | 
					
						
							|  |  |  |     def readhtml(self, url_pair): | 
					
						
							|  |  |  |         url, fragment = url_pair | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |         text = None | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00
										 |  |  |         f, url = self.openhtml(url_pair) | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |         if f: | 
					
						
							|  |  |  |             text = f.read() | 
					
						
							|  |  |  |             f.close() | 
					
						
							|  |  |  |         return text, url | 
					
						
							| 
									
										
										
										
											1998-02-21 20:02:09 +00:00
										 |  |  | 
 | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00
										 |  |  |     def openhtml(self, url_pair): | 
					
						
							|  |  |  |         url, fragment = url_pair | 
					
						
							|  |  |  |         f = self.openpage(url_pair) | 
					
						
							| 
									
										
										
										
											1998-04-06 14:29:28 +00:00
										 |  |  |         if f: | 
					
						
							|  |  |  |             url = f.geturl() | 
					
						
							|  |  |  |             info = f.info() | 
					
						
							|  |  |  |             if not self.checkforhtml(info, url): | 
					
						
							|  |  |  |                 self.safeclose(f) | 
					
						
							|  |  |  |                 f = None | 
					
						
							|  |  |  |         return f, url | 
					
						
							| 
									
										
										
										
											1998-02-21 20:02:09 +00:00
										 |  |  | 
 | 
					
						
							| 
									
										
										
										
											1999-11-17 15:40:08 +00:00

    def openpage(self, url_pair):
        url, fragment = url_pair
        try:
            return self.urlopener.open(url)
        except (OSError, IOError), msg:
            msg = self.sanitize(msg)
            self.note(0, "Error %s", msg)
            if self.verbose > 0:
                self.show(" HREF ", url, "  from", self.todo[url_pair])
            self.setbad(url_pair, msg)
            return None

    def checkforhtml(self, info, url):
        if info.has_key('content-type'):
            ctype = cgi.parse_header(info['content-type'])[0].lower()
            if ';' in ctype:
                # handle content-type: text/html; charset=iso8859-1 :
                ctype = ctype.split(';', 1)[0].strip()
        else:
            if url[-1:] == "/":
                return 1
            ctype, encoding = mimetypes.guess_type(url)
        if ctype == 'text/html':
            return 1
        else:
            self.note(1, " Not HTML, mime type %s", ctype)
            return 0
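
    # For example, cgi.parse_header() splits a header value into the
    # content type proper and its parameters:
    #
    #     cgi.parse_header('text/html; charset=iso8859-1')
    #     # -> ('text/html', {'charset': 'iso8859-1'})
    #
    # and when there is no content-type header (e.g. a local file),
    # mimetypes.guess_type('page.html') returns ('text/html', None).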

    def setgood(self, url):
        if self.bad.has_key(url):
            del self.bad[url]
            self.changed = 1
            self.note(0, "(Clear previously seen error)")

    def setbad(self, url, msg):
        if self.bad.has_key(url) and self.bad[url] == msg:
            self.note(0, "(Seen this error before)")
            return
        self.bad[url] = msg
        self.changed = 1
        self.markerror(url)

    def markerror(self, url):
        try:
            origins = self.todo[url]
        except KeyError:
            origins = self.done[url]
        for source, rawlink in origins:
            triple = url, rawlink, self.bad[url]
            self.seterror(source, triple)

    def seterror(self, url, triple):
        try:
            # Because of the way the URLs are now processed, we must
            # check that the triple hasn't already been entered in the
            # error list.  The first element of the triple is a
            # (URL, fragment) pair, but the URL key is not, since it
            # comes from the list of origins.
            if triple not in self.errors[url]:
                self.errors[url].append(triple)
        except KeyError:
            self.errors[url] = [triple]
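
    # The resulting mapping has this shape (hypothetical values):
    #
    #     self.errors = {
    #         'http://example.com/index.html':
    #             [(('http://example.com/dead.html', ''), 'dead.html',
    #               'error message')],
    #     }
    #
    # i.e. origin page -> list of (bad (URL, fragment) pair, raw link
    # as written in the page, error message) triples.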

    # The following used to be toplevel functions; they have been
    # changed into methods so they can be overridden in subclasses.

    def show(self, p1, link, p2, origins):
        self.message("%s %s", p1, link)
        i = 0
        for source, rawlink in origins:
            i = i+1
            if i == 2:
                p2 = ' '*len(p2)
            if rawlink != link: s = " (%s)" % rawlink
            else: s = ""
            self.message("%s %s%s", p2, source, s)

    def sanitize(self, msg):
        if isinstance(IOError, ClassType) and isinstance(msg, IOError):
            # IOError is a class exception here; sanitize its args
            # tuple recursively via the tuple branch below
            msg.args = self.sanitize(msg.args)
        elif isinstance(msg, TupleType):
            if len(msg) >= 4 and msg[0] == 'http error' and \
               isinstance(msg[3], InstanceType):
                # Remove the Message instance -- it may contain
                # a file object which prevents pickling.
                msg = msg[:3] + msg[4:]
        return msg
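
    # A sketch of the tuple being sanitized (hypothetical status code):
    # an HTTP error surfaces from urllib as
    #
    #     ('http error', 404, 'Not Found', <mimetools.Message instance>)
    #
    # and the Message instance is dropped because its underlying file
    # object cannot be pickled into the checkpoint.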

    def safeclose(self, f):
        try:
            url = f.geturl()
        except AttributeError:
            pass
        else:
            if url[:4] == 'ftp:' or url[:7] == 'file://':
                # Apparently ftp connections don't like to be closed
                # prematurely...
                text = f.read()
        f.close()

    def save_pickle(self, dumpfile=DUMPFILE):
        if not self.changed:
            self.note(0, "\nNo need to save checkpoint")
        elif not dumpfile:
            self.note(0, "No dumpfile, won't save checkpoint")
        else:
            self.note(0, "\nSaving checkpoint to %s ...", dumpfile)
            newfile = dumpfile + ".new"
            f = open(newfile, "wb")
            pickle.dump(self, f)
            f.close()
            try:
                os.unlink(dumpfile)
            except os.error:
                pass
            os.rename(newfile, dumpfile)
            self.note(0, "Done.")
            return 1
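
    # The write-new-then-rename dance above keeps a usable checkpoint
    # around even if the dump is interrupted: the pickle is written to
    # "<dumpfile>.new" first, and only replaces the old file once it is
    # complete.  A minimal sketch of the same idiom (hypothetical file
    # names):
    #
    #     f = open("checkpoint.new", "wb")
    #     pickle.dump(state, f)
    #     f.close()
    #     os.rename("checkpoint.new", "checkpoint")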


class Page:

    def __init__(self, text, url, verbose=VERBOSE, maxpage=MAXPAGE, checker=None):
        self.text = text
        self.url = url
        self.verbose = verbose
        self.maxpage = maxpage
        self.checker = checker

        # The parsing of the page is done in the __init__() routine in
        # order to initialize the list of names the file contains.
        # The parser is stored in an instance variable, and the URL is
        # passed to MyHTMLParser().
        size = len(self.text)
        if size > self.maxpage:
            self.note(0, "Skip huge file %s (%.0f Kbytes)", self.url, (size*0.001))
            self.parser = None
            return
        self.note(2, "  Parsing %s (%d bytes)", self.url, size)
        self.parser = MyHTMLParser(url, verbose=self.verbose,
                                   checker=self.checker)
        self.parser.feed(self.text)
        self.parser.close()

    def note(self, level, msg, *args):
        if self.checker:
            apply(self.checker.note, (level, msg) + args)
        else:
            if self.verbose >= level:
                if args:
                    msg = msg%args
                print msg
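
    # Level 0 messages always print; higher levels print only when the
    # verbosity is at least that high.  E.g. with verbose=1,
    # note(0, "Error %s", err) is shown while note(2, "  Parsing %s",
    # url) stays quiet.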

    # Method to retrieve names.
    def getnames(self):
        if self.parser:
            return self.parser.names
        else:
            return []

    def getlinkinfos(self):
        # File reading is done in the __init__() routine; the parser
        # stored in the instance variable indicates whether parsing
        # succeeded.

        # If no parser was stored, fail.
        if not self.parser: return []

        rawlinks = self.parser.getlinks()
        base = urlparse.urljoin(self.url, self.parser.getbase() or "")
        infos = []
        for rawlink in rawlinks:
            t = urlparse.urlparse(rawlink)
            # DON'T DISCARD THE FRAGMENT! Instead, include
            # it in the tuples which are returned.  See Checker.dopage().
            fragment = t[-1]
            t = t[:-1] + ('',)
            rawlink = urlparse.urlunparse(t)
            link = urlparse.urljoin(base, rawlink)
            infos.append((link, rawlink, fragment))

        return infos
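
    # For a page at http://example.com/dir/ containing
    # <A HREF="sub/x.html#top">, the returned info tuple would be
    # (hypothetical values):
    #
    #     ('http://example.com/dir/sub/x.html', 'sub/x.html', 'top')
    #
    # i.e. (absolute link with the fragment stripped, raw link as
    # written in the page, fragment).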


# A StringIO that also answers the info() and geturl() calls made on
# the file-like objects urllib returns, so that a synthesized directory
# listing can stand in for a fetched page.
class MyStringIO(StringIO.StringIO):

    def __init__(self, url, info):
        self.__url = url
        self.__info = info
        StringIO.StringIO.__init__(self)

    def info(self):
        return self.__info

    def geturl(self):
        return self.__url
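
# E.g. (hypothetical values):
#
#     s = MyStringIO("file:/tmp/dir/", {'content-type': 'text/html'})
#
# behaves enough like an object returned by urllib for checkforhtml()
# to inspect its content type and for safeclose() to close it.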


class MyURLopener(urllib.FancyURLopener):

    http_error_default = urllib.URLopener.http_error_default

    def __init__(*args):
        # Old-style varargs method: self is passed in args[0]
        self = args[0]
        apply(urllib.FancyURLopener.__init__, args)
        self.addheaders = [
            ('User-agent', 'Python-webchecker/%s' % __version__),
            ]

    def http_error_401(self, url, fp, errcode, errmsg, headers):
        # Suppress FancyURLopener's interactive password prompt;
        # unauthorized pages are simply not retried
        return None

    def open_file(self, url):
        path = urllib.url2pathname(urllib.unquote(url))
        if os.path.isdir(path):
            if path[-1] != os.sep:
                url = url + '/'
            indexpath = os.path.join(path, "index.html")
            if os.path.exists(indexpath):
                return self.open_file(url + "index.html")
            try:
                names = os.listdir(path)
            except os.error, msg:
                raise IOError, msg, sys.exc_traceback
            names.sort()
            s = MyStringIO("file:"+url, {'content-type': 'text/html'})
            s.write('<BASE HREF="file:%s">\n' %
                    urllib.quote(os.path.join(path, "")))
            for name in names:
                q = urllib.quote(name)
                s.write('<A HREF="%s">%s</A>\n' % (q, q))
            s.seek(0)
            return s
        return urllib.FancyURLopener.open_file(self, url)
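
    # For a directory /docs containing a.html and b.html (and no
    # index.html), the synthesized listing would read roughly:
    #
    #     <BASE HREF="file:/docs/">
    #     <A HREF="a.html">a.html</A>
    #     <A HREF="b.html">b.html</A>
    #
    # the BASE tag makes the relative links resolve against the
    # directory itself.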


class MyHTMLParser(sgmllib.SGMLParser):

    def __init__(self, url, verbose=VERBOSE, checker=None):
        self.myverbose = verbose # now unused
        self.checker = checker
        self.base = None
        self.links = {}
        self.names = []
        self.url = url
        sgmllib.SGMLParser.__init__(self)

    def check_name_id(self, attributes):
        """Check the name or id attributes on an element."""
        # We must rescue the NAME or id (name is deprecated in XHTML)
        # attributes from the element, in order to
        # cache the internal anchors which are made
        # available in the page.
        for name, value in attributes:
            if name == "name" or name == "id":
                if value in self.names:
                    if self.checker:
                        self.checker.message(
                            "WARNING: duplicate ID name %s in %s",
                            value, self.url)
                else: self.names.append(value)
                break
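
    # E.g. parsing <h2 id="usage">...</h2> records "usage" in
    # self.names, so a link to page.html#usage can later be verified;
    # a second element with id="usage" would trigger the duplicate-ID
    # warning above.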

    def unknown_starttag(self, tag, attributes):
        """In XHTML, you can have id attributes on any element."""
        self.check_name_id(attributes)

    def start_a(self, attributes):
        self.link_attr(attributes, 'href')
        self.check_name_id(attributes)

    def end_a(self): pass

    def do_area(self, attributes):
        self.link_attr(attributes, 'href')
        self.check_name_id(attributes)

    def do_body(self, attributes):
        self.link_attr(attributes, 'background', 'bgsound')
        self.check_name_id(attributes)

    def do_img(self, attributes):
        self.link_attr(attributes, 'src', 'lowsrc')
        self.check_name_id(attributes)

    def do_frame(self, attributes):
        self.link_attr(attributes, 'src', 'longdesc')
        self.check_name_id(attributes)

    def do_iframe(self, attributes):
        self.link_attr(attributes, 'src', 'longdesc')
        self.check_name_id(attributes)

    def do_link(self, attributes):
        for name, value in attributes:
            if name == "rel":
                parts = value.lower().split()
                if (parts == ["stylesheet"]
                        or parts == ["alternate", "stylesheet"]):
                    self.link_attr(attributes, "href")
                    break
        self.check_name_id(attributes)
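
    # Only stylesheet relations are followed as links here:
    # <link rel="stylesheet" href="style.css"> and
    # <link rel="alternate stylesheet" href="alt.css"> are recorded,
    # while e.g. <link rel="next" href="ch2.html"> is not.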

    def do_object(self, attributes):
        self.link_attr(attributes, 'data', 'usemap')
        self.check_name_id(attributes)

    def do_script(self, attributes):
        self.link_attr(attributes, 'src')
        self.check_name_id(attributes)

    def do_table(self, attributes):
        self.link_attr(attributes, 'background')
        self.check_name_id(attributes)

    def do_td(self, attributes):
        self.link_attr(attributes, 'background')
        self.check_name_id(attributes)

    def do_th(self, attributes):
        self.link_attr(attributes, 'background')
        self.check_name_id(attributes)

    def do_tr(self, attributes):
        self.link_attr(attributes, 'background')
        self.check_name_id(attributes)

    def link_attr(self, attributes, *args):
        # Record the stripped value of any of the named attributes;
        # self.links is a dictionary used as a set, so duplicates on
        # the same page collapse to one entry
        for name, value in attributes:
            if name in args:
                if value: value = value.strip()
                if value: self.links[value] = None
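
    # E.g. for <img src=" pic.gif " lowsrc="small.gif">,
    # link_attr(attributes, 'src', 'lowsrc') records both "pic.gif"
    # (whitespace stripped) and "small.gif" in self.links.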

    def do_base(self, attributes):
        for name, value in attributes:
            if name == 'href':
                if value: value = value.strip()
                if value:
                    if self.checker:
                        self.checker.note(1, "  Base %s", value)
                    self.base = value
        self.check_name_id(attributes)

    def getlinks(self):
        return self.links.keys()

    def getbase(self):
        return self.base


if __name__ == '__main__':
    main()