/*
 * HWP Stuff
 *
 * Copyright (C) 2015-2023 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
 *
 * Authors: Kevin Lin
 *
 * This program is free software; you can redistribute it and/or modify it under
 * the terms of the GNU General Public License version 2 as published by the
 * Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program; if not, write to the Free Software Foundation, Inc., 51
 * Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#if HAVE_CONFIG_H
#include "clamav-config.h"
#endif

#if HAVE_LIBXML2
#include <libxml/xmlreader.h>
#endif

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <ctype.h>
#include <zlib.h>

#if HAVE_ICONV
#include <iconv.h>
#endif

#include "clamav.h"
#include "fmap.h"
#include "str.h"
#include "conv.h"
#include "others.h"
#include "scanners.h"
#include "msxml_parser.h"
#include "msxml.h"
#include "json_api.h"
#include "hwp.h"
#if HAVE_JSON
#include "msdoc.h"
#endif

#define HWP5_DEBUG 0
#define HWP3_DEBUG 0
#define HWP3_VERIFY 0
#define HWPML_DEBUG 0

#if HWP5_DEBUG
#define hwp5_debug(...) cli_dbgmsg(__VA_ARGS__)
#else
#define hwp5_debug(...) {};
#endif
#if HWP3_DEBUG
#define hwp3_debug(...) cli_dbgmsg(__VA_ARGS__)
#else
#define hwp3_debug(...) {};
#endif
#if HWPML_DEBUG
#define hwpml_debug(...) cli_dbgmsg(__VA_ARGS__)
#else
#define hwpml_debug(...) {};
#endif

typedef cl_error_t (*hwp_cb)(void *cbdata, int fd, const char *filepath, cli_ctx *ctx);
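
/* decompress_and_callback: inflate a raw deflate stream (zlib, window bits -15)
 * from `input` beginning at offset `at` into a temp file, then hand the temp
 * file to `cb` for scanning. `len` bounds the number of compressed bytes
 * consumed; 0 means read to end-of-map. If inflation fails after producing
 * partial output, the partial data is still scanned. Canonical call site (the
 * HWP5.x handler below):
 *
 *     ret = decompress_and_callback(ctx, input, 0, 0, "HWP5.x", hwp5_cb, NULL);
 */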
static cl_error_t decompress_and_callback(cli_ctx *ctx, fmap_t *input, size_t at, size_t len, const char *parent, hwp_cb cb, void *cbdata)
{
    cl_error_t ret = CL_SUCCESS;
    int zret, ofd;
    size_t in;
    size_t off_in = at;
    size_t count, remain = 1, outsize = 0;
    z_stream zstrm;
    char *tmpname;
    unsigned char inbuf[FILEBUFF], outbuf[FILEBUFF];

    if (!ctx || !input || !cb)
        return CL_ENULLARG;

    if (len)
        remain = len;

    /* reserve tempfile for output and callback */
    if ((ret = cli_gentempfd(ctx->sub_tmpdir, &tmpname, &ofd)) != CL_SUCCESS) {
        cli_errmsg("%s: Can't generate temporary file\n", parent);
        return ret;
    }

    /* initialize zlib inflation stream */
    memset(&zstrm, 0, sizeof(zstrm));
    zstrm.zalloc    = Z_NULL;
    zstrm.zfree     = Z_NULL;
    zstrm.opaque    = Z_NULL;
    zstrm.next_in   = inbuf;
    zstrm.next_out  = outbuf;
    zstrm.avail_in  = 0;
    zstrm.avail_out = FILEBUFF;

    zret = inflateInit2(&zstrm, -15);
    if (zret != Z_OK) {
        cli_errmsg("%s: Can't initialize zlib inflation stream\n", parent);
        ret = CL_EUNPACK;
        goto dc_end;
    }

    /* inflation loop */
    do {
        if (zstrm.avail_in == 0) {
            zstrm.next_in = inbuf;

            in = fmap_readn(input, inbuf, off_in, FILEBUFF);
            if (in == (size_t)-1) {
                cli_errmsg("%s: Error reading stream\n", parent);
                ret = CL_EUNPACK;
                goto dc_end;
            }
            if (!in)
                break;

            if (len) {
                if (remain < in)
                    in = remain;
                remain -= in;
            }
            zstrm.avail_in = in;
            off_in += in;
        }
        zret  = inflate(&zstrm, Z_SYNC_FLUSH);
        count = FILEBUFF - zstrm.avail_out;
        if (count) {
            if ((ret = cli_checklimits("HWP", ctx, outsize + count, 0, 0)) != CL_SUCCESS)
                break;

            if (cli_writen(ofd, outbuf, count) != count) {
                cli_errmsg("%s: Can't write to file %s\n", parent, tmpname);
                ret = CL_EWRITE;
                goto dc_end;
            }
            outsize += count;
        }
        zstrm.next_out  = outbuf;
        zstrm.avail_out = FILEBUFF;
    } while (zret == Z_OK && remain);

    cli_dbgmsg("%s: Decompressed %zu bytes to %s\n", parent, outsize, tmpname);

    /* post inflation checks */
    if (zret != Z_STREAM_END && zret != Z_OK) {
        if (outsize == 0) {
            cli_infomsg(ctx, "%s: Error decompressing stream. No data decompressed.\n", parent);
            ret = CL_EUNPACK;
            goto dc_end;
        }

        cli_infomsg(ctx, "%s: Error decompressing stream. Scanning what was decompressed.\n", parent);
    }

    /* check for limits exceeded or zlib failure */
    if (ret == CL_SUCCESS && (zret == Z_STREAM_END || zret == Z_OK)) {
        if (len && remain > 0)
            cli_infomsg(ctx, "%s: Error decompressing stream. Not all requested input was converted\n", parent);

        /* scanning inflated stream */
        ret = cb(cbdata, ofd, tmpname, ctx);
    } else {
        /* default to scanning what we got */
        ret = cli_magic_scan_desc(ofd, tmpname, ctx, NULL, LAYER_ATTRIBUTES_NONE);
    }

    /* clean-up */
dc_end:
    zret = inflateEnd(&zstrm);
    if (zret != Z_OK) {
        cli_errmsg("%s: Error closing zlib inflation stream\n", parent);
        if (ret == CL_SUCCESS)
            ret = CL_EUNPACK;
    }
    close(ofd);
    if (!ctx->engine->keeptmp)
        if (cli_unlink(tmpname))
            ret = CL_EUNLINK;
    free(tmpname);
    return ret;
}

/* convert HANGUL_NUMERICAL to UTF-8 encoding using the iconv library; falls
 * back to base64 encoding when iconv is unavailable or the conversion fails */
#define HANGUL_NUMERICAL 0
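
/* convert_hstr_to_utf8 returns a newly-allocated string and reports status via
 * *ret: CL_SUCCESS for a clean iconv conversion, CL_EMEM on allocation failure,
 * and CL_VIRUS as a placeholder meaning the base64 fallback was taken. A sketch
 * of the expected caller pattern (hypothetical; the real callers appear later
 * in this file):
 *
 *     cl_error_t ret;
 *     char *str = convert_hstr_to_utf8(begin, sz, "HWP3.x", &ret);
 *     if (!str)
 *         return ret;
 *     // if ret == CL_VIRUS, `str` holds base64 of the raw bytes, not UTF-8
 *     free(str);
 */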
static char *convert_hstr_to_utf8(const char *begin, size_t sz, const char *parent, cl_error_t *ret)
{
    cl_error_t rc = CL_SUCCESS;
    char *res     = NULL;
#if HANGUL_NUMERICAL && HAVE_ICONV
    char *p1, *p2, *inbuf = NULL, *outbuf = NULL;
    size_t inlen, outlen;
    iconv_t cd;

    do {
        p1 = inbuf = cli_calloc(1, sz + 1);
        if (!inbuf) {
            cli_errmsg("%s: Failed to allocate memory for encoding conversion buffer\n", parent);
            rc = CL_EMEM;
            break;
        }
        memcpy(inbuf, begin, sz);
        p2 = outbuf = cli_calloc(1, sz + 1);
        if (!outbuf) {
            cli_errmsg("%s: Failed to allocate memory for encoding conversion buffer\n", parent);
            rc = CL_EMEM;
            break;
        }
        inlen = outlen = sz;

        cd = iconv_open("UTF-8", "UNICODE");
        if (cd == (iconv_t)(-1)) {
            char errbuf[128];
            cli_strerror(errno, errbuf, sizeof(errbuf));
            cli_errmsg("%s: Failed to initialize iconv for encoding UNICODE: %s\n", parent, errbuf);
            break;
        }

        iconv(cd, (char **)(&p1), &inlen, &p2, &outlen);
        iconv_close(cd);

        /* no data was converted */
        if (outlen == sz)
            break;

        outbuf[sz - outlen] = '\0';

        if (!(res = strdup(outbuf))) {
            cli_errmsg("%s: Failed to allocate memory for encoding conversion buffer\n", parent);
            rc = CL_EMEM;
            break;
        }
    } while (0);

    if (inbuf)
        free(inbuf);
    if (outbuf)
        free(outbuf);
#endif
    /* safety base64 encoding */
    if (!res && (rc == CL_SUCCESS)) {
        char *tmpbuf;

        tmpbuf = cli_calloc(1, sz + 1);
        if (tmpbuf) {
            memcpy(tmpbuf, begin, sz);

            res = (char *)cl_base64_encode(tmpbuf, sz);
            if (res)
                rc = CL_VIRUS; /* used as placeholder */
            else
                rc = CL_EMEM;

            free(tmpbuf);
        } else {
            cli_errmsg("%s: Failed to allocate memory for temporary buffer\n", parent);
            rc = CL_EMEM;
        }
    }

    (*ret) = rc;
    return res;
}

/*** HWPOLE2 ***/
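
/* An HWPOLE2 file is an OLE2 document prefixed with a 4-byte uncompressed-size
 * field; the OLE2 payload starts at offset 4. The size prefix is validated
 * against the mapped length purely as a sanity check: a mismatch is logged,
 * but the payload is scanned either way. */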
cl_error_t cli_scanhwpole2(cli_ctx *ctx)
{
    fmap_t *map = ctx->fmap;
    uint32_t usize, asize;

    asize = (uint32_t)(map->len - sizeof(usize));

    if (fmap_readn(map, &usize, 0, sizeof(usize)) != sizeof(usize)) {
        cli_errmsg("HWPOLE2: Failed to read uncompressed ole2 filesize\n");
        return CL_EREAD;
    }

    if (usize != asize)
        cli_warnmsg("HWPOLE2: Mismatched uncompressed prefix and size: %u != %u\n", usize, asize);
    else
        cli_dbgmsg("HWPOLE2: Matched uncompressed prefix and size: %u == %u\n", usize, asize);

    return cli_magic_scan_nested_fmap_type(map, 4, 0, ctx,
                                           CL_TYPE_ANY, NULL, LAYER_ATTRIBUTES_NONE);
}

/*** HWP5 ***/
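
/* cli_hwp5header records the parsed HWP5 file header in the scan's JSON
 * metadata (a "Hwp5Header" object holding the raw version, raw flags, and a
 * "Flags" array of symbolic flag names) when metadata collection is enabled.
 * It performs no detection of its own and only fails on JSON allocation
 * errors. */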
cl_error_t cli_hwp5header(cli_ctx *ctx, hwp5_header_t *hwp5)
{
    if (!ctx || !hwp5)
        return CL_ENULLARG;

#if HAVE_JSON
    if (SCAN_COLLECT_METADATA) {
        json_object *header, *flags;

        header = cli_jsonobj(ctx->wrkproperty, "Hwp5Header");
        if (!header) {
            cli_errmsg("HWP5.x: No memory for Hwp5Header object\n");
            return CL_EMEM;
        }

        /* version */
        cli_jsonint(header, "RawVersion", hwp5->version);

        /* flags */
        cli_jsonint(header, "RawFlags", hwp5->flags);

        flags = cli_jsonarray(header, "Flags");
        if (!flags) {
            cli_errmsg("HWP5.x: No memory for Hwp5Header/Flags array\n");
            return CL_EMEM;
        }

        if (hwp5->flags & HWP5_COMPRESSED) {
            cli_jsonstr(flags, NULL, "HWP5_COMPRESSED");
        }
        if (hwp5->flags & HWP5_PASSWORD) {
            cli_jsonstr(flags, NULL, "HWP5_PASSWORD");
        }
        if (hwp5->flags & HWP5_DISTRIBUTABLE) {
            cli_jsonstr(flags, NULL, "HWP5_DISTRIBUTABLE");
        }
        if (hwp5->flags & HWP5_SCRIPT) {
            cli_jsonstr(flags, NULL, "HWP5_SCRIPT");
        }
        if (hwp5->flags & HWP5_DRM) {
            cli_jsonstr(flags, NULL, "HWP5_DRM");
        }
        if (hwp5->flags & HWP5_XMLTEMPLATE) {
            cli_jsonstr(flags, NULL, "HWP5_XMLTEMPLATE");
        }
        if (hwp5->flags & HWP5_HISTORY) {
            cli_jsonstr(flags, NULL, "HWP5_HISTORY");
        }
        if (hwp5->flags & HWP5_CERT_SIGNED) {
            cli_jsonstr(flags, NULL, "HWP5_CERT_SIGNED");
        }
        if (hwp5->flags & HWP5_CERT_ENCRYPTED) {
            cli_jsonstr(flags, NULL, "HWP5_CERT_ENCRYPTED");
        }
        if (hwp5->flags & HWP5_CERT_EXTRA) {
            cli_jsonstr(flags, NULL, "HWP5_CERT_EXTRA");
        }
        if (hwp5->flags & HWP5_CERT_DRM) {
            cli_jsonstr(flags, NULL, "HWP5_CERT_DRM");
        }
        if (hwp5->flags & HWP5_CCL) {
            cli_jsonstr(flags, NULL, "HWP5_CCL");
        }
    }
#endif
    return CL_SUCCESS;
}
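
/* hwp_cb instance for HWP5.x streams: magic-scan the decompressed temp file. */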
static cl_error_t hwp5_cb(void *cbdata, int fd, const char *filepath, cli_ctx *ctx)
{
    UNUSEDPARAM(cbdata);

    if (fd < 0 || !ctx)
        return CL_ENULLARG;

    return cli_magic_scan_desc(fd, filepath, ctx, NULL, LAYER_ATTRIBUTES_NONE);
}
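
/* cli_scanhwp5_stream dispatches a single OLE2 stream from an HWP5 document by
 * stream name: content streams (bin*, jscriptversion*, defaultjscript*,
 * section*, viewtext*, docinfo*) are scanned as-is when password-protected, or
 * inflated via decompress_and_callback() when the header sets HWP5_COMPRESSED;
 * a '_5_hwpsummaryinformation' stream feeds the OLE2 summary-to-JSON parser;
 * all other streams get a plain magic-type scan. */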
cl_error_t cli_scanhwp5_stream(cli_ctx *ctx, hwp5_header_t *hwp5, char *name, int fd, const char *filepath)
{
    hwp5_debug("HWP5.x: NAME: %s\n", name ? name : "(NULL)");

    if (fd < 0) {
        cli_errmsg("HWP5.x: Invalid file descriptor argument\n");
        return CL_ENULLARG;
    }

    if (name) {
        /* encrypted and compressed streams */
        if (!strncmp(name, "bin", 3) || !strncmp(name, "jscriptversion", 14) ||
            !strncmp(name, "defaultjscript", 14) || !strncmp(name, "section", 7) ||
            !strncmp(name, "viewtext", 8) || !strncmp(name, "docinfo", 7)) {

            if (hwp5->flags & HWP5_PASSWORD) {
                cli_dbgmsg("HWP5.x: Password encrypted stream, scanning as-is\n");
                return cli_magic_scan_desc(fd, filepath, ctx, name, LAYER_ATTRIBUTES_NONE);
            }

            if (hwp5->flags & HWP5_COMPRESSED) {
                /* compressed stream: inflate to a temp file and scan */
                STATBUF statbuf;
                fmap_t *input;
                cl_error_t ret;

                hwp5_debug("HWP5.x: Sending %s for decompress and scan\n", name);

                /* fmap the input file for easier manipulation */
                if (FSTAT(fd, &statbuf) == -1) {
                    cli_errmsg("HWP5.x: Can't stat file descriptor\n");
                    return CL_ESTAT;
                }

                input = fmap(fd, 0, statbuf.st_size, NULL);
                if (!input) {
                    cli_errmsg("HWP5.x: Failed to get fmap for input stream\n");
                    return CL_EMAP;
                }
                ret = decompress_and_callback(ctx, input, 0, 0, "HWP5.x", hwp5_cb, NULL);
                funmap(input);
                return ret;
            }
        }

#if HAVE_JSON
        /* JSON Output Summary Information */
        if (SCAN_COLLECT_METADATA && ctx->properties != NULL) {
            if (name && !strncmp(name, "_5_hwpsummaryinformation", 24)) {
                cli_dbgmsg("HWP5.x: Detected a '_5_hwpsummaryinformation' stream\n");
                /* JSONOLE2 - what to do if something breaks? */
                if (cli_ole2_summary_json(ctx, fd, 2) == CL_ETIMEOUT)
                    return CL_ETIMEOUT;
            }
        }
#endif
    }

    /* normal streams */
    return cli_magic_scan_desc(fd, filepath, ctx, name, LAYER_ATTRIBUTES_NONE);
}
/*** HWP3 ***/

/* all fields use little endian and unicode encoding, if applicable */

// File Identification Information - (30 total bytes)
#define HWP3_IDENTITY_INFO_SIZE 30

// Document Information - (128 total bytes)
#define HWP3_DOCINFO_SIZE 128

#define DI_WRITEPROT 24    /* offset 24 (4 bytes) - write protection */
#define DI_EXTERNAPP 28    /* offset 28 (2 bytes) - external application */
#define DI_PNAME 32        /* offset 32 (40 x 1 bytes) - print name */
#define DI_ANNOTE 72       /* offset 72 (24 x 1 bytes) - annotation */
#define DI_PASSWD 96       /* offset 96 (2 bytes) - password protected */
#define DI_COMPRESSED 124  /* offset 124 (1 byte) - compression */
#define DI_INFOBLKSIZE 126 /* offset 126 (2 bytes) - information block length */

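/* Parsed copy of the document information fields of interest; members are
 * filled from the DI_* offsets above and multi-byte values are converted
 * to host byte order. */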
struct hwp3_docinfo {
    uint32_t di_writeprot;
    uint16_t di_externapp;
    uint16_t di_passwd;
    uint8_t di_compressed;
    uint16_t di_infoblksize;
};

// Document Summary - (1008 total bytes)
#define HWP3_DOCSUMMARY_SIZE 1008
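/* Text fields within the document summary block; offsets are bytes from
 * the start of the block, and each field is 56 two-byte characters. */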
struct hwp3_docsummary_entry {
    size_t offset;
    const char *name;
} hwp3_docsummary_fields[] = {
    {0, "Title"},      /* offset 0 (56 x 2 bytes) - title */
    {112, "Subject"},  /* offset 112 (56 x 2 bytes) - subject */
    {224, "Author"},   /* offset 224 (56 x 2 bytes) - author */
    {336, "Date"},     /* offset 336 (56 x 2 bytes) - date */
    {448, "Keyword1"}, /* offset 448 (2 x 56 x 2 bytes) - keywords */
    {560, "Keyword2"},

    {672, "Etc0"}, /* offset 672 (3 x 56 x 2 bytes) - etc */
    {784, "Etc1"},
    {896, "Etc2"}};
#define NUM_DOCSUMMARY_FIELDS sizeof(hwp3_docsummary_fields) / sizeof(struct hwp3_docsummary_entry)

// Document Paragraph Information - (43 or 230 total bytes)
#define HWP3_PARAINFO_SIZE_S 43
#define HWP3_PARAINFO_SIZE_L 230
#define HWP3_LINEINFO_SIZE 14
#define HWP3_CHARSHPDATA_SIZE 31

#define HWP3_FIELD_LENGTH 512

#define PI_PPFS 0    /* offset 0 (1 byte) - prior paragraph format style */
#define PI_NCHARS 1  /* offset 1 (2 bytes) - character count */
#define PI_NLINES 3  /* offset 3 (2 bytes) - line count */
#define PI_IFSC 5    /* offset 5 (1 byte) - including font style of characters */
#define PI_FLAGS 6   /* offset 6 (1 byte) - other flags */
#define PI_SPECIAL 7 /* offset 7 (4 bytes) - special characters markers */
#define PI_ISTYLE 11 /* offset 11 (1 byte) - paragraph style index */

#define PLI_LOFF 0  /* offset 0 (2 bytes) - line starting offset */
#define PLI_LCOR 2  /* offset 2 (2 bytes) - line blank correction */
#define PLI_LHEI 4  /* offset 4 (2 bytes) - line max char height */
#define PLI_LPAG 12 /* offset 12 (2 bytes) - line pagination */

#define PCSD_SIZE 0  /* offset 0 (2 bytes) - size of characters */
#define PCSD_PROP 26 /* offset 26 (1 byte) - properties */

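/* Parse the 128-byte document information block at the given fmap offset
 * into `docinfo`, and optionally record the decoded flags plus the print
 * name and annotation strings under the "Hwp3Header" JSON object. */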
static inline cl_error_t parsehwp3_docinfo(cli_ctx *ctx, size_t offset, struct hwp3_docinfo *docinfo)
{
    const uint8_t *hwp3_ptr;
    cl_error_t iret;

    // TODO: use fmap_readn?
    if (!(hwp3_ptr = fmap_need_off_once(ctx->fmap, offset, HWP3_DOCINFO_SIZE))) {
        cli_errmsg("HWP3.x: Failed to read fmap for hwp docinfo\n");
        return CL_EMAP;
    }

    memcpy(&(docinfo->di_writeprot), hwp3_ptr + DI_WRITEPROT, sizeof(docinfo->di_writeprot));
    memcpy(&(docinfo->di_externapp), hwp3_ptr + DI_EXTERNAPP, sizeof(docinfo->di_externapp));
    memcpy(&(docinfo->di_passwd), hwp3_ptr + DI_PASSWD, sizeof(docinfo->di_passwd));
    memcpy(&(docinfo->di_compressed), hwp3_ptr + DI_COMPRESSED, sizeof(docinfo->di_compressed));
    memcpy(&(docinfo->di_infoblksize), hwp3_ptr + DI_INFOBLKSIZE, sizeof(docinfo->di_infoblksize));

    docinfo->di_writeprot = le32_to_host(docinfo->di_writeprot);
    docinfo->di_externapp = le16_to_host(docinfo->di_externapp);
    docinfo->di_passwd = le16_to_host(docinfo->di_passwd);
    docinfo->di_infoblksize = le16_to_host(docinfo->di_infoblksize);

    hwp3_debug("HWP3.x: di_writeprot: %u\n", docinfo->di_writeprot);
    hwp3_debug("HWP3.x: di_externapp: %u\n", docinfo->di_externapp);
    hwp3_debug("HWP3.x: di_passwd: %u\n", docinfo->di_passwd);
    hwp3_debug("HWP3.x: di_compressed: %u\n", docinfo->di_compressed);
    hwp3_debug("HWP3.x: di_infoblksize: %u\n", docinfo->di_infoblksize);

#if HAVE_JSON
    if (SCAN_COLLECT_METADATA) {
        json_object *header, *flags;
        char *str;

        header = cli_jsonobj(ctx->wrkproperty, "Hwp3Header");
        if (!header) {
            cli_errmsg("HWP3.x: No memory for Hwp3Header object\n");
            return CL_EMEM;
        }

        flags = cli_jsonarray(header, "Flags");
        if (!flags) {
            cli_errmsg("HWP3.x: No memory for Hwp3Header/Flags array\n");
            return CL_EMEM;
        }

        if (docinfo->di_writeprot) {
            cli_jsonstr(flags, NULL, "HWP3_WRITEPROTECTED"); /* HWP3_DISTRIBUTABLE */
        }
        if (docinfo->di_externapp) {
            cli_jsonstr(flags, NULL, "HWP3_EXTERNALAPPLICATION");
        }
        if (docinfo->di_passwd) {
            cli_jsonstr(flags, NULL, "HWP3_PASSWORD");
        }
        if (docinfo->di_compressed) {
            cli_jsonstr(flags, NULL, "HWP3_COMPRESSED");
        }

        /* Printed File Name */
        str = convert_hstr_to_utf8((char *)(hwp3_ptr + DI_PNAME), 40, "HWP3.x", &iret);
        if (!str)
            return CL_EMEM;

        if (iret == CL_VIRUS)
            cli_jsonbool(header, "PrintName_base64", 1);

        hwp3_debug("HWP3.x: di_pname: %s\n", str);
        cli_jsonstr(header, "PrintName", str);
        free(str);

        /* Annotation */
        str = convert_hstr_to_utf8((char *)(hwp3_ptr + DI_ANNOTE), 24, "HWP3.x", &iret);
        if (!str)
            return CL_EMEM;

        if (iret == CL_VIRUS)
            cli_jsonbool(header, "Annotation_base64", 1);

        hwp3_debug("HWP3.x: di_annote: %s\n", str);
        cli_jsonstr(header, "Annotation", str);
        free(str);
    }
#endif

    return CL_SUCCESS;
}

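/* Parse the 1008-byte document summary block at the given fmap offset and,
 * when metadata collection is enabled, store each text field in the
 * "Hwp3SummaryInfo" JSON object; fields whose conversion had to fall back
 * to base64 are flagged with a "<name>_base64" boolean. */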
static inline cl_error_t parsehwp3_docsummary(cli_ctx *ctx, size_t offset)
{
#if HAVE_JSON
    const uint8_t *hwp3_ptr;
    char *str;
    size_t i;
    cl_error_t ret, iret;

    json_object *summary;

    if (!SCAN_COLLECT_METADATA)
        return CL_SUCCESS;

    if (!(hwp3_ptr = fmap_need_off_once(ctx->fmap, offset, HWP3_DOCSUMMARY_SIZE))) {
        cli_errmsg("HWP3.x: Failed to read fmap for hwp docsummary\n");
        return CL_EMAP;
    }

    summary = cli_jsonobj(ctx->wrkproperty, "Hwp3SummaryInfo");
    if (!summary) {
        cli_errmsg("HWP3.x: No memory for json object\n");
        return CL_EMEM;
    }

    for (i = 0; i < NUM_DOCSUMMARY_FIELDS; i++) {
        str = convert_hstr_to_utf8((char *)(hwp3_ptr + hwp3_docsummary_fields[i].offset), 112, "HWP3.x", &iret);
        if (!str)
            return CL_EMEM;

        if (iret == CL_VIRUS) {
            char *b64;
            size_t b64len = strlen(hwp3_docsummary_fields[i].name) + 8;
            b64 = cli_calloc(1, b64len);
            if (!b64) {
                cli_errmsg("HWP3.x: Failed to allocate memory for b64 boolean\n");
                free(str);
                return CL_EMEM;
            }
            snprintf(b64, b64len, "%s_base64", hwp3_docsummary_fields[i].name);
            cli_jsonbool(summary, b64, 1);
            free(b64);
        }

        hwp3_debug("HWP3.x: %s, %s\n", hwp3_docsummary_fields[i].name, str);
        ret = cli_jsonstr(summary, hwp3_docsummary_fields[i].name, str);
        free(str);
        if (ret != CL_SUCCESS)
            return ret;
    }
#else
    UNUSEDPARAM(ctx);
    UNUSEDPARAM(offset);
#endif
    return CL_SUCCESS;
}

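/* Verify the framing of a special-character record: read the 16-bit value
 * at (offset + second), convert it from little endian, and require that it
 * echoes the record's leading special character ID. On failure the macro
 * returns CL_EREAD or CL_EFORMAT from the enclosing function; it expands
 * to nothing unless HWP3_VERIFY is enabled. */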
#if HWP3_VERIFY
#define HWP3_PSPECIAL_VERIFY(map, offset, second, id, match)                          \
    do {                                                                              \
        if (fmap_readn(map, &match, offset + second, sizeof(match)) != sizeof(match)) \
            return CL_EREAD;                                                          \
                                                                                      \
        match = le16_to_host(match);                                                  \
                                                                                      \
        if (id != match) {                                                            \
            cli_errmsg("HWP3.x: ID %u block fails verification\n", id);               \
            return CL_EFORMAT;                                                        \
        }                                                                             \
    } while (0)

#else
#define HWP3_PSPECIAL_VERIFY(map, offset, second, id, match)
#endif

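/* Parse a single HWP3 paragraph record at *roffset, advancing *roffset
 * past it and setting *last when the end-of-paragraph-list marker is
 * found. Paragraph content is walked 16 bits at a time; values below 32
 * are special-character records, several of which (boxes, drawings,
 * hidden descriptions) contain nested paragraph lists that are parsed by
 * recursing with level + 1, bounded by engine->maxrechwp3. */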
static inline cl_error_t parsehwp3_paragraph(cli_ctx *ctx, fmap_t *map, int p, uint32_t level, size_t *roffset, int *last)
{
    cl_error_t ret = CL_SUCCESS;

    size_t offset = *roffset;
    size_t new_offset;
    uint16_t nchars, nlines, content;
    uint8_t ppfs, ifsc, cfsb;
    uint16_t i;
    int c, l, sp = 0, term = 0;
#if HWP3_VERIFY
    uint16_t match;
#endif
#if HWP3_DEBUG
    /* other paragraph info */
    uint8_t flags, istyle;
    uint16_t fsize;
    uint32_t special;

    /* line info */
    uint16_t loff, lcor, lhei, lpag;

    /* char shape data */
    uint16_t pcsd_size;
    uint8_t pcsd_prop;
#endif

    hwp3_debug("HWP3.x: recursion level: %u\n", level);
    hwp3_debug("HWP3.x: Paragraph[%u, %d] starts @ offset %zu\n", level, p, offset);

    if (level >= ctx->engine->maxrechwp3)
        return CL_EMAXREC;

    if (fmap_readn(map, &ppfs, offset + PI_PPFS, sizeof(ppfs)) != sizeof(ppfs))
        return CL_EREAD;

    if (fmap_readn(map, &nchars, offset + PI_NCHARS, sizeof(nchars)) != sizeof(nchars))
        return CL_EREAD;

    nchars = le16_to_host(nchars);

    if (fmap_readn(map, &nlines, offset + PI_NLINES, sizeof(nlines)) != sizeof(nlines))
        return CL_EREAD;

    nlines = le16_to_host(nlines);

    if (fmap_readn(map, &ifsc, offset + PI_IFSC, sizeof(ifsc)) != sizeof(ifsc))
        return CL_EREAD;

    hwp3_debug("HWP3.x: Paragraph[%u, %d]: ppfs %u\n", level, p, ppfs);
    hwp3_debug("HWP3.x: Paragraph[%u, %d]: nchars %u\n", level, p, nchars);
    hwp3_debug("HWP3.x: Paragraph[%u, %d]: nlines %u\n", level, p, nlines);
    hwp3_debug("HWP3.x: Paragraph[%u, %d]: ifsc %u\n", level, p, ifsc);

#if HWP3_DEBUG
    if (fmap_readn(map, &flags, offset + PI_FLAGS, sizeof(flags)) != sizeof(flags))
        return CL_EREAD;

    if (fmap_readn(map, &special, offset + PI_SPECIAL, sizeof(special)) != sizeof(special))
        return CL_EREAD;

    if (fmap_readn(map, &istyle, offset + PI_ISTYLE, sizeof(istyle)) != sizeof(istyle))
        return CL_EREAD;

    if (fmap_readn(map, &fsize, offset + 12, sizeof(fsize)) != sizeof(fsize))
        return CL_EREAD;

    hwp3_debug("HWP3.x: Paragraph[%u, %d]: flags %x\n", level, p, flags);
    hwp3_debug("HWP3.x: Paragraph[%u, %d]: spcl %x\n", level, p, special);
    hwp3_debug("HWP3.x: Paragraph[%u, %d]: istyle %u\n", level, p, istyle);
    hwp3_debug("HWP3.x: Paragraph[%u, %d]: fsize %u\n", level, p, fsize);
#endif

    /* detected empty paragraph marker => end-of-paragraph list */
    if (nchars == 0) {
        hwp3_debug("HWP3.x: Detected end-of-paragraph list @ offset %zu\n", offset);
        hwp3_debug("HWP3.x: end recursion level: %u\n", level);
        (*roffset) = offset + HWP3_PARAINFO_SIZE_S;
        (*last) = 1;
        return CL_SUCCESS;
    }

    if (ppfs)
        offset += HWP3_PARAINFO_SIZE_S;
    else
        offset += HWP3_PARAINFO_SIZE_L;

    /* line information blocks */
#if HWP3_DEBUG
    for (i = 0; (i < nlines) && (offset < map->len); i++) {
        hwp3_debug("HWP3.x: Paragraph[%u, %d]: Line %d information starts @ offset %zu\n", level, p, i, offset);
        if (fmap_readn(map, &loff, offset + PLI_LOFF, sizeof(loff)) != sizeof(loff))
            return CL_EREAD;

        if (fmap_readn(map, &lcor, offset + PLI_LCOR, sizeof(lcor)) != sizeof(lcor))
            return CL_EREAD;

        if (fmap_readn(map, &lhei, offset + PLI_LHEI, sizeof(lhei)) != sizeof(lhei))
            return CL_EREAD;

        if (fmap_readn(map, &lpag, offset + PLI_LPAG, sizeof(lpag)) != sizeof(lpag))
            return CL_EREAD;

        loff = le16_to_host(loff);
        lcor = le16_to_host(lcor);
        lhei = le16_to_host(lhei);
        lpag = le16_to_host(lpag);

        hwp3_debug("HWP3.x: Paragraph[%u, %d]: Line %d: loff %u\n", level, p, i, loff);
        hwp3_debug("HWP3.x: Paragraph[%u, %d]: Line %d: lcor %x\n", level, p, i, lcor);
        hwp3_debug("HWP3.x: Paragraph[%u, %d]: Line %d: lhei %u\n", level, p, i, lhei);
        hwp3_debug("HWP3.x: Paragraph[%u, %d]: Line %d: lpag %u\n", level, p, i, lpag);

        offset += HWP3_LINEINFO_SIZE;
    }
#else
    new_offset = offset + (nlines * HWP3_LINEINFO_SIZE);
    if ((new_offset < offset) || (new_offset >= map->len)) {
        cli_errmsg("HWP3.x: Paragraph[%u, %d]: nlines value is too high, invalid. %u\n", level, p, nlines);
        return CL_EPARSE;
    }
    offset = new_offset;
#endif

    if (offset >= map->len)
        return CL_EFORMAT;

    if (ifsc) {
        for (i = 0, c = 0; i < nchars; i++) {
            /* examine byte for cs data type */
            if (fmap_readn(map, &cfsb, offset, sizeof(cfsb)) != sizeof(cfsb))
                return CL_EREAD;

            offset += sizeof(cfsb);

            switch (cfsb) {
                case 0: /* character shape block */
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: character font style data @ offset %zu\n", level, p, offset);

#if HWP3_DEBUG
                    if (fmap_readn(map, &pcsd_size, offset + PCSD_SIZE, sizeof(pcsd_size)) != sizeof(pcsd_size))
                        return CL_EREAD;

                    if (fmap_readn(map, &pcsd_prop, offset + PCSD_PROP, sizeof(pcsd_prop)) != sizeof(pcsd_prop))
                        return CL_EREAD;

                    pcsd_size = le16_to_host(pcsd_size);

                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: CFS %u: pcsd_size %u\n", level, p, 0, pcsd_size);
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: CFS %u: pcsd_prop %x\n", level, p, 0, pcsd_prop);
#endif

                    c++;
                    offset += HWP3_CHARSHPDATA_SIZE;
                    break;
                case 1: /* normal character - as representation of another character for previous cs block */
                    break;
                default:
                    cli_errmsg("HWP3.x: Paragraph[%u, %d]: unknown CFS type 0x%x @ offset %zu\n", level, p, cfsb, offset);
                    cli_errmsg("HWP3.x: Paragraph parsing detected %d of %u characters\n", i, nchars);
                    return CL_EPARSE;
            }
        }

        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected %d CFS block(s) and %d characters\n", level, p, c, i);
    } else {
        hwp3_debug("HWP3.x: Paragraph[%u, %d]: separate character font style segment not stored\n", level, p);
    }

    if (!term)
        hwp3_debug("HWP3.x: Paragraph[%u, %d]: content starts @ offset %zu\n", level, p, offset);

    /* scan for end-of-paragraph [0x0d00 on offset parity to current content] */
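    /* Each special-character record below is framed by its 16-bit ID: the
     * ID leads the record and is echoed at a fixed interior offset (see the
     * per-case layout comments). HWP3_PSPECIAL_VERIFY checks that echo in
     * verification builds before the offset is advanced past the record. */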
while ((!term) &&
|
2018-12-03 12:40:13 -05:00
|
|
|
(offset < map->len)) {
|
2019-05-04 18:08:43 -04:00
|
|
|
|
2016-01-08 14:22:14 -05:00
|
|
|
if (fmap_readn(map, &content, offset, sizeof(content)) != sizeof(content))
|
2016-01-08 11:08:28 -05:00
|
|
|
return CL_EREAD;
|
|
|
|
|
2016-01-08 14:22:14 -05:00
|
|
|
content = le16_to_host(content);
|
|
|
|
|
|
|
|
/* special character handling */
|
|
|
|
if (content < 32) {
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected special character %u @ offset %zu\n", level, p, content, offset);
|
2018-12-03 12:40:13 -05:00
|
|
|
|
|
|
|
switch (content) {
|
|
|
|
case 0:
|
|
|
|
case 1:
|
|
|
|
case 2:
|
|
|
|
case 3:
|
|
|
|
case 4:
|
|
|
|
case 12:
|
|
|
|
case 27: {
|
2016-01-12 14:28:17 -05:00
|
|
|
/* reserved */
|
2016-01-08 16:43:48 -05:00
|
|
|
uint32_t length;
|
|
|
|
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected special character as [reserved]\n", level, p);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
|
|
|
/*
|
|
|
|
* offset 0 (2 bytes) - special character ID
|
|
|
|
* offset 2 (4 bytes) - length of information = n
|
|
|
|
* offset 6 (2 bytes) - special character ID
|
|
|
|
* offset 8 (n bytes) - information
|
|
|
|
*/
|
|
|
|
|
2016-01-12 12:03:33 -05:00
|
|
|
/* id block verification (only on HWP3_VERIFY) */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
2018-12-03 12:40:13 -05:00
|
|
|
if (fmap_readn(map, &length, offset + 2, sizeof(length)) != sizeof(length))
|
2016-01-08 16:43:48 -05:00
|
|
|
return CL_EREAD;
|
|
|
|
|
2018-12-03 12:40:13 -05:00
|
|
|
length = le32_to_host(length);
|
2018-05-24 12:40:42 -04:00
|
|
|
new_offset = offset + (8 + length);
|
2018-07-31 11:10:56 -07:00
|
|
|
if ((new_offset <= offset) || (new_offset > map->len)) {
|
2019-05-04 18:08:43 -04:00
|
|
|
cli_errmsg("HWP3.x: Paragraph[%u, %d]: length value is too high, invalid. %u\n", level, p, length);
|
2018-05-21 16:58:51 -04:00
|
|
|
return CL_EPARSE;
|
|
|
|
}
|
2018-05-24 12:40:42 -04:00
|
|
|
offset = new_offset;
|
2018-05-21 16:58:51 -04:00
|
|
|
|
2016-01-08 16:43:48 -05:00
|
|
|
#if HWP3_DEBUG
|
2019-05-04 18:08:43 -04:00
|
|
|
cli_errmsg("HWP3.x: Paragraph[%u, %d]: possible invalid usage of reserved special character %u\n", level, p, content);
|
2016-01-08 16:43:48 -05:00
|
|
|
return CL_EFORMAT;
|
|
|
|
#endif
|
|
|
|
break;
|
|
|
|
}
|
2018-12-03 12:40:13 -05:00
|
|
|
case 5: /* field codes */
|
2016-01-12 14:28:17 -05:00
|
|
|
{
|
|
|
|
uint32_t length;
|
|
|
|
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected field code marker @ offset %zu\n", level, p, offset);
|
2016-01-12 14:28:17 -05:00
|
|
|
|
|
|
|
/*
|
|
|
|
* offset 0 (2 bytes) - special character ID
|
|
|
|
* offset 2 (4 bytes) - length of information = n
|
|
|
|
* offset 6 (2 bytes) - special character ID
|
|
|
|
* offset 8 (n bytes) - field code details
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* id block verification (only on HWP3_VERIFY) */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);
|
|
|
|
|
2018-12-03 12:40:13 -05:00
|
|
|
if (fmap_readn(map, &length, offset + 2, sizeof(length)) != sizeof(length))
|
2016-01-12 14:28:17 -05:00
|
|
|
return CL_EREAD;
|
|
|
|
|
2018-12-03 12:40:13 -05:00
|
|
|
length = le32_to_host(length);
|
2018-05-24 12:40:42 -04:00
|
|
|
new_offset = offset + (8 + length);
|
2018-07-31 11:10:56 -07:00
|
|
|
if ((new_offset <= offset) || (new_offset > map->len)) {
|
2019-05-04 18:08:43 -04:00
|
|
|
cli_errmsg("HWP3.x: Paragraph[%u, %d]: length value is too high, invalid. %u\n", level, p, length);
|
2018-05-21 16:58:51 -04:00
|
|
|
return CL_EPARSE;
|
|
|
|
}
|
2018-05-24 12:40:42 -04:00
|
|
|
offset = new_offset;
|
2016-01-12 14:28:17 -05:00
|
|
|
break;
|
|
|
|
}
|
2018-12-03 12:40:13 -05:00
|
|
|
case 6: /* bookmark */
|
2016-01-08 16:43:48 -05:00
|
|
|
{
|
|
|
|
#if HWP3_VERIFY
|
|
|
|
uint32_t length;
|
|
|
|
#endif
|
|
|
|
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected bookmark marker @ offset %zu\n", level, p, offset);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
|
|
|
/*
|
|
|
|
* offset 0 (2 bytes) - special character ID
|
|
|
|
* offset 2 (4 bytes) - length of information = 34
|
|
|
|
* offset 6 (2 bytes) - special character ID
|
|
|
|
* offset 8 (16 x 2 bytes) - bookmark name
|
|
|
|
* offset 40 (2 bytes) - bookmark type
|
|
|
|
* total is always 42 bytes
|
|
|
|
*/
|
|
|
|
|
|
|
|
#if HWP3_VERIFY
|
2016-01-12 12:03:33 -05:00
|
|
|
/* id block verification (only on HWP3_VERIFY) */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
2016-01-12 12:03:33 -05:00
|
|
|
/* length check - always 34 bytes */
|
2018-12-03 12:40:13 -05:00
|
|
|
if (fmap_readn(map, &length, offset + 2, sizeof(length)) != sizeof(length))
|
2016-01-08 16:43:48 -05:00
|
|
|
return CL_EREAD;
|
|
|
|
|
|
|
|
length = le32_to_host(length);
|
|
|
|
|
|
|
|
if (length != 34) {
|
2016-01-11 15:15:02 -05:00
|
|
|
cli_errmsg("HWP3.x: Bookmark has incorrect length: %u != 34)\n", length);
|
2016-01-08 16:43:48 -05:00
|
|
|
return CL_EFORMAT;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
offset += 42;
|
2016-01-11 15:15:02 -05:00
|
|
|
break;
|
2016-01-08 16:43:48 -05:00
|
|
|
}
|
2018-12-03 12:40:13 -05:00
|
|
|
case 7: /* date format */
|
2016-01-08 16:43:48 -05:00
|
|
|
{
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected date format marker @ offset %zu\n", level, p, offset);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
|
|
|
/*
|
|
|
|
* offset 0 (2 bytes) - special character ID
|
|
|
|
* offset 2 (40 x 2 bytes) - date format as user-defined dialog
|
|
|
|
* offset 82 (2 bytes) - special character ID
|
|
|
|
* total is always 84 bytes
|
|
|
|
*/
|
|
|
|
|
2016-01-12 12:03:33 -05:00
|
|
|
/* id block verification (only on HWP3_VERIFY) */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 82, content, match);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
|
|
|
offset += 84;
|
2016-01-11 15:15:02 -05:00
|
|
|
break;
|
2016-01-08 16:43:48 -05:00
|
|
|
}
|
2018-12-03 12:40:13 -05:00
|
|
|
case 8: /* date code */
|
2016-01-08 16:43:48 -05:00
|
|
|
{
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected date code marker @ offset %zu\n", level, p, offset);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
|
|
|
/*
|
|
|
|
* offset 0 (2 bytes) - special character ID
|
|
|
|
* offset 2 (40 x 2 bytes) - date format string
|
|
|
|
* offset 82 (4 x 2 bytes) - date (year, month, day of week)
|
|
|
|
* offset 90 (2 x 2 bytes) - time (hour, minute)
|
|
|
|
* offset 94 (2 bytes) - special character ID
|
|
|
|
* total is always 96 bytes
|
|
|
|
*/
|
|
|
|
|
2016-01-12 12:03:33 -05:00
|
|
|
/* id block verification (only on HWP3_VERIFY) */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 94, content, match);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
|
|
|
offset += 96;
|
2016-01-11 15:15:02 -05:00
|
|
|
break;
|
2016-01-08 16:43:48 -05:00
|
|
|
}
|
2018-12-03 12:40:13 -05:00
|
|
|
case 9: /* tab */
|
2016-01-08 16:43:48 -05:00
|
|
|
{
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected tab marker @ offset %zu\n", level, p, offset);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
|
|
|
/*
|
|
|
|
* offset 0 (2 bytes) - special character ID
|
|
|
|
* offset 2 (2 bytes) - tab width
|
|
|
|
* offset 4 (2 bytes) - unknown(?)
|
|
|
|
* offset 6 (2 bytes) - special character ID
|
|
|
|
* total is always 8 bytes
|
|
|
|
*/
|
|
|
|
|
2016-01-12 12:03:33 -05:00
|
|
|
/* id block verification (only on HWP3_VERIFY) */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);
|
2016-01-08 16:43:48 -05:00
|
|
|
|
|
|
|
offset += 8;
|
2016-01-11 15:15:02 -05:00
|
|
|
break;
|
2016-01-08 16:43:48 -05:00
|
|
|
}
|
2018-12-03 12:40:13 -05:00
|
|
|
case 10: /* table, test box, equation, button, hypertext */
|
2016-01-08 14:22:14 -05:00
|
|
|
{
|
|
|
|
uint16_t ncells;
|
2016-01-11 14:10:16 -05:00
|
|
|
#if HWP3_DEBUG
|
|
|
|
uint16_t type;
|
|
|
|
#endif
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected box object marker @ offset %zu\n", level, p, offset);
|
2016-01-08 14:22:14 -05:00
|
|
|
|
2016-01-12 12:03:33 -05:00
|
|
|
/* verification (only on HWP3_VERIFY) */
|
|
|
|
/* id block verify */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);
|
|
|
|
/* extra data block verify */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 24, content, match);
|
2016-01-08 14:22:14 -05:00
|
|
|
|
|
|
|
/* ID block is 8 bytes */
|
|
|
|
offset += 8;
|
|
|
|
|
2016-01-11 14:10:16 -05:00
|
|
|
/* box information (84 bytes) */
|
|
|
|
#if HWP3_DEBUG
|
|
|
|
/* box type located at offset 78 of box information */
|
2018-12-03 12:40:13 -05:00
|
|
|
if (fmap_readn(map, &type, offset + 78, sizeof(type)) != sizeof(type))
|
2016-01-11 14:10:16 -05:00
|
|
|
return CL_EREAD;
|
|
|
|
|
|
|
|
type = le16_to_host(type);
|
|
|
|
if (type == 0)
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: box object detected as table\n", level, p);
|
2016-01-11 14:10:16 -05:00
|
|
|
else if (type == 1)
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: box object detected as text box\n", level, p);
|
2016-01-11 14:10:16 -05:00
|
|
|
else if (type == 2)
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: box object detected as equation\n", level, p);
|
2016-01-11 14:10:16 -05:00
|
|
|
else if (type == 3)
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: box object detected as button\n", level, p);
|
2018-12-03 12:40:13 -05:00
|
|
|
else
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: box object detected as UNKNOWN(%u)\n", level, p, type);
|
2016-01-11 14:10:16 -05:00
|
|
|
#endif
|
|
|
|
|
|
|
|
/* ncells is located at offset 80 of box information */
|
2018-12-03 12:40:13 -05:00
|
|
|
if (fmap_readn(map, &ncells, offset + 80, sizeof(ncells)) != sizeof(ncells))
|
2016-01-08 14:22:14 -05:00
|
|
|
return CL_EREAD;
|
|
|
|
|
|
|
|
ncells = le16_to_host(ncells);
|
|
|
|
offset += 84;
|
|
|
|
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: box object contains %u cell(s)\n", level, p, ncells);
|
2016-01-08 14:22:14 -05:00
|
|
|
|
2016-10-19 15:57:45 -04:00
|
|
|
/* cell information (27 bytes x ncells(offset 80 of table)) */
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: box cell info array starts @ %zu\n", level, p, offset);
|
2018-05-24 12:40:42 -04:00
|
|
|
|
|
|
|
new_offset = offset + (27 * ncells);
|
2018-05-30 09:03:32 -07:00
|
|
|
if ((new_offset < offset) || (new_offset >= map->len)) {
|
2019-05-04 18:08:43 -04:00
|
|
|
cli_errmsg("HWP3.x: Paragraph[%u, %d]: number of box cells is too high, invalid. %u\n", level, p, ncells);
|
2018-05-21 16:58:51 -04:00
|
|
|
return CL_EPARSE;
|
|
|
|
}
|
2018-05-24 12:40:42 -04:00
|
|
|
offset = new_offset;
|
2016-01-08 14:22:14 -05:00
|
|
|
|
|
|
|
/* cell paragraph list */
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: box cell paragraph list starts @ %zu\n", level, p, offset);
|
2016-01-08 14:22:14 -05:00
|
|
|
for (i = 0; i < ncells; i++) {
|
|
|
|
l = 0;
|
2018-12-03 12:40:13 -05:00
|
|
|
while (!l && ((ret = parsehwp3_paragraph(ctx, map, sp++, level + 1, &offset, &l)) == CL_SUCCESS)) continue;
|
2016-01-11 14:41:49 -05:00
|
|
|
if (ret != CL_SUCCESS)
|
|
|
|
return ret;
|
2016-01-08 14:22:14 -05:00
|
|
|
}
|
|
|
|
|
|
|
|
/* box caption paragraph list */
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: box cell caption paragraph list starts @ %zu\n", level, p, offset);
|
2016-01-08 14:22:14 -05:00
|
|
|
l = 0;
|
2018-12-03 12:40:13 -05:00
|
|
|
while (!l && ((ret = parsehwp3_paragraph(ctx, map, sp++, level + 1, &offset, &l)) == CL_SUCCESS)) continue;
|
2016-01-11 14:41:49 -05:00
|
|
|
if (ret != CL_SUCCESS)
|
|
|
|
return ret;
|
2016-01-08 14:22:14 -05:00
|
|
|
break;
|
|
|
|
}
|
2018-12-03 12:40:13 -05:00
|
|
|
case 11: /* drawing */
|
2016-01-08 14:22:14 -05:00
|
|
|
{
|
|
|
|
uint32_t size;
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected drawing marker @ offset %zu\n", level, p, offset);
|
2016-01-08 14:22:14 -05:00
|
|
|
|
2016-01-12 12:03:33 -05:00
|
|
|
/* verification (only on HWP3_VERIFY) */
|
|
|
|
/* id block verify */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);
|
|
|
|
/* extra data block verify */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 24, content, match);
|
2016-01-08 14:22:14 -05:00
|
|
|
|
|
|
|
/* ID block is 8 bytes */
|
|
|
|
offset += 8;
|
|
|
|
|
|
|
|
/* Drawing Info Block is 328+n bytes with n = size of image */
|
|
|
|
/* n is located at offset 0 of info block */
|
|
|
|
if (fmap_readn(map, &size, offset, sizeof(size)) != sizeof(size))
|
|
|
|
return CL_EREAD;
|
|
|
|
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Paragraph[%u, %d]: drawing is %u additional bytes\n", level, p, size);
|
2016-01-08 14:22:14 -05:00
|
|
|
|
2018-12-03 12:40:13 -05:00
|
|
|
size = le32_to_host(size);
|
2018-05-24 12:40:42 -04:00
|
|
|
new_offset = offset + (348 + size);
                    if ((new_offset <= offset) || (new_offset >= map->len)) {
                        cli_errmsg("HWP3.x: Paragraph[%u, %d]: image size value is too high, invalid. %u\n", level, p, size);
                        return CL_EPARSE;
                    }
                    offset = new_offset;

                    /* caption paragraph list */
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: drawing caption paragraph list starts @ %zu\n", level, p, offset);
                    l = 0;
                    while (!l && ((ret = parsehwp3_paragraph(ctx, map, sp++, level + 1, &offset, &l)) == CL_SUCCESS)) continue;
                    if (ret != CL_SUCCESS)
                        return ret;
                    break;
                }
                case 13: /* end-of-paragraph marker - treated identically to a normal character */
                    hwp3_debug("HWP3.x: Detected end-of-paragraph marker @ offset %zu\n", offset);
                    term = 1;

                    offset += sizeof(content);
                    break;
case 14: /* line information */
|
2016-01-12 11:21:13 -05:00
|
|
|
{
|
2019-05-04 18:08:43 -04:00
|
|
|
hwp3_debug("HWP3.x: Detected line information marker @ offset %zu\n", offset);
|
2016-01-11 17:29:42 -05:00
|
|
|
|
2016-01-12 12:03:33 -05:00
|
|
|
/* verification (only on HWP3_VERIFY) */
|
|
|
|
/* id block verify */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);
|
|
|
|
/* extra data block verify */
|
|
|
|
HWP3_PSPECIAL_VERIFY(map, offset, 24, content, match);
|
|
|
|
|
2016-01-12 11:21:13 -05:00
|
|
|
/* ID block is 8 bytes + line information is always 84 bytes */
|
|
|
|
offset += 92;
|
|
|
|
break;
|
|
|
|
}
                case 15: /* hidden description */
                {
                    hwp3_debug("HWP3.x: Detected hidden description marker @ offset %zu\n", offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (4 bytes) - reserved
                     * offset 6 (2 bytes) - special character ID
                     * offset 8 (8 bytes) - reserved
                     * total is always 16 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);

                    offset += 16;

                    /* hidden description paragraph list */
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: hidden description paragraph list starts @ %zu\n", level, p, offset);
                    l = 0;
                    while (!l && ((ret = parsehwp3_paragraph(ctx, map, sp++, level + 1, &offset, &l)) == CL_SUCCESS)) continue;
                    if (ret != CL_SUCCESS)
                        return ret;
                    break;
                }
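
                /*
                 * For illustration, the fixed 16-byte hidden-description header
                 * above maps onto a struct like the following (a sketch only; the
                 * parser deliberately reads fields straight from the fmap rather
                 * than overlaying a struct, avoiding padding and endianness
                 * pitfalls):
                 *
                 *   struct hwp3_hidden_desc_hdr {
                 *       uint16_t id;        // special character ID (15)
                 *       uint8_t  rsvd1[4];
                 *       uint16_t id_end;    // same ID, used for verification
                 *       uint8_t  rsvd2[8];
                 *   };
                 */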
                case 16: /* header/footer */
                {
#if HWP3_DEBUG
                    uint8_t type;
#endif

                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected header/footer marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (4 bytes) - reserved
                     * offset 6 (2 bytes) - special character ID
                     * offset 8 (8 x 1 byte) - reserved
                     * offset 16 (1 byte) - type (header/footer)
                     * offset 17 (1 byte) - kind
                     * total is always 18 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);

#if HWP3_DEBUG
                    if (fmap_readn(map, &type, offset + 16, sizeof(type)) != sizeof(type))
                        return CL_EREAD;

                    if (type == 0)
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected header/footer as header\n", level, p);
                    else if (type == 1)
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected header/footer as footer\n", level, p);
                    else
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected header/footer as UNKNOWN(%u)\n", level, p, type);
#endif
                    offset += 18;

                    /* content paragraph list */
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: header/footer paragraph list starts @ %zu\n", level, p, offset);
                    l = 0;
                    while (!l && ((ret = parsehwp3_paragraph(ctx, map, sp++, level + 1, &offset, &l)) == CL_SUCCESS)) continue;
                    if (ret != CL_SUCCESS)
                        return ret;
                    break;
                }
                case 17: /* footnote/endnote */
                {
                    hwp3_debug("HWP3.x: Detected footnote/endnote marker @ offset %zu\n", offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (4 bytes) - reserved
                     * offset 6 (2 bytes) - special character ID
                     * offset 8 (8 x 1 byte) - reserved
                     * offset 16 (2 bytes) - number
                     * offset 18 (2 bytes) - type
                     * offset 20 (2 bytes) - alignment
                     * total is always 22 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);

                    offset += 22;

                    /* content paragraph list */
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: footnote/endnote paragraph list starts @ %zu\n", level, p, offset);
                    l = 0;
                    while (!l && ((ret = parsehwp3_paragraph(ctx, map, sp++, level + 1, &offset, &l)) == CL_SUCCESS)) continue;
                    if (ret != CL_SUCCESS)
                        return ret;
                    break;
                }
                case 18: /* paste code number */
                {
#if HWP3_DEBUG
                    uint8_t type;
#endif

                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected paste code number marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (2 bytes) - type
                     * offset 4 (2 bytes) - number value
                     * offset 6 (2 bytes) - special character ID
                     * total is always 8 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);

#if HWP3_DEBUG
                    /* the type field is 2 bytes little-endian; reading the low byte is enough for these small values */
                    if (fmap_readn(map, &type, offset + 2, sizeof(type)) != sizeof(type))
                        return CL_EREAD;

                    if (type == 0)
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected paste code number as side\n", level, p);
                    else if (type == 1)
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected paste code number as footnote\n", level, p);
                    else if (type == 2)
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected paste code number as North America???\n", level, p);
                    else if (type == 3)
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected paste code number as drawing\n", level, p);
                    else if (type == 4)
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected paste code number as table\n", level, p);
                    else if (type == 5)
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected paste code number as equation\n", level, p);
                    else
                        hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected paste code number as UNKNOWN(%u)\n", level, p, type);
#endif
                    offset += 8;
                    break;
                }
                case 19: /* code number change */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected code number change marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (2 bytes) - type
                     * offset 4 (2 bytes) - new number value
                     * offset 6 (2 bytes) - special character ID
                     * total is always 8 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);

                    offset += 8;
                    break;
                }
                case 20: /* thread page number */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected thread page number marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (2 bytes) - location
                     * offset 4 (2 bytes) - shape
                     * offset 6 (2 bytes) - special character ID
                     * total is always 8 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);

                    offset += 8;
                    break;
                }
                case 21: /* hide special */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected hide special marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (2 bytes) - type
                     * offset 4 (2 bytes) - target
                     * offset 6 (2 bytes) - special character ID
                     * total is always 8 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);

                    offset += 8;
                    break;
                }
                case 22: /* mail merge display */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected mail merge display marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (20 x 1 byte) - field name (in ASCII)
                     * offset 22 (2 bytes) - special character ID
                     * total is always 24 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 22, content, match);

                    offset += 24;
                    break;
                }
                case 23: /* overlapping letters */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected overlapping letters marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (3 x 2 bytes) - overlapping letters
                     * offset 8 (2 bytes) - special character ID
                     * total is always 10 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 8, content, match);

                    offset += 10;
                    break;
                }
                case 24: /* hyphen */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected hyphen marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (2 bytes) - width of hyphen
                     * offset 4 (2 bytes) - special character ID
                     * total is always 6 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 4, content, match);

                    offset += 6;
                    break;
                }
                case 25: /* title/table/picture show times */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected title/table/picture show times marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (2 bytes) - type
                     * offset 4 (2 bytes) - special character ID
                     * total is always 6 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 4, content, match);

                    offset += 6;
                    break;
                }
                case 26: /* browse displayed */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected browse displayed marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (60 x 2 bytes) - keyword 1
                     * offset 122 (60 x 2 bytes) - keyword 2
                     * offset 242 (2 bytes) - page number
                     * offset 244 (2 bytes) - special character ID
                     * total is always 246 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 244, content, match);

                    offset += 246;
                    break;
                }
                case 28: /* overview shape/summary number */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected overview shape/summary number marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (2 bytes) - type
                     * offset 4 (1 byte) - form
                     * offset 5 (1 byte) - step
                     * offset 6 (7 x 2 bytes) - summary number
                     * offset 20 (7 x 2 bytes) - custom
                     * offset 34 (2 x 7 x 2 bytes) - decorative letters
                     * offset 62 (2 bytes) - special character ID
                     * total is always 64 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 62, content, match);

                    offset += 64;
                    break;
                }
                case 29: /* cross-reference */
                {
                    uint32_t length;

                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected cross-reference marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (4 bytes) - length of information
                     * offset 6 (2 bytes) - special character ID
                     * offset 8 (n bytes) - ...
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 6, content, match);

                    if (fmap_readn(map, &length, offset + 2, sizeof(length)) != sizeof(length))
                        return CL_EREAD;

                    length     = le32_to_host(length);
                    new_offset = offset + (8 + length);
                    if ((new_offset <= offset) || (new_offset > map->len)) {
                        cli_errmsg("HWP3.x: Paragraph[%u, %d]: length value is too high, invalid. %u\n", level, p, length);
                        return CL_EPARSE;
                    }
                    offset = new_offset;
                    break;
                }
                case 30: /* bundle of blanks (ON SALE for 2.99!) */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected bundle of blanks marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (2 bytes) - special character ID
                     * total is always 4 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 2, content, match);

                    offset += 4;
                    break;
                }
                case 31: /* fixed-width space */
                {
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected fixed-width space marker @ offset %zu\n", level, p, offset);

                    /*
                     * offset 0 (2 bytes) - special character ID
                     * offset 2 (2 bytes) - special character ID
                     * total is always 4 bytes
                     */

                    /* id block verification (only on HWP3_VERIFY) */
                    HWP3_PSPECIAL_VERIFY(map, offset, 2, content, match);

                    offset += 4;
                    break;
                }
                default:
                    hwp3_debug("HWP3.x: Paragraph[%u, %d]: detected special character as [UNKNOWN]\n", level, p);
                    cli_errmsg("HWP3.x: Paragraph[%u, %d]: cannot understand special character %u\n", level, p, content);
                    return CL_EPARSE;
            }
        } else { /* normal characters */
            offset += sizeof(content);
        }
    }

    hwp3_debug("HWP3.x: end recursion level: %d\n", level);

    (*roffset) = offset;
    return CL_SUCCESS;
}
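
/*
 * Quick reference for the fixed special-character record sizes handled above
 * (derived from the layout comments in each case; sizes are in bytes, ID to
 * ID, and "+ list" means one or more recursively parsed paragraph lists
 * follow the fixed portion; 13 advances by sizeof(content), i.e. one hchar):
 *
 *   13 end-of-paragraph: 2       14 line info: 92            15 hidden desc: 16 + list
 *   16 header/footer: 18 + list  17 foot/endnote: 22 + list  18 paste code: 8
 *   19 code change: 8            20 thread page: 8           21 hide special: 8
 *   22 mail merge: 24            23 overlapping: 10          24 hyphen: 6
 *   25 show times: 6             26 browse: 246              28 overview: 64
 *   29 cross-ref: 8 + length     30 blank bundle: 4          31 fixed space: 4
 */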

static inline cl_error_t parsehwp3_infoblk_1(cli_ctx *ctx, fmap_t *dmap, size_t *offset, int *last)
{
    cl_error_t ret = CL_SUCCESS;

    uint32_t infoid, infolen;
    fmap_t *map = (dmap ? dmap : ctx->fmap);
    int i, count;
    unsigned long long infoloc = (unsigned long long)(*offset);
#if HWP3_DEBUG
    char field[HWP3_FIELD_LENGTH];
#endif
#if HAVE_JSON
    json_object *infoblk_1, *contents = NULL, *counter, *entry = NULL;
#endif

    hwp3_debug("HWP3.x: Information Block @ offset %llu\n", infoloc);

#if HAVE_JSON
    if (SCAN_COLLECT_METADATA) {
        infoblk_1 = cli_jsonobj(ctx->wrkproperty, "InfoBlk_1");
        if (!infoblk_1) {
            cli_errmsg("HWP3.x: No memory for information block object\n");
            return CL_EMEM;
        }

        contents = cli_jsonarray(infoblk_1, "Contents");
        if (!contents) {
            cli_errmsg("HWP3.x: No memory for information block contents array\n");
            return CL_EMEM;
        }

        if (!json_object_object_get_ex(infoblk_1, "Count", &counter)) { /* object not found */
            cli_jsonint(infoblk_1, "Count", 1);
        } else {
            int value = json_object_get_int(counter);
            cli_jsonint(infoblk_1, "Count", value + 1);
        }
    }
#endif

    if (fmap_readn(map, &infoid, *offset, sizeof(infoid)) != sizeof(infoid)) {
        cli_errmsg("HWP3.x: Failed to read information block id @ %zu\n", *offset);
        return CL_EREAD;
    }
    *offset += sizeof(infoid);
    infoid = le32_to_host(infoid);

#if HAVE_JSON
    if (SCAN_COLLECT_METADATA) {
        entry = cli_jsonobj(contents, NULL);
        if (!entry) {
            cli_errmsg("HWP3.x: No memory for information block entry object\n");
            return CL_EMEM;
        }

        cli_jsonint(entry, "ID", infoid);
    }
#endif
    hwp3_debug("HWP3.x: Information Block[%llu]: ID: %u\n", infoloc, infoid);

    /* Booking Information(5) - no length field and no content */
    if (infoid == 5) {
        hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: Booking Information\n", infoloc);
#if HAVE_JSON
        if (SCAN_COLLECT_METADATA)
            cli_jsonstr(entry, "Type", "Booking Information");
#endif
        return CL_SUCCESS;
    }

    if (fmap_readn(map, &infolen, *offset, sizeof(infolen)) != sizeof(infolen)) {
        cli_errmsg("HWP3.x: Failed to read information block len @ %zu\n", *offset);
        return CL_EREAD;
    }
    *offset += sizeof(infolen);
    infolen = le32_to_host(infolen);

#if HAVE_JSON
    if (SCAN_COLLECT_METADATA) {
        cli_jsonint64(entry, "Offset", infoloc);
        cli_jsonint(entry, "Length", infolen);
    }
#endif
    hwp3_debug("HWP3.x: Information Block[%llu]: LEN: %u\n", infoloc, infolen);

    /* check information block bounds */
    if (*offset + infolen > map->len) {
        cli_errmsg("HWP3.x: Information block length exceeds remaining map length, %zu > %zu\n", *offset + infolen, map->len);
        return CL_EREAD;
    }
2016-01-11 16:05:40 -05:00
|
|
|
/* Information Blocks */
|
2018-12-03 12:40:13 -05:00
|
|
|
switch (infoid) {
|
|
|
|
case 0: /* Terminating */
|
|
|
|
if (infolen == 0) {
|
|
|
|
hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: Terminating Entry\n", infoloc);
|
2016-01-15 15:32:04 -05:00
|
|
|
#if HAVE_JSON
|
2018-12-03 12:40:13 -05:00
|
|
|
if (SCAN_COLLECT_METADATA)
|
|
|
|
cli_jsonstr(entry, "Type", "Terminating Entry");
|
2016-01-15 15:32:04 -05:00
|
|
|
#endif
|
2018-12-03 12:40:13 -05:00
|
|
|
if (last) *last = 1;
|
|
|
|
return CL_SUCCESS;
|
|
|
|
} else {
|
|
|
|
cli_errmsg("HWP3.x: Information Block[%llu]: TYPE: Invalid Terminating Entry\n", infoloc);
|
|
|
|
return CL_EFORMAT;
|
|
|
|
}
|
|
|
|
        case 1: /* Image Data */
            hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: Image Data\n", infoloc);
#if HAVE_JSON
            if (SCAN_COLLECT_METADATA)
                cli_jsonstr(entry, "Type", "Image Data");
#endif
#if HWP3_DEBUG /* additional fields can be added */
            memset(field, 0, HWP3_FIELD_LENGTH);
            if (fmap_readn(map, field, *offset, 16) != 16) {
                cli_errmsg("HWP3.x: Failed to read information block field @ %zu\n", *offset);
                return CL_EREAD;
            }
            hwp3_debug("HWP3.x: Information Block[%llu]: NAME: %s\n", infoloc, field);

            memset(field, 0, HWP3_FIELD_LENGTH);
            if (fmap_readn(map, field, *offset + 16, 16) != 16) {
                cli_errmsg("HWP3.x: Failed to read information block field @ %zu\n", *offset);
                return CL_EREAD;
            }
            hwp3_debug("HWP3.x: Information Block[%llu]: FORM: %s\n", infoloc, field);
#endif
            /* 32 bytes for extra data fields; only scan when a payload actually follows them */
            if (infolen > 32)
                ret = cli_magic_scan_nested_fmap_type(map, *offset + 32, infolen - 32, ctx,
                                                      CL_TYPE_ANY, NULL, LAYER_ATTRIBUTES_NONE);
            break;
        case 2: /* OLE2 Data */
            hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: OLE2 Data\n", infoloc);
#if HAVE_JSON
            if (SCAN_COLLECT_METADATA)
                cli_jsonstr(entry, "Type", "OLE2 Data");
#endif
            if (infolen > 0)
                ret = cli_magic_scan_nested_fmap_type(map, *offset, infolen, ctx,
                                                      CL_TYPE_ANY, NULL, LAYER_ATTRIBUTES_NONE);
            break;
        case 3: /* Hypertext/Hyperlink Information */
            hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: Hypertext/Hyperlink Information\n", infoloc);
            if (infolen % 617) {
                cli_errmsg("HWP3.x: Information Block[%llu]: length is not a multiple of 617 => %u\n", infoloc, infolen);
                return CL_EFORMAT;
            }

            count = (infolen / 617);
            hwp3_debug("HWP3.x: Information Block[%llu]: COUNT: %d entries\n", infoloc, count);
#if HAVE_JSON
            if (SCAN_COLLECT_METADATA) {
                cli_jsonstr(entry, "Type", "Hypertext/Hyperlink Information");
                cli_jsonint(entry, "Count", count);
            }
#endif

            for (i = 0; i < count; i++) {
#if HWP3_DEBUG /* additional fields can be added */
                memset(field, 0, HWP3_FIELD_LENGTH);
                /* read this entry's name, not just the first entry's */
                if (fmap_readn(map, field, *offset + (617 * i), 256) != 256) {
                    cli_errmsg("HWP3.x: Failed to read information block field @ %zu\n", *offset);
                    return CL_EREAD;
                }
                hwp3_debug("HWP3.x: Information Block[%llu]: %d: NAME: %s\n", infoloc, i, field);
#endif
                /* scanning macros - TODO - check numbers */
                ret = cli_magic_scan_nested_fmap_type(map, *offset + (617 * i) + 288, 325, ctx,
                                                      CL_TYPE_ANY, NULL, LAYER_ATTRIBUTES_NONE);
            }
            break;
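
        /*
         * Each 617-byte hypertext entry appears to hold a 256-byte name at
         * offset 0 and a 325-byte macro blob at offset 288 (288 + 325 = 613,
         * leaving 4 trailing bytes unaccounted for); only the macro region is
         * handed to the nested magic scan. The TODO above reflects that these
         * offsets have not been re-verified against the format specification.
         */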
        case 4: /* Presentation Information */
            hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: Presentation Information\n", infoloc);
#if HAVE_JSON
            if (SCAN_COLLECT_METADATA)
                cli_jsonstr(entry, "Type", "Presentation Information");
#endif
            /* contains nothing of interest to scan */
            break;
        case 5: /* Booking Information */
            /* should never run this as it is short-circuited above */
            hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: Booking Information\n", infoloc);
#if HAVE_JSON
            if (SCAN_COLLECT_METADATA)
                cli_jsonstr(entry, "Type", "Booking Information");
#endif
            break;
        case 6: /* Background Image Data */
            hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: Background Image Data\n", infoloc);
#if HAVE_JSON
            if (SCAN_COLLECT_METADATA) {
                cli_jsonstr(entry, "Type", "Background Image Data");
                cli_jsonint(entry, "ImageSize", infolen - 324);
            }
#endif
#if HWP3_DEBUG /* additional fields can be added */
            memset(field, 0, HWP3_FIELD_LENGTH);
            if (fmap_readn(map, field, *offset + 24, 256) != 256) {
                cli_errmsg("HWP3.x: Failed to read information block field @ %zu\n", *offset);
                return CL_EREAD;
            }
            hwp3_debug("HWP3.x: Information Block[%llu]: NAME: %s\n", infoloc, field);
#endif
            /* 324 bytes for extra data fields; only scan when image data actually follows them */
            if (infolen > 324)
                ret = cli_magic_scan_nested_fmap_type(map, *offset + 324, infolen - 324, ctx,
                                                      CL_TYPE_ANY, NULL, LAYER_ATTRIBUTES_NONE);
            break;
        case 0x100: /* Table Extension */
            hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: Table Extension\n", infoloc);
#if HAVE_JSON
            if (SCAN_COLLECT_METADATA)
                cli_jsonstr(entry, "Type", "Table Extension");
#endif
            /* contains nothing of interest to scan */
            break;
        case 0x101: /* Press Frame Information Field Name */
            hwp3_debug("HWP3.x: Information Block[%llu]: TYPE: Press Frame Information Field Name\n", infoloc);
#if HAVE_JSON
            if (SCAN_COLLECT_METADATA)
                cli_jsonstr(entry, "Type", "Press Frame Information Field Name");
#endif
            /* contains nothing of interest to scan */
            break;
        default:
            cli_warnmsg("HWP3.x: Information Block[%llu]: TYPE: UNKNOWN(%u)\n", infoloc, infoid);
            if (infolen > 0)
                ret = cli_magic_scan_nested_fmap_type(map, *offset, infolen, ctx,
                                                      CL_TYPE_ANY, NULL, LAYER_ATTRIBUTES_NONE);
    }

    *offset += infolen;
    return ret;
}
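
/*
 * hwp3_cb is invoked in two ways, distinguished by the offset passed through
 * cbdata: with offset 0 it owns the mapping (the decompressed document was
 * written to a temp file and 'fd' refers to it, so a fresh fmap is created
 * and must be funmap'd on every exit path), and with a non-zero offset it
 * parses the document content stream in place within the caller's ctx->fmap.
 */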
static cl_error_t hwp3_cb(void *cbdata, int fd, const char *filepath, cli_ctx *ctx)
{
    cl_error_t ret = CL_SUCCESS;
    fmap_t *map, *dmap;
    size_t offset, start, new_offset;
    int i, p = 0, last = 0;
    uint16_t nstyles;
#if HAVE_JSON
    json_object *fonts = NULL;
#endif

    UNUSEDPARAM(filepath);

    offset = start = cbdata ? *(size_t *)cbdata : 0;

    if (offset == 0) {
        if (fd < 0) {
            cli_errmsg("HWP3.x: Invalid file descriptor argument\n");
            return CL_ENULLARG;
        } else {
            STATBUF statbuf;

            if (FSTAT(fd, &statbuf) == -1) {
                cli_errmsg("HWP3.x: Can't stat file descriptor\n");
                return CL_ESTAT;
            }

            map = dmap = fmap(fd, 0, statbuf.st_size, NULL);
            if (!map) {
                cli_errmsg("HWP3.x: Failed to get fmap for uncompressed stream\n");
                return CL_EMAP;
            }
        }
    } else {
        hwp3_debug("HWP3.x: Document Content Stream starts @ offset %zu\n", offset);
        map  = ctx->fmap;
        dmap = NULL;
    }
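
    /*
     * From here on, 'dmap' is non-NULL only when this callback created its
     * own mapping from the file descriptor above; every early return below
     * must (and does) funmap(dmap) before returning so the mapping is not
     * leaked.
     */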

    /* Fonts - 7 entries of 2 + (n x 40) bytes where n is the first 2 bytes of the entry */
#if HAVE_JSON
    if (SCAN_COLLECT_METADATA)
        fonts = cli_jsonarray(ctx->wrkproperty, "FontCounts");
#endif
    for (i = 0; i < 7; i++) {
        uint16_t nfonts;

        if (fmap_readn(map, &nfonts, offset, sizeof(nfonts)) != sizeof(nfonts)) {
            if (dmap)
                funmap(dmap);
            return CL_EREAD;
        }
        nfonts = le16_to_host(nfonts);

#if HAVE_JSON
        if (SCAN_COLLECT_METADATA)
            cli_jsonint(fonts, NULL, nfonts);
#endif
        hwp3_debug("HWP3.x: Font Entry %d with %u entries @ offset %zu\n", i + 1, nfonts, offset);

        new_offset = offset + (2 + nfonts * 40);
        if ((new_offset <= offset) || (new_offset >= map->len)) {
            cli_errmsg("HWP3.x: Font Entry: number of fonts is too high, invalid. %u\n", nfonts);
            if (dmap)
                funmap(dmap);
            return CL_EPARSE;
        }
        offset = new_offset;
    }
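
    /*
     * Illustrative layout of one font section (a sketch; n is the
     * little-endian uint16 at the start of the section, presumably one
     * section per font group defined by the format):
     *
     *   +---------+------------------+------------------+-- ... --+
     *   | n (u16) | font name 0 (40) | font name 1 (40) |   ...   |
     *   +---------+------------------+------------------+-- ... --+
     */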

    /* Styles - 2 + (n x 238) bytes where n is the first 2 bytes of the section */
    if (fmap_readn(map, &nstyles, offset, sizeof(nstyles)) != sizeof(nstyles)) {
        if (dmap)
            funmap(dmap);
        return CL_EREAD;
    }
    nstyles = le16_to_host(nstyles);

#if HAVE_JSON
    if (SCAN_COLLECT_METADATA)
        cli_jsonint(ctx->wrkproperty, "StyleCount", nstyles);
#endif
    hwp3_debug("HWP3.x: %u Styles @ offset %zu\n", nstyles, offset);

    new_offset = offset + (2 + nstyles * 238);
    if ((new_offset <= offset) || (new_offset >= map->len)) {
        cli_errmsg("HWP3.x: Styles: number of styles is too high, invalid. %u\n", nstyles);
        if (dmap)
            funmap(dmap);
        return CL_EPARSE;
    }
    offset = new_offset;
2016-01-08 11:08:28 -05:00
|
|
|
last = 0;
|
2016-01-05 17:37:40 -05:00
|
|
|
/* Paragraphs - variable */
|
2016-01-06 11:41:24 -05:00
|
|
|
/* Paragraphs - are terminated with 0x0d00[13(CR) as hchar], empty paragraph marks end of section and do NOT end with 0x0d00 */
|
2018-12-03 12:37:58 -05:00
|
|
|
while (!last && ((ret = parsehwp3_paragraph(ctx, map, p++, 0, &offset, &last)) == CL_SUCCESS)) continue;
|
2016-01-08 11:08:28 -05:00
|
|
|
/* the loop's return value is never CL_VIRUS; a non-success value indicates a parse or I/O error */
|
|
|
|
if (ret != CL_SUCCESS) {
|
|
|
|
if (dmap)
|
|
|
|
funmap(dmap);
|
|
|
|
return ret;
|
|
|
|
}
|
2016-01-15 15:32:04 -05:00
|
|
|
#if HAVE_JSON
|
2018-07-20 22:28:48 -04:00
|
|
|
if (SCAN_COLLECT_METADATA)
|
2016-01-15 15:32:04 -05:00
|
|
|
cli_jsonint(ctx->wrkproperty, "ParagraphCount", p);
|
|
|
|
#endif
|
2015-12-14 16:34:11 -05:00
|
|
|
|
2016-01-08 11:08:28 -05:00
|
|
|
last = 0;
|
2016-01-06 14:38:27 -05:00
|
|
|
/* 'additional information block #1's - attachments and media */
|
2018-12-03 12:37:58 -05:00
|
|
|
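/* walk the information blocks until the final block sets 'last' */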
while (!last && ((ret = parsehwp3_infoblk_1(ctx, map, &offset, &last)) == CL_SUCCESS)) continue;
|
2015-12-14 16:34:11 -05:00
|
|
|
|
2016-01-08 11:08:28 -05:00
|
|
|
/* scan the uncompressed stream; this runs for both the originally-compressed and uncompressed inputs [ALLMATCH] */
|
2022-08-10 09:41:30 -07:00
|
|
|
if (ret == CL_SUCCESS) {
|
|
|
|
size_t dlen = offset - start;
|
2016-01-06 15:10:39 -05:00
|
|
|
|
2022-08-10 09:41:30 -07:00
|
|
|
ret = cli_magic_scan_nested_fmap_type(map, start, dlen, ctx, CL_TYPE_ANY, NULL, LAYER_ATTRIBUTES_NONE);
|
2016-01-06 14:38:27 -05:00
|
|
|
}
|
2015-12-14 16:34:11 -05:00
|
|
|
|
2016-01-06 15:10:39 -05:00
|
|
|
if (dmap)
|
|
|
|
funmap(dmap);
|
2015-12-14 16:34:11 -05:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2019-05-04 17:28:16 -04:00
|
|
|
cl_error_t cli_scanhwp3(cli_ctx *ctx)
|
2015-12-11 17:50:40 -05:00
|
|
|
{
|
2019-05-04 17:28:16 -04:00
|
|
|
cl_error_t ret = CL_SUCCESS;
|
|
|
|
|
2015-12-11 17:50:40 -05:00
|
|
|
struct hwp3_docinfo docinfo;
|
2019-05-04 18:08:43 -04:00
|
|
|
size_t offset = 0, new_offset = 0;
|
2021-09-11 14:15:21 -07:00
|
|
|
fmap_t *map = ctx->fmap;
|
2015-12-11 17:50:40 -05:00
|
|
|
|
2015-12-14 16:34:11 -05:00
|
|
|
#if HAVE_JSON
|
2015-12-11 17:50:40 -05:00
|
|
|
/*
|
2017-08-28 17:49:17 -04:00
|
|
|
// version
|
2015-12-11 17:50:40 -05:00
|
|
|
cli_jsonint(header, "RawVersion", hwp5->version);
|
|
|
|
*/
|
2015-12-14 16:34:11 -05:00
|
|
|
#endif
|
|
|
|
offset += HWP3_IDENTITY_INFO_SIZE;
|
2015-12-11 17:50:40 -05:00
|
|
|
|
2015-12-14 16:34:11 -05:00
|
|
|
if ((ret = parsehwp3_docinfo(ctx, offset, &docinfo)) != CL_SUCCESS)
|
2015-12-11 17:50:40 -05:00
|
|
|
return ret;
|
|
|
|
|
2015-12-14 16:34:11 -05:00
|
|
|
offset += HWP3_DOCINFO_SIZE;
|
|
|
|
|
|
|
|
if ((ret = parsehwp3_docsummary(ctx, offset)) != CL_SUCCESS)
|
2015-12-11 17:50:40 -05:00
|
|
|
return ret;
|
|
|
|
|
2015-12-14 16:34:11 -05:00
|
|
|
offset += HWP3_DOCSUMMARY_SIZE;
|
|
|
|
|
2016-02-18 11:44:54 -05:00
|
|
|
/* password-protected document - cannot parse */
|
|
|
|
if (docinfo.di_passwd) {
|
|
|
|
cli_dbgmsg("HWP3.x: password-protected file, skip parsing\n");
|
|
|
|
return CL_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2015-12-14 16:34:11 -05:00
|
|
|
if (docinfo.di_infoblksize) {
|
2016-01-12 15:10:08 -05:00
|
|
|
/* OPTIONAL TODO: HANDLE OPTIONAL INFORMATION BLOCK #0's FOR PRECLASS */
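/* for now, bounds-check the block size and skip over it without parsing */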
|
2018-05-24 12:40:42 -04:00
|
|
|
new_offset = offset + docinfo.di_infoblksize;
|
|
|
|
if ((new_offset <= offset) || (new_offset >= map->len)) {
|
2018-05-21 16:58:51 -04:00
|
|
|
cli_errmsg("HWP3.x: Doc info block size is too high, invalid. %u\n", docinfo.di_infoblksize);
|
|
|
|
return CL_EPARSE;
|
|
|
|
}
|
2018-05-24 12:40:42 -04:00
|
|
|
offset = new_offset;
|
2015-12-14 16:34:11 -05:00
|
|
|
}
|
2015-12-11 17:50:40 -05:00
|
|
|
|
2016-01-05 14:22:35 -05:00
|
|
|
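/* parse the document body: inflate-and-callback when compressed, otherwise run the callback directly on the current map */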
if (docinfo.di_compressed)
|
2021-09-11 14:15:21 -07:00
|
|
|
ret = decompress_and_callback(ctx, ctx->fmap, offset, 0, "HWP3.x", hwp3_cb, NULL);
|
2016-01-06 15:10:39 -05:00
|
|
|
else
|
2018-07-30 20:19:28 -04:00
|
|
|
ret = hwp3_cb(&offset, 0, ctx->sub_filepath, ctx);
|
2016-01-05 14:22:35 -05:00
|
|
|
|
2015-12-14 16:34:11 -05:00
|
|
|
if (ret != CL_SUCCESS)
|
|
|
|
return ret;
|
|
|
|
|
2016-01-12 15:10:08 -05:00
|
|
|
/* OPTIONAL TODO: HANDLE OPTIONAL ADDITIONAL INFORMATION BLOCK #2's FOR PRECLASS */
|
2015-12-14 16:34:11 -05:00
|
|
|
|
2015-12-11 17:50:40 -05:00
|
|
|
return ret;
|
|
|
|
}
|
2015-12-15 13:01:40 -05:00
|
|
|
|
|
|
|
/*** HWPML (hijacking the msxml parser) ***/
|
2016-01-26 12:58:42 -05:00
|
|
|
#if HAVE_LIBXML2
|
2015-12-15 13:01:40 -05:00
|
|
|
static const struct key_entry hwpml_keys[] = {
|
2018-12-03 12:40:13 -05:00
|
|
|
{"hwpml", "HWPML", MSXML_JSON_ROOT | MSXML_JSON_ATTRIB},
|
2015-12-15 13:01:40 -05:00
|
|
|
|
|
|
|
/* HEAD - Document Properties */
|
2016-01-28 17:31:10 -05:00
|
|
|
//{ "head", "Head", MSXML_JSON_WRKPTR },
|
2018-12-03 12:40:13 -05:00
|
|
|
{"docsummary", "DocumentProperties", MSXML_JSON_WRKPTR},
|
|
|
|
{"title", "Title", MSXML_JSON_WRKPTR | MSXML_JSON_VALUE},
|
|
|
|
{"author", "Author", MSXML_JSON_WRKPTR | MSXML_JSON_VALUE},
|
|
|
|
{"date", "Date", MSXML_JSON_WRKPTR | MSXML_JSON_VALUE},
|
|
|
|
{"docsetting", "DocumentSettings", MSXML_JSON_WRKPTR},
|
|
|
|
{"beginnumber", "BeginNumber", MSXML_JSON_WRKPTR | MSXML_JSON_ATTRIB},
|
|
|
|
{"caretpos", "CaretPos", MSXML_JSON_WRKPTR | MSXML_JSON_ATTRIB},
|
2016-01-28 17:31:10 -05:00
|
|
|
//{ "bindatalist", "BinDataList", MSXML_JSON_WRKPTR },
|
|
|
|
//{ "binitem", "BinItem", MSXML_JSON_WRKPTR | MSXML_JSON_ATTRIB },
|
2018-12-03 12:40:13 -05:00
|
|
|
{"facenamelist", "FaceNameList", MSXML_IGNORE_ELEM}, /* fonts list */
|
|
|
|
{"borderfilllist", "BorderFillList", MSXML_IGNORE_ELEM}, /* borders list */
|
|
|
|
{"charshapelist", "CharShapeList", MSXML_IGNORE_ELEM}, /* character shapes */
|
|
|
|
{"tabdeflist", "TableDefList", MSXML_IGNORE_ELEM}, /* table defs */
|
|
|
|
{"numberinglist", "NumberingList", MSXML_IGNORE_ELEM}, /* numbering list */
|
|
|
|
{"parashapelist", "ParagraphShapeList", MSXML_IGNORE_ELEM}, /* paragraph shapes */
|
|
|
|
{"stylelist", "StyleList", MSXML_IGNORE_ELEM}, /* styles */
|
|
|
|
{"compatibledocument", "WordCompatibility", MSXML_IGNORE_ELEM}, /* word compatibility data */
|
2015-12-15 13:01:40 -05:00
|
|
|
|
|
|
|
/* BODY - Document Contents */
|
2018-12-03 12:40:13 -05:00
|
|
|
{"body", "Body", MSXML_IGNORE_ELEM}, /* document contents (we could build a document contents summary */
|
2015-12-15 13:01:40 -05:00
|
|
|
|
|
|
|
/* TAIL - Document Attachments */
|
2016-01-28 17:31:10 -05:00
|
|
|
//{ "tail", "Tail", MSXML_JSON_WRKPTR },
|
2018-12-03 12:40:13 -05:00
|
|
|
{"bindatastorage", "BinaryDataStorage", MSXML_JSON_WRKPTR},
|
|
|
|
{"bindata", "BinaryData", MSXML_SCAN_CB | MSXML_JSON_WRKPTR | MSXML_JSON_ATTRIB},
|
|
|
|
{"scriptcode", "ScriptCodeStorage", MSXML_JSON_WRKPTR | MSXML_JSON_ATTRIB},
|
|
|
|
{"scriptheader", "ScriptHeader", MSXML_SCAN_CB | MSXML_JSON_WRKPTR | MSXML_JSON_VALUE},
|
|
|
|
{"scriptsource", "ScriptSource", MSXML_SCAN_CB | MSXML_JSON_WRKPTR | MSXML_JSON_VALUE}};
|
2015-12-15 13:01:40 -05:00
|
|
|
static size_t num_hwpml_keys = sizeof(hwpml_keys) / sizeof(struct key_entry);
|
|
|
|
|
2015-12-16 16:13:05 -05:00
|
|
|
/* binary streams need to be base64-decoded and then decompressed when the corresponding attributes are set */
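/*
 * Hypothetical sketch of such an element (the attribute names and values
 * come from the checks below; the element shape is illustrative only):
 *   <BINDATA Encoding="Base64" Compress="true">eJxLy...</BINDATA>
 */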
|
2019-05-04 17:28:16 -04:00
|
|
|
static cl_error_t hwpml_scan_cb(void *cbdata, int fd, const char *filepath, cli_ctx *ctx)
|
2015-12-16 16:13:05 -05:00
|
|
|
{
|
2019-02-27 00:47:38 -05:00
|
|
|
UNUSEDPARAM(cbdata);
|
|
|
|
|
2016-02-03 17:21:57 -05:00
|
|
|
if (fd < 0 || !ctx)
|
|
|
|
return CL_ENULLARG;
|
|
|
|
|
2022-03-09 22:26:40 -08:00
|
|
|
return cli_magic_scan_desc(fd, filepath, ctx, NULL, LAYER_ATTRIBUTES_NONE);
|
2015-12-16 16:13:05 -05:00
|
|
|
}
|
|
|
|
|
2019-05-04 17:28:16 -04:00
|
|
|
static cl_error_t hwpml_binary_cb(int fd, const char *filepath, cli_ctx *ctx, int num_attribs, struct attrib_entry *attribs, void *cbdata)
|
2015-12-16 16:13:05 -05:00
|
|
|
{
|
2019-05-04 17:28:16 -04:00
|
|
|
cl_error_t ret;
|
|
|
|
|
|
|
|
int i, df = 0, com = 0, enc = 0;
|
2016-01-14 11:53:21 -05:00
|
|
|
char *tempfile;
|
2015-12-16 16:13:05 -05:00
|
|
|
|
2016-05-23 16:08:05 -04:00
|
|
|
UNUSEDPARAM(cbdata);
|
|
|
|
|
2015-12-16 16:13:05 -05:00
|
|
|
/* check attributes for compression and encoding */
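/* resulting flags: com 1 = compressed, 0 = not compressed, -1 = unrecognized value;
   enc 1 = Base64, 0 = no Encoding attribute, -1 = unrecognized value */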
|
|
|
|
for (i = 0; i < num_attribs; i++) {
|
|
|
|
if (!strcmp(attribs[i].key, "Compress")) {
|
|
|
|
if (!strcmp(attribs[i].value, "true"))
|
|
|
|
com = 1;
|
|
|
|
else if (!strcmp(attribs[i].value, "false"))
|
|
|
|
com = 0;
|
|
|
|
else
|
|
|
|
com = -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!strcmp(attribs[i].key, "Encoding")) {
|
|
|
|
if (!strcmp(attribs[i].value, "Base64"))
|
|
|
|
enc = 1;
|
|
|
|
else
|
|
|
|
enc = -1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
hwpml_debug("HWPML: Checking attributes: com: %d, enc: %d\n", com, enc);
|
|
|
|
|
|
|
|
/* decode the binary data if needed - base64 */
|
|
|
|
if (enc < 0) {
|
|
|
|
cli_errmsg("HWPML: Unrecognized encoding method\n");
|
2022-03-09 22:26:40 -08:00
|
|
|
return cli_magic_scan_desc(fd, filepath, ctx, NULL, LAYER_ATTRIBUTES_NONE);
|
2015-12-16 16:13:05 -05:00
|
|
|
} else if (enc == 1) {
|
|
|
|
STATBUF statbuf;
|
|
|
|
fmap_t *input;
|
|
|
|
const char *instream;
|
|
|
|
char *decoded;
|
|
|
|
size_t decodedlen;
|
|
|
|
|
|
|
|
hwpml_debug("HWPML: Decoding base64-encoded binary data\n");
|
|
|
|
|
|
|
|
/* fmap the input file for easier manipulation */
|
|
|
|
if (FSTAT(fd, &statbuf) == -1) {
|
|
|
|
cli_errmsg("HWPML: Can't stat file descriptor\n");
|
|
|
|
return CL_ESTAT;
|
|
|
|
}
|
|
|
|
|
2020-03-19 21:23:54 -04:00
|
|
|
if (!(input = fmap(fd, 0, statbuf.st_size, NULL))) {
|
2015-12-16 16:13:05 -05:00
|
|
|
cli_errmsg("HWPML: Failed to get fmap for binary data\n");
|
|
|
|
return CL_EMAP;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* send data for base64 conversion - TODO: what happens with really big files? */
|
|
|
|
if (!(instream = fmap_need_off_once(input, 0, input->len))) {
|
|
|
|
cli_errmsg("HWPML: Failed to get input stream from binary data\n");
|
|
|
|
funmap(input);
|
|
|
|
return CL_EMAP;
|
|
|
|
}
|
|
|
|
|
2016-02-18 11:50:57 -05:00
|
|
|
decoded = (char *)cl_base64_decode((char *)instream, input->len, NULL, &decodedlen, 0);
|
2015-12-16 16:13:05 -05:00
|
|
|
funmap(input);
|
|
|
|
if (!decoded) {
|
|
|
|
cli_errmsg("HWPML: Failed to get base64 decode binary data\n");
|
2022-03-09 22:26:40 -08:00
|
|
|
return cli_magic_scan_desc(fd, filepath, ctx, NULL, LAYER_ATTRIBUTES_NONE);
|
2015-12-16 16:13:05 -05:00
|
|
|
}
|
|
|
|
|
|
|
|
/* open file for writing and scanning */
|
2020-03-19 21:23:54 -04:00
|
|
|
if ((ret = cli_gentempfd(ctx->sub_tmpdir, &tempfile, &df)) != CL_SUCCESS) {
|
2016-01-14 11:53:21 -05:00
|
|
|
cli_warnmsg("HWPML: Failed to create temporary file for decoded stream scanning\n");
|
2015-12-16 16:13:05 -05:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2019-05-04 15:54:54 -04:00
|
|
|
if (cli_writen(df, decoded, decodedlen) != decodedlen) {
|
2015-12-16 16:13:05 -05:00
|
|
|
free(decoded);
|
2016-01-14 11:53:21 -05:00
|
|
|
ret = CL_EWRITE;
|
|
|
|
goto hwpml_end;
|
2015-12-16 16:13:05 -05:00
|
|
|
}
|
|
|
|
free(decoded);
|
|
|
|
|
|
|
|
/* keeps the later logic simpler */
|
|
|
|
fd = df;
|
|
|
|
|
|
|
|
cli_dbgmsg("HWPML: Decoded binary data to %s\n", tempfile);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* decompress the file if needed - zlib */
|
|
|
|
if (com) {
|
|
|
|
STATBUF statbuf;
|
|
|
|
fmap_t *input;
|
|
|
|
|
|
|
|
hwpml_debug("HWPML: Decompressing binary data\n");
|
|
|
|
|
|
|
|
/* fmap the input file for easier manipulation */
|
|
|
|
if (FSTAT(fd, &statbuf) == -1) {
|
|
|
|
cli_errmsg("HWPML: Can't stat file descriptor\n");
|
2016-01-14 11:53:21 -05:00
|
|
|
ret = CL_ESTAT;
|
|
|
|
goto hwpml_end;
|
2015-12-16 16:13:05 -05:00
|
|
|
}
|
|
|
|
|
2020-03-19 21:23:54 -04:00
|
|
|
input = fmap(fd, 0, statbuf.st_size, NULL);
|
2015-12-16 16:13:05 -05:00
|
|
|
if (!input) {
|
|
|
|
cli_errmsg("HWPML: Failed to get fmap for binary data\n");
|
2016-01-14 11:53:21 -05:00
|
|
|
ret = CL_EMAP;
|
|
|
|
goto hwpml_end;
|
2015-12-16 16:13:05 -05:00
|
|
|
}
|
|
|
|
ret = decompress_and_callback(ctx, input, 0, 0, "HWPML", hwpml_scan_cb, NULL);
|
|
|
|
funmap(input);
|
|
|
|
} else {
|
2018-07-30 20:19:28 -04:00
|
|
|
if (fd == df) { /* fd is a decoded tempfile */
|
|
|
|
ret = hwpml_scan_cb(NULL, fd, tempfile, ctx);
|
2018-12-03 12:40:13 -05:00
|
|
|
} else { /* fd is the original filepath, no decoding necessary */
|
2018-07-30 20:19:28 -04:00
|
|
|
ret = hwpml_scan_cb(NULL, fd, filepath, ctx);
|
|
|
|
}
|
2015-12-16 16:13:05 -05:00
|
|
|
}
|
|
|
|
|
|
|
|
/* close decoded file descriptor if used */
|
2018-12-03 12:40:13 -05:00
|
|
|
hwpml_end:
|
2015-12-16 16:13:05 -05:00
|
|
|
if (df) {
|
|
|
|
close(df);
|
|
|
|
if (!(ctx->engine->keeptmp))
|
|
|
|
cli_unlink(tempfile);
|
2016-01-14 11:53:21 -05:00
|
|
|
free(tempfile);
|
2015-12-16 16:13:05 -05:00
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
|
2016-01-26 12:58:42 -05:00
|
|
|
#endif /* HAVE_LIBXML2 */
|
2015-12-16 16:13:05 -05:00
|
|
|
|
2019-05-04 17:28:16 -04:00
|
|
|
cl_error_t cli_scanhwpml(cli_ctx *ctx)
|
2015-12-15 13:01:40 -05:00
|
|
|
{
|
2019-05-04 17:28:16 -04:00
|
|
|
cl_error_t ret = CL_SUCCESS;
|
|
|
|
|
2015-12-15 13:01:40 -05:00
|
|
|
#if HAVE_LIBXML2
|
|
|
|
struct msxml_cbdata cbdata;
|
2016-05-20 13:47:35 -04:00
|
|
|
struct msxml_ctx mxctx;
|
2015-12-15 13:01:40 -05:00
|
|
|
xmlTextReaderPtr reader = NULL;
|
|
|
|
|
|
|
|
cli_dbgmsg("in cli_scanhwpml()\n");
|
|
|
|
|
|
|
|
if (!ctx)
|
|
|
|
return CL_ENULLARG;
|
|
|
|
|
|
|
|
memset(&cbdata, 0, sizeof(cbdata));
|
2021-09-11 14:15:21 -07:00
|
|
|
cbdata.map = ctx->fmap;
|
2015-12-15 13:01:40 -05:00
|
|
|
|
|
|
|
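/* feed the current fmap to libxml2 through the shared msxml read callback */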
reader = xmlReaderForIO(msxml_read_cb, NULL, &cbdata, "hwpml.xml", NULL, CLAMAV_MIN_XMLREADER_FLAGS);
|
|
|
|
if (!reader) {
|
2016-10-19 15:57:45 -04:00
|
|
|
cli_dbgmsg("cli_scanhwpml: cannot initialize xmlReader\n");
|
2015-12-15 13:01:40 -05:00
|
|
|
|
|
|
|
#if HAVE_JSON
|
|
|
|
ret = cli_json_parse_error(ctx->wrkproperty, "HWPML_ERROR_XML_READER_IO");
|
|
|
|
#endif
|
|
|
|
return ret; // libxml2 failed!
|
|
|
|
}
|
|
|
|
|
2016-05-20 13:47:35 -04:00
|
|
|
memset(&mxctx, 0, sizeof(mxctx));
|
|
|
|
mxctx.scan_cb = hwpml_binary_cb;
|
2018-12-03 12:40:13 -05:00
|
|
|
ret = cli_msxml_parse_document(ctx, reader, hwpml_keys, num_hwpml_keys, MSXML_FLAG_JSON, &mxctx);
|
2015-12-15 13:01:40 -05:00
|
|
|
|
|
|
|
xmlTextReaderClose(reader);
|
|
|
|
xmlFreeTextReader(reader);
|
|
|
|
#else
|
|
|
|
UNUSEDPARAM(ctx);
|
|
|
|
cli_dbgmsg("in cli_scanhwpml()\n");
|
|
|
|
cli_dbgmsg("cli_scanhwpml: scanning hwpml documents requires libxml2!\n");
|
|
|
|
#endif
|
2019-05-04 17:28:16 -04:00
|
|
|
|
|
|
|
return ret;
|
2015-12-15 13:01:40 -05:00
|
|
|
}
|