/*
 * Copyright (C) 2013-2025 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
 * Copyright (C) 2007-2013 Sourcefire, Inc.
 *
 * Authors: Tomasz Kojm
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
 * MA 02110-1301, USA.
 */

#include "matcher.h"

#ifndef __OTHERS_H_LC
#define __OTHERS_H_LC

#if HAVE_CONFIG_H
#include "clamav-config.h"
#endif

#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif

#if HAVE_PTHREAD_H
#include <pthread.h>
#endif

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#include <json.h>

#include "clamav.h"
#include "dconf.h"
#include "filetypes.h"
#include "fmap.h"
#include "regex/regex.h"
#include "bytecode.h"
#include "bytecode_api.h"
#include "events.h"
#include "crtmgr.h"
# include "unrar_iface.h"
2015-07-21 16:35:48 -04:00
# ifdef HAVE_YARA
2015-05-07 15:50:37 -04:00
# include "yara_clam.h"
2015-07-21 16:35:48 -04:00
# endif
2006-10-28 12:54:36 +00:00
2014-08-26 13:47:27 -04:00
# define CLAMAV_MIN_XMLREADER_FLAGS (XML_PARSE_NOERROR | XML_PARSE_NONET)
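
/*
 * Illustrative sketch (not from the original header): these libxml2 parser
 * options suppress error/warning output and forbid network access when
 * creating a reader; `buf` and `len` below are hypothetical names.
 *
 *   xmlTextReaderPtr reader = xmlReaderForMemory(
 *       buf, (int)len, "dummy.xml", NULL, CLAMAV_MIN_XMLREADER_FLAGS);
 */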

/*
 * CL_FLEVEL is the signature f-level specific to the current code and
 * should never be modified.
 * CL_FLEVEL_DCONF is used in the dconf module and can be bumped by
 * distribution packagers provided they fix *all* security issues found
 * in the old versions of ClamAV. Updating CL_FLEVEL_DCONF will result
 * in re-enabling affected modules.
 */
#define CL_FLEVEL 230
#define CL_FLEVEL_DCONF CL_FLEVEL
#define CL_FLEVEL_SIGTOOL CL_FLEVEL

extern uint8_t cli_debug_flag;
extern uint8_t cli_always_gen_section_hash;

/*
 * CLI_ISCONTAINED(bb, bb_size, sb, sb_size) checks if sb (small buffer) is
 * within bb (big buffer).
 *
 * bb and sb are pointers (or offsets) for the main buffer and the
 * sub-buffer respectively, and bb_size and sb_size are their sizes.
 *
 * The macro can be used to protect against wraps.
 */
#define CLI_ISCONTAINED(bb, bb_size, sb, sb_size)                            \
    ((size_t)(bb_size) > 0 && (size_t)(sb_size) > 0 &&                       \
     (size_t)(sb_size) <= (size_t)(bb_size) &&                               \
     (size_t)(sb) >= (size_t)(bb) &&                                         \
     (size_t)(sb) + (size_t)(sb_size) <= (size_t)(bb) + (size_t)(bb_size) && \
     (size_t)(sb) + (size_t)(sb_size) > (size_t)(bb) &&                      \
     (size_t)(sb) < (size_t)(bb) + (size_t)(bb_size))
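
/*
 * Illustrative usage sketch (not part of the original header); `buf`,
 * `buf_size`, `sub`, and `sub_size` are hypothetical names:
 *
 *   const unsigned char *buf = ...; // big buffer of buf_size bytes
 *   const unsigned char *sub = ...; // candidate sub-buffer of sub_size bytes
 *
 *   if (CLI_ISCONTAINED(buf, buf_size, sub, sub_size)) {
 *       // safe: [sub, sub + sub_size) lies entirely inside
 *       // [buf, buf + buf_size) and neither end wraps around
 *   }
 */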

/*
 * CLI_ISCONTAINED_0_TO(bb_size, sb, sb_size) checks if sb (small offset) is
 * within bb (big offset) where the big offset always starts at 0.
 *
 * bb and sb are offsets for the main buffer and the
 * sub-buffer respectively, and bb_size and sb_size are their sizes.
 *
 * The macro can be used to protect against wraps.
 *
 * CLI_ISCONTAINED_0_TO is the same as CLI_ISCONTAINED except that `bb` is gone
 * and assumed to be zero.
 */
#define CLI_ISCONTAINED_0_TO(bb_size, sb, sb_size)            \
    ((size_t)(bb_size) > 0 && (size_t)(sb_size) > 0 &&        \
     (size_t)(sb_size) <= (size_t)(bb_size) &&                \
     (size_t)(sb) + (size_t)(sb_size) <= (size_t)(bb_size) && \
     (size_t)(sb) < (size_t)(bb_size))
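
/*
 * Sketch with hypothetical names: for untrusted offsets parsed out of a file
 * header, the offset-only form avoids needing a base pointer:
 *
 *   if (CLI_ISCONTAINED_0_TO(file_size, record_offset, record_size)) {
 *       // the record fits inside [0, file_size) without wrapping
 *   }
 */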

/*
 * CLI_ISCONTAINED_2(bb, bb_size, sb, sb_size) checks if sb (small buffer) is
 * within bb (big buffer).
 *
 * CLI_ISCONTAINED_2 is the same as CLI_ISCONTAINED except that it allows for
 * small buffers with sb_size == 0.
 */
#define CLI_ISCONTAINED_2(bb, bb_size, sb, sb_size)                          \
    ((size_t)(bb_size) > 0 &&                                                \
     (size_t)(sb_size) <= (size_t)(bb_size) &&                               \
     (size_t)(sb) >= (size_t)(bb) &&                                         \
     (size_t)(sb) + (size_t)(sb_size) <= (size_t)(bb) + (size_t)(bb_size) && \
     (size_t)(sb) + (size_t)(sb_size) >= (size_t)(bb) &&                     \
     (size_t)(sb) <= (size_t)(bb) + (size_t)(bb_size))

/*
 * CLI_ISCONTAINED_2_0_TO(bb_size, sb, sb_size) checks if sb (small offset) is
 * within bb (big offset) where the big offset always starts at 0.
 *
 * CLI_ISCONTAINED_2_0_TO is the same as CLI_ISCONTAINED_2 except that `bb` is
 * gone and assumed to be zero; like CLI_ISCONTAINED_2, it allows for
 * small buffers with sb_size == 0.
 */
#define CLI_ISCONTAINED_2_0_TO(bb_size, sb, sb_size)          \
    ((size_t)(bb_size) > 0 &&                                 \
     (size_t)(sb_size) <= (size_t)(bb_size) &&                \
     (size_t)(sb) + (size_t)(sb_size) <= (size_t)(bb_size) && \
     (size_t)(sb) <= (size_t)(bb_size))
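
/*
 * Boundary-case sketch (hypothetical values): the _2 variants accept empty
 * sub-buffers, so a zero-length region at the very end of the buffer passes
 * CLI_ISCONTAINED_2_0_TO but fails CLI_ISCONTAINED_0_TO:
 *
 *   CLI_ISCONTAINED_0_TO(100, 100, 0)   // false: requires sb_size > 0
 *   CLI_ISCONTAINED_2_0_TO(100, 100, 0) // true: sb == bb_size, sb_size == 0
 */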

#define CLI_MAX_ALLOCATION (1024 * 1024 * 1024)

#ifdef HAVE_SYS_PARAM_H
#include <sys/param.h> /* for NAME_MAX */
#endif

/* Maximum filenames under various systems - njh */
#ifndef NAME_MAX    /* e.g. Linux */
#ifdef MAXNAMELEN   /* e.g. Solaris */
#define NAME_MAX MAXNAMELEN
#else
#ifdef FILENAME_MAX /* e.g. SCO */
#define NAME_MAX FILENAME_MAX
#else
#define NAME_MAX 256
#endif
#endif
#endif

#if NAME_MAX < 256
#undef NAME_MAX
#define NAME_MAX 256
#endif

typedef struct bitset_tag {
    unsigned char *bitset;
    unsigned long length;
} bitset_t;
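
/*
 * Bit-addressing sketch (an assumption about the implementation, not a
 * documented API): a set bit n would conventionally be tested with
 *
 *   (bs->bitset[n / 8] & (1 << (n % 8))) != 0
 *
 * where `length` is presumably the size in bytes of the backing array.
 */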

typedef struct image_fuzzy_hash {
    uint8_t hash[8];
} image_fuzzy_hash_t;

typedef void *evidence_t;
typedef void *onedump_t;
typedef void *cvd_t;
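
/*
 * A single layer in the scan recursion stack. Per the scan-recursion rework
 * notes, the scanning context keeps an array of these layers, pushing one
 * whenever a new fmap is added and popping it when scanning of that fmap
 * completes. A layer that starts a new buffer (rather than a new window into
 * the current buffer) resets the nested fmap level.
 */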
typedef struct recursion_level_tag {
    cli_file_t type;
    size_t size;
    cl_fmap_t *fmap;                      /* The fmap for this layer. This used to be in an array in the ctx. */
    uint32_t recursion_level_buffer;      /* Which buffer layer in scan recursion. */
    uint32_t recursion_level_buffer_fmap; /* Which fmap layer in this buffer. */
    uint32_t attributes;                  /* Layer attributes. */
    image_fuzzy_hash_t image_fuzzy_hash;  /* Used for image/graphics files to store a fuzzy hash. */
    bool calculated_image_fuzzy_hash;     /* Whether the image fuzzy hash has already been calculated. */
    size_t object_id;                     /* Unique ID for this object. */
    json_object *metadata_json;           /* JSON object for this recursion level, e.g. for JSON metadata. */
    evidence_t evidence;                  /* Store signature matches for this layer and its children. */
libclamav: Fix scan recursion tracking
Scan recursion is the process of identifying files embedded in other
files and then scanning them, recursively.
Internally this process is more complex than it may sound because a file
may have multiple layers of types before finding a new "file".
At present we treat the recursion count in the scanning context as an
index into both our fmap list AND our container list. These two lists
are conceptually a part of the same thing and should be unified.
But what's concerning is that the "recursion level" isn't actually
incremented or decremented at the same time that we add a layer to the
fmap or container lists but instead is more touchy-feely, increasing
when we find a new "file".
To account for this shadiness, the size of the fmap and container lists
has always been a little longer than our "max scan recursion" limit so
we don't accidentally overflow the fmap or container arrays (!).
I've implemented a single recursion-stack as an array, similar to before,
which includes a pointer to each fmap at each layer, along with the size
and type. Push and pop functions add and remove layers whenever a new
fmap is added. A boolean argument when pushing indicates if the new layer
represents a new buffer or new file (descriptor). A new buffer will reset
the "nested fmap level" (described below).
This commit also provides a solution for an issue where we detect
embedded files more than once during scan recursion.
For illustration, imagine a tarball named foo.tar.gz with this structure:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| baz.exe | PE | 0 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| │ └── hello.txt | ASCII | 2 | 0 |
| └── sfx.7z | 7Z | 1 | 1 |
| └── world.txt | ASCII | 2 | 0 |
(A) If we scan for embedded files at any layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| ├── foo.tar | TAR | 1 | 0 |
| │ ├── bar.zip | ZIP | 2 | 1 |
| │ │ └── hola.txt | ASCII | 3 | 0 |
| │ ├── baz.exe | PE | 2 | 1 |
| │ │ ├── sfx.zip | ZIP | 3 | 1 |
| │ │ │ └── hello.txt | ASCII | 4 | 0 |
| │ │ └── sfx.7z | 7Z | 3 | 1 |
| │ │ └── world.txt | ASCII | 4 | 0 |
| │ ├── sfx.zip | ZIP | 2 | 1 |
| │ │ └── hello.txt | ASCII | 3 | 0 |
| │ └── sfx.7z | 7Z | 2 | 1 |
| │ └── world.txt | ASCII | 3 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| └── sfx.7z | 7Z | 1 | 1 |
(A) is bad because it scans content more than once.
Note that for the GZ layer, it may detect the ZIP and 7Z if the
signature hits on the compressed data, which it might, though
extracting the ZIP and 7Z will likely fail.
The reason the above doesn't happen now is that we restrict embedded
type scans for a bunch of archive formats to include GZ and TAR.
(B) If we scan for embedded files at the foo.tar layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| ├── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 2 | 1 |
| │ └── hello.txt | ASCII | 3 | 0 |
| └── sfx.7z | 7Z | 2 | 1 |
| └── world.txt | ASCII | 3 | 0 |
(B) is almost right. But we can achieve it easily enough only scanning for
embedded content in the current fmap when the "nested fmap level" is 0.
The upside is that it should safely detect all embedded content, even if
it may think the sfz.zip and sfx.7z are in foo.tar instead of in baz.exe.
The biggest risk I can think of affects ZIPs. SFXZIP detection
is identical to ZIP detection, which is why we don't allow SFXZIP to be
detected if insize of a ZIP. If we only allow embedded type scanning at
fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP
if the bar.exe was not compressed in foo.zip and if non-compressed files
extracted from ZIPs aren't extracted as new buffers:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.zip | ZIP | 0 | 0 |
| └── bar.exe | PE | 1 | 1 |
| └── sfx.zip | ZIP | 2 | 2 |
Provided that we ensure all files extracted from zips are scanned in
new buffers, option (B) should be safe.
(C) If we scan for embedded files at the baz.exe layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 3 | 1 |
| │ └── hello.txt | ASCII | 4 | 0 |
| └── sfx.7z | 7Z | 3 | 1 |
| └── world.txt | ASCII | 4 | 0 |
(C) is right. But it's harder to achieve. For this example we can get it by
restricting 7ZSFX and ZIPSFX detection only when scanning an executable.
But that may mean losing detection of archives embedded elsewhere.
And we'd have to identify allowable container types for each possible
embedded type, which would be very difficult.
So this commit aims to solve the issue the (B)-way.
Note that in all situations, we still have to scan with file typing
enabled to determine if we need to reassign the current file type, such
as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-
compressed. Detection of DMG and a handful of other types rely on
finding data partway through or near the ned of a file before
reassigning the entire file as the new type.
Other fixes and considerations in this commit:
- The utf16 HTML parser has weak error handling, particularly with respect
to creating a nested fmap for scanning the ascii decoded file.
This commit cleans up the error handling and wraps the nested scan with
the recursion-stack push()/pop() for correct recursion tracking.
Before this commit, each container layer had a flag to indicate if the
container layer is valid.
We need something similar so that the cli_recursion_stack_get_*()
functions ignore normalized layers. Details...
Imagine an LDB signature for HTML content that specifies a ZIP
container. If the signature actually alerts on the normalized HTML and
you don't ignore normalized layers for the container check, it will
appear as though the alert is in an HTML container rather than a ZIP
container.
This commit accomplishes this with a boolean you set in the scan context
before scanning a new layer. Then when the new fmap is created, it will
use that flag to set similar flag for the layer. The context flag is
reset those that anything after this doesn't have that flag.
The flag allows the new recursion_stack_get() function to ignore
normalized layers when iterating the stack to return a layer at a
requested index, negative or positive.
Scanning normalized extracted/normalized javascript and VBA should also
use the 'layer is normalized' flag.
- This commit also fixes Heuristic.Broken.Executable alert for ELF files
to make sure that:
A) these only alert if cli_append_virus() returns CL_VIRUS (aka it
respects the FP check).
B) all broken-executable alerts for ELF only happen if the
SCAN_HEURISTIC_BROKEN option is enabled.
- This commit also cleans up the error handling in cli_magic_scan_dir().
This was needed so we could correctly apply the layer-is-normalized-flag
to all VBA macros extracted to a directory when scanning the directory.
- Also fix an issue where exceeding scan maximums wouldn't cause embedded
file detection scans to abort. Granted we don't actually want to abort
if max filesize or max recursion depth are exceeded... only if max
scansize, max files, and max scantime are exceeded.
Add 'abort_scan' flag to scan context, to protect against depending on
correct error propagation for fatal conditions. Instead, setting this
flag in the scan context should guarantee that a fatal condition deep in
scan recursion isn't lost which result in more stuff being scanned
instead of aborting. This shouldn't be necessary, but some status codes
like CL_ETIMEOUT never used to be fatal and it's easier to do this than
to verify every parser only returns CL_ETIMEOUT and other "fatal
status codes" in fatal conditions.
- Remove duplicate is_tar() prototype from filestypes.c and include
is_tar.h instead.
- Presently we create the fmap hash when creating the fmap.
This wastes a bit of CPU if the hash is never needed.
Now that we're creating fmaps for all embedded files discovered with
file type recognition scans, this is a much more frequent occurrence and
really slows things down.
This commit fixes the issue by only creating fmap hashes as needed.
This should not only resolve the performance impact of creating fmaps
for all embedded files, but also should improve performance in general.
- Add an allmatch check to the zip parser after the central-header meta
match, so we don't get multiple alerts for the same match except in
allmatch mode. Also clean up error handling in the zip parser a tiny bit.
- Fixes to ensure that the scan limits such as scansize, filesize,
recursion depth, # of embedded files, and scantime are always reported
if AlertExceedsMax (--alert-exceeds-max) is enabled.
- Fixed an issue where non-fatal alerts for exceeding scan maximums may
mask signature matches later on. I changed it so these alerts use the
"possibly unwanted" alert-type and thus only alert if no other alerts
were found or if all-match or heuristic-precedence are enabled.
- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata
when the --gen-json feature is enabled. These will show up once under
"ParseErrors" the first time a limit is exceeded. In the present
implementation, only one limits-exceeded events will be added, so as to
prevent a malicious or malformed sample from filling the JSON buffer
with millions of events and using a tonne of RAM.
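For illustration, here is a minimal sketch of the normalized-layer indexing
behavior described in the first item above; the layer type, field names, and
function shape are assumptions for this example, not the real libclamav
definitions:
```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical layer record; the real recursion_level_t differs. */
typedef struct {
    void *fmap;
    bool  is_normalized; /* copied from the context flag at push time */
} layer_t;

/* index >= 0 counts from the outermost layer, index < 0 counts back
 * from the innermost layer; normalized layers are skipped either way. */
static layer_t *recursion_stack_get(layer_t *stack, size_t depth, int index)
{
    if (index < 0) {
        for (long i = (long)depth; i >= 0; i--) {
            if (stack[i].is_normalized)
                continue;
            if (++index == 0)
                return &stack[i];
        }
    } else {
        for (size_t i = 0; i <= depth; i++) {
            if (stack[i].is_normalized)
                continue;
            if (index-- == 0)
                return &stack[i];
        }
    }
    return NULL; /* no non-normalized layer at that index */
}
```
With this, a container lookup never lands on a normalized layer, so the ZIP
container from the LDB example above is reported instead of the HTML layer.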
2021-09-11 14:15:21 -07:00
} recursion_level_t ;
2017-01-19 12:24:46 -05:00
2006-02-15 00:41:40 +00:00
/* internal clamav context */
2010-01-19 16:38:12 +02:00
typedef struct cli_ctx_tag {
2025-06-09 01:33:26 -04:00
char * target_filepath ; /* (optional) The filepath of the original scan target. */
char * sub_tmpdir ; /* The directory to store tmp files at this recursion depth. */
2006-02-15 00:41:40 +00:00
unsigned long int * scanned ;
const struct cli_matcher * root ;
const struct cl_engine * engine ;
Fix ability to disable filesize limit with libclamav C API
You should be able to disable the maxfilesize limit by setting it to
zero. When "disabled", ClamAV should defer to inherent limitations, which
at this time is INT_MAX - 2 bytes.
This works okay for ClamScan and ClamD because our option parser
converts max-filesize=0 to 4294967295 (4GB). But it is presently broken
for other applications using the libclamav C API, like this:
```c
cl_engine_set_num(engine, CL_ENGINE_MAX_FILESIZE, 0);
```
The limit checks added for cl_scanmap_callback and cl_scanfile_callback
in 0.103.4 and 0.104.1 broke this ability because we forgot to check if
the `maxfilesize > 0` before enforcing it.
This commit adds that guard so you can disable by setting to `0`.
While working on this, I also found that the `max_size` variables in our
libmspack scanner code use an `off_t` type, which is a SIGNED integer
that may be 32 bits wide even on some 64-bit platforms, or may be 64 bits
wide. AND the default `max_size` when `maxfilesize == 0` was being set to
UINT_MAX (0xffffffff), aka `-1` when `off_t` is 32 bits.
This commit addresses this related issue by:
- changing the `max_size` to use `uint64_t`, like our other limits.
- verifying that `maxfilesize > 0` before using it.
- checking that using `UINT32_MAX` as a backup will not exceed the
max-scansize, in the same way that we do with the maxfilesize
(a sketch of this guard logic follows).
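A minimal sketch of that guard logic, under the assumption of a hypothetical
helper name (the actual libmspack scanner code is organized differently):
```c
#include <stdint.h>

/* Treat maxfilesize == 0 as "disabled": fall back to UINT32_MAX, but
 * never let the effective limit exceed a non-zero max-scansize. */
static uint64_t effective_max_size(uint64_t maxfilesize, uint64_t maxscansize)
{
    uint64_t max_size = UINT32_MAX; /* backup limit when disabled */

    if (maxfilesize > 0)
        max_size = maxfilesize; /* only enforce a real, non-zero limit */

    if (maxscansize > 0 && max_size > maxscansize)
        max_size = maxscansize;

    return max_size;
}
```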
2021-12-22 13:29:18 -08:00
uint64_t scansize ;
2018-07-20 22:28:48 -04:00
struct cl_scan_options * options ;
2008-02-06 21:19:10 +00:00
unsigned int scannedfiles ;
2022-08-27 10:46:21 -07:00
unsigned int corrupted_input ; /* Setting this flag will prevent the PE parser from reporting "broken executable" for unpacked/reconstructed files that may not be 100% to spec. */
2021-09-11 14:15:21 -07:00
recursion_level_t * recursion_stack ; /* Array of recursion levels used as a stack. */
uint32_t recursion_stack_size ; /* stack size must == engine->max_recursion_level */
uint32_t recursion_level ; /* Index into recursion_stack; current fmap recursion level from start of scan. */
2025-06-09 01:33:26 -04:00
evidence_t this_layer_evidence ; /* Pointer to current evidence in recursion_stack, varies with recursion depth. For convenience. */
2021-09-11 14:15:21 -07:00
fmap_t * fmap ; /* Pointer to current fmap in recursion_stack, varies with recursion depth. For convenience. */
2025-06-08 01:12:33 -04:00
size_t object_count ; /* Counter for number of unique entities/contained files (including normalized files) processed. */
2007-01-09 20:06:51 +00:00
struct cli_dconf * dconf ;
2018-12-03 12:40:13 -05:00
bitset_t * hook_lsig_matches ;
2010-07-07 03:01:55 +02:00
void * cb_ctx ;
2018-12-03 12:40:13 -05:00
cli_events_t * perf ;
2025-06-09 01:33:26 -04:00
struct json_object * metadata_json ; /* Top level metadata JSON object for the whole scan. */
struct json_object * this_layer_metadata_json ; /* Pointer to current metadata JSON object in recursion_stack, varies with recursion depth. For convenience. */
2014-06-13 16:11:15 -04:00
struct timeval time_limit ;
2021-09-11 14:15:21 -07:00
bool limit_exceeded ; /* To guard against alerting on limits exceeded more than once, or storing that in the JSON metadata more than once. */
bool abort_scan ; /* So we can guarantee a scan is aborted, even if CL_ETIMEOUT/etc. status is lost in the scan recursion stack. */
2006-02-15 00:41:40 +00:00
} cli_ctx ;
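As a sketch of how a parser is expected to use the recursion stack held in
this context structure, consider the following; the exact push/pop signatures
here are assumptions for illustration:
```c
/* Hypothetical wrapper: push the new fmap as a layer, scan it, and
 * always pop, so ctx->recursion_level stays balanced. */
static cl_error_t scan_new_layer(cli_ctx *ctx, fmap_t *new_map,
                                 cli_file_t type, bool is_new_buffer)
{
    cl_error_t ret;

    ret = cli_recursion_stack_push(ctx, new_map, type, is_new_buffer);
    if (CL_SUCCESS != ret)
        return ret; /* e.g. engine->max_recursion_level was reached */

    ret = cli_magic_scan(ctx, type);

    (void)cli_recursion_stack_pop(ctx); /* pop even if the scan failed */
    return ret;
}
```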
2013-10-25 10:17:22 -04:00
# define STATS_ANON_UUID "5b585e8f-3be5-11e3-bf0b-18037319526c"
2013-10-28 11:21:51 -04:00
# define STATS_MAX_SAMPLES 50
2018-12-03 12:40:13 -05:00
# define STATS_MAX_MEM ( 1024 * 1024 )
2013-10-25 10:17:22 -04:00
typedef struct cli_flagged_sample {
char * * virus_name ;
2025-06-08 01:12:33 -04:00
char md5 [ MD5_HASH_SIZE ] ;
2014-05-29 15:52:43 -04:00
uint32_t size ; /* A size of zero means size is unavailable (why would this ever happen?) */
2013-10-25 10:17:22 -04:00
uint32_t hits ;
2014-01-21 11:30:27 -05:00
stats_section_t * sections ;
2013-10-25 10:17:22 -04:00
struct cli_flagged_sample * prev ;
struct cli_flagged_sample * next ;
} cli_flagged_sample_t ;
typedef struct cli_clamav_intel {
char * hostid ;
2014-01-08 08:55:52 -05:00
char * host_info ;
2013-10-25 10:17:22 -04:00
cli_flagged_sample_t * samples ;
uint32_t nsamples ;
uint32_t maxsamples ;
uint32_t maxmem ;
2014-02-03 17:23:26 -05:00
uint32_t timeout ;
2013-10-25 10:17:22 -04:00
time_t nextupdate ;
struct cl_engine * engine ;
# ifdef CL_THREAD_SAFE
pthread_mutex_t mutex ;
# endif
} cli_intel_t ;
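For illustration, a hedged sketch of how a flagged sample might be prepended
to the list above while honoring the sample cap; the helper name is
hypothetical:
```c
/* Prepend a sample to the intel list, respecting maxsamples
 * (STATS_MAX_SAMPLES by default). In CL_THREAD_SAFE builds the
 * caller is assumed to hold intel->mutex. */
static int intel_add_sample(cli_intel_t *intel, cli_flagged_sample_t *s)
{
    if (intel->nsamples >= intel->maxsamples)
        return -1; /* cap reached; drop the sample */

    s->prev = NULL;
    s->next = intel->samples; /* push onto head of doubly-linked list */
    if (intel->samples)
        intel->samples->prev = s;
    intel->samples = s;
    intel->nsamples++;
    return 0;
}
```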
2009-12-11 23:04:18 +01:00
2018-12-03 12:40:13 -05:00
typedef struct {
uint64_t v [ 2 ] [ 4 ] ;
} icon_groupset ;
2009-12-11 23:04:18 +01:00
2009-12-06 19:49:40 +01:00
struct icomtr {
2009-12-11 23:04:18 +01:00
unsigned int group [ 2 ] ;
2009-12-06 19:49:40 +01:00
unsigned int color_avg [ 3 ] ;
unsigned int color_x [ 3 ] ;
unsigned int color_y [ 3 ] ;
unsigned int gray_avg [ 3 ] ;
unsigned int gray_x [ 3 ] ;
unsigned int gray_y [ 3 ] ;
unsigned int bright_avg [ 3 ] ;
unsigned int bright_x [ 3 ] ;
unsigned int bright_y [ 3 ] ;
unsigned int dark_avg [ 3 ] ;
unsigned int dark_x [ 3 ] ;
unsigned int dark_y [ 3 ] ;
unsigned int edge_avg [ 3 ] ;
unsigned int edge_x [ 3 ] ;
unsigned int edge_y [ 3 ] ;
unsigned int noedge_avg [ 3 ] ;
unsigned int noedge_x [ 3 ] ;
unsigned int noedge_y [ 3 ] ;
unsigned int rsum ;
unsigned int gsum ;
unsigned int bsum ;
unsigned int ccount ;
char * name ;
} ;
2009-12-11 00:52:16 +01:00
struct icon_matcher {
char * * group_names [ 2 ] ;
2025-04-23 16:43:57 -04:00
uint32_t group_counts [ 2 ] ;
2009-12-11 00:52:16 +01:00
struct icomtr * icons [ 3 ] ;
2025-04-23 16:43:57 -04:00
uint32_t icon_counts [ 3 ] ;
2009-12-11 00:52:16 +01:00
} ;
2010-01-20 22:10:56 +01:00
struct cli_dbinfo {
char * name ;
2017-08-08 17:38:17 -04:00
char * hash ;
2010-01-20 22:10:56 +01:00
size_t size ;
struct cl_cvd * cvd ;
struct cli_dbinfo * next ;
} ;
2015-07-20 15:00:18 -04:00
# define CLI_PWDB_COUNT 3
typedef enum {
CLI_PWDB_ANY = 0 ,
CLI_PWDB_ZIP = 1 ,
CLI_PWDB_RAR = 2
} cl_pwdb_t ;
struct cli_pwdb {
2015-07-14 17:23:43 -04:00
char * name ;
2017-08-08 17:38:17 -04:00
char * passwd ;
2015-07-09 17:30:47 -04:00
uint16_t length ;
2015-07-20 15:00:18 -04:00
struct cli_pwdb * next ;
2015-07-09 17:30:47 -04:00
} ;
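A hedged sketch of how these per-type password lists might be walked when
attempting to open an encrypted archive; the helper and its callback are
hypothetical, assuming one list per cl_pwdb_t slot:
```c
#include <stddef.h>
#include <stdint.h>

/* Try each password for `kind`, then fall back to the CLI_PWDB_ANY
 * list; `try_password` is a placeholder for the archive-specific test. */
static const struct cli_pwdb *find_password(struct cli_pwdb *lists[CLI_PWDB_COUNT],
                                            cl_pwdb_t kind,
                                            int (*try_password)(const char *pw, uint16_t len))
{
    const cl_pwdb_t slots[2] = {kind, CLI_PWDB_ANY};

    for (size_t i = 0; i < 2; i++) {
        if (i == 1 && CLI_PWDB_ANY == kind)
            break; /* avoid walking the ANY list twice */
        for (const struct cli_pwdb *p = lists[slots[i]]; p; p = p->next) {
            if (try_password(p->passwd, p->length))
                return p;
        }
    }
    return NULL;
}
```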
2008-11-10 17:39:58 +00:00
struct cl_engine {
uint32_t refcount ; /* reference counter */
uint32_t sdb ;
uint32_t dboptions ;
uint32_t dbversion [ 2 ] ;
2008-11-13 19:06:42 +00:00
uint32_t ac_only ;
uint32_t ac_mindepth ;
uint32_t ac_maxdepth ;
2008-11-14 22:23:39 +00:00
char * tmpdir ;
FIPS-compliant CVD signing and verification
Add X.509 certificate-chain-based signing with PKCS7-PEM external
signatures distributed alongside CVDs in a custom .cvd.sign format.
This new signing and verification mechanism is primarily in support
of FIPS compliance.
Fixes: https://github.com/Cisco-Talos/clamav/issues/564
Add a Rust implementation for parsing, verifying, and unpacking CVD
files.
Now installs a 'certs' directory in the app config directory
(e.g. <prefix>/etc/certs). The install location is configurable.
The CMake option to configure the CVD certs directory is:
`-D CVD_CERTS_DIRECTORY=PATH`
New options to set an alternative CVD certs directory:
- Commandline for freshclam, clamd, clamscan, and sigtool is:
`--cvdcertsdir PATH`
- Env variable for freshclam, clamd, clamscan, and sigtool is:
`CVD_CERTS_DIR`
- Config option for freshclam and clamd is:
`CVDCertsDirectory PATH`
Sigtool:
- Add sign/verify commands.
- Also verify CDIFF external digital signatures when applying CDIFFs.
- Place commonly used commands at the top of --help string.
- Fix up manpage.
Freshclam:
- Will try to download .sign files to verify CVDs and CDIFFs.
- Fix an issue where making a CLD would only include the CFG file for
daily, and not when patching any other database.
libclamav.so:
- Bump version to 13:0:1 (aka 12.1.0).
- Also remove libclamav.map versioning.
Resolves: https://github.com/Cisco-Talos/clamav/issues/1304
- Add two new API's to the public clamav.h header:
```c
extern cl_error_t cl_cvdverify_ex(const char *file,
const char *certs_directory);
extern cl_error_t cl_cvdunpack_ex(const char *file,
const char *dir,
bool dont_verify,
const char *certs_directory);
```
The original `cl_cvdverify` and `cl_cvdunpack` are deprecated.
- Add `cl_engine_field` enum option `CL_ENGINE_CVDCERTSDIR`.
You may set this option with `cl_engine_set_str` and get it
with `cl_engine_get_str`, to override the compiled in default
CVD certs directory.
libfreshclam.so: Bump version to 4:0:0 (aka 4.0.0).
Add sigtool sign/verify tests and test certs.
Make it so downloadFile doesn't throw a warning if the server
doesn't have the .sign file.
Replace use of md5-based FP signatures in the unit tests with
sha256-based FP signatures because the md5 implementation used
by Python may be disabled in FIPS mode.
Fixes: https://github.com/Cisco-Talos/clamav/issues/1411
CMake: Add logic to enable the Rust openssl-sys / openssl-rs crates
to build against the same OpenSSL library as is used for the C build.
The Rust unit test application must also link directly with libcrypto
and libssl.
Fix some log messages with missing new lines.
Fix missing environment variable notes in --help messages and manpages.
Deconflict CONFDIR/DATADIR/CERTSDIR variable names that are defined in
clamav-config.h.in for libclamav from variable that had the same name
for use in clamav applications that use the optparser.
The 'clamav-test' certs for the unit tests will live for 10 years.
The 'clamav-beta.crt' public cert will only live for 120 days and will
be replaced before the stable release with a production 'clamav.crt'.
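For example, an application might exercise the new `_ex` APIs like this
(the paths here are placeholders):
```c
#include <clamav.h>
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    const char *cvd   = "/var/lib/clamav/daily.cvd";
    const char *certs = "/usr/local/etc/certs"; /* or from --cvdcertsdir */

    cl_error_t ret = cl_cvdverify_ex(cvd, certs);
    if (CL_SUCCESS != ret) {
        fprintf(stderr, "verify failed: %s\n", cl_strerror(ret));
        return 1;
    }

    /* unpack to a directory, verifying again with the same certs */
    ret = cl_cvdunpack_ex(cvd, "/tmp/daily", false, certs);
    return (CL_SUCCESS == ret) ? 0 : 1;
}
```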
2024-11-21 14:01:09 -05:00
char * certs_directory ;
2008-11-14 22:23:39 +00:00
uint32_t keeptmp ;
2013-11-15 19:15:20 +00:00
uint64_t engine_options ;
2023-03-31 17:20:54 -04:00
uint32_t cache_size ;
2008-11-10 17:39:58 +00:00
/* Limits */
2021-09-11 14:15:21 -07:00
uint32_t maxscantime ; /* Time limit (in milliseconds) */
uint64_t maxscansize ; /* during the scanning of archives this size
2022-02-16 00:13:55 +01:00
* will never be exceeded
*/
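These limits correspond to public engine options; for instance, a caller
holding a `struct cl_engine *engine` from `cl_engine_new()` might set them
like this (the values are illustrative):
```c
cl_engine_set_num(engine, CL_ENGINE_MAX_SCANTIME, 120000); /* 120 s, in ms */
cl_engine_set_num(engine, CL_ENGINE_MAX_SCANSIZE, 400 * 1024 * 1024);
```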
libclamav: Fix scan recursion tracking
Scan recursion is the process of identifying files embedded in other
files and then scanning them, recursively.
Internally this process is more complex than it may sound because a file
may have multiple layers of types before finding a new "file".
At present we treat the recursion count in the scanning context as an
index into both our fmap list AND our container list. These two lists
are conceptually a part of the same thing and should be unified.
But what's concerning is that the "recursion level" isn't actually
incremented or decremented at the same time that we add a layer to the
fmap or container lists but instead is more touchy-feely, increasing
when we find a new "file".
To account for this shadiness, the size of the fmap and container lists
has always been a little longer than our "max scan recursion" limit so
we don't accidentally overflow the fmap or container arrays (!).
I've implemented a single recursion-stack as an array, similar to before,
which includes a pointer to each fmap at each layer, along with the size
and type. Push and pop functions add and remove layers whenever a new
fmap is added. A boolean argument when pushing indicates if the new layer
represents a new buffer or new file (descriptor). A new buffer will reset
the "nested fmap level" (described below).
This commit also provides a solution for an issue where we detect
embedded files more than once during scan recursion.
For illustration, imagine a tarball named foo.tar.gz with this structure:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| baz.exe | PE | 0 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| │ └── hello.txt | ASCII | 2 | 0 |
| └── sfx.7z | 7Z | 1 | 1 |
| └── world.txt | ASCII | 2 | 0 |
(A) If we scan for embedded files at any layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| ├── foo.tar | TAR | 1 | 0 |
| │ ├── bar.zip | ZIP | 2 | 1 |
| │ │ └── hola.txt | ASCII | 3 | 0 |
| │ ├── baz.exe | PE | 2 | 1 |
| │ │ ├── sfx.zip | ZIP | 3 | 1 |
| │ │ │ └── hello.txt | ASCII | 4 | 0 |
| │ │ └── sfx.7z | 7Z | 3 | 1 |
| │ │ └── world.txt | ASCII | 4 | 0 |
| │ ├── sfx.zip | ZIP | 2 | 1 |
| │ │ └── hello.txt | ASCII | 3 | 0 |
| │ └── sfx.7z | 7Z | 2 | 1 |
| │ └── world.txt | ASCII | 3 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| └── sfx.7z | 7Z | 1 | 1 |
(A) is bad because it scans content more than once.
Note that for the GZ layer, it may detect the ZIP and 7Z if the
signature hits on the compressed data, which it might, though
extracting the ZIP and 7Z will likely fail.
The reason the above doesn't happen now is that we restrict embedded
type scans for a bunch of archive formats to include GZ and TAR.
(B) If we scan for embedded files at the foo.tar layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| ├── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 2 | 1 |
| │ └── hello.txt | ASCII | 3 | 0 |
| └── sfx.7z | 7Z | 2 | 1 |
| └── world.txt | ASCII | 3 | 0 |
(B) is almost right. But we can achieve it easily enough by only scanning for
embedded content in the current fmap when the "nested fmap level" is 0.
The upside is that it should safely detect all embedded content, even if
it may think the sfx.zip and sfx.7z are in foo.tar instead of in baz.exe.
The biggest risk I can think of affects ZIPs. SFXZIP detection
is identical to ZIP detection, which is why we don't allow SFXZIP to be
detected if inside of a ZIP. If we only allow embedded type scanning at
fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP
if bar.exe was not compressed in foo.zip and if non-compressed files
extracted from ZIPs aren't extracted as new buffers:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.zip | ZIP | 0 | 0 |
| └── bar.exe | PE | 1 | 1 |
| └── sfx.zip | ZIP | 2 | 2 |
Provided that we ensure all files extracted from zips are scanned in
new buffers, option (B) should be safe.
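Expressed against the sketch above, the (B) rule is a single guard:
```c
/* (B): embedded-file type scans run only on the base fmap of each buffer,
 * i.e. when the current layer's nested fmap level is 0. */
static bool should_scan_for_embedded_files(const recursion_stack_t *s)
{
    return s->depth >= 0 && s->layers[s->depth].nested_fmap_level == 0;
}
```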
(C) If we scan for embedded files at the baz.exe layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 3 | 1 |
| │ └── hello.txt | ASCII | 4 | 0 |
| └── sfx.7z | 7Z | 3 | 1 |
| └── world.txt | ASCII | 4 | 0 |
(C) is right. But it's harder to achieve. For this example we can get it by
restricting 7ZSFX and ZIPSFX detection to cases where we are scanning an executable.
But that may mean losing detection of archives embedded elsewhere.
And we'd have to identify allowable container types for each possible
embedded type, which would be very difficult.
So this commit aims to solve the issue the (B)-way.
Note that in all situations, we still have to scan with file typing
enabled to determine if we need to reassign the current file type, such
as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-
compressed. Detection of DMG and a handful of other types relies on
finding data partway through or near the end of a file before
reassigning the entire file as the new type.
Other fixes and considerations in this commit:
- The UTF-16 HTML parser has weak error handling, particularly with respect
to creating a nested fmap for scanning the ASCII-decoded file.
This commit cleans up the error handling and wraps the nested scan with
the recursion-stack push()/pop() for correct recursion tracking.
Before this commit, each container layer had a flag to indicate if the
container layer is valid.
We need something similar so that the cli_recursion_stack_get_*()
functions ignore normalized layers. Details...
Imagine an LDB signature for HTML content that specifies a ZIP
container. If the signature actually alerts on the normalized HTML and
you don't ignore normalized layers for the container check, it will
appear as though the alert is in an HTML container rather than a ZIP
container.
This commit accomplishes this with a boolean you set in the scan context
before scanning a new layer. Then, when the new fmap is created, it will
use that flag to set a similar flag for the layer. The context flag is
then reset so that anything scanned after this doesn't have that flag.
The flag allows the new recursion_stack_get() function to ignore
normalized layers when iterating the stack to return a layer at a
requested index, negative or positive.
Scanning extracted/normalized JavaScript and VBA should also
use the 'layer is normalized' flag (see the lookup sketch after this list).
- This commit also fixes the Heuristic.Broken.Executable alert for ELF files
to make sure that:
A) these only alert if cli_append_virus() returns CL_VIRUS (aka it
respects the FP check).
B) all broken-executable alerts for ELF only happen if the
SCAN_HEURISTIC_BROKEN option is enabled.
(A schematic of this pattern appears after this list.)
- This commit also cleans up the error handling in cli_magic_scan_dir().
This was needed so we could correctly apply the layer-is-normalized-flag
to all VBA macros extracted to a directory when scanning the directory.
- Also fix an issue where exceeding scan maximums wouldn't cause embedded
file detection scans to abort. Granted we don't actually want to abort
if max filesize or max recursion depth are exceeded... only if max
scansize, max files, and max scantime are exceeded.
Add an 'abort_scan' flag to the scan context to protect against depending
on correct error propagation for fatal conditions. Instead, setting this
flag in the scan context should guarantee that a fatal condition deep in
scan recursion isn't lost, which would result in more stuff being scanned
instead of aborting. This shouldn't be necessary, but some status codes
like CL_ETIMEOUT never used to be fatal and it's easier to do this than
to verify that every parser only returns CL_ETIMEOUT and other "fatal
status codes" in fatal conditions.
(A sketch of this guard appears after this list.)
- Remove the duplicate is_tar() prototype from filetypes.c and include
is_tar.h instead.
- Presently we create the fmap hash when creating the fmap.
This wastes a bit of CPU if the hash is never needed.
Now that we're creating fmaps for all embedded files discovered with
file type recognition scans, this is a much more frequent occurrence and
really slows things down.
This commit fixes the issue by only creating fmap hashes as needed.
This should not only resolve the performance impact of creating fmaps
for all embedded files, but should also improve performance in general.
(A generic sketch of the lazy-hash pattern appears after this list.)
- Add an allmatch check to the zip parser after the central-header meta
match, so that we don't get multiple alerts with the same match except in
allmatch mode. Clean up error handling in the zip parser a tiny bit.
- Fixes to ensure that the scan limits such as scansize, filesize,
recursion depth, # of embedded files, and scantime are always reported
if AlertExceedsMax (--alert-exceeds-max) is enabled.
- Fixed an issue where non-fatal alerts for exceeding scan maximums could
mask signature matches later on. I changed it so these alerts use the
"possibly unwanted" alert-type and thus only alert if no other alerts
were found or if all-match or heuristic-precedence are enabled.
- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata
when the --gen-json feature is enabled. These will show up once under
"ParseErrors" the first time a limit is exceeded. In the present
implementation, only one limits-exceeded events will be added, so as to
prevent a malicious or malformed sample from filling the JSON buffer
with millions of events and using a tonne of RAM.
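The sketches below illustrate several of the items above. They reuse the
hypothetical layer_t/recursion_stack_t types from the earlier sketch; any
other names not declared in this header are illustrative assumptions, not
the actual libclamav implementations.

A lookup that honors the 'layer is normalized' flag might skip normalized
layers while counting, something like:
```c
/* Sketch: return the layer at `index`, counting only non-normalized layers.
 * Index 0 is the outermost such layer; -1 is the innermost (current) one. */
static const layer_t *stack_get(const recursion_stack_t *s, int index)
{
    int i;
    if (index < 0) {
        for (i = s->depth; i >= 0; i--) {
            if (s->layers[i].is_normalized)
                continue; /* e.g. normalized HTML, JS, or VBA layers */
            if (++index == 0)
                return &s->layers[i];
        }
    } else {
        for (i = 0; i <= s->depth; i++) {
            if (s->layers[i].is_normalized)
                continue;
            if (index-- == 0)
                return &s->layers[i];
        }
    }
    return NULL; /* no such layer */
}
```

The fixed ELF behavior reduces to roughly this pattern (the function name is
hypothetical; SCAN_HEURISTIC_BROKEN and cli_append_virus() are the real names
used elsewhere in this header):
```c
/* Gate the heuristic on the scan option, and treat the alert as fatal only
 * when cli_append_virus() returns CL_VIRUS, i.e. it survived the FP check. */
static cl_error_t elf_report_broken_sketch(cli_ctx *ctx)
{
    cl_error_t ret = CL_CLEAN;
    if (SCAN_HEURISTIC_BROKEN)
        ret = cli_append_virus(ctx, "Heuristics.Broken.Executable");
    return ret; /* CL_VIRUS halts the parser; CL_CLEAN lets it continue */
}
```

The 'abort_scan' guard amounts to a check at the top of each recursion step
(hypothetical helper name; CL_EMAXSIZE is a placeholder status):
```c
/* Checking the context flag here stops the scan even if an intermediate
 * parser swallowed or replaced the fatal error code on its way back up. */
static cl_error_t magic_scan_step_sketch(cli_ctx *ctx)
{
    if (ctx->abort_scan)
        return CL_EMAXSIZE; /* whichever fatal status triggered the abort */
    /* ... identify the file type and recurse into embedded files ... */
    return CL_CLEAN;
}
```

Lazy fmap hashing is the usual compute-on-first-use pattern; a generic
sketch (the struct and helpers are illustrative, not the real fmap API):
```c
/* Hypothetical digest helper; stands in for the real MD5 routine. */
void compute_md5(const uint8_t *data, size_t len, uint8_t out[16]);

struct lazy_hashed_buffer {
    const uint8_t *data;
    size_t len;
    uint8_t md5[16];
    int have_md5; /* 0 until the hash is first requested */
};

/* Compute the hash on first use; later callers get the cached value. */
static const uint8_t *get_md5(struct lazy_hashed_buffer *b)
{
    if (!b->have_md5) {
        compute_md5(b->data, b->len, b->md5);
        b->have_md5 = 1;
    }
    return b->md5;
}
```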
2021-09-11 14:15:21 -07:00
uint64_t maxfilesize ; /* compressed files will only be decompressed
2022-02-16 00:13:55 +01:00
* and scanned up to this size
*/
2021-09-11 14:15:21 -07:00
uint32_t max_recursion_level ; /* maximum recursion level for archives */
uint32_t maxfiles ; /* maximum number of files to be scanned
2022-02-16 00:13:55 +01:00
* within a single archive
*/
2008-11-10 17:39:58 +00:00
/* This is for structured data detection. You can set the minimum
2016-10-19 15:57:45 -04:00
 * number of occurrences of a CC # or SSN before the system will
2008-11-10 17:39:58 +00:00
 * generate a notification.
 */
uint32_t min_cc_count ;
uint32_t min_ssn_count ;
/* Roots table */
struct cli_matcher * * root ;
2011-01-07 02:59:41 +01:00
/* hash matcher for standard MD5 sigs */
struct cli_matcher * hm_hdb ;
/* hash matcher for MD5 sigs for PE sections */
struct cli_matcher * hm_mdb ;
2016-06-22 16:28:02 -04:00
/* hash matcher for MD5 sigs for PE import tables */
2016-06-30 11:11:03 -04:00
struct cli_matcher * hm_imp ;
2021-05-27 13:15:52 -07:00
/* hash matcher for allow list db */
2011-01-07 02:59:41 +01:00
struct cli_matcher * hm_fp ;
2010-01-07 18:26:12 +01:00
/* Container metadata */
struct cli_cdb * cdb ;
2008-11-10 17:39:58 +00:00
/* Phishing .pdb and .wdb databases*/
2021-05-27 13:15:52 -07:00
struct regex_matcher * allow_list_matcher ;
struct regex_matcher * domain_list_matcher ;
2008-11-10 17:39:58 +00:00
struct phishcheck * phishcheck ;
/* Dynamic configuration */
struct cli_dconf * dconf ;
/* Filetype definitions */
struct cli_ftype * ftypes ;
2013-09-17 16:45:48 -04:00
struct cli_ftype * ptypes ;
2008-11-10 17:39:58 +00:00
2015-07-09 17:30:47 -04:00
/* Container password storage */
2015-07-20 15:00:18 -04:00
struct cli_pwdb * * pwdbs ;
2015-07-09 17:30:47 -04:00
2016-02-23 10:29:36 -05:00
/* Pre-loading test matcher
 * Test for presence before using; cleared on engine compile.
 */
struct cli_matcher * test_root ;
2008-11-10 17:39:58 +00:00
/* Ignored signatures */
2009-09-28 19:33:59 +02:00
struct cli_matcher * ignored ;
2008-11-10 17:39:58 +00:00
/* PUA categories (to be included or excluded) */
char * pua_cats ;
2009-12-06 19:49:40 +01:00
/* Icon reference storage */
2009-12-11 00:52:16 +01:00
struct icon_matcher * iconcheck ;
2009-12-06 19:49:40 +01:00
2010-01-14 18:54:53 +01:00
/* Negative cache storage */
struct CACHE * cache ;
2010-01-20 22:10:56 +01:00
/* Database information from .info files */
struct cli_dbinfo * dbinfo ;
2021-07-16 11:47:23 -07:00
/* Signature counting, for progress callbacks */
size_t num_total_signatures ;
2008-11-10 17:39:58 +00:00
/* Used for memory pools */
2009-01-26 19:47:02 +00:00
mpool_t * mempool ;
2009-09-21 19:24:16 +03:00
2011-12-21 22:52:46 +01:00
/* crtmgr stuff */
crtmgr cmgr ;
2010-06-22 15:41:19 +02:00
/* Callback(s) */
2020-12-18 09:13:17 -08:00
clcb_file_inspection cb_file_inspection ;
2011-06-14 17:00:06 +02:00
clcb_pre_cache cb_pre_cache ;
2010-06-22 15:41:19 +02:00
clcb_pre_scan cb_pre_scan ;
clcb_post_scan cb_post_scan ;
2015-10-01 17:47:37 -04:00
clcb_virus_found cb_virus_found ;
2010-06-22 15:41:19 +02:00
clcb_sigload cb_sigload ;
void * cb_sigload_ctx ;
2010-11-02 12:26:33 +02:00
clcb_hash cb_hash ;
2011-08-22 15:22:55 +03:00
clcb_meta cb_meta ;
2023-03-02 17:23:46 -08:00
clcb_generic_data cb_vba ;
2014-06-03 13:31:50 -04:00
clcb_file_props cb_file_props ;
2021-07-16 11:47:23 -07:00
clcb_progress cb_sigload_progress ;
void * cb_sigload_progress_ctx ;
clcb_progress cb_engine_compile_progress ;
void * cb_engine_compile_progress_ctx ;
clcb_progress cb_engine_free_progress ;
void * cb_engine_free_progress_ctx ;
2010-06-22 15:41:19 +02:00
2009-09-21 19:24:16 +03:00
/* Used for bytecode */
struct cli_all_bc bcs ;
2009-10-02 17:33:11 +03:00
unsigned * hooks [ _BC_LAST_HOOK - _BC_START_HOOKS ] ;
unsigned hooks_cnt [ _BC_LAST_HOOK - _BC_START_HOOKS ] ;
2010-01-19 16:38:12 +02:00
unsigned hook_lsig_ids ;
2010-01-22 14:36:56 +02:00
enum bytecode_security bytecode_security ;
2010-03-22 17:16:07 +02:00
uint32_t bytecode_timeout ;
2010-07-29 13:22:35 +03:00
enum bytecode_mode bytecode_mode ;
2012-11-27 17:15:02 -05:00
/* Engine max settings */
2018-12-03 12:40:13 -05:00
uint64_t maxembeddedpe ; /* max size to scan MSEXE for PE */
uint64_t maxhtmlnormalize ; /* max size to normalize HTML */
uint64_t maxhtmlnotags ; /* max size for scanning normalized HTML */
2012-11-27 17:15:02 -05:00
uint64_t maxscriptnormalize ; /* max size to normalize scripts */
2018-12-03 12:40:13 -05:00
uint64_t maxziptypercg ; /* max size to re-do zip filetype */
2013-10-25 10:17:22 -04:00
/* Statistics/intelligence gathering */
void * stats_data ;
clcb_stats_add_sample cb_stats_add_sample ;
clcb_stats_remove_sample cb_stats_remove_sample ;
clcb_stats_decrement_count cb_stats_decrement_count ;
clcb_stats_submit cb_stats_submit ;
clcb_stats_flush cb_stats_flush ;
clcb_stats_get_num cb_stats_get_num ;
clcb_stats_get_size cb_stats_get_size ;
clcb_stats_get_hostid cb_stats_get_hostid ;
2014-02-06 18:55:40 -05:00
2014-03-06 18:19:11 -05:00
/* Raw disk image max settings */
2014-05-09 13:33:01 -04:00
uint32_t maxpartitions ; /* max number of partitions to scan in a disk image */
2014-03-06 18:19:11 -05:00
/* Engine max settings */
uint32_t maxiconspe ; /* max number of icons to scan for PE */
2016-01-19 14:25:55 -05:00
uint32_t maxrechwp3 ; /* max recursive calls for HWP3 parsing */
2014-06-13 16:11:15 -04:00
2014-08-25 19:11:12 -04:00
/* PCRE matching limitations */
uint64_t pcre_match_limit ;
uint64_t pcre_recmatch_limit ;
2014-09-19 02:39:52 -04:00
uint64_t pcre_max_filesize ;
2015-05-28 13:36:09 -04:00
2015-07-21 16:35:48 -04:00
# ifdef HAVE_YARA
2015-05-28 13:36:09 -04:00
/* YARA */
2018-12-03 12:40:13 -05:00
struct _yara_global * yara_global ;
2015-07-21 16:35:48 -04:00
# endif
2008-11-10 17:39:58 +00:00
} ;
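The limit fields above are normally configured through the public accessors
declared in clamav.h rather than by writing to this struct directly; for
example:
```c
/* Configure engine limits via the public API (the values are examples). */
struct cl_engine *engine = cl_engine_new();
cl_engine_set_num(engine, CL_ENGINE_MAX_SCANSIZE, 400 * 1024 * 1024);
cl_engine_set_num(engine, CL_ENGINE_MAX_FILESIZE, 100 * 1024 * 1024);
cl_engine_set_num(engine, CL_ENGINE_MAX_RECURSION, 17);
cl_engine_set_num(engine, CL_ENGINE_MAX_FILES, 10000);
```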
2009-03-02 18:56:03 +00:00
struct cl_settings {
/* don't store dboptions here; it needs to be provided to cl_load() and
 * can be optionally obtained with cl_engine_get() or from the original
* settings stored by the application
*/
uint32_t ac_only ;
uint32_t ac_mindepth ;
uint32_t ac_maxdepth ;
char * tmpdir ;
uint32_t keeptmp ;
2019-08-16 17:18:59 -07:00
uint32_t maxscantime ;
2009-03-02 18:56:03 +00:00
uint64_t maxscansize ;
uint64_t maxfilesize ;
2021-09-11 14:15:21 -07:00
uint32_t max_recursion_level ;
2009-03-02 18:56:03 +00:00
uint32_t maxfiles ;
uint32_t min_cc_count ;
uint32_t min_ssn_count ;
2010-12-10 11:05:04 +02:00
enum bytecode_security bytecode_security ;
uint32_t bytecode_timeout ;
enum bytecode_mode bytecode_mode ;
2009-03-02 18:56:03 +00:00
char * pua_cats ;
2013-11-15 19:15:20 +00:00
uint64_t engine_options ;
2023-03-31 17:20:54 -04:00
uint32_t cache_size ;
2010-12-09 13:31:17 +01:00
/* callbacks */
2011-06-14 17:00:06 +02:00
clcb_pre_cache cb_pre_cache ;
2010-12-09 13:31:17 +01:00
clcb_pre_scan cb_pre_scan ;
clcb_post_scan cb_post_scan ;
2015-10-01 17:47:37 -04:00
clcb_virus_found cb_virus_found ;
2010-12-09 13:31:17 +01:00
clcb_sigload cb_sigload ;
void * cb_sigload_ctx ;
clcb_msg cb_msg ;
clcb_hash cb_hash ;
2013-03-26 16:51:51 -04:00
clcb_meta cb_meta ;
2014-06-03 13:31:50 -04:00
clcb_file_props cb_file_props ;
2021-07-16 11:47:23 -07:00
clcb_progress cb_sigload_progress ;
void * cb_sigload_progress_ctx ;
clcb_progress cb_engine_compile_progress ;
void * cb_engine_compile_progress_ctx ;
clcb_progress cb_engine_free_progress ;
void * cb_engine_free_progress_ctx ;
2012-11-27 17:15:02 -05:00
/* Engine max settings */
2018-12-03 12:40:13 -05:00
uint64_t maxembeddedpe ; /* max size to scan MSEXE for PE */
uint64_t maxhtmlnormalize ; /* max size to normalize HTML */
uint64_t maxhtmlnotags ; /* max size for scanning normalized HTML */
2012-11-27 17:15:02 -05:00
uint64_t maxscriptnormalize ; /* max size to normalize scripts */
2018-12-03 12:40:13 -05:00
uint64_t maxziptypercg ; /* max size to re-do zip filetype */
2014-01-15 19:22:53 +00:00
/* Statistics/intelligence gathering */
void * stats_data ;
clcb_stats_add_sample cb_stats_add_sample ;
clcb_stats_remove_sample cb_stats_remove_sample ;
clcb_stats_decrement_count cb_stats_decrement_count ;
clcb_stats_submit cb_stats_submit ;
clcb_stats_flush cb_stats_flush ;
clcb_stats_get_num cb_stats_get_num ;
clcb_stats_get_size cb_stats_get_size ;
clcb_stats_get_hostid cb_stats_get_hostid ;
2014-02-06 18:55:40 -05:00
2014-03-06 18:19:11 -05:00
/* Raw disk image max settings */
uint32_t maxpartitions ; /* max number of partitions to scan in a disk image */
/* Engine max settings */
uint32_t maxiconspe ; /* max number of icons to scan for PE */
2016-01-19 14:25:55 -05:00
uint32_t maxrechwp3 ; /* max recursive calls for HWP3 parsing */
2014-08-25 19:11:12 -04:00
/* PCRE matching limitations */
uint64_t pcre_match_limit ;
uint64_t pcre_recmatch_limit ;
2014-09-19 02:39:52 -04:00
uint64_t pcre_max_filesize ;
2009-03-02 18:56:03 +00:00
} ;
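This snapshot struct backs the settings copy/apply API in clamav.h; a typical
round trip looks like:
```c
/* Snapshot one engine's settings and apply them to a fresh engine,
 * e.g. across a database reload. */
struct cl_settings *settings = cl_engine_settings_copy(engine);
struct cl_engine *new_engine = cl_engine_new();
cl_engine_settings_apply(new_engine, settings);
cl_engine_settings_free(settings);
```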
2018-07-30 20:19:28 -04:00
extern cl_unrar_error_t (*cli_unrar_open)(const char *filename, void **hArchive, char **comment, uint32_t *comment_size, uint8_t debug_flag);
extern cl_unrar_error_t (*cli_unrar_peek_file_header)(void *hArchive, unrar_metadata_t *file_metadata);
2018-12-03 12:40:13 -05:00
extern cl_unrar_error_t (*cli_unrar_extract_file)(void *hArchive, const char *destPath, char *outputBuffer);
2018-07-30 20:19:28 -04:00
extern cl_unrar_error_t (*cli_unrar_skip_file)(void *hArchive);
extern void (*cli_unrar_close)(void *hArchive);
Add CMake build tooling
This patch adds experimental-quality CMake build tooling.
The libmspack build required a modification to use "" instead of <> for
header #includes. This will hopefully be included in the libmspack
upstream project when adding CMake build tooling to libmspack.
Removed use of libltdl when using CMake.
Flex & Bison are now required to build.
If -DMAINTAINER_MODE, then GPERF is also required, though it currently
doesn't actually do anything. TODO!
I found that the autotools build system was generating the lexer output
but not actually compiling it, instead using previously generated (and
manually renamed) lexer C source. As a consequence, changes to the .l
and .y files weren't making it into the build. To resolve this, I
removed the generated flex/bison files and fixed the tooling to use the
freshly generated files. Flex and bison are now required build tools.
On Windows, this adds a dependency on the winflexbison package,
which can be obtained using Chocolatey or may be manually installed.
CMake tooling only has partial support for building with external LLVM
library, and no support for the internal LLVM (to be removed in the
future). I.e. The CMake build currently only supports the bytecode
interpreter.
Many files used include paths relative to the top source directory or
relative to the current project, rather than relative to each build
target. Modern CMake support requires including internal dependency
headers the same way you would external dependency headers (albeit
with "" instead of <>). This meant correcting all header includes to
be relative to the build targets and not relative to the workspace.
For example, ...
```c
#include "../libclamav/clamav.h"
#include "clamd/clamd_others.h"
```
... becomes:
```c
// libclamav
#include "clamav.h"
// clamd
#include "clamd_others.h"
```
Fixes header name conflicts by renaming a few of the files.
Converted the "shared" code into a static library, which depends on
libclamav. The ironically named "shared" static library provides
features common to the ClamAV apps which are not required in
libclamav itself and are not intended for use by downstream projects.
This change was required for correct modern CMake practices but was
also required to use the automake "subdir-objects" option.
This eliminates warnings when running autoreconf which, in the next
version of autoconf & automake are likely to break the build.
libclamav used to build in multiple stages where an earlier stage is
a static library containing utils required by the "shared" code.
Linking clamdscan and clamdtop with this libclamav utils static lib
allowed these two apps to function without libclamav. While this is
nice in theory, the practical gains are minimal and it complicates
the build system. As such, the autotools and CMake tooling was
simplified for improved maintainability and this feature was thrown
out. clamdtop and clamdscan now require libclamav to function.
Removed the nopthreads version of the autotools
libclamav_internal_utils static library and added pthread linking to
a couple apps that may have issues building on some platforms without
it, with the intention of removing needless complexity from the
source. Kept the regular version of libclamav_internal_utils.la
though it is no longer used anywhere but in libclamav.
Added an experimental doxygen build option which attempts to build
clamav.h and libfreshclam doxygen html docs.
The CMake build tooling also may build the example program(s), which
isn't a feature in the Autotools build system.
Changed C standard to C90+ due to inline linking issues with socket.h
when linking libfreshclam.so on Linux.
Generate common.rc for win32.
Fix tabs/spaces in shared Makefile.am, and remove vestigial ifndef
from misc.c.
Add CMake files to the automake dist, so users can try the new
CMake tooling w/out having to build from a git clone.
clamonacc changes:
- Renamed FANOTIFY macro to HAVE_SYS_FANOTIFY_H to better match other
similar macros.
- Added a new clamav-clamonacc.service systemd unit file, based on
the work of ChadDevOps & Aaron Brighton.
- Added missing clamonacc man page.
Updates to clamdscan man page, add missing options.
Remove vestigial CL_NOLIBCLAMAV definitions (all apps now use
libclamav).
Rename Windows mspack.dll to libmspack.dll so all ClamAV-built
libraries have the lib-prefix with Visual Studio as with CMake.
2020-08-13 00:25:34 -07:00
extern LIBCLAMAV_EXPORT int have_rar ;
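These pointers are resolved at runtime because the unrar support is built as
a separately licensed library. A rough sketch of the kind of loading involved
(the library and symbol names here are assumptions, not verified against the
actual loader):
```c
#include <dlfcn.h>

/* Sketch only: resolve one unrar entry point at runtime and record
 * availability in have_rar. The real loader may differ. */
static void load_unrar_sketch(void)
{
    void *handle = dlopen("libclamunrar_iface.so", RTLD_NOW);
    if (NULL == handle) {
        have_rar = 0;
        return;
    }
    cli_unrar_open = (cl_unrar_error_t(*)(const char *, void **, char **,
                                          uint32_t *, uint8_t))dlsym(handle, "unrar_open");
    have_rar = (NULL != cli_unrar_open);
}
```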
2008-11-13 02:11:21 +00:00
2018-12-03 12:40:13 -05:00
# define SCAN_ALLMATCHES (ctx->options->general & CL_SCAN_GENERAL_ALLMATCHES)
# define SCAN_COLLECT_METADATA (ctx->options->general & CL_SCAN_GENERAL_COLLECT_METADATA)
# define SCAN_HEURISTICS (ctx->options->general & CL_SCAN_GENERAL_HEURISTICS)
# define SCAN_HEURISTIC_PRECEDENCE (ctx->options->general & CL_SCAN_GENERAL_HEURISTIC_PRECEDENCE)
2020-01-23 17:42:33 -08:00
# define SCAN_UNPRIVILEGED (ctx->options->general & CL_SCAN_GENERAL_UNPRIVILEGED)
2025-04-07 16:50:09 -07:00
# define SCAN_STORE_HTML_URIS (ctx->options->general & CL_SCAN_GENERAL_STORE_HTML_URIS)
# define SCAN_STORE_PDF_URIS (ctx->options->general & CL_SCAN_GENERAL_STORE_PDF_URIS)
2025-06-03 19:03:20 -04:00
# define SCAN_STORE_EXTRA_HASHES (ctx->options->general & CL_SCAN_GENERAL_STORE_EXTRA_HASHES)
2018-12-03 12:40:13 -05:00
# define SCAN_PARSE_ARCHIVE (ctx->options->parse & CL_SCAN_PARSE_ARCHIVE)
# define SCAN_PARSE_ELF (ctx->options->parse & CL_SCAN_PARSE_ELF)
# define SCAN_PARSE_PDF (ctx->options->parse & CL_SCAN_PARSE_PDF)
# define SCAN_PARSE_SWF (ctx->options->parse & CL_SCAN_PARSE_SWF)
# define SCAN_PARSE_HWP3 (ctx->options->parse & CL_SCAN_PARSE_HWP3)
# define SCAN_PARSE_XMLDOCS (ctx->options->parse & CL_SCAN_PARSE_XMLDOCS)
# define SCAN_PARSE_MAIL (ctx->options->parse & CL_SCAN_PARSE_MAIL)
# define SCAN_PARSE_OLE2 (ctx->options->parse & CL_SCAN_PARSE_OLE2)
# define SCAN_PARSE_HTML (ctx->options->parse & CL_SCAN_PARSE_HTML)
# define SCAN_PARSE_PE (ctx->options->parse & CL_SCAN_PARSE_PE)
2023-09-29 12:49:29 -07:00
# define SCAN_PARSE_ONENOTE (ctx->options->parse & CL_SCAN_PARSE_ONENOTE)
2024-02-25 12:43:56 -05:00
# define SCAN_PARSE_IMAGE (ctx->options->parse & CL_SCAN_PARSE_IMAGE)
# define SCAN_PARSE_IMAGE_FUZZY_HASH (ctx->options->parse & CL_SCAN_PARSE_IMAGE_FUZZY_HASH)
2018-12-03 12:40:13 -05:00
# define SCAN_HEURISTIC_BROKEN (ctx->options->heuristic & CL_SCAN_HEURISTIC_BROKEN)
GIF, PNG bugfixes; Add AlertBrokenMedia option
Added a new scan option to alert on broken media (graphics) file
formats. This feature mitigates the risk of malformed media files
intended to exploit vulnerabilities in other software. At present
media validation exists for JPEG, TIFF, PNG, and GIF files.
To enable this feature, set `AlertBrokenMedia yes` in clamd.conf, or
use the `--alert-broken-media` option when using `clamscan`.
These options are disabled by default for now.
Application developers may enable this scan option by enabling
`CL_SCAN_HEURISTIC_BROKEN_MEDIA` for the `heuristic` scan option bit
field.
Fixed PNG parser logic bugs that caused an excess of parsing errors
and fixed a stack exhaustion issue affecting some systems when
scanning PNG files. PNG file type detection was disabled via
signature database update for 0.103.0 to mitigate effects from these
bugs.
Fixed an issue where PNG and GIF files would no longer work with Target:5
(graphics) signatures if detected as CL_TYPE_PNG/GIF rather than as
CL_TYPE_GRAPHICS. Target types now support up to 10 possible file
types to make way for additional graphics types in future releases.
Scanning JPEG, TIFF, PNG, and GIF files will no longer return "parse"
errors when file format validation fails. Instead, the scan will alert
with the "Heuristics.Broken.Media" signature prefix and a descriptive
suffix to indicate the issue, provided that the "alert broken media"
feature is enabled.
GIF format validation will no longer fail if the GIF image is missing
the trailer byte, as this appears to be a relatively common issue in
otherwise functional GIF files.
Added a TIFF dynamic configuration (DCONF) option, which was missing.
This will allow us to disable TIFF format validation via signature
database update in the event that it proves to be problematic.
This feature already exists for many other file types.
Added CL_TYPE_JPEG and CL_TYPE_TIFF types.
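For application developers, enabling the bit looks like this (cl_scan_options
is the public options struct from clamav.h):
```c
/* Enable broken-media alerting in the heuristic option bit field. */
struct cl_scan_options options = {0};
options.parse = ~0;                                  /* enable all parsers */
options.heuristic |= CL_SCAN_HEURISTIC_BROKEN_MEDIA; /* AlertBrokenMedia */
/* then pass &options to cl_scanfile() or cl_scanmap_callback() */
```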
2020-11-04 15:49:43 -08:00
# define SCAN_HEURISTIC_BROKEN_MEDIA (ctx->options->heuristic & CL_SCAN_HEURISTIC_BROKEN_MEDIA)
2018-12-03 12:40:13 -05:00
# define SCAN_HEURISTIC_EXCEEDS_MAX (ctx->options->heuristic & CL_SCAN_HEURISTIC_EXCEEDS_MAX)
# define SCAN_HEURISTIC_PHISHING_SSL_MISMATCH (ctx->options->heuristic & CL_SCAN_HEURISTIC_PHISHING_SSL_MISMATCH)
# define SCAN_HEURISTIC_PHISHING_CLOAK (ctx->options->heuristic & CL_SCAN_HEURISTIC_PHISHING_CLOAK)
# define SCAN_HEURISTIC_MACROS (ctx->options->heuristic & CL_SCAN_HEURISTIC_MACROS)
# define SCAN_HEURISTIC_ENCRYPTED_ARCHIVE (ctx->options->heuristic & CL_SCAN_HEURISTIC_ENCRYPTED_ARCHIVE)
# define SCAN_HEURISTIC_ENCRYPTED_DOC (ctx->options->heuristic & CL_SCAN_HEURISTIC_ENCRYPTED_DOC)
# define SCAN_HEURISTIC_PARTITION_INTXN (ctx->options->heuristic & CL_SCAN_HEURISTIC_PARTITION_INTXN)
# define SCAN_HEURISTIC_STRUCTURED (ctx->options->heuristic & CL_SCAN_HEURISTIC_STRUCTURED)
# define SCAN_HEURISTIC_STRUCTURED_SSN_NORMAL (ctx->options->heuristic & CL_SCAN_HEURISTIC_STRUCTURED_SSN_NORMAL)
# define SCAN_HEURISTIC_STRUCTURED_SSN_STRIPPED (ctx->options->heuristic & CL_SCAN_HEURISTIC_STRUCTURED_SSN_STRIPPED)
# define SCAN_MAIL_PARTIAL_MESSAGE (ctx->options->mail & CL_SCAN_MAIL_PARTIAL_MESSAGE)
# define SCAN_DEV_COLLECT_PERF_INFO (ctx->options->dev & CL_SCAN_DEV_COLLECT_PERFORMANCE_INFO)
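These helpers assume a cli_ctx pointer named `ctx` is in scope; a typical
check inside a parser looks like (the signature name is illustrative):
```c
cl_error_t status = CL_CLEAN;

/* The macros expand to bit tests against ctx->options, so they are cheap
 * enough to consult per file. */
if (SCAN_HEURISTIC_ENCRYPTED_ARCHIVE)
    status = cli_append_potentially_unwanted(ctx, "Heuristics.Encrypted.Zip");
```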
2006-02-15 00:41:40 +00:00
2007-12-21 23:47:52 +00:00
/* based on macros from A. Melnikoff */
# define cbswap16(v) (((v & 0xff) << 8) | (((v) >> 8) & 0xff))
2025-06-03 19:03:20 -04:00
# define cbswap32(v) ((((v) & 0x000000ff) << 24) | (((v) & 0x0000ff00) << 8) | \
                      (((v) & 0x00ff0000) >> 8) | (((v) & 0xff000000) >> 24))
# define cbswap64(v) ((((v) & 0x00000000000000ffULL) << 56) | \
                      (((v) & 0x000000000000ff00ULL) << 40) | \
                      (((v) & 0x0000000000ff0000ULL) << 24) | \
                      (((v) & 0x00000000ff000000ULL) << 8) |  \
                      (((v) & 0x000000ff00000000ULL) >> 8) |  \
                      (((v) & 0x0000ff0000000000ULL) >> 24) | \
                      (((v) & 0x00ff000000000000ULL) >> 40) | \
                      (((v) & 0xff00000000000000ULL) >> 56))
2018-12-03 12:40:13 -05:00
# ifndef HAVE_ATTRIB_PACKED
2010-01-29 12:17:07 +02:00
# define __attribute__(x)
# endif
# ifdef HAVE_PRAGMA_PACK
# pragma pack(1)
# endif
# ifdef HAVE_PRAGMA_PACK_HPPA
# pragma pack 1
# endif
2007-12-21 23:47:52 +00:00
2009-08-22 11:35:04 +03:00
union unaligned_64 {
2018-12-03 12:40:13 -05:00
    uint64_t una_u64;
    int64_t una_s64;
2009-08-22 11:35:04 +03:00
} __attribute__((packed));
union unaligned_32 {
2018-12-03 12:40:13 -05:00
    uint32_t una_u32;
    int32_t una_s32;
2009-08-22 11:35:04 +03:00
} __attribute__((packed));
union unaligned_16 {
2018-12-03 12:40:13 -05:00
    uint16_t una_u16;
    int16_t una_s16;
2009-08-22 11:35:04 +03:00
} __attribute__((packed));
2008-04-07 14:12:24 +00:00
2010-06-29 12:09:29 +03:00
struct unaligned_ptr {
    void *ptr;
} __attribute__((packed));
2008-04-07 14:12:24 +00:00
# ifdef HAVE_PRAGMA_PACK
# pragma pack()
# endif
# ifdef HAVE_PRAGMA_PACK_HPPA
# pragma pack
# endif
2010-01-29 12:17:07 +02:00
# if WORDS_BIGENDIAN == 0
2018-12-03 12:40:13 -05:00
/* Little endian */
# define le16_to_host(v) (v)
# define le32_to_host(v) (v)
# define le64_to_host(v) (v)
# define be16_to_host(v) cbswap16(v)
# define be32_to_host(v) cbswap32(v)
# define be64_to_host(v) cbswap64(v)
# define cli_readint64(buff) (((const union unaligned_64 *)(buff))->una_s64)
# define cli_readint32(buff) (((const union unaligned_32 *)(buff))->una_s32)
# define cli_readint16(buff) (((const union unaligned_16 *)(buff))->una_s16)
# define cli_writeint32(offset, value) (((union unaligned_32 *)(offset))->una_u32 = (uint32_t)(value))
2006-04-07 23:31:41 +00:00
# else
2018-12-03 12:40:13 -05:00
/* Big endian */
# define le16_to_host(v) cbswap16(v)
# define le32_to_host(v) cbswap32(v)
# define le64_to_host(v) cbswap64(v)
# define be16_to_host(v) (v)
# define be32_to_host(v) (v)
# define be64_to_host(v) (v)
static inline int64_t cli_readint64(const void *buff)
{
    int64_t ret;
    ret = (int64_t)((const char *)buff)[0] & 0xff;
    ret |= (int64_t)(((const char *)buff)[1] & 0xff) << 8;
    ret |= (int64_t)(((const char *)buff)[2] & 0xff) << 16;
    ret |= (int64_t)(((const char *)buff)[3] & 0xff) << 24;
    ret |= (int64_t)(((const char *)buff)[4] & 0xff) << 32;
    ret |= (int64_t)(((const char *)buff)[5] & 0xff) << 40;
    ret |= (int64_t)(((const char *)buff)[6] & 0xff) << 48;
    ret |= (int64_t)(((const char *)buff)[7] & 0xff) << 56;
    return ret;
}
static inline int32_t cli_readint32(const void *buff)
{
    int32_t ret;
    ret = (int32_t)((const char *)buff)[0] & 0xff;
    ret |= (int32_t)(((const char *)buff)[1] & 0xff) << 8;
    ret |= (int32_t)(((const char *)buff)[2] & 0xff) << 16;
    ret |= (int32_t)(((const char *)buff)[3] & 0xff) << 24;
    return ret;
}
static inline int16_t cli_readint16(const void *buff)
{
    int16_t ret;
    ret = (int16_t)((const char *)buff)[0] & 0xff;
    ret |= (int16_t)(((const char *)buff)[1] & 0xff) << 8;
    return ret;
}
static inline void cli_writeint32(void *offset, uint32_t value)
{
    ((char *)offset)[0] = value & 0xff;
    ((char *)offset)[1] = (value & 0xff00) >> 8;
    ((char *)offset)[2] = (value & 0xff0000) >> 16;
    ((char *)offset)[3] = (value & 0xff000000) >> 24;
}
2006-04-07 23:31:41 +00:00
# endif
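Whichever branch is compiled in, the helpers give byte-order-independent
results when parsing little-endian on-disk formats; for example:
```c
static void endian_demo(void)
{
    const unsigned char hdr[4] = {0x44, 0x33, 0x22, 0x11};
    int32_t n = cli_readint32(hdr);   /* 0x11223344 on LE and BE hosts alike */

    unsigned char out[4];
    cli_writeint32(out, 0xdeadbeefU); /* stores ef be ad de (little-endian) */
    (void)n;
    (void)out;
}
```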
2020-04-18 10:46:57 -04:00
/**
 * @brief Append an alert.
 *
2021-05-27 13:15:52 -07:00
 * An FP-check will verify that the file is not allowed.
 * The allow list check does not happen before the scan because allowing files
2020-04-18 10:46:57 -04:00
 * is so infrequent that such action would be detrimental to performance.
 *
 * TODO: Replace implementation with severity scale, and severity threshold
 * wherein signatures that do not meet the threshold are documented in JSON
 * metadata but do not halt the scan.
 *
 * @param ctx The scan context.
 * @param virname The alert name.
 * @return cl_error_t CL_VIRUS if scan should be halted due to an alert, CL_CLEAN if scan should continue.
 */
2019-02-27 00:47:38 -05:00
cl_error_t cli_append_virus(cli_ctx *ctx, const char *virname);
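A schematic caller (the signature name is just an example):
```c
/* In all-match mode cli_append_virus() records the alert but returns
 * CL_CLEAN so the caller keeps scanning; otherwise CL_VIRUS halts it. */
cl_error_t ret = cli_append_virus(ctx, "Win.Test.EICAR_HDB-1");
if (CL_VIRUS == ret)
    return ret; /* the alert survived the FP check; stop this parser */
```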
2020-04-18 10:46:57 -04:00
/**
 * @brief Append a PUA (low severity) alert.
 *
 * This function will return CLEAN unless in all-match or Heuristic-precedence
 * modes. The intention is for the scan to continue in case something more
 * malicious is found.
 *
 * TODO: Replace implementation with severity scale, and severity threshold
 * wherein signatures that do not meet the threshold are documented in JSON
 * metadata but do not halt the scan.
 *
 * BUG: In normal scan mode (see above), the alert is not FP-checked!
 *
 * @param ctx The scan context.
 * @param virname The alert name.
 * @return cl_error_t CL_VIRUS if scan should be halted due to an alert, CL_CLEAN if scan should continue.
 */
2022-08-03 20:34:48 -07:00
cl_error_t cli_append_potentially_unwanted(cli_ctx *ctx, const char *virname);
2020-04-18 10:46:57 -04:00
2022-08-19 16:21:42 -07:00
/**
 * @brief If the SCAN_HEURISTIC_EXCEEDS_MAX option is enabled, append a "potentially unwanted" indicator.
 *
 * There is no return value because the caller should select the appropriate "CL_EMAX*" error code regardless
 * of whether or not an FP sig is found, or allmatch is enabled, or whatever.
 * That is, the scan must not continue because of an FP sig.
 *
 * @param ctx The scan context.
 * @param virname The name of the potentially unwanted indicator.
 */
void cli_append_potentially_unwanted_if_heur_exceedsmax(cli_ctx *ctx, char *virname);
2012-10-18 14:12:58 -07:00
const char *cli_get_last_virus(const cli_ctx *ctx);
2012-10-25 12:36:05 -07:00
const char *cli_get_last_virus_str(const cli_ctx *ctx);
2022-08-03 20:34:48 -07:00
void cli_virus_found_cb(cli_ctx *ctx, const char *virname);
2012-10-18 14:12:58 -07:00
libclamav: Fix scan recursion tracking
Scan recursion is the process of identifying files embedded in other
files and then scanning them, recursively.
Internally this process is more complex than it may sound because a file
may have multiple layers of types before finding a new "file".
At present we treat the recursion count in the scanning context as an
index into both our fmap list AND our container list. These two lists
are conceptually a part of the same thing and should be unified.
But what's concerning is that the "recursion level" isn't actually
incremented or decremented at the same time that we add a layer to the
fmap or container lists but instead is more touchy-feely, increasing
when we find a new "file".
To account for this shadiness, the size of the fmap and container lists
has always been a little longer than our "max scan recursion" limit so
we don't accidentally overflow the fmap or container arrays (!).
I've implemented a single recursion-stack as an array, similar to before,
which includes a pointer to each fmap at each layer, along with the size
and type. Push and pop functions add and remove layers whenever a new
fmap is added. A boolean argument when pushing indicates if the new layer
represents a new buffer or new file (descriptor). A new buffer will reset
the "nested fmap level" (described below).
This commit also provides a solution for an issue where we detect
embedded files more than once during scan recursion.
For illustration, imagine a tarball named foo.tar.gz with this structure:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| baz.exe | PE | 0 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| │ └── hello.txt | ASCII | 2 | 0 |
| └── sfx.7z | 7Z | 1 | 1 |
| └── world.txt | ASCII | 2 | 0 |
(A) If we scan for embedded files at any layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| ├── foo.tar | TAR | 1 | 0 |
| │ ├── bar.zip | ZIP | 2 | 1 |
| │ │ └── hola.txt | ASCII | 3 | 0 |
| │ ├── baz.exe | PE | 2 | 1 |
| │ │ ├── sfx.zip | ZIP | 3 | 1 |
| │ │ │ └── hello.txt | ASCII | 4 | 0 |
| │ │ └── sfx.7z | 7Z | 3 | 1 |
| │ │ └── world.txt | ASCII | 4 | 0 |
| │ ├── sfx.zip | ZIP | 2 | 1 |
| │ │ └── hello.txt | ASCII | 3 | 0 |
| │ └── sfx.7z | 7Z | 2 | 1 |
| │ └── world.txt | ASCII | 3 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| └── sfx.7z | 7Z | 1 | 1 |
(A) is bad because it scans content more than once.
Note that for the GZ layer, it may detect the ZIP and 7Z if the
signature hits on the compressed data, which it might, though
extracting the ZIP and 7Z will likely fail.
The reason the above doesn't happen now is that we restrict embedded
type scans for a bunch of archive formats to include GZ and TAR.
(B) If we scan for embedded files at the foo.tar layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| ├── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 2 | 1 |
| │ └── hello.txt | ASCII | 3 | 0 |
| └── sfx.7z | 7Z | 2 | 1 |
| └── world.txt | ASCII | 3 | 0 |
(B) is almost right. But we can achieve it easily enough only scanning for
embedded content in the current fmap when the "nested fmap level" is 0.
The upside is that it should safely detect all embedded content, even if
it may think the sfz.zip and sfx.7z are in foo.tar instead of in baz.exe.
The biggest risk I can think of affects ZIPs. SFXZIP detection
is identical to ZIP detection, which is why we don't allow SFXZIP to be
detected if insize of a ZIP. If we only allow embedded type scanning at
fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP
if the bar.exe was not compressed in foo.zip and if non-compressed files
extracted from ZIPs aren't extracted as new buffers:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.zip | ZIP | 0 | 0 |
| └── bar.exe | PE | 1 | 1 |
| └── sfx.zip | ZIP | 2 | 2 |
Provided that we ensure all files extracted from zips are scanned in
new buffers, option (B) should be safe.
(C) If we scan for embedded files at the baz.exe layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 3 | 1 |
| │ └── hello.txt | ASCII | 4 | 0 |
| └── sfx.7z | 7Z | 3 | 1 |
| └── world.txt | ASCII | 4 | 0 |
(C) is right. But it's harder to achieve. For this example we can get it by
allowing 7ZSFX and ZIPSFX detection only when scanning an executable.
But that may mean losing detection of archives embedded elsewhere.
And we'd have to identify allowable container types for each possible
embedded type, which would be very difficult.
So this commit aims to solve the issue the (B)-way.
Note that in all situations, we still have to scan with file typing
enabled to determine if we need to reassign the current file type, such
as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-
compressed. Detection of DMG and a handful of other types rely on
finding data partway through or near the end of a file before
reassigning the entire file as the new type.
Other fixes and considerations in this commit:
- The utf16 HTML parser has weak error handling, particularly with respect
to creating a nested fmap for scanning the ASCII-decoded file.
This commit cleans up the error handling and wraps the nested scan with
the recursion-stack push()/pop() for correct recursion tracking.
Before this commit, each container layer had a flag to indicate if the
container layer is valid.
We need something similar so that the cli_recursion_stack_get_*()
functions ignore normalized layers. Details...
Imagine an LDB signature for HTML content that specifies a ZIP
container. If the signature actually alerts on the normalized HTML and
you don't ignore normalized layers for the container check, it will
appear as though the alert is in an HTML container rather than a ZIP
container.
This commit accomplishes this with a boolean you set in the scan context
before scanning a new layer. Then when the new fmap is created, it will
use that flag to set a similar flag for the layer. The context flag is then
reset so that subsequent layers don't inherit it.
The flag allows the new recursion_stack_get() function to ignore
normalized layers when iterating the stack to return a layer at a
requested index, negative or positive.
Scanning extracted/normalized javascript and VBA should also
use the 'layer is normalized' flag.
- This commit also fixes Heuristic.Broken.Executable alert for ELF files
to make sure that:
A) these only alert if cli_append_virus() returns CL_VIRUS (aka it
respects the FP check).
B) all broken-executable alerts for ELF only happen if the
SCAN_HEURISTIC_BROKEN option is enabled.
- This commit also cleans up the error handling in cli_magic_scan_dir().
This was needed so we could correctly apply the layer-is-normalized-flag
to all VBA macros extracted to a directory when scanning the directory.
- Also fix an issue where exceeding scan maximums wouldn't cause embedded
file detection scans to abort. Granted we don't actually want to abort
if max filesize or max recursion depth are exceeded... only if max
scansize, max files, and max scantime are exceeded.
Add 'abort_scan' flag to scan context, to protect against depending on
correct error propagation for fatal conditions. Instead, setting this
flag in the scan context should guarantee that a fatal condition deep in
scan recursion isn't lost, which would result in more content being scanned
instead of aborting. This shouldn't be necessary, but some status codes
like CL_ETIMEOUT never used to be fatal and it's easier to do this than
to verify every parser only returns CL_ETIMEOUT and other "fatal
status codes" in fatal conditions.
- Remove duplicate is_tar() prototype from filetypes.c and include
is_tar.h instead.
- Presently we create the fmap hash when creating the fmap.
This wastes a bit of CPU if the hash is never needed.
Now that we're creating fmaps for all embedded files discovered with
file type recognition scans, this is a much more frequent occurrence and
really slows things down.
This commit fixes the issue by only creating fmap hashes as needed.
This should not only resolve the performance impact of creating fmaps
for all embedded files, but should also improve performance in general.
- Add allmatch check to the zip parser after the central-header meta
  match. That way we don't get multiple alerts for the same match except in
  allmatch mode. Clean up error handling in the zip parser a tiny bit.
- Fixes to ensure that the scan limits such as scansize, filesize,
recursion depth, # of embedded files, and scantime are always reported
if AlertExceedsMax (--alert-exceeds-max) is enabled.
- Fixed an issue where non-fatal alerts for exceeding scan maximums may
mask signature matches later on. I changed it so these alerts use the
"possibly unwanted" alert-type and thus only alert if no other alerts
were found or if all-match or heuristic-precedence are enabled.
- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata
when the --gen-json feature is enabled. These will show up once under
"ParseErrors" the first time a limit is exceeded. In the present
implementation, only one limits-exceeded events will be added, so as to
prevent a malicious or malformed sample from filling the JSON buffer
with millions of events and using a tonne of RAM.
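/*
 * For illustration only: one layer of the recursion stack conceptually holds
 * something like the following. The field names in this sketch are
 * assumptions for this comment, not the actual definition.
 *
 *     cl_fmap_t *fmap;            // the fmap for this layer
 *     cli_file_t type;            // the layer's type; may be reassigned later
 *     size_t size;                // the size of this layer's data
 *     uint32_t nested_fmap_level; // reset to 0 when the layer is a new buffer/file
 *     bool is_normalized;         // ignore this layer for container checks
 */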
2021-09-11 14:15:21 -07:00
/**
 * @brief Push a new fmap onto our scan recursion stack.
 *
 * May fail if we exceed max recursion depth.
 *
 * @param ctx           The scanning context.
 * @param map           The fmap for the new layer.
 * @param type          The file type. May be CL_TYPE_ANY if unknown. Can change it later with cli_recursion_stack_change_type().
 * @param is_new_buffer true if the fmap represents a new buffer/file, and not some window into an existing fmap.
 * @param attributes    Layer attributes for the thing to be scanned.
 * @return cl_error_t CL_SUCCESS if successful, else CL_EMAXREC if exceeding the max recursion depth.
 */
2022-03-09 22:26:40 -08:00
cl_error_t cli_recursion_stack_push(cli_ctx *ctx, cl_fmap_t *map, cli_file_t type, bool is_new_buffer, uint32_t attributes);
2021-09-11 14:15:21 -07:00
/**
 * @brief Pop off a layer of our scan recursion stack.
 *
 * Returns the fmap for the popped layer. Does NOT fmap_free() the fmap for you.
 *
 * @param ctx The scanning context.
 * @return cl_fmap_t * A pointer to the fmap for the popped layer, or NULL if the stack is empty.
 */
cl_fmap_t *cli_recursion_stack_pop(cli_ctx *ctx);
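/*
 * Example: a minimal sketch of how a parser might pair push() and pop()
 * around scanning an extracted file. The cli_magic_scan() call and the
 * LAYER_ATTRIBUTES_NONE attribute value are assumptions here; the point is
 * the pairing, and that pop() does not free the fmap.
 *
 *     cl_error_t scan_extracted(cli_ctx *ctx, cl_fmap_t *new_map)
 *     {
 *         // A genuinely new buffer, so is_new_buffer == true, which resets
 *         // the "nested fmap level" for this layer.
 *         cl_error_t ret = cli_recursion_stack_push(ctx, new_map, CL_TYPE_ANY,
 *                                                   true, LAYER_ATTRIBUTES_NONE);
 *         if (CL_SUCCESS != ret)
 *             return ret; // e.g. CL_EMAXREC: max recursion depth exceeded
 *
 *         ret = cli_magic_scan(ctx, CL_TYPE_ANY);
 *
 *         (void)cli_recursion_stack_pop(ctx); // caller still owns new_map
 *         return ret;
 *     }
 */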
/**
 * @brief Re-assign the type for the current layer.
 *
 * @param ctx  The scanning context.
 * @param type The new file type.
 */
void cli_recursion_stack_change_type(cli_ctx *ctx, cli_file_t type);
/**
 * @brief Get the type of a specific layer.
 *
 * Ignores normalized layers internally.
 *
 * For index:
 *   0 == the outermost (bottom) layer of the stack.
 *   1 == the first layer (probably never explicitly used).
 *  -1 == the present innermost (top) layer of the stack.
 *  -2 == the parent layer (or "container"). That is, the second from the top of the stack.
 *
 * @param ctx   The scanning context.
 * @param index Desired index, will be converted internally as though the normalized layers were stripped out. Don't think too hard about it. Or do. ¯\_(ツ)_/¯
 * @return cli_file_t The type of the requested layer,
 *                    or CL_TYPE_ANY if a negative layer is requested,
 *                    or CL_TYPE_IGNORED if the requested layer is too high.
 */
cli_file_t cli_recursion_stack_get_type(cli_ctx *ctx, int index);
/**
 * @brief Get the size of a specific layer.
 *
 * Ignores normalized layers internally.
 *
 * For index:
 *   0 == the outermost (bottom) layer of the stack.
 *   1 == the first layer (probably never explicitly used).
 *  -1 == the present innermost (top) layer of the stack.
 *  -2 == the parent layer (or "container"). That is, the second from the top of the stack.
 *
 * @param ctx   The scanning context.
 * @param index Desired index, will be converted internally as though the normalized layers were stripped out. Don't think too hard about it. Or do. ¯\_(ツ)_/¯
 * @return size_t The size of the requested layer,
 *                or the size of the whole file if a negative layer is requested,
 *                or 0 if the requested layer is too high.
 */
size_t cli_recursion_stack_get_size(cli_ctx *ctx, int index);
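/*
 * Example: negative indices make container checks straightforward. A sketch,
 * assuming a populated recursion stack:
 *
 *     cli_file_t self      = cli_recursion_stack_get_type(ctx, -1); // current layer
 *     cli_file_t container = cli_recursion_stack_get_type(ctx, -2); // parent layer
 *
 *     if (CL_TYPE_ZIP == container) {
 *         // e.g. honor a signature that requires a ZIP container; normalized
 *         // layers were skipped automatically while resolving the index.
 *     }
 */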
2017-01-19 12:24:46 -05:00
2006-04-07 23:31:41 +00:00
/* used by: spin, yc (C) aCaB */
2018-12-03 12:40:13 -05:00
# define __SHIFTBITS(a) (sizeof(a) << 3)
# define __SHIFTMASK(a) (__SHIFTBITS(a) - 1)
2025-06-03 19:03:20 -04:00
# define CLI_ROL(a, b) a = (a << ((b) & __SHIFTMASK(a))) | (a >> ((__SHIFTBITS(a) - (b)) & __SHIFTMASK(a)))
# define CLI_ROR(a, b) a = (a >> ((b) & __SHIFTMASK(a))) | (a << ((__SHIFTBITS(a) - (b)) & __SHIFTMASK(a)))
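/*
 * Example: CLI_ROL/CLI_ROR modify their first argument in place, rotating by
 * the shift amount modulo the operand's bit width:
 *
 *     uint32_t x = 0x80000001;
 *     CLI_ROL(x, 4); // x == 0x00000018
 *     CLI_ROR(x, 4); // x == 0x80000001 again
 */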
2006-04-07 23:31:41 +00:00
2007-07-08 16:43:56 +00:00
/* Implementation independent sign-extended signed right shift */
# ifdef HAVE_SAR
2018-12-03 12:40:13 -05:00
# define CLI_SRS(n, s) ((n) >> (s))
2007-07-08 16:43:56 +00:00
# else
2018-12-03 12:40:13 -05:00
# define CLI_SRS(n, s) ((((n) >> (s)) ^ (1 << (sizeof(n) * 8 - 1 - s))) - (1 << (sizeof(n) * 8 - 1 - s)))
2007-07-08 16:43:56 +00:00
# endif
2018-12-03 12:40:13 -05:00
# define CLI_SAR(n, s) n = CLI_SRS(n, s)
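/*
 * Example: CLI_SAR guarantees an arithmetic (sign-extending) right shift even
 * where `>>` on a negative signed value is implementation-defined:
 *
 *     int32_t n = -8;
 *     CLI_SAR(n, 1); // n == -4 on every compiler
 */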
2007-07-08 16:43:56 +00:00
2007-02-10 13:44:16 +00:00
# ifdef __GNUC__
void cli_warnmsg(const char *str, ...) __attribute__((format(printf, 1, 2)));
# else
2003-07-29 15:48:06 +00:00
void cli_warnmsg(const char *str, ...);
2007-02-10 13:44:16 +00:00
# endif
# ifdef __GNUC__
void cli_errmsg(const char *str, ...) __attribute__((format(printf, 1, 2)));
# else
2003-07-29 15:48:06 +00:00
void cli_errmsg(const char *str, ...);
2007-02-10 13:44:16 +00:00
# endif
2010-10-18 10:32:18 +03:00
# ifdef __GNUC__
2018-12-03 12:40:13 -05:00
void cli_infomsg(const cli_ctx *ctx, const char *fmt, ...) __attribute__((format(printf, 2, 3)));
2010-10-18 10:32:18 +03:00
# else
2018-12-03 12:40:13 -05:00
void cli_infomsg(const cli_ctx *ctx, const char *fmt, ...);
2010-10-18 10:32:18 +03:00
# endif
2021-08-23 13:38:26 -04:00
# ifdef __GNUC__
void cli_infomsg_simple(const char *fmt, ...) __attribute__((format(printf, 1, 2)));
# else
void cli_infomsg_simple(const char *fmt, ...);
# endif
2018-12-03 12:40:13 -05:00
void cli_logg_setup(const cli_ctx *ctx);
2010-11-02 22:17:27 +02:00
void cli_logg_unsetup(void);
2010-10-18 10:32:18 +03:00
2008-02-15 20:45:51 +00:00
/* tell compiler about branches that are very rarely taken,
 * such as debug paths, and error paths */
# if (__GNUC__ >= 4) || (__GNUC__ == 3 && __GNUC_MINOR__ >= 2)
# define UNLIKELY(cond) __builtin_expect(!!(cond), 0)
2009-12-08 23:02:49 +02:00
# define LIKELY(cond) __builtin_expect(!!(cond), 1)
2008-02-15 20:45:51 +00:00
# else
# define UNLIKELY(cond) (cond)
2009-12-08 23:02:49 +02:00
# define LIKELY(cond) (cond)
2008-02-15 20:45:51 +00:00
# endif
2009-07-13 19:34:03 +03:00
# ifdef __GNUC__
# define always_inline inline __attribute__((always_inline))
2010-03-21 19:47:25 +02:00
# define never_inline __attribute__((noinline))
2009-07-13 19:34:03 +03:00
# else
2010-03-22 17:31:38 +02:00
# define never_inline
2009-07-13 19:34:03 +03:00
# define always_inline inline
# endif
2018-12-03 12:40:13 -05:00
# if defined(__GNUC__) && ((__GNUC__ > 4) || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))
2010-02-09 12:28:23 +02:00
# define __hot__ __attribute__((hot))
2010-02-09 12:01:31 +02:00
# else
2010-02-09 12:28:23 +02:00
# define __hot__
2010-02-09 12:01:31 +02:00
# endif
2007-02-10 13:44:16 +00:00
# ifdef __GNUC__
2021-08-23 13:38:26 -04:00
inline void cli_dbgmsg(const char *str, ...) __attribute__((format(printf, 1, 2)));
2007-02-10 13:44:16 +00:00
# else
2021-08-23 13:38:26 -04:00
inline void cli_dbgmsg(const char *str, ...);
2007-02-10 13:44:16 +00:00
# endif
2021-12-20 09:39:40 -08:00
# ifdef __GNUC__
void cli_dbgmsg_no_inline(const char *str, ...) __attribute__((format(printf, 1, 2)));
# else
void cli_dbgmsg_no_inline(const char *str, ...);
# endif
2022-04-30 19:11:03 -07:00
# ifdef __GNUC__
size_t cli_eprintf(const char *str, ...) __attribute__((format(printf, 1, 2)));
# else
size_t cli_eprintf(const char *str, ...);
# endif
2009-07-17 02:52:39 +02:00
# ifdef HAVE_CLI_GETPAGESIZE
# undef HAVE_CLI_GETPAGESIZE
# endif
2009-10-10 20:46:05 +02:00
# ifdef _WIN32
2018-12-03 12:40:13 -05:00
static inline int cli_getpagesize(void)
{
2009-10-10 19:10:15 +02:00
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    return si.dwPageSize;
}
2021-03-31 12:20:42 -07:00
# define HAVE_CLI_GETPAGESIZE 1
2009-10-10 20:46:05 +02:00
# else /* ! _WIN32 */
2009-07-16 13:22:28 +02:00
# if HAVE_SYSCONF_SC_PAGESIZE
2018-12-03 12:40:13 -05:00
static inline int cli_getpagesize(void)
{
    return sysconf(_SC_PAGESIZE);
}
2009-07-16 13:22:28 +02:00
# define HAVE_CLI_GETPAGESIZE 1
# else
# if HAVE_GETPAGESIZE
2018-12-03 12:40:13 -05:00
static inline int cli_getpagesize(void)
{
    return getpagesize();
}
2009-07-16 13:22:28 +02:00
# define HAVE_CLI_GETPAGESIZE 1
2009-10-10 19:10:15 +02:00
# endif /* HAVE_GETPAGESIZE */
# endif /* HAVE_SYSCONF_SC_PAGESIZE */
2009-10-10 20:46:05 +02:00
# endif /* _WIN32 */
2009-07-16 13:22:28 +02:00
2022-05-09 14:28:34 -07:00
/**
 * @brief Wrapper around malloc that limits how much may be allocated to CLI_MAX_ALLOCATION.
 *
 * Please use CLI_MAX_MALLOC_OR_GOTO_DONE() with `goto done;` error handling instead.
 *
 * @param nmemb The number of bytes to allocate.
 * @return void *
 */
void *cli_max_malloc(size_t nmemb);
/**
 * @brief Wrapper around calloc that limits how much may be allocated to CLI_MAX_ALLOCATION.
 *
 * Please use CLI_MAX_CALLOC_OR_GOTO_DONE() with `goto done;` error handling instead.
 *
 * @param nmemb The number of elements to allocate.
 * @param size  The size of each element.
 * @return void *
 */
void *cli_max_calloc(size_t nmemb, size_t size);
2022-01-25 16:12:41 -08:00
/**
 * @brief Wrapper around realloc that limits how much may be allocated to CLI_MAX_ALLOCATION.
 *
 * Please use CLI_MAX_REALLOC_OR_GOTO_DONE() with `goto done;` error handling instead.
 *
 * NOTE: cli_max_realloc() will NOT free ptr if size == 0. It is safe to free ptr after `done:`.
 *
 * IMPORTANT: This differs from realloc() in that if size == 0, it will NOT free the ptr.
 *
 * @param ptr
 * @param size
 * @return void *
 */
2022-05-09 14:28:34 -07:00
void *cli_max_realloc(void *ptr, size_t size);
2022-01-25 16:12:41 -08:00
/**
 * @brief Wrapper around realloc that limits how much may be allocated to CLI_MAX_ALLOCATION.
 *
 * Please use CLI_MAX_REALLOC_OR_GOTO_DONE() with `goto done;` error handling instead.
 *
 * IMPORTANT: This differs from realloc() in that if size == 0, it will NOT free the ptr.
 *
 * WARNING: This differs from cli_max_realloc() in that it will free the ptr if the allocation fails.
 * If you're using `goto done;` error handling, this may result in a double-free!!
 *
 * @param ptr
 * @param size
 * @return void *
 */
2024-01-09 17:44:33 -05:00
void *cli_max_realloc_or_free(void *ptr, size_t size);
2022-05-09 14:28:34 -07:00
/**
 * @brief Wrapper around realloc that, unlike some variants of realloc, will not free the ptr if size == 0.
 *
 * Please use CLI_SAFER_REALLOC_OR_GOTO_DONE() with `goto done;` error handling instead.
 *
 * IMPORTANT: This differs from realloc() in that if size == 0, it will NOT free the ptr.
 *
 * @param ptr
 * @param size
 * @return void *
 */
void *cli_safer_realloc(void *ptr, size_t size);
/**
 * @brief Wrapper around realloc that, unlike some variants of realloc, will not free the ptr if size == 0.
 *
 * Please use CLI_SAFER_REALLOC_OR_GOTO_DONE() with `goto done;` error handling instead.
 *
 * IMPORTANT: This differs from realloc() in that if size == 0, it will NOT free the ptr.
 *
 * WARNING: This differs from cli_safer_realloc() in that it will free the ptr if the allocation fails.
 * If you're using `goto done;` error handling, this may result in a double-free!!
 *
 * @param ptr
 * @param size
 * @return void *
 */
2024-01-09 17:44:33 -05:00
void *cli_safer_realloc_or_free(void *ptr, size_t size);
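/*
 * Example: why the *_or_free variants are dangerous with `goto done;` error
 * handling. A sketch with illustrative names:
 *
 *     cl_error_t grow(uint8_t **buf, size_t new_size)
 *     {
 *         cl_error_t status = CL_EMEM;
 *
 *         uint8_t *tmp = cli_max_realloc(*buf, new_size);
 *         if (NULL == tmp)
 *             goto done; // *buf is untouched and will be freed exactly once
 *         *buf = tmp;
 *
 *         status = CL_SUCCESS;
 *     done:
 *         return status;
 *     }
 *
 * Had this used cli_max_realloc_or_free(), a failed allocation would free
 * *buf while the caller still holds the stale pointer: a double-free waiting
 * to happen in the caller's own `done:` cleanup.
 */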
2022-01-25 16:12:41 -08:00
2024-01-08 22:51:17 -05:00
/**
 * @brief Wrapper around strdup that does a NULL check.
 *
 * Please use CLI_STRDUP_OR_GOTO_DONE() with `goto done;` error handling instead.
 *
 * @param s
 * @return char * Returns the allocated string or NULL if allocation failed. This includes if allocation fails because s == NULL.
 */
2024-01-09 17:44:33 -05:00
char *cli_safer_strdup(const char *s);
2024-01-08 22:51:17 -05:00
2003-07-29 15:48:06 +00:00
int cli_rmdirs(const char *dirname);
2025-06-03 19:03:20 -04:00
/**
 * @brief Calculate a hash of a stream.
 *
 * @param fs         The file stream to read from.
 * @param [out] hash (Optional) The buffer to store the calculated raw binary hash.
 * @param type       The type of hash to calculate.
 * @return char * Returns the allocated hash string or NULL if allocation failed.
 */
char *cli_hashstream(FILE *fs, uint8_t *hash, cli_hash_type_t type);

/**
 * @brief Calculate a hash of a file.
 *
 * @param filename   The file to read from.
 * @param [out] hash (Optional) The buffer to store the calculated raw binary hash.
 * @param type       The type of hash to calculate.
 * @return char * Returns the allocated hash string or NULL if allocation failed.
 */
char *cli_hashfile(const char *filename, uint8_t *hash, cli_hash_type_t type);
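/*
 * Example: hashing a file. A sketch that assumes CLI_HASH_SHA256 is a valid
 * cli_hash_type_t value and that the raw digest fits in 32 bytes:
 *
 *     uint8_t raw[32];
 *     char *hex = cli_hashfile("/tmp/sample.bin", raw, CLI_HASH_SHA256);
 *     if (NULL != hex) {
 *         cli_dbgmsg("sha256: %s\n", hex);
 *         free(hex); // caller owns the hex string
 *     }
 */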
2022-08-18 14:55:16 -07:00
/**
 * @brief unlink() with error checking
 *
 * @param pathname the file path to unlink
 * @return cl_error_t CL_SUCCESS if successful, CL_EUNLINK if unlink() failed
 */
cl_error_t cli_unlink(const char *pathname);
2019-05-04 15:54:54 -04:00
size_t cli_readn(int fd, void *buff, size_t count);
size_t cli_writen(int fd, const void *buff, size_t count);
2009-09-24 16:21:51 +02:00
const char *cli_gettmpdir(void);
2018-07-30 20:19:28 -04:00
2019-03-02 13:05:17 -05:00
/**
 * @brief Sanitize a relative path, so it cannot have a negative depth.
 *
 * Caller is responsible for freeing the sanitized filepath.
 * The optional sanitized_filebase output param is a pointer into the filepath,
 * if set, and does not need to be freed.
 *
 * @param filepath     The filepath to sanitize
 * @param filepath_len The length of the filepath
 * @param [out] sanitized_filebase Pointer to the basename portion of the sanitized filepath. (optional)
 * @return char *
 */
2020-08-07 23:48:20 -07:00
char *cli_sanitize_filepath(const char *filepath, size_t filepath_len, char **sanitized_filebase);
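/*
 * Example: sanitizing an archive-supplied relative path so extraction cannot
 * escape the temp directory. Illustrative values:
 *
 *     const char *untrusted = "../../payload.exe";
 *     char *base = NULL;
 *     char *safe = cli_sanitize_filepath(untrusted, strlen(untrusted), &base);
 *     if (NULL != safe) {
 *         // `safe` cannot climb above the extraction root; `base` points
 *         // into `safe` at the filename and must not be freed separately.
 *         free(safe);
 *     }
 */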
2019-03-02 13:05:17 -05:00
2018-07-30 20:19:28 -04:00
/**
 * @brief Generate tempfile filename (no path) with a random MD5 hash.
 *
 * Caller is responsible for freeing the filename.
 *
 * @param prefix (Optional) Prefix for the new tempfile name.
 * @return char * filename or NULL.
 */
char *cli_genfname(const char *prefix);
2019-05-23 22:50:04 -04:00
/**
 * @brief Generate a full tempfile filepath with a provided name.
 *
 * Caller is responsible for freeing the filename.
 * If the dir is not provided, the engine->tmpdir will be used.
 *
 * @param dir   Alternative directory. (optional)
 * @param fname Filename for the new file.
 * @return char * filename or NULL.
 */
char *cli_newfilepath(const char *dir, const char *fname);
/**
 * @brief Generate a full tempfile filepath with a provided name.
 *
 * Caller is responsible for freeing the filename.
 * If the dir is not provided, the engine->tmpdir will be used.
 *
 * @param dir        Alternative temp directory (optional).
 * @param fname      Filename for the new file.
 * @param [out] name Allocated filepath, must be freed by caller.
 * @param [out] fd   File descriptor of open temp file.
 */
cl_error_t cli_newfilepathfd(const char *dir, char *fname, char **name, int *fd);
2018-07-30 20:19:28 -04:00
/**
 * @brief Generate a full tempfile filepath with a random MD5 hash and prefix the name, if provided.
 *
 * Caller is responsible for freeing the filename.
 *
 * @param dir    Alternative temp directory. (optional)
 * @param prefix (Optional) Prefix for the new tempfile.
 * @return char * filename or NULL.
 */
2018-12-03 12:40:13 -05:00
char *cli_gentemp_with_prefix(const char *dir, const char *prefix);
2018-07-30 20:19:28 -04:00
/**
 * @brief Generate a full tempfile filepath with a random MD5 hash.
 *
 * Caller is responsible for freeing the filename.
 *
 * @param dir Alternative temp directory. (optional)
 * @return char * filename or NULL.
 */
2004-07-19 17:54:40 +00:00
char *cli_gentemp(const char *dir);
2018-07-30 20:19:28 -04:00
/**
 * @brief Create a temp filename, create the file, open it, and pass back the filepath and open file descriptor.
 *
 * @param dir        Alternative temp directory (optional).
 * @param [out] name Allocated filepath, must be freed by caller.
 * @param [out] fd   File descriptor of open temp file.
 * @return cl_error_t CL_SUCCESS, CL_ECREAT, or CL_EMEM.
 */
cl_error_t cli_gentempfd(const char *dir, char **name, int *fd);
/**
 * @brief Create a temp filename, create the file, open it, and pass back the filepath and open file descriptor.
 *
 * @param dir        Alternative temp directory (optional).
 * @param prefix     (Optional) Prefix for new file tempfile.
 * @param [out] name Allocated filepath, must be freed by caller.
 * @param [out] fd   File descriptor of open temp file.
 * @return cl_error_t CL_SUCCESS, CL_ECREAT, or CL_EMEM.
 */
2021-06-30 17:30:51 -07:00
cl_error_t cli_gentempfd_with_prefix(const char *dir, const char *prefix, char **name, int *fd);
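/*
 * Example: creating, writing, and cleaning up a temp file. A sketch with
 * hypothetical `data`/`data_len`; error handling trimmed for brevity:
 *
 *     char *name = NULL;
 *     int fd = -1;
 *     if (CL_SUCCESS == cli_gentempfd_with_prefix(NULL, "unpacked", &name, &fd)) {
 *         cli_writen(fd, data, data_len);
 *         close(fd);
 *         if (CL_SUCCESS != cli_unlink(name))
 *             cli_warnmsg("failed to remove temp file %s\n", name);
 *         free(name);
 *     }
 */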
2018-07-30 20:19:28 -04:00
2004-07-19 17:54:40 +00:00
unsigned int cli_rndnum(unsigned int max);
2004-12-20 02:04:58 +00:00
int cli_filecopy(const char *src, const char *dest);
2005-12-10 18:50:26 +00:00
bitset_t *cli_bitset_init(void);
2005-10-18 10:32:06 +00:00
void cli_bitset_free(bitset_t *bs);
int cli_bitset_set(bitset_t *bs, unsigned long bit_offset);
int cli_bitset_test(bitset_t *bs, unsigned long bit_offset);
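/*
 * Example: typical bitset usage, e.g. remembering which offsets were already
 * visited so a parser doesn't process them twice. A sketch:
 *
 *     bitset_t *seen = cli_bitset_init();
 *     if (NULL != seen) {
 *         cli_bitset_set(seen, 1337);
 *         if (cli_bitset_test(seen, 1337)) {
 *             // already visited
 *         }
 *         cli_bitset_free(seen);
 *     }
 */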
2018-12-03 12:40:13 -05:00
const char *cli_ctime(const time_t *timep, char *buf, const size_t bufsize);
2022-08-19 16:21:42 -07:00
Windows: Fix C/Rust FFI compat issue + Windows compile warnings
Primarily this commit fixes an issue with the size of the parameters
passed to cli_checklimits(). The parameters were "unsigned long", which
varies in size depending on platform.
I've switched them to uint64_t / u64.
While working on this, I observed some concerning warnings on Windows,
and some less serious ones, primarily regarding inconsistencies with
`const` parameters.
Finally, in `scanmem.c`, there is a warning regarding use of `wchar_t *`
with `GetModuleFileNameEx()` instead of `GetModuleFileNameExW()`.
This made me realize this code assumes we're not defining `UNICODE`,
which would have such macros use the 'A' variant.
I have fixed it the best I can, although I'm still a little
uncomfortable with some of this code that uses `char` or `wchar_t`
instead of TCHAR.
I also remove the `if (GetModuleFileNameEx) {` conditional, because this
macro/function will always be defined. The original code was checking a
function pointer, and so this was a bug when integrating into ClamAV.
Regarding the changes to `rijndael.c`, I found that this module assumes
`unsigned long` == 32bits. It does not.
I have corrected it to use `uint32_t`.
2024-03-20 12:21:40 -04:00
cl_error_t cli_checklimits(const char *who, cli_ctx *ctx, uint64_t need1, uint64_t need2, uint64_t need3);
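/*
 * Example: because the limits parameters are fixed-width, callers should pass
 * uint64_t values explicitly rather than `unsigned long`. A sketch with an
 * assumed fmap length:
 *
 *     uint64_t need = (uint64_t)map->len;
 *     if (CL_SUCCESS != cli_checklimits("cli_scanzip", ctx, need, 0, 0)) {
 *         // a limit was exceeded; skip or abort depending on which one
 *     }
 */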
libclamav: Fix scan recursion tracking
Scan recursion is the process of identifying files embedded in other
files and then scanning them, recursively.
Internally this process is more complex than it may sound because a file
may have multiple layers of types before finding a new "file".
At present we treat the recursion count in the scanning context as an
index into both our fmap list AND our container list. These two lists
are conceptually a part of the same thing and should be unified.
But what's concerning is that the "recursion level" isn't actually
incremented or decremented at the same time that we add a layer to the
fmap or container lists but instead is more touchy-feely, increasing
when we find a new "file".
To account for this shadiness, the size of the fmap and container lists
has always been a little longer than our "max scan recursion" limit so
we don't accidentally overflow the fmap or container arrays (!).
I've implemented a single recursion-stack as an array, similar to before,
which includes a pointer to each fmap at each layer, along with the size
and type. Push and pop functions add and remove layers whenever a new
fmap is added. A boolean argument when pushing indicates if the new layer
represents a new buffer or new file (descriptor). A new buffer will reset
the "nested fmap level" (described below).
This commit also provides a solution for an issue where we detect
embedded files more than once during scan recursion.
For illustration, imagine a tarball named foo.tar.gz with this structure:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| baz.exe | PE | 0 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| │ └── hello.txt | ASCII | 2 | 0 |
| └── sfx.7z | 7Z | 1 | 1 |
| └── world.txt | ASCII | 2 | 0 |
(A) If we scan for embedded files at any layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| ├── foo.tar | TAR | 1 | 0 |
| │ ├── bar.zip | ZIP | 2 | 1 |
| │ │ └── hola.txt | ASCII | 3 | 0 |
| │ ├── baz.exe | PE | 2 | 1 |
| │ │ ├── sfx.zip | ZIP | 3 | 1 |
| │ │ │ └── hello.txt | ASCII | 4 | 0 |
| │ │ └── sfx.7z | 7Z | 3 | 1 |
| │ │ └── world.txt | ASCII | 4 | 0 |
| │ ├── sfx.zip | ZIP | 2 | 1 |
| │ │ └── hello.txt | ASCII | 3 | 0 |
| │ └── sfx.7z | 7Z | 2 | 1 |
| │ └── world.txt | ASCII | 3 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| └── sfx.7z | 7Z | 1 | 1 |
(A) is bad because it scans content more than once.
Note that for the GZ layer, it may detect the ZIP and 7Z if the
signature hits on the compressed data, which it might, though
extracting the ZIP and 7Z will likely fail.
The reason the above doesn't happen now is that we restrict embedded
type scans for a bunch of archive formats to include GZ and TAR.
(B) If we scan for embedded files at the foo.tar layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| ├── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 2 | 1 |
| │ └── hello.txt | ASCII | 3 | 0 |
| └── sfx.7z | 7Z | 2 | 1 |
| └── world.txt | ASCII | 3 | 0 |
(B) is almost right. But we can achieve it easily enough by only scanning for
embedded content in the current fmap when the "nested fmap level" is 0.
The upside is that it should safely detect all embedded content, even if
it may think the sfx.zip and sfx.7z are in foo.tar instead of in baz.exe.
The biggest risk I can think of affects ZIPs. SFXZIP detection
is identical to ZIP detection, which is why we don't allow SFXZIP to be
detected if inside of a ZIP. If we only allow embedded type scanning at
fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP
if the bar.exe was not compressed in foo.zip and if non-compressed files
extracted from ZIPs aren't extracted as new buffers:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.zip | ZIP | 0 | 0 |
| └── bar.exe | PE | 1 | 1 |
| └── sfx.zip | ZIP | 2 | 2 |
Provided that we ensure all files extracted from zips are scanned in
new buffers, option (B) should be safe.
(C) If we scan for embedded files at the baz.exe layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 3 | 1 |
| │ └── hello.txt | ASCII | 4 | 0 |
| └── sfx.7z | 7Z | 3 | 1 |
| └── world.txt | ASCII | 4 | 0 |
(C) is right. But it's harder to achieve. For this example we can get it by
restricting 7ZSFX and ZIPSFX detection only when scanning an executable.
But that may mean losing detection of archives embedded elsewhere.
And we'd have to identify allowable container types for each possible
embedded type, which would be very difficult.
So this commit aims to solve the issue the (B)-way.
Note that in all situations, we still have to scan with file typing
enabled to determine if we need to reassign the current file type, such
as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-
compressed. Detection of DMG and a handful of other types relies on
finding data partway through or near the end of a file before
reassigning the entire file as the new type.
Other fixes and considerations in this commit:
- The utf16 HTML parser has weak error handling, particularly with respect
to creating a nested fmap for scanning the ascii decoded file.
This commit cleans up the error handling and wraps the nested scan with
the recursion-stack push()/pop() for correct recursion tracking.
Before this commit, each container layer had a flag to indicate if the
container layer is valid.
We need something similar so that the cli_recursion_stack_get_*()
functions ignore normalized layers. Details...
Imagine an LDB signature for HTML content that specifies a ZIP
container. If the signature actually alerts on the normalized HTML and
you don't ignore normalized layers for the container check, it will
appear as though the alert is in an HTML container rather than a ZIP
container.
This commit accomplishes this with a boolean you set in the scan context
before scanning a new layer. Then when the new fmap is created, it will
use that flag to set a similar flag for the layer. The context flag is
then reset so that anything scanned after this doesn't get that flag.
The flag allows the new recursion_stack_get() function to ignore
normalized layers when iterating the stack to return a layer at a
requested index, negative or positive.
Scanning extracted/normalized javascript and VBA should also
use the 'layer is normalized' flag.
- This commit also fixes Heuristic.Broken.Executable alert for ELF files
to make sure that:
A) these only alert if cli_append_virus() returns CL_VIRUS (aka it
respects the FP check).
B) all broken-executable alerts for ELF only happen if the
SCAN_HEURISTIC_BROKEN option is enabled.
- This commit also cleans up the error handling in cli_magic_scan_dir().
This was needed so we could correctly apply the layer-is-normalized-flag
to all VBA macros extracted to a directory when scanning the directory.
- Also fix an issue where exceeding scan maximums wouldn't cause embedded
file detection scans to abort. Granted we don't actually want to abort
if max filesize or max recursion depth are exceeded... only if max
scansize, max files, and max scantime are exceeded.
Add 'abort_scan' flag to scan context, to protect against depending on
correct error propagation for fatal conditions. Instead, setting this
flag in the scan context should guarantee that a fatal condition deep in
scan recursion isn't lost, which would result in more stuff being scanned
instead of aborting. This shouldn't be necessary, but some status codes
like CL_ETIMEOUT never used to be fatal and it's easier to do this than
to verify every parser only returns CL_ETIMEOUT and other "fatal
status codes" in fatal conditions.
- Remove duplicate is_tar() prototype from filetypes.c and include
is_tar.h instead.
- Presently we create the fmap hash when creating the fmap.
This wastes a bit of CPU if the hash is never needed.
Now that we're creating fmap's for all embedded files discovered with
file type recognition scans, this is a much more frequent occurrence and
really slows things down.
This commit fixes the issue by only creating fmap hashes as needed.
This should not only resolve the performance impact of creating fmap's
for all embedded files, but also should improve performance in general.
- Add allmatch check to the zip parser after the central-header meta
match. That way we don't get multiple alerts for the same match except in
allmatch mode. Clean up error handling in the zip parser a tiny bit.
- Fixes to ensure that the scan limits such as scansize, filesize,
recursion depth, # of embedded files, and scantime are always reported
if AlertExceedsMax (--alert-exceeds-max) is enabled.
- Fixed an issue where non-fatal alerts for exceeding scan maximums may
mask signature matches later on. I changed it so these alerts use the
"possibly unwanted" alert-type and thus only alert if no other alerts
were found or if all-match or heuristic-precedence are enabled.
- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata
when the --gen-json feature is enabled. These will show up once under
"ParseErrors" the first time a limit is exceeded. In the present
implementation, only one limits-exceeded event will be added, so as to
prevent a malicious or malformed sample from filling the JSON buffer
with millions of events and using a tonne of RAM.
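/*
 * Illustrative sketch (hypothetical, simplified): the shape of the
 * recursion-stack bookkeeping described above. The actual
 * cli_recursion_stack_push()/pop() functions and their layer type live in the
 * scan-context code; the names and fields below are for explanation only.
 *
 * @code
 *     typedef struct example_layer {
 *         fmap_t *fmap;       // the map for this layer
 *         cli_file_t type;    // detected file type of this layer
 *         size_t size;        // size of this layer
 *         bool is_new_buffer; // a new buffer resets the "nested fmap level"
 *         bool is_normalized; // normalized layers are skipped by container lookups
 *     } example_layer_t;
 *
 *     // Pushing a layer for a new buffer resets the nested fmap level;
 *     // pushing a new layer within the same buffer increments it.
 * @endcode
 */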
2021-09-11 14:15:21 -07:00
/**
 * @brief Call before scanning a file to determine if we should scan it, skip it, or abort the entire scanning process.
 *
 * If the verdict is CL_SUCCESS, then this function increments the number of scanned files and the amount of scanned data.
 * If the verdict is that a limit has been exceeded, a corresponding error status is returned.
 *
 * @param ctx    The scanning context.
 * @param needed The size of the file we're considering scanning.
 * @return cl_error_t CL_SUCCESS if we're good to keep scanning, else an error status.
*/
cl_error_t cli_updatelimits ( cli_ctx * ctx , size_t needed ) ;
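/*
 * Example usage (an illustrative sketch): the typical call pattern before
 * scanning an extracted file, paired with the periodic time-limit check below.
 *
 * @code
 *     cl_error_t ret = cli_updatelimits(ctx, (size_t)file_size); // hypothetical file_size
 *
 *     if (CL_SUCCESS != ret) {
 *         return ret; // a limit was exceeded; skip this file or abort the scan
 *     }
 * @endcode
 */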
2008-07-18 17:57:27 +00:00
int cli_matchregex ( const char * str , const char * regex ) ;
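/*
 * Example usage (an illustrative sketch): cli_matchregex() matches `str`
 * against a POSIX regular expression, returning nonzero on a match.
 *
 * @code
 *     if (cli_matchregex(filename, "\\.exe$")) {
 *         // hypothetical: treat the name as a Windows executable
 *     }
 * @endcode
 */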
2009-11-16 19:27:35 +01:00
void cli_qsort ( void * a , size_t n , size_t es , int ( * cmp ) ( const void * , const void * ) ) ;
2018-12-03 12:40:13 -05:00
void cli_qsort_r ( void * a , size_t n , size_t es , int ( * cmp ) ( const void * , const void * , const void * ) , void * arg ) ;
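/*
 * Example usage (an illustrative sketch): cli_qsort() mirrors qsort(); the _r
 * variant additionally threads an `arg` context pointer through to the
 * comparator.
 *
 * @code
 *     static int cmp_u32(const void *a, const void *b)
 *     {
 *         uint32_t x = *(const uint32_t *)a;
 *         uint32_t y = *(const uint32_t *)b;
 *         return (x > y) - (x < y); // avoids overflow from subtraction
 *     }
 *
 *     uint32_t offsets[4] = {42, 7, 99, 13};
 *     cli_qsort(offsets, 4, sizeof(offsets[0]), cmp_u32);
 * @endcode
 */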
2019-08-16 17:18:59 -07:00
cl_error_t cli_checktimelimit ( cli_ctx * ctx ) ;
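/*
 * Example usage (an illustrative sketch): long-running parser loops can call
 * cli_checktimelimit() periodically so the scan-time limit is honored promptly.
 *
 * @code
 *     while (have_more_records) { // hypothetical loop condition
 *         if (CL_SUCCESS != cli_checktimelimit(ctx)) {
 *             return CL_ETIMEOUT; // give up as soon as time runs out
 *         }
 *         // ... parse the next record ...
 *     }
 * @endcode
 */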
2009-02-12 16:40:35 +00:00
/* symlink behaviour */
# define CLI_FTW_FOLLOW_FILE_SYMLINK 0x01
2018-12-03 12:40:13 -05:00
# define CLI_FTW_FOLLOW_DIR_SYMLINK 0x02
2009-02-12 16:40:35 +00:00
/* if the callback needs the stat */
2018-12-03 12:40:13 -05:00
# define CLI_FTW_NEED_STAT 0x04
2009-02-12 16:40:35 +00:00
2009-02-24 13:21:27 +00:00
/* remove leading/trailing slashes */
2018-12-03 12:40:13 -05:00
# define CLI_FTW_TRIM_SLASHES 0x08
2009-02-24 13:21:27 +00:00
# define CLI_FTW_STD (CLI_FTW_NEED_STAT | CLI_FTW_TRIM_SLASHES)
2009-02-12 16:40:35 +00:00
enum cli_ftw_reason {
visit_file ,
visit_directory_toplev , /* this is a directory at toplevel of recursion */
2018-12-03 12:40:13 -05:00
error_mem , /* recommended to return CL_EMEM */
2009-02-12 16:40:35 +00:00
/* recommended to return CL_SUCCESS below */
error_stat ,
warning_skipped_link ,
warning_skipped_special ,
warning_skipped_dir
} ;
/* wrap void*, so that we don't mix it with some other pointer */
struct cli_ftw_cbdata {
void * data ;
} ;
2021-06-29 17:27:07 -07:00
/**
 * @brief Callback to process each file in a file tree walk (FTW).
*
2009-02-12 16:40:35 +00:00
* The callback is responsible for freeing filename when it is done using it .
2021-06-29 17:27:07 -07:00
*
2019-03-02 13:05:17 -05:00
 * Note that the callback decides whether directory traversal should continue
2009-02-12 16:40:35 +00:00
 * after an error: we call the callback with reason == error,
 * and if it returns CL_BREAK we break.
2021-06-29 17:27:07 -07:00
*
 * Return:
 * - CL_BREAK to break out without an error,
 * - CL_SUCCESS to continue,
 * - any CL_E* to break out due to error.
2009-02-12 16:40:35 +00:00
*/
2020-08-05 22:33:44 -07:00
typedef cl_error_t ( * cli_ftw_cb ) ( STATBUF * stat_buf , char * filename , const char * path , enum cli_ftw_reason reason , struct cli_ftw_cbdata * data ) ;
2009-02-12 16:40:35 +00:00
2021-06-29 17:27:07 -07:00
/**
 * @brief Callback to determine if a path in a file tree walk (FTW) should be skipped.
 * Has access to the same callback data as the main FTW callback function (above).
 *
 * Return:
 * - 1 if the path should be skipped (i.e. do not call the callback for the given path),
 * - 0 if the path should be processed (i.e. call the callback for the given path).
2009-07-31 21:28:55 +02:00
*/
typedef int ( * cli_ftw_pathchk ) ( const char * path , struct cli_ftw_cbdata * data ) ;
2021-06-29 17:27:07 -07:00
/**
 * @brief Traverse a file path, calling the callback function on each file
 * within if the pathchk() check allows for it. Will skip certain file types:
 * - symlinks (unless the CLI_FTW_FOLLOW_* flags are set) and special files, which are reported to the callback with the warning_skipped_* reasons.
*
2009-02-12 16:40:35 +00:00
 * This is regardless of whether a virus is found or not; storing that result is the callback's job.
 * Note that the callback may dispatch the scan asynchronously, so that when cli_ftw
 * returns we don't know the infected/not-infected status of the directory yet!
2021-06-29 17:27:07 -07:00
*
2009-02-12 16:40:35 +00:00
 * Due to this, if the callback scans synchronously it should store the infected
 * status in its cbdata.
 * This works for both files and directories. It stats the path to determine
 * which one it is.
 * If it is a file, it simply calls the callback once; otherwise it recurses.
2021-06-29 17:27:07 -07:00
*
 * @param base     The top-level directory (or file) path to be processed.
 * @param flags    A bitflag field for the CLI_FTW_* flag options (see above).
 * @param maxdepth The max recursion depth.
 * @param callback The cli_ftw_cb callback to invoke on each file AND directory.
 * @param data     Callback data for the callback function.
 * @param pathchk  A function used to determine if the callback should be run on the given file.
 * @return cl_error_t CL_SUCCESS if it traversed all files and subdirs.
 * @return cl_error_t CL_BREAK if traversal stopped at some point.
 * @return cl_error_t CL_E* if an error was encountered during traversal and we had to break out.
2009-02-12 16:40:35 +00:00
*/
2021-03-22 16:52:14 -07:00
cl_error_t cli_ftw ( char * base , int flags , int maxdepth , cli_ftw_cb callback , struct cli_ftw_cbdata * data , cli_ftw_pathchk pathchk ) ;
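/*
 * Example usage (an illustrative sketch): walk a directory tree and handle
 * each regular file. The callback owns `filename` and must free it.
 *
 * @code
 *     static cl_error_t example_cb(STATBUF *stat_buf, char *filename, const char *path,
 *                                  enum cli_ftw_reason reason, struct cli_ftw_cbdata *data)
 *     {
 *         if (visit_file == reason) {
 *             // ... scan or record `path` here ...
 *         }
 *         if (NULL != filename)
 *             free(filename);
 *         return CL_SUCCESS; // CL_BREAK would stop the walk
 *     }
 *
 *     char base[] = "/tmp/scan-me"; // hypothetical directory; cli_ftw takes a non-const path
 *     struct cli_ftw_cbdata cbdata = {0};
 *     cli_ftw(base, CLI_FTW_STD, 16, example_cb, &cbdata, NULL);
 * @endcode
 */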
2009-02-12 16:40:35 +00:00
2018-12-03 12:40:13 -05:00
const char * cli_strerror ( int errnum , char * buf , size_t len ) ;
2018-07-30 20:19:28 -04:00
2021-08-09 14:24:16 -07:00
# ifdef _WIN32
/**
 * @brief Attempt to get a filename from an open file handle.
2021-08-21 15:45:20 -07:00
*
2021-08-09 14:24:16 -07:00
 * Windows only.
 *
 * @param hFile           File handle.
 * @param [out] filepath  Will be set to the file path if found, or NULL.
 * @return cl_error_t     CL_SUCCESS if found, else an error code.
*/
cl_error_t cli_get_filepath_from_handle ( HANDLE hFile , char * * filepath ) ;
# endif
2018-07-30 20:19:28 -04:00
/**
 * @brief Attempt to get a filename from an open file descriptor.
2019-03-02 13:05:17 -05:00
*
2018-07-30 20:19:28 -04:00
 * Caller is responsible for freeing the filename.
 * Should work on Linux, macOS, and Windows.
2019-03-02 13:05:17 -05:00
*
2018-07-30 20:19:28 -04:00
 * @param desc            File descriptor.
 * @param [out] filepath  Will be set to the file path if found, or NULL.
 * @return cl_error_t     CL_SUCCESS if found, else an error code.
*/
2018-12-03 12:40:13 -05:00
cl_error_t cli_get_filepath_from_filedesc ( int desc , char * * filepath ) ;
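/*
 * Example usage (an illustrative sketch): recover the path backing an open
 * file descriptor, e.g. for log messages.
 *
 * @code
 *     char *filepath = NULL;
 *
 *     if (CL_SUCCESS == cli_get_filepath_from_filedesc(fd, &filepath)) {
 *         cli_dbgmsg("scanning %s\n", filepath); // assumes the usual debug-log helper
 *         free(filepath);                        // caller frees the filename
 *     }
 * @endcode
 */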
2018-07-30 20:19:28 -04:00
clamd clients: Mitigate move/remove symlink attack
A malicious user could replace a scan target's directory with a symlink
to another path to trick clamscan, clamdscan, or clamonacc into removing
or moving a different file (e.g. a critical system file). The issue would
affect users that use the `--move` or `--remove` options for clamscan,
clamdscan, and clamonacc.
This patch gets the real path for the scan target before the scan,
and if the file alerts and the --move or --remove quarantine features
are used, it mitigates the symlink attack by traversing the path one
directory at a time until reaching the leaf directory where the scan
target file resides before unlinking (or renaming) the file directly.
This commit applies a similar tactic used in the previous commit for
Windows builds, using the Win32 Native API to traverse a path and delete
or move files by handle rather than by file path.
I had some trouble using SetFileInformationByHandle to rename a file by
handle, so for Windows instead it will copy the file to the new location
and then use the safe unlink technique to remove the old file. If the
symlink attack occurs, the unlink will fail, and the system will not be
damaged.
For more information about AV quarantine attacks using links, see the
[RACK911 Lab's report](https://www.rack911labs.com/research/exploiting-almost-every-antivirus-software)
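/*
 * Illustrative sketch (hypothetical, POSIX-only): the directory-by-directory
 * traversal idea described above. Opening each component with O_NOFOLLOW makes
 * a directory swapped for a symlink mid-walk fail to open, instead of silently
 * redirecting the unlink. This is a simplified sketch, not the actual ClamAV
 * quarantine implementation.
 *
 * @code
 *     int dirfd = open("/", O_RDONLY | O_DIRECTORY | O_NOFOLLOW);
 *     // for each path component `comp` except the leaf:
 *     //     int next = openat(dirfd, comp, O_RDONLY | O_DIRECTORY | O_NOFOLLOW);
 *     //     close(dirfd); dirfd = next; // abort on failure
 *     // then remove the leaf relative to the verified parent directory:
 *     //     unlinkat(dirfd, leaf, 0);
 *     close(dirfd);
 * @endcode
 */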
2020-07-01 22:21:40 -07:00
/**
 * @brief Attempt to get the real path of a provided path (evaluating symlinks).
 *
 * Caller is responsible for freeing the file path.
 * On POSIX systems this just calls realpath() under the hood.
 * On Win32, it opens a handle and uses cli_get_filepath_from_filedesc()
 * to get the real path.
 *
 * @param file_name A file path to evaluate.
2021-07-16 11:47:23 -07:00
 * @param [out] real_filename A malloc'd string containing the real path.
2020-07-01 22:21:40 -07:00
 * @return cl_error_t CL_SUCCESS if found, else an error code.
*/
cl_error_t cli_realpath ( const char * file_name , char * * real_filename ) ;
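/*
 * Example usage (an illustrative sketch): resolve the real path of a scan
 * target before quarantining it, per the mitigation described above.
 *
 * @code
 *     char *real_filename = NULL;
 *
 *     if (CL_SUCCESS == cli_realpath("/path/to/target", &real_filename)) {
 *         // ... scan, then quarantine using real_filename ...
 *         free(real_filename);
 *     }
 * @endcode
 */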
2021-08-11 18:51:42 -07:00
/**
 * @brief Get the libclamav debug flag (e.g. whether debug logging is enabled).
 *
 * This is required for unit tests to be able to link with clamav.dll and not
 * directly manipulate libclamav global variables.
*/
2022-11-17 17:26:11 -08:00
uint8_t cli_get_debug_flag ( void ) ;
2021-08-11 18:51:42 -07:00
/**
 * @brief Set the libclamav debug flag to a specific value.
 *
 * The public cl_debug() API will only ever enable debug mode; it won't disable debug mode.
 *
 * This is required for unit tests to be able to link with clamav.dll and not
 * directly manipulate libclamav global variables.
*/
uint8_t cli_set_debug_flag ( uint8_t debug_flag ) ;
2024-01-09 17:44:33 -05:00
# ifndef CLI_SAFER_STRDUP_OR_GOTO_DONE
/**
 * @brief Wrapper around strdup that does a NULL check.
 *
 * This macro requires `goto done;` error handling.
 *
 * @param buf The string to duplicate.
 * @param var The variable to assign the allocated string to.
 * @param ... The error handling code to execute if the allocation fails.
*/
# define CLI_SAFER_STRDUP_OR_GOTO_DONE(buf, var, ...) \
    do {                             \
        var = cli_safer_strdup(buf); \
        if (NULL == var) {           \
            do {                     \
                __VA_ARGS__;         \
            } while (0);             \
            goto done;               \
        }                            \
2022-05-16 21:29:25 -04:00
    } while (0)
# endif
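/*
 * Example usage (an illustrative sketch): the `goto done;` error-handling
 * pattern that these CLI_* macros assume. `example()`, `status`, and `copy`
 * are hypothetical; CLI_FREE_AND_SET_NULL is defined just below.
 *
 * @code
 *     cl_error_t example(const char *input)
 *     {
 *         cl_error_t status = CL_ERROR;
 *         char *copy        = NULL;
 *
 *         CLI_SAFER_STRDUP_OR_GOTO_DONE(input, copy,
 *                                       status = CL_EMEM);
 *
 *         // ... use copy ...
 *         status = CL_SUCCESS;
 *
 *     done:
 *         CLI_FREE_AND_SET_NULL(copy);
 *         return status;
 *     }
 * @endcode
 */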
2024-01-09 17:44:33 -05:00
# ifndef CLI_FREE_AND_SET_NULL
/**
 * @brief Wrapper around `free()` that also resets the variable to NULL, preventing a double-free.
 *
 * @param var The variable to free and set to NULL.
*/
# define CLI_FREE_AND_SET_NULL(var) \
    do {                   \
        if (NULL != var) { \
2024-03-20 12:21:40 -04:00
            free((void *)var); \
2024-01-09 17:44:33 -05:00
            var = NULL;    \
        }                  \
2021-08-10 13:19:01 -07:00
    } while (0)
# endif
2024-01-09 17:44:33 -05:00
# ifndef CLI_MALLOC_OR_GOTO_DONE
/**
 * @brief Wrapper around malloc that will `goto done;` if the allocation fails.
 *
 * This macro requires `goto done;` error handling.
 *
 * @param var  The variable to assign the allocated memory to.
 * @param size The size of the memory to allocate.
 * @param ...  The error handling code to execute if the allocation fails.
*/
# define CLI_MALLOC_OR_GOTO_DONE(var, size, ...) \
    do {                     \
        var = malloc(size);  \
        if (NULL == var) {   \
            do {             \
                __VA_ARGS__; \
            } while (0);     \
            goto done;       \
        }                    \
2021-08-10 13:19:01 -07:00
    } while (0)
# endif
2024-01-09 17:44:33 -05:00
# ifndef CLI_MAX_MALLOC_OR_GOTO_DONE
/**
 * @brief Wrapper around malloc that limits how much may be allocated to CLI_MAX_ALLOCATION.
 *
 * This macro requires `goto done;` error handling.
 *
 * @param var  The variable to assign the allocated memory to.
 * @param size The size of the memory to allocate.
 * @param ...  The error handling code to execute if the allocation fails.
*/
# define CLI_MAX_MALLOC_OR_GOTO_DONE(var, size, ...) \
    do {                             \
        var = cli_max_malloc(size);  \
        if (NULL == var) {           \
            do {                     \
                __VA_ARGS__;         \
            } while (0);             \
            goto done;               \
        }                            \
2021-08-10 13:19:01 -07:00
    } while (0)
# endif
2024-01-09 17:44:33 -05:00
# ifndef CLI_CALLOC_OR_GOTO_DONE
/**
 * @brief Wrapper around calloc that will `goto done;` if the allocation fails.
 *
 * This macro requires `goto done;` error handling.
 *
 * @param var   The variable to assign the allocated memory to.
 * @param nmemb The number of elements to allocate.
 * @param size  The size of each element.
 * @param ...   The error handling code to execute if the allocation fails.
*/
# define CLI_CALLOC_OR_GOTO_DONE(var, nmemb, size, ...) \
    do {                             \
        (var) = calloc(nmemb, size); \
        if (NULL == var) {           \
            do {                     \
                __VA_ARGS__;         \
            } while (0);             \
            goto done;               \
        }                            \
2021-08-10 13:19:01 -07:00
    } while (0)
# endif
2024-01-09 17:44:33 -05:00
# ifndef CLI_MAX_CALLOC_OR_GOTO_DONE
/**
 * @brief Wrapper around calloc that limits how much may be allocated to CLI_MAX_ALLOCATION.
 *
 * This macro requires `goto done;` error handling.
 *
 * @param var   The variable to assign the allocated memory to.
 * @param nmemb The number of elements to allocate.
 * @param size  The size of each element.
 * @param ...   The error handling code to execute if the allocation fails.
*/
# define CLI_MAX_CALLOC_OR_GOTO_DONE(var, nmemb, size, ...) \
    do {                                     \
        (var) = cli_max_calloc(nmemb, size); \
        if (NULL == var) {                   \
            do {                             \
                __VA_ARGS__;                 \
            } while (0);                     \
            goto done;                       \
        }                                    \
2021-08-10 13:19:01 -07:00
    } while (0)
# endif
2024-01-09 17:44:33 -05:00
# ifndef CLI_VERIFY_POINTER_OR_GOTO_DONE
/**
 * @brief Wrapper around a NULL-check that will `goto done;` if the pointer is NULL.
 *
 * This macro requires `goto done;` error handling.
 *
 * @param ptr The pointer to verify.
 * @param ... The error handling code to execute if the pointer is NULL.
*/
# define CLI_VERIFY_POINTER_OR_GOTO_DONE(ptr, ...) \
    do {                     \
        if (NULL == ptr) {   \
            do {             \
                __VA_ARGS__; \
            } while (0);     \
            goto done;       \
        }                    \
2021-08-10 13:19:01 -07:00
    } while (0)
# endif
2022-05-16 21:29:25 -04:00
/**
 * @brief Wrapper around realloc that limits how much may be allocated to CLI_MAX_ALLOCATION.
 *
 * IMPORTANT: This differs from realloc() in that if size == 0, it will NOT free the ptr.
*
2022-05-09 14:28:34 -07:00
 * NOTE: Because cli_max_realloc() will NOT free ptr if size == 0, it is safe to free ptr after `done:`.
*
2024-01-09 17:44:33 -05:00
 * @param ptr  The pointer to realloc.
 * @param size The size of the memory to allocate.
 * @param ...  The error handling code to execute if the allocation fails.
2022-05-09 14:28:34 -07:00
*/
2024-01-09 17:44:33 -05:00
# ifndef CLI_MAX_REALLOC_OR_GOTO_DONE
2024-03-20 12:21:40 -04:00
# define CLI_MAX_REALLOC_OR_GOTO_DONE(ptr, size, ...) \
    do {                                                 \
        void *vTmp = cli_max_realloc((void *)ptr, size); \
        if (NULL == vTmp) {                              \
            do {                                         \
                __VA_ARGS__;                             \
            } while (0);                                 \
            goto done;                                   \
        }                                                \
        ptr = vTmp;                                      \
2022-05-09 14:28:34 -07:00
    } while (0)
# endif
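/*
 * Example usage (an illustrative sketch): growing a buffer with the realloc
 * macro above. Because the macro never frees `buf` on failure, `buf` can still
 * be freed safely after `done:`.
 *
 * @code
 *     uint8_t *buf      = NULL;
 *     size_t buf_len    = 0;
 *     cl_error_t status = CL_ERROR;
 *
 *     CLI_MAX_REALLOC_OR_GOTO_DONE(buf, buf_len + 4096,
 *                                  status = CL_EMEM);
 *     buf_len += 4096;
 *     status = CL_SUCCESS;
 *
 * done:
 *     free(buf); // safe: the macro did not free it
 *     return status;
 * @endcode
 */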
/**
 * @brief Wrapper around realloc that, unlike some variants of realloc, will NOT free the ptr if size == 0.
 *
 * NOTE: Because cli_safer_realloc() will not free ptr if size == 0, it is safe to free ptr after `done:`.
2022-05-16 21:29:25 -04:00
*
2024-01-09 17:44:33 -05:00
 * @param ptr  The pointer to realloc.
 * @param size The size of the memory to allocate.
 * @param ...  The error handling code to execute if the allocation fails.
2022-05-16 21:29:25 -04:00
*/
2024-01-09 17:44:33 -05:00
# ifndef CLI_SAFER_REALLOC_OR_GOTO_DONE
2024-03-20 12:21:40 -04:00
# define CLI_SAFER_REALLOC_OR_GOTO_DONE(ptr, size, ...) \
    do {                                                   \
        void *vTmp = cli_safer_realloc((void *)ptr, size); \
        if (NULL == vTmp) {                                \
            do {                                           \
                __VA_ARGS__;                               \
            } while (0);                                   \
            goto done;                                     \
        }                                                  \
        ptr = vTmp;                                        \
2022-05-27 19:51:18 -04:00
    } while (0)
# endif
2003-07-29 15:48:06 +00:00
# endif