/*
 * Copyright (C) 2013-2025 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
 * Copyright (C) 2008-2013 Sourcefire, Inc.
 *
 * Authors: Tomasz Kojm
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
 * MA 02110-1301, USA.
 */
#if HAVE_CONFIG_H
#include "clamav-config.h"
#endif

#include <check.h>
#include <stdio.h>
#include <string.h>
// libclamav
#include "clamav.h"
#include "readdb.h"
#include "matcher.h"
#include "matcher-ac.h"
#include "matcher-bm.h"
#include "matcher-pcre.h"
#include "others.h"
#include "default.h"
#include "clamav_rust.h"
#include "checks.h"
static const struct ac_testdata_s {
    const char *data;
    const char *hexsig;
    const char *virname;
} ac_testdata[] = {
    /* IMPORTANT: ac_testdata[i].hexsig should only match ac_testdata[i].data */
    {"daaaaaaaaddbbbbbcce", "64[4-4]61616161{2}6262[3-6]65", "Test_1: anchored and ranged wildcard"},
    {"ebbbbbbbbeecccccddf", "6262(6162|6364|6265|6465){2}6363", "Test_2: multi-byte fixed alternate w/ ranged wild"},
    {"aaaabbbbcccccdddddeeee", "616161*63636363*6565", "Test_3: unbounded wildcards"},
    {"oprstuwxy", "6f??727374????7879", "Test_4: nibble wildcards"},
    {"abdcabcddabccadbbdbacb", "6463{2-3}64646162(63|64|65)6361*6462????6261{-1}6362", "Test_5: various wildcard combinations w/ alternate"},
    {"abcdefghijkabcdefghijk", "62????65666768*696a6b6162{2-3}656667[1-3]6b", "Test_6: various wildcard combinations"},
    {"abcadbabcadbabcacb", "6?6164?26?62{3}?26162?361", "Test_7: nibble and ranged wildcards"},

    /* testcase for filter bug: it was checking only first 32 chars, and last
     * maxpatlen */
    {"\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1dddddddddddddddddddd5\1\1\1\1\1\1\1\1\1\1\1\1\1", "6464646464646464646464646464646464646464(35|36)", "Test_8: filter bug"},

    /* altbyte */
    {"aabaa", "6161(62|63|64)6161", "Ac_Altstr_Test_1"}, /* control */
    {"aacaa", "6161(62|63|64)6161", "Ac_Altstr_Test_1"}, /* control */
    {"aadaa", "6161(62|63|64)6161", "Ac_Altstr_Test_1"}, /* control */

    /* alt-fstr */
    {"aabbbaa", "6161(626262|636363|646464)6161", "Ac_Altstr_Test_2"}, /* control */
    {"aacccaa", "6161(626262|636363|646464)6161", "Ac_Altstr_Test_2"}, /* control */
    {"aadddaa", "6161(626262|636363|646464)6161", "Ac_Altstr_Test_2"}, /* control */

    /* alt-vstr */
    {"aabbaa", "6161(6262|63636363|6464646464)6161", "Ac_Altstr_Test_3"},    /* control */
    {"aaccccaa", "6161(6262|63636363|6464646464)6161", "Ac_Altstr_Test_3"},  /* control */
    {"aadddddaa", "6161(6262|63636363|6464646464)6161", "Ac_Altstr_Test_3"}, /* control */

    /* alt-embed */
    {"aajjaa", "6161(6a6a|66(6767|6868)66|6969)6161", "Ac_Altstr_Test_4"},   /* control */
    {"aafggfaa", "6161(6a6a|66(6767|6868)66|6969)6161", "Ac_Altstr_Test_4"}, /* control */
    {"aafhhfaa", "6161(6a6a|66(6767|6868)66|6969)6161", "Ac_Altstr_Test_4"}, /* control */
    {"aaiiaa", "6161(6a6a|66(6767|6868)66|6969)6161", "Ac_Altstr_Test_4"},   /* control */

    {NULL, NULL, NULL}};
static const struct ac_sigopts_testdata_s {
    const char *data;
    uint32_t dlength;
    const char *hexsig;
    const char *offset;
    const uint16_t sigopts;
    const char *virname;
    const uint8_t expected_result;
} ac_sigopts_testdata[] = {
    /* nocase */
    {"aaaaa", 5, "6161616161", "*", ACPATT_OPTION_NOOPTS, "AC_Sigopts_Test_1", CL_VIRUS}, /* control */
    {"bBbBb", 5, "6262626262", "*", ACPATT_OPTION_NOOPTS, "AC_Sigopts_Test_2", CL_CLEAN}, /* nocase control */
    {"cCcCc", 5, "6363636363", "*", ACPATT_OPTION_NOCASE, "AC_Sigopts_Test_3", CL_VIRUS}, /* nocase test */

    /* fullword */
    {"ddddd&e", 7, "6464646464", "*", ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_4", CL_VIRUS},   /* fullword start */
    {"s&eeeee&e", 9, "6565656565", "*", ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_5", CL_VIRUS}, /* fullword middle */
    {"s&fffff", 7, "6666666666", "*", ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_6", CL_VIRUS},   /* fullword end */
    {"sggggg", 6, "6767676767", "*", ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_7", CL_CLEAN},    /* fullword fail start */
    {"hhhhhe", 6, "6868686868", "*", ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_8", CL_CLEAN},    /* fullword fail end */
    {"iiiii", 5, "(W)6969696969", "*", ACPATT_OPTION_NOOPTS, "AC_Sigopts_Test_9", CL_VIRUS},    /* fullword class start */
    {"jjj&jj", 6, "6a6a6a(W)6a6a", "*", ACPATT_OPTION_NOOPTS, "AC_Sigopts_Test_10", CL_VIRUS},  /* fullword class middle */
    {"kkkkk", 5, "6b6b6b6b6b(W)", "*", ACPATT_OPTION_NOOPTS, "AC_Sigopts_Test_11", CL_VIRUS},   /* fullword class end */
    {"slllll", 6, "(W)6c6c6c6c6c", "*", ACPATT_OPTION_NOOPTS, "AC_Sigopts_Test_12", CL_CLEAN},  /* fullword fail start */
    {"mmmmme", 6, "6d6d6d6d6d(W)", "*", ACPATT_OPTION_NOOPTS, "AC_Sigopts_Test_13", CL_CLEAN},  /* fullword class end */
    {"nNnNn", 5, "6e6e6e6e6e", "*", ACPATT_OPTION_NOCASE | ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_14", CL_VIRUS},  /* nocase fullword */
    {"soOoOo", 6, "6f6f6f6f6f", "*", ACPATT_OPTION_NOCASE | ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_15", CL_CLEAN}, /* nocase fullword start fail */
    {"pPpPpe", 6, "7070707070", "*", ACPATT_OPTION_NOCASE | ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_16", CL_CLEAN}, /* nocase fullword end fail */

    /* wide */
    {"q\0q\0q\0q\0q\0", 10, "7171717171", "*", ACPATT_OPTION_WIDE, "AC_Sigopts_Test_17", CL_VIRUS},                          /* control */
    {"r\0R\0r\0R\0r\0", 10, "7272727272", "*", ACPATT_OPTION_WIDE | ACPATT_OPTION_NOCASE, "AC_Sigopts_Test_18", CL_VIRUS},   /* control */
    {"s\0s\0s\0s\0s\0", 10, "7373737373", "*", ACPATT_OPTION_WIDE | ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_19", CL_VIRUS}, /* control */

    {"t\0t\0t\0t\0t\0", 10, "7474747474", "*", ACPATT_OPTION_WIDE | ACPATT_OPTION_ASCII, "AC_Sigopts_Test_20", CL_VIRUS}, /* control */

    {"u\0u\0u\0u\0u\0", 10, "7575757575", "*", ACPATT_OPTION_WIDE | ACPATT_OPTION_NOCASE | ACPATT_OPTION_FULLWORD, "AC_Sigopts_Test_21", CL_VIRUS}, /* control */
    {"v\0v\0v\0v\0v\0", 10, "7676767676", "*", ACPATT_OPTION_WIDE | ACPATT_OPTION_NOCASE | ACPATT_OPTION_ASCII, "AC_Sigopts_Test_22", CL_VIRUS},    /* control */

    {"w\0w\0w\0w\0w\0", 10, "7777777777", "*", ACPATT_OPTION_WIDE | ACPATT_OPTION_FULLWORD | ACPATT_OPTION_ASCII, "AC_Sigopts_Test_23", CL_VIRUS},                        /* control */
    {"x\0x\0x\0x\0x\0", 10, "7878787878", "*", ACPATT_OPTION_WIDE | ACPATT_OPTION_NOCASE | ACPATT_OPTION_FULLWORD | ACPATT_OPTION_ASCII, "AC_Sigopts_Test_24", CL_VIRUS}, /* control */

    {NULL, 0, NULL, NULL, ACPATT_OPTION_NOOPTS, NULL, CL_CLEAN}};
static const struct pcre_testdata_s {
    const char *data;
    const char *hexsig;
    const char *offset;
    const uint16_t sigopts;
    const char *virname;
    const uint8_t expected_result;
} pcre_testdata[] = {
    {"clamav", "/clamav/", "*", ACPATT_OPTION_NOOPTS, "Test_1: simple string", CL_VIRUS},
    {"cla:mav", "/cla:mav/", "*", ACPATT_OPTION_NOOPTS, "Test_2: embedded colon", CL_VIRUS},

    {"notbasic", "/basic/r", "0", ACPATT_OPTION_NOOPTS, "Test_3: rolling option", CL_VIRUS},
    {"nottrue", "/true/", "0", ACPATT_OPTION_NOOPTS, "Test_4: rolling(off) option", CL_SUCCESS},

    {"not12345678truly", "/12345678/e", "3,8", ACPATT_OPTION_NOOPTS, "Test_5: encompass option", CL_VIRUS},
    {"not23456789truly", "/23456789/e", "4,8", ACPATT_OPTION_NOOPTS, "Test_6: encompass option (low end)", CL_SUCCESS},
    {"not34567890truly", "/34567890/e", "3,7", ACPATT_OPTION_NOOPTS, "Test_7: encompass option (high end)", CL_SUCCESS},

    {"notapietruly", "/apie/re", "2,2", ACPATT_OPTION_NOOPTS, "Test_8: rolling encompass", CL_SUCCESS},
    {"notafigtruly", "/afig/e", "2,2", ACPATT_OPTION_NOOPTS, "Test_9: rolling(off) encompass", CL_SUCCESS},
    {"notatretruly", "/atre/re", "2,6", ACPATT_OPTION_NOOPTS, "Test_10: rolling encompass", CL_VIRUS},
    {"notasadtruly", "/asad/e", "2,6", ACPATT_OPTION_NOOPTS, "Test_11: rolling(off) encompass", CL_VIRUS},

    {NULL, NULL, NULL, ACPATT_OPTION_NOOPTS, NULL, CL_CLEAN}};
static cli_ctx ctx;
static struct cl_scan_options options;
static fmap_t thefmap;
static const char *virname = NULL;

static void setup(void)
{
    struct cli_matcher *root;
    virname = NULL;
libclamav: Fix scan recursion tracking
Scan recursion is the process of identifying files embedded in other
files and then scanning them, recursively.
Internally this process is more complex than it may sound because a file
may have multiple layers of types before finding a new "file".
At present we treat the recursion count in the scanning context as an
index into both our fmap list AND our container list. These two lists
are conceptually a part of the same thing and should be unified.
But what's concerning is that the "recursion level" isn't actually
incremented or decremented at the same time that we add a layer to the
fmap or container lists but instead is more touchy-feely, increasing
when we find a new "file".
To account for this shadiness, the size of the fmap and container lists
has always been a little longer than our "max scan recursion" limit so
we don't accidentally overflow the fmap or container arrays (!).
I've implemented a single recursion-stack as an array, similar to before,
which includes a pointer to each fmap at each layer, along with the size
and type. Push and pop functions add and remove layers whenever a new
fmap is added. A boolean argument when pushing indicates if the new layer
represents a new buffer or new file (descriptor). A new buffer will reset
the "nested fmap level" (described below).
This commit also provides a solution for an issue where we detect
embedded files more than once during scan recursion.
For illustration, imagine a tarball named foo.tar.gz with this structure:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| baz.exe | PE | 0 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| │ └── hello.txt | ASCII | 2 | 0 |
| └── sfx.7z | 7Z | 1 | 1 |
| └── world.txt | ASCII | 2 | 0 |
(A) If we scan for embedded files at any layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| ├── foo.tar | TAR | 1 | 0 |
| │ ├── bar.zip | ZIP | 2 | 1 |
| │ │ └── hola.txt | ASCII | 3 | 0 |
| │ ├── baz.exe | PE | 2 | 1 |
| │ │ ├── sfx.zip | ZIP | 3 | 1 |
| │ │ │ └── hello.txt | ASCII | 4 | 0 |
| │ │ └── sfx.7z | 7Z | 3 | 1 |
| │ │ └── world.txt | ASCII | 4 | 0 |
| │ ├── sfx.zip | ZIP | 2 | 1 |
| │ │ └── hello.txt | ASCII | 3 | 0 |
| │ └── sfx.7z | 7Z | 2 | 1 |
| │ └── world.txt | ASCII | 3 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| └── sfx.7z | 7Z | 1 | 1 |
(A) is bad because it scans content more than once.
Note that for the GZ layer, it may detect the ZIP and 7Z if the
signature hits on the compressed data, which it might, though
extracting the ZIP and 7Z will likely fail.
The reason the above doesn't happen now is that we restrict embedded
type scans for a bunch of archive formats to include GZ and TAR.
(B) If we scan for embedded files at the foo.tar layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| ├── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 2 | 1 |
| │ └── hello.txt | ASCII | 3 | 0 |
| └── sfx.7z | 7Z | 2 | 1 |
| └── world.txt | ASCII | 3 | 0 |
(B) is almost right. But we can achieve it easily enough by only scanning
for embedded content in the current fmap when the "nested fmap level" is 0.
The upside is that it should safely detect all embedded content, even if
it may think the sfx.zip and sfx.7z are in foo.tar instead of in baz.exe.
The biggest risk I can think of affects ZIPs. SFXZIP detection
is identical to ZIP detection, which is why we don't allow SFXZIP to be
detected inside a ZIP. If we only allow embedded type scanning at
fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP
if the bar.exe was not compressed in foo.zip and if non-compressed files
extracted from ZIPs aren't extracted as new buffers:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.zip | ZIP | 0 | 0 |
| └── bar.exe | PE | 1 | 1 |
| └── sfx.zip | ZIP | 2 | 2 |
Provided that we ensure all files extracted from zips are scanned in
new buffers, option (B) should be safe.
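Stated as code, rule (B) reduces to a single guard on the embedded-file type scan (hypothetical helper name; in libclamav the decision is made using the nested fmap level tracked by the recursion stack):

```c
#include <assert.h>

/* Rule (B): only run embedded-file type scanning when the current layer
 * is the first type-layer of its buffer (nested fmap level 0). Deeper
 * type-layers share a buffer that was already scanned for embedded
 * content, so scanning them again would re-detect the same embedded
 * files, as in outcome (A). */
static int should_scan_for_embedded_files(int nested_fmap_level)
{
    return nested_fmap_level == 0;
}
```

In the (B) table above, foo.tar (nested level 0) gets the embedded scan and finds sfx.zip and sfx.7z, while baz.exe (nested level 1) is skipped, avoiding the duplicates.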
(C) If we scan for embedded files at the baz.exe layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 3 | 1 |
| │ └── hello.txt | ASCII | 4 | 0 |
| └── sfx.7z | 7Z | 3 | 1 |
| └── world.txt | ASCII | 4 | 0 |
(C) is right. But it's harder to achieve. For this example we can get it by
restricting 7ZSFX and ZIPSFX detection to occur only when scanning an executable.
But that may mean losing detection of archives embedded elsewhere.
And we'd have to identify allowable container types for each possible
embedded type, which would be very difficult.
So this commit aims to solve the issue the (B)-way.
Note that in all situations, we still have to scan with file typing
enabled to determine if we need to reassign the current file type, such
as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-
compressed. Detection of DMG and a handful of other types rely on
finding data partway through or near the end of a file before
reassigning the entire file as the new type.
Other fixes and considerations in this commit:
- The UTF-16 HTML parser has weak error handling, particularly with respect
to creating a nested fmap for scanning the ASCII-decoded file.
This commit cleans up the error handling and wraps the nested scan with
the recursion-stack push()/pop() for correct recursion tracking.
Before this commit, each container layer had a flag to indicate if the
container layer is valid.
We need something similar so that the cli_recursion_stack_get_*()
functions ignore normalized layers. Details...
Imagine an LDB signature for HTML content that specifies a ZIP
container. If the signature actually alerts on the normalized HTML and
you don't ignore normalized layers for the container check, it will
appear as though the alert is in an HTML container rather than a ZIP
container.
This commit accomplishes this with a boolean you set in the scan context
before scanning a new layer. Then when the new fmap is created, it will
use that flag to set a similar flag for the layer. The context flag is
then reset so that subsequent layers don't inherit it.
The flag allows the new recursion_stack_get() function to ignore
normalized layers when iterating the stack to return a layer at a
requested index, negative or positive.
Scanning extracted/normalized JavaScript and VBA should also
use the 'layer is normalized' flag.
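A minimal sketch of the index lookup that skips normalized layers (hypothetical, simplified types and names; the real cli_recursion_stack_get_*() helpers operate on the scan context and CL_TYPE_* values):

```c
#include <assert.h>
#include <stddef.h>

enum { TYPE_ZIP = 1, TYPE_HTML = 2 };

/* Hypothetical, simplified layer; the real one carries a CL_TYPE_* and
 * an attributes bitfield with a "normalized" flag. */
typedef struct {
    int type;
    int is_normalized; /* produced by normalization, not a real container */
} layer_t;

/* Fetch the layer at `index`, ignoring normalized layers. index >= 0
 * counts up from the bottom of the stack; index < 0 counts back from
 * the top (-1 = current non-normalized layer). */
static const layer_t *stack_get(const layer_t *stack, int count, int index)
{
    if (index < 0) {
        for (int i = count - 1; i >= 0; i--) {
            if (stack[i].is_normalized)
                continue;
            if (++index == 0)
                return &stack[i];
        }
    } else {
        for (int i = 0; i < count; i++) {
            if (stack[i].is_normalized)
                continue;
            if (index-- == 0)
                return &stack[i];
        }
    }
    return NULL; /* index out of range */
}
```

For the LDB example above, a match on the normalized HTML layer still reports HTML as the matched layer and ZIP as its container, because the normalized view is transparent to the lookup.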
- This commit also fixes Heuristic.Broken.Executable alert for ELF files
to make sure that:
A) these only alert if cli_append_virus() returns CL_VIRUS (aka it
respects the FP check).
B) all broken-executable alerts for ELF only happen if the
SCAN_HEURISTIC_BROKEN option is enabled.
- This commit also cleans up the error handling in cli_magic_scan_dir().
This was needed so we could correctly apply the layer-is-normalized-flag
to all VBA macros extracted to a directory when scanning the directory.
- Also fix an issue where exceeding scan maximums wouldn't cause embedded
file detection scans to abort. Granted we don't actually want to abort
if max filesize or max recursion depth are exceeded... only if max
scansize, max files, and max scantime are exceeded.
Add 'abort_scan' flag to scan context, to protect against depending on
correct error propagation for fatal conditions. Instead, setting this
flag in the scan context should guarantee that a fatal condition deep in
scan recursion isn't lost, which would result in more content being scanned
instead of aborting. This shouldn't be necessary, but some status codes
like CL_ETIMEOUT never used to be fatal and it's easier to do this than
to verify every parser only returns CL_ETIMEOUT and other "fatal
status codes" in fatal conditions.
- Remove duplicate is_tar() prototype from filetypes.c and include
is_tar.h instead.
- Presently we create the fmap hash when creating the fmap.
This wastes a bit of CPU if the hash is never needed.
Now that we're creating fmaps for all embedded files discovered with
file type recognition scans, this is a much more frequent occurrence and
really slows things down.
This commit fixes the issue by only creating fmap hashes as needed.
This should not only resolve the performance impact of creating fmaps
for all embedded files, but should also improve performance in general.
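The compute-on-first-use idea can be sketched as follows (hypothetical, simplified struct; the real fmap caches actual digests such as MD5/SHA2 rather than this stand-in checksum):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified fmap for illustration. */
typedef struct {
    const unsigned char *data;
    size_t len;
    unsigned long hash;
    int have_hash;
} fake_fmap_t;

/* Lazy hashing: the hash is only calculated the first time something
 * actually asks for it, then cached for subsequent callers. Creating
 * an fmap therefore costs nothing hash-wise. */
static unsigned long fmap_get_hash(fake_fmap_t *m)
{
    if (!m->have_hash) {
        unsigned long h = 5381; /* djb2, as an illustrative digest */
        for (size_t i = 0; i < m->len; i++)
            h = h * 33 + m->data[i];
        m->hash      = h;
        m->have_hash = 1;
    }
    return m->hash;
}
```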
- Add allmatch check to the zip parser after the central-header meta
match. That way we don't get multiple alerts for the same match except in
allmatch mode. Clean up error handling in the zip parser a tiny bit.
- Fixes to ensure that the scan limits such as scansize, filesize,
recursion depth, # of embedded files, and scantime are always reported
if AlertExceedsMax (--alert-exceeds-max) is enabled.
- Fixed an issue where non-fatal alerts for exceeding scan maximums may
mask signature matches later on. I changed it so these alerts use the
"possibly unwanted" alert-type and thus only alert if no other alerts
were found or if all-match or heuristic-precedence are enabled.
- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata
when the --gen-json feature is enabled. These will show up once under
"ParseErrors" the first time a limit is exceeded. In the present
implementation, only one limits-exceeded events will be added, so as to
prevent a malicious or malformed sample from filling the JSON buffer
with millions of events and using a tonne of RAM.
    memset(&thefmap, 0, sizeof(thefmap));
    memset(&ctx, 0, sizeof(ctx));
    memset(&options, 0, sizeof(struct cl_scan_options));
    ctx.options = &options;
    ctx.engine = cl_engine_new();
    ck_assert_msg(!!ctx.engine, "cl_engine_new() failed");
    ctx.dconf                = ctx.engine->dconf;
    ctx.recursion_stack_size = ctx.engine->max_recursion_level;
Remove max-allocation limits where not required
The cli_max_malloc, cli_max_calloc, and cli_max_realloc functions
provide a way to protect against allocating too much memory
when the size of the allocation is derived from the untrusted input.
Specifically, we worry about values in the file being scanned being
manipulated to exhaust the RAM and crash the application.
There is no need to check the limits if the size of the allocation
is fixed, or if the size of the allocation is necessary for signature
loading, or the general operation of the applications.
E.g. checking the max-allocation limit for the size of a hash, or
for the size of the scan recursion stack, is a complete waste of
time.
Although we significantly increased the max-allocation limit in
a recent release, it is best not to check an allocation if the
allocation will be safe. It would be a waste of time.
I am also hopeful that if we can reduce the number of allocations
that require a limit-check to those that require it for the safe
scan of a file, then eventually we can store the limit in the
scan context, and make it configurable.
    ctx.recursion_stack = calloc(sizeof(recursion_level_t), ctx.recursion_stack_size);
    ck_assert_msg(!!ctx.recursion_stack, "calloc() for recursion_stack failed");
    // ctx was memset, so recursion_level starts at 0.
    ctx.recursion_stack[ctx.recursion_level].fmap = &thefmap;
    ctx.fmap = ctx.recursion_stack[ctx.recursion_level].fmap;
    root = (struct cli_matcher *)MPOOL_CALLOC(ctx.engine->mempool, 1, sizeof(struct cli_matcher));
    ck_assert_msg(root != NULL, "root == NULL");
#ifdef USE_MPOOL
    root->mempool = ctx.engine->mempool;
#endif
    ctx.engine->root[0] = root;
}
static void teardown(void)
{
    cl_engine_free((struct cl_engine *)ctx.engine);
    if (ctx.recursion_stack[ctx.recursion_level].evidence) {
        evidence_free(ctx.recursion_stack[ctx.recursion_level].evidence);
    }
libclamav: Fix scan recursion tracking
Scan recursion is the process of identifying files embedded in other
files and then scanning them, recursively.
Internally this process is more complex than it may sound because a file
may have multiple layers of types before finding a new "file".
At present we treat the recursion count in the scanning context as an
index into both our fmap list AND our container list. These two lists
are conceptually a part of the same thing and should be unified.
But what's concerning is that the "recursion level" isn't actually
incremented or decremented at the same time that we add a layer to the
fmap or container lists but instead is more touchy-feely, increasing
when we find a new "file".
To account for this shadiness, the size of the fmap and container lists
has always been a little longer than our "max scan recursion" limit so
we don't accidentally overflow the fmap or container arrays (!).
I've implemented a single recursion-stack as an array, similar to before,
which includes a pointer to each fmap at each layer, along with the size
and type. Push and pop functions add and remove layers whenever a new
fmap is added. A boolean argument when pushing indicates if the new layer
represents a new buffer or new file (descriptor). A new buffer will reset
the "nested fmap level" (described below).
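The push/pop bookkeeping described above can be sketched roughly like this (the structure and function names here are illustrative, not the actual libclamav API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative sketch of the recursion-stack bookkeeping; the real
 * libclamav structures and function names differ. */
typedef struct {
    const void *fmap;      /* buffer mapped at this layer */
    int type;              /* detected file type */
    size_t size;
    int nested_fmap_level; /* 0 when this layer starts a new buffer */
} layer_t;

typedef struct {
    layer_t stack[16];
    int level; /* index of the current layer; -1 when empty */
} recursion_state_t;

/* Push a layer; is_new_buffer resets the nested fmap level. */
int recursion_push(recursion_state_t *s, const void *fmap, int type,
                   size_t size, bool is_new_buffer)
{
    if (s->level + 1 >= (int)(sizeof(s->stack) / sizeof(s->stack[0])))
        return -1; /* max recursion depth exceeded */
    s->level++;
    s->stack[s->level].fmap = fmap;
    s->stack[s->level].type = type;
    s->stack[s->level].size = size;
    s->stack[s->level].nested_fmap_level =
        (is_new_buffer || s->level == 0)
            ? 0
            : s->stack[s->level - 1].nested_fmap_level + 1;
    return 0;
}

/* Pop the current layer when its scan completes. */
void recursion_pop(recursion_state_t *s)
{
    if (s->level >= 0)
        s->level--;
}
```

A layer pushed with `is_new_buffer` set starts over at nested fmap level 0 (e.g. foo.tar decompressed from foo.tar.gz), while a layer carved out of the same buffer inherits its parent's level plus one (e.g. bar.zip found inside foo.tar's buffer), matching the tables below.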
This commit also provides a solution for an issue where we detect
embedded files more than once during scan recursion.
For illustration, imagine a tarball named foo.tar.gz with this structure:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| baz.exe | PE | 0 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| │ └── hello.txt | ASCII | 2 | 0 |
| └── sfx.7z | 7Z | 1 | 1 |
| └── world.txt | ASCII | 2 | 0 |
(A) If we scan for embedded files at any layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| ├── foo.tar | TAR | 1 | 0 |
| │ ├── bar.zip | ZIP | 2 | 1 |
| │ │ └── hola.txt | ASCII | 3 | 0 |
| │ ├── baz.exe | PE | 2 | 1 |
| │ │ ├── sfx.zip | ZIP | 3 | 1 |
| │ │ │ └── hello.txt | ASCII | 4 | 0 |
| │ │ └── sfx.7z | 7Z | 3 | 1 |
| │ │ └── world.txt | ASCII | 4 | 0 |
| │ ├── sfx.zip | ZIP | 2 | 1 |
| │ │ └── hello.txt | ASCII | 3 | 0 |
| │ └── sfx.7z | 7Z | 2 | 1 |
| │ └── world.txt | ASCII | 3 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| └── sfx.7z | 7Z | 1 | 1 |
(A) is bad because it scans content more than once.
Note that for the GZ layer, it may detect the ZIP and 7Z if the
signature hits on the compressed data, which it might, though
extracting the ZIP and 7Z will likely fail.
The reason the above doesn't happen now is that we restrict embedded
type scans for a bunch of archive formats to include GZ and TAR.
(B) If we scan for embedded files at the foo.tar layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| ├── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 2 | 1 |
| │ └── hello.txt | ASCII | 3 | 0 |
| └── sfx.7z | 7Z | 2 | 1 |
| └── world.txt | ASCII | 3 | 0 |
(B) is almost right. But we can achieve it easily enough by only scanning
for embedded content in the current fmap when the "nested fmap level" is 0.
The upside is that it should safely detect all embedded content, even if
it may think the sfx.zip and sfx.7z are in foo.tar instead of in baz.exe.
The biggest risk I can think of affects ZIPs. SFXZIP detection
is identical to ZIP detection, which is why we don't allow SFXZIP to be
detected inside a ZIP. If we only allow embedded type scanning at
fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP
if bar.exe was not compressed in foo.zip and if non-compressed files
extracted from ZIPs aren't extracted as new buffers:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.zip | ZIP | 0 | 0 |
| └── bar.exe | PE | 1 | 1 |
| └── sfx.zip | ZIP | 2 | 2 |
Provided that we ensure all files extracted from zips are scanned in
new buffers, option (B) should be safe.
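The (B) rule reduces to a small guard: embedded-type scans only run on layers whose nested fmap level is 0. As a sketch (illustrative name, not the real libclamav check):

```c
#include <stdbool.h>

/* Sketch: embedded-file type scanning is only performed on layers that
 * begin a new buffer (nested fmap level 0), so content carved out of a
 * buffer is not re-scanned for embedded files at every nested layer. */
bool should_scan_for_embedded(int nested_fmap_level)
{
    return nested_fmap_level == 0;
}
```

In the tables above, this means foo.tar (level 0) is scanned for embedded files, while bar.zip and baz.exe (level 1, carved from foo.tar's buffer) are not re-scanned, avoiding the duplicate detections of case (A).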
(C) If we scan for embedded files at the baz.exe layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 3 | 1 |
| │ └── hello.txt | ASCII | 4 | 0 |
| └── sfx.7z | 7Z | 3 | 1 |
| └── world.txt | ASCII | 4 | 0 |
(C) is right. But it's harder to achieve. For this example we can get it by
restricting 7ZSFX and ZIPSFX detection only when scanning an executable.
But that may mean losing detection of archives embedded elsewhere.
And we'd have to identify allowable container types for each possible
embedded type, which would be very difficult.
So this commit aims to solve the issue the (B)-way.
Note that in all situations, we still have to scan with file typing
enabled to determine if we need to reassign the current file type, such
as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-
compressed. Detection of DMG and a handful of other types relies on
finding data partway through or near the end of a file before
reassigning the entire file as the new type.
Other fixes and considerations in this commit:
- The utf16 HTML parser has weak error handling, particularly with respect
to creating a nested fmap for scanning the ascii decoded file.
This commit cleans up the error handling and wraps the nested scan with
the recursion-stack push()/pop() for correct recursion tracking.
Before this commit, each container layer had a flag to indicate if the
container layer is valid.
We need something similar so that the cli_recursion_stack_get_*()
functions ignore normalized layers. Details...
Imagine an LDB signature for HTML content that specifies a ZIP
container. If the signature actually alerts on the normalized HTML and
you don't ignore normalized layers for the container check, it will
appear as though the alert is in an HTML container rather than a ZIP
container.
This commit accomplishes this with a boolean you set in the scan context
before scanning a new layer. Then when the new fmap is created, it will
use that flag to set a similar flag for the layer. The context flag is
then reset so that subsequent layers don't inherit it.
The flag allows the new recursion_stack_get() function to ignore
normalized layers when iterating the stack to return a layer at a
requested index, negative or positive.
Scanning extracted/normalized JavaScript and VBA should also use the
'layer is normalized' flag.
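A lookup that skips normalized layers might look roughly like this (names are illustrative, not libclamav's), so a container check against the ZIP layer isn't fooled by a normalized-HTML layer sitting between it and the match:

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of a layer lookup that ignores normalized layers, so container
 * checks see the original (e.g. ZIP) layer rather than normalized HTML.
 * Index 0 is the outermost layer; negative indices count back from the
 * innermost (-1 = current). */
typedef struct {
    int type;
    bool is_normalized;
} scan_layer_t;

const scan_layer_t *stack_get(const scan_layer_t *stack, int count, int index)
{
    int kept[32]; /* indices of non-normalized layers */
    int n = 0, i;

    for (i = 0; i < count && n < 32; i++) {
        if (!stack[i].is_normalized)
            kept[n++] = i;
    }
    if (index < 0)
        index += n; /* -1 selects the innermost non-normalized layer */
    if (index < 0 || index >= n)
        return NULL;
    return &stack[kept[index]];
}
```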
- This commit also fixes Heuristic.Broken.Executable alert for ELF files
to make sure that:
A) these only alert if cli_append_virus() returns CL_VIRUS (aka it
respects the FP check).
B) all broken-executable alerts for ELF only happen if the
SCAN_HEURISTIC_BROKEN option is enabled.
- This commit also cleans up the error handling in cli_magic_scan_dir().
This was needed so we could correctly apply the layer-is-normalized-flag
to all VBA macros extracted to a directory when scanning the directory.
- Also fix an issue where exceeding scan maximums wouldn't cause embedded
file detection scans to abort. Granted we don't actually want to abort
if max filesize or max recursion depth are exceeded... only if max
scansize, max files, and max scantime are exceeded.
Add an 'abort_scan' flag to the scan context, to protect against
depending on correct error propagation for fatal conditions. Instead,
setting this flag in the scan context should guarantee that a fatal
condition deep in scan recursion isn't lost, which would result in more
stuff being scanned instead of aborting. This shouldn't be necessary,
but some status codes like CL_ETIMEOUT never used to be fatal and it's
easier to do this than to verify every parser only returns CL_ETIMEOUT
and other "fatal status codes" in fatal conditions.
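The flag behavior amounts to something like the following sketch (illustrative names and a simplified status enum, not the actual libclamav symbols): a fatal status sets the flag once, and every scan loop checks it, so the condition survives even if an intermediate parser swallows the error code.

```c
#include <stdbool.h>

/* Simplified status codes for illustration only. */
typedef enum {
    SCAN_OK = 0,
    SCAN_ETIMEOUT,  /* max scantime exceeded: fatal */
    SCAN_ESCANSIZE, /* max scansize exceeded: fatal */
    SCAN_EMAXFILES, /* max files exceeded:    fatal */
    SCAN_EFILESIZE, /* max filesize exceeded: skip file, keep scanning */
    SCAN_ERECURSION /* max depth exceeded:    skip layer, keep scanning */
} scan_status_t;

typedef struct {
    bool abort_scan;
} scan_ctx_t;

/* Record a parser result: fatal conditions latch the abort flag even if
 * the caller later drops the status code. */
scan_status_t record_status(scan_ctx_t *ctx, scan_status_t result)
{
    if (result == SCAN_ETIMEOUT || result == SCAN_ESCANSIZE ||
        result == SCAN_EMAXFILES)
        ctx->abort_scan = true;
    return result;
}
```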
- Remove the duplicate is_tar() prototype from filetypes.c and include
is_tar.h instead.
- Presently we create the fmap hash when creating the fmap.
This wastes a bit of CPU if the hash is never needed.
Now that we're creating fmaps for all embedded files discovered with
file type recognition scans, this is a much more frequent occurrence and
really slows things down.
This commit fixes the issue by only creating fmap hashes as needed.
This should not only resolve the performance impact of creating fmaps
for all embedded files, but should also improve performance in general.
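The lazy-hash idea can be sketched like this (toy hash and illustrative names; the real fmap code uses proper message digests): the hash is computed the first time it is requested, then cached, instead of being computed up front for every fmap.

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of lazy hash computation for an in-memory map. */
typedef struct {
    const unsigned char *data;
    size_t len;
    unsigned long hash;
    bool have_hash;
} fmap_sketch_t;

unsigned long fmap_get_hash(fmap_sketch_t *m)
{
    size_t i;

    if (!m->have_hash) { /* computed at most once, only when needed */
        m->hash = 0;
        for (i = 0; i < m->len; i++)
            m->hash = m->hash * 31 + m->data[i];
        m->have_hash = true;
    }
    return m->hash;
}
```

fmaps that are created and scanned without any signature needing the hash never pay the hashing cost, which matters when every embedded file discovered by type recognition now gets its own fmap.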
- Add an allmatch check to the zip parser after the central-header meta
match. That way we don't get multiple alerts with the same match except
in allmatch mode. Clean up error handling in the zip parser a tiny bit.
- Fixes to ensure that the scan limits such as scansize, filesize,
recursion depth, # of embedded files, and scantime are always reported
if AlertExceedsMax (--alert-exceeds-max) is enabled.
- Fixed an issue where non-fatal alerts for exceeding scan maximums may
mask signature matches later on. I changed it so these alerts use the
"possibly unwanted" alert-type and thus only alert if no other alerts
were found or if all-match or heuristic-precedence are enabled.
- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata
when the --gen-json feature is enabled. These will show up once under
"ParseErrors" the first time a limit is exceeded. In the present
implementation, only one limits-exceeded events will be added, so as to
prevent a malicious or malformed sample from filling the JSON buffer
with millions of events and using a tonne of RAM.
    free(ctx.recursion_stack);
}
START_TEST(test_ac_scanbuff)
{
    struct cli_ac_data mdata;
    struct cli_matcher *root;
    unsigned int i;
    int ret;

    root = ctx.engine->root[0];
    ck_assert_msg(root != NULL, "root == NULL");

    root->ac_only = 1;

#ifdef USE_MPOOL
    root->mempool = mpool_create();
#endif

    ret = cli_ac_init(root, CLI_DEFAULT_AC_MINDEPTH, CLI_DEFAULT_AC_MAXDEPTH, 1);
    ck_assert_msg(ret == CL_SUCCESS, "cli_ac_init() failed");

    for (i = 0; ac_testdata[i].data; i++) {
PE, ELF, Mach-O: code cleanup
The header parsing / executable metadata collecting functions for the
PE, ELF, and Mach-O file types were using `int` for the return type.
Mostly they were returning 0 for success and -1, -2, -3, or -4 for
failure. But in some cases they were returning cl_error_t enum values
for failure. Regardless, the function using them was treating 0 as
success and non-zero as failure, which it stored as -1 ... every time.
This commit switches them all to use cl_error_t. I am continuing to
store the final result as 0 / -1 in the `peinfo` struct, but outside of
that everything has been made consistent.
While I was working on that, I got a tad sidetracked. I noticed that
the target type isn't an enum, or even a set of #defines. So I made an
enum and then changed the code that uses target types to use the enum.
I also removed the `target` parameter from a number of functions that
don't actually use it at all. Some recursion was masking the fact that
it was an unused parameter, which is why there was no warning about it.
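The target-type change described above amounts to replacing raw integers with an enum, roughly like this sketch (the names and values here are illustrative, not the actual ClamAV target-type numbering):

```c
/* Sketch: an enum in place of bare target-type integers, so the
 * compiler can check uses. Values are illustrative only. */
typedef enum cli_target {
    TARGET_GENERIC = 0,
    TARGET_PE,
    TARGET_OLE2,
    TARGET_HTML,
    TARGET_MAIL,
    TARGET_GRAPHICS,
    TARGET_ELF,
    TARGET_ASCII,
    TARGET_MACHO
} cli_target_t;
```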
        ret = cli_add_content_match_pattern(root, ac_testdata[i].virname, ac_testdata[i].hexsig, 0, 0, 0, "*", NULL, 0);
        ck_assert_msg(ret == CL_SUCCESS, "cli_add_content_match_pattern failed");
    }

    ret = cli_ac_buildtrie(root);
    ck_assert_msg(ret == CL_SUCCESS, "cli_ac_buildtrie() failed");

    ret = cli_ac_initdata(&mdata, root->ac_partsigs, 0, 0, CLI_DEFAULT_AC_TRACKLEN);
    ck_assert_msg(ret == CL_SUCCESS, "cli_ac_initdata() failed");

    ctx.options->general &= ~CL_SCAN_GENERAL_ALLMATCHES; /* make sure all-match is disabled */

    for (i = 0; ac_testdata[i].data; i++) {
        ret = cli_ac_scanbuff((const unsigned char *)ac_testdata[i].data, strlen(ac_testdata[i].data), &virname, NULL, NULL, root, &mdata, 0, 0, NULL, AC_SCAN_VIR, NULL);
        ck_assert_msg(ret == CL_VIRUS, "cli_ac_scanbuff() failed for %s", ac_testdata[i].virname);
        ck_assert_msg(!strncmp(virname, ac_testdata[i].virname, strlen(ac_testdata[i].virname)), "Dataset %u matched with %s", i, virname);

        ret = cli_scan_buff((const unsigned char *)ac_testdata[i].data, strlen(ac_testdata[i].data), 0, &ctx, 0, NULL);
        ck_assert_msg(ret == CL_VIRUS, "cli_scan_buff() failed for %s", ac_testdata[i].virname);
        ck_assert_msg(!strncmp(virname, ac_testdata[i].virname, strlen(ac_testdata[i].virname)), "Dataset %u matched with %s", i, virname);
    }

    cli_ac_freedata(&mdata);
}
END_TEST
START_TEST(test_ac_scanbuff_allscan)
{
    struct cli_ac_data mdata;
    struct cli_matcher *root;
    unsigned int i;
    int ret;

    root = ctx.engine->root[0];
    ck_assert_msg(root != NULL, "root == NULL");

    root->ac_only = 1;

#ifdef USE_MPOOL
    root->mempool = mpool_create();
#endif

    ret = cli_ac_init(root, CLI_DEFAULT_AC_MINDEPTH, CLI_DEFAULT_AC_MAXDEPTH, 1);
    ck_assert_msg(ret == CL_SUCCESS, "cli_ac_init() failed");

    for (i = 0; ac_testdata[i].data; i++) {
        ret = cli_add_content_match_pattern(root, ac_testdata[i].virname, ac_testdata[i].hexsig, 0, 0, 0, "*", NULL, 0);
        ck_assert_msg(ret == CL_SUCCESS, "cli_add_content_match_pattern failed");
    }

    ret = cli_ac_buildtrie(root);
    ck_assert_msg(ret == CL_SUCCESS, "cli_ac_buildtrie() failed");

    ret = cli_ac_initdata(&mdata, root->ac_partsigs, 0, 0, CLI_DEFAULT_AC_TRACKLEN);
    ck_assert_msg(ret == CL_SUCCESS, "cli_ac_initdata() failed");

    ctx.options->general |= CL_SCAN_GENERAL_ALLMATCHES; /* enable all-match */

    for (i = 0; ac_testdata[i].data; i++) {
        ret = cli_ac_scanbuff((const unsigned char *)ac_testdata[i].data, strlen(ac_testdata[i].data), &virname, NULL, NULL, root, &mdata, 0, 0, NULL, AC_SCAN_VIR, NULL);
        ck_assert_msg(ret == CL_VIRUS, "cli_ac_scanbuff() failed for %s", ac_testdata[i].virname);
        ck_assert_msg(!strncmp(virname, ac_testdata[i].virname, strlen(ac_testdata[i].virname)), "Dataset %u matched with %s", i, virname);

        ret = cli_scan_buff((const unsigned char *)ac_testdata[i].data, strlen(ac_testdata[i].data), 0, &ctx, 0, NULL);
        ck_assert_msg(ret == CL_SUCCESS, "cli_scan_buff() failed for %s", ac_testdata[i].virname);
        // phishingScan() doesn't check the number of alerts. When using CL_SCAN_GENERAL_ALLMATCHES
        // or if using `CL_SCAN_GENERAL_HEURISTIC_PRECEDENCE` and `cli_append_potentially_unwanted()`
        // we need to count the number of alerts manually to determine the verdict.
        ck_assert_msg(0 < evidence_num_alerts(ctx.this_layer_evidence), "cli_scan_buff() failed for %s", ac_testdata[i].virname);
        ck_assert_msg(!strncmp(virname, ac_testdata[i].virname, strlen(ac_testdata[i].virname)), "Dataset %u matched with %s", i, virname);

        if (evidence_num_alerts(ctx.this_layer_evidence) > 0) {
            // Reset evidence for the next scan.
            evidence_free(ctx.recursion_stack[ctx.recursion_level].evidence);
            ctx.recursion_stack[ctx.recursion_level].evidence = NULL;
libclamav: Add engine option to toggle temp directory recursion
Temp directory recursion in ClamAV is when each layer of a scan gets its
own temp directory in the parent layer's temp directory.
In addition to temp directory recursion, ClamAV has been creating a new
subdirectory for each file scan as a risk-averse method to ensure
no temporary file leaks fill up the disk.
Creating a directory is relatively slow, on Windows in particular, if
scanning a lot of very small files.
This commit:
1. Separates the temp directory recursion feature from the leave-temps
feature so that libclamav can leave temp files without making
subdirectories for each file scanned.
2. Makes it so that when temp directory recursion is off, libclamav
will just use the configured temp directory for all files.
The new option to enable temp directory recursion is for libclamav-only
at this time. It is off by default, and you can enable it like this:
```c
cl_engine_set_num(engine, CL_ENGINE_TMPDIR_RECURSION, 1);
```
For the `clamscan` and `clamd` programs, temp directory recursion will
be enabled when `--leave-temps` / `LeaveTemporaryFiles` is enabled.
The difference is that when disabled, it will return to using the
configured temp directory without making a subdirectory for each file
scanned, so as to improve scan performance for small files, mostly on
Windows.
Under the hood, this commit also:
1. Cleans up how we keep track of tmpdirs for each layer.
The goal here is to align how we keep track of layer-specific stuff
using the scan_layer structure.
2. Cleans up how we record metadata JSON for embedded files.
Note: Embedded files being different from Contained files, as they
are extracted not with a parser, but by finding them with
file type magic signatures.
CLAM-1583
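The tmpdir selection that commit message describes can be sketched as follows (illustrative function and path layout, not the actual libclamav implementation): with recursion enabled each layer nests a subdirectory under its parent's directory; with it disabled, every file simply uses the configured temp directory.

```c
#include <stdio.h>

/* Sketch: choose the temp directory for a scan layer. */
void layer_tmpdir(char *out, size_t outlen, const char *configured,
                  const char *parent, int recursion_enabled, int layer)
{
    if (recursion_enabled)
        snprintf(out, outlen, "%s/layer-%d", parent, layer); /* nested */
    else
        snprintf(out, outlen, "%s", configured); /* no per-file subdir */
}
```

Skipping the per-layer `mkdir` in the disabled case is what recovers the small-file scan performance mentioned above, since directory creation is comparatively expensive on Windows.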
            ctx.this_layer_evidence = NULL;
        }
    }

    cli_ac_freedata(&mdata);
}
END_TEST

START_TEST(test_ac_scanbuff_ex)
{
    struct cli_ac_data mdata;
    struct cli_matcher *root;
    unsigned int i;
    int ret;
    root = ctx.engine->root[0];
    ck_assert_msg(root != NULL, "root == NULL");

    root->ac_only = 1;

#ifdef USE_MPOOL
    root->mempool = mpool_create();
#endif

    ret = cli_ac_init(root, CLI_DEFAULT_AC_MINDEPTH, CLI_DEFAULT_AC_MAXDEPTH, 1);
    ck_assert_msg(ret == CL_SUCCESS, "[ac_ex] cli_ac_init() failed");

    for (i = 0; ac_sigopts_testdata[i].data; i++) {
        ret = cli_sigopts_handler(root, ac_sigopts_testdata[i].virname, ac_sigopts_testdata[i].hexsig, ac_sigopts_testdata[i].sigopts, 0, 0, ac_sigopts_testdata[i].offset, NULL, 0);
        ck_assert_msg(ret == CL_SUCCESS, "[ac_ex] cli_sigopts_handler() failed");
    }

    ret = cli_ac_buildtrie(root);
    ck_assert_msg(ret == CL_SUCCESS, "[ac_ex] cli_ac_buildtrie() failed");

    ret = cli_ac_initdata(&mdata, root->ac_partsigs, 0, 0, CLI_DEFAULT_AC_TRACKLEN);
    ck_assert_msg(ret == CL_SUCCESS, "[ac_ex] cli_ac_initdata() failed");

    ctx.options->general &= ~CL_SCAN_GENERAL_ALLMATCHES; /* make sure all-match is disabled */

    for (i = 0; ac_sigopts_testdata[i].data; i++) {
        ret = cli_ac_scanbuff((const unsigned char *)ac_sigopts_testdata[i].data, ac_sigopts_testdata[i].dlength, &virname, NULL, NULL, root, &mdata, 0, 0, NULL, AC_SCAN_VIR, NULL);
        ck_assert_msg(ret == ac_sigopts_testdata[i].expected_result, "[ac_ex] cli_ac_scanbuff() failed for %s (%d != %d)", ac_sigopts_testdata[i].virname, ret, ac_sigopts_testdata[i].expected_result);

        if (ac_sigopts_testdata[i].expected_result == CL_VIRUS)
            ck_assert_msg(!strncmp(virname, ac_sigopts_testdata[i].virname, strlen(ac_sigopts_testdata[i].virname)), "[ac_ex] Dataset %u matched with %s", i, virname);

        ret = cli_scan_buff((const unsigned char *)ac_sigopts_testdata[i].data, ac_sigopts_testdata[i].dlength, 0, &ctx, 0, NULL);
        ck_assert_msg(ret == ac_sigopts_testdata[i].expected_result, "[ac_ex] cli_scan_buff() failed for %s (%d != %d)", ac_sigopts_testdata[i].virname, ret, ac_sigopts_testdata[i].expected_result);
    }

    cli_ac_freedata(&mdata);
}
END_TEST
START_TEST(test_ac_scanbuff_allscan_ex)
{
    struct cli_ac_data mdata;
    struct cli_matcher *root;
    unsigned int i;
    int ret;

    root = ctx.engine->root[0];
    ck_assert_msg(root != NULL, "root == NULL");

    root->ac_only = 1;

#ifdef USE_MPOOL
    root->mempool = mpool_create();
#endif

    ret = cli_ac_init(root, CLI_DEFAULT_AC_MINDEPTH, CLI_DEFAULT_AC_MAXDEPTH, 1);
    ck_assert_msg(ret == CL_SUCCESS, "[ac_ex] cli_ac_init() failed");

    for (i = 0; ac_sigopts_testdata[i].data; i++) {
        ret = cli_sigopts_handler(root, ac_sigopts_testdata[i].virname, ac_sigopts_testdata[i].hexsig, ac_sigopts_testdata[i].sigopts, 0, 0, ac_sigopts_testdata[i].offset, NULL, 0);
        ck_assert_msg(ret == CL_SUCCESS, "[ac_ex] cli_sigopts_handler() failed");
    }

    ret = cli_ac_buildtrie(root);
    ck_assert_msg(ret == CL_SUCCESS, "[ac_ex] cli_ac_buildtrie() failed");

    ret = cli_ac_initdata(&mdata, root->ac_partsigs, 0, 0, CLI_DEFAULT_AC_TRACKLEN);
    ck_assert_msg(ret == CL_SUCCESS, "[ac_ex] cli_ac_initdata() failed");

    ctx.options->general |= CL_SCAN_GENERAL_ALLMATCHES; /* enable all-match */

    for (i = 0; ac_sigopts_testdata[i].data; i++) {
        cl_error_t verdict = CL_CLEAN;

        ret = cli_ac_scanbuff((const unsigned char *)ac_sigopts_testdata[i].data, ac_sigopts_testdata[i].dlength, &virname, NULL, NULL, root, &mdata, 0, 0, NULL, AC_SCAN_VIR, NULL);
        ck_assert_msg(ret == ac_sigopts_testdata[i].expected_result, "[ac_ex] cli_ac_scanbuff() failed for %s (%d != %d)", ac_sigopts_testdata[i].virname, ret, ac_sigopts_testdata[i].expected_result);

        if (ac_sigopts_testdata[i].expected_result == CL_VIRUS)
            ck_assert_msg(!strncmp(virname, ac_sigopts_testdata[i].virname, strlen(ac_sigopts_testdata[i].virname)), "[ac_ex] Dataset %u matched with %s", i, virname);

        ret = cli_scan_buff((const unsigned char *)ac_sigopts_testdata[i].data, ac_sigopts_testdata[i].dlength, 0, &ctx, 0, NULL);
        ck_assert_msg(ret == CL_SUCCESS, "[ac_ex] cli_scan_buff() failed for %s (%d != %d)", ac_sigopts_testdata[i].virname, ret, ac_sigopts_testdata[i].expected_result);
        // phishingScan() doesn't check the number of alerts. When using CL_SCAN_GENERAL_ALLMATCHES
        // or if using `CL_SCAN_GENERAL_HEURISTIC_PRECEDENCE` and `cli_append_potentially_unwanted()`
        // we need to count the number of alerts manually to determine the verdict.
        if (0 < evidence_num_alerts(ctx.this_layer_evidence)) {
            verdict = CL_VIRUS;
        }
        ck_assert_msg(verdict == ac_sigopts_testdata[i].expected_result, "[ac_ex] verdict mismatch for %s (%d != %d)", ac_sigopts_testdata[i].virname, verdict, ac_sigopts_testdata[i].expected_result);

        if (evidence_num_alerts(ctx.this_layer_evidence) > 0) {
            // Reset evidence for the next scan.
            evidence_free(ctx.recursion_stack[ctx.recursion_level].evidence);
            ctx.recursion_stack[ctx.recursion_level].evidence = NULL;
            ctx.this_layer_evidence = NULL;
        }
    }

    cli_ac_freedata(&mdata);
}
END_TEST

START_TEST(test_bm_scanbuff)
{
    struct cli_matcher *root;
    const char *virname = NULL;
    int ret;

    root = ctx.engine->root[0];
    ck_assert_msg(root != NULL, "root == NULL");

#ifdef USE_MPOOL
    root->mempool = mpool_create();
#endif

    ret = cli_bm_init(root);
    ck_assert_msg(ret == CL_SUCCESS, "cli_bm_init() failed");
ret = cli_add_content_match_pattern ( root , " Sig1 " , " deadbabe " , 0 , 0 , 0 , " * " , NULL , 0 ) ;
2022-02-12 13:34:25 -08:00
ck_assert_msg ( ret = = CL_SUCCESS , " cli_add_content_match_pattern failed " ) ;
PE, ELF, Mach-O: code cleanup
The header parsing / executable metadata collecting functions for the
PE, ELF, and Mach-O file types were using `int` for the return type.
Mostly they were returning 0 for success and -1, -2, -3, or -4 for
failure. But in some cases they were returning cl_error_t enum values
for failure. Regardless, the function using them was treating 0 as
success and non-zero as failure, which it stored as -1 ... every time.
This commit switches them all to use cl_error_t. I am continuing to
storeo the final result as 0 / -1 in the `peinfo` struct, but outside of
that everything has been made consistent.
While I was working on that, I got a tad side tracked. I noticed that
the target type isn't an enum, or even a set of #defines. So I made an
enum and then changed the code that uses target types to use the enum.
I also removed the `target` parameter from a number of functions that
don't actually use it at all. Some recursion was masking the fact that
it was an unused parameter which is why there was no warning about it.
2022-08-28 18:41:04 -07:00
ret = cli_add_content_match_pattern ( root , " Sig2 " , " deadbeef " , 0 , 0 , 0 , " * " , NULL , 0 ) ;
2022-02-12 13:34:25 -08:00
ck_assert_msg ( ret = = CL_SUCCESS , " cli_add_content_match_pattern failed " ) ;
PE, ELF, Mach-O: code cleanup
The header parsing / executable metadata collecting functions for the
PE, ELF, and Mach-O file types were using `int` for the return type.
Mostly they were returning 0 for success and -1, -2, -3, or -4 for
failure. But in some cases they were returning cl_error_t enum values
for failure. Regardless, the function using them was treating 0 as
success and non-zero as failure, which it stored as -1 ... every time.
This commit switches them all to use cl_error_t. I am continuing to
storeo the final result as 0 / -1 in the `peinfo` struct, but outside of
that everything has been made consistent.
While I was working on that, I got a tad side tracked. I noticed that
the target type isn't an enum, or even a set of #defines. So I made an
enum and then changed the code that uses target types to use the enum.
I also removed the `target` parameter from a number of functions that
don't actually use it at all. Some recursion was masking the fact that
it was an unused parameter which is why there was no warning about it.
2022-08-28 18:41:04 -07:00
ret = cli_add_content_match_pattern ( root , " Sig3 " , " babedead " , 0 , 0 , 0 , " * " , NULL , 0 ) ;
2022-02-12 13:34:25 -08:00
ck_assert_msg ( ret = = CL_SUCCESS , " cli_add_content_match_pattern failed " ) ;
2015-06-09 11:12:20 -04:00
2018-07-20 22:28:48 -04:00
ctx . options - > general & = ~ CL_SCAN_GENERAL_ALLMATCHES ; /* make sure all-match is disabled */
2025-07-25 16:25:10 +01:00
ret = cli_bm_scanbuff ( ( const unsigned char * ) " blah \xde \xad \xbe \xef " , 8 , & virname , NULL , root , 0 , NULL , NULL , NULL ) ;
2020-01-15 08:14:23 -08:00
ck_assert_msg ( ret = = CL_VIRUS , " cli_bm_scanbuff() failed " ) ;
ck_assert_msg ( ! strncmp ( virname , " Sig2 " , 4 ) , " Incorrect signature matched in cli_bm_scanbuff() \n " ) ;
2015-06-09 11:12:20 -04:00
}
END_TEST
START_TEST(test_bm_scanbuff_allscan)
{
    struct cli_matcher *root;
    const char *virname = NULL;
    int ret;

    root = ctx.engine->root[0];
    ck_assert_msg(root != NULL, "root == NULL");

#ifdef USE_MPOOL
    root->mempool = mpool_create();
#endif
    ret = cli_bm_init(root);
    ck_assert_msg(ret == CL_SUCCESS, "cli_bm_init() failed");

    ret = cli_add_content_match_pattern(root, "Sig1", "deadbabe", 0, 0, 0, "*", NULL, 0);
    ck_assert_msg(ret == CL_SUCCESS, "cli_add_content_match_pattern failed");
    ret = cli_add_content_match_pattern(root, "Sig2", "deadbeef", 0, 0, 0, "*", NULL, 0);
    ck_assert_msg(ret == CL_SUCCESS, "cli_add_content_match_pattern failed");
    ret = cli_add_content_match_pattern(root, "Sig3", "babedead", 0, 0, 0, "*", NULL, 0);
    ck_assert_msg(ret == CL_SUCCESS, "cli_add_content_match_pattern failed");

    ctx.options->general |= CL_SCAN_GENERAL_ALLMATCHES; /* enable all-match */
    ret = cli_bm_scanbuff((const unsigned char *)"blah\xde\xad\xbe\xef", 8, &virname, NULL, root, 0, NULL, NULL, NULL);
    ck_assert_msg(ret == CL_VIRUS, "cli_bm_scanbuff() failed");
    ck_assert_msg(!strncmp(virname, "Sig2", 4), "Incorrect signature matched in cli_bm_scanbuff()\n");
}
END_TEST
START_TEST(test_pcre_scanbuff)
{
    struct cli_ac_data mdata;
    struct cli_matcher *root;
    const char *virname = NULL;
    char *hexsig;
    unsigned int i, hexlen;
    int ret;

    root = ctx.engine->root[0];
    ck_assert_msg(root != NULL, "root == NULL");

#ifdef USE_MPOOL
    root->mempool = mpool_create();
#endif

    for (i = 0; pcre_testdata[i].data; i++) {
        hexlen = strlen(PCRE_BYPASS) + strlen(pcre_testdata[i].hexsig) + 1;

        hexsig = calloc(hexlen, sizeof(char));
        ck_assert_msg(hexsig != NULL, "[pcre] failed to prepend bypass (out-of-memory)");

        strncat(hexsig, PCRE_BYPASS, hexlen);
        strncat(hexsig, pcre_testdata[i].hexsig, hexlen);

        ret = readdb_parse_ldb_subsignature(root, pcre_testdata[i].virname, hexsig, pcre_testdata[i].offset, NULL, 0, 0, 0, NULL);
        ck_assert_msg(ret == CL_SUCCESS, "[pcre] readdb_parse_ldb_subsignature failed");
        free(hexsig);
    }
    ret = cli_pcre_build(root, CLI_DEFAULT_PCRE_MATCH_LIMIT, CLI_DEFAULT_PCRE_RECMATCH_LIMIT, NULL);
    ck_assert_msg(ret == CL_SUCCESS, "[pcre] cli_pcre_build() failed");

    // recompute offsets
    ret = cli_ac_initdata(&mdata, root->ac_partsigs, root->ac_lsigs, root->ac_reloff_num, CLI_DEFAULT_AC_TRACKLEN);
    ck_assert_msg(ret == CL_SUCCESS, "[pcre] cli_ac_initdata() failed");

    ctx.options->general &= ~CL_SCAN_GENERAL_ALLMATCHES; /* make sure all-match is disabled */
    for (i = 0; pcre_testdata[i].data; i++) {
        ret = cli_pcre_scanbuf((const unsigned char *)pcre_testdata[i].data, strlen(pcre_testdata[i].data), &virname, NULL, root, NULL, NULL, NULL);
        ck_assert_msg(ret == pcre_testdata[i].expected_result, "[pcre] cli_pcre_scanbuff() failed for %s (%d != %d)", pcre_testdata[i].virname, ret, pcre_testdata[i].expected_result);
        // We cannot check whether the virname matches, because we didn't load a whole logical signature and virnames are now stored in the lsig structure.

        ret = cli_scan_buff((const unsigned char *)pcre_testdata[i].data, strlen(pcre_testdata[i].data), 0, &ctx, 0, NULL);
        ck_assert_msg(ret == pcre_testdata[i].expected_result, "[pcre] cli_scan_buff() failed for %s", pcre_testdata[i].virname);
    }
    cli_ac_freedata(&mdata);
}
END_TEST
START_TEST(test_pcre_scanbuff_allscan)
{
    struct cli_ac_data mdata;
    struct cli_matcher *root;
    const char *virname = NULL;
    char *hexsig;
    unsigned int i, hexlen;
    int ret;

    root = ctx.engine->root[0];
    ck_assert_msg(root != NULL, "root == NULL");

#ifdef USE_MPOOL
    root->mempool = mpool_create();
#endif

    for (i = 0; pcre_testdata[i].data; i++) {
        hexlen = strlen(PCRE_BYPASS) + strlen(pcre_testdata[i].hexsig) + 1;

        hexsig = calloc(hexlen, sizeof(char));
        ck_assert_msg(hexsig != NULL, "[pcre] failed to prepend bypass (out-of-memory)");

        strncat(hexsig, PCRE_BYPASS, hexlen);
        strncat(hexsig, pcre_testdata[i].hexsig, hexlen);

        ret = readdb_parse_ldb_subsignature(root, pcre_testdata[i].virname, hexsig, pcre_testdata[i].offset, NULL, 0, 0, 1, NULL);
        ck_assert_msg(ret == CL_SUCCESS, "[pcre] readdb_parse_ldb_subsignature failed");
        free(hexsig);
    }
    ret = cli_pcre_build(root, CLI_DEFAULT_PCRE_MATCH_LIMIT, CLI_DEFAULT_PCRE_RECMATCH_LIMIT, NULL);
    ck_assert_msg(ret == CL_SUCCESS, "[pcre] cli_pcre_build() failed");

    // recompute offsets
    ret = cli_ac_initdata(&mdata, root->ac_partsigs, root->ac_lsigs, root->ac_reloff_num, CLI_DEFAULT_AC_TRACKLEN);
    ck_assert_msg(ret == CL_SUCCESS, "[pcre] cli_ac_initdata() failed");

    ctx.options->general |= CL_SCAN_GENERAL_ALLMATCHES; /* enable all-match */
    for (i = 0; pcre_testdata[i].data; i++) {
        cl_error_t verdict = CL_CLEAN;

        ret = cli_pcre_scanbuf((const unsigned char *)pcre_testdata[i].data, strlen(pcre_testdata[i].data), &virname, NULL, root, NULL, NULL, NULL);
        ck_assert_msg(ret == pcre_testdata[i].expected_result, "[pcre] cli_pcre_scanbuff() failed for %s (%d != %d)", pcre_testdata[i].virname, ret, pcre_testdata[i].expected_result);
        // We cannot check whether the virname matches, because we didn't load a whole logical signature and virnames are now stored in the lsig structure.

        ret = cli_scan_buff((const unsigned char *)pcre_testdata[i].data, strlen(pcre_testdata[i].data), 0, &ctx, 0, NULL);
        // cli_scan_buff() doesn't check the number of alerts. When using CL_SCAN_GENERAL_ALLMATCHES,
        // or if using CL_SCAN_GENERAL_HEURISTIC_PRECEDENCE and cli_append_potentially_unwanted(),
        // we need to count the number of alerts manually to determine the verdict.
        if (0 < evidence_num_alerts(ctx.this_layer_evidence)) {
            verdict = CL_VIRUS;
        }
        ck_assert_msg(verdict == pcre_testdata[i].expected_result, "[pcre] cli_scan_buff() failed for %s", pcre_testdata[i].virname);

        /* TODO: add a num_virus field to the test case struct */
        if (evidence_num_alerts(ctx.this_layer_evidence) > 0) {
            // Reset evidence for the next scan.
            evidence_free(ctx.recursion_stack[ctx.recursion_level].evidence);
            ctx.recursion_stack[ctx.recursion_level].evidence = NULL;
            ctx.this_layer_evidence = NULL;
        }
    }
    cli_ac_freedata(&mdata);
}
END_TEST
Suite *test_matchers_suite(void)
{
    Suite *s = suite_create("matchers");
    TCase *tc_matchers;
    tc_matchers = tcase_create("matchers");
    suite_add_tcase(s, tc_matchers);
    tcase_add_checked_fixture(tc_matchers, setup, teardown);

    tcase_add_test(tc_matchers, test_ac_scanbuff);
    tcase_add_test(tc_matchers, test_ac_scanbuff_ex);
    tcase_add_test(tc_matchers, test_bm_scanbuff);
    tcase_add_test(tc_matchers, test_pcre_scanbuff);
    tcase_add_test(tc_matchers, test_ac_scanbuff_allscan);
    tcase_add_test(tc_matchers, test_ac_scanbuff_allscan_ex);
    tcase_add_test(tc_matchers, test_bm_scanbuff_allscan);
    tcase_add_test(tc_matchers, test_pcre_scanbuff_allscan);

    return s;
}