clamav/unit_tests/check_bytecode.c

/*
* Unit tests for bytecode functions.
*
* Copyright (C) 2013-2022 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
* Copyright (C) 2009-2013 Sourcefire, Inc.
*
* Authors: Török Edvin
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
* MA 02110-1301, USA.
*/
#if HAVE_CONFIG_H
#include "clamav-config.h"
#endif
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <string.h>
#include <check.h>
#include <fcntl.h>
#include <errno.h>
// libclamav
#include "clamav.h"
#include "others.h"
#include "bytecode.h"
#include "dconf.h"
#include "bytecode_priv.h"
#include "pe.h"
#include "checks.h"
#ifdef CL_THREAD_SAFE
#include <pthread.h>
#endif
static void runtest(const char *file, uint64_t expected, int fail, int nojit,
                    const char *infile, struct cli_pe_hook_data *pedata,
                    struct cli_exe_section *sections, const char *expectedvirname,
                    int testmode)
{
    fmap_t *map = NULL;
    int rc;
    int fd = open_testfile(file, O_RDONLY); /* CBC databases should be opened in text mode (not binary), or else CRLF line indexing with `fgets()` will fail. */
    FILE *f;
    struct cli_bc bc;
    cli_ctx cctx;
    struct cli_bc_ctx *ctx;
    struct cli_all_bc bcs;
    uint64_t v;
    struct cl_engine *engine;
    int fdin = -1;
    char filestr[512];
    const char *virname = NULL;
    struct cl_scan_options options;

    memset(&cctx, 0, sizeof(cctx));
    memset(&options, 0, sizeof(struct cl_scan_options));
    cctx.options = &options;
    cctx.options->general |= CL_SCAN_GENERAL_ALLMATCHES;
    cctx.virname = &virname;
    cctx.engine = engine = cl_engine_new();
    ck_assert_msg(!!cctx.engine, "cannot create engine");
    rc = cl_engine_compile(engine);
    ck_assert_msg(!rc, "cannot compile engine");
    cctx.dconf = cctx.engine->dconf;
    cctx.recursion_stack_size = cctx.engine->max_recursion_level;
    cctx.recursion_stack      = cli_calloc(sizeof(recursion_level_t), cctx.recursion_stack_size);
    ck_assert_msg(!!cctx.recursion_stack, "cli_calloc() for recursion_stack failed");
    // cctx was memset above, so recursion_level starts at 0.
    cctx.recursion_stack[cctx.recursion_level].fmap = NULL;
    cctx.fmap = cctx.recursion_stack[cctx.recursion_level].fmap;
    ck_assert_msg(fd >= 0, "retmagic open failed");
    f = fdopen(fd, "r");
    ck_assert_msg(!!f, "retmagic fdopen failed");
    cl_debug();
    if (!nojit) {
        rc = cli_bytecode_init(&bcs);
        ck_assert_msg(rc == CL_SUCCESS, "cli_bytecode_init failed");
    } else {
        bcs.engine = NULL;
    }
    bcs.all_bcs = &bc;
    bcs.count   = 1;
    rc = cli_bytecode_load(&bc, f, NULL, 1, 0);
    ck_assert_msg(rc == CL_SUCCESS, "cli_bytecode_load failed");
    fclose(f);
    if (testmode && have_clamjit())
        engine->bytecode_mode = CL_BYTECODE_MODE_TEST;
    rc = cli_bytecode_prepare2(engine, &bcs, BYTECODE_ENGINE_MASK);
    ck_assert_msg(rc == CL_SUCCESS, "cli_bytecode_prepare2 failed");
    if (have_clamjit() && !nojit && nojit != -1 && !testmode) {
        ck_assert_msg(bc.state == bc_jit, "preparing for JIT failed");
    }
    ctx = cli_bytecode_context_alloc();
    ck_assert_msg(!!ctx, "cli_bytecode_context_alloc failed"); /* check allocation before dereferencing ctx */
    ctx->bytecode_timeout = fail == CL_ETIMEOUT ? 10 : 10000;
    ctx->ctx = &cctx;
    if (infile) {
        snprintf(filestr, sizeof(filestr), OBJDIR PATHSEP "%s", infile);
        fdin = open(filestr, O_RDONLY | O_BINARY);
        if (fdin < 0 && errno == ENOENT)
            fdin = open_testfile(infile, O_RDONLY | O_BINARY);
        ck_assert_msg(fdin >= 0, "failed to open infile");
        map = fmap(fdin, 0, 0, filestr);
        ck_assert_msg(!!map, "unable to fmap infile");
        if (pedata)
            ctx->hooks.pedata = pedata;
        ctx->sections = sections;
        cli_bytecode_context_setfile(ctx, map);
    }
cli_bytecode_context_setfuncid(ctx, &bc, 0);
2009-08-27 20:41:29 +03:00
rc = cli_bytecode_run(&bcs, &bc, ctx);
ck_assert_msg(rc == fail, "cli_bytecode_run failed, expected: %u, have: %u\n",
2020-07-24 08:32:47 -07:00
fail, rc);
2009-07-13 19:45:05 +03:00
if (rc == CL_SUCCESS) {
v = cli_bytecode_context_getresult_int(ctx);
ck_assert_msg(v == expected, "Invalid return value from bytecode run, expected: " STDx64 ", have: " STDx64 "\n",
2020-07-24 08:32:47 -07:00
expected, v);
}
2010-03-24 12:46:34 +02:00
if (infile && expectedvirname) {
ck_assert_msg(ctx->virname &&
2020-07-24 08:32:47 -07:00
!strcmp(ctx->virname, expectedvirname),
"Invalid virname, expected: %s\n", expectedvirname);
2010-03-24 12:46:34 +02:00
}
2009-07-13 19:45:05 +03:00
cli_bytecode_context_destroy(ctx);
if (map)
funmap(map);
cli_bytecode_destroy(&bc);
cli_bytecode_done(&bcs);
free(cctx.recursion_stack);
cl_engine_free(engine);
if (fdin >= 0)
close(fdin);
}
START_TEST(test_retmagic_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "retmagic.cbc", 0x1234f00d, CL_SUCCESS, 0, NULL, NULL, NULL, NULL, 0);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "retmagic.cbc", 0x1234f00d, CL_SUCCESS, 0, NULL, NULL, NULL, NULL, 1);
}
END_TEST
START_TEST(test_retmagic_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "retmagic.cbc", 0x1234f00d, CL_SUCCESS, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_arith_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "arith.cbc", 0xd5555555, CL_SUCCESS, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_arith_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "arith.cbc", 0xd5555555, CL_SUCCESS, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_apicalls_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "apicalls.cbc", 0xf00d, CL_SUCCESS, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_apicalls_int)
{
    cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "apicalls.cbc", 0xf00d, CL_SUCCESS, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_apicalls2_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "apicalls2.cbc", 0xf00d, CL_SUCCESS, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_apicalls2_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "apicalls2.cbc", 0xf00d, CL_SUCCESS, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_div0_jit)
{
cl_init(CL_INIT_DEFAULT);
    /* must not crash on division by zero, but catch it and return CL_EBYTECODE */
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "div0.cbc", 0, CL_EBYTECODE, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_div0_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "div0.cbc", 0, CL_EBYTECODE, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_lsig_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "lsig.cbc", 0, 0, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_lsig_int)
{
    cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "lsig.cbc", 0, 0, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_inf_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "inf.cbc", 0, CL_ETIMEOUT, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_inf_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "inf.cbc", 0, CL_ETIMEOUT, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_matchwithread_jit)
{
struct cli_exe_section sect;
struct cli_pe_hook_data pedata;
cl_init(CL_INIT_DEFAULT);
memset(&pedata, 0, sizeof(pedata));
pedata.ep = 64;
cli_writeint32(&pedata.opt32.ImageBase, 0x400000);
pedata.hdr_size = 0x400;
pedata.nsections = 1;
sect.rva = 4096;
sect.vsz = 4096;
sect.raw = 0;
sect.rsz = 512;
sect.urva = 4096;
sect.uvsz = 4096;
sect.uraw = 1;
sect.ursz = 512;
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "matchwithread.cbc", 0, 0, 0,
"input" PATHSEP "clamav_hdb_scanfiles" PATHSEP "clam.exe",
&pedata, &sect, "ClamAV-Test-File-detected-via-bytecode", 0);
}
END_TEST
START_TEST(test_matchwithread_int)
{
struct cli_exe_section sect;
struct cli_pe_hook_data pedata;
cl_init(CL_INIT_DEFAULT);
memset(&pedata, 0, sizeof(pedata));
pedata.ep = 64;
cli_writeint32(&pedata.opt32.ImageBase, 0x400000);
pedata.hdr_size = 0x400;
pedata.nsections = 1;
sect.rva = 4096;
sect.vsz = 4096;
sect.raw = 0;
sect.rsz = 512;
sect.urva = 4096;
sect.uvsz = 4096;
sect.uraw = 1;
sect.ursz = 512;
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "matchwithread.cbc", 0, 0, 1,
"input" PATHSEP "clamav_hdb_scanfiles" PATHSEP "clam.exe",
&pedata, &sect, "ClamAV-Test-File-detected-via-bytecode", 0);
}
END_TEST
START_TEST(test_pdf_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "pdf.cbc", 0, 0, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_pdf_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "pdf.cbc", 0, 0, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_bswap_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "bswap.cbc", 0xbeef, 0, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_bswap_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "bswap.cbc", 0xbeef, 0, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_inflate_jit)
{
cl_init(CL_INIT_DEFAULT);
    runtest("input" PATHSEP "bytecode_sigs" PATHSEP "inflate.cbc", 0xbeef, 0, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_inflate_int)
{
cl_init(CL_INIT_DEFAULT);
    runtest("input" PATHSEP "bytecode_sigs" PATHSEP "inflate.cbc", 0xbeef, 0, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_api_extract_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "api_extract_7.cbc", 0xf00d, 0, 0,
"input" PATHSEP "bytecode_scanfiles" PATHSEP "apitestfile", NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_api_files_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "api_files_7.cbc", 0xf00d, 0, 0,
"input" PATHSEP "bytecode_scanfiles" PATHSEP "apitestfile", NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_apicalls2_7_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "apicalls2_7.cbc", 0xf00d, 0, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_apicalls_7_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "apicalls_7.cbc", 0xf00d, 0, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_arith_7_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "arith_7.cbc", 0xd55555dd, CL_SUCCESS, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_debug_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "debug_7.cbc", 0xf00d, 0, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_inf_7_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "inf_7.cbc", 0, CL_ETIMEOUT, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_lsig_7_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "lsig_7.cbc", 0, 0, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_retmagic_7_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "retmagic_7.cbc", 0x1234f00d, CL_SUCCESS, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_testadt_jit)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "testadt_7.cbc", 0xf00d, 0, 0, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_api_extract_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "api_extract_7.cbc", 0xf00d, 0, 1,
"input" PATHSEP "bytecode_scanfiles" PATHSEP "apitestfile", NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_api_files_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "api_files_7.cbc", 0xf00d, 0, 1,
"input" PATHSEP "bytecode_scanfiles" PATHSEP "apitestfile", NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_apicalls2_7_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "apicalls2_7.cbc", 0xf00d, 0, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_apicalls_7_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "apicalls_7.cbc", 0xf00d, 0, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_arith_7_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "arith_7.cbc", 0xd55555dd, CL_SUCCESS, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_debug_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "debug_7.cbc", 0xf00d, 0, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_inf_7_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "inf_7.cbc", 0, CL_ETIMEOUT, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_lsig_7_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "lsig_7.cbc", 0, 0, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_retmagic_7_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "retmagic_7.cbc", 0x1234f00d, CL_SUCCESS, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
START_TEST(test_testadt_int)
{
cl_init(CL_INIT_DEFAULT);
runtest("input" PATHSEP "bytecode_sigs" PATHSEP "testadt_7.cbc", 0xf00d, 0, 1, NULL, NULL, NULL, NULL, 0);
}
END_TEST
static void runload(const char *dbname, struct cl_engine *engine, unsigned signoexp)
{
const char *srcdir = getenv("srcdir");
char *str;
unsigned signo = 0;
int rc;
if (!srcdir) {
        /* srcdir is set when run from automake, but not when run manually */
srcdir = SRCDIR;
}
str = cli_malloc(strlen(srcdir) + 1 + strlen(dbname) + 1);
    ck_assert_msg(!!str, "cli_malloc failed");
sprintf(str, "%s" PATHSEP "%s", srcdir, dbname);
rc = cl_load(str, engine, &signo, CL_DB_STDOPT);
ck_assert_msg(rc == CL_SUCCESS, "failed to load %s: %s\n",
str, cl_strerror(rc));
ck_assert_msg(signo == signoexp, "different number of signatures loaded, expected %u, got %u\n",
signoexp, signo);
free(str);
rc = cl_engine_compile(engine);
    ck_assert_msg(rc == CL_SUCCESS, "failed to compile %s: %s\n",
2020-08-25 23:14:23 -07:00
str, cl_strerror(rc));
2010-05-14 17:16:50 +03:00
}
START_TEST(test_load_bytecode_jit)
{
    struct cl_engine *engine;
    cl_init(CL_INIT_DEFAULT);
    engine = cl_engine_new();
    ck_assert_msg(!!engine, "failed to create engine\n");

    runload("input" PATHSEP "bytecode_sigs" PATHSEP "bytecode.cvd", engine, 5);

    cl_engine_free(engine);
}
END_TEST
START_TEST(test_load_bytecode_int)
{
    struct cl_engine *engine;
    cl_init(CL_INIT_DEFAULT);
    engine = cl_engine_new();
    ck_assert_msg(!!engine, "failed to create engine\n");

    engine->dconf->bytecode = BYTECODE_INTERPRETER;

    runload("input" PATHSEP "bytecode_sigs" PATHSEP "bytecode.cvd", engine, 5);

    cl_engine_free(engine);
}
END_TEST
#if defined(CL_THREAD_SAFE) && defined(C_LINUX) && ((__GLIBC__ << 16) + __GLIBC_MINOR__ >= (2 << 16) + 4)
#define DO_BARRIER
#endif
#ifdef DO_BARRIER
static pthread_barrier_t barrier;
static void *thread(void *arg)
{
    struct cl_engine *engine;
    (void)arg;
    engine = cl_engine_new();
    ck_assert_msg(!!engine, "failed to create engine\n");
    /* Run all cl_load calls at once to maximize the chance of a crash
     * in case of a race condition. */
    pthread_barrier_wait(&barrier);
    runload("input" PATHSEP "bytecode_sigs" PATHSEP "bytecode.cvd", engine, 5);
    cl_engine_free(engine);
    return NULL;
}
START_TEST(test_parallel_load)
{
#define N 5
    pthread_t threads[N];
    unsigned i;
    cl_init(CL_INIT_DEFAULT);
    pthread_barrier_init(&barrier, NULL, N);
    for (i = 0; i < N; i++) {
        pthread_create(&threads[i], NULL, thread, NULL);
    }
    for (i = 0; i < N; i++) {
        pthread_join(threads[i], NULL);
    }
    pthread_barrier_destroy(&barrier);
    /* DB loading used to crash due to a 'static' variable in cache.c,
     * and also due to a bug in LLVM 2.7. A mutex now guards codegen in
     * bytecode2llvm.cpp; this test exists to make sure it doesn't crash. */
}
END_TEST
#endif
Suite *test_bytecode_suite(void)
{
    Suite *s = suite_create("bytecode");

    TCase *tc_cli_arith = tcase_create("arithmetic");
    suite_add_tcase(s, tc_cli_arith);
    tcase_set_timeout(tc_cli_arith, 20);
    tcase_add_test(tc_cli_arith, test_retmagic_jit);
    tcase_add_test(tc_cli_arith, test_arith_jit);
    tcase_add_test(tc_cli_arith, test_apicalls_jit);
    tcase_add_test(tc_cli_arith, test_apicalls2_jit);
    tcase_add_test(tc_cli_arith, test_div0_jit);
    tcase_add_test(tc_cli_arith, test_lsig_jit);
    tcase_add_test(tc_cli_arith, test_inf_jit);
    tcase_add_test(tc_cli_arith, test_matchwithread_jit);
    tcase_add_test(tc_cli_arith, test_pdf_jit);
    tcase_add_test(tc_cli_arith, test_bswap_jit);
    tcase_add_test(tc_cli_arith, test_inflate_jit);

    tcase_add_test(tc_cli_arith, test_arith_int);
    tcase_add_test(tc_cli_arith, test_apicalls_int);
    tcase_add_test(tc_cli_arith, test_apicalls2_int);
    tcase_add_test(tc_cli_arith, test_div0_int);
    tcase_add_test(tc_cli_arith, test_lsig_int);
    tcase_add_test(tc_cli_arith, test_inf_int);
    tcase_add_test(tc_cli_arith, test_matchwithread_int);
    tcase_add_test(tc_cli_arith, test_pdf_int);
    tcase_add_test(tc_cli_arith, test_bswap_int);
    tcase_add_test(tc_cli_arith, test_inflate_int);
    tcase_add_test(tc_cli_arith, test_retmagic_int);

    tcase_add_test(tc_cli_arith, test_api_extract_jit);
    tcase_add_test(tc_cli_arith, test_api_files_jit);
    tcase_add_test(tc_cli_arith, test_apicalls2_7_jit);
    tcase_add_test(tc_cli_arith, test_apicalls_7_jit);
    tcase_add_test(tc_cli_arith, test_arith_7_jit);
    tcase_add_test(tc_cli_arith, test_debug_jit);
    tcase_add_test(tc_cli_arith, test_inf_7_jit);
    tcase_add_test(tc_cli_arith, test_lsig_7_jit);
    tcase_add_test(tc_cli_arith, test_retmagic_7_jit);
    tcase_add_test(tc_cli_arith, test_testadt_jit);
    tcase_add_test(tc_cli_arith, test_api_extract_int);
    tcase_add_test(tc_cli_arith, test_api_files_int);
    tcase_add_test(tc_cli_arith, test_apicalls2_7_int);
    tcase_add_test(tc_cli_arith, test_apicalls_7_int);
    tcase_add_test(tc_cli_arith, test_arith_7_int);
    tcase_add_test(tc_cli_arith, test_debug_int);
    tcase_add_test(tc_cli_arith, test_inf_7_int);
    tcase_add_test(tc_cli_arith, test_lsig_7_int);
    tcase_add_test(tc_cli_arith, test_retmagic_7_int);
    tcase_add_test(tc_cli_arith, test_testadt_int);

    tcase_add_test(tc_cli_arith, test_load_bytecode_jit);
    tcase_add_test(tc_cli_arith, test_load_bytecode_int);
#ifdef DO_BARRIER
    tcase_add_test(tc_cli_arith, test_parallel_load);
#endif

    return s;
}