`libclamav/libmspack.c`: Initialize variables before the first `goto done;`
to fix uninitialized variable use in an error condition.
`libclamav/others.c`: Explicitly ignore return values for calls to add
JSON values when subsequent calls don't depend on them.
If we were to add error handling here, the only thing we'd do is debug-
log it. I don't think it's worth adding the extra lines of code.
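For illustration, a minimal sketch of the explicit-ignore idiom using
generic json-c calls (the helper and key names are illustrative, not the
exact `others.c` code):
```c
#include <json.h>

/* Cast the return value to void to make the "ignore" explicit.
 * json_object_object_add() returns an int in current json-c versions. */
static void add_optional_metadata(json_object *obj)
{
    (void)json_object_object_add(obj, "encoding",
                                 json_object_new_string("utf-8"));
    (void)json_object_object_add(obj, "normalized",
                                 json_object_new_boolean(1));
}
```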
`libclamav/unarj.c`: Removed dead code.
The `status` variable is immediately set afterwards based on whether or
not any files may be extracted.
`libclamav/unzip.c`: Removed dead code.
The `ret` variable is checked immediately after being set, above. This
check after the `do`-`while()` loop is dead code.
`sigtool/sigtool.c`: Fix potential NULL deref in error handling.
This is a fix for the same issue as was fixed in a previous commit.
I somehow overlooked this one. Copy/paste bug.
`libclamav/pdfdecode.c`: Fix leaked `stream` memory when
`filter_lzwdecode()` fails.
`clamdtop/clamdtop.c`: Fix possible NULL dereference if `strchr` returns
NULL in `read_version()` and `check_stats_available()`.
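A minimal sketch of the guard, with illustrative names rather than the
exact clamdtop.c code:
```c
#include <stddef.h>
#include <string.h>

/* Return the text after the first ':' in the reply, or NULL if the
 * reply is malformed, instead of blindly dereferencing strchr()'s
 * result. */
static const char *field_after_colon(const char *reply)
{
    const char *colon = strchr(reply, ':');
    if (NULL == colon) {
        return NULL;
    }
    return colon + 1;
}
```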
`libclamav/rtf.c`: Fix memory leak in `rtf_object_process()` if
`cli_gentemp_with_prefix()` fails.
Also change an empty for-loop to resolve clang-format weirdness and make
it more obvious that the for-loop has no body.
`libclamav/aspack.c`: Ensure that `endoff - old` is not negative in
`build_decrypt_array()` before passing to `CLI_ISCONTAINED()` which expects
unsigned values.
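A minimal sketch of the signedness guard, using illustrative names; the
real check feeds `CLI_ISCONTAINED()`:
```c
#include <stddef.h>
#include <stdint.h>

/* CLI_ISCONTAINED() expects unsigned values, so a negative difference
 * would wrap to a huge size and pass the containment check. */
static int decrypt_range_ok(size_t buf_size, uint32_t old, uint32_t endoff)
{
    if (endoff < old) {
        return 0; /* endoff - old would underflow as unsigned */
    }
    if (old > buf_size || endoff > buf_size) {
        return 0; /* range not contained in the buffer */
    }
    return 1;
}
```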
`libclamav/upx.c`: Fix integer overflow checks in multiple functions.
`libclamav/vba_extract.c`: Set `entries` pointer back to NULL after free in
`word_read_macro_entry()` error condition.
`libclamav/unzip.c`: Remove logic to return `CL_EMAXFILES` from
`index_local_file_headers()`. It seems it only overwrote the status when
not `CL_SUCCESS`, in which case it could be overriding a more serious failure.
Further, updates to how the ZIP parser works have made it so this needs
to return `CL_SUCCESS` in order for the caller to at least scan the files
found so far.
Finally, the calling function has checks of its own to make sure we don't
exceed the max-files limit.
`libclamav/unzip.c`: Fix issue where `cli_append_potentially_unwanted()` in
`index_local_file_headers()` might overwrite an error in `status` with
`CL_CLEAN`. Instead, it now checks the return value and only overwrites the
`CL_EFORMAT` status with a different value if not `CL_SUCCESS`.
`libclamav/unzip.c`: Fix a potential leak with `combined_catalogue` and
`temp_catalogue` in an error condition. We should always free them if not NULL,
not just if the function failed. And to make this safe, we must set
`combined_catalogue` to NULL when we give ownership to `*catalogue`.
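A minimal sketch of the ownership-transfer pattern, with stand-in types
rather than the real unzip.c code:
```c
#include <stdlib.h>

typedef struct catalogue catalogue_t; /* stand-in for the real type */

static int build_catalogue(catalogue_t **catalogue_out, size_t size)
{
    int status = -1;
    catalogue_t *combined_catalogue = NULL;
    catalogue_t *temp_catalogue = NULL;

    combined_catalogue = calloc(1, size);
    temp_catalogue = calloc(1, size);
    if (NULL == combined_catalogue || NULL == temp_catalogue) {
        goto done;
    }

    /* ... merge temp_catalogue into combined_catalogue ... */

    *catalogue_out = combined_catalogue;
    combined_catalogue = NULL; /* ownership transferred to the caller */
    status = 0;

done:
    /* Always free what we still own, on success or failure. */
    free(combined_catalogue);
    free(temp_catalogue);
    return status;
}
```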
`libclamav/scanners.c`: Fix a potential leak in error handling for the
`cli_ole2_tempdir_scan_vba()` function.
CLAM-2768
`clamscan/manager.c`: Fix double-free in an error condition in `scanfile()`.
`common/optparser.c`: Fix uninitialized use of the `numarg` variable when
`arg` is `NULL`.
`libclamav/cache.c`: Don't check if `ctx->fmap` is `NULL` when we've
already dereferenced it.
`libclamav/crypto.c`: The `win_exception` variable and associated logic
is Windows-specific and so needs preprocessor platform checks. Otherwise
it generates unused variable warnings.
`libclamav/crypto.c`: Check for `size_t` overflow of the `byte_read`
variable in the `cl_hash_file_fd_ex()` function.
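A minimal sketch of the wrap-around guard, with illustrative names:
```c
#include <stddef.h>
#include <stdint.h>

/* Fail before the running total wraps, not after. */
static int add_bytes_read(size_t *byte_read, size_t chunk)
{
    if (chunk > SIZE_MAX - *byte_read) {
        return -1; /* size_t overflow */
    }
    *byte_read += chunk;
    return 0;
}
```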
`libclamav/crypto.c`: Fix a memory leak in the `cl_hash_file_fd_ex()`
function.
`libclamav/fmap.c`: Correctly free the `name` and `path` pointers if
`fmap_duplicate()` fails. Also need to clear those variables when
duplicating the parent `map` so that on error it does not free the wrong
`name` or `path`.
`libclamav/fmap.c`: Refine error handling for `hash_string` cleanup in
`cl_fmap_get_hash()`. Coverity's complaint was that `hash_string` could
never be non-NULL if `status` is not `CL_SUCCESS`. I.e., the cleanup is
dead code. I don't think my cleanup actually "fixes" that, though it is
definitely a better way to do the error handling.
The `if (NULL != hash_string) {` check is still technically dead code.
It safeguards against future changes that may `goto done` between the
allocation and transferring ownership from `hash_string` to `hash_out`.
`libclamav/others.c`: Fix possible memory leak in `cli_recursion_stack_push()`.
`libclamav/others.c`: Refactor an if/else + switch statement inside
`cli_dispatch_scan_callback()` so that the `CL_SCAN_CALLBACK_ALERT` case
is not dead-code. It's also easier to read now.
`libclamav/pdfdecode.c`: For logging, use `%zu` to format `size_t`
instead of casting to `long long` and using `%llu`. Similarly, use the
`STDu32` format string macro for `uint32_t`.
`libclamav/pdfdecode.c`: Fix a possible double-free for the `decoded`
pointer in `filter_lzwdecode()`.
`libclamav/pdfdecode.c`: Remove the `if (capacity > UINT_MAX) {`
overflow check inside `filter_lzwdecode()`, which didn't do anything.
The `capacity` variable at this point is a fixed value, so I also changed
the `avail_out` to be that fixed `INFLATE_CHUNK_SIZE` value rather than
using `capacity`. It is more straightforward and replicates how similar
logic works later in the file.
I also removed the copy-pasted `(Bytef *)` cast, which didn't really do
anything and was copied from a different algorithm. The LZW
implementation interface doesn't use `Bytef`.
`libclamav/readdb.c`: Fix a possible NULL-deref on the `matcher` variable
in the error handling/cleanup code if the function fails.
`libclamav/scanners.c`: Fix an issue where the return value from some of
the parsers may be lost/overridden by the call to
`cli_dispatch_scan_callback()` just after the `done:` label in
`cli_magic_scan()`.
`libclamav/scanners.c`: Silence an unused-return-value warning when
calling `cli_basename()`.
`sigtool/sigtool.c` and `unit_tests/check_regex.c`:
Fix possible NULL-derefs of the `ctx.recursion_stack` pointer in the error
handling for several functions.
Also, and this isn't a Coverity thing:
`libclamav/json_api.c` and `libclamav/others.c`:
Fix support for libjson-c version 0.13 and older.
I don't think we *should* be using the old version, but some environments
such as the current OSS-Fuzz base image are older and still use it.
The issue is that `json_object_new_uint64()` was introduced in a later
libjson-c version, so we have to fall back to using `json_object_new_int64()`
with older libjson-c, provided the integer we're storing isn't too big.
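A minimal sketch of the fallback, assuming `json_object_new_uint64()`
first appeared in json-c 0.14:
```c
#include <json.h>
#include <stdint.h>

/* JSON_C_VERSION_NUM packs the version as
 * (major << 16) | (minor << 8) | micro. */
static json_object *new_json_uint64(uint64_t value)
{
#if defined(JSON_C_VERSION_NUM) && JSON_C_VERSION_NUM >= ((0 << 16) | (14 << 8))
    return json_object_new_uint64(value);
#else
    if (value > (uint64_t)INT64_MAX) {
        return NULL; /* too big to store losslessly with the old API */
    }
    return json_object_new_int64((int64_t)value);
#endif
}
```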
CLAM-2768
Sigtool crashes every time you use the `--html-normalise` option.
It is a simple double-free bug affecting only that specific sigtool
command. Does not affect clamscan scans (thank goodness).
Fixes: https://github.com/Cisco-Talos/clamav/issues/1483
CLAM-2835
ClamAV will not function when using a FIPS-enabled OpenSSL 3.x.
This is because ClamAV uses MD5 and SHA1 algorithms for a variety of
purposes including matching for malware detection, matching to prevent
false positives on known-clean files, and for verification of MD5-based
RSA digital signatures for determining CVD (signature database archive)
authenticity.
Interestingly, FIPS had been intentionally bypassed when creating hashes
based on whole buffers and whole files (by descriptor or `FILE`-pointer):
78d4a9985a
Note: this bypassed FIPS the 1.x way with:
`EVP_MD_CTX_set_flags(ctx, EVP_MD_CTX_FLAG_NON_FIPS_ALLOW);`
It was NOT disabled when using `cl_hash_init()` / `cl_update_hash()` /
`cl_finish_hash()`. That likely worked by coincidence in that the hash
was already calculated most of the time. It certainly would have made
use of those functions if the hash had not been calculated prior:
78d4a9985a/libclamav/matcher.c (L743)
Regardless, bypassing FIPS entirely is not the correct solution.
The FIPS restrictions against using MD5 and SHA1 are valid, particularly
when verifying CVD digital signatures, but also I think when using a
hash to determine if the file is known-clean (i.e. the "clean cache" and
also MD5-based and SHA1-based FP signatures).
This commit extends the work to bypass FIPS using the newer 3.x method:
`md = EVP_MD_fetch(NULL, alg, "-fips");`
It does this for the legacy `cl_hash*()` functions including
`cl_hash_init()` / `cl_update_hash()` / `cl_finish_hash()`.
It also introduces extended versions that allow the caller to choose if
they want to bypass FIPS:
- `cl_hash_data_ex()`
- `cl_hash_init_ex()`
- `cl_update_hash_ex()`
- `cl_finish_hash_ex()`
- `cl_hash_destroy_ex()`
- `cl_hash_file_fd_ex()`
See the `flags` parameter for each.
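For reference, a minimal sketch of the underlying OpenSSL 3.x bypass
described above; this shows the mechanism, not the new `_ex` API:
```c
#include <openssl/evp.h>

/* Fetch an MD5 implementation with the "-fips" property query so it
 * still works when the default provider enforces FIPS mode. */
static int md5_nofips(const unsigned char *data, size_t len,
                      unsigned char *out /* >= 16 bytes */)
{
    int ok = 0;
    EVP_MD *md = EVP_MD_fetch(NULL, "MD5", "-fips");
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();

    if (NULL != md && NULL != ctx &&
        1 == EVP_DigestInit_ex(ctx, md, NULL) &&
        1 == EVP_DigestUpdate(ctx, data, len) &&
        1 == EVP_DigestFinal_ex(ctx, out, NULL)) {
        ok = 1;
    }

    EVP_MD_CTX_free(ctx);
    EVP_MD_free(md);
    return ok;
}
```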
Ironically, this commit does NOT use the new functions at this time.
The rationale is that ClamAV may need MD5, SHA1, and SHA2-256 hashes of
the same files both for determining if the file is malware, and for
determining if the file is clean.
So instead, this commit performs checks when:
1. Creating a new ClamAV scanning engine. If FIPS-mode is enabled, it
will automatically toggle the "FIPS limits" engine option.
When loading signatures, if the engine "FIPS limits" option is enabled,
then MD5 and SHA1 FP signatures will be skipped.
2. Before verifying a CVD (e.g. also when loading or unpacking with
verification enabled).
If "FIPS limits" or FIPS-mode are enabled, then the legacy MD5-based RSA
method is disabled.
Note: This commit also refactors the interface for `cl_cvdverify_ex()`
and `cl_cvdunpack_ex()` so they take a `flags` parameter, rather than a
single `bool`. As these functions are new in this version, this does not
break the ABI.
The cache was already switched to use SHA2-256, so that's not a concern
for checking FIPS-mode / FIPS limits options.
This adds an option for `freshclam.conf` and `clamd.conf`:
FIPSCryptoHashLimits yes
And an equivalent command-line option for `clamscan` and `sigtool`:
--fips-limits
You may programmatically enable FIPS-limits for a ClamAV engine like this:
```c
cl_engine_set_num(engine, CL_ENGINE_FIPS_LIMITS, 1);
```
CLAM-2792
Add the following scan callbacks:
```c
cl_engine_set_scan_callback(engine, &pre_hash_callback, CL_SCAN_CALLBACK_PRE_HASH);
cl_engine_set_scan_callback(engine, &pre_scan_callback, CL_SCAN_CALLBACK_PRE_SCAN);
cl_engine_set_scan_callback(engine, &post_scan_callback, CL_SCAN_CALLBACK_POST_SCAN);
cl_engine_set_scan_callback(engine, &alert_callback, CL_SCAN_CALLBACK_ALERT);
cl_engine_set_scan_callback(engine, &file_type_callback, CL_SCAN_CALLBACK_FILE_TYPE);
```
Each callback may alter scan behavior using the following return codes:
* CL_BREAK
Scan aborted by callback (the rest of the scan is skipped).
This does not mark the file as clean or infected, it just skips the rest of the scan.
* CL_SUCCESS / CL_CLEAN
File scan will continue.
This is different from CL_VERIFIED because it does not affect prior or future alerts.
Return CL_VERIFIED instead if you want to remove prior alerts for this layer and skip
the rest of the scan for this layer.
* CL_VIRUS
This means you don't trust the file. A new alert will be added.
For CL_SCAN_CALLBACK_ALERT: Means you agree with the alert (no extra alert needed).
* CL_VERIFIED
Layer explicitly trusted by the callback and previous alerts removed FOR THIS layer.
You might want to do this if you trust the hash or verified a digital signature.
The rest of the scan will be skipped FOR THIS layer.
For contained files, this does NOT mean that the parent or adjacent layers are trusted.
Each callback is given a pointer to the current scan layer, from which
it can get previous layers, the layer's fmap, and various attributes of
the layer and of the fmap, such as:
- layer recursion level
- layer object id
- layer file type
- layer attributes (was decrypted, normalized, embedded, or re-typed)
- layer last alert
- fmap name
- fmap hash (md5, sha1, or sha2-256)
- fmap data (pointer and size)
- fmap file descriptor, if any (fd, offset, size)
- fmap filepath, if any (filepath, offset, size)
To make this possible, this commit introduces a handful of new APIs to
query scan-layer details and fmap details:
- `cl_error_t cl_fmap_set_name(cl_fmap_t *map, const char *name);`
- `cl_error_t cl_fmap_get_name(cl_fmap_t *map, const char **name_out);`
- `cl_error_t cl_fmap_set_path(cl_fmap_t *map, const char *path);`
- `cl_error_t cl_fmap_get_path(cl_fmap_t *map, const char **path_out, size_t *offset_out, size_t *len_out);`
- `cl_error_t cl_fmap_get_fd(const cl_fmap_t *map, int *fd_out, size_t *offset_out, size_t *len_out);`
- `cl_error_t cl_fmap_get_size(const cl_fmap_t *map, size_t *size_out);`
- `cl_error_t cl_fmap_set_hash(const cl_fmap_t *map, const char *hash_alg, const char *hash);`
- `cl_error_t cl_fmap_have_hash(const cl_fmap_t *map, const char *hash_alg, bool *have_hash_out);`
- `cl_error_t cl_fmap_will_need_hash_later(const cl_fmap_t *map, const char *hash_alg);`
- `cl_error_t cl_fmap_get_hash(const cl_fmap_t *map, const char *hash_alg, const char **hash_out);`
- `cl_error_t cl_fmap_get_data(const cl_fmap_t *map, size_t offset, size_t len, const uint8_t **data_out, size_t *data_len_out);`
- `cl_error_t cl_scan_layer_get_fmap(cl_scan_layer_t *layer, cl_fmap_t **fmap_out);`
- `cl_error_t cl_scan_layer_get_parent_layer(cl_scan_layer_t *layer, cl_scan_layer_t **parent_layer_out);`
- `cl_error_t cl_scan_layer_get_type(cl_scan_layer_t *layer, const char **type_out);`
- `cl_error_t cl_scan_layer_get_recursion_level(cl_scan_layer_t *layer, uint32_t *recursion_level_out);`
- `cl_error_t cl_scan_layer_get_object_id(cl_scan_layer_t *layer, uint64_t *object_id_out);`
- `cl_error_t cl_scan_layer_get_last_alert(cl_scan_layer_t *layer, const char **alert_name_out);`
- `cl_error_t cl_scan_layer_get_attributes(cl_scan_layer_t *layer, uint32_t *attributes_out);`
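A hedged sketch of a PRE_SCAN callback built on these APIs. The callback
prototype here is an assumption (see `examples/ex_scan_callbacks.c` for
the real one), and `is_trusted_hash()` is a stand-in for your own lookup:
```c
#include <stdbool.h>
#include <clamav.h>

bool is_trusted_hash(const char *hash); /* hypothetical allow-list lookup */

cl_error_t pre_scan_callback(cl_scan_layer_t *layer, void *context)
{
    cl_fmap_t *map = NULL;
    const char *hash = NULL;

    (void)context;

    if (CL_SUCCESS != cl_scan_layer_get_fmap(layer, &map)) {
        return CL_SUCCESS; /* can't inspect this layer; let the scan continue */
    }

    if (CL_SUCCESS == cl_fmap_get_hash(map, "sha2-256", &hash) &&
        NULL != hash && is_trusted_hash(hash)) {
        /* Trust this layer: remove its prior alerts and skip the rest of
         * its scan. Parent and adjacent layers are unaffected. */
        return CL_VERIFIED;
    }

    return CL_SUCCESS;
}
```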
This commit deprecates but does not remove the existing scan callbacks:
- `void cl_engine_set_clcb_pre_cache(struct cl_engine *engine, clcb_pre_cache callback);`
- `void cl_engine_set_clcb_file_inspection(struct cl_engine *engine, clcb_file_inspection callback);`
- `void cl_engine_set_clcb_pre_scan(struct cl_engine *engine, clcb_pre_scan callback);`
- `void cl_engine_set_clcb_post_scan(struct cl_engine *engine, clcb_post_scan callback);`
- `void cl_engine_set_clcb_virus_found(struct cl_engine *engine, clcb_virus_found callback);`
- `void cl_engine_set_clcb_hash(struct cl_engine *engine, clcb_hash callback);`
This commit also adds an interactive test program to demonstrate the callbacks.
See: `examples/ex_scan_callbacks.c`
CLAM-255
CLAM-2485
CLAM-2626
Temp directory recursion in ClamAV is when each layer of a scan gets its
own temp directory in the parent layer's temp directory.
In addition to temp directory recursion, ClamAV has been creating a new
subdirectory for each file scan as a risk-averse method to ensure
no temporary file leaks fill up the disk.
Creating a directory is relatively slow on Windows in particular if
scanning a lot of very small files.
This commit:
1. Separates the temp directory recursion feature from the leave-temps
feature so that libclamav can leave temp files without making
subdirectories for each file scanned.
2. Makes it so that when temp directory recursion is off, libclamav
will just use the configured temp directory for all files.
The new option to enable temp directory recursion is for libclamav-only
at this time. It is off by default, and you can enable it like this:
```c
cl_engine_set_num(engine, CL_ENGINE_TMPDIR_RECURSION, 1);
```
For the `clamscan` and `clamd` programs, temp directory recursion will
be enabled when `--leave-temps` / `LeaveTemporaryFiles` is enabled.
The difference is that when disabled, it will return to using the
configured temp directory without making a subdirectory for each file
scanned, so as to improve scan performance for small files, mostly on
Windows.
Under the hood, this commit also:
1. Cleans up how we keep track of tmpdirs for each layer.
The goal here is to align how we keep track of layer-specific stuff
using the scan_layer structure.
2. Cleans up how we record metadata JSON for embedded files.
Note: Embedded files are different from Contained files in that they
are extracted not with a parser, but by finding them with
file type magic signatures.
CLAM-1583
Move recording of evidence (aka Strong, PUA, and Weak indicators) to be
done in each layer of a scan, and passed up to the parent layer with the
top level only connecting the results at the very end of the scan.
This is needed to provide access to the last alert for a given layer when
we upgrade the scan callbacks.
Note that when adding evidence from a child layer that is a normalized
layer, we do not want to increase the depth. It should appear as though
the match occurred on the parent layer.
This is for two reasons:
1. We don't run the scan callbacks on normalized layers.
2. Future matches on Weak Indicators should be able to treat normalized
layer matches the same as original file matches. Keep reading for
more about Weak Indicators.
Recording scan matches at each recursion layer is also needed to support
Weak Indicators, a feature where an alerting signature (aka Strong
Indicator) may require the match of a non-alerting signature (aka
Weak Indicator) on the same layer or on child layers in order to alert.
Support for Weak indicators was blocked by not keeping track of where
indicators were found. So this commit also enables support for recording
Weak indicators.
Like PUA, Weak indicators are treated differently based on the signature
prefix. That is, any signature starting with "Weak." won't cause an
alert on its own.
The next step to completing Weak Indicator support will be adding a
logical subsignature feature to depend on a weak indicator match.
CLAM-2626
CLAM-2485
Every time we push a new map onto the scanning recursion context, give
it a unique object id number, which counts from zero.
Moved the location where we add metadata for each file from the
"cli_magic_scan" function over to the "recursion stack push" function.
Include a "path" as a parameter for creating a new fmap, and rename some
related variables and functions to be more intuitive.
CLAM-2796
See also: CLAM-2485, CLAM-2626
Change the clean-cache to use SHA2-256 instead of MD5.
Note that all references are changed to specify "SHA2-256" now instead
of "SHA256", for clarity. But there is no plan to add support for SHA3
algorithms at this time.
Significant code cleanup. E.g.:
- Implemented goto-done error handling.
- Used `uint8_t *` instead of `unsigned char *`.
- Used `bool` for boolean checks, rather than `int`.
- Used `#defines` instead of magic numbers.
- Removed duplicate `#defines` for things like hash length.
Add new option to calculate and record additional hash types when the
"generate metadata JSON" feature is enabled:
- libclamav option: `CL_SCAN_GENERAL_STORE_EXTRA_HASHES`
- clamscan option: `--json-store-extra-hashes` (default off)
- clamd.conf option: `JsonStoreExtraHashes` (default 'no')
Renamed the sigtool option `--sha256` to `--sha2-256`.
The original option is still functional, but is deprecated.
For the "generate metadata JSON" feature, the file hash is now stored as
"sha2-256" instead of "FileMD5". If you enable the "extra hashes" option,
then it will also record "md5" and "sha1".
Deprecate and disable the internal "SHA collect" feature.
This option had been hidden behind C #ifdef checks for an option that
wasn't exposed through CMake, so it was basically unavailable anyways.
Changes to calculate file hashes when they're needed and no sooner.
For the FP feature in the matcher module, I have mimicked the
optimization in the FMAP scan routine which makes it so that it can
calculate multiple hashes in a single pass of the file.
The `HandlerType` feature stores a hash of the file in the scan ctx to
prevent retyping the exact same data more than once.
I removed that hash field and replaced it with an attribute flag that is
applied to the new recursion stack layer when retyping a file.
This also closes a minor bug that would prevent retyping a file with an
all-zero hash. :)
The work upgrading cache.c to support SHA2-256 sized hashes is thanks to:
https://github.com/m-sola
CLAM-255
CLAM-1858
CLAM-1859
CLAM-1860
Sigtool's `--diff CVD_OLD CVD_NEW` feature will fail with preclass_tcfa
(or any other CVD with an underscore).
Apparently '_' is not a supported character in that code.
While debugging this, I found some other issues:
* The call to verify the `.script` created with the `--diff` feature
fails since adding the .sign digital signature verification code,
because I called it wrong.
We didn't notice because there are no automated tests for this feature.
* The --diff feature assumes you're in the same directory as the CVD
files and that the path is relative.
* The --diff feature will change directories to a temp directory to
verify the diff, and then fail to apply the script because the script
has a relative path and we're now in a totally different directory.
I don't know how (2) or (3) ever worked right.
One requires absolute paths, while the other didn't provide a buffer
big enough for absolute paths. So confused!
This commit should make it so relative or absolute paths are fine for
the CVDs, and the CVD name may now include underscores.
CLAM-2815
We were signing with the signing key + signing cert and verifying
with the intermediate cert + root cert. However, we should have been
signing with the signing key + signing cert + intermediate cert, and
verifying with just the root cert.
To fix this, I...
1. Provided new certs and test file .sign files to use the correct
signing method.
2. Restructured the `unit_tests/input/signing` directory to highlight
which files are for signing and which are for verification.
There is a multi-arch build issue because I previously used i8 to
represent a C character. I switched it to c_char, which should fix the
clamav-debian multi-arch Docker image build.
It turns out we weren't failing out when signing if one of the provided
intermediate certificate paths is incorrect. Instead of using
`filter_map()`, I switched to just iterating the list to populate the
vector of intermediate certs.
Add X509 certificate chain based signing with PKCS7-PEM external
signatures distributed alongside CVDs in a custom .cvd.sign format.
This new signing and verification mechanism is primarily in support
of FIPS compliance.
Fixes: https://github.com/Cisco-Talos/clamav/issues/564
Add a Rust implementation for parsing, verifying, and unpacking CVD
files.
Now installs a 'certs' directory in the app config directory
(e.g. <prefix>/etc/certs). The install location is configurable.
The CMake option to configure the CVD certs directory is:
`-D CVD_CERTS_DIRECTORY=PATH`
New options to set an alternative CVD certs directory:
- Commandline for freshclam, clamd, clamscan, and sigtool is:
`--cvdcertsdir PATH`
- Env variable for freshclam, clamd, clamscan, and sigtool is:
`CVD_CERTS_DIR`
- Config option for freshclam and clamd is:
`CVDCertsDirectory PATH`
Sigtool:
- Add sign/verify commands.
- Also verify CDIFF external digital signatures when applying CDIFFs.
- Place commonly used commands at the top of --help string.
- Fix up manpage.
Freshclam:
- Will try to download .sign files to verify CVDs and CDIFFs.
- Fix an issue where making a CLD would only include the CFG file for
daily and not if patching any other database.
libclamav.so:
- Bump version to 13:0:1 (aka 12.1.0).
- Also remove libclamav.map versioning.
Resolves: https://github.com/Cisco-Talos/clamav/issues/1304
- Add two new API's to the public clamav.h header:
```c
extern cl_error_t cl_cvdverify_ex(const char *file,
                                  const char *certs_directory);
extern cl_error_t cl_cvdunpack_ex(const char *file,
                                  const char *dir,
                                  bool dont_verify,
                                  const char *certs_directory);
```
The original `cl_cvdverify` and `cl_cvdunpack` are deprecated.
- Add `cl_engine_field` enum option `CL_ENGINE_CVDCERTSDIR`.
You may set this option with `cl_engine_set_str` and get it
with `cl_engine_get_str`, to override the compiled in default
CVD certs directory.
libfreshclam.so: Bump version to 4:0:0 (aka 4.0.0).
Add sigtool sign/verify tests and test certs.
Make it so downloadFile doesn't throw a warning if the server
doesn't have the .sign file.
Replace use of md5-based FP signatures in the unit tests with
sha256-based FP signatures because the md5 implementation used
by Python may be disabled in FIPS mode.
Fixes: https://github.com/Cisco-Talos/clamav/issues/1411
CMake: Add logic to enable the Rust openssl-sys / openssl-rs crates
to build against the same OpenSSL library as is used for the C build.
The Rust unit test application must also link directly with libcrypto
and libssl.
Fix some log messages with missing new lines.
Fix missing environment variable notes in --help messages and manpages.
Deconflict CONFDIR/DATADIR/CERTSDIR variable names that are defined in
clamav-config.h.in for libclamav from variable that had the same name
for use in clamav applications that use the optparser.
The 'clamav-test' certs for the unit tests will live for 10 years.
The 'clamav-beta.crt' public cert will only live for 120 days and will
be replaced before the stable release with a production 'clamav.crt'.
As of ClamAV 0.105, PCRE2 is required. PCRE (1) is not an option, and
there is also no option to disable PCRE support.
This commit removes the dead code associated with those old build
options.
The cli_max_malloc, cli_max_calloc, and cli_max_realloc functions
provide a way to protect against allocating too much memory
when the size of the allocation is derived from the untrusted input.
Specifically, we worry about values in the file being scanned being
manipulated to exhaust the RAM and crash the application.
There is no need to check the limits if the size of the allocation
is fixed, or if the size of the allocation is necessary for signature
loading, or the general operation of the applications.
E.g. checking the max-allocation limit for the size of a hash, or
for the size of the scan recursion stack, is a complete waste of
time.
Although we significantly increased the max-allocation limit in
a recent release, it is best not to check an allocation if the
allocation will be safe. It would be a waste of time.
I am also hopeful that if we can reduce the number of allocations
that require a limit-check to those that require it for the safe
scan of a file, then eventually we can store the limit in the scan-
context, and make it configurable.
We have some special functions to wrap malloc, calloc, and realloc to
make sure we don't allocate more than some limit, similar to the
max-filesize and max-scansize limits. Our wrappers are really only
needed when allocating memory for scans based on untrusted user input,
where a scan file could have bytes that claim you need to allocate
some ridiculous amount of memory. Right now they're named:
- cli_malloc
- cli_calloc
- cli_realloc
- cli_realloc2
... and these names do not convey their purpose.
This commit renames them to:
- cli_max_malloc
- cli_max_calloc
- cli_max_realloc
- cli_max_realloc2
The realloc ones also have an additional feature in that they will not
free your pointer if you try to realloc to 0 bytes. Freeing the memory
is undefined by the C spec, and only done with some realloc
implementations, so this stabilizes on the behavior of not doing that,
which should prevent accidental double-frees.
So for the case where you may want to realloc and do not need to have a
maximum, this commit adds the following functions:
- cli_safer_realloc
- cli_safer_realloc2
These are used for the MPOOL_REALLOC and MPOOL_REALLOC2 macros when
MPOOL is disabled (e.g. because mmap-support is not found), so as to
match the behavior in the mpool_realloc/2 functions that do not make use
of the allocation-limit.
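A minimal sketch of the no-free-on-zero behavior, not the actual
libclamav implementation:
```c
#include <stdlib.h>

/* Like realloc(), but never frees the caller's pointer when size is 0,
 * sidestepping the implementation-defined free-on-zero behavior that
 * can lead to accidental double-frees. */
static void *safer_realloc_sketch(void *ptr, size_t size)
{
    if (0 == size) {
        return ptr; /* keep the caller's pointer valid */
    }
    return realloc(ptr, size); /* NULL on failure; ptr is untouched */
}
```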
Fix some log statements that used the old ^, !, and * logg prefixes,
where they were making use of a ternary to determine the log level in
the log statement.
Also, sigtool and freshclam weren't outputting error log messages using
the Rust log macros, e.g. `error!("...")`.
Add a new cl_engine_set_clcb_vba() function to set a cb_vba callback
function and add clcb_generic_data handler prototype to the clamav.h
public API.
The cb_vba callback function will be run whenever VBA is extracted from
office documents. The provided data will be a normalized copy of the
original VBA. This callback is added to support Sigtool so it can use
the same VBA extraction logic as when scanning documents.
Change the Sigtool temp directory creation for any commands that use
temp directories so that you can select a custom temp directory with the
`--tempdir=PATH` option, and can retain the temp files with the
`--leave-temps` option.
Added `--tempdir` and `--leave-temps` to the Sigtool `--help` output.
Added `--tempdir` and `--leave-temps` to the Sigtool manpage.
This commit adds a feature to find, decode, and scan each image found
within HTML <style> tags where the image data is embedded in `url()`
function parameters as a base64 blob.
In C, in the HTML normalization process, we extract style tag contents
to a new buffer for processing. We call into a new feature in Rust code to
find and decode each image (if there are multiple).
Once extracted, the images are scanned as contained files of unknown
type, and file type identification will determine the actual type.
The CVD/CLD unpack calls performed by sigtool previously did not verify
the CVD. If working with a CLD, verification will fail, such as when
using `sigtool --find-sig` when daily is a CLD.
This commit reverts to that earlier behavior for sigtool unpack
operations.
The header parsing / executable metadata collecting functions for the
PE, ELF, and Mach-O file types were using `int` for the return type.
Mostly they were returning 0 for success and -1, -2, -3, or -4 for
failure. But in some cases they were returning cl_error_t enum values
for failure. Regardless, the function using them was treating 0 as
success and non-zero as failure, which it stored as -1 ... every time.
This commit switches them all to use cl_error_t. I am continuing to
store the final result as 0 / -1 in the `peinfo` struct, but outside of
that everything has been made consistent.
While I was working on that, I got a tad sidetracked. I noticed that
the target type isn't an enum, or even a set of #defines. So I made an
enum and then changed the code that uses target types to use the enum.
I also removed the `target` parameter from a number of functions that
don't actually use it at all. Some recursion was masking the fact that
it was an unused parameter which is why there was no warning about it.
Significantly tidy the `cli_scan_fmap()` function, and add comments to
explain how it all works.
Add SHA1 and SHA256 digest variables to the FMAP structure in addition
to the existing MD5. Add a function to set the hash so that when we
calculate the hashes for hash matching, we save them for subsequent FP
matching. This enabled me to remove the extra "hash-only" FP check from
`cli_scan_fmap()`. This will also make it easier to switch the clean
cache hash algorithm to SHA256 in the future.
Remove extra allmatch checks that are no longer required.
Add a new header to prevent #include order issues.
Rework the append_virus mechanism to store evidence (strong indicators,
pua indicators, and eventually weak indicators) in vectors. When
appending a "virus", we will return CLEAN when in allmatch-mode, and
simply add the indicator to the appropriate vector.
Later we can check whether there were any alerts to report by summing
the lengths of the strong and PUA indicator vectors.
This does away with storing the latest "virname" in the scan context.
Instead, we can query for the last indicator in the evidence, giving
priority to strong indicators.
When heuristic-precedence is enabled, add PUA as Strong instead of
as PotentiallyUnwanted. This way, they will be treated equally and
reported in order in allmatch mode.
Also document the reason for disabling the cache when metadata JSON is
enabled.
In the interest of using the public API's as much as possible for our
own applications (dog-fooding the API), this commit swaps sigtool and
freshclam `cli_cvdunpack()` calls to `cl_cvdunpack()`.
Add `sigtool --fuzzy-img` option to generate image fuzzy hash.
Also fix assorted warnings, mostly ensuring enough buffer space so format
strings aren't truncated.
For the dsig change: the returned string is allocated and is not const.
The caller will have to free it.
Extends the new frs_error module to provide variants of the
frs_result!() macro that accept a Result as input instead of calling a
function on your behalf. This enables us to use the macro in conditions
where we don't want to return on success, and want to do other things
before we return.
Use the new frs_error module to return errors to the C calling functions
rather than logging the error in Rust-land.
Notably, this enables us to store more meaningful error messages in the
JSON output if we fail to calculate the image fuzzy hash.
Add a new logical signature subsignature type for matching on images
with image fuzzy hashes.
Image fuzzy hash subsignatures follow this format:
fuzzy_img#<hash>#<dist>
In this initial implementation, the hamming distance (dist) is ignored
and only exact fuzzy hash matches will alert.
Fuzzy hash matching is only performed for supported image types.
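An illustrative logical signature using the new subsignature type (the
signature name and hash here are made up):
```
Img.Example.FuzzyImgTest;Target:0;0;fuzzy_img#af2ad01ed42993c7#0
```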
Also: removed some excessive debug log messages on start-up.
Fixed an issue where the signature name (virname) was being allocated and
stored for every subsignature, or even every sub-pattern in an AC-pattern
(i.e. NDB sig or LDB subsig) containing a `{n-m}` or `*` wildcard.
This fix is only for LDB subsigs though. NDB signatures are still
allocating one virname per sub-pattern.
This fix was required because I needed a place to store the virname with
fuzzy-hash subsignatures. Storing it in the fuzzy-hash subsig
metadata the way AC-pattern, PCRE, and BComp subsigs were doing it
wouldn't work because it would cross the C-Rust FFI boundary, and giving
pointers to Rust-allocated stuff is dicey. Not to mention native Rust
strings are different than C strings. Anyways, the correct thing to do
was to store the virname with the actual logical signature.
TODO: Keep track of NDB signatures in the same way and store the virname
for NDB sigs there instead of in AC-patterns so that we can get rid of
the virname field in the AC-pattern struct.
Fix two locations where the stack-allocated arrays lack space for a null-
terminating byte and could overwrite the array in:
- dsig.c
- sigtool.c
The ClamAV team verified that these overflows are not a security issue.
The logic for parsing a logical subsignature isn't clearly identified
and has been, perhaps mistakenly or out of convenience, used when
parsing NDB signatures in addition to LDB subsignatures. What this means
is that you can technically use a PCRE subsignature in an NDB file and
clam won't complain about it. It won't work however, because a PCRE
subsignature requires another matching subsignature to trigger it, but
it will parse. The same is likely true for byte-compare subsignatures.
This commit restructures that logic a bit so subsignature parsing has
its own function and is more organized.
I also renamed the functions a little bit and added lots of comments.
I fixed a few minor warnings relating to format string characters.
The change in str.c:cli_ldbtokenize is to prevent a buffer under-read if
you were to use the function on the start of a buffer, as is now done in
this commit.
* Added loglevel parameter to logg()
* Fix logg and mprintf internals with new loglevels
* Update all logg calls to set loglevel
* Update all mprintf calls to set loglevel
* Fix hidden logg calls
* Executed clam-format
Rustify cdiff_apply() and clean up error handling:
- Restore [some] safety and clean up error handling.
- Use rust-crypto sha2 instead of OpenSSL's.
Fix signedness of cli_versig2() dsig parameter.
c_char may be an i8 or u8 depending on platform:
https://doc.rust-lang.org/src/std/os/raw/mod.rs.html#91-133
Rustify cmd_close():
- Consolidate DEL/XCHG records.
- Tidy up ADD handling.
- Various error handling cleanup, etc.
Apply both .cdiff and .script CVD patches.
Note: A script is a non-compressed and unsigned file containing cdiff
commands. There is no header or footer that should be processed.
This Rust-based implementation of the cdiff-apply feature includes
equivalent features as found in the C-based implementation:
- cdiff file signature validation against sha256 of the file contents
- Gz decoding of file contents
- File open command
- File close command
- Signature add command
- Line delete command
- Xchg command
- Move command
- Unlink command
This Rust implementation adds cdiff-apply unit tests to verify correct
functionality.
CID 361074: fmap.c: Possible invalid dereference if status != success
and the new map was not yet allocated.
CID 361077: others.c: Structurally dead code revealed a bug in the
cli_recursion_stack_get_size() function.
CID 361080, 361078, 361083: sigtool.c: Inverted check for if engine
needs to be free'd, could leak the engine structure.
CID 361075: sigtool.c: Missed a `return -1` that should've been `goto
done;` and would leak the new_map buffer.
CID 361079: sigtool/vba.c: Checking if we should free the new_map on
failure only if ctx also needs to be free'd, which would leak the
new_map if ctx was not allocated yet.
The fmap module provides a mechanism for creating a mapping into an
existing map at an offset and length. It is used when a file is found
within an uncompressed archive, or when embedded files are found with
embedded file type recognition in scanraw(). This is the
"fmap_duplicate()" function. Duplicate fmaps just reference the original
fmap's 'data' or file handle/descriptor while allowing the caller to
treat it like a new map, using offsets and lengths that don't account for
the original/actual file dimensions.
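A hedged sketch of the duplicate-fmap concept; the internal signatures
are assumed from context and may differ from fmap.h:
```c
#include <stddef.h>
#include "fmap.h" /* internal header; signatures assumed */

/* The duplicate references the parent's data; the caller addresses it
 * starting at offset 0 rather than at `offset` within the parent. */
static int scan_embedded_region(fmap_t *parent, size_t offset, size_t len)
{
    fmap_t *dup = fmap_duplicate(parent, offset, len, "embedded-file");
    if (NULL == dup) {
        return -1;
    }

    /* ... scan `dup` like any other map ... */

    free_duplicate_fmap(dup); /* frees the duplicate, not the parent */
    return 0;
}
```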
fmaps keep track of this with m->nested_offset & m->real_len, which
admittedly have confusing names. I found incorrect uses of these in a
handful of locations. Notably:
- In cli_magic_scan_nested_fmap_type().
The force-to-disk feature would have been checking incorrect sizes and
may have written incorrect offsets for duplicate fmaps.
- In XDP parser.
- A bunch of places from the previous commit when making dupe maps.
This commit fixes those and adds lots of documentation to the fmap.h API
to try to prevent confusion in the future.
nested_offset should never be referenced outside of fmap.c/h.
The fmap_* functions for accessing or reading map data have two
implementations, mem_* or handle_*, depending the data source.
I found issues with some of these so I made a unit test that covers each
of the functions I'm concerned about for both types of data sources and
for both original fmaps and nested/duplicate fmaps.
With the tests, I found and fixed issues in these fmap functions:
- handle_need_offstr(): must account for the nested_offset in dupe maps.
- handle_gets(): must account for nested_offset and use len & real_len
correctly.
- mem_need_offstr(): must account for nested_offset in dupe maps.
- mem_gets(): must account for nested_offset and use len & real_len
correctly.
Moved the CDBRANGE() macro out of the function definition for better
legibility.
Fixed a few warnings.
Scan recursion is the process of identifying files embedded in other
files and then scanning them, recursively.
Internally this process is more complex than it may sound because a file
may have multiple layers of types before finding a new "file".
At present we treat the recursion count in the scanning context as an
index into both our fmap list AND our container list. These two lists
are conceptually a part of the same thing and should be unified.
But what's concerning is that the "recursion level" isn't actually
incremented or decremented at the same time that we add a layer to the
fmap or container lists but instead is more touchy-feely, increasing
when we find a new "file".
To account for this shadiness, the size of the fmap and container lists
has always been a little longer than our "max scan recursion" limit so
we don't accidentally overflow the fmap or container arrays (!).
I've implemented a single recursion-stack as an array, similar to before,
which includes a pointer to each fmap at each layer, along with the size
and type. Push and pop functions add and remove layers whenever a new
fmap is added. A boolean argument when pushing indicates if the new layer
represents a new buffer or new file (descriptor). A new buffer will reset
the "nested fmap level" (described below).
This commit also provides a solution for an issue where we detect
embedded files more than once during scan recursion.
For illustration, imagine a tarball named foo.tar.gz with this structure:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| baz.exe | PE | 0 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| │ └── hello.txt | ASCII | 2 | 0 |
| └── sfx.7z | 7Z | 1 | 1 |
| └── world.txt | ASCII | 2 | 0 |
(A) If we scan for embedded files at any layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| ├── foo.tar | TAR | 1 | 0 |
| │ ├── bar.zip | ZIP | 2 | 1 |
| │ │ └── hola.txt | ASCII | 3 | 0 |
| │ ├── baz.exe | PE | 2 | 1 |
| │ │ ├── sfx.zip | ZIP | 3 | 1 |
| │ │ │ └── hello.txt | ASCII | 4 | 0 |
| │ │ └── sfx.7z | 7Z | 3 | 1 |
| │ │ └── world.txt | ASCII | 4 | 0 |
| │ ├── sfx.zip | ZIP | 2 | 1 |
| │ │ └── hello.txt | ASCII | 3 | 0 |
| │ └── sfx.7z | 7Z | 2 | 1 |
| │ └── world.txt | ASCII | 3 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| └── sfx.7z | 7Z | 1 | 1 |
(A) is bad because it scans content more than once.
Note that for the GZ layer, it may detect the ZIP and 7Z if the
signature hits on the compressed data, which it might, though
extracting the ZIP and 7Z will likely fail.
The reason the above doesn't happen now is that we restrict embedded
type scans for a bunch of archive formats to include GZ and TAR.
(B) If we scan for embedded files at the foo.tar layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| ├── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 2 | 1 |
| │ └── hello.txt | ASCII | 3 | 0 |
| └── sfx.7z | 7Z | 2 | 1 |
| └── world.txt | ASCII | 3 | 0 |
(B) is almost right. But we can achieve it easily enough by only scanning
for embedded content in the current fmap when the "nested fmap level" is 0.
The upside is that it should safely detect all embedded content, even if
it may think the sfx.zip and sfx.7z are in foo.tar instead of in baz.exe.
The biggest risk I can think of affects ZIPs. SFXZIP detection
is identical to ZIP detection, which is why we don't allow SFXZIP to be
detected if inside of a ZIP. If we only allow embedded type scanning at
fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP
if the bar.exe was not compressed in foo.zip and if non-compressed files
extracted from ZIPs aren't extracted as new buffers:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.zip | ZIP | 0 | 0 |
| └── bar.exe | PE | 1 | 1 |
| └── sfx.zip | ZIP | 2 | 2 |
Provided that we ensure all files extracted from zips are scanned in
new buffers, option (B) should be safe.
(C) If we scan for embedded files at the baz.exe layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 3 | 1 |
| │ └── hello.txt | ASCII | 4 | 0 |
| └── sfx.7z | 7Z | 3 | 1 |
| └── world.txt | ASCII | 4 | 0 |
(C) is right. But it's harder to achieve. For this example we can get it by
restricting 7ZSFX and ZIPSFX detection only when scanning an executable.
But that may mean losing detection of archives embedded elsewhere.
And we'd have to identify allowable container types for each possible
embedded type, which would be very difficult.
So this commit aims to solve the issue the (B)-way.
Note that in all situations, we still have to scan with file typing
enabled to determine if we need to reassign the current file type, such
as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-
compressed. Detection of DMG and a handful of other types relies on
finding data partway through or near the end of a file before
reassigning the entire file as the new type.
Other fixes and considerations in this commit:
- The utf16 HTML parser has weak error handling, particularly with respect
to creating a nested fmap for scanning the ascii decoded file.
This commit cleans up the error handling and wraps the nested scan with
the recursion-stack push()/pop() for correct recursion tracking.
Before this commit, each container layer had a flag to indicate if the
container layer is valid.
We need something similar so that the cli_recursion_stack_get_*()
functions ignore normalized layers. Details...
Imagine an LDB signature for HTML content that specifies a ZIP
container. If the signature actually alerts on the normalized HTML and
you don't ignore normalized layers for the container check, it will
appear as though the alert is in an HTML container rather than a ZIP
container.
This commit accomplishes this with a boolean you set in the scan context
before scanning a new layer. Then when the new fmap is created, it will
use that flag to set a similar flag for the layer. The context flag is
then reset so that anything scanned after this doesn't have that flag.
The flag allows the new recursion_stack_get() function to ignore
normalized layers when iterating the stack to return a layer at a
requested index, negative or positive.
Scanning extracted/normalized javascript and VBA should also
use the 'layer is normalized' flag.
- This commit also fixes Heuristic.Broken.Executable alert for ELF files
to make sure that:
A) these only alert if cli_append_virus() returns CL_VIRUS (aka it
respects the FP check).
B) all broken-executable alerts for ELF only happen if the
SCAN_HEURISTIC_BROKEN option is enabled.
- This commit also cleans up the error handling in cli_magic_scan_dir().
This was needed so we could correctly apply the layer-is-normalized-flag
to all VBA macros extracted to a directory when scanning the directory.
- Also fix an issue where exceeding scan maximums wouldn't cause embedded
file detection scans to abort. Granted we don't actually want to abort
if max filesize or max recursion depth are exceeded... only if max
scansize, max files, and max scantime are exceeded.
Add an 'abort_scan' flag to the scan context, to protect against depending
on correct error propagation for fatal conditions. Instead, setting this
flag in the scan context should guarantee that a fatal condition deep in
scan recursion isn't lost, which would result in more stuff being scanned
instead of aborting. This shouldn't be necessary, but some status codes
like CL_ETIMEOUT never used to be fatal and it's easier to do this than
to verify every parser only returns CL_ETIMEOUT and other "fatal
status codes" in fatal conditions.
- Remove duplicate is_tar() prototype from filestypes.c and include
is_tar.h instead.
- Presently we create the fmap hash when creating the fmap.
This wastes a bit of CPU if the hash is never needed.
Now that we're creating fmaps for all embedded files discovered with
file type recognition scans, this is a much more frequent occurrence and
really slows things down.
This commit fixes the issue by only creating fmap hashes as needed.
This should not only resolve the performance impact of creating fmaps
for all embedded files, but also should improve performance in general.
- Add an allmatch check to the zip parser after the central-header meta
match. That way we don't get multiple alerts with the same match except
in allmatch mode. Clean up error handling in the zip parser a tiny bit.
- Fixes to ensure that the scan limits such as scansize, filesize,
recursion depth, # of embedded files, and scantime are always reported
if AlertExceedsMax (--alert-exceeds-max) is enabled.
- Fixed an issue where non-fatal alerts for exceeding scan maximums may
mask signature matches later on. I changed it so these alerts use the
"possibly unwanted" alert-type and thus only alert if no other alerts
were found or if all-match or heuristic-precedence are enabled.
- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata
when the --gen-json feature is enabled. These will show up once under
"ParseErrors" the first time a limit is exceeded. In the present
implementation, only one limits-exceeded events will be added, so as to
prevent a malicious or malformed sample from filling the JSON buffer
with millions of events and using a tonne of RAM.
Added a feature to extract images from OLE2 BIFF streams.
This work was derived from InQuest's blog post about extracting XLM and
images from XLS files:
https://inquest.net/blog/2019/01/29/Carving-Sneaky-XLM-Files
Assorted ole2 parser code cleanup and massive error handling cleanup.
Also fixed the following:
- The XLS parser may fail to process all BIFF records if some of the
records contain unexpected data or are otherwise malformed. Because the
record size is already known, we can skip over the "malformed" record
and continue with the rest.
- Fixed an issue where the ole2 header size was improperly calculated,
failing to account for the new "has_xlm" boolean added for context.
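A minimal sketch of the record-skip behavior; `parse_record()` is a
stand-in for the real per-record parsing:
```c
#include <stddef.h>
#include <stdint.h>

int parse_record(uint16_t opcode, const uint8_t *body, uint16_t size);

/* Walk BIFF records: each header gives an opcode and a body size, so a
 * record whose body fails to parse can be skipped instead of aborting
 * the whole stream. */
static void walk_biff(const uint8_t *buf, size_t len)
{
    size_t pos = 0;

    while (pos + 4 <= len) {
        uint16_t opcode = (uint16_t)(buf[pos] | (buf[pos + 1] << 8));
        uint16_t size   = (uint16_t)(buf[pos + 2] | (buf[pos + 3] << 8));
        pos += 4;

        if (size > len - pos) {
            break; /* truncated record */
        }
        (void)parse_record(opcode, buf + pos, size);
        pos += size; /* record size is known, malformed or not */
    }
}
```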
Improvements to use modern block list and allow list verbiage.
blacklist -> block list
whitelist -> allow listed
blacklisted -> blocked
whitelisted -> allowed
In the case of certificate verification, use "trust" or "verify" when
something is allowed.
Also changed domainlist -> domain list (or DomainList) to match.
The named "shared" is confusing, especially now that these features are
built as a static library instead of being directly compiled into the
various applications.