Commit graph

81 commits

Author SHA1 Message Date
Val S.
d4114e0d2c
Fix static analysis code quality issues; Fix old libjson-c support (#1574)
`clamscan/manager.c`: Fix double-free in an error condition in `scanfile()`.

`common/optparser.c`: Fix uninitialized use of the `numarg` variable when
`arg` is `NULL`.

`libclamav/cache.c`: Don't check if `ctx->fmap` is `NULL` when we've
already dereferenced it.

`libclamav/crypto.c`: The `win_exception` variable and associated logic
are Windows-specific and so need preprocessor platform checks. Otherwise
they generate unused-variable warnings.

`libclamav/crypto.c`: Check for `size_t` overflow of the `byte_read`
variable in the `cl_hash_file_fd_ex()` function.
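A minimal sketch of the kind of overflow guard described here (the helper name and shape are hypothetical, not the actual `cl_hash_file_fd_ex()` code):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical accumulator: fail before `*total += chunk` can wrap around. */
static int add_bytes_read(size_t *total, size_t chunk)
{
    if (chunk > SIZE_MAX - *total) {
        return -1; /* adding chunk would overflow size_t */
    }
    *total += chunk;
    return 0;
}
```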

`libclamav/crypto.c`: Fix a memory leak in the `cl_hash_file_fd_ex()`
function.

`libclamav/fmap.c`: Correctly free the `name` and `path` pointers if
`fmap_duplicate()` fails. We also need to clear those variables when
duplicating the parent `map` so that on error it does not free the wrong
`name` or `path`.

`libclamav/fmap.c`: Refine error handling for `hash_string` cleanup in
`cl_fmap_get_hash()`. Coverity's complaint was that `hash_string` could
never be non-NULL if `status` is not `CL_SUCCESS`. I.e., the cleanup is
dead code. I don't think my cleanup actually "fixes" that, though it is
definitely a better way to do the error handling.
The `if (NULL != hash_string) {` check is still technically dead code.
It safeguards against future changes that may `goto done` between the
allocation and transferring ownership from `hash_string` to `hash_out`.
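As a generic illustration of this pattern (not the actual `cl_fmap_get_hash()` code), the cleanup only frees `hash_string` if ownership was never transferred to the output pointer:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: free `hash_string` in the cleanup block only if
 * ownership was not handed off to `*hash_out` before reaching `done`. */
static int get_hash_string(const char *hex, char **hash_out)
{
    int status        = -1;
    char *hash_string = NULL;

    hash_string = strdup(hex);
    if (NULL == hash_string) {
        goto done;
    }

    /* A future `goto done` added here is what the NULL check guards against. */

    *hash_out   = hash_string; /* transfer ownership */
    hash_string = NULL;
    status      = 0;

done:
    if (NULL != hash_string) {
        free(hash_string);
    }
    return status;
}
```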

`libclamav/others.c`: Fix possible memory leak in `cli_recursion_stack_push()`.

`libclamav/others.c`: Refactor an if/else + switch statement inside
`cli_dispatch_scan_callback()` so that the `CL_SCAN_CALLBACK_ALERT` case
is not dead code. It's also easier to read now.

`libclamav/pdfdecode.c`: For logging, use `%zu` to format `size_t`
instead of casting to `long long` and using `%llu`. Similarly, use the
`STDu32` format string macro for `uint32_t`.

`libclamav/pdfdecode.c`: Fix a possible double-free for the `decoded`
pointer in `filter_lzwdecode()`.

`libclamav/pdfdecode.c`: Remove the `if (capacity > UINT_MAX) {`
overflow check inside `filter_lzwdecode()`, which didn't do anything.
The `capacity` variable at this point is a fixed value, so I also changed
the `avail_out` to be that fixed `INFLATE_CHUNK_SIZE` value rather than
using `capacity`. It is more straightforward and replicates how similar
logic works later in the file.
I also removed the copy-pasted `(Bytef *)` cast, which didn't really do
anything and was copied from a different algorithm. The LZW
implementation's interface doesn't use `Bytef`.

`libclamav/readdb.c`: Fix a possible NULL-deref on the `matcher` variable
in the error handling/cleanup code if the function fails.

`libclamav/scanners.c`: Fix an issue where the return value from some of
the parsers may be lost/overridden by the call to
`cli_dispatch_scan_callback()` just after the `done:` label in
`cli_magic_scan()`.

`libclamav/scanners.c`: Silence an unused-return value warning when
calling `cli_basename()`.

`sigtool/sigtool.c` and `unit_tests/check_regex.c`:
Fix possible NULL-derefs of the `ctx.recursion_stack` pointer in the error
handling for several functions.

Also, and this isn't a Coverity thing:

`libclamav/json_api.c` and `libclamav/others.c`:
Fix support for libjson-c version 0.13 and older.
I don't think we *should* be using the old version, but some environments
such as the current OSS-Fuzz base image are older and still use it.
The issue is that `json_object_new_uint64()` was introduced in a later
libjson-c version, so we have to fall back to using `json_object_new_int64()`
with older libjson-c, provided the int we're storing isn't too big.
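A minimal sketch of that fallback, assuming json-c's `JSON_C_VERSION_NUM` macro and a 0.14 cutoff for `json_object_new_uint64()` (the cutoff and helper are illustrative, not the actual ClamAV change):

```c
#include <json-c/json.h>
#include <stdint.h>

/* Assumption: json_object_new_uint64() first appeared in json-c 0.14. */
static json_object *new_json_uint64(uint64_t value)
{
#if defined(JSON_C_VERSION_NUM) && (JSON_C_VERSION_NUM >= ((0 << 16) | (14 << 8) | 0))
    return json_object_new_uint64(value);
#else
    if (value > (uint64_t)INT64_MAX) {
        return NULL; /* too large to store as a signed 64-bit int */
    }
    return json_object_new_int64((int64_t)value);
#endif
}
```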

CLAM-2768
2025-09-26 18:26:00 -04:00
Valerie Snyder
7f25b928de
Record scan matches (evidence) at each recursion layer
Move recording of evidence (aka Strong, PUA, and Weak indicators) so that
it is done in each layer of a scan and passed up to the parent layer, with
the top level only connecting the results at the very end of the scan.

This is needed to provide access to the last alert for a given layer when
we upgrade the scan callbacks.

Note that when adding evidence from a child layer that is a normalized
layer, we do not want to increase the depth. It should appear as though
the match occurred on the parent layer.
This is for two reasons:
1. We don't run the scan callbacks on normalized layers.
2. Future matches on Weak Indicators should be able to treat normalized
   layer matches the same as original file matches. Keep reading for
   more about Weak Indicators.

Recording scan matches at each recursion layer is also needed to support
Weak Indicators, a feature where an alerting signature (aka Strong
Indicator) may require the match of a non-alerting signature (aka
Weak Indicator) on the same layer or on child layers in order to alert.

Support for Weak indicators was blocked by not keeping track of where
indicators were found. So this commit also enables support for recording
Weak indicators.
Like PUA, Weak indicators are treated differently based on the signature
prefix. That is, any signature starting with "Weak." won't cause an
alert on its own.
The next step to completing Weak Indicator support will be adding a
logical subsignature feature to depend on a weak indicator match.

CLAM-2626
CLAM-2485
2025-08-14 21:23:34 -04:00
Valerie Snyder
aa7b7e9421
Swap clean cache from MD5 to SHA2-256
Change the clean-cache to use SHA2-256 instead of MD5.
Note that all references are changed to specify "SHA2-256" now instead
of "SHA256", for clarity. But there is no plan to add support for SHA3
algorithms at this time.

Significant code cleanup. E.g.:
- Implemented goto-done error handling.
- Used `uint8_t *` instead of `unsigned char *`.
- Used `bool` for boolean checks, rather than `int`.
- Used `#defines` instead of magic numbers.
- Removed duplicate `#defines` for things like hash length.

Add new option to calculate and record additional hash types when the
"generate metadata JSON" feature is enabled:
- libclamav option: `CL_SCAN_GENERAL_STORE_EXTRA_HASHES`
- clamscan option: `--json-store-extra-hashes` (default off)
- clamd.conf option: `JsonStoreExtraHashes` (default 'no')
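A rough sketch of enabling the new libclamav flag alongside metadata collection (database loading and most error handling omitted; this is not the exact clamscan code):

```c
#include <clamav.h>
#include <stdio.h>

/* Sketch: scan one file with metadata JSON plus the new extra-hashes flag. */
int scan_with_extra_hashes(const char *path)
{
    const char *virname            = NULL;
    unsigned long scanned          = 0;
    struct cl_scan_options options = {0};
    struct cl_engine *engine;
    int ret;

    if (CL_SUCCESS != cl_init(CL_INIT_DEFAULT))
        return 1;

    engine = cl_engine_new();
    if (NULL == engine)
        return 1;

    /* ... load signatures with cl_load() here ... */

    if (CL_SUCCESS != cl_engine_compile(engine)) {
        cl_engine_free(engine);
        return 1;
    }

    options.general |= CL_SCAN_GENERAL_COLLECT_METADATA;   /* "generate metadata JSON" */
    options.general |= CL_SCAN_GENERAL_STORE_EXTRA_HASHES; /* new option from this commit */

    ret = cl_scanfile(path, &virname, &scanned, engine, &options);
    printf("result: %d, virus: %s\n", ret, virname ? virname : "(none)");

    cl_engine_free(engine);
    return (ret == CL_VIRUS || ret == CL_CLEAN) ? 0 : 1;
}
```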

Renamed the sigtool option `--sha256` to `--sha2-256`.
The original option is still functional, but is deprecated.

For the "generate metadata JSON" feature, the file hash is now stored as
"sha2-256" instead of "FileMD5". If you enable the "extra hashes" option,
then it will also record "md5" and "sha1".

Deprecate and disable the internal "SHA collect" feature.
This option had been hidden behind C #ifdef checks for an option that
wasn't exposed through CMake, so it was basically unavailable anyway.

Changes to calculate file hashes when they're needed and no sooner.

For the FP feature in the matcher module, I have mimicked the
optimization in the FMAP scan routine, which makes it so that it can
calculate multiple hashes in a single pass of the file.
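As an illustration of the single-pass idea (generic OpenSSL EVP code, not the actual matcher/FMAP routines), each chunk read from the file feeds all digest contexts:

```c
#include <openssl/evp.h>
#include <stdio.h>

/* Illustrative only: compute MD5, SHA-1, and SHA2-256 in one read pass. */
static int hash_all(FILE *fp,
                    unsigned char md5[EVP_MAX_MD_SIZE],
                    unsigned char sha1[EVP_MAX_MD_SIZE],
                    unsigned char sha256[EVP_MAX_MD_SIZE])
{
    unsigned char buf[8192];
    size_t n;
    EVP_MD_CTX *c_md5    = EVP_MD_CTX_new();
    EVP_MD_CTX *c_sha1   = EVP_MD_CTX_new();
    EVP_MD_CTX *c_sha256 = EVP_MD_CTX_new();
    int ok = c_md5 && c_sha1 && c_sha256 &&
             EVP_DigestInit_ex(c_md5, EVP_md5(), NULL) &&
             EVP_DigestInit_ex(c_sha1, EVP_sha1(), NULL) &&
             EVP_DigestInit_ex(c_sha256, EVP_sha256(), NULL);

    while (ok && (n = fread(buf, 1, sizeof(buf), fp)) > 0) {
        ok = EVP_DigestUpdate(c_md5, buf, n) &&    /* one read of the file ... */
             EVP_DigestUpdate(c_sha1, buf, n) &&   /* ... updates every ...    */
             EVP_DigestUpdate(c_sha256, buf, n);   /* ... digest context       */
    }

    ok = ok &&
         EVP_DigestFinal_ex(c_md5, md5, NULL) &&
         EVP_DigestFinal_ex(c_sha1, sha1, NULL) &&
         EVP_DigestFinal_ex(c_sha256, sha256, NULL);

    EVP_MD_CTX_free(c_md5);
    EVP_MD_CTX_free(c_sha1);
    EVP_MD_CTX_free(c_sha256);
    return ok ? 0 : -1;
}
```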

The `HandlerType` feature stores a hash of the file in the scan ctx to
prevent retyping the exact same data more than once.
I removed that hash field and replaced it with an attribute flag that is
applied to the new recursion stack layer when retyping a file.
This also closes a minor bug that would prevent retyping a file with an
all-zero hash. :)

The work upgrading cache.c to support SHA2-256-sized hashes is thanks to:
https://github.com/m-sola

CLAM-255
CLAM-1858
CLAM-1859
CLAM-1860
2025-08-14 21:23:30 -04:00
Val Snyder
7ff29b8c37
Bump copyright dates for 2025 2025-02-14 10:24:30 -05:00
Anthony Chan
8ae5f9a8f2 Defer or avoid file MD5 calculation when cache is disabled 2024-05-06 12:59:07 -07:00
Micah Snyder
9cb28e51e6 Bump copyright dates for 2024 2024-01-22 11:27:17 -05:00
RainRat
1b17e20571
Fix typos (no functional changes) 2024-01-19 09:08:36 -08:00
Craig Andrews
e70493cf61 Add options: --cache-size, CacheSize
* Add new clamd and clamscan option --cache-size

This option allows you to set the number of entries the cache can store.

Additionally, introduce CacheSize as a clamd.conf
synonym for --cache-size.

Fixes #867
2023-05-16 19:18:30 -07:00
Micah Snyder
6eebecc303 Bump copyright for 2023 2023-02-12 11:20:22 -08:00
Micah Snyder
621381e0cd Allmatch-mode overhaul, part 1: append_virus
Rework the append_virus mechanism to store evidence (strong indicators,
pua indicators, and eventually weak indicators) in vectors. When
appending a "virus", we will return CLEAN when in allmatch-mode, and
simply add the indicator to the appropriate vector.
Later, we can check whether there are any alerts to return by summing
the lengths of the strong and PUA indicator vectors.

This does away with storing the latest "virname" in the scan context.
Instead, we can query for the last indicator in the evidence, giving
priority to strong indicators.

When heuristic-precedence is enabled, add PUA as Strong instead of
as PotentiallyUnwanted. This way, they will be treated equally and
reported in order in allmatch mode.

Also document reason for disabling cache with metadata JSON enabled
2022-10-19 13:13:57 -07:00
Micah Snyder
959fc13111 Clang-format touchup 2022-07-22 12:44:08 -07:00
Mickey Sola
9b026033b4 Fix NULL param crash when caching
Since converting the hash variable from a stack array to a pointer, the
pointer may now be NULL if the file is truncated after the scan starts
but before the hash is calculated. This race condition would result in
a NULL pointer dereference and crash.

This commit adds additional NULL parameter checks.

Thanks to Alexander Patrakov and Antoine Gatineau for reporting this issue.

Resolves: https://github.com/Cisco-Talos/clamav/issues/440
2022-05-01 12:24:19 -07:00
Micah Snyder
375ecf678c Update vendored TomsFastMath code to 0.13.1
Update the vendored TomsFastMath (TFM) library to v0.13.1.

Resolves: https://bugzilla.clamav.net/show_bug.cgi?id=11992

I removed compatibility macros from when libTomMath was used.
This required removing a bunch of faux-error handling because
the fast-math equivalent functions return void, and cannot fail.

The previously used version had named the header "bignum_fast.h"
instead of "tfm.h" and had customizations in that header to enable
TFM_CHECK all the time, and also TFM_NO_ASM if __GNUC__ is not defined
or if the system isn't a 64-bit architecture. This update uses tfm.h
as-is, and has CMake define TFM_CHECK and TFM_NO_ASM as needed.

I've kept bignum.h as an interface to including tfm.h so that in
the future we can more easily add support for system-installed
TomsFastMath instead of the vendored one, taking inspiration from
Debian's patch to support system-TomsFastMath.

See: https://salsa.debian.org/clamav-team/clamav/-/blob/unstable/debian/patches/add-support-for-system-tomsfastmath.patch
2022-02-10 12:54:23 -07:00
Scott Hutton
5a3e0a6190
Improve mutex safety when caching scan results
The logging functions use a callback to print log messages. Because the
callback could be anything provided by an application, it isn't safe to
log while holding a mutex.

This commit defers error reporting in cacheset_add() to prevent running
the callback while the mutex is held.
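A generic sketch of that deferral (not the actual `cacheset_add()` code): note the failure inside the critical section, then report it only after the mutex is released.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical add function: the (application-provided) log callback is
 * only invoked after pthread_mutex_unlock(). */
void cache_add(const void *item)
{
    int add_failed = 0;

    pthread_mutex_lock(&cache_mutex);
    if (NULL == item) { /* stand-in for the real failure condition */
        add_failed = 1;
    } else {
        /* ... insert the item into the cache ... */
    }
    pthread_mutex_unlock(&cache_mutex);

    if (add_failed) {
        fprintf(stderr, "cache: failed to add entry\n"); /* safe: no lock held */
    }
}
```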
2022-01-27 10:08:59 -08:00
micasnyd
140c88aa4e Bump copyright for 2022
Includes minor format corrections.
2022-01-09 14:23:25 -07:00
Micah Snyder
db013a2bfd libclamav: Fix scan recursion tracking
Scan recursion is the process of identifying files embedded in other
files and then scanning them, recursively.

Internally this process is more complex than it may sound because a file
may have multiple layers of types before finding a new "file".

At present we treat the recursion count in the scanning context as an
index into both our fmap list AND our container list. These two lists
are conceptually a part of the same thing and should be unified.

But what's concerning is that the "recursion level" isn't actually
incremented or decremented at the same time that we add a layer to the
fmap or container lists but instead is more touchy-feely, increasing
when we find a new "file".

To account for this shadiness, the size of the fmap and container lists
has always been a little longer than our "max scan recursion" limit so
we don't accidentally overflow the fmap or container arrays (!).

I've implemented a single recursion-stack as an array, similar to before,
which includes a pointer to each fmap at each layer, along with the size
and type. Push and pop functions add and remove layers whenever a new
fmap is added. A boolean argument when pushing indicates if the new layer
represents a new buffer or new file (descriptor). A new buffer will reset
the "nested fmap level" (described below).

This commit also provides a solution for an issue where we detect
embedded files more than once during scan recursion.

For illustration, imagine a tarball named foo.tar.gz with this structure:
| description               | type  | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz                | GZ    | 0         | 0                 |
| └── foo.tar               | TAR   | 1         | 0                 |
|     ├── bar.zip           | ZIP   | 2         | 1                 |
|     │   └── hola.txt      | ASCII | 3         | 0                 |
|     └── baz.exe           | PE    | 2         | 1                 |

But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:
| description               | type  | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| baz.exe                   | PE    | 0         | 0                 |
| ├── sfx.zip               | ZIP   | 1         | 1                 |
| │   └── hello.txt         | ASCII | 2         | 0                 |
| └── sfx.7z                | 7Z    | 1         | 1                 |
|     └── world.txt         | ASCII | 2         | 0                 |

(A) If we scan for embedded files at any layer, we may detect:
| description               | type  | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz                | GZ    | 0         | 0                 |
| ├── foo.tar               | TAR   | 1         | 0                 |
| │   ├── bar.zip           | ZIP   | 2         | 1                 |
| │   │   └── hola.txt      | ASCII | 3         | 0                 |
| │   ├── baz.exe           | PE    | 2         | 1                 |
| │   │   ├── sfx.zip       | ZIP   | 3         | 1                 |
| │   │   │   └── hello.txt | ASCII | 4         | 0                 |
| │   │   └── sfx.7z        | 7Z    | 3         | 1                 |
| │   │       └── world.txt | ASCII | 4         | 0                 |
| │   ├── sfx.zip           | ZIP   | 2         | 1                 |
| │   │   └── hello.txt     | ASCII | 3         | 0                 |
| │   └── sfx.7z            | 7Z    | 2         | 1                 |
| │       └── world.txt     | ASCII | 3         | 0                 |
| ├── sfx.zip               | ZIP   | 1         | 1                 |
| └── sfx.7z                | 7Z    | 1         | 1                 |

(A) is bad because it scans content more than once.

Note that for the GZ layer, it may detect the ZIP and 7Z if the
signature hits on the compressed data, which it might, though
extracting the ZIP and 7Z will likely fail.

The reason the above doesn't happen now is that we restrict embedded
type scans for a bunch of archive formats to include GZ and TAR.

(B) If we scan for embedded files at the foo.tar layer, we may detect:
| description               | type  | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz                | GZ    | 0         | 0                 |
| └── foo.tar               | TAR   | 1         | 0                 |
|     ├── bar.zip           | ZIP   | 2         | 1                 |
|     │   └── hola.txt      | ASCII | 3         | 0                 |
|     ├── baz.exe           | PE    | 2         | 1                 |
|     ├── sfx.zip           | ZIP   | 2         | 1                 |
|     │   └── hello.txt     | ASCII | 3         | 0                 |
|     └── sfx.7z            | 7Z    | 2         | 1                 |
|         └── world.txt     | ASCII | 3         | 0                 |

(B) is almost right. But we can achieve it easily enough by only scanning for
embedded content in the current fmap when the "nested fmap level" is 0.
The upside is that it should safely detect all embedded content, even if
it may think the sfx.zip and sfx.7z are in foo.tar instead of in baz.exe.

The biggest risk I can think of affects ZIPs. SFXZIP detection
is identical to ZIP detection, which is why we don't allow SFXZIP to be
detected if inside of a ZIP. If we only allow embedded type scanning at
fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP
if the bar.exe was not compressed in foo.zip and if non-compressed files
extracted from ZIPs aren't extracted as new buffers:
| description               | type  | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.zip                   | ZIP   | 0         | 0                 |
| └── bar.exe               | PE    | 1         | 1                 |
|     └── sfx.zip           | ZIP   | 2         | 2                 |

Provided that we ensure all files extracted from zips are scanned in
new buffers, option (B) should be safe.

(C) If we scan for embedded files at the baz.exe layer, we may detect:
| description               | type  | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz                | GZ    | 0         | 0                 |
| └── foo.tar               | TAR   | 1         | 0                 |
|     ├── bar.zip           | ZIP   | 2         | 1                 |
|     │   └── hola.txt      | ASCII | 3         | 0                 |
|     └── baz.exe           | PE    | 2         | 1                 |
|         ├── sfx.zip       | ZIP   | 3         | 1                 |
|         │   └── hello.txt | ASCII | 4         | 0                 |
|         └── sfx.7z        | 7Z    | 3         | 1                 |
|             └── world.txt | ASCII | 4         | 0                 |

(C) is right. But it's harder to achieve. For this example we can get it by
restricting 7ZSFX and ZIPSFX detection to when scanning an executable.
But that may mean losing detection of archives embedded elsewhere.
And we'd have to identify allowable container types for each possible
embedded type, which would be very difficult.

So this commit aims to solve the issue the (B)-way.

Note that in all situations, we still have to scan with file typing
enabled to determine if we need to reassign the current file type, such
as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-
compressed. Detection of DMG and a handful of other types relies on
finding data partway through or near the end of a file before
reassigning the entire file as the new type.

Other fixes and considerations in this commit:

- The utf16 HTML parser has weak error handling, particularly with respect
  to creating a nested fmap for scanning the ascii decoded file.
  This commit cleans up the error handling and wraps the nested scan with
  the recursion-stack push()/pop() for correct recursion tracking.

  Before this commit, each container layer had a flag to indicate if the
  container layer is valid.
  We need something similar so that the cli_recursion_stack_get_*()
  functions ignore normalized layers. Details...

  Imagine an LDB signature for HTML content that specifies a ZIP
  container. If the signature actually alerts on the normalized HTML and
  you don't ignore normalized layers for the container check, it will
  appear as though the alert is in an HTML container rather than a ZIP
  container.

  This commit accomplishes this with a boolean you set in the scan context
  before scanning a new layer. Then when the new fmap is created, it will
  use that flag to set a similar flag for the layer. The context flag is
  then reset so that anything scanned after this doesn't have that flag.
  The flag allows the new recursion_stack_get() function to ignore
  normalized layers when iterating the stack to return a layer at a
  requested index, negative or positive.

  Scanning extracted/normalized JavaScript and VBA should also
  use the 'layer is normalized' flag.

- This commit also fixes Heuristic.Broken.Executable alert for ELF files
  to make sure that:

  A) these only alert if cli_append_virus() returns CL_VIRUS (aka it
  respects the FP check).

  B) all broken-executable alerts for ELF only happen if the
  SCAN_HEURISTIC_BROKEN option is enabled.

- This commit also cleans up the error handling in cli_magic_scan_dir().
  This was needed so we could correctly apply the layer-is-normalized-flag
  to all VBA macros extracted to a directory when scanning the directory.

- Also fix an issue where exceeding scan maximums wouldn't cause embedded
  file detection scans to abort. Granted we don't actually want to abort
  if max filesize or max recursion depth are exceeded... only if max
  scansize, max files, and max scantime are exceeded.

  Add 'abort_scan' flag to scan context, to protect against depending on
  correct error propagation for fatal conditions. Instead, setting this
  flag in the scan context should guarantee that a fatal condition deep in
  scan recursion isn't lost, which would result in more stuff being scanned
  instead of aborting. This shouldn't be necessary, but some status codes
  like CL_ETIMEOUT never used to be fatal and it's easier to do this than
  to verify every parser only returns CL_ETIMEOUT and other "fatal
  status codes" in fatal conditions.

- Remove duplicate is_tar() prototype from filetypes.c and include
  is_tar.h instead.

- Presently we create the fmap hash when creating the fmap.
  This wastes a bit of CPU if the hash is never needed.
  Now that we're creating fmaps for all embedded files discovered with
  file type recognition scans, this is a much more frequent occurrence and
  really slows things down.

  This commit fixes the issue by only creating fmap hashes as needed.
  This should not only resolve the performance impact of creating fmaps
  for all embedded files, but also should improve performance in general
  (see the sketch below).
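A minimal sketch of the lazy-hashing idea (hypothetical structure, not the real fmap): compute and cache the hash only on first request.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define SHA2_256_LEN 32

/* Hypothetical fmap-like buffer: the hash is unset until first requested. */
typedef struct {
    const unsigned char *data;
    size_t len;
    bool have_hash;
    unsigned char hash[SHA2_256_LEN];
} buffer_t;

/* Stand-in for the real hashing routine. */
int compute_sha2_256(const unsigned char *data, size_t len,
                     unsigned char out[SHA2_256_LEN]);

int buffer_get_hash(buffer_t *buf, unsigned char out[SHA2_256_LEN])
{
    if (!buf->have_hash) {
        if (0 != compute_sha2_256(buf->data, buf->len, buf->hash))
            return -1;
        buf->have_hash = true; /* pay the hashing cost once, and only if needed */
    }
    memcpy(out, buf->hash, SHA2_256_LEN);
    return 0;
}
```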

- Add allmatch check to the zip parser after the central-header meta
  match. That way we don't get multiple alerts with the same match except in
  allmatch mode. Clean up error handling in the zip parser a tiny bit.

- Fixes to ensure that the scan limits such as scansize, filesize,
  recursion depth, # of embedded files, and scantime are always reported
  if AlertExceedsMax (--alert-exceeds-max) is enabled.

- Fixed an issue where non-fatal alerts for exceeding scan maximums may
  mask signature matches later on. I changed it so these alerts use the
  "possibly unwanted" alert-type and thus only alert if no other alerts
  were found or if all-match or heuristic-precedence are enabled.

- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata
  when the --gen-json feature is enabled. These will show up once under
  "ParseErrors" the first time a limit is exceeded. In the present
  implementation, only one limits-exceeded event will be added, so as to
  prevent a malicious or malformed sample from filling the JSON buffer
  with millions of events and using a tonne of RAM.
2021-10-25 16:02:29 -07:00
Micah Snyder (micasnyd)
b9ca6ea103 Update copyright dates for 2021
Also fixes up clang-format.
2021-03-19 15:12:26 -07:00
Micah Snyder
cbe2cba4d1 libclamav: Generate hash for each new fmap
Signature alerts on content extracted into a new fmap such as normalized
HTML resulted in checking FP signatures against the fmap's hash value
that was initialized to all zeroes, and never computed.

This patch will enable FP signatures of normalized HTML files or
other content that is extracted to a new fmap to work.  This patch
doesn't resolve the issue that normal people will write FP signatures
targeting the original file, not the normalized file and thus won't
really see benefit from this bug-fix.

Additional work is needed to traverse the fmap recursion lists and
FP-check all parent fmaps when an alert occurs.  In addition, the HTML
normalization method of temporarily overriding the ctx->fmap instead of
increasing the recursion depth and doing ctx->fmap++/-- will need to be
corrected for fmap reverse recursion traversal to work.
2020-04-20 11:26:43 -07:00
Micah Snyder
206dbaefe8 Update copyright dates for 2020 2020-01-03 15:44:07 -05:00
Micah Snyder
9baa0ad708 Fixes to alleviate warnings with regards to mempool usage. 2019-10-02 16:08:26 -04:00
Micah Snyder
ee40795fe2 Converted mpool calls to macros when USE_MPOOL is defined to clearly differentiate between function and macro behavior. 2019-10-02 16:08:25 -04:00
Micah Snyder
52cddcbcfd Updating and cleaning up copyright notices. 2019-10-02 16:08:18 -04:00
Micah Snyder
72fd33c8b2 clang-format'd using new .clang-format rules. 2019-10-02 16:08:16 -04:00
Micah Snyder
d7979d4ff7 Restructured scan options flags from a single bitflag field to a structure containing multiple bitflag fields. This also required adding a new function to the bytecode API to get scan options a la carte, and modifying the existing function to hand back scan options in the old/deprecated uint32_t bitflag format. Re-generated bytecode iface header files.
Updated libclamav documentation detailing new scan options structure.
Renamed references to 'algorithmic' detection to 'heuristic' detection. Renamed references to 'properties' to 'collect metadata'.
Renamed references to 'scan all' to 'scan all match'.
Renamed a couple of 'Heuristic.*' signature names as 'Heuristics.*' signatures (plural) to match the majority of other heuristics.
2018-12-02 23:06:59 -05:00
Steven Morgan
7d4213a729 bb11420 - fix preclass/cache interaction. 2015-11-04 14:46:46 -05:00
Mickey Sola
46a35abe56 mass update of copyright headers 2015-09-17 13:41:26 -04:00
Steven Morgan
5d872d36c5 bb11338 - better placement of an assert(). 2015-06-17 17:13:09 -04:00
Shawn Webb
cd94be7a52 Silence a bunch of compiler warnings in libclamav 2014-07-10 18:11:49 -04:00
Shawn Webb
60d8d2c352 Move all the crypto API to clamav.h 2014-07-01 19:38:01 -04:00
Shawn Webb
da6e06dd68 Provide further abstractions to the OpenSSL integration work 2014-02-28 12:12:30 -05:00
Shawn Webb
4fa2f50832 Revert "bb9735 - Add ability to purge engine cache"
This reverts commit 433a335fb9.
2014-02-25 13:21:42 -05:00
Shawn Webb
433a335fb9 bb9735 - Add ability to purge engine cache 2014-02-25 12:51:14 -05:00
Shawn Webb
f077c6174f Fix some race conditions. Fix some memory leaks. 2014-02-13 13:05:50 -05:00
Shawn Webb
a1cbd793f3 Fix all memory leaks introduced by OpenSSL backport. 2014-02-12 17:42:48 -05:00
Shawn Webb
7fb5036fb2 Make Valgrind happy. Rely less on EVP_MD_CTX_create. 2014-02-08 01:42:41 -05:00
Shawn Webb
b2e7c931d0 Use OpenSSL for hashing. 2014-02-08 00:31:12 -05:00
David Raynor
af34d9815e Put engine option checks in the right place for cache functions 2013-12-11 14:58:07 -05:00
Shawn Webb
34e9acb098 Add option to disable the cache. Add a new bitfield in the engine struct that will govern options relating to engine internals. 2013-11-15 19:15:20 +00:00
David Raynor
a3dcc429a6 Scan_all: only cache when truly clean 2013-06-18 12:30:21 -04:00
David Raynor
f2e5083393 cache: cacheset_remove fix and better logging 2013-04-15 17:58:26 -04:00
David Raynor
b89dc676f7 libclamav: splaytree cacheset_remove fix 2013-02-27 11:34:13 -05:00
David Raynor
f703e46afa libclamav: fix null check in cache_remove() 2013-02-07 12:23:45 -05:00
David Raynor
1f765cf976 downgrade cacheset_remove node-not-found to debug level 2012-12-19 17:15:57 -05:00
David Raynor
f129bc1f34 Refactoring cache_remove to engine instead of full context object 2012-11-27 16:17:31 -05:00
David Raynor
59d02e32e5 cache_remove and cacheset_remove functions to support corrections of false negatives 2012-11-27 11:24:52 -05:00
David Raynor
84c35b01f5 bb#5750 2012-09-13 12:16:03 -04:00
aCaB
ed98fae7ad bb#4669 2012-05-08 15:35:26 +02:00
Török Edvin
f304dc688a fmapify: fix const-ness warnings 2012-01-05 14:16:09 +02:00
Török Edvin
a7cf187a0c Make cl_load thread safe (bb #2333).
Parallel cl_load() crash (bb #2333).
Reason is twofold:
 - cache.c had 2 'static' global variables, thus trying to initialize the
   same cache from multiple threads
 - bytecode2llvm.cpp: something in LLVM 2.7 is crashing when loading in
   parallel

Fix is to drop the 'static' on the variable (cache is per engine already).
This also fixes a potential memory leak in clamd!

The other part of the fix is to turn on the mutex around bytecode compilation
always. We don't call cl_load in parallel, so this doesn't affect clamd, but
some may need to call cl_load in parallel.
2010-11-04 21:53:03 +02:00
Török Edvin
91505254dc Fix build without pthreads bb #1897. 2010-05-07 13:00:11 +03:00