We sync the spam training data encrypted through our server to make
sure that all clients of a specific user behave the same when
classifying mails. Additionally, this enables spam classification
in the webApp. We compress the training data vectors
(see clientSpamTrainingDatum) with SparseVectorCompressor.ts before
uploading them to our server. When a user has ClientSpamClassification
enabled, the spam training data sync happens for every received mail.
ClientSpamTrainingDatum instances are not stored in the CacheStorage.
No entityEvents are emitted for this type.
However, we retrieve creations and updates for ClientSpamTrainingData
through the modifiedClientSpamTrainingDataIndex.
We calculate a threshold per classifier based on the dataset's
ham-to-spam ratio, and we also subsample our training data to cap the
ham-to-spam ratio within a certain limit.
Co-authored-by: jomapp <17314077+jomapp@users.noreply.github.com>
Co-authored-by: das <das@tutao.de>
Co-authored-by: abp <abp@tutao.de>
Co-authored-by: Kinan <104761667+kibibytium@users.noreply.github.com>
Co-authored-by: sug <sug@tutao.de>
Co-authored-by: nif <nif@tutao.de>
Co-authored-by: map <mpfau@users.noreply.github.com>
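The subsampling and threshold logic described above could look roughly like the following sketch. `MAX_HAM_TO_SPAM_RATIO`, the deterministic slice, and the threshold formula are illustrative assumptions, not the actual values or code used by the client:

```typescript
// Sketch: cap the ham-to-spam ratio by subsampling ham, and derive a
// per-classifier decision threshold from the resulting class balance.
// The constant and the formula below are assumptions for illustration.
type TrainingDatum = { isSpam: boolean; vector: number[] }

const MAX_HAM_TO_SPAM_RATIO = 4

function subsample(data: TrainingDatum[]): TrainingDatum[] {
	const spam = data.filter((d) => d.isSpam)
	const ham = data.filter((d) => !d.isSpam)
	const maxHam = spam.length * MAX_HAM_TO_SPAM_RATIO
	// keep at most maxHam ham examples (deterministic prefix here;
	// a real implementation would more likely sample randomly)
	return [...spam, ...ham.slice(0, maxHam)]
}

function threshold(data: TrainingDatum[]): number {
	const spamCount = data.filter((d) => d.isSpam).length
	const hamCount = data.length - spamCount
	// shift the decision boundary toward the majority class
	return hamCount / (hamCount + spamCount)
}
```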
Add the header fields (sender, toRecipients, ccRecipients,
bccRecipients, authStatus) to the anti-spam vectors. We also improve
some of the preprocessing steps and add offline migrations that delete
the old spam tables.
Co-authored-by: amm <amm@tutao.de>
Co-authored-by: jhm <17314077+jomapp@users.noreply.github.com>
`applicationWillEnterForeground` is not called on app startup, only on
the transition from the background to the foreground state.
This means the counter would only get reset when the app came to the
foreground from the background state.
We did not notice this because notifications are also reset via
NativePushFacade.
We fixed it by using `applicationDidBecomeActive` instead. It is a
different lifecycle event, but it also works, perhaps even better.
Close #9922
Co-authored-by: hrb-hub <hrb-hub@users.noreply.github.com>
Before, the CI depended on precompiled binaries from the emscripten SDK.
Now we use our own package (see tuta-wasm-tools).
Co-authored-by: mab <mab@tutao.de>
We can't have the fallback for the Rust libraries, which are necessary
now, so there is no point in maintaining it for the C libraries either.
Co-authored-by: vis <vis@tutao.de>
Co-authored-by: mab <mab@tutao.de>
Add new labels in `social networks` and `Instant messengers`.
Changed how URLs for social IDs are built to avoid creating invalid
links. Also removed the forced "www". Added a fallback for export.
Close #8831
Co-authored-by: ivk <ivk@tutao.de>
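The kind of URL building described above could be sketched as follows. The platform base URLs, function name, and fallback behavior here are illustrative assumptions, not the client's actual mapping:

```typescript
// Sketch: build a link for a social ID without forcing a "www." prefix
// and without producing invalid URLs. The base URLs are assumptions.
const BASE_URLS: Record<string, string> = {
	twitter: "https://twitter.com/",
	github: "https://github.com/",
}

function socialUrl(platform: string, id: string): string | null {
	// if the user already entered a full URL, use it as-is
	if (id.startsWith("http://") || id.startsWith("https://")) return id
	const base = BASE_URLS[platform]
	// fallback: no link; callers (e.g. export) keep the raw value
	if (base == null) return null
	// strip a leading "@" and escape the rest to keep the URL valid
	return base + encodeURIComponent(id.replace(/^@/, ""))
}
```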
Adding any Organization field (Company, Role, or Department) would
create a new Organization section instead of updating the existing one.
Fixed by updating the existing Organization section instead of creating
a new one, and by removing it entirely when all fields are empty to
prevent showing an empty Organization section.
Close #9900
Close #9901
Co-authored-by: hrb-hub <hrb-hub@users.noreply.github.com>
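The intended behavior can be sketched like this; the type shape, field names, and helper name are hypothetical, not the actual contact model:

```typescript
// Sketch: reuse the existing Organization section when any field is
// set, and drop the section entirely when all fields end up empty,
// so a second (or empty) section is never created.
type Organization = { company: string; role: string; department: string }

function updateOrganization(
	existing: Organization | null,
	patch: Partial<Organization>,
): Organization | null {
	const merged: Organization = {
		company: patch.company ?? existing?.company ?? "",
		role: patch.role ?? existing?.role ?? "",
		department: patch.department ?? existing?.department ?? "",
	}
	const isEmpty = merged.company === "" && merged.role === "" && merged.department === ""
	// null means: no Organization section is shown at all
	return isEmpty ? null : merged
}
```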
Users who aren't admins cannot edit their own name, as their GroupInfo
is read-only for them.
Note that this is not a new restriction; the edit button was simply
showing up when it should not, and editing your name wouldn't actually
work.
Closes #9835
This caused some emails to not render in dark mode.
Basically, MDN claimed that all colors from getComputedStyle would be
in rgb or rgba form. However, they have since updated their page, and
it turns out that getComputedStyle ALSO passes through colors that are
in a given colorspace.
To fix this, we convert the color into sRGB using the CSS color()
function and extract the RGB/A components using a regular expression.
Co-authored-by: hrb <hrb-hub@users.noreply.github.com>
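The extraction step can be sketched as below. In the app this would run against getComputedStyle() output after conversion; the exact serialized form (`color(srgb r g b / a)`) and the function name are assumptions for illustration:

```typescript
// Sketch: pull the RGB(A) components out of a color serialized in the
// "color(srgb …)" form, with an optional "/ alpha" part.
function extractSrgbComponents(
	color: string,
): { r: number; g: number; b: number; a: number } | null {
	const match = color.match(
		/^color\(srgb\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)(?:\s*\/\s*([\d.]+))?\)$/,
	)
	if (match == null) return null
	return {
		r: Number(match[1]),
		g: Number(match[2]),
		b: Number(match[3]),
		// alpha defaults to fully opaque when omitted
		a: match[4] != null ? Number(match[4]) : 1,
	}
}
```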
Rename EncryptionCompatTest.kt to CompatibilityTest.kt to keep it in sync with the class name.
Note that the tests are not automatically executed by the CI and have to be run manually.
Using REMOVEITEM and ADDITEM when the aggregate ids don't match causes
an invalid association cardinality even though we remove and add,
presumably due to the mismatching aggregate ids. This commit fixes the
issue by using REPLACE instead.
We are attempting to fix the case where a null value was added to the
association: it has happened that a mail instance in the offline
database ended up with null in the list ([null]) for the 1729
aggregation.
This happened on the parsedInstance level; for a zeroOrOne
aggregation it should have been just an empty list.
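The normalization described above can be sketched as follows; the representation of the aggregation as a plain value list and the function name are assumptions, not the actual parsedInstance API:

```typescript
// Sketch: normalize a zeroOrOne aggregation on the parsedInstance
// level so that a list containing null becomes an empty list instead
// of [null].
type ParsedValue = unknown

function normalizeZeroOrOne(values: ParsedValue[]): ParsedValue[] {
	const filtered = values.filter((v) => v != null)
	// zeroOrOne cardinality: at most one element may remain
	if (filtered.length > 1) {
		throw new Error("cardinality violation: more than one aggregate")
	}
	return filtered
}
```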
Usually, entityUpdates with operation UPDATE only have patches
on the entityUpdate instead of the full instance that is sent e.g. on
entityUpdates with operation CREATE. However, in certain
scenarios we are still / again sending full instances on entityUpdates
with operation UPDATE, and should therefore also process them
correctly.
This commit changes the processing of UPDATE entityUpdates to:
-> use entityUpdate.patches
--> if not available, use entityUpdate.instance
---> if not available, re-load the instance from the server
This commit fixes issues where instances were re-loaded from the server
even though the instance was available on the entityUpdate.
We make the PatchMerger more robust by:
- Handling patches sent for newly added attributes (not present in the
parsed instance in the offline storage) without needing to reload the
entire entity from the server.
- Using the session key from the instance we initially read from the
offline storage, as calling mapToInstance on the "intermediate" parsed
instances is not always possible, since instances adhere to
cardinality constraints only after all patches are applied.
Co-authored-by: jomapp <17314077+jomapp@users.noreply.github.com>
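The fallback order for UPDATE entityUpdates described above can be sketched like this; the type shapes and function names (`EntityUpdateData`, `processUpdate`, the callbacks) are illustrative, not the actual client API:

```typescript
// Sketch of the fallback chain: patches -> full instance on the
// entityUpdate -> re-load from the server as a last resort.
type Patch = { attribute: string; value: unknown }
type Instance = Record<string, unknown>
type EntityUpdateData = { patches: Patch[] | null; instance: Instance | null }

async function processUpdate(
	update: EntityUpdateData,
	applyPatches: (patches: Patch[]) => Promise<Instance>,
	loadFromServer: () => Promise<Instance>,
): Promise<Instance> {
	if (update.patches != null) {
		// preferred: apply the patches to the cached instance
		return applyPatches(update.patches)
	} else if (update.instance != null) {
		// a full instance was sent along; no server round trip needed
		return update.instance
	} else {
		// last resort: re-load the instance from the server
		return loadFromServer()
	}
}
```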
By using only integers for progress, we trigger re-renders of the
component fewer times, which results in fewer paint events for the
browser to handle. This also lowers the frequency of the error
"Compositing failed: Unsupported CSS property: width".
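The idea can be sketched as a small wrapper that only forwards a progress value when the integer percentage actually changes; the function names are hypothetical:

```typescript
// Sketch: only propagate progress updates when the integer percentage
// changes, so the component re-renders at most 101 times per run
// instead of once per raw progress event.
function makeProgressReporter(render: (percent: number) => void): (fraction: number) => void {
	let lastPercent = -1
	return (fraction: number) => {
		const percent = Math.floor(fraction * 100)
		if (percent !== lastPercent) {
			lastPercent = percent
			render(percent)
		}
	}
}
```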
We have a rate limit on the server, but it's easy to run into, and the
resulting suspension causes a 30s wait.
This new method should keep the request rate (from one client) just
under the server's limit, without feeling slow for the vast majority
of users.
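Client-side pacing of this kind could look like the sketch below; the limit, the headroom, and the names are illustrative assumptions, not the server's actual numbers:

```typescript
// Sketch: space requests so a single client stays just under an
// assumed server-side rate limit, instead of bursting into the limit
// and suffering the suspension.
const SERVER_LIMIT_PER_SECOND = 10
// pace slightly below the server limit to leave some headroom
const MIN_GAP_MS = Math.ceil(1000 / SERVER_LIMIT_PER_SECOND) + 5

function makePacer(now: () => number): () => number {
	let nextAllowed = 0
	// returns how many ms the caller should wait before sending
	return (): number => {
		const t = now()
		const wait = Math.max(0, nextAllowed - t)
		nextAllowed = Math.max(t, nextAllowed) + MIN_GAP_MS
		return wait
	}
}
```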
When mails before the cutoff were updated and a user received
entityUpdates for these mails, after prefetching, these mails were
immediately deleted from the offline storage again by the
MailOfflineCleaner. However, the MailIndexer re-downloads these very
same mails again afterwards. To fix this issue, we move the execution
of the MailOfflineCleaner after the MailIndexer.
Co-authored-by: das <das@tutao.de>
Co-authored-by: jomapp <17314077+jomapp@users.noreply.github.com>