Just presume that any present side data is actually valid.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Otherwise the buffer for the HDR10+ BlockAdditional would
be clobbered if both are present (the buffers can only be
reused after the ebml_writer_write() call).
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
7faa6ee2aa added support
for writing AV_PKT_DATA_DYNAMIC_HDR_SMPTE_2094_APP5,
yet forgot to update the size of the EBML element buffer.
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Add support for parsing and muxing SMPTE 2094-50 metadata. It will
be stored as an ITU-T T.35 message in the BlockAdditional element with
an AddId type of 4 (which is reserved for ITU-T T.35 in the Matroska
spec).
https://www.matroska.org/technical/codec_specs.html#itu-t35-metadata
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
Precompute the SILK NLSF residual weights from the stage-1 codebooks and use the table during LPC decode. This removes the spec-mandated per-coefficient fixed-point weight calculation from silk_decode_lpc() while preserving the same decoded values.
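The idea can be sketched with a simplified floating-point model (the real decoder uses the fixed-point Laroia weighting mandated by RFC 6716; all names and sizes here are illustrative):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Illustrative sketch only: precompute per-codebook-entry weights once at
 * table-build time, instead of recomputing them for every coefficient on
 * each decode call. The formula is a simplified floating-point Laroia-style
 * weighting; the real SILK decoder uses a fixed-point variant. */
#define NLSF_ORDER    4
#define CODEBOOK_SIZE 2

static double weight_tab[CODEBOOK_SIZE][NLSF_ORDER];

static void precompute_weights(const double cb[CODEBOOK_SIZE][NLSF_ORDER])
{
    for (size_t i = 0; i < CODEBOOK_SIZE; i++) {
        for (size_t k = 0; k < NLSF_ORDER; k++) {
            /* Laroia-style weight: inverse distances to the neighboring
             * NLSFs, with 0 and 1 as the implicit outer boundaries. */
            double lo = (k == 0)              ? cb[i][k]       : cb[i][k] - cb[i][k - 1];
            double hi = (k == NLSF_ORDER - 1) ? 1.0 - cb[i][k] : cb[i][k + 1] - cb[i][k];
            weight_tab[i][k] = 1.0 / lo + 1.0 / hi;
        }
    }
}
```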
Instead of implicitly relying on SwsComps.unused, which contains the exact
same information. (cf. ff_sws_op_list_update_comps)
Signed-off-by: Niklas Haas <git@haasn.dev>
The implementation of AARCH64_SWS_OP_LINEAR loops over elements of this mask
to determine which *output* rows to compute. However, it is being set by this
loop to `op->comps.unused`, which is a mask of unused *input* rows. As such,
it should be looking at `next->comps.unused` instead.
This did not result in problems in practice, because none of the linear
matrices happened to trigger this case (more input columns than output rows).
Signed-off-by: Niklas Haas <git@haasn.dev>
Needed to allow us to phase out SwsComps.unused altogether.
It's worth pointing out the change in semantics; while unused tracks the
unused *input* components, the mask is defined as representing the
computed *output* components.
This is 90% the same, except for read/write, pack/unpack, and clear, which
are the only operations that can be used to change the number of components.
Signed-off-by: Niklas Haas <git@haasn.dev>
Makes this logic a lot simpler and less brittle. We can trivially adjust the
list of linear masks that are required, whenever it changes as a result of any
future modifications.
Signed-off-by: Niklas Haas <git@haasn.dev>
Using the power of libswscale/tests/sws_ops -summarize lets us see which
kernels are actually needed by real op lists.
Note: I'm working on a separate series which will obsolete this
whack-a-mole implementation game altogether, by generating a list of all possible op kernels
at compile time.
Signed-off-by: Niklas Haas <git@haasn.dev>
This is far more commonly used without an offset than with, so having it
there prevents these special cases from actually doing much good.
Signed-off-by: Niklas Haas <git@haasn.dev>
First vector is %2, not %3. This was never triggered before because none of
the existing masks hit this exact case.
Signed-off-by: Niklas Haas <git@haasn.dev>
Since this now has an explicit mask, we can just check that directly, instead
of relying on the unused comps hack/trick.
Additionally, this also allows us to distinguish between fixed value and
arbitrary value clears by just having the SwsOpEntry contain NaN values iff
they support any clear value.
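A minimal sketch of that convention (hypothetical helper, not the actual SwsOpEntry layout): a NaN stored in the entry acts as a wildcard for "any clear value", while a concrete number means only that fixed value is supported:

```c
#include <assert.h>
#include <math.h>
#include <stdbool.h>

/* Hypothetical illustration of the NaN-as-wildcard convention: an entry
 * stores NaN where it accepts an arbitrary clear value, and a concrete
 * number where it only supports that fixed value. */
static bool clear_value_matches(float entry, float requested)
{
    return isnan(entry) || entry == requested;
}
```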
Signed-off-by: Niklas Haas <git@haasn.dev>
This does come with a slight change in behavior, as we now don't print the
range information in the case that the range is only known for *unused*
components. However, in practice, that's already guaranteed by update_comps()
stripping the range info explicitly in this case.
Signed-off-by: Niklas Haas <git@haasn.dev>
Instead of implicitly excluding NaN values if ignore_den0 is set. This
gives callers more explicit control over which values to print, and in
doing so, makes sure "unintended" NaN values are properly printed as such.
Signed-off-by: Niklas Haas <git@haasn.dev>
Instead of implicitly testing for NaN values. This is mostly a straightforward
translation, but we need some slight extra boilerplate to ensure the mask
is correctly updated when e.g. commuting past a swizzle.
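The mask bookkeeping can be illustrated generically (hypothetical helper, not the actual swscale code): commuting a per-component mask past a swizzle means rerouting each bit through the swizzle's component mapping:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: swizzle[i] names the source component that ends up in
 * output component i. A mask defined on the source side must be re-indexed
 * when an operation commutes past the swizzle. */
static uint8_t commute_mask_past_swizzle(uint8_t mask, const int swizzle[4])
{
    uint8_t out = 0;
    for (int i = 0; i < 4; i++)
        if (mask & (1u << swizzle[i]))
            out |= (uint8_t)(1u << i);
    return out;
}
```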
Signed-off-by: Niklas Haas <git@haasn.dev>
This accidentally unconditionally overwrote the entire clear mask, since
Q(n) always set the denominator to 1, resulting in all channels being
cleared instead of just the ones with nonzero denominators.
Signed-off-by: Niklas Haas <git@haasn.dev>
This currently fails completely for images smaller than 12x12, and even at that
size, the limited resolution makes these tests a bit useless.
At the risk of triggering a lot of spurious SSIM regressions for very
small sizes (due to insufficiently modelling the effects of low resolution on
the expected noise), this patch allows us to at least *run* such tests.
Incidentally, 8x8 is the smallest size that passes the SSIM check.
Not only does this take into account extreme edge cases where the plane
padding can significantly exceed the actual width/stride, but it also
correctly accounts for the filter offsets when scaling, which the
previous code completely ignored.
Simpler, more robust, and more correct. Now valgrind passes for 100% of format
conversions for me, with and without scaling.
Signed-off-by: Niklas Haas <git@haasn.dev>
This is a mostly straightforward internal mechanical change that I wanted
to isolate from the following commit to make bisection easier in the case of
regressions.
While the number of tail blocks could theoretically be different for input
vs output memcpy, the extra complexity of handling that mismatch (and
adjusting all of the tail offsets, strides etc.) seems not worth it.
I tested this commit by manually setting `p->tail_blocks` to higher values
and seeing if that still passed the self-check under valgrind.
Signed-off-by: Niklas Haas <git@haasn.dev>
The x86 kernel e.g. assumes that at least one block is processed; so avoid
calling this with an empty width. This is currently only possible if e.g.
operating on an unpadded, very small image whose total linesize is less than
a single block.
Signed-off-by: Niklas Haas <git@haasn.dev>
This code had two issues:
1. It was over-allocating bytes for the input offset map case, and
2. It was hard-coding the assumption that there is only a single tail block
We can fix both of these issues by rewriting the way the tail size is derived.
In the non-offset case, and assuming only 1 tail block:
aligned_w - safe_width
= num_blocks * block_size - (num_blocks - 1) * block_size
= block_size
Additionally, the FFMAX(tail_size_in/out) is unnecessary, because:
tail_size = pass->width - safe_width <= aligned_w - safe_width
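The identity is easy to sanity-check numerically (hypothetical values, not the actual swscale variables):

```c
#include <assert.h>

/* Numeric check of the derivation above: with exactly one tail block,
 * aligned_w - safe_width collapses to block_size. */
static int tail_bound(int width, int block_size)
{
    int num_blocks = (width + block_size - 1) / block_size; /* ceil */
    int aligned_w  = num_blocks * block_size;
    int safe_width = (num_blocks - 1) * block_size;
    return aligned_w - safe_width;
}
```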
In the input offset case, we instead realize that the input kernel already
never over-reads the input due to the filter size adjustment/clamping, so
the only thing we need to ensure is that we allocate extra bytes for the
input over-read.
Signed-off-by: Niklas Haas <git@haasn.dev>
The over_read/write fields are not documented as depending on the subsampling
factor. Actually, they are not documented as depending on the plane at all.
If and when we do actually add support for horizontal subsampling to this
code, it will most likely be by turning all of these key variables into
arrays, which will be an upgrade we get basically for free.
Signed-off-by: Niklas Haas <git@haasn.dev>
This makes it far less likely to accidentally add or remove a +7 bias when
repeating this often-used expression.
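Presumably the expression in question is the usual round-up from bits to bytes; wrapping it in a helper (name hypothetical) makes the +7 bias impossible to misplace:

```c
#include <assert.h>

/* Hypothetical helper: round a bit count up to a whole number of bytes.
 * Centralizing the "+ 7" bias keeps it from being accidentally doubled or
 * dropped at individual call sites. */
static inline int bits_to_bytes(int bits)
{
    return (bits + 7) >> 3;
}
```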
Signed-off-by: Niklas Haas <git@haasn.dev>
This could trigger if e.g. a backend tries to operate on monow formats with
a block size that is not a multiple of 8. In this case, `block_size_in`
would previously be miscomputed (to e.g. 0), which is obviously wrong.
Signed-off-by: Niklas Haas <git@haasn.dev>
As well as weird edge cases like trying to filter `monow` and pixels landing
in the middle of a byte. Realistically, this will never happen - we'd instead
pre-process it into something byte-aligned, and then dispatch a byte-aligned
filter on it.
However, I need to add a check for overflow in any case, so we might as well
add the alignment check at the same time. It's basically free.
Signed-off-by: Niklas Haas <git@haasn.dev>
Prevents valgrind from complaining about operating on uninitialized bytes.
This should be cheap as it's only done once during setup().
Signed-off-by: Niklas Haas <git@haasn.dev>
This code made the input read conditional on the byte count, but not the
output, leading to a lot of over-write for cases like 15, 5.
Signed-off-by: Niklas Haas <git@haasn.dev>
These align the filter size to a multiple of the internal tap grouping
(either 1/2/4 for vpgatherdd, or the XMM size for the 4x4 transposed kernel).
This may over-read past the natural end of the input buffer, if the aligned
size exceeds the true size.
Signed-off-by: Niklas Haas <git@haasn.dev>
The V-Nova LCEVC pipeline processes frames on internal background
worker threads. LCEVC_ReceiveDecoderPicture returns LCEVC_Again (-1)
when the worker has not yet completed the frame, which is the
documented "not ready, try again" response. The original code treated
any non-zero return as a fatal error (AVERROR_EXTERNAL), causing decode
to abort mid-stream.
Poll until LCEVC_Success or a genuine error is returned.
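The fix amounts to a retry loop; sketched here against a stub (the enum values and stub behavior are illustrative stand-ins for the real LCEVC SDK):

```c
#include <assert.h>

/* Illustrative stub for LCEVC_ReceiveDecoderPicture(): the background worker
 * needs a few polls before the frame is ready. */
enum { STUB_Error = -2, STUB_Again = -1, STUB_Success = 0 };

static int stub_polls;

static int stub_receive_picture(void)
{
    return ++stub_polls < 3 ? STUB_Again : STUB_Success;
}

/* The pattern from the fix: treat the "again" code as retry,
 * not as a fatal error. */
static int receive_frame(void)
{
    int ret;
    do {
        ret = stub_receive_picture();
    } while (ret == STUB_Again);
    return ret; /* STUB_Success, or a genuine error code */
}
```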
Signed-off-by: Peter von Kaenel <Peter.vonKaenel@harmonicinc.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Avoids the post_process_opaque_free callback; the only user of
this is already a RefStruct reference, and presumably other users
would want to use a pool for this too, in which case they would
also use RefStruct objects.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>