1. 18 Jul, 2017 1 commit
    • Johann's avatar
      variance: call C comp_avg_pred · 4b9a848b
      Johann authored
      Keep optimized code out of the reference implementation. This matches
      the style of the other sub calls.
      Change-Id: I3da6acd4f2c647b029c420e22ac9410a18259689
  2. 30 May, 2017 1 commit
    • Johann's avatar
      comp_avg_pred: alignment · ea8b4a45
      Johann authored
      x86 requires 16 byte alignment for some vector loads/stores.
      arm does not have the same requirement.
      The asserts are still in avg_pred_sse2.c. This just removes them from
      the common code.
      Change-Id: Ic5175c607a94d2abf0b80d431c4e30c8a6f731b6
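      The alignment check being discussed tests the low four address bits, as
      quoted later in this log (`((intptr_t)comp_pred & 0xf) == 0`). A minimal
      sketch of that check, with an illustrative helper name rather than the
      exact libvpx code:

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdlib.h>

      /* Hypothetical helper: 16-byte alignment as the SSE2 assert checks it --
       * the low 4 bits of the address must be zero. */
      static int is_aligned16(const void *p) {
        return ((intptr_t)p & 0xf) == 0;
      }

      int main(void) {
        /* aligned_alloc (C11) guarantees the requested alignment, so the base
         * pointer passes the x86 check while an 8-byte offset into it fails. */
        uint8_t *buf = aligned_alloc(16, 32);
        assert(is_aligned16(buf));      /* OK for aligned SSE2 loads/stores */
        assert(!is_aligned16(buf + 8)); /* only 8-byte aligned: x86 check fails */
        free(buf);
        return 0;
      }
      ```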
  3. 17 Apr, 2017 1 commit
    • Johann's avatar
      re-enable vpx_comp_avg_pred_sse2 · 9fa24f03
      Johann authored
      Buffers on 32 bit x86 builds are only guaranteed 8 byte alignment. Fixed
      with "AvgPred test: use aligned buffers" and "sad avg: align
      intermediate buffer"
      Also re-enable asserts on the C version.
      Change-Id: I93081f1b0002a352bb0a3371ac35452417fa8514
  4. 14 Apr, 2017 1 commit
    • Johann's avatar
      Disable vpx_comp_avg_pred_sse2 · eaa7cdf0
      Johann authored
      Failures on windows:
      unknown file: error: SEH exception with code 0xc0000005 thrown in the
      test body.
      Alignment check errors on linux:
      test_libvpx: ../libvpx/vpx_dsp/variance.c:230: void
      vpx_comp_avg_pred_c(uint8_t *, const uint8_t *, int, int, const uint8_t
      *, int): Assertion `((intptr_t)comp_pred & 0xf) == 0' failed.
      Change-Id: I5eed5381c0f1a8fe594a128eb415e77232f544ea
  5. 13 Apr, 2017 1 commit
    • Johann's avatar
      vpx_comp_avg_pred: sse2 optimization · 28a86221
      Johann authored
      Provides over 15x speedup for width > 8.
      Due to smaller loads and shifting for width == 8, it gets about an 8x
      speedup. For width == 4 it's only about a 4x speedup because there is a
      lot of shuffling and shifting to get the data properly situated.
      Change-Id: Ice0b3dbbf007be3d9509786a61e7f35e94bdffa8
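      The function signature appears in the assertion quoted later in this
      log; what the body must compute is the rounded per-pixel average of the
      prediction and reference blocks. A minimal C sketch of that reference
      behavior (illustrative, not the exact libvpx source):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Rounded average of two 8-bit values: (a + b + 1) >> 1. */
      static uint8_t avg(uint8_t a, uint8_t b) { return (a + b + 1) >> 1; }

      /* Sketch of the C reference: average the prediction block against the
       * reference block into comp_pred.  comp_pred and pred are contiguous
       * (stride == width); only ref has its own stride. */
      static void comp_avg_pred(uint8_t *comp_pred, const uint8_t *pred,
                                int width, int height, const uint8_t *ref,
                                int ref_stride) {
        for (int i = 0; i < height; ++i) {
          for (int j = 0; j < width; ++j) comp_pred[j] = avg(pred[j], ref[j]);
          comp_pred += width;
          pred += width;
          ref += ref_stride;
        }
      }

      int main(void) {
        const uint8_t pred[4] = { 0, 10, 255, 3 };
        const uint8_t ref[4] = { 0, 11, 255, 4 };
        uint8_t out[4];
        comp_avg_pred(out, pred, 4, 1, ref, 4);
        assert(out[0] == 0 && out[1] == 11 && out[2] == 255 && out[3] == 4);
        return 0;
      }
      ```

      The SSE2 version vectorizes this inner loop, which is why wide blocks
      (16 pixels per load) see the largest speedup.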
  6. 06 Apr, 2017 1 commit
  7. 24 Aug, 2016 1 commit
    • Johann's avatar
      Remove halfpix specialization · d393885a
      Johann authored
      This function only exists as a shortcut to subpixel variance with
      predefined offsets: xoffset = 4 for horizontal, yoffset = 4 for
      vertical, and both for "hv".
      Removing this allows the existing optimizations for the variance
      functions to be called. Instead of having only sse2 optimizations, this
      gives sse2, ssse3, msa and neon.
      Change-Id: Ieb407b423b91b87d33c4263c6a1ad5e673b0efd6
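      A hedged sketch of why offset 4 is the half-pixel case: with an 8-phase
      bilinear filter, phase 4 weights both taps equally and reduces exactly
      to the (a + b + 1) >> 1 half-pixel average. The filter form here is the
      standard bilinear one, but the code is illustrative, not libvpx's:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* 8-phase bilinear interpolation: weight (8 - offset) on a, offset on b,
       * with rounding.  Offset 4 is the half-pixel position. */
      static uint8_t bilinear(uint8_t a, uint8_t b, int offset) {
        return (uint8_t)((a * (8 - offset) + b * offset + 4) >> 3);
      }

      /* The dedicated half-pixel average the removed shortcuts used. */
      static uint8_t halfpix_avg(uint8_t a, uint8_t b) {
        return (a + b + 1) >> 1;
      }

      int main(void) {
        /* offset 4 in the general bilinear filter reproduces the half-pixel
         * average for every input pair, so the specialization was redundant. */
        for (int a = 0; a < 256; ++a)
          for (int b = 0; b < 256; ++b)
            assert(bilinear((uint8_t)a, (uint8_t)b, 4) ==
                   halfpix_avg((uint8_t)a, (uint8_t)b));
        return 0;
      }
      ```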
  8. 28 Jul, 2016 1 commit
  9. 25 Jul, 2016 1 commit
  10. 22 Jun, 2016 1 commit
    • Yaowu Xu's avatar
      Prevent negative variance · ef665996
      Yaowu Xu authored
      Due to the rounding used in the computation, HBD variance computation
      may produce slightly negative values. This commit adds clamping to make
      sure output variance values for 10 and 12 bit are non-negative.
      Change-Id: Id679aa55a4c201958c4c7d28cd8733b9246a71c8
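      The clamp can be sketched as follows: variance is the sum of squared
      errors minus a rounded squared-mean term, and the rounding can push the
      difference just below zero. Names and the shift-based mean term are
      illustrative, not the exact libvpx code:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Hedged sketch: variance = sse - sum^2 / N, with N a power of two
       * expressed as log2_count.  Rounding elsewhere in the high bitdepth
       * path can make the raw difference slightly negative, so clamp it. */
      static int64_t clamped_variance(int64_t sse, int64_t sum,
                                      int log2_count) {
        const int64_t var = sse - ((sum * sum) >> log2_count);
        return var < 0 ? 0 : var;
      }

      int main(void) {
        /* A case where the raw difference is -2: clamping yields a legal 0. */
        assert(clamped_variance(100, 81, 6) == 0); /* 100 - (6561 >> 6) */
        /* A normal case is unaffected by the clamp. */
        assert(clamped_variance(500, 64, 6) == 436); /* 500 - (4096 >> 6) */
        return 0;
      }
      ```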
  11. 16 Jun, 2016 1 commit
    • Yaowu Xu's avatar
      vpx_dsp/variance.c: change to use correct type · e5e998a6
      Yaowu Xu authored
      This commit changes the code to use int64_t to represent the sum of
      pixel differences, which can be negative.
      This fixes a number of ubsan warnings.
      Change-Id: I885f245ae895ab92ca5f3b9848d37024b07aac98
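      A minimal sketch of the accumulator in question: each pixel difference
      can be negative, so the running sum needs a signed type wide enough for
      large blocks. The helper name is illustrative:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Hedged sketch: accumulate pixel differences into a signed 64-bit sum.
       * Each difference lies in [-255, 255] (wider for high bitdepth), so an
       * unsigned or narrow accumulator misbehaves on negative totals -- the
       * class of ubsan warning the commit fixes by using int64_t. */
      static int64_t diff_sum(const uint8_t *a, const uint8_t *b, int n) {
        int64_t sum = 0;
        for (int i = 0; i < n; ++i) sum += (int64_t)a[i] - b[i];
        return sum;
      }

      int main(void) {
        const uint8_t a[4] = { 0, 0, 10, 0 };
        const uint8_t b[4] = { 255, 255, 0, 0 };
        assert(diff_sum(a, b, 4) == -500); /* negative sum is well-defined */
        return 0;
      }
      ```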
  12. 13 Jan, 2016 1 commit
  13. 25 Nov, 2015 1 commit
    • Alex Converse's avatar
      Change highbd variance rounding to prevent negative variance. · 022c848b
      Alex Converse authored
      Always round sum error and sum square error toward zero in variance
      calculations. This prevents variance from becoming negative.
      Avoiding rounding variance at all might be better but would be far
      more invasive.
      Change-Id: Icf24e0e75ff94952fc026ba6a4d26adf8d373f1c
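      The direction of rounding matters because C's integer division truncates
      toward zero while an arithmetic right shift rounds toward negative
      infinity, so a negative sum shifted down loses more magnitude than
      intended. A small sketch of the distinction (illustrative helper name):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Hedged sketch: rounding toward zero via division.  C99 guarantees
       * integer division truncates toward zero, so -3 / 2 == -1, whereas an
       * arithmetic shift would give -3 >> 1 == -2 on common compilers. */
      static int64_t round_toward_zero(int64_t value, int64_t divisor) {
        return value / divisor;
      }

      int main(void) {
        assert(round_toward_zero(-3, 2) == -1); /* toward zero, not -2 */
        assert(round_toward_zero(7, 2) == 3);   /* positive values unchanged */
        return 0;
      }
      ```

      Rounding both the sum error and the sum square error in this direction
      keeps the subtracted terms from overshooting, which is what prevented
      the variance from going negative.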
  14. 07 Jul, 2015 1 commit
  15. 26 May, 2015 1 commit