1. 16 Apr, 2013 - 1 commit
    • Remove the mb_no_coeff_skip flag · e3cfe4e8
      John Koleszar authored
      This flag was added to VP8 to allow a mode where MB-level skipping
      was not allowed, saving a bit per mb. It was never used in practice,
      and hasn't been tested in VP9, so remove it.
      
      Change-Id: Id450ec6904c6d06c1919508e7efc52d05cde5631
  2. 15 Apr, 2013 - 1 commit
  3. 12 Apr, 2013 - 3 commits
    • Enable inter predictor for rectangular block size · 3ba9dd41
      Jingning Han authored
      Combine the superblock inter predictors into a unified function that
      allows a configurable block width and height. Inter prediction for
      block sizes smaller than 16x16 is still handled separately; merging
      it in is left for a later change.
      
      Change-Id: I14075959dd5e221f00c205c99ca35c1c31ef728e
    • Rename B_PRED to I4X4_PRED · 7de5edd1
      Yaowu Xu authored
      So it is consistent with I8x8_PRED.
      
      Change-Id: Iefa65124b2419690d83e526c611129c0ede29d11
    • Move prediction hit counting to update_state(). · 79fd8c29
      Ronald S. Bultje authored
      The probabilities derived from these statistics are used in bitstream
      writing; therefore, we should only do this when we actually decide to
      use macroblock coding (over superblock coding). Derf gains +0.15%.
      
      Change-Id: I196814c070a7c79889590658ce10a6eb07454389
  4. 11 Apr, 2013 - 6 commits
  5. 10 Apr, 2013 - 2 commits
    • Make RD superblock mode search size-agnostic. · b4f6098e
      Ronald S. Bultje authored
      Merge the various super_block_yrd and super_block_uvrd versions into
      one common function that works for all sizes, and make transform size
      selection size-agnostic as well. This fixes a slight bug in the intra
      UV superblock code, where it used the wrong transform size for
      txsz > 8x8, and it now stores the txsz selection for superblocks
      properly (instead of forgetting it). Lastly, it removes the trellis
      search that was done for 16x16 intra predictors, since trellis is
      relatively expensive and should therefore only be done after RD mode
      selection.
      
      Gives basically identical results on derf (+0.009%).
      
      Change-Id: If4485c6f0a0fe4038b3172f7a238477c35a6f8d3
    • Make SB coding size-independent. · a3874850
      Ronald S. Bultje authored
      Merge sb32x32 and sb64x64 functions; allow for rectangular sizes. Code
      gives identical encoder results before and after. There are a few
      macros for rectangular block sizes under the sbsegment experiment; this
      experiment is not yet functional and should not yet be used.
      
      Change-Id: I71f93b5d2a1596e99a6f01f29c3f0a456694d728
  6. 05 Apr, 2013 - 1 commit
  7. 03 Apr, 2013 - 1 commit
  8. 28 Mar, 2013 - 2 commits
    • Framework changes in nzc to allow more flexibility · fe9b5143
      Deb Mukherjee authored
      The patch adds the flexibility to use standard EOB-based coding
      on smaller block sizes and nzc-based coding on larger block sizes.
      Which tx-sizes use nzc-based coding and which use EOB-based coding
      is controlled by a function get_nzc_used() (see the sketch below).
      By default, this function uses nzc-based coding for 16x16 and 32x32
      transform blocks, which seems to bridge the performance gap
      substantially.
      
      All sets are now lower by 0.5% to 0.7%, as opposed to ~1.8% before.
      
      Change-Id: I06abed3df57b52d241ea1f51b0d571c71e38fd0b
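
      A minimal sketch of the selector described above, assuming the usual
      TX_SIZE enum ordering; the thresholds in the real get_nzc_used() of the
      nzc experiment may differ:

        typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;

        /* Use nzc-based coding only for the larger transform sizes,
         * EOB-based coding otherwise (illustrative default). */
        static int get_nzc_used(TX_SIZE tx_size) {
          return tx_size >= TX_16X16;
        }
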
    • Fix crash when --tune=ssim is selected. · befb0393
      Paul Wilkins authored
      Crash fix only. No functional change or testing.
      
      Change-Id: I0c6d114d024c29fc11ae61666f5938f11b01dd6a
  9. 27 Mar, 2013 - 1 commit
    • Removing redundant function arguments. · 17cddb4e
      Dmitry Kovalev authored
      Almost all arguments for vp9_build_inter32x32_predictors_sb and
      vp9_build_inter64x64_predictors_sb can be deduced from the first macroblock
      argument.
      
      Change-Id: I5d477a607586d05698d5b3b9b9bc03891dd3fe83
  10. 26 Mar, 2013 - 3 commits
    • Implicit weighted prediction experiment · 23144d23
      Deb Mukherjee authored
      Adds an experiment that uses a weighted prediction of two INTER
      predictors, where the weight is one of (1/4, 3/4), (3/8, 5/8),
      (1/2, 1/2), (5/8, 3/8) or (3/4, 1/4), and is chosen implicitly
      based on how consistent each predictor is with the already
      reconstructed pixels above and to the left of the current macroblock
      or superblock (see the sketch below).

      Currently the weighting is not applied to SPLITMV modes, which
      default to the usual (1/2, 1/2) weighting; however, the code is in
      place, controlled by a macro. The same weighting is used for the Y
      and UV components, with the weight derived from analyzing the Y
      component only.
      
      Results (over compound inter-intra experiment)
      derf: +0.18%
      yt: +0.34%
      hd: +0.49%
      stdhd: +0.23%
      
      The experiment suggests bigger benefit for explicitly signaled weights.
      
      Change-Id: I5438539ff4485c5752874cd1eb078ff14bf5235a
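
      A minimal sketch of the kind of implicit weight selection described
      above: compare each predictor against the reconstructed border pixels
      and map the result to one of the five fixed weights. The function names
      and error-ratio thresholds are hypothetical, not the experiment's actual
      code:

        #include <stdlib.h>

        /* Sum of absolute differences between a predictor and the already
         * reconstructed pixels above/left of the block. */
        static int border_sad(const unsigned char *pred,
                              const unsigned char *recon, int n) {
          int i, sad = 0;
          for (i = 0; i < n; ++i) sad += abs(pred[i] - recon[i]);
          return sad;
        }

        /* Map the two border errors to a weight index 0..4, corresponding to
         * first-predictor weights 1/4, 3/8, 1/2, 5/8, 3/4 (the second
         * predictor gets the complement). */
        static int pick_implicit_weight(int sad0, int sad1) {
          if (sad0 * 2 <= sad1) return 4;      /* predictor 0 much better */
          if (sad0 * 4 <= sad1 * 3) return 3;  /* predictor 0 somewhat better */
          if (sad1 * 2 <= sad0) return 0;      /* predictor 1 much better */
          if (sad1 * 4 <= sad0 * 3) return 1;  /* predictor 1 somewhat better */
          return 2;                            /* otherwise use (1/2, 1/2) */
        }
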
    • Use above/left (instead of previous in scan-order) as token context. · 790fb132
      Ronald S. Bultje authored
      Pearson correlation for above or left is significantly higher than for
      previous-in-scan-order (absolute values depend on position in scan, but
      in general, we gain about 0.1-0.2 by using either above or left; using
      both basically just makes this even better). For eob branch skipping,
      we continue to use the previous token in scan order.
      
      This helps about 0.9% on derf after re-training on a limited data set.
      Full re-training and results on larger-resolution clips are pending.
      
      Note that this commit breaks trellis, so we can probably get further
      gains out of it by fixing trellis at some later point.
      
      Change-Id: Iead68e296fc3a105cca746b5e3da9555d6010cfe
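
      An illustrative sketch of the above/left context combination, assuming a
      simple "had non-zero coefficients" flag per neighbour; the real context
      derivation in the coefficient coder is more involved:

        /* Token context from the above and left neighbours rather than from
         * the previous token in scan order: 0, 1 or 2 depending on how many
         * neighbours had non-zero coefficients. */
        static int get_token_ctx(int above_has_coeffs, int left_has_coeffs) {
          return (above_has_coeffs != 0) + (left_has_coeffs != 0);
        }
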
    • Modeling default coef probs with distribution · fd18d5df
      Deb Mukherjee authored
      Replaces the default tables for single coefficient magnitudes with
      those obtained from an appropriate distribution. The EOB node
      is left unchanged. The model is represented as a 256-entry codebook
      where the index corresponds to the probability of the Zero or the
      One node. Two variations are implemented, corresponding to whether
      the Zero node or the One node is used as the peg. The main advantage
      is that the default prob tables become considerably smaller and more
      manageable. Besides, there is substantially less risk of over-fitting
      to a training set.
      
      Various distributions were tried, and the one that gives the best
      results is the family of Generalized Gaussian distributions with
      shape parameter 0.75 (see the density below). The results are within
      about 0.2% of fully trained tables for the Zero peg variant, and
      within 0.1% for the One peg variant.
      
      The forward updates are optionally (controlled by a macro)
      model-based, i.e. restricted to conveying only probabilities from the
      codebook. Backward updates can also optionally (controlled by another
      macro) be model-based, but this is turned off by default. Currently,
      model-based forward updates work about as well as unconstrained
      updates, but there is a drop in performance when the backward updates
      are model-based.
      
      The model based approach also allows the probabilities for the key
      frames to be adjusted from the defaults based on the base_qindex of
      the frame. Currently the adjustment function is a placeholder that
      adjusts the prob of EOB and Zero node from the nominal one at higher
      quality (lower qindex) or lower quality (higher qindex) ends of the
      range. The rest of the probabilities are then derived based on the
      model from the adjusted prob of zero.
      
      Change-Id: Iae050f3cbcc6d8b3f204e8dc395ae47b3b2192c9
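
      For reference, the zero-mean generalized Gaussian family referred to
      above has the density below, where alpha is a scale parameter and the
      commit uses the shape beta = 0.75:

        f(x) = \frac{\beta}{2\,\alpha\,\Gamma(1/\beta)}
               \exp\!\left(-\left|\frac{x}{\alpha}\right|^{\beta}\right),
        \qquad \beta = 0.75
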
  11. 22 Mar, 2013 - 1 commit
  12. 16 Mar, 2013 - 1 commit
  13. 14 Mar, 2013 - 2 commits
  14. 13 Mar, 2013 - 1 commit
    • removed reference to "LLM" and "x8" · 00555263
      Yaowu Xu authored
      The commit changed the names of files and functions to remove obsolete
      references to LLM and x8.
      
      Change-Id: I973b20fc1a55149ed68b5408b3874768e6f88516
  15. 09 Mar, 2013 - 1 commit
    • Continued experiment with nonzero count · a28139c8
      Deb Mukherjee authored
      Adds probability updates for extra bits for the nzcs, code for
      getting nzc stats, plus some minor cleanups and fixes.
      
      Change-Id: If2814e7f04fb52f5025ad9f400f3e6c50a00b543
  16. 08 Mar, 2013 - 2 commits
    • Extend diff MV limit from +/-256 to +/-1024 · 2a5278bd
      Jingning Han authored
      Increase the motion search range by 4x. Change the MV_CLASS tree of
      the entropy coder to allow two additional mv classes that cover the
      extended motion vector limit. The codec determines the effective
      motion search range based on the actual frame dimensions.
      
      It provides coding gains:
      
      stdhd 0.39%
      yt    0.56%
      hd    0.47%
      
      The major coding gains are concentrated in sequences with intense
      motion, e.g., ped_1080p gains 7% at high bit-rates and 3% on
      average.
      
      TODO: Need to further tune the rate control and motion search units.
      
      Change-Id: Ib842540a6796fbee5a797809433ef6a477c6d78d
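
      A sketch of the class/range relation implied above, with a hypothetical
      base magnitude: each successive MV class covers a range twice as large
      as the previous one, so two extra classes in the tree double the limit
      twice, 256 -> 512 -> 1024. The exact class boundaries in the codec
      differ:

        /* Upper bound of the magnitude range covered by a given MV class
         * (illustrative base of 32; the real class tables are more detailed). */
        static int mv_class_upper_bound(int mv_class) {
          return 32 << mv_class;  /* class 3 -> 256, class 5 -> 1024 */
        }
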
    • Add support for tx_select in i8x8 encoding in keyframes. · b41dee84
      Ronald S. Bultje authored
      Also enable tx_select for keyframes.
      
      Change-Id: Iadb1231d9fa7af0c8dce3d9b41830b93a302479e
  17. 07 Mar, 2013 - 1 commit
    • Coding non-zero count rather than EOB for coeffs · eb6ef241
      Deb Mukherjee authored
      This patch revamps the entropy coding of coefficients to code first
      a non-zero count per coded block and correspondingly remove the EOB
      token from the token set.
      
      STATUS:
      Main encode/decode code achieving encode/decode sync - done.
      Forward and backward probability updates to the nzcs - done.
      Rd costing updates for nzcs - done.
      Note: The dynamic programming approach used in trellis quantization
      is not exactly compatible with nzcs. A suboptimal approach is used
      instead, where branch costs are updated to account for changes in
      the nzcs.
      
      TODO:
      Training the default probs/counts for nzcs
      
      Change-Id: I951bc1e22f47885077a7453a09b0493daa77883d
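
      A minimal sketch of the quantity now being coded, the per-block count of
      non-zero quantized coefficients that replaces the EOB token:

        #include <stdint.h>

        static int count_nonzero_coeffs(const int16_t *qcoeff, int num_coeffs) {
          int i, nzc = 0;
          for (i = 0; i < num_coeffs; ++i) nzc += (qcoeff[i] != 0);
          return nzc;
        }
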
  18. 05 Mar, 2013 - 1 commit
    • Make superblocks independent of macroblock code and data. · 111ca421
      Ronald S. Bultje authored
      Split macroblock and superblock tokenization and detokenization
      functions and coefficient-related data structs so that the bitstream
      layout and related code of superblock coefficients looks less like it's
      a hack to fit macroblocks in superblocks.
      
      In addition, unify chroma transform size selection from luma transform
      size (i.e. always use the same size, as long as it fits the predictor);
      in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
      transform will now use the 16x16 (instead of the 8x8) chroma transform,
      and 64x64 superblocks using the 32x32 luma transform will now use the
      32x32 (instead of the 16x16) chroma transform.
      
      Lastly, add a trellis optimize function for 32x32 transform blocks.
      
      HD gains about 0.3%, STDHD about 0.15% and derf about 0.1%. There
      are a few negative points here and there that I might want to
      analyze a little more closely.
      
      Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
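
      A sketch of the chroma transform size rule described above: use the luma
      transform size, capped by the largest transform that still fits the
      chroma predictor. The enum and function names are illustrative:

        typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;

        /* E.g. a 64x64 superblock with a 32x32 luma transform gets a 32x32
         * chroma transform (32x32 chroma predictor for 4:2:0), while a 16x16
         * macroblock with a 16x16 luma transform is capped at 8x8 chroma. */
        static TX_SIZE get_uv_tx_size(TX_SIZE luma_tx, TX_SIZE max_uv_tx) {
          return luma_tx < max_uv_tx ? luma_tx : max_uv_tx;
        }
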
  19. 04 Mar, 2013 - 1 commit
    • Support 16K sequence coding · 5957b2b5
      Jingning Han authored
      Fixed a couple of variable/function definitions, as well as header
      handling to support 16K sequence coding at high bit-rates.
      
      The width and height are each specified by two bytes in the header.
      Use an extra byte to explicitly indicate the scaling factors in
      both directions, each ranging from 0 to 15.
      
      Tested coding up to 16400x16400 dimension.
      
      Change-Id: Ibc2225c6036620270f2c0cf5172d1760aaec10ec
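
      A hypothetical reading of the header fields described above: two bytes
      each for width and height, then one byte carrying 4-bit horizontal and
      vertical scaling factors (0..15). The actual byte order and packing in
      the bitstream may differ:

        #include <stdint.h>

        typedef struct {
          int width, height;
          int scale_x, scale_y;  /* each in 0..15 */
        } frame_size_info;

        static frame_size_info parse_frame_size(const uint8_t *hdr) {
          frame_size_info s;
          s.width   = hdr[0] | (hdr[1] << 8);  /* 16 bits: up to 65535 */
          s.height  = hdr[2] | (hdr[3] << 8);
          s.scale_x = hdr[4] & 0x0f;
          s.scale_y = hdr[4] >> 4;
          return s;
        }
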
  20. 28 Feb, 2013 - 1 commit
    • Code cleanup. · 0d9cc0a9
      Dmitry Kovalev authored
      Removing redundant 'extern' keyword, better formatting, code
      simplification.
      
      Change-Id: I132fea14f08c706ee9ea147d19464d03f833f25b
  21. 27 Feb, 2013 - 2 commits
    • Use ref_frame_map vice active_ref_idx on the encoder · 800ad0b8
      John Koleszar authored
      This patch makes the encoder's use of ref_frame_map and active_ref_idx
      consistent with the decoder. ref_frame_map[] maps a reference buffer
      index to its actual location in the yv12_fb array, since many
      references may share an underlying buffer. active_ref_idx[] mirrors
      cpi->{lst,gld,alt}_fb_idx, holding the active references in each
      slot.
      
      This also fixes a bug in setup_buffer_inter() where the incorrect
      reference was used to populate the scaling factors.
      
      Change-Id: Id3728f6d77cffcd27c248903bf51f9c3e594287e
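
      An illustrative sketch of the indirection described above, with
      simplified stand-ins for the real encoder structures: a reference index
      goes through ref_frame_map[] to find the underlying buffer in yv12_fb[],
      so several references can share one buffer:

        #define NUM_REF_FRAMES 8

        typedef struct { unsigned char *y_buffer; } YV12_BUFFER;

        typedef struct {
          int ref_frame_map[NUM_REF_FRAMES];      /* ref index -> yv12_fb slot */
          YV12_BUFFER yv12_fb[NUM_REF_FRAMES + 4];
        } CODEC_STATE;

        static YV12_BUFFER *get_ref_buffer(CODEC_STATE *s, int ref_index) {
          return &s->yv12_fb[s->ref_frame_map[ref_index]];
        }
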
    • Spatial resampling of ZEROMV predictors · eb939f45
      John Koleszar authored
      This patch allows coding frames using references of different
      resolution, in ZEROMV mode. For compound prediction, either
      reference may be scaled.
      
      To test, I use the resize_test and enable WRITE_RECON_BUFFER
      in vp9_onyxd_if.c. It's also useful to apply this patch to
      test/i420_video_source.h:
      
        --- a/test/i420_video_source.h
        +++ b/test/i420_video_source.h
        @@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
      
           virtual void FillFrame() {
             // Read a frame from input_file.
        +    if (frame_ != 3)
             if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
               limit_ = frame_;
             }
      
      This forces the frame that the resolution changes on to be coded
      with no motion, only scaling, and improves the quality of the
      result.
      
      Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
  22. 26 Feb, 2013 - 1 commit
    • Refactor inter recon functions to support scaling · 6a4f708c
      John Koleszar authored
      Ensure that all inter prediction goes through a common code path
      that takes scaling into account. Removes a bunch of duplicate
      1st/2nd predictor code. Also introduces a 16x8 mode for 8x8
      MVs, similar to the 8x4 trick we were doing before. This has an
      unexpected effect with EIGHTTAP_SMOOTH, so it's disabled in that
      case for now.
      
      Change-Id: Ia053e823a8bc616a988a0af30452e1e75a739cba
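
      A minimal sketch of the kind of scaling the common prediction path has
      to apply when a reference has a different resolution, using a
      hypothetical num/den scale factor per axis (the codec's actual
      fixed-point representation differs):

        typedef struct { int num, den; } scale_factor;

        /* Scale a coordinate or motion-vector component, rounding to
         * nearest and handling negative values. */
        static int scale_value(int v, scale_factor sf) {
          const int prod = v * sf.num;
          return (prod >= 0 ? prod + (sf.den >> 1)
                            : prod - (sf.den >> 1)) / sf.den;
        }
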
  23. 23 Feb, 2013 - 1 commit
  24. 20 Feb, 2013 - 1 commit
  25. 19 Feb, 2013 - 1 commit
    • Use lossless for Q0 · 93d6b86c
      Yaowu Xu authored
      The commit changes the coding mode to lossless whenever the lowest
      quantizer is chosen.

      As expected, test results showed no difference for the cif and std-hd
      sets, where Q0 is rarely used. For the yt and yt-hd sets, Q0 is used
      for a number of clips, where this commit helps a lot at the high end.
      
      Average over all clips in the sets:
      yt: 2.391% 1.017% 1.066%
      hd: 1.937%  .764%  .787%
      
      Change-Id: I9fa9df8646fd70cb09ffe9e4202b86b67da16765
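
      A minimal sketch of the rule described above, with hypothetical
      parameter names, assuming the per-plane delta-q values must also be
      zero for the effective quantizer to be at its lowest:

        static int use_lossless(int base_qindex, int y_dc_delta_q,
                                int uv_dc_delta_q, int uv_ac_delta_q) {
          /* Switch to lossless coding whenever the lowest quantizer is
           * selected. */
          return base_qindex == 0 && y_dc_delta_q == 0 &&
                 uv_dc_delta_q == 0 && uv_ac_delta_q == 0;
        }
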
  26. 15 Feb, 2013 - 1 commit