- 19 Apr, 2013 - 6 commits
-
-
Paul Wilkins authored
This experiment has failed to give much benefit but does add complexity so deprecated. Change-Id: Ic7b929ba706390b9907ef0b4f965bd401ca799a4
-
Paul Wilkins authored
void __attribute__((noinline)) hi(void) { } Causes build failure in VS2008 Change-Id: Ie2f2a09d90bd5502c492e4d9f4983532a0edbc01
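A minimal sketch, independent of the actual fix applied in this commit, of how a GCC-style attribute is usually guarded so the code still builds under MSVC; the NOINLINE macro name is illustrative:

```c
/* Hypothetical portability guard: Visual Studio does not understand
 * __attribute__, so select the equivalent keyword per compiler. */
#if defined(_MSC_VER)
#define NOINLINE __declspec(noinline)
#elif defined(__GNUC__)
#define NOINLINE __attribute__((noinline))
#else
#define NOINLINE
#endif

NOINLINE void hi(void) {}  /* builds under both VS2008 and GCC */
```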
-
Paul Wilkins authored
As we are no longer able to sort the candidate mvrefs in both encoder and decoder, and given that the cost of explicit signalling has proved prohibitive, it no longer makes sense to find more than 2 candidates. This patch: Modifies and simplifies add_candidate_mv(). Removes the forced addition of a 0 vector in the MAX_MV_REF_CANDIDATES-1 position (in preparation for reducing MAX_MV_REF_CANDIDATES to 2). Re-orders the addition of candidates slightly. This actually gives small gains (circa 0.2% on std-hd). A subsequent patch will remove the NEW_MVREF experiment, reduce MAX_MV_REF_CANDIDATES to 2 and remove distance weights, as these are now implicit in the order. Change-Id: I3dbe1a6f8a1a18b3c108257069c22a1141a207a4
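A simplified sketch of the candidate-list behaviour described above, assuming a plain duplicate check and a hard cap of two entries; the types and body are illustrative, not the libvpx source:

```c
#include <stdint.h>

#define MAX_MV_REF_CANDIDATES 2

typedef struct { int16_t row, col; } MV;

/* Append a candidate MV unless it duplicates an earlier entry. Earlier
 * positions implicitly carry more weight, which is what makes explicit
 * distance weights unnecessary. */
static void add_candidate_mv(MV *list, int *count, MV candidate) {
  int i;
  for (i = 0; i < *count; ++i)
    if (list[i].row == candidate.row && list[i].col == candidate.col)
      return;
  if (*count < MAX_MV_REF_CANDIDATES)
    list[(*count)++] = candidate;
}
```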
-
Paul Wilkins authored
Adjustments take heavier account of the frames near a kf in deciding boost and limit the total number that can contribute. Also adjusted the minq calculations such that in most cases we generate a smaller key frame. Modified the code that accounts for how static the sequence is and added some adjustment based on image size. This is still very crude, but smaller images tend to behave better with a larger delta between KF Q and other frames than larger image formats. Changes give sizable gains in overall PSNR on all the test sets, but the biggest gains (~3%) were on the std-hd set. The gains were smaller for SSIM but still significant. Average PSNR results are mixed because this metric can very easily be altered by having a very good / lossless coding of one or two frames. Some of the YT and YT-HD clips in particular have blank lead-ins, and allowing lossless coding of these appears to make a big difference to average PSNR but in reality does not help much at all. Change-Id: I6bfe485a1d330b47c783832f1717c95c535464ec
-
John Koleszar authored
Consider the previous behavior for the MV 1 3/8 (11/8 pel). In the existing code, the fractional part of the MV is considered separately, and rounding is applied, giving a result of 6/8. Rounding is not required in this case, as we're increasing the precision from a q3 to a q4, and the correct value 11/16 can be represented exactly. Slight gain observed (+.033 average on derf) Change-Id: I320e160e8b12f1dd66aa0ce7966b5088870fe9f8
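A small worked example of the arithmetic in the message, assuming the q3 value is being halved (e.g. for a subsampled plane); the code is illustrative, not the libvpx routine:

```c
#include <stdio.h>

int main(void) {
  const int mv_q3 = 11;          /* 1 3/8 pel in 1/8-pel (q3) units */
  const int half_mv_q4 = mv_q3;  /* (11/8) / 2 == 11/16 pel: exact in q4 units */
  printf("q4 result:    %d/16 pel\n", half_mv_q4);
  printf("rounded path: %d/8 pel\n", (mv_q3 + 1) >> 1);  /* 6/8, as in the old code */
  return 0;
}
```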
-
John Koleszar authored
This commit converts the luma versions of vp9_build_inter_predictors_sb to use a common function. Update the convolution functions to support block sizes larger than 16x16, and add a foreach_predicted_block walker. Next step will be to calculate the UV motion vector and implement SBUV, then fold in vp9_build_inter16x16_predictors_mb and SPLITMV. At the 16x16, 32x32, and 64x64 levels implemented in this commit, each plane is predicted with only a single call to vp9_build_inter_predictor. This is not yet called for SPLITMV. If the notion of SPLITMV/I8X8/I4X4 goes away, then the prediction block walker can go away, since we'll always predict the whole bsize in a single step. Implemented using a block walker at this stage for SPLITMV, as a 4x4 "prediction block size" within the BLOCK_SIZE_MB16X16 macroblock. It would support other rectangular sizes too, if the blocks smaller than 16x16 remain implemented as a SPLITMV-like thing. Just using 4x4 for now. There's also a potential to combine with the foreach_transformed_block walker if the logic for calculating the size of the subsampled transform is made more straightforward, perhaps as a consequence of supporting smaller macroblocks than 16x16. Will watch what happens there. Change-Id: Iddd9973398542216601b630c628b9b7fdee33fe2
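A structural sketch of the prediction-block walker idea, with illustrative names rather than the actual libvpx API; for whole-block modes the callback fires once per plane, while a SPLITMV-like mode would step in 4x4 units:

```c
typedef void (*predict_block_fn)(int plane, int row, int col,
                                 int pred_w, int pred_h, void *arg);

/* Visit every prediction unit of one plane in raster order. When
 * pred_w/pred_h equal the plane size the callback fires exactly once,
 * i.e. the whole plane is predicted in a single call. */
static void foreach_predicted_block_in_plane(int plane,
                                             int plane_w, int plane_h,
                                             int pred_w, int pred_h,
                                             predict_block_fn visit,
                                             void *arg) {
  int r, c;
  for (r = 0; r < plane_h; r += pred_h)
    for (c = 0; c < plane_w; c += pred_w)
      visit(plane, r, c, pred_w, pred_h, arg);
}
```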
-
- 18 Apr, 2013 - 6 commits
-
-
Dmitry Kovalev authored
Change-Id: I7209a05919162a8155520bc543658ddb69ba12ce
-
Dmitry Kovalev authored
Using the predefined clamp function, removing redundant variables, declaring and initializing on the same line. Change-Id: I14636eb242194bac33f8a9d4a273a416d32856fc
-
Jingning Han authored
Use in-place buffers (dst of MACROBLOCKD) for macroblock prediction. This makes the macroblock buffer handling consistent with that of the superblock. Remove the predictor buffer from MACROBLOCKD. Change-Id: Id1bcd898961097b1e6230c10f0130753a59fc6df
-
Dmitry Kovalev authored
Change-Id: I71369a30a86111ae737168c795a29b4d8cff6ebf
-
John Koleszar authored
Updates the common convolution code to support blocks larger than 16x16, and rectangular blocks. This uncovered a bug in the SSSE3 filtering routines due to the order of application of saturation. This commit fixes that bug, adjusts the unit test to bias its random values towards the extremes, and adds a test to ensure that all filters conform to the expected pairwise addition structure. Change-Id: I81f69668b1de0de5a8ed43f0643845641525c8f0
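A small illustration, not the SSE code itself, of why the order in which saturating partial sums are combined matters: grouping the taps differently changes the result once an intermediate sum overflows, which is the class of bug described here and the reason the unit test now biases its inputs towards the extremes:

```c
#include <stdint.h>
#include <stdio.h>

/* 16-bit saturating add, mirroring the behaviour of packed SSE additions. */
static int16_t sat_add16(int32_t a, int32_t b) {
  int32_t s = a + b;
  if (s > INT16_MAX) s = INT16_MAX;
  if (s < INT16_MIN) s = INT16_MIN;
  return (int16_t)s;
}

int main(void) {
  /* Extreme partial sums, the kind the biased unit test now generates. */
  const int16_t p0 = 30000, p1 = 10000, p2 = -15000;
  printf("(p0+p1)+p2 = %d\n", sat_add16(sat_add16(p0, p1), p2)); /* 17767 */
  printf("p0+(p1+p2) = %d\n", sat_add16(p0, sat_add16(p1, p2))); /* 25000 */
  return 0;
}
```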
-
Dmitry Kovalev authored
Change-Id: I9790baedbd4acb7113575efc6f228b2656c42ff7
-
- 17 Apr, 2013 - 18 commits
-
-
Ronald S. Bultje authored
Change-Id: I83aa188d414922db19cccb210c4001c02d5a404c
-
Yunqing Wang authored
Removed skip_recon_sb(). Cleaned up code so that we can combine decode_sb and decode_mb later. Change-Id: I24d1dd5283e2565072838a03c344938b88bfd35c
-
John Koleszar authored
Change-Id: I655305c9e22bdd9abc893d3c40d4bc6616aa1d35
-
Dmitry Kovalev authored
Also moving frame size check into read_frame_size function. Change-Id: Ib098d83bd50081bfc2941c87aea0dc58cb39583e
-
Ronald S. Bultje authored
Change-Id: Id268ccaf1aefee6a3ed3e31486d4370f1c25e8cb
-
Dmitry Kovalev authored
List of moved functions: vp9_decode_uniform, vp9_decode_term_subexp, vp9_inv_recenter_nonneg, vp9_decode_unsigned_max. Change-Id: Ib518beb90b791690c5c93de17b8bdbf560033b41
-
Dmitry Kovalev authored
Also using ALLOWED_REFS_PER_FRAME instead of 3. Change-Id: I810dd8521d8138edb9dbd78edede49b62f706554
-
Dmitry Kovalev authored
Change-Id: I28c3026946fc1bde7074e6e0198da93bb0d75dfe
-
Adrian Grange authored
alt_extra_bits is now only used in a local context so remove it from the twopass_rc structure. Change-Id: I5bbf0a3dba9712a3da45760f7bb865243705b53e
-
Yaowu Xu authored
that are related to using reconstructed pixels for selecting reference motion vectors. Change-Id: I048dfae39ca7385e344b57d46347ecc6e753e1bb
-
Ronald S. Bultje authored
It is unused. Change-Id: Ied3269ffacf9b6303bc9d85f996384c3575ef812
-
Yaowu Xu authored
Change-Id: Idb0d11e3ae9afabe667a9f327bf4d3aa84f63649
-
Yaowu Xu authored
Using filter_level/16 instead. Change-Id: I73a7e83a785d6aa6f9b5d22cf66e22f0a39ed078
-
Ronald S. Bultje authored
About 11% overall encoder speedup with the sbsegment experiment enabled. Change-Id: Iffb1bdba6932d9f11a6c791cda8697ccf9327183
-
Yaowu Xu authored
Change-Id: I41b3f5932ecd6256e8207369ad19aa81e7987be1
-
Ronald S. Bultje authored
Adds RD integration for 32x16, 16x32, 64x32 and 32x64 rectangular blocks. Derf almost +0.6%, HD a little over +1.0%, STDHD +1.3%. Change-Id: Id651fdb6a655fdbb5c47009757e63317acfb88a5
-
Jingning Han authored
Enable recursive partition information coding from SB64X64 down to MB16X16. The bit-stream syntax now supports rectangular block sizes. It starts from SB64X64 and recursively describes the partition type of the current block. If the partition type is PARTITION_NONE, the block is coded as a single unit; if it is PARTITION_HORZ or PARTITION_VERT, the block is segmented into two independently coded rectangular units, with no further partition needed; otherwise (the PARTITION_SPLIT case), the block is segmented into 4 square blocks, each of which can potentially be further partitioned. Forward adaptive probability modeling is used for the partition information coding, conditioned on the current block size. Change-Id: I499365fb547839d555498e3bcc0387d8a3587d87
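A simplified sketch of the recursive partition syntax described above, using hypothetical reader and block-decoder stubs rather than the real bitstream code:

```c
#include <stdio.h>

typedef enum { PARTITION_NONE, PARTITION_HORZ, PARTITION_VERT,
               PARTITION_SPLIT } PARTITION_TYPE;

/* Stand-in for the entropy decoder: in the real syntax the partition type
 * is read with forward-adapted probabilities conditioned on block size. */
static PARTITION_TYPE read_partition_type(int bsize) {
  return bsize == 64 ? PARTITION_SPLIT : PARTITION_NONE;  /* demo choice */
}

static void decode_block(int row, int col, int w, int h) {
  printf("coding unit %dx%d at (%d,%d)\n", w, h, row, col);
}

/* Recursively decode the partition tree from 64x64 down to 16x16 leaves. */
static void decode_partition(int row, int col, int bsize) {
  const PARTITION_TYPE p =
      bsize > 16 ? read_partition_type(bsize) : PARTITION_NONE;
  const int half = bsize / 2;
  switch (p) {
    case PARTITION_NONE:                  /* one square unit */
      decode_block(row, col, bsize, bsize);
      break;
    case PARTITION_HORZ:                  /* two wide units, no further split */
      decode_block(row, col, bsize, half);
      decode_block(row + half, col, bsize, half);
      break;
    case PARTITION_VERT:                  /* two tall units, no further split */
      decode_block(row, col, half, bsize);
      decode_block(row, col + half, half, bsize);
      break;
    case PARTITION_SPLIT:                 /* four squares, each may split again */
      decode_partition(row, col, half);
      decode_partition(row, col + half, half);
      decode_partition(row + half, col, half);
      decode_partition(row + half, col + half, half);
      break;
  }
}

int main(void) {
  decode_partition(0, 0, 64);  /* walk one 64x64 superblock */
  return 0;
}
```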
-
Dmitry Kovalev authored
Also a little bit of code cleanup: replacing pbi->common with cm, pbi->mb with xd. Change-Id: I2f70a005704a2833d644dfaafc4cd354e6e8532b
-
- 16 Apr, 2013 - 10 commits
-
-
Dmitry Kovalev authored
Change-Id: I7057ed8e2a13a3c5367e2923eb4b3260bd7cf546
-
Dmitry Kovalev authored
Change-Id: Ic795cf6fc202bf32c9b5b0b3cef9ac422af53cd0
-
Christian Duvivier authored
Scalar path is about 1.3x faster (2.1% overall encoder speedup). SSE2 path is about 5.0x faster (8.4% overall encoder speedup). Change-Id: I360d167b5ad6f387bba00406129323e2fe6e7dda
-
Adrian Grange authored
This function is now called from the code that configures the ARNR filter, so it belongs with the other temporal filter functions. Change-Id: I64211875918364b5b8edfb97743e573c6def1663
-
Dmitry Kovalev authored
Change-Id: I3bbc31840af69481e1d9bb4427c9ee25abf82946
-
Adrian Grange authored
Normalization of the frame boost value was being done when it reached the value 1028. The intention was to keep it within a 10-bit range, so it should have been clipped above 1023. Change-Id: I0afdddc1d2eb9e7822ec4578903cbe6ec0b33b91
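A trivial sketch of the corrected clamp, with hypothetical names; the point is simply that a 10-bit range tops out at 1023:

```c
#define MAX_FRAME_BOOST 1023  /* (1 << 10) - 1, largest 10-bit value */

static int clamp_frame_boost(int boost) {
  return boost > MAX_FRAME_BOOST ? MAX_FRAME_BOOST : boost;
}
```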
-
Dmitry Kovalev authored
New names are y_dc_delta_q, uv_dc_delta_q, uv_ac_delta_q. Change-Id: I4acae1fc23a4697ce2c5a5becb8dc28ef0a4b552
-
Ronald S. Bultje authored
Change-Id: I8a4da6925f2d58a426c4d122df8b97bb69452e49
-
John Koleszar authored
This flag was added to VP8 to allow a mode where MB-level skipping was not allowed, saving a bit per mb. It was never used in practice, and hasn't been tested in VP9, so remove it. Change-Id: Id450ec6904c6d06c1919508e7efc52d05cde5631
-
Dmitry Kovalev authored
tx_type == DCT_DCT check is an implementation detail of iht_add. Also adding dequant_add_y function with explicit DCT_DCT check inside. Change-Id: Ia3cb0225601752cdef0ff6f0acd3a09d9dbd8938
-