- 18 Dec, 2012 - 11 commits
-
Ronald S. Bultje authored
For coefficients, use int16_t (instead of short); for pixel values in 16-bit intermediates, use uint16_t (instead of unsigned short); for all others, use uint8_t (instead of unsigned char). Change-Id: I3619cd9abf106c3742eccc2e2f5e89a62774f7da
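As a minimal illustration of the convention (the type names below are made up for the sketch, not the actual libvpx declarations):

    #include <stdint.h>

    /* Illustrative only: the fixed-width types the commit standardizes on. */
    typedef int16_t  tran_coeff_t;   /* transform coefficients   (was short)          */
    typedef uint16_t inter_pixel_t;  /* 16-bit pixel intermediates (was unsigned short) */
    typedef uint8_t  pixel_t;        /* all other pixel values     (was unsigned char)  */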
-
Yaowu Xu authored
-
Yaowu Xu authored
MAX_PSNR is used to assign a "psnr" number when the MSE is close to zero; the direct assignment prevents a divide-by-zero in the computation. Changed it from 60 to 100 to be consistent with what is being done in VP9. Change-Id: I4854ffc4961e59d372ec8005a0d52ca46e3c4c1a
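A hedged sketch of the pattern being described; the helper below is illustrative, not the actual libvpx function:

    #include <math.h>

    #define MAX_PSNR 100  /* was 60; returned directly when the error is ~0 */

    static double sse_to_psnr(double samples, double peak, double sse) {
      if (sse > 0.0) {
        const double psnr = 10.0 * log10(samples * peak * peak / sse);
        return psnr > MAX_PSNR ? MAX_PSNR : psnr;
      }
      return MAX_PSNR;  /* avoid dividing by a (near-)zero error */
    }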
-
Yaowu Xu authored
-
Ronald S. Bultje authored
-
Ronald S. Bultje authored
-
Yaowu Xu authored
Change-Id: I004ded11983b7fda85793912ebc5c6f266dc5eb5
-
Yunqing Wang authored
Fixed uninitialized warning for txfm_size. Change-Id: I42b7e802c3e84825d49f34e632361502641b7cbf
-
Yunqing Wang authored
Fixed the warning: the size of array ‘intermediate_buffer’ can’t be evaluated [-Wvla]. Change-Id: Ibcffd6969bd71cee0c10f7cf18960e58cd0bd915
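A purely illustrative sketch of the kind of change that silences -Wvla: the variable-length array is replaced by a fixed, worst-case-sized buffer (the size constant here is an assumption):

    #include <stdint.h>

    enum { kMaxIntermediateSize = 64 * 64 };  /* assumed worst case for the sketch */

    static void filter_block(int width, int height) {
      /* before: uint16_t intermediate_buffer[width * height];
       * a VLA whose size cannot be evaluated at compile time, hence -Wvla */
      uint16_t intermediate_buffer[kMaxIntermediateSize];  /* fixed worst-case size */
      (void)intermediate_buffer;
      (void)width;
      (void)height;
    }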
-
Ronald S. Bultje authored
This matches the names of tables for all other transform sizes. Change-Id: Ia7681b7f8d34c97c27b0eb0e34d490cd0f8d02c6
-
Ronald S. Bultje authored
Change-Id: I9548891d7b8ff672a31579bcdce74e4cea529883
-
- 17 Dec, 2012 - 1 commit
-
John Koleszar authored
Prefer the standard fixed-size integer typedefs. Change-Id: Iad75582350669e49a8da3b7facb9c259e9514a5b
-
- 13 Dec, 2012 - 7 commits
-
Yaowu Xu authored
The mismatch was caused by an improper merge of cleanup code around tokenize_b() and stuff_b() with the TX32X32 experiment. Change-Id: I225ae62f015983751f017386548d9c988c30664c
-
Yaowu Xu authored
Not defined in MSVC. Change-Id: I8fe8462a0c2f636d8b43c0243832ca67578f3665
-
Deb Mukherjee authored
Change-Id: I3c751f8d57ac7d3b754476dc6ce144d162534e6d
-
Deb Mukherjee authored
-
Deb Mukherjee authored
Modifies the scanning pattern and uses a floating-point 16x16 DCT implementation for now to handle scaling better. Experiments are also in progress with 2/6 and 9/7 wavelets. Results have improved to within ~0.25% of the 32x32 DCT for std-hd and about 0.03% for derf. This difference can probably be bridged by re-optimizing the entropy stats for these transforms. Currently the stats used are common between the 32x32 DCT and the DWT/DCT. Experiments are in progress with various scan-pattern/wavelet combinations. Ideally the subbands should be tokenized separately, and an experiment will be conducted next on that. Change-Id: Ia9cbfc2d63cb7a47e562b2cd9341caf962bcc110
-
Ronald S. Bultje authored
-
Ronald S. Bultje authored
Gives 0.5-0.6% improvement on derf and stdhd, and 1.1% on hd. The old tables basically derive from the time when we had only the 4x4, or only the 4x4 and 8x8, DCTs. Note that some values are filled with 128, because e.g. ADST only ever occurs as Y-with-DC, as does 32x32; 16x16 only ever occurs as Y-with-DC or as UV (as the complement of 32x32 Y); and 8x8 Y2 only ever has 4 coefficients max. If preferred, I can add values from other tables in their place (e.g. use the 4x4 2nd-order high-frequency probabilities for the 8x8 2nd order), so that they make at least some sense if we ever implement a larger 2nd-order transform for the 8x8 DCT (etc.); please let me know. Change-Id: I917db356f2aff8865f528eb873c56ef43aa5ce22
-
- 12 Dec, 2012 - 2 commits
-
Ronald S. Bultje authored
-
Ronald S. Bultje authored
Add a function clip_pixel() to clip a pixel value to the [0,255] range of allowed values, and use it wherever appropriate (e.g. prediction, reconstruction). Likewise, consistently use the recently added function clip_prob(), which clips a binary probability to the [1,255] range. Where possible, use get_prob() or its sister get_binary_prob() to calculate binary probabilities, for consistency. Since this changes the binary probability calculation in some places (we used 255 * count0 / total or 256 * count0 / total in a range of places, and all of these now use (256 * count0 + (total >> 1)) / total), the encoding result changes, so this patch warrants some extensive testing. Change-Id: Ibeeff8d886496839b8e0c0ace9ccc552351f7628
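A sketch of the helpers described above, with the rounding and clamping behaviour from the commit message; the exact in-tree definitions may differ in detail:

    #include <stdint.h>

    typedef uint8_t vp9_prob;

    static uint8_t clip_pixel(int val) {          /* clamp to the [0, 255] pixel range */
      return (val > 255) ? 255 : (val < 0) ? 0 : val;
    }

    static vp9_prob clip_prob(int p) {            /* clamp to the [1, 255] probability range */
      return (p > 255) ? 255 : (p < 1) ? 1 : p;
    }

    static vp9_prob get_prob(int num, int den) {  /* rounded binary probability */
      return (den == 0) ? 128 : clip_prob((256 * num + (den >> 1)) / den);
    }

    static vp9_prob get_binary_prob(int n0, int n1) {
      return get_prob(n0, n0 + n1);
    }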
-
- 11 Dec, 2012 - 3 commits
- 10 Dec, 2012 - 3 commits
-
Deb Mukherjee authored
-
Deb Mukherjee authored
The switchable count update was mistakenly inside a macro. Change-Id: Iec04c52ad57034b88312dbaf05eee1f47ce265b3
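A purely illustrative reconstruction of the pitfall (all names are hypothetical): an increment hidden inside a macro disappears whenever that macro compiles away, so the counts are never gathered.

    #if defined(SOME_CONFIG_FLAG)
    #define TRACE(stmt) stmt
    #else
    #define TRACE(stmt)                  /* expands to nothing */
    #endif

    static int switchable_interp_count[4];

    static void record_switchable_filter(int filter) {
      /* buggy:  TRACE(++switchable_interp_count[filter]);  -- lost when TRACE is empty */
      ++switchable_interp_count[filter];  /* fixed: the update is ordinary code */
    }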
-
Paul Wilkins authored
Some further changes and refactoring of the mv reference code and the selection of the center point for searches. Mainly relates to not passing so many different local copies of things around. Adds some placeholder comments. Change-Id: I309f10ffe9a9cde7663e7eae19eb594371c8d055
-
- 08 Dec, 2012 - 7 commits
-
John Koleszar authored
-
Yaowu Xu authored
This commit changed the ENTROPY_CONTEXT conversion between MBs that have different transform sizes. In addition, this commit also did a number of cleanups/bug fixes:
1. removed the duplicate function vp9_fix_contexts() and changed to use vp8_reset_mb_token_contexts() for both encoder and decoder
2. fixed a bug in stuff_mb_16x16 where the wrong context was used for the UV planes
3. changed to reset all contexts to 0 if a MB is skipped, to simplify the logic (see the sketch below)
Change-Id: I7bc57a5fb6dbf1f85eac1543daaeb3a61633275c
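A self-contained sketch of the "reset contexts to 0 on skip" behaviour from point 3; the types are simplified stand-ins for the real ENTROPY_CONTEXT_PLANES structures:

    #include <string.h>

    typedef char ENTROPY_CONTEXT;
    typedef struct {                          /* simplified stand-in */
      ENTROPY_CONTEXT y[4], u[2], v[2], y2;
    } ENTROPY_CONTEXT_PLANES;

    typedef struct {
      ENTROPY_CONTEXT_PLANES *above_context;  /* points into the per-row context array */
      ENTROPY_CONTEXT_PLANES *left_context;   /* context for the current MB column     */
    } MACROBLOCKD;

    /* Clearing both contexts gives a skipped MB an all-zero contribution to its
     * neighbours, independent of its transform size. */
    static void reset_mb_token_contexts(MACROBLOCKD *xd) {
      memset(xd->above_context, 0, sizeof(*xd->above_context));
      memset(xd->left_context, 0, sizeof(*xd->left_context));
    }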
-
John Koleszar authored
In addition to allowing tests to use the RTCD-enabled functions (perhaps transitively) without having run a full encode/decode test yet, this fixes a linking issue with Apple's G++ whereby the Common symbols (the function pointers themselves) wouldn't be resolved. Fixing this linking issue is the primary impetus for this patch, as none of the tests exercise the RTCD functionality except through the main API. Change-Id: I12aed91ca37a707e5309aa6cb9c38a649c06bc6a
-
Jim Bankoski authored
-
Jim Bankoski authored
-
Ronald S. Bultje authored
Don't use vp9_decode_coefs_4x4() for 2nd order DC or luma blocks. The code introduces some overhead which is unnecessary for these cases. Also, remove variable declarations that are only used once, remove magic offsets into the coefficient buffer (use xd->block[i].qcoeff instead of xd->qcoeff + magic_offset), and fix a few Google Style Guide violations. Change-Id: I0ae653fd80ca7f1e4bccd87ecef95ddfff8f28b4
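A hedged before/after sketch of the indexing change; the field names follow the commit message, while the buffer layout in the comments is assumed:

    #include <stdint.h>

    typedef struct {
      int16_t *qcoeff;            /* per-block pointer into the MB-wide buffer */
    } BLOCKD;

    typedef struct {
      int16_t qcoeff[25 * 16];    /* 24 Y/U/V 4x4 blocks + Y2; layout assumed for the sketch */
      BLOCKD block[25];
    } MACROBLOCKD;

    static const int16_t *block_coeffs(const MACROBLOCKD *xd, int i) {
      /* before: return xd->qcoeff + i * 16;  -- "magic offset" into the MB buffer */
      return xd->block[i].qcoeff;             /* after: the per-block pointer */
    }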
-
Ronald S. Bultje authored
Use these, instead of the 4/5-dimensional arrays, to hold statistics, counts, accumulations and probabilities for coefficient tokens. This commit also re-allows ENTROPY_STATS to compile. Change-Id: If441ffac936f52a3af91d8f2922ea8a0ceabdaa5
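A hedged sketch of what such named types might look like; the dimensions and type names below are assumptions for illustration, not the actual libvpx typedefs:

    #include <stdint.h>

    #define COEF_BANDS           8
    #define PREV_COEF_CONTEXTS   3
    #define ENTROPY_NODES       11
    #define MAX_ENTROPY_TOKENS  12

    typedef uint8_t vp9_prob;

    /* One named type per purpose, instead of raw 4/5-dimensional arrays: */
    typedef vp9_prob     vp9_coeff_probs[COEF_BANDS][PREV_COEF_CONTEXTS][ENTROPY_NODES];
    typedef unsigned int vp9_coeff_count[COEF_BANDS][PREV_COEF_CONTEXTS][MAX_ENTROPY_TOKENS];

    /* A per-block-type table then becomes an array of the named type: */
    static vp9_coeff_probs coef_probs_4x4[4];
    static vp9_coeff_count coef_counts_4x4[4];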
-
- 07 Dec, 2012 - 4 commits
-
Frank Galligan authored
Change-Id: I0cb06d77805246fe39d39ad3bc5df3c3f52c7050
-
Frank Galligan authored
Change-Id: I366e6d175da3012f1c8607fd7fad99fbbb616091
-
Frank Galligan authored
Change-Id: I1eb7433061a6c529471026e0ebdc6467942062eb
-
Ronald S. Bultje authored
This adds Debargha's DCT/DWT hybrid and a regular 32x32 DCT, and adds code all over the place to wrap that in the bitstream/encoder/decoder/RD. Some implementation notes (these probably need careful review):
- token range is extended by 1 bit, since the value range out of this transform is [-16384,16383].
- the coefficients coming out of the FDCT are manually scaled back by 1 bit, or else they won't fit in int16_t (they are 17 bits). Because of this, the RD error scoring does not right-shift the MSE score by two (unlike for 4x4/8x8/16x16).
- to compensate for this loss in precision, the quantizer is halved also. This is currently a little hacky (see the sketch after this message).
- FDCT and IDCT are double-only right now. Needs a fixed-point implementation.
- There are no default probabilities for the 32x32 transform yet; I'm simply using the 16x16 luma ones. A future commit will add newly generated probabilities for all transforms.
- No ADST version. I don't think we'll add one for this level; if an ADST is desired, transform-size selection can scale back to 16x16 or lower, and use an ADST at that level.
Additional notes specific to Debargha's DWT/DCT hybrid:
- coefficient scale is different for the top/left 16x16 (DCT-over-DWT) block than for the rest (DWT pixel differences) of the block. Therefore, RD error scoring isn't easily scalable between the coefficient and pixel domains. Thus, unfortunately, we need to compute the RD distortion in the pixel domain until we figure out how to scale these appropriately.
Change-Id: I00386f20f35d7fabb19aba94c8162f8aee64ef2b
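A hedged sketch of the 1-bit scale-back and the matching quantizer halving described in the notes above; names and rounding details are illustrative, not the in-tree implementation:

    #include <stdint.h>
    #include <stdlib.h>

    /* Illustrative: halve (with rounding) a 17-bit forward-transform output so
     * it fits in int16_t, as the notes describe. */
    static int16_t scale_coeff_down(int coeff_17bit) {
      return (int16_t)((coeff_17bit >= 0 ? coeff_17bit + 1 : coeff_17bit - 1) / 2);
    }

    /* Illustrative: the quantizer step is halved to compensate for the 1-bit
     * loss of precision in the scaled coefficients. */
    static int quantize_32x32(int16_t scaled_coeff, int q_step) {
      const int half_q = (q_step + 1) >> 1;
      const int level = abs(scaled_coeff) / half_q;
      return scaled_coeff < 0 ? -level : level;
    }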
-
- 06 Dec, 2012 - 2 commits
-
John Koleszar authored
In addition to allowing tests to use the RTCD-enabled functions (perhaps transitively) without having run a full encode/decode test yet, this fixes a linking issue with Apple's G++ whereby the Common symbols (the function pointers themselves) wouldn't be resolved. Fixing this linking issue is the primary impetus for this patch, as none of the tests exercise the RTCD functionality except through the main API. Change-Id: I12aed91ca37a707e5309aa6cb9c38a649c06bc6a
-
Johann authored
Change-Id: I92d613e89c8f1174eca0789116120bfa20c25c28
-