
Talk:Y′UV

From Wikipedia, the free encyclopedia

I made a minor change to the wording of how chroma signals are made from the U and V channels as the previous wording implied that U and V are directly added when they're actually conveyed separately on the color subcarrier.

U+V = C ?

[edit]

Hi, first of all, what do U and V mean? Then, I don't understand how U and V are mapped on the U–V color map. Do the x- and y-axes correspond to voltages on the U and V channels? For example, would +0.3 volts on the V channel and −0.2 volts on the U channel give a brownish color?

I really have problems with the following sentence:

"For instance if you amplitude-modulate the U and V signals onto quadrature phases of a subcarrier you end up with a single signal called C, for chroma, which can then make the YC signal that is S-Video."

What does quadrature blahblah mean? Aren't the U and V signals totally destroyed when they are mixed?

This article is way too verbose, poorly presented and confusing even to someone who understands the basics. — Preceding unsigned comment added by 65.197.211.2 (talk) 01:35, 29 May 2015 (UTC)[reply]

Thank you so much for all the effort you put into this article, --Abdull 21:45, 17 Jan 2005 (UTC)

I don't really understand this myself, but the following pages may help:
Chrominance Colorburst Quadrature amplitude modulation Plugwash 23:30, 18 Feb 2005 (UTC)
The U and V are mapped by the equations. The R, G, & B values are normalized to a range [0, 1]. So, for example, (R, G, B) = (1, 0, 0) would be red and (1, 1, 1) would be white. Using the equations in the article you can see Y, U, and V are on different ranges. The values have no direct relationship to voltages (that's for the physical implementation to deal with as this is just the color space).
Abdull, you need to understand this "quadrature blahblah" before you can understand why information is not lost. It's the process of converting two real signals, say U and V, into a single complex signal C = U + j*V. So the equation in the header here is missing the imaginary unit: C = U + jV, not C = U + V. Cburnett 00:41, 19 Feb 2005 (UTC)

I don't know if this will help or hinder, but I find it helpful to think of complex numbers as simply pairs of numbers. It is elegant mathematically to convert the two real numbers a and b into the complex number C = a + j.b but doesn't really change anything, except to make some kinds of formula quicker to write. For example, you can add two complex numbers together and it is just the same as adding together the separate numbers in each pair. Consider

C1 = a1 + j.b1
C2 = a2 + j.b2
C1 + C2 = a1 + j.b1 + a2 + j.b2 = (a1 + a2) + j.(b1+b2)

But take care, if you are new to complex mathematics, not to generalise. You can add and subtract complex numbers in this way, and you can multiply or divide by a single constant. But multiplying two complex numbers is NOT the same as just multiplying their elements.

If I read this quadrature stuff right, the two signals are mixed in such a way that they can be separated (because they are at 90 degree angles?)* The complex notation is just a convenient way of representing this combined information with a single item in a formula. Notinasnaid 10:27, 19 Feb 2005 (UTC)

If complex numbers are hard to understand (I think the odd aspects tend to be over-emphasized to novices), you don't need them to understand how color is transmitted in analog color TV. The subcarrier is split into two sinusoidal (sine-shaped) waves with a 90-degree phase difference. (You could call them sine and cosine without being too far off.) In NTSC analog, the I and Q signals are similar to U and V. In the NTSC encoder, I and Q are zero if the image is black and white; they can be positive or negative voltages depending upon the color they represent.

One of those sinusoids is modulated by I, and the other, at 90 degrees, by Q. What's nice is that I and Q don't interfere with each other in this process. The amplitude at the output of each modulator is proportional to the magnitude of I (or Q). If I (or Q) goes negative, the output is inverted. If the image is black and white, the modulator outputs are ideally zero.

The outputs of these two modulators still differ in phase by 90 degrees. They are added, and the result is a single sinusoid with a phase that represents the hue, and an amplitude that represents the saturation. (By all means, please do learn what "hue" and "saturation" mean! Hue is probably better known – red, orange, yellow, etc. are different hues. Pale pink is an unsaturated red. The pale blue of a hazy, cloudless sky is an unsaturated blue. (Note that brightness, Y, luminance, and value (artist's term) all mean about the same thing – how much light the color sends to your eye.))

For some unknown reason, the author apparently ignored the term "color difference"; NTSC used that term often, and U and V are still color difference signals. By all means, study that multi-panel image of a barn. The color-difference panes should render as a uniform, featureless gray, ideally, if saturation is reduced to zero, or if they are photographed with black and white film. Although they are "bright" enough to see (for practical reasons), they do not carry any brightness information themselves! They only show the color deviation from neutral gray (or dark or light gray).

As to digital TV, the same general principles apply, although comparatively narrowband chroma has been left behind, it seems. Modulating two sinusoids 90 degrees apart is an analog technique; digital TV doesn't need a color subcarrier.

Btw, the color burst in NTSC is yellow-green, I'm fairly sure! It has to have a hue, after all.

HTH, a bit! Nikevich (talk) 18:52, 23 May 2009 (UTC)[reply]


Thank you all for your great help. Right now I have the idea that the colors red, green and blue all lie 120° apart from each other on the U–V color plane. Am I right? Well, my problem is to understand on what principle this U–V color plane is made up... why is reddish color in the upper region, why is a purple tint in the right region, etc.? There must be some explainable logic behind it. Does YUV always have its roots in RGB color space (thus, do you always need RGB before you can give YUV values)?

As to the image, please realize that color space is three-dimensional, and that this is only one plane of many (as many as the number of bits for the third axis defines...) in U,V color space. (Note the absence of yellow, for instance!) Nikevich (talk) 18:52, 23 May 2009 (UTC)[reply]

And still, what does U and V stand for? Ultra and Vulvar? Thanks, --Abdull 13:14, 23 Mar 2005 (UTC)
U, V and W are secondary coordinate identifiers, whereas X, Y and Z are the primary ones. Maybe they used variables with the names U and V for color because X and Y were already taken for the pixel location in the image being edited? --Zom-B 9:07, 8 Sep 2006 (UTC)

Essentially, I'd say that U, V, and W are simply the next letters of the Roman alphabet, starting from the end. X, Y, (and Z, but I don't know, or recall, what Z is...) are already assigned, so, working backwards, U, V, and W are the next available letters. Nikevich (talk) 18:52, 23 May 2009 (UTC)[reply]

Exact numbers

[edit]

Are the 3 decimal place figures in the matrices and formulae of the article exact, or can they be written completely accurately in a surd/trigonometrical/whatever form? Just curious where they came from... – drw25 (talk) 15:37, 14 July 2005 (UTC)[reply]

I don't know the details in this case, but I think these are most likely to be first generation numbers - not derived numbers. Some people picked them to get the results they wanted. So in that sense they are probably exact. The exotic numbers in Lab are exact in that sense. Notinasnaid 16:20, 14 July 2005 (UTC)[reply]
The numbers are from "Rec. 601". The document was originally known as "CCIR Rec. 601". It's now ITU-R BT.601; the current version is BT.601-5. So, yes, they're just as precise as they are in that document. NikolasCo 10:51, 19 March 2006 (UTC)[reply]


Possible values for BT.601

Color space NTSC, C white point, Illuminant 2 degree give us:

HDTV uses BT.709. If you play with sRGB (D65, 2 degree) you will see that the possible exact values are:

— Preceding unsigned comment added by 178.221.115.111 (talk) 16:46, 25 December 2016 (UTC)[reply]

Code sample

[edit]

I removed the following from the page:

Below is a function which converts RGB565 into YUV:

void RGB565ToYCbCr(U16 *pBuf,int width, int height)
{
	U8 r1,g1,b1,r2,g2,b2,y1,cb1,cr1,y2,cb2,cr2;
	int i,j,nIn,val;
	for(i=0;i<height;i++)
	{
		nIn = i*width;
		for(j=0;j<width;j+=2)
		{
			b1 = (U8)((pBuf[nIn+j]&0x001f)<<3);
			g1 = (U8)((pBuf[nIn+j]&0x07e0)>>3);
			r1 = (U8)((pBuf[nIn+j]&0xf800)>>8);
			val = (int)((77*r1 + 150*g1 + 29*b1)/256);
			y1=((val>255) ? 255 : ((val<0) ? 0 : val)); 
			val = (int)(((-44*r1 - 87*g1 + 131*b1)/256) + 128);
			cb1=((val>255) ? 255 : ((val<0) ? 0 : val)); 
			val = (int)(((131*r1 - 110*g1 - 21*b1)/256) + 128);
			cr1=((val>255) ? 255 : ((val<0) ? 0 : val)); 
 			
			b2 = (U8)((pBuf[nIn+j+1]&0x001f)<<3);
			g2 = (U8)((pBuf[nIn+j+1]&0x07e0)>>3);
			r2 = (U8)((pBuf[nIn+j+1]&0xf800)>>8);
			val = (int)((77*r2 + 150*g2 + 29*b2)/256);
			y2=((val>255) ? 255 : ((val<0) ? 0 : val)); 
			val = (int)(((-44*r2 - 87*g2 + 131*b2)/256) + 128);
			cb2=((val>255) ? 255 : ((val<0) ? 0 : val)); 
			val = (int)(((131*r2 - 110*g2 - 21*b2)/256) + 128);
			cr2=((val>255) ? 255 : ((val<0) ? 0 : val));  

			pBuf[nIn+j] = (U16)(y1<<8 | ((cb1+cb2)/2));
			pBuf[nIn+j+1] = (U16)(y2<<8 | ((cr1+cr2)/2));
		}
	}
} 

I removed it for the following reasons:

1. It is not formatted (yes, I could fix that).
2. While Wikipedia does have algorithms, there doesn't seem to be any great amount of completed program code. Does it belong here?
3. It wasn't in the right part of the article.
4. RGB565 should be defined, linked, etc. As it is a representation rather than a color space, it isn't likely to be a familiar term even to color scientists, unless they happen to work with this format.
5. The code is not self-contained; it relies on external definitions, not included, like U8 and U16.

Comments?

Notinasnaid 08:06, 10 August 2005 (UTC)[reply]

I've formatted it by adding spaces at the start of each line (which turn off normal line wrapping, and force a special raw text display format). I agree: Wikipedia articles should not contain code unless they are about algorithms. An equation is more appropriate here. -- The Anome 08:13, August 10, 2005 (UTC)

Can you explain to me what language this code example (fixed-point approximation) uses?

It does not seem to do a fixed-point computation in that sense; such programs repeat a computation of the form x_nplus1 = f(x_n) until the value reached is a fixed point of the (converging) function.
Those programs have a loop structure like this:

 x_n = x_0;
 x_nplus1 = f(x_n);
 while(abs(x_nplus1 - x_n) > epsilon)
 {  x_n = x_nplus1;
    x_nplus1 = f(x_n);
 }
 return x_nplus1;

Although I only gave the article a superficial read, I did not see any fixed-point computation. All the transformations work straightforwardly: simple (matrix) arithmetic, reading and writing to different (low-level) structures.


If this is C/C++, what is the := operator?

I did not see any :=, just the x += c form, which means x = x + c. If you mean the form x = boolean ? yes : no, it is an expression equivalent to if (boolean) x = yes; else x = no;

If this is Pascal, what is the >> operator?

No, ">>" and "<<" are bit-shift operators in C.

Pekka Lehtikoski: The language is C; please refer to "The C Programming Language, Kernighan & Ritchie, ISBN 0-13-110370-9". This kind of nit-picking will take us nowhere; at least it will not make Wikipedia better. As far as this is the best we've got, let's publish it, until when and if someone contributes an improved or new version on the topic. About the code: I see no problem in including code samples within Wikipedia, maybe not in the main article, but linked to it. I would think that this would make Wikipedia more useful to programmers, and would not hurt anyone else.

I think, in any case, the article needs reformatting - at least. There should be either fragments of pseudocode or mathematics, but not a mashup of both. The use of "+=" or ">>" in mathematical formulae is neither pleasing to the eye nor does it follow any convention. Also, I don't consider using \times for notating multiplication appropriate if matrix notation is used in the same article. Either the target audience is general (use \times) or mathematical (use \cdot, matrix notation, whatever). 137.226.196.26 (talk) 13:19, 30 March 2009 (UTC)[reply]

I agree! If those nested loops stand for a matrix product, the program could be clearer, and one with better structure could be written. Maybe enthusiastic Wikipedians learning to program, who are interested in good programming practices like literate programming, could rewrite this so that the article works as a literate form for the program, as suggested below.
I shortened that code, but I did it blindly, just added a loop, to remove repeated code.
Observe that +k in the loop is +0 the first time the loop body is executed and +1 the second time.
Also observe that commenting with indentation, as if the comments were instructions, can make the code more readable. This style of commenting allows one to skip the detail of the code. I did not fill in the comments because I do not know what this program is doing.
Good programs have more comments than code, an even better way to write a program is the literate programming style.
If this article were well written (see the "this is not clear" comment to see what I mean), it could serve as a form to fill with code as a literate program (with minor changes to order the dependencies, if those exist).

void RGB565ToYCbCr(U16 *pBuf,int width, int height) {

  U8 r[2],g[2],b[2],y[2],cb[2],cr[2];
  int i,j,k,nIn,val;

//this part computes x....

  for(i=0;i<height;i++)
  {  nIn = i*width;
     for(j=0;j<width;j+=2)
     {

// this part does y ....

        for(k=0;k<2;k++)
        {
           b[k] = (U8)((pBuf[nIn+j+k]&0x001F)<<3);
           g[k] = (U8)((pBuf[nIn+j+k]&0x07E0)>>3);
           r[k] = (U8)((pBuf[nIn+j+k]&0xF800)>>8);
           val  = (int)((77*r[k] + 150*g[k] + 29*b[k])/0x0100);
           y[k] = ((val>0xFF) ? 0xFF : ((val<0) ? 0 : val)); 
           val  = (int)(((-44*r[k] - 87*g[k] + 131*b[k])/0x0100) + 128);
           cb[k]= ((val>0xFF) ? 0xFF : ((val<0) ? 0 : val)); 
           val  = (int)(((131*r[k] - 110*g[k] - 21*b[k])/0x0100) + 128);
           cr[k]= ((val>0xFF) ? 0xFF : ((val<0) ? 0 : val)); 
         }   

// this part computes z ....

         pBuf[nIn+j  ] = (U16)(y[0]<<8 | ((cb[0]+cb[1])/2));
         pBuf[nIn+j+1] = (U16)(y[1]<<8 | ((cr[0]+cr[1])/2));
     }
  }

}

Oh, dear...

[edit]

When uploading the original image for this article, I absolutely forgot to mention that I actually created it from scratch. I am going to re-upload it and properly licence it this time.  Denelson83  04:42, 29 October 2005 (UTC)[reply]

YUV to RGB image misrepresenting the transformation

[edit]

Let A denote the 3x3 transformation matrix from RGB-space to YUV-space. While it is true that each in-range RGB-vector transforms into an in-range YUV-vector using A, it is _not_ the case that every in-range YUV-vector (as stated in the text: 0<=Y<=1, -0.436<=U<=0.436, -0.615<=V<=0.615) has a valid RGB vector when transformed by inv(A). The point is that the picture for this article presents a whole bunch of transformed in-range YUV-vectors that do not map into RGB space. These are located around the edge of the picture. For clarity, shouldn't non-valid RGB-vectors be left out of that image?

Example: y = [0.5, -0.4, -0.4]' (it's on the image) transforms to (approx.): r = [0.044, 0.89, -0.31]' (which is a non-valid RGB vector)

--81.172.252.164 14:47, 1 March 2006 (UTC)[reply]

The actual color range within the UV color space is actually a hexagonal shape, which doesn't fill the (-0.5,-0.5)-(0.5,0.5) square but touches the inner sides of the (-0.436,-0.615)-(0.436,0.615) square.
Animation of all the possible RGB colors in the YUV colorspace. Time is Y, X-axis is U and Y-axis is V. The black square delimits the (-0.5,-0.5)-(0.5,0.5) range.
And by the way, do I need to remind you that this is an analog colorspace? When tuning things on a TV like color and/or brightness, the internal voltages might indeed drop below 0 (depending on the circuitry), and only at the very last moment, when the voltage enters the tube, are the negative values clipped (actually negative values just don't manipulate the electron beam, just as if the voltage were 0). --Zom-B 10:05, 8 Sep 2006 (UTC)

color

[edit]

The article says,

standards such as NTSC reduce the amount of data consumed by the chrominance channels considerably, leaving the eye to extrapolate much of the color. NTSC saves only 11% of the original blue and 30% of the red. The green information is usually preserved in the Y channel. Therefore, the resulting U and V signals can be substantially compressed.

What does it mean to save X% of the original color? What is the source of this information? The article on NTSC doesn't even say this.

In the YIQ article, it says that broadcast NTSC limits I to 1.3 MHz, Q to 0.4 MHz, and Y to 4 MHz. That's where the percentages come from, though it's rather simplistic to map YIQ directly to RGB. Having less bandwidth is like having a smaller number of colors, if I understand correctly. - mako 00:41, 24 March 2006 (UTC)[reply]
Not necessarily a smaller number of colors. It could be a smaller number of chroma pixels per field, which is indeed an aspect of the NTSC format.
Your phrase "rather simplistic" is a euphemism for "inaccurate".

I was really distressed by the description of narrowband chrominance as "compression". As I understand compression, it's almost always a different process from simple low-pass filtering. That's why I edited that section. I think the author doesn't know the history of developing compatible color, but I could be off-base. (Just now, I haven't yet read the article (or section) on Y'IQ.) Considering the capabilities of NTSC receivers, I and Q did provide enough bandwidth to provide a good color image, but true I and Q demodulation was rare, if ever used after the RCA CT-100 receiver. It was somewhat more expensive, and "high video fidelity" hadn't caught on. Nikevich (talk) 19:04, 23 May 2009 (UTC)[reply]

Comment: From my TV days, it's better to express the TV color signal as Hue (the color) and Saturation (the amount of color). The 'compression' comes from the fact that the color carrier occupies a narrower frequency spectrum than the luminance signal. (Reduced to mimic how the human eye perceives things.) In the time domain, the color information is a single-frequency sine wave added to the luminance signal (care taken not to violate transmission standards). The height of the color-frequency sine wave corresponds to the saturation, while the phase of the sine wave (relative to the reference 'color burst') gives the hue. As a side note, Hue and Saturation are in polar coordinates, not in XY coordinates, as any color vectorscope's display will show. I'll try and clean up with references from my Tektronix TV Measurements book and pictures someday, maybe. —Preceding unsigned comment added by 12.108.21.226 (talk) 23:02, 25 October 2010 (UTC)[reply]

Fixed point approximation

[edit]

I can't understand how the given formula for fixed point approximation can work. Unless I'm missing something, it seems plain wrong to me. It is not explained what range is expected for RGB components and what range the YUV result would have. I see in the history the formula has already been removed and readded, please double check it. I can figure out one myself, and post it if needed, but maybe it's worth asking where the one currently in the article came from. SalvoIsaja 21:04, 30 April 2006 (UTC)[reply]

I have a working code snippet for the inverse transformation (YUV->RGB).
 /* YUV->RGB conversion      */
 /* According to Rec. BT.601 */
 void
 convert(unsigned char y, unsigned char u, unsigned char v,
         unsigned char *r, unsigned char *g, unsigned char *b)
 {
    int compo, y1, u1, v1;
 
    /* Remove offset from components */
    y1 = (int) y - 16;
    u1 = (int) u - 128;
    v1 = (int) v - 128;
 
    /* Red component conversion and clipping */
    compo = (298 * y1 + 409 * v1) >> 8;
    *r = (unsigned char) ((compo < 0) ? 0: ((compo > 255) ? 255: compo));
 
    /* Green component conversion and clipping */
    compo = (298 * y1 - 100 * u1 - 208 * v1) >> 8;
    *g = (unsigned char) ((compo < 0) ? 0: ((compo > 255) ? 255: compo));
 
    /* Blue component conversion and clipping */
    compo = (298 * y1 + 516 * u1) >> 8;
    *b = (unsigned char) ((compo < 0) ? 0: ((compo > 255) ? 255: compo));
 }

Don't know if it can be inserted in the main article. Think about it. --Cantalamessa 14:35, 30 May 2006 (UTC)[reply]

Discussions I've seen in the past seemed to suggest that people don't want to see sample code in articles, preferring that articles instead give algorithms and formulae. (By the way, your code also does not give the expected range of inputs or outputs; it is NOT the case that 0 to 255 is a universal representation for color values, so that would be worth adding in a comment. The formulae on the page are open to the same criticism, but they do at least use the 0 to 1 range which color scientists use.) Notinasnaid 17:17, 30 May 2006 (UTC)[reply]

Maybe the code is not intended to please 'color scientists', but I can guarantee that if you download a YUV sequence from the internet, well, this kind of demapping is what makes the sequence to be seen on the screen with decent colors. --Cantalamessa 15:03, 31 May 2006 (UTC)[reply]

Whether it pleases colour scientists or not, the key point is that you don't define the range of input and output values. Notinasnaid 17:21, 31 May 2006 (UTC)[reply]
By the way I was mistaken to say the article doesn't define this. It says "Here, R, G and B are assumed to range from 0 to 1, with 0 representing the minimum intensity and 1 the maximum. Y is in range 0 to 1, U is in range -0.436 to 0.436 and V is in range -0.615 to 0.615." Notinasnaid 18:35, 31 May 2006 (UTC)[reply]

Notinasnaid, please don't misunderstand me: I don't want to struggle against you at all! For the input and output ranges, they are defined in the BT.601 recommendation, as per the second line of comment (YUV means BT.601 in almost 90% of the downloadable sequences). Major details to be found in YCbCr. --Cantalamessa 23:50, 31 May 2006 (UTC)[reply]

Ah, this is interesting stuff. According to [1], BT.601 is based on YCbCr, which as this article says '...is sometimes inaccurately called "YUV".' Notinasnaid 08:31, 1 June 2006 (UTC)[reply]

The problem with "YUV" is that it is a term that is used incorrectly pretty much by everyone. Y'UV is the analog system used for PAL video similar to Y'IQ for NTSC. Things like YUV 4:4:4, and YUV 4:2:0 are incorrect terms. YUV in this case is referring to Y'CbCr digital video. YUV is just being used incorrectly by everyone, and thus causing much confusion. Read [2] for more information.

Pekka Lehtikoski: Most readers of YUV-RDB conversions are programmers, and practical code by Cantalmessa is most appreciated. It should be in the main article, or linked clearly to it.

Both the formulae as are, and the code snippet above, are wrong indeed. I'm adding a missing absolute-value. -Ofek Shilon —Preceding unsigned comment added by 62.219.243.155 (talk) 11:06, 2 December 2010 (UTC)[reply]

Exact numbers

[edit]

I noticed the original RGB-to-YUV matrix is rounded to 3 decimal places, and the YUV-to-RGB matrix

is the exact matrix-inverse of the first matrix, but displayed using all decimals. As you can see, elements 0,1 and 2,2 are all zeros up to the 4th decimal place. Because of this, I thought maybe these elements ought to have been 0 and some elements in the RGB-to-YUV matrix in the 2nd and 3rd row should be more accurate than 3 decimal places.

--Zom-B 07:45, 8 Sep 2006 (UTC)

Using a formula slightly different from the one on the YCbCr page I managed to find the exact values of the first matrix.
In matrix form:
And the exact inverse:
--Zom-B 05:25, 9 Sep 2006 (UTC)


I think you mixed up the signs in that last matrix, at least if I compare it to the one above. Shouldn't it rather say:

--Quasimondo (talk) 11:40, 25 June 2010 (UTC)[reply]

Y'CbCr information should go on Y'CbCr page

[edit]

See Poynton's note about the difference. [3] If I wasn't lazy myself, I would revamp this article.Glennchan 06:18, 10 December 2006 (UTC)[reply]

Any relation to UV mapping?

[edit]

Do UV color values have any corelation to UV mapping on 3D surfaces? --24.249.108.133 16:49, 1 March 2007 (UTC)[reply]

No. 92.35.31.146 (talk) 17:08, 5 December 2023 (UTC)[reply]



Difference between YUV and Y'CbCr

[edit]

The difference is very poorly explained. The conversion matrix coefficients given in the YUV conversion equations here are exactly the same as those given in the Y'CbCr article for the (601) variation. The explanation says they have different scaling factors and that YUV is analog. That sounds weird; surely digitising an analog signal does not change anything about the fundamental signal content. —Preceding unsigned comment added by 61.8.12.133 (talkcontribs)

The analog and digital scale factors are different because different formats are used to encode the signal (e.g. NTSC encoding may be used on the analog signal). In digital, there are many flavours of Y'CbCr with different scale factors... in rec. 601 y'cbcr, the chroma is in the 16-240 range while in JPEG it's something else.Glennchan (talk) 00:53, 31 October 2008 (UTC)[reply]

Example numbers

[edit]

The RGB articles lists a bunch of example values for white, black, red, blue, green, yellow, and so forth. There is no such entry in the YUV article. Can someone add these values to make it less abstract and theoretical? I would do it myself but I don't trust my ability to calculate accurate values. I came here to find what the YUV value for white is, and didn't see any information. -Rolypolyman (talk) 21:43, 16 January 2008 (UTC)[reply]

YUV to RGB formula not working properly

[edit]

I have been using the formula:

C = Y - 16
D = U - 128
E = V - 128

R = clip(( 298 * C           + 409 * E + 128) >> 8)
G = clip(( 298 * C - 100 * D - 208 * E + 128) >> 8)
B = clip(( 298 * C + 516 * D           + 128) >> 8)

for calculating RGB from YUV. In order to make it work properly I have to change it to:

C = Y + 16
D = U - 128
E = V - 128

Any ideas why this is? Is this just a typing error or is the YUV-frame I use different from the standard specifications somehow?

Y'' in mathematical derivation section

[edit]

Is Y'' an error for Y'? This article used to have Y and was then changed to Y'. In the old article the variables were all Y, in the words and in the matrices. If this is not an error, Y'' should be defined. SpinningSpark 08:20, 25 April 2008 (UTC)[reply]

Hidden comment YUV / RGB

[edit]

I am removing a hidden comment in the article text and moving it here because the article is not the place to discuss errors. With context;

  • Y'UV models human perception of color in a different way from the standard RGB model used in computer graphics hardware.
  • <hidden>this is wrong; HSL and HSV are just different coordinate systems for RGB: but not as closely as the HSL color model and HSV color model.</hidden>

Requested move

[edit]
The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the proposal was Move. Every source cited gives "YUV" so I recommend changing the content here as well. —Wknight94 (talk) 10:57, 24 July 2008 (UTC)[reply]

Y'UV → YUV — YUV is the more popular usage by a wide margin, even though Y'UV might be the mathematically correct usage according to some editor. — Voidvector (talk) 21:04, 2 July 2008 (UTC)[reply]

Survey

[edit]
Feel free to state your position on the renaming proposal by beginning a new line in this section with *'''Support''' or *'''Oppose''', then sign your comment with ~~~~. Since polling is not a substitute for discussion, please explain your reasons, taking into account Wikipedia's naming conventions.
  • Oppose. Most of the information on this page should be moved/merged with Y'CbCr, as that is what it describes. There is no reason why an encyclopedia should promote an erroneous term, just because many users do. An encyclopedia should rather strive to use terms consistent with itself, with relevant literature and public standards. Suggested article structure:
  • 'yuv' containing information about the "yuv" color file formats, such as yuv422, no matter what the actual colorspace used in them.
  • 'Y'UV' containing information about the "Y'UV" colorspace used in analog television (that 1% of the users probably are interested in)
  • 'Y'CbCr' containing information about the digital colorspace used in MPEG, h26x, jpg, etc. (that 99% of the users probably are interested in)
Of course, all three articles should be cross-referenced at the intro and wherever it makes sense. —Preceding unsigned comment added by Knutinh (talkcontribs) 09:27, July 6, 2008
  • Support. YUV was originally used with PAL/NTSC television when MPEG was unheard of. Anyone looking for information from that era (still actually in use, though rapidly becoming obsolescent) will type YUV. Y' is not used in any of my 1960s/1970s textbooks. It is only later that the Y'UV terminology arises. YCbCr is related but not identical. SpinningSpark 18:02, 16 July 2008 (UTC)[reply]

Discussion

[edit]
Any additional comments:

I understand that Y'UV might be the mathematically correct notation, but I think the article should be located at YUV because it is the more widespread usage by a wide margin. --Voidvector (talk) 11:28, 1 July 2008 (UTC)[reply]

Well, it used to be at YUV but someone changed it without discussion. I don't see the name of the article as a big issue because YUV redirects here. What was more of a problem is that Y was replaced in the text with some automatic editor, which made a complete mess of the page. Several other editors have had to work at cleaning this up. What is really missing from the article is an explanation of what Y and Y' actually mean, the relationship between them, and how the terms are used both colloquially and formally. As I see it, it would be perfectly OK to use Y in the article instead of Y' as long as the article made it clear upfront what the term meant and that there was a more formal notation as well. SpinningSpark 19:08, 1 July 2008 (UTC)[reply]
I have proposed a move. I think the article should use "YUV" normally as it is the more widespread usage, and use Y' when describing it in more technical details (such as in the matrix calculations). --Voidvector (talk) 01:19, 4 July 2008 (UTC)[reply]

Reply to Knutinh above: first of all, YUV is not wrong; it is simply a form of simplification that ignores "primes". In fact, based on the current logic, this article should be located at Y′UV (the prime symbol is a separate codepoint under Unicode). Quote from Wikipedia:Naming conflict: "Bear in mind that Wikipedia is descriptive, not prescriptive. We cannot declare what a name should be, only what it is." --Voidvector (talk) 07:39, 7 July 2008 (UTC)[reply]

According to the book by Poynton, "YUV" and "Y'UV" are wrong when used for the regular separation of luma and chroma used in digital image/video coding. Moreover, those terms are already occupied by different color codings. When public standards say one thing, Wikipedia should not contradict them, no matter how much easier it is to type "yuv" on a keyboard. My suggestion still stands: use terminology that corresponds to public standards and reference literature. Cross-reference and explain in order to clear up the common misconception. —Preceding unsigned comment added by Knutinh (talkcontribs) 13:56, 28 July 2008 (UTC)[reply]
The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.

10-bit 4:2:2 in YUV10 or YUV16


All material currently supplied via the EBU is provided as either 10-bit 4:2:2 in YUV10 or YUV16 file format or as 8-bit 4:2:2 material. Isn't that just YUV at 10 and 16 bits? Can we have some info about them here?

EBU - HD test sequences

IncidentFlux [ TalkBack | Contributions ] 13:35, 28 October 2008 (UTC)[reply]


Image is for Y,R-Y,B-Y not for YUV


The image on the top of the page is not the YUV color space. It is the Y,R-Y,B-Y color space. In YUV the color difference signals are scaled. In the image, the axes are labeled from -.5 to .5, which, if used in the Y,R-Y,B-Y color space with a luminance of .5, would be correct because the blue level will be 1 at B-Y=.5 and 0 at B-Y=-.5, and the same with the red. —Preceding unsigned comment added by 75.57.161.130 (talk) 02:23, 31 March 2009 (UTC)[reply]

Seems that somebody should correct this, or am I confused? Nikevich (talk) 19:36, 23 May 2009 (UTC)[reply]

Historical background and human visual perception


Back before the merger that created the IEEE, the electronics people were the IRE. Their journal, Proceedings of the IRE, Color Television Issue, was a lifetime inspiration to me (I was probably a freshman in high school, but learned electronics early, from the MIT Rad. Lab. series). IIRC, it was the June, 1949 issue; just about positive about the year.

Although there was a lot of explanation of the color-subcarrier scheme for carrying chroma in NTSC, that's probably of interest to people who want to learn about NTSC analog (and why a 3-ns time error creates a noticeable hue shift!) (The spoof name was "Never Twice The Same Color", btw; people usually don't get that word "twice".)

If you're a serious student, the quadrature suppressed-carrier modulation scheme is worth learning about; it's still in use, I'm pretty sure, although in a more-developed form. It could be a background for learning about OFDM (and COFDM).

Human perception


Main point, though, is that some articles in that issue explained human color vision very clearly, and made it very evident that chrominance could have much less bandwidth than luminance yet still provide a quite-satisfactory image. As well, it was explained why the axes were chosen as they were, to correspond to human color perception.

Moreover, that issue made the concept of color difference signals (U and V, or I and Q) quite clear. It does take some rethinking! I dare say that, back then, NTSC represented sophisticated engineering. Nikevich (talk) 19:36, 23 May 2009 (UTC)[reply]

YUV to YIQ Matrix?


Does anyone here have any proper formulae to convert YUV to YIQ? I know of sites that have matrices to convert YIQ to YUV but not the other way around. WikiPro1981X (talk) 09:00, 12 September 2009 (UTC)[reply]


The other direction uses exactly the same matrix; it happens to be its own inverse.
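For the curious, the reply above can be checked numerically: with the usual 33° relation between (U, V) and (I, Q), the 2×2 conversion matrix is symmetric and orthogonal, hence its own inverse. A small sketch, assuming the standard NTSC convention:

```python
import math

# Standard NTSC relation (conventions vary slightly by source):
#   I = -U*sin(33 deg) + V*cos(33 deg)
#   Q =  U*cos(33 deg) + V*sin(33 deg)
s, c = math.sin(math.radians(33)), math.cos(math.radians(33))

def uv_to_iq(u, v):
    return (-u * s + v * c, u * c + v * s)

def iq_to_uv(i, q):
    # The matrix [[-s, c], [c, s]] is symmetric and orthogonal,
    # so it is its own inverse: the same formula works both ways.
    return (-i * s + q * c, i * c + q * s)

u, v = 0.3, -0.2
i, q = uv_to_iq(u, v)
u2, v2 = iq_to_uv(i, q)
# (u2, v2) round-trips back to (0.3, -0.2) up to floating-point error
```

So "use exactly the same matrix" is literally true here, not a typo.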

YUYV or UYVY ???


People, I'm quite puzzled about the YUV422 format. The article says that YUV422 represents a macropixel in the format YUYV, but this[4] website says the opposite: a macropixel is of the format UYVY. Additionally, the Aldebaran Nao RoboCup documentation says that the YUV422 format is UYVY, so who's right?

(The Aldebaran documentation[5], unfortunately, is readable only for RoboCup participants, sorry!) —Preceding unsigned comment added by 151.50.50.94 (talk) 22:01, 14 January 2010 (UTC)[reply]
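For what it's worth, both orderings exist as distinct FourCC formats, so neither source need be wrong about "4:2:2 packed" in general; they simply describe different packings. An illustrative sketch of the two common byte orders (two horizontally adjacent pixels share one U and one V sample):

```python
# Hypothetical illustration of the two common packed 4:2:2 byte orders.
def pack_yuyv(y0, u, y1, v):
    # FourCC "YUYV"/"YUY2": Y0, U, Y1, V
    return bytes([y0, u, y1, v])

def pack_uyvy(y0, u, y1, v):
    # FourCC "UYVY": U, Y0, V, Y1
    return bytes([u, y0, v, y1])

macro = (76, 84, 77, 255)
# Same four samples, two different byte orders on the wire:
assert pack_yuyv(*macro) == bytes([76, 84, 77, 255])
assert pack_uyvy(*macro) == bytes([84, 76, 255, 77])
```

Which one a given "YUV422" buffer uses depends on the FourCC the producer declares, not on the subsampling scheme itself.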

Further Confusion


YUV and Y'UV were used for a specific analog encoding of color information in television systems, while YCbCr was used for digital encoding of color information suited for video and still-image compression and transmission such as MPEG and JPEG. Today, the term YUV is commonly used in the computer industry to describe file-formats that are encoded using YCbCr.

The article begins with an explanation that says YCbCr is used for digital but that it's called YUV. This corresponds well with the YCbCr article which states: when referring to signals in video or digital form, the term "YUV" mostly means "Y′CbCr"

However, then this article has another description under Confusion with Y'CbCr. This section states that YUV is analogue and YCbCr is digital and that computer textbooks/implementations erroneously use the term YUV where Y'CbCr would be correct.

Right after that, the article describes "getting a digital signal", followed by Converting between Y'UV and RGB. This is very confusing as it seems to be addressing digital video (referencing pixels and FourCC) but uses YUV repeatedly. Is this about analogue-to-digital or about digital methods erroneously called 'YUV'?

Eradicator (talk) 22:20, 16 May 2010 (UTC)[reply]

abbreviation


So does anybody know if YUV just stands for "Y-signal", "U-signal", "V-signal", or is there some reference saying what it stands for, who coined it that way, or why? Thanks! Rogerdpack (talk) 18:35, 26 April 2012 (UTC)[reply]

Incorrect RGB matrix


From all article information this matrix

should look like this matrix:


Note that, if and then — Preceding unsigned comment added by Versatranitsonlywaytofly (talkcontribs) 15:33, 19 June 2012 (UTC)[reply]
If Y is 1, U and V can only be 0. Also, the matrix is just the inverse of the forward matrix. 2A00:1370:812D:F205:4C51:7CD:D2B7:2086 (talk) 22:48, 18 April 2021 (UTC)[reply]
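The matrices themselves did not survive on this page, but the point of the reply above can be illustrated with the commonly tabulated analog-YUV coefficients (assumed here from the usual tables, not recovered from the lost matrices): the YUV-to-RGB matrix is simply the inverse of the forward one, as a round trip shows.

```python
# Sketch: forward matrix with the commonly tabulated coefficients,
# and the commonly tabulated inverse; a round trip should recover RGB.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

def yuv_to_rgb(y, u, v):
    # These are the published inverse coefficients (rounded to 5 decimals).
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return r, g, b

rgb = (0.75, 0.5, 0.25)
round_trip = yuv_to_rgb(*rgb_to_yuv(*rgb))
# round_trip recovers (0.75, 0.5, 0.25) up to coefficient rounding error
```

The small residual error comes only from the 5-decimal rounding of the published coefficients.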

Rename and make into an overview article


I'd suggest renaming this article into something like YUV family of color spaces or YUV and YUV derivatives. That way, all the different kinds of "YUV" could have their own small overview section each (with three ===, ordered chronologically by date of invention or first employment, and by larger supersections with two == dividing between analogue and digital, planar and packed branches within the YUV family tree), with each overview section only giving the proper name, date, and use/function (why people created it, for what purpose/special device). Plus some rather limited, simple prose in one sentence about how its technical makeup differs from the previous or parent version.

Of course, each small overview section would contain a link to the particular version's main article, where we could get really down into the nasty details, including those long, complicated formulas for conversion to and from RGB and between different YUV versions, and illustrations about what the technological makeup looks like.

Many of those illustrations could also be used in this overview article here, provided they're not the kind of final image plus one picture for each of the three respective channels, all stacked on top of each other, as those 4-pic stacks are usually too large for small overview sections. Although I do agree that the one 4-pic stack for the seminal analogue YUV that started it all should be used at the top of this article. However, for the small overview sections per YUV version, I'm thinking of a corresponding pic of the type found here: [6], as they are neatly sized squares (though we are free to use different symbols, of course).

Another main issue of this current article could also be tackled by such a general overview article with small overview sections for each version of YUV: at the moment, this article only mentions very few of all the YUV derivatives currently in use. A more complete list (with 31 packed and 20 planar versions) can be found here: [7]

This article's lead should also stress more that YUV is definitely not identical with RGB (a smaller number of color-depth samples between pure black and pure white even in the luma channel Y, roughly 88% of the range samples between pure black and pure white in V relating to blue when compared to Y, roughly 49% in U relating to red, and, regarding pixel resolution, both chroma channels have about half the pixel resolution of luma) and thus requires conversion (mainly channel-specific gamma correction) for correct display on RGB monitors!

After all, I've run into quite a few disputes before that stemmed from the fact that RGB monitors do show a picture when fed a YUV video signal or displaying a YUV-encoded video file, which leads many people to believe that YUV is the exact same thing as RGB, not realizing that their RGB monitor only shows a more or less distorted rendering of the data. Further confusion is added by the fact that JPEGs also use a YUV derivative; however, the ICC profile for JPEG (*NOT* necessarily Motion JPEG!) specifies an automated gamma correction for RGB monitors, an automatic feature that never existed for the interpretation of YUV video data in the RGB domain.

In other words, YUV was made for TVs (even modern flatscreen TVs use a more-or-less working YUV decoder for correction), whereas most computer monitors (be they CRT or flat) strictly use RGB, and you can't properly judge a YUV-encoded video file on an RGB monitor without conversion.

It's also because people lack knowledge about the differences between YUV and RGB that they often mistake YUV for only a color model, because they only consider the two chroma axes (blue–luminance and red–luminance) compared to the "absolute" or "fixed" values of RGB that only reside on a more "simple" scale between 0 and 255. But YUV is really its own color space because of lower, channel-specific color depth, i.e. fewer samples between pure black and pure white, and because the two chroma channels have a pixel resolution that's half that of luma Y. --85.182.140.98 (talk) 11:32, 24 January 2013 (UTC)[reply]

What's the difference of NV21 and YV12 ?


I see both these sentences:

Y'UV420p (and Y'V12 or YV12) to RGB888 conversion
Y'UV420p (NV21) to ARGB8888 conversion

They are all Y'UV420p, with YV12 or NV21 in brackets. Is YV12 == NV21? — Preceding unsigned comment added by Sinojelly (talkcontribs) 16:09, 2 March 2013 (UTC)[reply]

See the difference between YV12 and NV21 here: http://www.fourcc.org/yuv.php --37.80.57.245 (talk) 20:19, 9 March 2013 (UTC)[reply]
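In short: both are 4:2:0 formats with the same 12 bits per pixel, but YV12 stores three separate planes (Y, then V, then U), while NV21 stores a Y plane followed by a single interleaved VU plane. A sketch of the two layouts, assuming even dimensions and no row padding (real buffers often have stride/alignment padding):

```python
# Byte offsets of the samples belonging to pixel (x, y), illustrative only.
def yv12_offsets(w, h, x, y):
    """Offsets of the (Y, V, U) samples for pixel (x, y) in YV12."""
    cw, ch = w // 2, h // 2                              # chroma plane size
    y_off = y * w + x
    v_off = w * h + (y // 2) * cw + (x // 2)             # full V plane first
    u_off = w * h + cw * ch + (y // 2) * cw + (x // 2)   # then full U plane
    return y_off, v_off, u_off

def nv21_offsets(w, h, x, y):
    """Offsets of the (Y, V, U) samples for pixel (x, y) in NV21."""
    y_off = y * w + x
    # Single interleaved chroma plane: V, U, V, U, ...
    c_base = w * h + (y // 2) * w + (x // 2) * 2
    return y_off, c_base, c_base + 1
```

Same information, different memory arrangement, which is why conversion code for the two formats indexes the chroma data differently.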

Android?


I'd correct this if I knew what the author intended.

Quite near the end of the text, it says "Y'UV420p (NV21) to ARGB8888 conversion: This format (NV21) is the standard picture format on Android camera preview"

Afaik, Android, the operating system, has little to do with cameras specifically; this probably refers to (analog?) studio cameras.

If it's just a slip of the mind, there's no need to retain this section.

User Nikevich, who often forgets to log in, and uses DHCP.

74.104.161.88 (talk) 05:30, 15 June 2013 (UTC)[reply]

I think this is right, NV21 is the default format returned e.g. by onPreviewFrameWithBuffer. — Preceding unsigned comment added by 109.2.154.6 (talk) 10:19, 11 October 2013 (UTC)[reply]

Correct conversion from YCbCr to RGB


One can try the YCbCr to RGB conversion given here. The equations given in this OpenCV page seem practically more accurate than the ones mentioned in the current wiki article. — Preceding unsigned comment added by 59.185.236.53 (talk) 07:26, 2 December 2013 (UTC)[reply]

Y'UV420sp (NV21) to ARGB8888 conversion


Suspect a bug in the conversion code. In the for loop, (offset+k) and (offset+k+1) are both used as indices into data[]. offset does not change and k is incremented by 1 after each iteration. Doesn't it mean that some elements of data[] are used in two consecutive iterations? Is that intended? I request the author either to fix it (say, k += 2 instead of k++) OR, if that is what you intend, to please add a comment explaining why. Regards, Pramod Ranade
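The concern is right in spirit for NV21-style data: in the interleaved VU plane, one (V, U) pair covers two horizontal pixels, so an index that reads data[offset+k] and data[offset+k+1] should normally advance by 2 per chroma sample. A minimal sketch (not the quoted Android code) of walking one chroma row:

```python
# Sketch of reading one chroma row of an NV21-style buffer, where V and U
# bytes are interleaved: the index steps by 2 per (V, U) pair.
def chroma_pairs(data, offset, width):
    """Return the (v, u) pairs along one chroma row, one per 2x2 pixel block."""
    pairs = []
    for k in range(0, width, 2):       # k += 2: one VU pair per two pixels
        v = data[offset + k]
        u = data[offset + k + 1]
        pairs.append((v, u))
    return pairs

row = bytes([200, 100, 210, 110])      # V0 U0 V1 U1
# chroma_pairs(row, 0, 4) yields [(200, 100), (210, 110)]
```

If the original loop really increments k by 1, then each byte is interpreted once as V and once as U, which would indeed be a bug unless deliberately compensated elsewhere.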

Clarification of images and Luma across U/V plane at constant Y'


From the content of the article, I cannot make sense of the images entitled "Y' value of 0", "Y' value of 0.5", and "Y' value of 1". When Y' is 0, the center point obviously is and should be black. Shouldn't the rest of the plane be either black as well or out of gamut? How can there be any color at all with zero luma? Likewise, with a Y' of 1, how can there be any color other than white, and shouldn't all positions outside the center of the plane be out of gamut, since one or more of the RGB components would have to be > maximum at any other point? Finally, when Y' is at 0.5, there appears to be a horizontal line of higher brightness through the green quadrant, so how does this image represent a constant luma?

Steve Jorgensen (talk) 08:40, 18 December 2014 (UTC)[reply]

Appropriateness of referencing BT.601


Current article states that the following constants are defined in BT.601:

The most recent version of the recommendation (BT.601-7) does not define these constants (in fact, the values 0.436 and 0.615 do not appear in the rec at all), nor does it make any mention of U/V values in general.

Further, BT.601 defines conversion from the analog RGB color space to the digital YCbCr color space, not to the analog YUV (or Y'UV) spaces. I question whether referencing this document is appropriate in an article on YUV. DarkMorford (talk) 16:28, 25 March 2015 (UTC)[reply]

Indeed, in BT.601 they are not mentioned. If you try to derive them from the constants in BT.601 you will see that they are both 0.5, as it maps both Cr and Cb to the range [0,1]. It looks like the constants are actually related to PAL, which would also be more fitting anyway.
I also find it strange that both the YUV and YCbCr articles go in depth about BT.601 and BT.709 conversion, but it is omitted in the BT.601 and BT.709 articles. I think these kinds of "overview" articles should just focus on the general concepts and try to only use variables in the formulas. The articles about the specific standards can then dig into stuff like matrix representations and SIMD optimizations. Spillerrec (talk) 00:30, 4 August 2015 (UTC)[reply]
Because this has nothing to do with BT.601, which is YCbCr. It is all about BT.470 spec. Fixed. 2A00:1FA0:487C:BE81:695D:16C0:3F0:DCCC (talk) 09:50, 16 April 2021 (UTC)[reply]
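As noted in the thread above, the 0.436 and 0.615 constants are not in BT.601; they do fall out of the classic PAL/BT.470-style chroma weighting. A quick check, assuming the usual values U = 0.493(B′−Y′), V = 0.877(R′−Y′) and the luma weights Wr = 0.299, Wb = 0.114:

```python
# Reproducing Umax = 0.436 and Vmax = 0.615 from the PAL scale factors.
Wr, Wb = 0.299, 0.114
u_scale, v_scale = 0.493, 0.877     # PAL: U = 0.493(B'-Y'), V = 0.877(R'-Y')

u_max = u_scale * (1 - Wb)          # (B'-Y') peaks at 1 - Wb for pure blue
v_max = v_scale * (1 - Wr)          # (R'-Y') peaks at 1 - Wr for pure red

print(round(u_max, 3), round(v_max, 3))   # 0.437 0.615
```

So the constants match the analog PAL weighting to within rounding, consistent with attributing them to BT.470 rather than BT.601.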

This is not clear


Converting between Y'UV and RGB


RGB files are typically encoded in 8, 12, 16 or 24 bits per pixel. In these examples, we will assume 24 bits per pixel, which is written as RGB888. The standard byte format is:

r0 = rgb[0];
g0 = rgb[1];
b0 = rgb[2];
r1 = rgb[3];
g1 = rgb[4];
b1 = rgb[5];
 ...

Y'UV files can be encoded in 12, 16 or 24 bits per pixel. The common formats are Y'UV444 (or YUV444), YUV411, Y'UV422 (or YUV422) and Y'UV420p (or YUV420). The apostrophe after the Y is often omitted, as is the "p" after YUV420p. In terms of actual file formats, YUV420 is the most common, as the data is more easily compressed, and the file extension is usually ".YUV".

What does r0 = rgb[0] mean? Is r0 a one-byte variable? Then:

r0 = rgb[0];
g0 = rgb[1];
b0 = rgb[2];

stands for the 8-bit case, and

r0 = rgb[0];
g0 = rgb[1];
b0 = rgb[2];
r1 = rgb[3];
g1 = rgb[4];
b1 = rgb[5];

for 16 bits per pixel?

r0 = rgb[0];
g0 = rgb[1];
b0 = rgb[2];
r1 = rgb[3];
g1 = rgb[4];
b1 = rgb[5];
r2 = rgb[6];
g2 = rgb[7];
b2 = rgb[8];

is for 24 bits per pixel?

If that is the right interpretation, how are 12 bits per pixel represented?

r0 = rgb[0];
g0 = rgb[1];
b0 = rgb[2];
r1 = rgb[3] & 0x0F;
g1 = rgb[3] & 0xF0 >> 4;
b1 = rgb[4] & 0x0F;
r'0 = rgb[4] & 0xF0 >> 4 | rgb[5] & 0x0F << 4;
g'0 = rgb[5] ???
b'0 = rgb[6] ???

or maybe something like:

r0 = rgb[0];
g0 = rgb[1];
b0 = rgb[2];
r1 = rgb[3] & 0x0F;
g1 = rgb[4] & 0x0F;
b1 = rgb[5] & 0x0F;
r'0 = rgb[3] & 0xF0 >> 4;
g'0 = rgb[4] & 0xF0 >> 4;
b'0 = rgb[5] & 0xF0 >> 4
r'1 = rgb[6];
g'1 = rgb[7];
b'1 = rgb[8];

It is not clear how it is encoded.

maybe a schema like:

  R   G   B   R   G   B
+---+---+---+---+---+---+
|x01|x23|x45|x67|x89|xAB|
+---+---+---+---+---+---+
R = 0x0167;
G = 0x2389;
B = 0x45AB;

(I'm sorry for such formatting; I am not a skilled wikipedian.)

At a glance I do not understand how this works. It seems there is a relation between the numbers 444, 422, 411 and 420 and the 24-, 16-, 12- and 8-bit sizes, which is not clear to me.
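The relation the comment above is asking about can be made concrete. Under the usual reading of the J:a:b notation, the numbers describe how many chroma samples accompany a 2-row-high, J-pixel-wide block of luma samples; at 8 bits per sample this yields the 24/16/12 bpp figures. A sketch of the arithmetic:

```python
# Average bits per pixel at 8 bits per sample, derived from the J:a:b
# subsampling notation (per 2-row-high, J-pixel-wide block of pixels):
# a = chroma samples per row pair's first row, b = extra samples in row two.
def avg_bpp(j, a, b, bits=8):
    luma = 2 * j                  # every pixel carries a luma sample
    chroma = 2 * (a + b)          # U and V each contribute a + b samples
    return bits * (luma + chroma) / (2 * j)

# avg_bpp(4, 4, 4) -> 24.0   (4:4:4)
# avg_bpp(4, 2, 2) -> 16.0   (4:2:2)
# avg_bpp(4, 1, 1) -> 12.0   (4:1:1)
# avg_bpp(4, 2, 0) -> 12.0   (4:2:0)
```

So 4:2:2 averages 16 bits per pixel and both 4:1:1 and 4:2:0 average 12, which is exactly the correspondence the quoted article text assumes without spelling out.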

The article is not clear; it could be rewritten in a more concise version.

Also, it seems this is an erratum where it is written:

Y'UV420p (or YUV420)

shouldn't it rather say: YUV420p (or Y'UV420)?

I do not know about this subject; I came here to learn. For that reason I did not change the article. It is maybe clearer if one reads the whole article carefully, but a well-written article should be more readable.

If linear algebra makes it possible to be more concise, use just that language. A good informal description of what the formulas mean should be enough to understand it.

Should the removed algorithms be included? If the formulas are clear, anyone could translate them into C or any other programming language. I do not agree with the exaggerated rule to omit detailed information from articles, like the C code in this article, or including recipes in articles about meals. What is important is to write clear articles, without redundant information, that are easy to read. If that additional information does not interfere with the reading flow, why not include it, if it could be useful for the reader?

In summary: can someone who knows the subject rewrite it in a clearer way, omitting redundant formulas, writing succinct formulas with an informal description of their meaning, and giving a more schematic description of this format?

A suggested new structure for this article: it may be better to rewrite it!


This article is about YUV, a family of low-level structures for representing color images. The simplest structure is the RGB format. It is intuitive, and every reader of this article should be familiar with it, right? That can be used as the starting point of the article.

The article could start by explaining that images are captured by cameras and scanners which mainly use RGB sensors. Then explain why RGB had disadvantages for video streaming and why the YUV family of standards emerged. (That part is already in the article, although it could be improved by only writing the steps important to understand the diverse YUV formats.)

Having stated the need for YUV, it could be explained using pictures of the structure. There is no need to explain it with program code.

Explain how a stream of RGB pixels is represented in a YUV stream, first using a diagram, then using mathematics (not programming code or pseudo-code) to explain how the transformation is done. From this article I understood that the constants employed have changed and may differ between variants of the YUV codings. For that reason, at first, it can be explained using symbols for the constants, not the actual numbers.

Once the transformation is abstractly explained, problems with calculations may be explained, but please do not use program code for that. Just draw cells and show the needed shifts, ANDs, ORs, etc.

If one thinks of the structure of the article as a literate-program form to be filled in by any reader, example C code could be used to illustrate the implementation.

My skills in embellishing web pages are asymptotically close to zero; maybe I'm going to say something stupid, so just take it as a wish from a wiki-language ignorant, right!

If there is a way to dynamically change the page, it would be nice if code in different languages could be shown according to some parameter. On other sites that could be done by means of JavaScript, but I do not even suggest including any kind of scripts in Wikipedia; I prefer it simple and safe. But some skilled wikipedian reading this may know a trick or an alternative to include different actual encodings for the explained transformation. I am thinking of something simple, like a frame with programming languages such as C, C++, Java, Perl, etc., where clicking each language name shows a piece of code. That is just bells and whistles, but I suggest it because my point of view about including code in articles on subjects like this is not to forbid it, but to write the article as the context to develop a program in a literate style. It does not really matter if code is included or not; if the code is omitted, any programmer could write it simply by following the explanation and the structure of the article. As I said, the article should be seen as a form to fill with code to obtain a literate program.

Following with the content of the article: it would be important to explain where the actual constants (weights) came from. I did not understand their origin, I mean whether they are arbitrary or obtained following some optimality criterion. That may be important.

After explaining the first version of YUV, the variants could follow, explaining the reason for each one, always starting with a diagram, without repeating the things to take into account when implementing it in a computer, like endianness (not yet mentioned in the actual article), roundoff errors, floating-point usage, etc., because that was explained in the first example. A reference to that part should be enough, unless the new variant has another particularity which needs more caution during implementation.

A comparison among the variants, citations of the standards documents, and pictures and videos showing any difference in the quality of the different YUV encoding schemes may also be included.

Links to articles about programs used to encode YUV. Advantages, if any, in using those formats for tasks like editing videos, compressing videos, encoding digital pictures, etc.

That's all. If I knew about this subject, it would be very relaxing for me to write the new article. But as I said in another comment, I came to this article to learn about YUV.

Of course, the actual article has great diagrams and parts that can be reused, but I suggest a new article, because it has a lot of explanations using confusing program code.

I hope my suggestion be useful for other wikipedians — Preceding unsigned comment added by 189.178.32.207 (talk) 06:49, 29 October 2015 (UTC)[reply]

Error in Numerical Approximations section.


In this section https://en.wikipedia.org/wiki/YUV#Numerical_approximations of the article here, specifically in the Full Swing subsection, it claims that the factors for calculating the Y component are 76, 150, and 29. Unfortunately this is incorrect. While these values do sum to 255 themselves, after performing the remaining required operations with actual pixel data, a white pixel can be seen to produce a Y value of 254, when it SHOULD be 255. That is to say (76*255 + 150*255 + 29*255 + 128) >> 8 = 254. This means that the factors were selected such that when the calculation is performed you get 254, when in fact the factors should have been selected such that the above calculation results in the value 255 instead. I'm not sure which of the 3 factors is incorrect here, or if in fact the entire matrix will need to be recalculated (lest changing one factor throw off the correctness of the entire matrix, assuming that the rest of the matrix is currently correct).

Please, some advanced mathematician on here, please re-write this matrix, so that I can use it in a program that I'm developing. I'm using Wikipedia as my source material, to gain the knowledge that I will need to write my program, but it can't help me if it has so many inaccuracies.

Benhut1 (talk) 01:28, 9 June 2016 (UTC)[reply]

I can confirm this problem, the values compute to 254. The floating point factors for this conversion are R * 0.299 + G * 0.587 + B * 0.114. Multiplying with 256.0 yields R * 76.544 + G * 150.272 + B * 29.184. So my guess would be 76 should be 77 when you round it. Note that the remainders perfectly sum up to one, so rounding one value only, i.e., choosing 77 instead of 76, does the trick.

Simonfuhrmann (talk) 18:55, 26 June 2017 (UTC)[reply]
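The arithmetic in the two comments above is easy to verify directly:

```python
# Verifying the full-swing rounding issue discussed above.
def y_full_swing(r, g, b, cr, cg, cb):
    return (cr * r + cg * g + cb * b + 128) >> 8

# White with the 76/150/29 coefficients loses a level:
# y_full_swing(255, 255, 255, 76, 150, 29) -> 254
# Bumping 76 to 77 (0.299 * 256 = 76.544, which rounds up) restores it:
# y_full_swing(255, 255, 255, 77, 150, 29) -> 255
```

With 77/150/29 the coefficients sum to 256, so the >> 8 divide maps white back to exactly 255.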

Color depth


It would *TREMENDOUSLY* help and clarify a lot of things if the article would also explain the differences between RGB and YUV and its many derivatives in regards to color depth per pixel, and clarify that the 4:x:x color subsampling schemes only refer to pixel resolution per color channel. Regarding color depth, the following applies to YUV and its many derivatives:

  • a.) The maximum depth or resolution per color channel between pure black and pure white (the absolute lower and upper thresholds that are identical in RGB and YUV) is not 255 values but 255 - 16 - 15 = 224 values (in other words, RGB samples 0-255, whereas YUV etc. only samples 16-240), which would be roughly 11.23 million colors all-in-all. The difference to the 16.7 million colors of RGB24 aka TrueColor is invisible to the human eye when it comes to the number of unique colors, but results in notably different gamma when RGB and YUV material is displayed on a monitor working in the other color space, which is why converting between the two color spaces requires gamma correction.
  • b.) Additionally, YUV and its derivatives use color subsampling not only in pixel resolution but also in color depth, where Y stands for brightness aka green and is the one channel that actually samples all 224 values, while blue and red as U and V are color-subsampled (meaning they sample fewer values between pure black and pure white), where the blue channel samples 88% (of the values in Y) = 198 different values between pure black and pure white, and red only 49% (of the values in Y) = 110 values, so the *ACTUAL* TV signal color depth of YUV and its derivatives is 224 * 198 * 110 = slightly under 5 million colors.
Again, the difference between 5 million colors and RGB TrueColor is virtually invisible to the human eye as for the number of *DIFFERENT COLOR VALUES* between pure black and pure white, but this latter fact requires different gamma correction per color channel in order to convert between YUV and RGB, instead of one global gamma value for all three channels. So that's all that YUV <--> RGB conversion is: applying three different gamma correction values upon the data, one for every color channel, plus color subsampling also in regards to pixel resolution. The practical result of these different gamma values per color channel is that what looks clean and neutral on a YUV screen looks like it has a slight pink tint on an RGB screen if no proper YUV --> RGB conversion is applied. The complementary case is where unconverted RGB footage is shown on a YUV monitor and turns out slightly greenish. (Note that while chroma subsampling is pretty much invisible to the human eye in *COLOR DEPTH*, it's *SOMEWHAT* noticeable in the number of *PIXELS* per color channel. But especially color subsampling in *PIXEL RESOLUTION* is only what they call a "psycho-visual" compression nowadays, based upon the strengths and weaknesses of the human eye, which can see colors far less well than brightness; plus YUV monitors blur especially the pixels of the red channel in order to make the huge red pixels less obvious and blocky to the naked eye, while their blockiness stands out like a sore thumb when displaying a YUV signal on RGB monitors. But the latter only applies to video YUV, whereas the same red blur as common in YUV monitors is also applied on RGB monitors as part of the ICC profile of the JPEG standard, which is why nobody sees chunky red pixels on JPEG images.) --2003:EF:13C6:EE54:B4F5:387A:FA44:9009 (talk) 03:42, 17 September 2018 (UTC)[reply]

Where does "YUV420p" come from?


I realise it would have helped to ask this on August 17th 2006, but I'm curious where the "YUV420p" terminology originated. It's not anything to do with FourCC and Microsoft only appear to use the term in passing; I don't believe Khronos use it, and I've not seen Apple use it. If it was made up for this page, it's stated in very authoritative terms (which means that others start to use it in an official context and I worry about correcting them...)

If it's somewhere official, it would be nice to link; if not, it should be clear that the terminology is only being used for the sake of abbreviation in the article. 09:40, 3 July 2020 (UTC) — Preceding unsigned comment added by Fluppeteer (talkcontribs)

Kodak Photo CD used 4:2:0 subsampling. Considering it was as long ago as 1993, and they are in the ICC... Maybe it was them. 109.252.90.66 (talk) 15:00, 29 January 2021 (UTC)[reply]
I know that yuvj420p (j means full range, p means planar) was a hack in libav and was deprecated in FFmpeg. 176.213.135.201 (talk) 09:42, 16 April 2021 (UTC)[reply]

Merging with YCbCr article


Most of this article is redundant with the YCbCr article. I would suggest merging them and keeping only one, or possibly removing all the Rec. 601/709/etc colorspace information from the article and only leaving the pixel format information. Thoughts? TD-Linux (talk) 17:19, 20 October 2020 (UTC)[reply]

What? BT.601 stuff has nothing to do with the YCbCr article, just look into the scaling coefficients. It is actually YUV. And BT.709 is also not a YCbCr one, that matrix has two 0.5 numbers in it. I added BT.709 matrices in the YCbCr article here: https://en.wikipedia.org/w/index.php?title=YCbCr&diff=1017259797&oldid=1017034650 2A00:1370:812D:178D:2197:F930:8167:C89A (talk) 15:51, 11 April 2021 (UTC)[reply]

Incorrect information on gamma


This article explicitly states that Rec. 601 and Rec. 709 operate on linear values, however they don't - they operate on R'G'B', the gamma corrected values. This is backed up by the sources of Rec._709 and YCbCr, and even more explicitly by Rec._2020, which actually has a mode that can operate on RGB. I'm going to change the equations on this page to match YCbCr in a few days if no one objects. TD-Linux (talk) 17:22, 20 October 2020 (UTC)[reply]

→ Did you do it already? I just struggled there, too. (unknown) 21:54, 18 December 2020 (UTC) — Preceding unsigned comment added by 153.96.12.26 (talk)

I agree. The problem with this article is that except for Y no matrix is the same as in the BT.601 spec. 2A00:1370:812D:864F:2132:15C2:E4F9:6517 (talk) 20:53, 13 January 2021 (UTC)[reply]
So it is a little more complicated. It can be both linear and nonlinear. "SECAM IV was a colour standard developed by the Russian research institute NIR (Nautschnuiu Issledowatelskaya Rabota). In fact two standards were developed: Non-linear NIR in which the square root of the amplitude of the chroma signals is transmitted (in a process analogous to gamma correction) and Linear NIR which omits this process. SECAM-IV as described below is the Linear version of NIR." http://web.archive.org/web/20190227232952/http://www.radios-tv.co.uk/Pembers/World-TV-Standards/Colour-Standards.html#SECAM 2A00:1370:812D:864F:3CE5:C0DC:2383:A90E (talk) 18:10, 21 January 2021 (UTC)[reply]
I fixed it and added info about SECAM IV. 2A00:1FA0:487C:BE81:695D:16C0:3F0:DCCC (talk) 09:52, 16 April 2021 (UTC)[reply]

Formula no longer in Android


Polling before taking action...

There's currently some code quoted from Android for NV21 conversion. That code is no longer part of Android/stagefright as far as I can tell, and I'm really struggling to tell where those constants came from (no mangling I can do to any obvious TV standard seems to produce that matrix).

If that matrix is meaningful, can anyone refer to a more authoritative source or show some derivation, please? If there's no evidence of it being correct or still being used, the code should go. Fluppeteer (talk) 20:36, 4 June 2021 (UTC)[reply]

You gotta be kidding me. It is still there https://android.googlesource.com/platform/frameworks/av/+/0e4e5a8/media/libstagefright/yuv/YUVImage.cpp#375 and as for something more modern, see libyuv. Valery Zapolodov (talk) 00:25, 8 June 2021 (UTC)[reply]
Valery, your link is pointing to a version of libstagefright that is 6 years old. It is irrelevant to this discussion, which is about the current state of libstagefright. --Krótki (talk) 06:26, 8 June 2021 (UTC)[reply]
This code is no longer in libstagefright, it is in libyuv. Valery Zapolodov (talk) 21:43, 10 June 2021 (UTC)[reply]

Did ATSC 1.0, 2.0, 3.0 really use analog Umax, Vmax scaled BT.709?


I doubt that. I did give matrices with BT.709 with Umax and Vmax applied, yet I doubt those are real. Valery Zapolodov (talk) 15:18, 28 August 2021 (UTC)[reply]


External links: Defunct fourcc.org redirects to malicious website! 185.63.98.8 (talk) 12:50, 19 June 2023 (UTC)[reply]