July 29, 2019 – The graphics card section below is no longer relevant: Nvidia has updated its GeForce graphics card drivers to offer 30-bit (10-bit per channel) color in OpenGL programs, matching what was previously only available in its Quadro graphics card line.
Nvidia article on topic: https://www.nvidia.com/en-us/geforce/news/studio-driver/?ncid=so-twit-88958#cid=organicSocial_en-us_Twitter_NVIDIA_Studio
If you are a photo or video editor wondering if you should go with 8-bit or 10-bit hardware, this article is for you!
With respect to monitors and graphics cards, some of the hardware available today is 8-bit and some is 10-bit.
Before we get too far, let's define what a "bit" is. In computing terms, a bit is the smallest unit of information. A bit can hold one of two values, 1 or 0 (on or off, true or false). Bits can describe values in programming languages, storage space, or color space. In this article we will be discussing color, and how bits relate to color space.
Bit counts can be quoted in different ways when it comes to color. Here we will be discussing bits per channel, also called bits per component (bpc).
The higher the bit depth you are working with, the larger the finite set of colors (the color palette) that can be assigned to each pixel in an image. The number of values for a given bit depth is 2 raised to the power of the bit count. For example, a 4-bit value can represent 2 x 2 x 2 x 2 = 16 values.
To illustrate, if you are working with 8 bits per channel in a photo editing program, there are 256 color values per color channel (Red, Green, and Blue) to choose from for each pixel in that image, for a total of 24 bits' worth of values (8-bit red + 8-bit green + 8-bit blue), or 16,777,216 colors.
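If you want to check that arithmetic yourself, here is a minimal Python sketch of the 2-to-the-power-of-the-bit-count calculation (the variable names are just for illustration):

```python
# Values per channel for a given bit depth: 2 raised to the bit count.
bits_per_channel = 8
values_per_channel = 2 ** bits_per_channel    # 256
# The three channels (R, G, B) multiply together:
total_colors = values_per_channel ** 3        # 16,777,216
print(f"{values_per_channel} values per channel, {total_colors:,} total colors")
```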
So as you can imagine, the higher the bit depth of color, the more colors available in the color palette. More available colors means smoother transitions from one color to another in a gradient.
Color information is sent from your graphics card to the monitor as a number that represents the color that a pixel should be within the confines of a given color palette. The monitor then takes that number and reproduces the color that the number corresponds to for a given pixel of an image on screen.
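As a simplified illustration of "the color as a number," here is one common way three 8-bit channel values can be packed into a single 24-bit value; the actual signaling between a graphics card and a monitor is more involved than this sketch suggests:

```python
# Pack three 8-bit channel values (0-255 each) into one 24-bit number.
red, green, blue = 255, 128, 0                 # an orange pixel
pixel = (red << 16) | (green << 8) | blue
print(hex(pixel))                              # 0xff8000

# Unpacking reverses the shifts, which is roughly what a display
# pipeline does to recover the per-channel intensities.
r = (pixel >> 16) & 0xFF
g = (pixel >> 8) & 0xFF
b = pixel & 0xFF
```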
1-bit = 2 values
2-bit = 4
3-bit = 8
4-bit = 16
5-bit = 32
6-bit = 64
7-bit = 128
8-bit = 256
9-bit = 512
10-bit = 1,024
11-bit = 2,048
12-bit = 4,096
13-bit = 8,192
14-bit = 16,384
15-bit = 32,768
16-bit = 65,536
17-bit = 131,072
18-bit = 262,144
19-bit = 524,288
20-bit = 1,048,576
21-bit = 2,097,152
22-bit = 4,194,304
23-bit = 8,388,608
24-bit = 16,777,216
25-bit = 33,554,432
26-bit = 67,108,864
27-bit = 134,217,728
28-bit = 268,435,456
29-bit = 536,870,912
30-bit = 1,073,741,824
The output color depth for mainstream graphics cards, such as the Nvidia GeForce or AMD Radeon, is listed as 8 bpc (bits per component). This refers to 8-bit color values for Red, 8-bit for Green, and 8-bit for Blue: essentially 8R + 8G + 8B. In other words, it is 24-bit color = 16,777,216 values, with Red, Green, and Blue each getting 8 bits' worth of values. When looking at monitor specs, these are often listed as "16.7 million display colors."
Workstation-class graphics cards, such as the Nvidia Quadro and AMD FirePro lines, and 10-bit I/O cards, such as the Blackmagic Design DeckLink, supply 10 bpc. That gives a much larger pool of color options: a 10-bit Red channel + a 10-bit Green channel + a 10-bit Blue channel, for a total of 30-bit RGB, or 1,073,741,824 values. When looking at monitor specs, 10-bit monitors are often listed as "1.07 billion display colors." (It is worth noting that 10-bit I/O cards like the Blackmagic DeckLink tend to display only the timeline or the photo being edited for color correction. When you are not editing a photo or video, the display is blank rather than showing the desktop. I/O cards are best used with a separate, secondary 10-bit monitor dedicated to image color correction, not to running programs.)
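For comparison with the 24-bit packing sketch above, here is a hypothetical sketch of three 10-bit channels packed into one word, similar in spirit to (though not necessarily identical to) layouts such as R10G10B10A2 used by graphics APIs:

```python
# Pack three 10-bit channel values (0-1023 each) into one 30-bit number.
r10, g10, b10 = 1023, 512, 0          # each channel now has 1,024 levels
pixel30 = (r10 << 20) | (g10 << 10) | b10
print(f"{pixel30:030b}")              # 30 bits of color information
print(f"{2 ** 30:,} possible values") # 1,073,741,824
```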
The higher the bit rating of your hardware, the larger the group of colors available to you, and the smoother the potential gradients from one color to another, as can be seen in a sunset photo. With lower-bit hardware or settings, some of the colors you actually captured in an image may be substituted with the nearest available color. The sketch below illustrates the effect.
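To see why extra levels mean smoother gradients, this small sketch quantizes the same smooth ramp at 8 and 10 bits and counts the distinct steps that survive; fewer steps means coarser jumps and more visible banding:

```python
# Quantize a smooth 0.0-1.0 ramp at a given bit depth.
def quantize(value, bits):
    levels = 2 ** bits - 1                  # highest representable level
    return round(value * levels) / levels

ramp = [i / 9999 for i in range(10000)]     # a finely sampled gradient
print(len({quantize(v, 8) for v in ramp}))  # 256 distinct levels
print(len({quantize(v, 10) for v in ramp})) # 1,024 distinct levels
```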
Conclusion
So what does all of this mean when choosing hardware? If you work with images professionally and will have them professionally printed, you will be better off with a 10-bit graphics card (or 10-bit I/O card) and a 10-bit monitor, as professional print shops are able to print more colors. If you are editing photos only for personal use, or to post on the web, then an 8-bit graphics card and monitor are sufficient, since the vast majority of people accessing the Internet have 8-bit hardware and would not be able to see the difference. And although most people cannot tell 8-bit from 10-bit, it is easier to decrease the quality of a 10-bit image for web use than it is to increase the quality of an 8-bit image for professional printing. So the additional colors that 10-bit is capable of are an advantage, providing the flexibility to save for the web or to print professionally.
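As a rough sketch of why down-converting is the easy direction: dropping a 10-bit value to 8 bits can be approximated by a simple right shift, while going the other way cannot recover detail that was never stored:

```python
value_10bit = 987                # one of 1,024 possible levels
value_8bit = value_10bit >> 2    # 246: the detail in the low 2 bits is discarded
back_to_10 = value_8bit << 2     # 984, not 987: the lost detail does not return
```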
7/19/2017 Update: One reader suggested that when you check the specs of new monitors, you should watch out for, and avoid, 8-bit+FRC monitors.
For those not aware, 8-bit+FRC (Frame Rate Control) monitors are 8-bit monitors that essentially fake the output of a 10-bit monitor by flashing two colors in quick alternation to give the illusion of the color that should be displayed. For example, suppose the color that should be displayed on a 10-bit monitor is number 101 in the Look Up Table, and an 8-bit monitor is only capable of displaying color number 100 or color number 104. An 8-bit+FRC monitor would alternate between colors 100 and 104 quickly enough that you should not notice the flashing, tricking the human eye into seeing color number 101. To do this, the monitor shows color number 100 for 75% of the time and color number 104 for 25% of the time, similar to how moving pictures give the illusion of motion.

If color 102 needed to be displayed, the monitor would show color 100 for 50% of the time and color 104 for 50% of the time. And to represent color number 103, as you can imagine by now, it would show color 100 for 25% of the time and color 104 for 75% of the time, whereas a true 10-bit monitor would simply display color number 103.
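Here is a small sketch of the duty-cycle arithmetic behind FRC, using the example numbers above; the function name is just for illustration:

```python
# Fraction of frames the upper color must be shown so the temporal
# average lands on the target color.
def frc_upper_share(target, lower, upper):
    return (target - lower) / (upper - lower)

# Target color 101 lies between displayable colors 100 and 104:
upper_share = frc_upper_share(101, 100, 104)  # 0.25 -> color 104 shown 25% of frames
lower_share = 1 - upper_share                 # 0.75 -> color 100 shown 75% of frames
print(lower_share, upper_share)
```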
I hope this helps!
You may also be interested in these related articles:
Setting Graphics Card Software to Display 10-bit Output
How to enable 30-bit in Photoshop