Video card


Nvidia GeForce RTX 3090 Founders Edition
Connects to: Motherboard, via one of:
  • ISA
  • MCA
  • VLB
  • PCI
  • AGP
  • PCI-X
  • PCI Express
  • Others

Display via one of:
  • VGA
  • DVI
  • HDMI
  • DisplayPort
  • Others

A video card (also called a graphics card, display card, graphics adapter, or display adapter) is an expansion card which generates a feed of output images to a display device (such as a computer monitor). Frequently, these are advertised as discrete or dedicated graphics cards, emphasizing the distinction between them and integrated graphics. At the core of both is the graphics processing unit (GPU), the component that performs the actual computations; the GPU should not be confused with the video card as a whole, although "GPU" is often used as a metonymic shorthand for video cards.

Most video cards are not limited to simple display output. Their graphics processor can perform additional processing, removing this task from the central processor of the computer.[1] For example, Nvidia and AMD (previously ATI) produce cards that implement the OpenGL and DirectX graphics pipelines in hardware.[2] In the late 2010s, there has also been a trend toward using the computing capabilities of the graphics processor to solve non-graphics tasks, which can be done through the use of OpenCL and CUDA. Video cards can also be used for AI training.[3][2]
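As a minimal illustration of such general-purpose GPU computing, the sketch below offloads a simple array calculation to the graphics card from Python. It assumes the optional CuPy library (a CUDA-based package not discussed above) and a CUDA-capable card; OpenCL-based tools work analogously.

```python
# A minimal sketch of general-purpose GPU computing from Python. It assumes
# the optional CuPy package and a CUDA-capable video card are available;
# OpenCL-based tools work analogously.
import numpy as np
import cupy as cp  # assumption: CuPy is installed and a CUDA GPU is present

# Two large vectors in ordinary system RAM.
a_host = np.random.rand(1_000_000).astype(np.float32)
b_host = np.random.rand(1_000_000).astype(np.float32)

# Copy them into the video card's memory.
a_gpu = cp.asarray(a_host)
b_gpu = cp.asarray(b_host)

# The arithmetic runs on the GPU's many parallel cores, not on the CPU.
c_gpu = a_gpu * b_gpu + 2.0

# Copy the result back to system RAM for further use by the CPU.
c_host = cp.asnumpy(c_gpu)
print(c_host[:5])
```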

Usually, the graphics card is made in the form of a printed circuit board (expansion board) which is inserted into an expansion slot, either universal or specialized (AGP, PCI Express).[4] Some have been made with dedicated enclosures that connect to the computer via a docking station or a cable; these are known as external GPUs (eGPUs).

History

Standards such as MDA, CGA, HGC, Tandy, PGC, EGA, VGA, MCGA, 8514 or XGA were introduced from 1982 to 1990 and supported by a variety of hardware manufacturers.

3dfx Interactive was one of the first companies to develop a GPU with 3D acceleration (with the Voodoo series) and the first to develop a graphics chipset dedicated to 3D but lacking 2D support, which therefore required a separate 2D card to work. Until 2000, 3dfx was an important, and often groundbreaking, manufacturer; today the majority of video cards are built with graphics chips sourced from either AMD or Nvidia.[5] Most video cards offer functions such as accelerated rendering of 3D scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, and the ability to connect multiple monitors (multi-monitor). Video cards also have sound capabilities, so audio can be output along with video to connected TVs or monitors with integrated speakers.

Within the industry, video cards are sometimes called graphics add-in-boards, abbreviated as AIBs,[5] with the word "graphics" usually omitted.

Dedicated vs integrated graphics

Classical desktop computer architecture with a distinct graphics card over PCI Express. Typical bandwidths for the given memory technologies are shown; memory latency is not. Zero-copy between GPU and CPU is not possible, since each has its own distinct physical memory; data must be copied from one to the other to be shared.
Integrated graphics with partitioned main memory: a part of the system memory is allocated to the GPU exclusively. Zero-copy is not possible; data has to be copied, over the system memory bus, from one partition to the other.
Integrated graphics with unified main memory, as found in AMD "Kaveri" APUs or the PlayStation 4 (HSA).

As an alternative to the use of a video card, video hardware can be integrated into the motherboard, CPU, or a system-on-chip; all of these approaches can be called integrated graphics. Motherboard-based implementations are sometimes called "on-board video". Almost all desktop motherboards with integrated graphics allow the integrated graphics chip to be disabled in the BIOS, and have a PCI or PCI Express (PCI-E) slot for adding a higher-performance graphics card in place of the integrated graphics. The ability to disable the integrated graphics also allows the continued use of a motherboard on which the on-board video has failed. Sometimes both the integrated graphics and a dedicated graphics card can be used simultaneously to feed separate displays.

The main advantages of integrated graphics are cost, compactness, simplicity, and low energy consumption. Its performance disadvantage arises because the graphics processor shares system resources with the CPU. A dedicated graphics card has its own random-access memory (RAM), its own cooling system, and dedicated power regulators, all designed specifically for processing video images. Upgrading to a dedicated graphics card offloads work from the CPU and system RAM, so graphics processing becomes faster and the computer's overall performance can also improve. Such an upgrade is often necessary for playing video games, working with 3D animation, or editing video.
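Because a discrete card has its own physical memory, data shared between CPU and GPU must cross the expansion bus, as noted in the figure captions above. A back-of-the-envelope sketch, using assumed figures (one uncompressed 4K frame over a PCI Express 3.0 x16 link), gives the order of magnitude of that copy cost:

```python
# A back-of-the-envelope sketch (assumed figures, not benchmarks) of the cost
# of copying data between system RAM and a discrete card's own memory.
frame_bytes = 3840 * 2160 * 4        # one uncompressed 4K RGBA frame
pcie3_x16_bytes_per_s = 15.75e9      # ~15.75 GB/s usable on a PCIe 3.0 x16 link

copy_time_ms = frame_bytes / pcie3_x16_bytes_per_s * 1000
print(f"One 4K frame is {frame_bytes / 1e6:.1f} MB; copying it over "
      f"PCIe 3.0 x16 takes roughly {copy_time_ms:.2f} ms.")
# With unified main memory (integrated graphics, HSA), the same data could be
# shared by passing a pointer rather than being copied at all.
```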

Both AMD and Intel have introduced CPUs and motherboard chipsets which support the integration of a GPU into the same die as the CPU. AMD markets CPUs with integrated graphics under the trademark Accelerated Processing Unit (APU), while Intel markets similar technology under the "Intel HD Graphics" and "Iris" brands. With its 8th-generation processors, Intel announced the Intel UHD series of integrated graphics for better support of 4K displays.[6] Although still not equivalent to discrete solutions, Intel's HD Graphics platform provides performance approaching discrete mid-range graphics, and AMD's APU technology has been adopted by both the PlayStation 4 and Xbox One video game consoles.[7][8][9]

Power demand

As the processing power of video cards has increased, so has their demand for electrical power. Current high-performance video cards tend to consume large amounts of power. For example, the thermal design power (TDP) for the GeForce Titan RTX is 280 watts,[10] and when tested while gaming, the GeForce RTX 2080 Ti Founders Edition averaged 300 watts of power consumption.[11] While CPU and power-supply makers have recently moved toward higher efficiency, the power demands of GPUs have continued to rise, so the video card may be the largest individual power consumer in a computer.[12][13] Although power supplies have increased their output as well, the bottleneck is the PCI Express connection, which is limited to supplying 75 watts.[14] Modern video cards that consume more than 75 watts therefore usually include a combination of six-pin (75 W) or eight-pin (150 W) sockets that connect directly to the power supply. Providing adequate cooling becomes a challenge in such computers. Computers with multiple video cards may require power supplies over 750 watts, and heat extraction becomes a major design consideration for computers with two or more high-end video cards.
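The connector arithmetic above can be summarized in a short sketch; the slot and connector limits are those of the PCI Express specification, while the example card is hypothetical:

```python
# A sketch of the power-budget arithmetic described above. The slot and
# connector limits follow the PCI Express specification; the example card
# configuration is hypothetical.
PCIE_SLOT_W = 75      # a PCI Express x16 slot supplies up to 75 W
SIX_PIN_W = 75        # each 6-pin PCIe power connector adds up to 75 W
EIGHT_PIN_W = 150     # each 8-pin PCIe power connector adds up to 150 W

def board_power_budget(six_pin: int = 0, eight_pin: int = 0) -> int:
    """Maximum power (W) a card may draw within these connector limits."""
    return PCIE_SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

# A card with one 6-pin and one 8-pin connector:
print(board_power_budget(six_pin=1, eight_pin=1), "W available")  # 300 W
# A 280 W card therefore fits, but only with both extra connectors attached.
```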

Size

Video cards for desktop computers come in one of two height profiles, full-height and low-profile, the latter allowing a graphics card to fit even in small form-factor PCs.[15][16] Card profiles are based on height only; low-profile cards take up less than the full height of a slot, and some are as short as "half-height".[citation needed] Length and thickness vary greatly, with high-end cards usually occupying two or three expansion slots and dual-GPU cards (such as the Nvidia GeForce GTX 690) often exceeding 250 mm (10 in) in length.[17] Most users prefer a lower-profile card when the intention is to fit multiple cards, or when clearance issues arise with other motherboard components such as the DIMM or PCIe slots. This can be addressed with a larger case, such as a mid-tower or full tower. Full towers can usually fit larger motherboards in sizes such as ATX and microATX; the larger the case, the larger the motherboard and graphics card, and the more components that can occupy case real estate.

Multi-card scaling

Some graphics cards can be linked together to scale graphics processing across multiple cards. This is done either over the PCIe bus on the motherboard or, more commonly, over a data bridge. Generally, the cards must be of the same model to be linked, and most low-power cards cannot be linked in this way.[18] AMD and Nvidia both have proprietary scaling methods: CrossFireX for AMD, and SLI (superseded by NVLink since the Turing generation) for Nvidia. Cards from different chipset manufacturers or architectures cannot be used together for multi-card scaling. If the linked cards have different amounts of memory, the lowest value is used and the higher values are disregarded. Currently, scaling on consumer-grade cards can be done using up to four cards,[19][20][21] which requires a large motherboard with a suitable configuration. Nvidia's GeForce GTX 590 can be configured in this four-card arrangement.[22] As stated above, users should use cards of the same performance for optimal results. Motherboards such as the ASUS Maximus 3 Extreme and Gigabyte GA EX58 Extreme are certified to work with this configuration.[23] A certified, sufficiently large power supply is necessary to run the cards in SLI or CrossFireX, so power demands must be known before a suitable supply is installed; a four-card configuration requires a 1000+ watt supply, such as the AcBel PC8055-000G or Corsair AX1200.[23] With any relatively powerful video card, thermal management cannot be overlooked: video cards require a well-vented chassis and a good thermal solution. Air or water cooling is usually required; while low-power GPUs can use passive cooling, larger configurations use water or immersion cooling to achieve proper performance without thermal throttling.[24]
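Two of the points above, the use of the smallest memory size across linked cards and the sub-linear scaling of multi-GPU setups, can be illustrated with a rough sketch; the memory sizes and the 85% parallel fraction are assumed example values:

```python
# An illustrative sketch (assumed numbers, not measurements) of two points
# made above: linked cards are limited to the smallest memory size, and
# multi-GPU scaling is sub-linear because not all work splits across cards.
card_memory_gb = [8, 8, 6, 8]           # mismatched cards in one system
usable_memory_gb = min(card_memory_gb)  # the lowest value is used for all
print(f"Usable memory per GPU: {usable_memory_gb} GB")

def multi_gpu_speedup(n_cards: int, parallel_fraction: float = 0.85) -> float:
    """Amdahl-style estimate: only part of the frame time scales with cards."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / n_cards)

for n in (1, 2, 3, 4):
    print(f"{n} card(s): ~{multi_gpu_speedup(n):.2f}x speedup")
```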

SLI and CrossFire have become increasingly uncommon, as most games do not fully utilize multiple GPUs and few users can afford them.[25][26][27] Multiple GPUs are still used on supercomputers (such as Summit), on workstations to accelerate video[28][29][30] and 3D rendering,[31][32][33][34][35] for VFX[36][37] and simulations,[38] and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers.

3D graphic APIs

A graphics driver usually supports one or more cards from the same vendor and has to be written specifically for an operating system. Additionally, the operating system or an extra software package may provide certain programming APIs for applications to perform 3D rendering.

3D rendering API availability across operating systems
OS             | Vulkan         | DirectX   | GNMX | Metal | OpenGL | OpenGL ES
Windows 10     | Nvidia/AMD     | Microsoft | No   | No    | Yes    | Yes
macOS          | MoltenVK       | No        | No   | Apple | Apple  | No
Linux          | Yes            | Wine      | No   | No    | Yes    | Yes
Android        | Yes            | No        | No   | No    | Nvidia | Yes
iOS            | MoltenVK       | No        | No   | Apple | No     | Apple
Tizen          | In development | No        | No   | No    | No     | Yes
Sailfish OS    | In development | No        | No   | No    | No     | Yes
Xbox 360       | No             | Yes       | No   | No    | No     | No
Orbis OS (PS4) | No             | No        | Yes  | No    | No     | No
Wii U          | Yes            | No        | No   | No    | Yes    | Yes

Usage-specific GPUs

Some GPUs are designed with specific usage in mind:

  1. Gaming
    • GeForce GTX
    • GeForce RTX
    • Nvidia Titan
    • Radeon HD
    • Radeon RX
  2. Cloud gaming
  3. Workstation
    • Nvidia Quadro
    • AMD FirePro
    • Radeon Pro
  4. Cloud Workstation
    • Nvidia Tesla
    • AMD FireStream
  5. Artificial Intelligence Cloud
    • Nvidia Tesla
    • Radeon Instinct
  6. Automated/Driverless car
    • Nvidia Drive PX

Industry

As of 2016, the primary suppliers of the GPUs (video chips or chipsets) used in video cards are AMD and Nvidia. In the third quarter of 2013, AMD had a 35.5% market share while Nvidia had 64.5%,[39] according to Jon Peddie Research. In economics, this industry structure is termed a duopoly. AMD and Nvidia also build and sell video cards themselves, termed graphics add-in boards (AIBs) in the industry. (See Comparison of Nvidia graphics processing units and Comparison of AMD graphics processing units.) In addition to marketing their own video cards, AMD and Nvidia sell their GPUs to authorized AIB suppliers, which AMD and Nvidia refer to as "partners".[5] The fact that Nvidia and AMD compete directly with their customers/partners complicates relationships in the industry. The fact that AMD and Intel are direct competitors in the CPU industry is also noteworthy, since AMD-based video cards may be used in computers with Intel CPUs. Intel's move to APUs may weaken AMD, which until now has derived a significant portion of its revenue from graphics components.

As of the second quarter of 2013, there were 52 AIB suppliers.[5] These AIB suppliers may market video cards under their own brands, produce video cards for private-label brands, or produce video cards for computer manufacturers. Some AIB suppliers, such as MSI, build both AMD-based and Nvidia-based video cards. Others, such as EVGA, build only Nvidia-based video cards, while XFX now builds only AMD-based video cards. Several AIB suppliers are also motherboard suppliers. The largest AIB suppliers, based on global retail market share for graphics cards, include Taiwan-based Palit Microsystems, Hong Kong-based PC Partner (which markets AMD-based video cards under its Sapphire brand and Nvidia-based video cards under its Zotac brand), Taiwan-based computer maker Asustek Computer (Asus), Taiwan-based Micro-Star International (MSI), Taiwan-based Gigabyte Technology,[40] Brea, California-based EVGA (which also sells computer components such as power supplies) and Ontario, California-based XFX. (The parent corporation of XFX is based in Hong Kong.)

Market

Video card shipments peaked at a total of 114 million in 1999. By contrast, they totaled 14.5 million units in the third quarter of 2013, a 17% fall from Q3 2012 levels,[39] and 44 million total in 2015. The sales of video cards have trended downward due to improvements in integrated graphics technologies; high-end, CPU-integrated graphics can provide performance competitive with low-end video cards. At the same time, video card sales have grown within the high-end segment, as manufacturers have shifted their focus to prioritize the gaming and enthusiast market.[40][41]

Beyond the gaming and multimedia segments, video cards have been increasingly used for general-purpose computing, such as big data processing.[42] The growth of cryptocurrency has also created heavy demand for high-end video cards, especially in large quantities, because of their advantages for mining. In January 2018, mid-to-high-end video cards experienced a major surge in price, with many retailers having stock shortages due to the significant demand in this market.[43][41][44] Video card companies released mining-specific cards designed to run 24 hours a day, seven days a week, and without video output ports.[45]

Parts

A Radeon HD 7970 with the main heatsink removed, showing the major components of the card. The large, tilted silver object is the GPU die, which is surrounded by RAM chips, which are covered in extruded aluminum heatsinks. Power delivery circuitry is mounted next to the RAM, near the right side of the card.

A modern video card consists of a printed circuit board on which the components are mounted. These include:

Graphics Processing Unit

A graphics processing unit (GPU), also occasionally called visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the building of images in a frame buffer intended for output to a display. Because of the large degree of programmable computational complexity for such a task, a modern video card is also a computer unto itself.

Heat sink

A heat sink is mounted on most modern graphics cards. It spreads the heat produced by the graphics processing unit across its surface so it can be dissipated, and a fan is commonly mounted on the heat sink to cool both it and the graphics processing unit. Not all cards have heat sinks; for example, some cards are liquid-cooled and instead have a water block, and cards from the 1980s and early 1990s produced little heat and did not require heat sinks. Most modern graphics cards need a proper thermal solution, either liquid cooling or a heat sink with additional heat pipes, usually made of copper for the best thermal transfer. The case, whether mid-tower, full tower, or another form factor, must also be configured for thermal management, either through ample space with a suitable push-pull fan arrangement or through liquid cooling with a radiator, in place of or alongside a fan setup.

Video BIOS

The video BIOS or firmware contains a minimal program for the initial set up and control of the video card. It may contain information on the memory timing, operating speeds and voltages of the graphics processor, RAM, and other details which can sometimes be changed.

The modern Video BIOS does not support all the functions of the video card, being only sufficient to identify and initialize the card to display one of a few frame buffer or text display modes. It does not support YUV to RGB translation, video scaling, pixel copying, compositing or any of the multitude of other 2D and 3D features of the video card.

Video memory

Type   | Memory clock rate (MHz) | Bandwidth (GB/s)
DDR    | 200–400                 | 1.6–3.2
DDR2   | 400–1066.67             | 3.2–8.533
DDR3   | 800–2133.33             | 6.4–17.066
DDR4   | 1600–4866               | 12.8–25.6
GDDR4  | 3000–4000               | 160–256
GDDR5  | 1000–2000               | 288–336.5
GDDR5X | 1000–1750               | 160–673
GDDR6  | 1365–1770               | 336–672
HBM    | 250–1000                | 512–1024

The memory capacity of most modern video cards ranges from 2 GB to 24 GB,[46] with some cards offering up to 32 GB as of the late 2010s, as applications for graphics processing become more powerful and widespread. Since video memory needs to be accessed by both the GPU and the display circuitry, it often uses special high-speed or multi-port memory, such as VRAM, WRAM or SGRAM. Around 2003, video memory was typically based on DDR technology. During and after that year, manufacturers moved towards DDR2, GDDR3, GDDR4, GDDR5, GDDR5X and GDDR6. The effective memory clock rate in modern cards is generally between 2 GHz and 15 GHz.
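The bandwidth figures in the table above follow from the effective clock rate and the width of the memory bus. The sketch below shows the arithmetic with illustrative, not card-specific, values:

```python
# A sketch of how the bandwidth figures above follow from the effective clock
# rate and bus width; the example values are illustrative, not from a datasheet.
def memory_bandwidth_gbs(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = transfers per second * bytes per transfer."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# GDDR5-style memory at 7 GHz effective on a 256-bit bus:
print(f"{memory_bandwidth_gbs(7000, 256):.0f} GB/s")   # 224 GB/s
# HBM-style memory at 1 GHz effective on a very wide 4096-bit bus:
print(f"{memory_bandwidth_gbs(1000, 4096):.0f} GB/s")  # 512 GB/s
```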

Video memory may be used for storing other data as well as the screen image, such as the Z-buffer (which manages the depth coordinates in 3D graphics), textures, vertex buffers, and compiled shader programs.
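A rough sketch, using assumed example values (a 2560x1440 display, double-buffered 32-bit color, a 32-bit depth buffer), of how much video memory the screen image and Z-buffer alone occupy:

```python
# A sketch of how much video memory the screen image and Z-buffer alone
# occupy, using assumed example values.
width, height = 2560, 1440
color_bytes = 4      # 32-bit RGBA color per pixel
depth_bytes = 4      # 32-bit Z-buffer value per pixel
color_buffers = 2    # double-buffered color for smooth display updates

framebuffer_bytes = width * height * color_bytes * color_buffers
zbuffer_bytes = width * height * depth_bytes
total_mb = (framebuffer_bytes + zbuffer_bytes) / (1024 ** 2)
print(f"~{total_mb:.0f} MB before any textures, vertex buffers or shaders")
```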

RAMDAC

The RAMDAC, or random-access-memory digital-to-analog converter, converts digital signals to analog signals for use by a computer display that uses analog inputs, such as cathode ray tube (CRT) displays. The RAMDAC is a kind of RAM chip that regulates the functioning of the graphics card. Depending on the number of bits used and the RAMDAC data-transfer rate, the converter will be able to support different computer-display refresh rates. With CRT displays, it is best to work over 75 Hz and never under 60 Hz, to minimize flicker.[47] (With LCD displays, flicker is not a problem.[citation needed]) Due to the growing popularity of digital computer displays and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component. All current LCD/plasma monitors, TVs and projectors with only digital connections work in the digital domain and do not require a RAMDAC for those connections. There are displays that feature only analog inputs (VGA, component, SCART, etc.); these require a RAMDAC, but they reconvert the analog signal back to digital before they can display it, with an unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion.[citation needed] With the VGA standard being phased out in favor of digital interfaces, RAMDACs have largely disappeared from video cards.[citation needed]
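The refresh rates a RAMDAC can support follow from the pixel clock it must sustain. The sketch below estimates that clock for an example CRT mode; the 1.3 blanking-overhead factor is a rough assumption rather than a specification value:

```python
# A sketch of the pixel-clock arithmetic a RAMDAC must sustain. The 1.3
# blanking-overhead factor is a rough CRT-era approximation, not a spec value.
def approx_pixel_clock_mhz(width: int, height: int, refresh_hz: float,
                           blanking_factor: float = 1.3) -> float:
    """Pixels per second the DAC must convert, including blanking intervals."""
    return width * height * refresh_hz * blanking_factor / 1e6

# A 1600x1200 CRT at the 75 Hz minimum recommended above:
print(f"~{approx_pixel_clock_mhz(1600, 1200, 75):.0f} MHz pixel clock")
```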

Output interfaces

Video In Video Out (VIVO) for S-Video (TV-out), Digital Visual Interface (DVI) for High-definition television (HDTV), and DE-15 for Video Graphics Array (VGA)

The most common connection systems between the video card and the computer display are:

Video Graphics Array (VGA) (DE-15)


Also known as D-sub, VGA is an analog standard adopted in the late 1980s and designed for CRT displays; its connector is also called a VGA connector. Some problems of this standard are electrical noise, image distortion, and sampling error in evaluating pixels.

Today, the VGA analog interface is used for high-definition video, including 1080p and higher. While the VGA transmission bandwidth is high enough to support even higher-resolution playback, picture quality can degrade depending on cable quality and length. How discernible this quality difference is depends on the individual's eyesight and the display; compared with a DVI or HDMI connection, any degradation is most noticeable on larger LCD/LED monitors or TVs. Blu-ray playback at 1080p is possible via the VGA analog interface if the Image Constraint Token (ICT) is not enabled on the Blu-ray disc.

Digital Visual Interface (DVI)

A digital-based standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors; in some rare cases, high-end CRT monitors also use DVI. It avoids image distortion and electrical noise, mapping each pixel from the computer to a display pixel at the display's native resolution. Most manufacturers include a DVI-I connector, which allows (via a simple adapter) standard RGB signal output to an older CRT or LCD monitor with VGA input.

Video In Video Out (VIVO) for S-Video, Composite video and Component video

Included to allow connection with televisions, DVD players, video recorders and video game consoles. They often come in two 10-pin mini-DIN connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-Video in and out + composite video in and out), or 6 connectors (S-Video in and out + component PB out + component PR out + component Y out [also composite out] + composite in).

High-Definition Multimedia Interface (HDMI)


HDMI is a compact audio/video interface for transferring uncompressed video data and compressed/uncompressed digital audio data from an HDMI-compliant device ("the source device") to a compatible digital audio device, computer monitor, video projector, or digital television.[48] HDMI is a digital replacement for existing analog video standards. HDMI supports copy protection through HDCP.

DisplayPort

DisplayPort is a digital display interface developed by the Video Electronics Standards Association (VESA). The interface is primarily used to connect a video source to a display device such as a computer monitor, though it can also be used to transmit audio, USB, and other forms of data.[49] The VESA specification is royalty-free, and VESA designed it to replace VGA, DVI, and LVDS. Backward compatibility with VGA and DVI through adapter dongles enables consumers to use DisplayPort-equipped video sources without replacing existing display devices. Although DisplayPort offers much of the same functionality as HDMI with greater throughput, it is expected to complement the interface, not replace it.[50][51]

USB-C

A USB-C port can carry a DisplayPort video signal using DisplayPort Alternate Mode, and some video cards include a USB-C output for this purpose.

Other types of connection systems

  • Composite video: Analog systems with resolution lower than 480i use the RCA connector. The single-pin connector carries all resolution, brightness and color information, making it the lowest-quality dedicated video connection.[52]
  • Component video: Uses three cables, each with an RCA connector (YCBCR for digital component, or YPBPR for analog component); it is used in older projectors, video-game consoles and DVD players.[53] It can carry SDTV 480i and EDTV 480p resolutions, and the HDTV resolutions 720p and 1080i, but not 1080p, due to industry concerns about copy protection. Contrary to popular belief, it looks equal to HDMI for the resolutions it carries,[54] but for best performance from Blu-ray, other 1080p sources such as PPV, and 4K Ultra HD, a digital display connector is required.
  • DB13W3: An analog standard once used by Sun Microsystems, SGI and IBM.
  • DMS-59: A connector that provides two DVI or VGA outputs on a single connector.

Motherboard interfaces

Chronologically, the main connection systems between the video card and motherboard have been:

  • S-100 bus: Designed in 1974 as a part of the Altair 8800, it is the first industry-standard bus for the microcomputer industry.
  • ISA: Introduced in 1981 by IBM, it became dominant in the marketplace in the 1980s. It is an 8- or 16-bit bus clocked at 8 MHz.
  • NuBus: Used in Macintosh II, it is a 32-bit bus with an average bandwidth of 10 to 20 MB/s.
  • MCA: Introduced in 1987 by IBM, it is a 32-bit bus clocked at 10 MHz.
  • EISA: Released in 1988 to compete with IBM's MCA, it was compatible with the earlier ISA bus. It is a 32-bit bus clocked at 8.33 MHz.
  • VLB: An extension of ISA, it is a 32-bit bus clocked at 33 MHz. Also referred to as VESA.
  • PCI: Replaced the EISA, ISA, MCA and VESA buses from 1993 onwards. PCI allowed dynamic connectivity between devices, avoiding the manual adjustments required with jumpers. It is a 32-bit bus clocked at 33 MHz.
  • UPA: An interconnect bus architecture introduced by Sun Microsystems in 1995. It is a 64-bit bus clocked at 67 or 83 MHz.
  • USB: Although mostly used for miscellaneous devices, such as secondary storage devices and toys, USB displays and display adapters exist.
  • AGP: First used in 1997, it is a dedicated-to-graphics bus. It is a 32-bit bus clocked at 66 MHz.
  • PCI-X: An extension of the PCI bus, it was introduced in 1998. It improves upon PCI by extending the width of bus to 64 bits and the clock frequency to up to 133 MHz.
  • PCI Express: Abbreviated as PCIe, it is a point-to-point interface released in 2004. In 2006, it provided double the data-transfer rate of AGP. It should not be confused with PCI-X, an enhanced version of the original PCI specification.

The following table is a comparison between a selection of the features of some of those interfaces.

ATI Graphics Solution Rev 3 from 1985/1986, supporting Hercules graphics. As can be seen from the PCB, the layout was done in 1985, whereas the marking on the central chip, CW16800-A, reads "8639", meaning the chip was manufactured in week 39 of 1986. This card uses the 8-bit ISA (XT) interface.
Bus             | Width (bits) | Clock rate (MHz) | Bandwidth (MB/s) | Style
ISA XT          | 8            | 4.77             | 8                | Parallel
ISA AT          | 16           | 8.33             | 16               | Parallel
MCA             | 32           | 10               | 20               | Parallel
NuBus           | 32           | 10               | 10–40            | Parallel
EISA            | 32           | 8.33             | 32               | Parallel
VESA            | 32           | 40               | 160              | Parallel
PCI             | 32–64        | 33–100           | 132–800          | Parallel
AGP 1x          | 32           | 66               | 264              | Parallel
AGP 2x          | 32           | 66               | 528              | Parallel
AGP 4x          | 32           | 66               | 1000             | Parallel
AGP 8x          | 32           | 66               | 2000             | Parallel
PCIe ×1         | 1            | 2500 / 5000      | 250 / 500        | Serial
PCIe ×4         | 1 × 4        | 2500 / 5000      | 1000 / 2000      | Serial
PCIe ×8         | 1 × 8        | 2500 / 5000      | 2000 / 4000      | Serial
PCIe ×16        | 1 × 16       | 2500 / 5000      | 4000 / 8000      | Serial
PCIe ×1 2.0[55] | 1            |                  | 500 / 1000       | Serial
PCIe ×4 2.0     | 1 × 4        |                  | 2000 / 4000      | Serial
PCIe ×8 2.0     | 1 × 8        |                  | 4000 / 8000      | Serial
PCIe ×16 2.0    | 1 × 16       | 5000 / 10000     | 8000 / 16000     | Serial
PCIe ×1 3.0     | 1            |                  | 1000 / 2000      | Serial
PCIe ×4 3.0     | 1 × 4        |                  | 4000 / 8000      | Serial
PCIe ×8 3.0     | 1 × 8        |                  | 8000 / 16000     | Serial
PCIe ×16 3.0    | 1 × 16       |                  | 16000 / 32000    | Serial
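The per-lane PCI Express figures in the table above can be reproduced from each generation's signalling rate and line-code overhead, as the sketch below shows (the table rounds these figures, e.g. ~985 MB/s is listed as 1000):

```python
# Reproducing the per-lane PCI Express bandwidths in the table above from the
# signalling rate and line-code overhead of each generation (values rounded).
def pcie_lane_bandwidth_mbs(gt_per_s: float, encoded_bits: int,
                            payload_bits: int) -> float:
    """Usable one-direction bandwidth of a single lane in MB/s."""
    return gt_per_s * 1e9 * (payload_bits / encoded_bits) / 8 / 1e6

print(f"PCIe 1.x: {pcie_lane_bandwidth_mbs(2.5, 10, 8):.0f} MB/s per lane")    # 250
print(f"PCIe 2.0: {pcie_lane_bandwidth_mbs(5.0, 10, 8):.0f} MB/s per lane")    # 500
print(f"PCIe 3.0: {pcie_lane_bandwidth_mbs(8.0, 130, 128):.0f} MB/s per lane") # ~985
# An x16 link multiplies the per-lane figure by 16.
```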

See also

  • List of computer hardware
  • ATI – defunct GPU company (merged with AMD)
  • AMD, Nvidia – duopoly of 3D chip GPU and graphics card designers
  • Computer display standards – a detailed list of standards like SVGA, WXGA, WUXGA, etc.
  • Diamond Multimedia – an early video card manufacturer
  • GeForce, Radeon – examples of popular video card series
  • GPGPU (i.e.: CUDA, AMD FireStream)
  • Free and open-source device drivers: graphics – about the available FOSS device drivers for graphic chips
  • Framebuffer – the computer memory used to store a screen image
  • Hercules Graphics Card – a monochrome graphics adapter
  • List of video card manufacturers
  • Video In Video Out (VIVO)
  • Capture card – the inverse of a video card

References

  1. ^ "ExplainingComputers.com: Hardware". www.explainingcomputers.com. Retrieved 2017-12-11.
  2. ^ a b "OpenGL vs DirectX - Cprogramming.com". www.cprogramming.com. Retrieved 2017-12-11.
  3. ^ "Powering Change with NVIDIA AI and Data Science". NVIDIA.
  4. ^ "Graphic Card Components". pctechguide.com. 2011-09-23. Retrieved 2017-12-11.
  5. ^ a b c d "Add-in board-market down in Q2, AMD gains market share [Press Release]". Jon Peddie Research. 16 August 2013. Retrieved 30 November 2013.
  6. ^ "Intel HD Graphics: The 2018 Guide to Integrated Graphics". Tech Centurion. 2018-05-21. Retrieved 2018-05-21.
  7. ^ "Intel HD Graphics Guide". Laptop Magazine. Retrieved 2018-01-22.
  8. ^ Shimpi, Anand Lal. "The Xbox One: Hardware Analysis & Comparison to PlayStation 4". Anandtech. Retrieved 2018-01-22.
  9. ^ Crijns, Koen (6 September 2013). "Intel Iris Pro 5200 graphics review: the end of mid-range GPUs?". hardware.info. Retrieved 30 November 2013.
  10. ^ "Introducing The GeForce GTX 780 Ti". Retrieved 30 November 2013.
  11. ^ "Test Results: Power Consumption For Mining & Gaming - The Best GPUs For Ethereum Mining, Tested and Compared". Tom's Hardware. 2018-03-30. Retrieved 2018-11-30.
  12. ^ "Faster, Quieter, Lower: Power Consumption and Noise Level of Contemporary Graphics Cards". xbitlabs.com. Archived from the original on 2011-09-04.
  13. ^ "Video Card Power Consumption". codinghorror.com.
  14. ^ Maxim Integrated Products. "Power-Supply Management Solution for PCI Express x16 Graphics 150W-ATX Add-In Cards".
  15. ^ "What is a Low Profile Video Card?". Outletapex.
  16. ^ "Best 'low profile' graphics card". Tom's Hardware.
  17. ^ "GTX 690 | Specifications". GeForce. Retrieved 2013-02-28.
  18. ^ "SLI". geforce.com.
  19. ^ "SLI vs. CrossFireX: The DX11 generation". techreport.com.
  20. ^ Adrian Kingsley-Hughes. "NVIDIA GeForce GTX 680 in quad-SLI configuration benchmarked". ZDNet.
  21. ^ "Head to Head: Quad SLI vs. Quad CrossFireX". Maximum PC.
  22. ^ "How to Build a Quad SLI Gaming Rig | GeForce". www.geforce.com. Retrieved 2017-12-11.
  23. ^ a b "How to Build a Quad SLI Gaming Rig | GeForce". www.geforce.com. Retrieved 2017-12-11.
  24. ^ "NVIDIA Quad-SLI|NVIDIA". www.nvidia.com. Retrieved 2017-12-11.
  25. ^ Abazovic, Fuad. "Crossfire and SLI market is just 300.000 units". www.fudzilla.com.
  26. ^ "Is Multi-GPU Dead?". Tech Altar. January 7, 2018.
  27. ^ "Nvidia SLI and AMD CrossFire is dead – but should we mourn multi-GPU gaming? | TechRadar". www.techradar.com.
  28. ^ "Hardware Selection and Configuration Guide" (PDF). documents.blackmagicdesign.com. Retrieved 2020-11-10.
  29. ^ "Recommended System: Recommended Systems for DaVinci Resolve". Puget Systems.
  30. ^ "GPU Accelerated Rendering and Hardware Encoding". helpx.adobe.com.
  31. ^ "V-Ray Next Multi-GPU Performance Scaling". Puget Systems.
  32. ^ "FAQ | GPU-accelerated 3D rendering software | Redshift". www.redshift3d.com.
  33. ^ "OctaneRender 2020 Preview is here!".
  34. ^ Williams, Rob. "Exploring Performance With Autodesk's Arnold Renderer GPU Beta – Techgage". techgage.com.
  35. ^ "GPU Rendering — Blender Manual". docs.blender.org.
  36. ^ "V-Ray for Nuke – Ray Traced Rendering for Compositors | Chaos Group". www.chaosgroup.com.
  37. ^ "System Requirements | Nuke | Foundry". www.foundry.com.
  38. ^ "What about multi-GPU support?".
  39. ^ a b "Graphics Card Market Up Sequentially in Q3, NVIDIA Gains as AMD Slips". Retrieved 30 November 2013.
  40. ^ a b Chen, Monica (16 April 2013). "Palit, PC Partner surpass Asustek in graphics card market share". DIGITIMES. Retrieved 1 December 2013.
  41. ^ a b Shilov, Anton. "Discrete Desktop GPU Market Trends Q2 2016: AMD Grabs Market Share, But NVIDIA Remains on Top". Anandtech. Retrieved 2018-01-22.
  42. ^ Chanthadavong, Aimee. "Nvidia touts GPU processing as the future of big data". ZDNet. Retrieved 2018-01-22.
  43. ^ "Here's why you can't buy a high-end graphics card at Best Buy". Ars Technica. Retrieved 2018-01-22.
  44. ^ "GPU Prices Skyrocket, Breaking the Entire DIY PC Market". ExtremeTech. 2018-01-19. Retrieved 2018-01-22.
  45. ^ Parrish, Kevin (2017-07-10). "Graphics cards dedicated to cryptocurrency mining are here, and we have the list". Digital Trends. Retrieved 2020-01-16.
  46. ^ "NVIDIA TITAN RTX is Here". NVIDIA.
  47. ^ "Refresh rate recommended". Archived from the original on 2007-01-02. Retrieved 2007-02-17.
  48. ^ "HDMI FAQ". HDMI.org. Retrieved 2007-07-09.
  49. ^ "DisplayPort Technical Overview" (PDF). VESA.org. January 10, 2011. Retrieved 23 January 2012.
  50. ^ "FAQ Archive – DisplayPort". VESA. Retrieved 2012-08-22.
  51. ^ "The Truth About DisplayPort vs. HDMI". dell.com.
  52. ^ "Video Signals and Connectors". Apple. Retrieved 29 January 2016.
  53. ^ "How to Connect Component Video to a VGA Projector". AZCentral. Retrieved 29 January 2016.
  54. ^ "Quality Difference Between Component vs. HDMI". Extreme Tech. Retrieved 29 January 2016.
  55. ^ PCIe 2.1 has the same clock and bandwidth as PCIe 2.0

Sources

  • Mueller, Scott (2005) Upgrading and Repairing PCs. 16th edition. Que Publishing. ISBN 0-7897-3173-8
