Digital video is an electronic representation of moving visual images (video) in the form of encoded digital data. This is in contrast to analog video, which represents moving visual images in the form of analog signals. Digital video comprises a series of digital images displayed in rapid succession. 

Digital video can be copied and reproduced with no degradation in quality. In contrast, when analog sources are copied, they experience generation loss. Digital video can be stored on digital media such as Blu-ray Disc, on computer data storage, or streamed over the Internet to end users who watch content on a desktop computer screen or a digital smart TV. Today, digital video content such as TV shows and movies also includes a digital audio soundtrack. Common compressed digital video formats include H.264 and MPEG-4.

Digital Video Coding Formats:

The first digital video coding standard was H.120, developed in 1984. H.120 was not practical, however, owing to its weak performance: it was based on differential pulse-code modulation (DPCM), a compression algorithm that was inefficient for video coding.

MPEG-1, developed by the Moving Picture Experts Group (MPEG), followed in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which became the standard video format for DVD and SD digital television. That was followed by MPEG-4 Part 2 (based on H.263) in 1999, and then in 2003 by H.264/MPEG-4 AVC, which has become the most widely used video coding standard.

RGB and YUV Representation of Video Signals:

Representing the colors in an image or video requires several values for each pixel. There are several color models, and video codecs make use of one or more of them to represent pixels during the encoding process as well as after decoding the video frames.

RGB:

Most computer graphics models use the RGB color system, wherein some number of bits is used to represent each of the red, green, and blue components of an individual pixel's color, and an image is composed of a two-dimensional array of these pixels.
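As a minimal sketch of this layout (illustrative only, assuming the common 24-bit case of 8 bits per channel), an image can be held as a height x width x 3 array of samples:

    import numpy as np

    # A 2x2 image as a (height, width, 3) array of 8-bit R, G, B samples,
    # i.e. 24 bits per pixel; each value runs from 0 (none) to 255 (full).
    image = np.array([
        [[255, 0, 0], [0, 255, 0]],      # red pixel,  green pixel
        [[0, 0, 255], [255, 255, 255]],  # blue pixel, white pixel
    ], dtype=np.uint8)

    print(image.shape)  # (2, 2, 3): 2 rows, 2 columns, 3 color channels
    print(image[1, 1])  # [255 255 255] -> the white pixel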

YUV:

YUV is the color encoding system used for analog television worldwide (NTSC, PAL, and SECAM).

The Y in YUV stands for "luma," which is brightness, or lightness, and black-and-white TVs decode only the Y part of the signal. U and V provide color information and are "color difference" signals of blue minus luma (B-Y) and red minus luma (R-Y). Through a process called "color space conversion," the video camera converts the RGB data captured by its sensors into either composite analog signals (YUV) or component versions (analog YPbPr or digital YCbCr). For rendering on screen, all of these color spaces must be converted back to RGB by the TV or display system.

YUV works by defining a color space with three components:

Luma (Y')

This is the brightness of the pixel. Without the other two components, the luma of each pixel in the frame produces a greyscale representation of the image.

Blue-difference (U or Cb)

The blue-difference component of the chroma, or color, of the sample. It is computed by subtracting the luma from the gamma-corrected blue value; that is, U = B' - Y'.

Red-difference (V or Cr)

The red-difference component of the chroma of the sample. It is computed by subtracting the luma from the gamma-corrected red value: V = R' - Y'.
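The formulas above give the unscaled color differences; in practice each standard scales them to fit its signal range. The sketch below illustrates the conversion using the BT.601 luma weights and approximate BT.601 chroma scale factors (the exact constants vary by standard, and real pipelines also add offsets to produce unsigned code values):

    def rgb_to_ycbcr(r, g, b):
        """Convert gamma-corrected R'G'B' values in [0.0, 1.0] to Y'CbCr.

        Uses the BT.601 luma weights; Cb and Cr are the scaled
        color-difference signals (B' - Y') and (R' - Y'), kept here in
        the signed range [-0.5, +0.5].
        """
        y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: weighted brightness
        cb = 0.564 * (b - y)                   # blue-difference chroma
        cr = 0.713 * (r - y)                   # red-difference chroma
        return y, cb, cr

    # A grey pixel has zero chroma: luma alone carries all the
    # information, which is why Y' by itself yields a greyscale image.
    print(rgb_to_ycbcr(0.5, 0.5, 0.5))  # (0.5, 0.0, 0.0)
    print(rgb_to_ycbcr(1.0, 0.0, 0.0))  # pure red -> Cr near +0.5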

Composite Video and S-video:

The original TV standard combined the luma (Y) and both color signals (B-Y, R-Y) into one channel carried over a single cable, known as "composite video." An alternative known as "S-video" keeps the luma separate from the color signals; it still uses one cable, but with separate wires internally. S-video is somewhat sharper than composite video.

Component Video:

When the luma and each of the color signals (B-Y and R-Y) are maintained in separate channels, it is called "component video," designated YPbPr in the analog domain and YCbCr in the digital domain. Component video is the sharpest of the three.



Video Connections:

Analog video can be transmitted as composite video, S-video, or component video. High-end consumer and professional equipment uses component analog video (YPbPr), carried over three separate cables.



Compression:

In signal processing, data compression, source encoding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression method is either lossy or lossless. No information is lost in lossless compression; lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.
Video compression is a process that reduces and removes redundant video information so that a digital video file or stream can be sent across a network and stored more efficiently. An encoding algorithm is applied to the source video to create a compressed stream that is ready for transmission, recording, or storage; to decode (play) the compressed stream, an inverse algorithm is applied. The time it takes to compress, send, decompress, and ultimately display a stream is known as latency. A video codec (encoder/decoder) employs a pair of algorithms that work together, and the encoding and decoding processes must be matched: video content compressed using one standard cannot be decompressed with a different standard.
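As a toy illustration of such a matched pair (a simple lossless run-length scheme, not a real video codec), the encoder and decoder below only work together, because the decoder must know the convention the encoder used:

    def rle_encode(data: bytes) -> list:
        """Lossless run-length encoder: store each byte with its run length."""
        runs = []
        for b in data:
            if runs and runs[-1][0] == b:
                runs[-1] = (b, runs[-1][1] + 1)  # extend the current run
            else:
                runs.append((b, 1))              # start a new run
        return runs

    def rle_decode(runs) -> bytes:
        """Matched decoder: expands the runs back into the original bytes."""
        return b"".join(bytes([b]) * n for b, n in runs)

    raw = b"\x00" * 100 + b"\xff" * 50   # 150 bytes of flat "video" data
    encoded = rle_encode(raw)            # [(0, 100), (255, 50)]
    assert rle_decode(encoded) == raw    # lossless: nothing is lost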

Compression Formats for Video:

The two key video compression techniques used in video coding standards are the discrete cosine transform (DCT) and motion compensation (MC). Most video coding standards, such as the H.26x and MPEG-x formats, typically use motion-compensated DCT video coding.
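As a rough sketch of the transform step only (motion compensation is omitted), the following applies a 2-D DCT to one 8x8 block of luma samples using SciPy; the simple thresholding stands in for the quantization matrices real codecs use, and is where the loss occurs:

    import numpy as np
    from scipy.fft import dctn, idctn

    # One 8x8 block of luma samples: a smooth horizontal gradient.
    block = np.tile(np.linspace(0, 255, 8), (8, 1))

    # Forward 2-D DCT: packs the block's energy into a few
    # low-frequency coefficients in the top-left corner.
    coeffs = dctn(block, norm="ortho")

    # Crude "quantization": zero the small coefficients. This is where
    # information is discarded; real codecs divide by a quantization
    # matrix instead of thresholding.
    quantized = np.where(np.abs(coeffs) < 10.0, 0.0, coeffs)
    print(np.count_nonzero(quantized), "of 64 coefficients kept")

    # The inverse DCT reconstructs a close approximation of the block.
    restored = idctn(quantized, norm="ortho")
    print(np.max(np.abs(restored - block)))  # small reconstruction error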
MPEG formats are used in various multimedia systems. The best-known older MPEG formats are MPEG-1, MPEG-2, and MPEG-4 AVC.
The primary early MPEG compression formats and related standards include:
MPEG-1 (1993): Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s. It is a lossy format and was the first MPEG compression standard for audio and video. It is commonly limited to about 1.5 Mbit/s, although the specification is capable of much higher bit rates. It was designed to allow moving pictures and sound to be encoded into the bit rate of a Compact Disc. It is used on Video CD and can be used for low-quality video on DVD Video. It includes the popular MPEG-1 Audio Layer III (MP3) audio compression format.
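A back-of-the-envelope calculation shows why this mattered. Assuming MPEG-1's typical NTSC SIF source format (352 x 240 pixels at about 30 frames/s, 4:2:0 sampling, i.e. 12 bits per pixel) and the roughly 1.15 Mbit/s video rate used on Video CD:

    # Raw bitrate of MPEG-1's typical SIF source format.
    width, height, fps, bits_per_pixel = 352, 240, 30, 12
    raw_bps = width * height * fps * bits_per_pixel
    print(raw_bps / 1e6)         # ~30.4 Mbit/s uncompressed

    target_bps = 1.15e6          # approximate MPEG-1 video rate on Video CD
    print(raw_bps / target_bps)  # ~26:1 compression is required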

MPEG-2 (1995):

Generic coding of moving pictures and associated audio information. It supports interlacing and high definition. MPEG-2 is considered important because it was chosen as the compression scheme for over-the-air digital television (ATSC, DVB, and ISDB), digital satellite TV services such as Dish Network, digital cable television signals, SVCD, and DVD Video.

MPEG-4 (1998):

Coding of audio-visual objects. MPEG-4 provides a framework for more advanced compression algorithms, potentially resulting in higher compression ratios than MPEG-2. Uses of MPEG-4 include compression of AV data for Internet video and CD distribution, voice (telephone, videophone), and broadcast television applications.

H.26x Formats:

Video compression standards in the H.26x series include H.261, H.262, H.263, and H.264.
H.261 is a codec designed by the International Telecommunication Union (ITU) for video conferencing over the public switched telephone network (PSTN).
It does not work well over the TCP/IP-based Internet, as it is optimized for low data rates and relatively low-motion video. This led to the development of H.262.
H.262, or MPEG-2 Part 2 (also known as MPEG-2 Video), is a digital video compression and encoding standard developed and maintained jointly by the ITU-T Video Coding Experts Group (VCEG) and the Moving Picture Experts Group (MPEG). It is used for the transmission of compressed television programs via broadcast, cablecast, and satellite, and was subsequently adopted for DVD production and some online delivery systems.
H.263 was developed after H.261 with a focus on enabling better quality at even lower bit rates. One of its major original targets was video over ordinary telephone modems, which ran at 28.8 kbit/s at the time.
H.264/AVC was finalized in March 2003. It is an ITU standard for compressing video, based on MPEG-4, that is popular especially for high-definition video such as Blu-ray. Taking advantage of modern high-speed chips, H.264 delivers MPEG-4 quality at frame sizes up to four times greater. H.264 is also known as "MPEG-4 Part 10."
H.265/HEVC (High Efficiency Video Coding) is targeted at next-generation HDTV displays. H.265 is more advanced than H.264 in several ways; the main difference is that HEVC allows for further reduced file sizes, and therefore reduced required bandwidth, for live video streams.
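As an illustration of what that saving means in practice (the bitrates below are assumed, illustrative values; the commonly cited rule of thumb is that HEVC needs roughly half the bitrate of H.264 for comparable quality):

    def stream_size_gb(bitrate_mbps: float, hours: float) -> float:
        """Storage, in gigabytes, consumed by a constant-bitrate stream."""
        bits = bitrate_mbps * 1e6 * hours * 3600
        return bits / 8 / 1e9

    h264_rate = 8.0                      # assumed 1080p H.264 rate, Mbit/s
    h265_rate = h264_rate / 2            # HEVC: roughly half, rule of thumb
    print(stream_size_gb(h264_rate, 1))  # ~3.6 GB per hour of video
    print(stream_size_gb(h265_rate, 1))  # ~1.8 GB per hour of video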