3. Graphics Overview

In this chapter we present Linux’s display model.

3.1. Generalities

From a programmer’s point of view, screens basically display rectangular grids of sample points (or “pixels” standing for “picture elements”) where each point can have a different color. The display resolution is the number of pixels in each dimension: for example, a display resolution of 1920x1080 means that there are 1920 pixels in the horizontal dimension and 1080 in the vertical dimension.
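The pixel count implied by a display resolution directly gives the size of the memory buffer needed to hold one frame. The following sketch (illustrative helper names, assuming a packed pixel format without row padding) shows the arithmetic:

```haskell
-- Number of sample points for a given display resolution.
pixelCount :: Int -> Int -> Int
pixelCount width height = width * height

-- Raw size in bytes of one frame, e.g. with 4 bytes per pixel for a
-- 32-bit format. Real framebuffers may use a larger row pitch for
-- alignment, so this is a lower bound.
frameSizeBytes :: Int -> Int -> Int -> Int
frameSizeBytes width height bytesPerPixel =
   pixelCount width height * bytesPerPixel
```

A 1920x1080 frame with 4 bytes per pixel thus takes 8294400 bytes, about 8 MiB.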

On some hardware, we may also use sub-pixel rendering: pixel colors are generated by blending several sub-pixel colors. If we know the physical layout of these sub-pixels, we may want to precisely select the sub-pixel colors to enhance the overall output.

Hardware configurations vary widely. It is common for a computer to have several connected display devices: multiple screens, a video projector, a TV, etc. Each display supports its own modes of operation (display resolution, refresh rate, etc.). Moreover, we may want each device to display something different, or some of them to display the same thing (“clone” mode): in the latter case, what happens if the display resolutions or the other mode settings differ?

It is also possible to have several graphic cards in the same computer:

  • each card can provide its own set of output display ports;
  • several cards may be physically connected together to accelerate the display on one of their display ports;
  • several cards may be connected to the same output connectors, with the system switching between them. Usually there are two cards: a powerful but power-hungry one, and a basic one used to save energy.

In this chapter we present a simple approach to display rendering: the picture to display is generated by the CPU in host memory and then transferred to the GPU memory (implicitly, by using memory mapping). Most recent graphic cards offer more efficient approaches in which the picture to display is generated and transformed directly by the graphic card: instead of sending a picture, the host sends commands or programs to be executed by the GPU.

Currently Linux doesn’t provide a unified interface to the advanced capabilities of graphic cards from different vendors (these are usually handled by Mesa in user-space and accessed through the OpenGL interface). haskus-system doesn’t provide support for them yet.

In this chapter, we describe the Kernel Mode Setting (KMS) and the Direct Rendering Manager (DRM) interfaces. In usual Linux distributions, some graphic card manufacturers provide closed-source proprietary drivers that do not support these interfaces: they use a kernel module and user-space libraries that communicate with each other using a private protocol. The user-space libraries provide implementations of standard high-level interfaces such as OpenGL and can be used by display servers such as X.org. haskus-system doesn’t offer a way to use these drivers.

3.2. Display Model

Linux’s display model is composed of several entities that interact with each other. These entities are represented on the following graph:


Numbers indicate relationship arities: for instance, a Controller can be connected to at most one Framebuffer, but a Framebuffer can be used by any number of Controllers. The arrows indicate entities that store references to other entities (at the tip of the arrow).

In order to display something, we have to configure a pipeline that goes from some surfaces (pixel data stored in memory buffers) to some connectors (entities representing the physical ports to which display devices are connected).
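The arities above can be made concrete with a few simplified Haskell types. These are illustrative stand-ins, not the actual haskus-system types, using Maybe for "at most one" references:

```haskell
-- Simplified model of the display pipeline entities and the
-- references they store (illustrative types only).
newtype ConnectorID   = ConnectorID Int   deriving (Eq, Show)
newtype EncoderID     = EncoderID Int     deriving (Eq, Show)
newtype ControllerID  = ControllerID Int  deriving (Eq, Show)
newtype FrameBufferID = FrameBufferID Int deriving (Eq, Show)

-- A connector references at most one active encoder, chosen among
-- the encoders it can possibly use.
data Connector = Connector
   { connectorID        :: ConnectorID
   , possibleEncoderIDs :: [EncoderID]
   , activeEncoderID    :: Maybe EncoderID
   } deriving (Show)

-- An encoder references at most one controller as its source.
data Encoder = Encoder
   { encoderID           :: EncoderID
   , possibleControllers :: [ControllerID]
   , activeControllerID  :: Maybe ControllerID
   } deriving (Show)

-- A controller references at most one framebuffer.
data Controller = Controller
   { controllerID        :: ControllerID
   , activeFrameBufferID :: Maybe FrameBufferID
   } deriving (Show)
```

Note how a Controller stores at most one FrameBuffer reference, while nothing prevents several Controllers from referencing the same FrameBuffer, matching the arities of the graph.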

3.2.1. Card

The following diagram shows a contrived example of a graphic card containing some of these entities.


Entities belong to a single graphic card: they can’t be shared between several graphic cards (if your system has more than one).

3.2.2. Connectors

Each physical port where you can plug a display device (a monitor, a video-projector, etc.) corresponds to a Connector entity in the display model.

The following code shows how to retrieve the graphic card objects and how to display information about each connector:

import Haskus.System
import Haskus.Arch.Linux.Graphics.State

main :: IO ()
main = runSys <| do

   sys   <- defaultSystemInit
   term  <- defaultTerminal

   -- get graphic card devices
   cards <- loadGraphicCards (systemDeviceManager sys)

   forM_ cards <| \card -> do
      state <- readGraphicsState (graphicCardHandle card)
               >..~!!> assertShow "Cannot read graphics state"

      -- get connector state and info
      let conns = graphicsConnectors state

      -- show connector state and info
      writeStrLn term (show conns)

   void powerOff

When executed in QEMU, this code produces the following output:

-- Formatting has been enhanced for readability
[ Connector
   { connectorID = ConnectorID 21
   , connectorType = Virtual
   , connectorByTypeIndex = 1
   , connectorState = Connected (ConnectedDevice
      { connectedDeviceModes =
         [ Mode
            { ...
            , modeClock = 65000
            , modeHorizontalDisplay = 1024
            , modeVerticalDisplay = 768
            , modeVerticalRefresh = 60
            , modeFlags = fromList [ModeFlagNHSync,ModeFlagNVSync]
            , modeStereo3D = Stereo3DNone
            , modeType = fromList [ModeTypePreferred,ModeTypeDriver]
            , modeName = "1024x768" }
         , ...
      , connectedDeviceWidth = 0
      , connectedDeviceHeight = 0
      , connectedDeviceSubPixel = SubPixelUnknown
      , connectedDeviceProperties =
         [ Property
            { propertyMeta = PropertyMeta
               { ...
               , propertyName = "DPMS"
               , propertyType = PropEnum
                  [ (0,"On")
                  , (1,"Standby")
                  , (2,"Suspend")
                  , (3,"Off")]
            , propertyValue = 0
   , connectorPossibleEncoderIDs = [EncoderID 20]
   , connectorEncoderID = Just (EncoderID 20)
   , connectorHandle = Handle ...

Each connector reports its type in the connectorType field: in our example it is a virtual port because we use QEMU, but it could have been VGA, HDMI, TV, LVDS, etc.

If there are several connectors of the same type in the same card, you can distinguish them with the connectorByTypeIndex field.

You can check whether a display device is actually plugged into a connector with the connectorState field: in our example, there is a (virtual) screen connected.

We can get more information about the connected device:

  • connectedDeviceModes: modes supported by the connected display device. In particular, a display resolution is associated with each mode. In our example, the display resolution of the first mode is 1024x768; the other modes have been left out for clarity.
  • connectedDeviceWidth and connectedDeviceHeight: some display devices report their physical dimensions in millimeters.
  • connectedDeviceSubPixel: whether the device uses some kind of sub-pixel technology.
  • connectedDeviceProperties: device specific properties. In this example, there is only a single property named “DPMS” which can take 4 different values (“On”, “Standby”, “Suspend”, “Off”) and whose current value is 0 (“On”): this property can be used to switch the power mode of the screen.
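When a device reports non-zero physical dimensions, we can combine them with a mode's resolution to estimate the pixel density. A small sketch (the function name is illustrative):

```haskell
-- Estimate dots-per-inch from a mode's horizontal resolution and the
-- physical width in millimeters reported by the device (as in the
-- connectedDeviceWidth field). Returns Nothing when the device
-- reports 0, as the virtual QEMU connector does.
horizontalDPI :: Int -> Int -> Maybe Double
horizontalDPI _       0       = Nothing
horizontalDPI hPixels widthMM =
   Just (fromIntegral hPixels / (fromIntegral widthMM / 25.4))
```

For example, a 1920-pixel-wide panel that is 344 mm wide has a density of about 142 DPI; for the virtual QEMU connector above, which reports 0, no density can be computed.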

A connector gets the data to display from an encoder:

  • connectorPossibleEncoderIDs: list of encoders that can be used as sources.
  • connectorEncoderID: identifier of the currently connected encoder, if any.

3.2.3. Detecting Plugging/Unplugging

We can adapt what our system displays to the connected screens, but how do we detect when a screen is connected or disconnected?

A solution would be to poll the connectorState field periodically, but a better method is to use the mechanism explained in the basic device management page: when the state of a connector changes, the kernel sends an event to user-space similar to the following one:

   { kernelEventAction = ActionChange
   , kernelEventDevPath = "/devices/.../drm/card0"
   , kernelEventSubSystem = "drm"
   , kernelEventDetails = fromList

When the system receives this event, it knows it has to check the state of the connectors.
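That reaction can be sketched as a simple predicate over the event fields. The KernelEvent type below is a simplified mirror of the event shown above, not the actual haskus-system type:

```haskell
-- Simplified mirror of the kernel event record shown above.
data KernelEventAction = ActionChange | ActionAdd | ActionRemove
   deriving (Eq, Show)

data KernelEvent = KernelEvent
   { kernelEventAction    :: KernelEventAction
   , kernelEventDevPath   :: String
   , kernelEventSubSystem :: String
   } deriving (Show)

-- A "change" event on the "drm" subsystem means that connector
-- states may have changed and should be read again.
shouldRecheckConnectors :: KernelEvent -> Bool
shouldRecheckConnectors ev =
      kernelEventSubSystem ev == "drm"
   && kernelEventAction ev == ActionChange
```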

Note that the number of connector entities may change dynamically. For instance, a single DisplayPort connector supporting Multi-Stream Transport (MST) allows several monitors to be daisy-chained: each monitor receives its own video stream and appears as a distinct connector entity. It is also possible to connect an MST hub to increase the number of connector entities.

3.2.4. Encoders

Encoders convert pixel data into signals expected by connectors: for instance DVI and HDMI connectors need a TMDS encoder. Each card provides a set of encoders and each of them can only work with some controllers and some connectors. There may be a 1-1 relationship between an encoder and a connector, in which case the link between them should already be set.

We can display information about encoders using code similar to the connector code above. When executed in QEMU, we get the following result:

[ Encoder
   { encoderID = EncoderID 20
   , encoderType = EncoderTypeDAC
   , encoderControllerID = Just (ControllerID 19)
   , encoderPossibleControllers = [ControllerID 19]
   , encoderPossibleClones = []
   , encoderHandle = Handle ...

As we can see, the graphic card emulated by QEMU provides a single DAC encoder.

The encoderPossibleClones field contains the sibling encoders that can be used for cloning: only these encoders can share the same controller as a source.
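In other words, two encoders may use the same controller as their source only if each one lists the other in its encoderPossibleClones field. A pure sketch with simplified types (not the haskus-system ones):

```haskell
newtype EncoderID = EncoderID Int deriving (Eq, Show)

-- Simplified encoder carrying only the fields needed here.
data Encoder = Encoder
   { encoderID             :: EncoderID
   , encoderPossibleClones :: [EncoderID]
   } deriving (Show)

-- Two encoders can share the same controller as a source only if
-- each one appears in the other's possible-clones list.
canClone :: Encoder -> Encoder -> Bool
canClone a b =
      encoderID b `elem` encoderPossibleClones a
   && encoderID a `elem` encoderPossibleClones b
```

With the QEMU card above, encoderPossibleClones is empty, so no cloning at the encoder level is possible.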

3.2.5. Controllers

Controllers let you configure:

  • The display mode (display resolution, etc.) that will be used by the display devices connected to the controller through an encoder and a connector.
  • The primary source of the pixel data: a FrameBuffer entity.

We can display information about controllers using code similar to the connector code above. When executed in QEMU, we get the following result:

[ Controller
   { controllerID = ControllerID 19
   , controllerMode = Just (Mode { ...})
   , controllerFrameBuffer = Just (FrameBufferPos
      { frameBufferPosID = FrameBufferID 46
      , frameBufferPosX = 0
      , frameBufferPosY = 0
   , controllerGammaTableSize = 256
   , controllerHandle = Handle ...

  • controllerMode: the display mode that will be used by the display device(s).
  • controllerFrameBuffer: the FrameBuffer entity used as the data source and the coordinates within its contents (frameBufferPosX/Y).
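Configuring a controller thus requires choosing one of the modes advertised by the connected device. A common policy, sketched below with simplified types (the real Mode type carries more fields), is to pick the mode flagged ModeTypePreferred and to fall back on the first advertised mode:

```haskell
import Data.List  (find)
import Data.Maybe (listToMaybe)

data ModeType = ModeTypePreferred | ModeTypeDriver
   deriving (Eq, Show)

data Mode = Mode
   { modeName  :: String
   , modeTypes :: [ModeType]
   } deriving (Eq, Show)

-- Pick the preferred mode if the device advertises one, otherwise
-- fall back on the first advertised mode (if any).
pickMode :: [Mode] -> Maybe Mode
pickMode modes =
   case find (\m -> ModeTypePreferred `elem` modeTypes m) modes of
      Just m  -> Just m
      Nothing -> listToMaybe modes
```

In the QEMU output above, the "1024x768" mode is flagged ModeTypePreferred, so this policy would select it.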

3.2.6. Planes

Some controllers can blend several layers coming from different FrameBuffer entities: these layers are called Planes. Controllers support at least a primary plane and may also support other kinds, such as cursor or overlay planes.

   * List plane resources
   * primary plane
   * cursor planes
   * overlay planes
   * example

3.2.7. Framebuffers And Surfaces

Planes take their input data from FrameBuffer entities. FrameBuffer entities describe how pixel data are encoded and where to find them in GPU memory. Some pixel encoding formats require more than one memory buffer (Surface entities); these buffers are combined to obtain the final pixel colors.

   * Pixel formats
   * FrameBuffer dirty
   * Mode
   * Generic buffers
   * Note on accelerated buffers

If we use an unaccelerated method (“dumb buffers” in Linux terminology) where the graphics data are fully generated by the CPU, applications only have to map the contents of the Surface entities into their address spaces and modify them to change what is displayed.
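Drawing into such a mapped buffer reduces to address arithmetic and pixel packing: the byte offset of a pixel is its row index times the buffer pitch plus its column index times the bytes per pixel, and the color components are packed according to the buffer's pixel format. A sketch for the common XRGB8888 format (helper names are illustrative):

```haskell
import Data.Word (Word8, Word32)
import Data.Bits (shiftL, (.|.))

-- Byte offset of pixel (x,y) in a linear buffer. The pitch (bytes
-- per row) may be larger than width * bytesPerPixel because of
-- alignment constraints, so it must be queried, not computed.
pixelOffset :: Int -> Int -> Int -> Int -> Int
pixelOffset pitch bytesPerPixel x y = y * pitch + x * bytesPerPixel

-- Pack an (R,G,B) triple into a 32-bit XRGB8888 value (the upper
-- byte is unused).
packXRGB8888 :: Word8 -> Word8 -> Word8 -> Word32
packXRGB8888 r g b =
      (fromIntegral r `shiftL` 16)
  .|. (fromIntegral g `shiftL` 8)
  .|.  fromIntegral b
```

Writing the packed value at the computed offset in the mapped Surface changes the corresponding pixel on screen at the next scan-out.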

3.3. Further Reading

As explained in the Device management section, device drivers can support the ioctl system call to handle device-specific commands from user-space. The display interface is almost entirely based on it. Additionally, mmap is used to map graphic card memory into user-space, and read is used to read events (V-Blank and page-flip asynchronous completions).

In usual Linux distributions, the libdrm library provides an interface over these system calls. You can learn about the low-level interface by reading the drm manual (man drm) or its source code.

David Herrmann has written a good tutorial explaining how to use the legacy low-level display interface in the form of C source files with detailed comments. While some details of the interface have changed since he wrote it (e.g., the way to flip frame buffers and the atomic interface), it is still a valuable source of information.

The newer atomic interface is described in an article series on LWN called “Atomic mode setting design overview” (August 2015) by Daniel Vetter.

Wayland is the new display system for common Linux-based distributions. It can be a great source of inspiration and information.

You can also read the Linux kernel code located in drivers/gpu/drm in the kernel sources.

Linux multi-GPU: