HDMI Output with Nvidia-XRun

Linux users with Nvidia Optimus (hybrid graphics) hardware currently face many challenges in making full use of their systems. While there are many solutions that allow both the integrated and discrete GPUs to be active, it is still not possible to use video outputs from both sources concurrently through a single display server session (X11 or Wayland). Although X.org 1.20 may make this possible (see the end notes for details), until then there is only one other solution: a separate session for each GPU. With a bit of work, it’s possible to use the Nvidia-XRun project to accomplish this.

In my previous post, discussing the pros and cons of nvidia-xrun, I was under the impression that this tool would not allow the use of a separate output. In my case, the HDMI output is wired through the Nvidia GPU, while the others go through the motherboard (i.e. through the Intel integrated graphics). As it turns out, the combination of a faulty cable and a loose adapter led me to the wrong conclusion, and it is indeed possible.

The intended goal of nvidia-xrun is to start an X11 session on the discrete GPU and essentially have the video output copied to the outputs connected to the integrated GPU (accomplished through sinks). This means that all displays will be mirrored, showing what is being rendered on the Nvidia GPU. While this may be ideal in some circumstances, it isn’t always the desired behaviour, since discrete GPUs generally have higher power consumption. Furthermore, it only starts a basic X session, so running a desktop environment, or at the very least a customized window manager, can require a fair bit of extra work. It is also not possible to run a Wayland session using this method.

Running nvidia-xrun in Parallel

Although the tool is intended to be used in a separate virtual console (i.e. TTY), it is possible to run it in tandem with one’s existing session, regardless of whether that session is X11 or Wayland. To accomplish this, a few modifications need to be made to the files included with the nvidia-xrun package. First, create a supplemental X11 configuration file in /etc/X11/nvidia-xorg.conf.d/ (named, for example, hdmi.conf) to override the defaults. The first section added prevents the integrated displays from being taken over:

Section "ServerLayout"
Identifier "layout"
Screen 1 "nvidia"
EndSection

This removes the inactive Intel option in order to prevent the session from taking over all screens.

Next, the nvidia-xrun script itself needs to be modified. Normally, it runs the fgconsole command to determine which virtual console is currently in use. Unfortunately, when a display server is already running, this command returns an error unless it is run as the root user. Edit the nvidia-xrun script (run which nvidia-xrun to find its location) and change the following line:

LVT=`fgconsole`

to:

LVT=`sudo fgconsole`
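
Since the script now invokes sudo, it will prompt for a password every time unless sudo is told otherwise. As a sketch, assuming your user is in the wheel group and fgconsole lives at /usr/bin/fgconsole, a rule like the following (created with visudo -f /etc/sudoers.d/fgconsole) lets it run without a prompt:

# Allow members of the wheel group to run fgconsole without a password.
%wheel ALL=(root) NOPASSWD: /usr/bin/fgconsole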

Lastly, the proper permissions must be granted so that an X11 session can be started with root privileges by any user from a terminal emulator. Create a file /etc/X11/Xwrapper.config with the following contents:

allowed_users=anybody
needs_root_rights=yes
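
Here allowed_users=anybody permits any user to start an X server, and needs_root_rights=yes ensures the server runs with the root privileges this setup requires.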

With these changes, running nvidia-xrun in a terminal emulator will start the session only on your discrete GPU. As much as I’d like to say that’s all good and done, there is still a problem to deal with.
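
For example, assuming a window manager such as Openbox is installed, the nvidia session can be launched from a terminal in the existing desktop with:

nvidia-xrun openbox-session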

Separating Input Devices

Because both sessions are running on the same virtual console, the input devices will (normally) be shared. In other words, one mouse and keyboard will control both sessions at the same time. While this is amusing, it’s very difficult to utilize the Nvidia session properly. There are two possible solutions to this issue:

  • Using a separate keyboard and mouse for each session
  • Using a virtual-kvm application to share one set between both sessions

Having a separate device for each isn’t always possible, let alone easy to use, which is why I opt for the second option.

Synergy is a virtual-KVM solution designed to share a mouse and keyboard between multiple PCs, whether they run Windows, Mac, or Linux. It can also be used to share these devices between multiple sessions on the same PC. While the tool offers paid binaries, it is open source, and the source code is available on GitHub; as such, most Linux distributions offer it as a package. To use it locally, two instances must run concurrently: a client (the Nvidia session) and a server (the host session). To start the client, add a line to the nvidia-xinitrc file:

synergyc -f -n nvidia 127.0.0.1 &

This will start the client with the screen name nvidia as a background process, connecting to the server on the local machine.
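
For context, a rough sketch of the relevant part of nvidia-xinitrc (the path /etc/X11/xinit/nvidia-xinitrc and the choice of openbox are assumptions; adjust to your setup):

#!/bin/sh
# Start the Synergy client in the background before the window manager.
synergyc -f -n nvidia 127.0.0.1 &
# Launch a window manager (openbox is only an example).
exec openbox-session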

The server may then be started with the following:

synergys -f -n host -a 127.0.0.1 &

This will use the configuration file found at ~/.synergy.conf. The server may alternatively be started with just synergy and configured using the GUI.

The configuration file may resemble:

section: screens
    host:
    nvidia:
end

section: links
    host:
        right = nvidia
    nvidia:
        left = host
end
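
With this layout, moving the pointer past the right edge of the host screen carries it onto the nvidia session, and moving past the left edge of the nvidia session brings it back.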

The last step is to disable the inputs on the nvidia-xrun session itself. In the xorg configuration file created earlier, add the following to ignore the input devices:

Section "InputClass"
Identifier "libinput pointer catchall"
MatchIsPointer "on"
MatchDevicePath "/dev/input/event*"
Driver "libinput"
Option "Ignore" "true"
EndSection

Section "InputClass"
Identifier "libinput keyboard catchall"
MatchIsKeyboard "on"
MatchDevicePath "/dev/input/event*"
Driver "libinput"
Option "Ignore" "true"
EndSection
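
Because this file lives in /etc/X11/nvidia-xorg.conf.d/, which only the nvidia-xrun session reads, the host session’s input devices are unaffected; the physical devices remain attached to the host and are forwarded to the client by Synergy.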

There are a few things to note about the behaviour of Synergy. Because inputs are being simulated on the client, some inputs are not properly passed along. For instance, if the client is running a first-person game, where the mouse is intended to be fixed to the centre of the screen, the simulated input does not handle this correctly, causing the view to spin around. I’ve also lately found that the alt-tab shortcut does not behave correctly.

Extra Step: Reduce Screen Tearing

One problem I’ve noticed when using the native HDMI output is terrible screen tearing. Although there are many guides to reducing or correcting this with Nvidia outputs, I’ve found a certain combination gives the best results. In the xorg configuration file, add the following (note that the BusID is machine-specific; yours can be found with lspci):

Section "Extensions"
Option "Composite" "Enable"
EndSection


Section "Device"
Identifier "nvidia"
Driver "nvidia"
BusID "PCI:1:0:0"
Option "RenderAccel" "true"
Option "AllowGLXWithComposite" "true"
Option "TrippleBuffer" "true"
EndSection

Section "Module"
Load "glx"
EndSection

Section "Screen"
Identifier "nvidia"
Device "nvidia"
Option "metamodes" "nvidia-auto-select +0+0 {ForceCompositionPipeline=On, ForceFullCompositionPipeline=On}"
Option "AllowIndirectGLXProtocol" "off"
EndSection
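
If you would rather experiment before committing to the configuration file, my understanding is that the same composition pipeline can be toggled at runtime from within the nvidia session:

nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"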

Final Remarks

All-in-all, it’s nice to know that it’s actually possible to use the HDMI output of my laptop alongside the current session. That being said, I don’t expect I will use it regularly. The monitor I use does not have an HDMI input (hence the adapter), and I sometimes get artifacts appearing because of this. Also, the use of Synergy, while effective, is not a perfect solution. Considering X.org 1.20 is not too far away and may bring native support for the output device, I don’t think waiting a little longer for support is particularly bad. (Update: this appears not to be the case; I recommend those interested read my multi-gpu on Linux post for details.)

5 thoughts on “HDMI Output with Nvidia-XRun”

  1. Are you planning to look into this again?

    I just saw on the mailing list that xorg 1.20 is imminent, although I am not sure what to look for to see whether, as you write, it is “possible to utilize video outputs from both sources concurrently through a single display server session”.


    • As far as I’m aware, version 1.20 doesn’t explicitly address this. The main feature it adds (relative to multi-gpu support) is support for XWayland applications to run correctly on the proprietary Nvidia drivers. I don’t know that we’ll ever truly see a functional solution on X.org sessions, though I expect it will eventually be properly realized with Wayland. Version 1.20 has in fact already been released, and on Arch Linux it has currently reached the testing repository. Once it is considered stable and lands in the regular repositories, I will definitely do more testing.


      • Awesome, thank you for your reply. I didn’t in fact realize that you did. I am definitely going to watch this space to see what you come up with. I don’t have much time to spare on this myself, unfortunately. 😦

        I have an Alienware 13R3 with the Nvidia card responsible for all external connections, and the Intel card responsible for the laptop’s screen. Merging this into one big (virtual) desktop would be awesome.


      • Just a quick update, for information’s sake: version 1.20 has made it to the stable repositories; however, I’m seeing a few bug reports about it, and another version is already in testing. Once it appears to be more stable, I’ll upgrade and experiment.

