Fixing Wayland: Xwayland screen casting

This blog post introduces a new tool that makes it easy to stream Wayland windows and screens to existing applications running under Xwayland that lack native PipeWire support, whilst retaining full user control throughout.

Intro

On my Plasma wayland session right now, if I use the screen share function in Discord, I'm presented with the following.

It doesn't work 🙁

I have no windows listed, even though I clearly have windows open. The full screen shares are blank.
The same will be true in MS Teams, Slack, Zoom as well as any other Xwayland apps that involve screencasts.

Linux enthusiasts - and by reading developer blogs you're definitely in this camp - will understand why this is and probably know some technique to avoid it, one that involves changing my setup or workflow in some way to work around things on a per-app basis.

For our wider userbase this isn't enough. Wayland is a technical detail, and any switch has to be as transparent as possible for as many apps as possible in all cases - including cases we haven't thought of.

Introducing XwaylandVideoBridge

With our new tool, written by Aleix Pol and myself, the workflow is as follows: I click on the share button. Immediately I'm presented with an option to choose which windows or screens I wish to make available to X.

This is now selectable in my existing application. The stream continues until I stop sharing within the application.

Left: a prompt to choose windows to stream.
Right: how it appears after selection.

Security

The bridge works via the same mechanisms as any "native wayland" video streaming tool: the XDG Desktop Portals. Data is fetched through PipeWire after being requested through the portal with explicit user consent.
Whilst the bridge does mean that any X application could eavesdrop on the shared window, the user still remains completely at the helm of which windows are shared to X applications and, most importantly, when.

We could go the route of trying to be completely seamless to the X client with N fake windows all forwarding content on demand, but I like the demonstration that it's possible to not break existing user applications without having to compromise on our lofty wayland goals.

Performance

Technically there's an overhead; pragmatically it uses no CPU, and any latency is negligible compared to the cost of streaming.

Installation

Grab a flatpak from our GitLab builder, or build from source.

Note that it requires a not-yet-released libkpipewire, something the flatpak resolves.

Whilst only tested on a Plasma wayland session, this should be usable on any Wayland desktop with a working xdg-desktop-portal, screencasting support and a standard system tray.
(Edit March 24th: user testing has shown some mixed results on Gnome with colours, and it does not work at all on Sway.)

Usage

Ensure our proxy is running in the background:

flatpak run org.kde.xwaylandvideobridge

optionally setting it to autostart. After that, everything should kick in automatically the next time an Xwayland application starts streaming.

How it works under the hood

The inspiration for this came from a debug tool we used to show pipewire streams in a window whilst working on Plasma remote desktop support. If we force that debug tool to run as an Xwayland client, it becomes visible to other Xwayland chat / streaming programs. We had 90% of the code already.

The only remaining step was some sneaky tricks to hide this X11 window from the user's view, making it unfocusable, invisible and out of view. We then added detection for when we're being streamed, using the XRecord extension to monitor all clients redirecting the window we own.
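As a rough illustration of those tricks (a hand-written sketch, not the bridge's actual code), hiding a mapped X11 window with plain Xlib looks something like this:

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    // Keep the window mapped, so other clients can still redirect and
    // capture it, but make sure the user never sees or focuses it.
    void hideFromUser(Display *dpy, Window win)
    {
        // Park it far outside any plausible screen geometry.
        XMoveWindow(dpy, win, -10000, -10000);

        // Declare that the window never wants keyboard focus.
        XWMHints hints;
        hints.flags = InputHint;
        hints.input = False;
        XSetWMHints(dpy, win, &hints);
    }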

It's an excellent example of X11 allowing you to do really, really stupid things for novel and creative purposes.

Future Plans

This is only an initial alpha release. How we take it forward is not completely decided; it might remain a standalone tool moving to Flathub or distros, or we might propose shipping it in Plasma 6 by default. There's also a possibility the Linux desktop will be at a point where this is redundant.

There's definitely some minor tweaks still to do on the exact workflow.

Please do give feedback and let us know!

http://discuss.kde.org thread

Plasma 5.22 Beta testing day

Plasma 5.22 is now in beta. This gives us, KDE developers, one month of intense testing, bugfixing and polishing. During this time we need as many hands on deck as possible to help find regressions, triage incoming reports and generally be on top of things as much as possible.

With that in mind, you are officially invited to our "Beta Review Day". Join us online and do QA with us together as a group.

Who?

Any Plasma user able to install the latest beta or run a live ISO with our beta on it and who wants to help.

When?

Saturday 22nd May. A full timetable with areas of focus is available on the wiki: https://community.kde.org/Schedules/Plasma_5_BetaReviewDay#When_will_it_take_place.3F

Where?

Join us over videoconferencing at https://meet.kde.org/b/dav-gmr-qzx. There will be plenty of room available, and you can join with a camera and a microphone or just use the classic text chat.

What will we do?

• Introduce you to Bugzilla so you can support us filing or triaging bugs
• Assign short lists to experienced participants so you can validate and de-duplicate bugs
• Go through a defined list of all the new areas of Plasma to check for regressions
• Devs will be online so you can give us debug info for issues you have, and we can identify and fix things in real time

What should I prepare?

Set up a machine, real or virtual, with the latest Plasma beta desktop. Go here: https://community.kde.org/Plasma/Live_Images#Shipping_the_latest_code_from_Git and choose, download and install a distro that ships either master or Plasma 5.21.90.

See you all soon!

Plasma and the systemd startup

About

Landing in master, Plasma has an optional new startup method that launches and manages all our KDE/Plasma services via a systemd user instance rather than the current boot scripts. This will be available in Plasma 5.21.

It is currently opt-in, off by default. I hope to make it the default where available after more testing and feedback, but it is important to stress that the current boot-up method will continue to exist and be supported into the future. A lot of work was put into splitting and tidying, so the actual amount of duplication in the end result is quite small and manageable.

Our logos are weirdly similar... conspiracy?

Motivation

Overlapping requirements

We have to start a lot of things in a given order; systemd is naturally designed to handle this.

Whilst we already have a somewhat working system, it has issues. One classic problem is services that talk to other services needing to be both spawned and fully initialised before the other can send a message. DBus activation solves a lot, but not quite enough.

For example, we have the real-world case of scripts that run long before plasmashell trying to send notifications; we want the DBus daemon to know our notification server is activatable so that it will pause dispatch of the message, but making it DBus activatable can't guarantee the dependencies are started in the correct order. This is currently solved with a genius, horrific hack.

But starting things up is only half the battle. We currently have a big issue with shutting things down. Whose responsibility is it to stop running services? A lot of things only exit because their wayland connection is swept away. When dealing with services that potentially restart during the session's lifespan, this is far from trivial, and the current situation is fundamentally broken.

Customisation / Sysadmin Familiarity

Most users, and especially sysadmins, already have to learn how to use their init system. Whether it's simply enabling a service on boot or something much more complex, users are exposed to it already. If not, there is good existing documentation and countless Stack Overflow posts to answer any question.

We had a meeting at Akademy two years ago about the use of systemd, and it was the sysadmins from LiMux who clearly knew far more than any programmer about how it should all work. There is a lot more merit in using existing things than just sharing code.


Another big motivating factor was the ability to customise. The root of Plasma's startup is very hardcoded. What if you want to run krunner with a different environment variable set? Or have a script run every time plasmashell restarts, or show a UI after kwin is loaded but before plasmashell to perform some user setup? You can edit the code, but that's not easy and you're very much on your own.

Systemd provides that level of customisation out of the box, at both a distro and a user level. From our POV it comes for free.
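For example, assuming krunner's unit is named plasma-krunner.service (a sketch; check systemctl --user list-units for the real name on your system), a user-level drop-in is all it takes to inject an environment variable:

    # ~/.config/systemd/user/plasma-krunner.service.d/override.conf
    [Service]
    Environment=QT_LOGGING_RULES=krunner.debug=true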

CGroups and resource limits

I've talked before about using cgroups for applications to gain better control of resources, especially as more things become multi-process.

CGroups and slices provide several benefits over what we can do currently. Because resources are always relative to the parent slice, we are able to boost resources as well as limit them. This will allow us to bump kwin and such ever so slightly without needing elevated capability privileges.

This also works the other way; cgroups have some extra scheduler features not otherwise available. We can not only set a weight on a process but also absolute limits. This could be useful for file indexers and the like, to minimise battery drain or rein in runaway processes. CGroups are where all the new kernel features are.

Memory management is another factor: when we run out of memory, the kernel steps in and kills some processes.
We can tag some services as being safer to OOM-kill than others, with a lot more granularity than a single OOM score adjustment. We can provide expected memory usages for a service, we can put default levels on entire slices at once, and it's all easily customisable by the user if their opinions don't match our upstream defaults.
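As a sketch of what that enables (these are standard systemd.exec / systemd.resource-control directives; the values are illustrative, not our defaults):

    [Service]
    # Safer to kill than interactive applications
    OOMScoreAdjust=500
    # Expected ceiling; throttle rather than kill above this
    MemoryHigh=1G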

Logging

The current state of logging is a mess. One of two things happens.
Logs go into one giant "xsession-errors" file. This file sits on a real file system, slowly expanding and expanding. It doesn't rotate; if it gets too big, your home folder becomes full and things explode. Each time you log in, we overwrite that log, so we lose any history.

The alternative is that stderr from a service is simply lost.

Systemd provides queryable logging, with each unit, and each invocation of each unit, kept separate. It's already made my life a lot easier for the last few bugs I've been working on. When people on Bugzilla have this too, it'll make everything easier.
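For example, pulling up the log of just the current plasmashell invocation becomes:

    journalctl --user --boot -u plasma-plasmashell.service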

The road to get here

Previous implementations

A plasma systemd boot has been tried before, by Elias Probst. It worked well as a demo and served as inspiration for some parts. The key difference now is that instead of trying to design around plasma, we're changing plasma to fit what we need, really putting the focus on every last detail.

Refactor, Split, Refactor, Split, ...

Unintuitively, the first step of the port to systemd was a complete rewrite of the legacy path.

Plasma used to be managed by a tree of processes that would spawn, set up some environment variables, spawn the next part of the boot and continue running. We eventually get to ksmserver, a monolith that did many, many things. It started several hardcoded processes like kwin, then all the autostart .desktop files, and then session restore. There were many reasons to break this up, which I outlined when I started in 2018.

Since then we've been chopping out a tiny part every release, so we could always be on top of any regressions that happened.

Once we'd refactored and rewritten some parts, we had a very clear and readable picture of what actually happens in a plasma boot and what really depends on what.

The biggest problem we have is that many things rely on environment variables, or even xrdb databases, being correct at launch time; several early parts of the boot (kcminit, kded phase 0) adjust these, and later parts may rely on them for correct theming.

Another massive rewrite that landed (by Kai Uwe Broulik) was detaching kwin from ksmserver. kwin needs to know which session we're restoring so that restored windows can be placed in the right position; this previously came from an environment variable, which meant we needed ksmserver up and running before spawning kwin. We rewrote how that information gets communicated so the order can be agnostic, without losing any features.

Final patchset

Eventually we got to a point where ksmserver deals only with X11 session management. We have another quite tiny binary (plasma_session) that deals only with starting services in a specific order, and everything else is a completely separate, standalone, independent component. This has been released for a while. Even without the systemd boot, everything is in a much, much cleaner and better-off state.

The final patch to use systemd is then really quite boring and small. We just don't call the plasma_session binary, and instead start a systemd target: plasma-workspace.target.

All our core services ship their own .service files, which plasma-workspace.target requires for a full Plasma session.

This picture shows random circles and lines. I felt it helped break up the wall of text.

Autostart files

One of the key aspects of startup is handling autostart .desktop files in /etc/xdg/autostart or ~/.config/autostart. We need this to work as before.

Benjamin Berg (of Red Hat and Gnome) came up with a very elegant solution using a systemd generator. It goes through all the .desktop files, parses them, and generates an appropriate systemd service automagically.

For application developers everything keeps working the same, but end users can interact with them like native services. We get the best of both worlds.

Despite being a shared implementation, even KDE-specific things like X-KDE-AutostartCondition just work. Implementation-wise, the generator converts this into an ExecCondition= line, which calls into a Plasma-shipped binary to check whether the condition is met.
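Roughly speaking (a from-memory sketch, not the generator's literal output; the helper's name and paths are illustrative), an autostart entry with such a condition ends up as a unit like:

    [Service]
    ExecStart=/usr/bin/someapp
    # Generated from X-KDE-AutostartCondition; the service only
    # starts if the helper exits successfully
    ExecCondition=/usr/bin/kde-systemd-start-condition 'somerc:General:autoStart:true'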

Current state

Is it faster?

A lot of the prep work over the past few years to get to this state has made absolutely massive differences. The splitting found bugs, dead code and potential optimisations that add up to magnitudes of difference.

As for switching over to systemd itself, it should be roughly the same. Ultimately we're triggering the exact same set of things, in the exact same order, with the exact same blocks, if we've done our job properly. Sorry to disappoint!

Is it finished?

The fundamentals are definitely at a point that I think is stable and working, but we haven't enabled all of the potential extra features we have available. I would welcome experienced people giving feedback and helping out.

Enabling

You must have the latest master of Plasma; it is not in the 5.20 beta.

Enable with:

kwriteconfig5 --file startkderc --group General --key systemdBoot true

Note that there is a check for systemd 246 or newer, which the generator for existing autostart services requires.

To confirm the boot worked correctly you can run

systemctl --user status plasma-plasmashell.service

It should show as active.

This safety check for systemd 246 can be skipped. You will lose auto-activation of apps using the classic .desktop file approach, but everything else still works. I won't mention how publicly, to avoid false bug reports, but as a dev you can probably find out.

Dev setups - a caveat

If you build your own KDE and install into a non-standard prefix, instead of getting Plasma from your distribution, there is one new hurdle. The systemd user instance starts earlier than most people set their custom prefixes, so naturally it can't find things in your new paths.

There are multiple solutions; the simplest is to re-run the ./install-sessions script in plasma-workspace, which contains a workaround for systemd to find services.
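Whichever route you take, you can check what the user instance actually sees with:

    systemctl --user show-environment | grep -E 'PATH|XDG_DATA_DIRS'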

Wrap Up

A lot of the work done to get to this stage has been extremely beneficial to all users regardless of whether they end up actually using this. We fixed and cleaned so much along the way.

I strongly believe the benefits it offers are very real, and look forward to hearing feedback from users using it.
If you do have any issues please do reach out to me in the usual channels.

Plasma Beta Review Day

Plasma 5.20 is now in beta, which gives us one month of intense testing, bugfixing and polishing.

During this time we need as many hands on deck as possible to help with finding regressions, triaging incoming reports and generally being on top of as much as possible.

In order to make this process more accessible, more systematic and hopefully more fun, we are going to run an official "Plasma Beta Review Day".

Who?

Any user of Plasma able to install the latest beta, or run a live ISO with our beta on it, who wants to help.

When?

Thursday the 24th September; midday to midnight CEST, with people welcome to turn up and drop out whenever.

Where?

Join us in the webconferencing channel at https://meet.kde.org/b/dav-gmr-qzx, the web server we used for Akademy. There will be a room available.
You can join with a camera and a microphone, or just use the classic text chat.

What will this consist of?

  • Introductions to bugzilla for people who want support filing or triaging their first bugs
  • Being assigned short buglists to validate and de-duplicate, for those more experienced
  • Going through a defined list of all the new areas of Plasma to check for regressions
  • Devs being online to get debug info for issues you have, so we can identify and fix things in real time

What should I prepare?

Ideally get yourself set up with a beta of the latest Plasma. Follow the link:
https://community.kde.org/Plasma/Live_Images for an image running either master or Plasma 5.19.90

I hope to see you all soon!

Running PlasmaShell with Vulkan

About QtQuick and RHI

QtQuick, in one word, is amazing.

QtQuick, in slightly more words, is a scene graph implementation. At a developer level we create abstract "Items", which might be some text, a rectangle, or a picture. This in turn gets transformed into a tree of nodes with geometry, "materials" and transforms. In turn this gets translated into a big long stream of OpenGL instructions, which we send to the graphics card.

Qt6 will see this officially change to sit on top of the "Render Hardware Interface" (RHI) stack which, instead of always producing OpenGL, will support Vulkan, Metal and Direct3D natively. The super clever part is that custom shaders (low-level fast drawing) are also abstracted, meaning we write some GLSL and generate the relevant shader for each API without having to duplicate the work.

This blog series gives a lot more detail: https://www.qt.io/blog/qt-quick-on-vulkan-metal-direct3d.

Plasma code primarily interacts with Items and occasionally nodes, slightly above the level being abstracted.

Current State

Qt 5.15 ships with a tech preview of RHI and the Vulkan interface. I spent some time setting it up and exploring what we need to do to get our side fully working. With some packages installed, a few plasma code fixes and some env vars set, I have a fully working Vulkan plasmashell.
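For reference, the switches in question should be Qt 5.15's RHI tech-preview environment variables (restarting plasmashell shown purely for illustration):

    export QSG_RHI=1               # opt in to the RHI rendering path
    export QSG_RHI_BACKEND=vulkan  # pick Vulkan over the default backend
    plasmashell --replace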

Screenshot

Unsurprisingly it looks the same, so a screenshot is very unexciting. I enabled the Mesa overlay as some sort of proof.
The reason it shows 2fps is that plasmashell only updates when something has changed; in this case the text cursor blinking every 500ms.

Despite it being a preview it is in a damn good state! Things are usable, and really quite snappy, especially notification popups.

What needs work

Some things need work on our side, in particular:

  • All of our custom shaders need porting to the updated shader language.
  • Taskbar thumbnails use low level GL code that needs one extra layer of implementation.
  • Use of QtQuickWidget in systemsettings.

This means some elements are invisible, don't render with the full graphical effects, or, in the last case, crash.
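On the shader side, the RHI path consumes pre-baked shader packages rather than raw GLSL; each custom shader needs a one-off bake with Qt's qsb tool, along these lines (flag set as shown in the Qt docs; file names are illustrative):

    qsb --glsl "100 es,120" --hlsl 50 --msl 12 -o myshader.frag.qsb myshader.frag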

But in the whole scheme of things, everything is in a very encouraging state.

What about KWin?

Whilst QtQuick is the cornerstone of plasmashell, systemsettings and so many applications, for historical reasons KWin made use of OpenGL before it was a mainstream part of Qt. Therefore this setup is mostly unrelated to KWin. Fortunately there's no reason these two have to be in sync.

Wrap up

This isn't usable for end users, and given it's only a tech preview upstream, this is not something we can ever expect to officially support within Plasma 5.

But we can start doing the prep work, and the current state is so promising that I think we can deliver a native Vulkan experience to users on day one of Plasma 6.

What we can learn from Plasma telemetry

Since Plasma 5.18, about 7 months ago, Plasma has shipped with a telemetry system. It is opt-in (i.e. off by default): users must actively choose whether (and how much) data to send to us.
No private or identifying information is sent, and everything is stored in line with our privacy policy.

Currently we have hit just shy of 100,000 updates!

We have started off requesting very little information: versions, GPU info and some basic screen information. However, the library powering this is extremely powerful and capable of so much more, which we can build on in the future to identify weak areas where we need to invest time and effort, and also to identify features or platforms that are under-utilised and could be dropped.

I have recently been trying to improve how we extract and visualise the data collected and draw some conclusions. I want to present some of the aggregated metrics.

What we can learn from Plasma statistics

Used Plasma Versions

To explain our numerical versioning additions:
.80 = git master before the next version, up until the beta
.90 = after the stable branch forks for release, up until the next .0 release

LTS

The biggest surprise is that the LTS is currently only used by around 5% of people, with 93% of reporting users on the latest stable (5.19).

At the current rate it does make me question whether the LTS is worth it. Maybe the LTS will only gain traction when the next LTS distro release arrives, which could come later?

Obviously any decision will only be made as the result of a discussion with all stakeholders, but this is definitely raising questions.

Testing

Right now about 1.5% of people run master; I had expected those users to be the most into helping with telemetry, skewing this further.
The bigger surprise is that the number of people running master fluctuates: weekly users can go between 30 and 60. I had expected this to be constant.

It is comforting to see that betas do get more users, going up to 2.5% of our reporting userbase.

Interesting observations

There is one person still actively using the Plasma 5.18 beta. Not 5.18, the beta for 5.18, which was 8 months ago.
I have so many questions.

Screens

800x600 resolution setups are still a thing we need to support; even if they are just VMs, they're still used. This is very relevant, as we often get commits blindly setting a window's minimum size hint to be bigger because it "looks nicer". I now know I do still need to ensure in review that we don't break these setups.

We also see that ultrawide monitors are surprisingly unpopular despite clearly being the best monitor setup possible.

Graphic drivers

The graph is pretty self-explanatory: Intel has the most users, but the Nvidia proprietary driver comes in at roughly a quarter of the total.
That makes it an important target to support as best we can, even if it comes with its share of problems.

My personal setup doesn't match your conclusions!

The reason we want to use telemetry is to drive decisions with real data.

As a user you are more than welcome to choose whether to opt into the statistics; we understand privacy is important, which is why everything is opt-in only.
However, real decisions will ultimately be based on the data we have available, so if you want your use case to be counted, please do consider submitting telemetry to us.
To enable telemetry, go to "System Settings" and select the "User Feedback" tab.

Bringing modern process management to the desktop

A desktop environment's sole role is to connect users to their applications. This includes everything from launching apps to actually displaying them, but also managing them and making sure they run fairly. Everyone is familiar with the concept of a "task manager" (like ksysguard), but over time task managers haven't kept up with the way applications are developed, or with the latest developments in Linux.

The problem

Managing running processes

There used to be a time when one PID == one "application". The kwrite process represents KWrite, the firefox process represents Firefox; easy. But this has changed. To pick an extreme example:
Discord in a flatpak is 13 processes!

This basically renders our task manager's process view unusable. All the names are random gibberish, and trying to kill the application or set nice levels becomes a guessing game. Parent trees can help, but they only get you so far.

It's unusable for me, it's probably unusable for any user and gets in the way of feeling in control of your computer.

We need some metadata.

Fair resource distribution

As mentioned above discord in a flatpak is 13 processes. Krita is one process.

  • One will be straining the CPU because it is a highly sophisticated application doing highly complicated graphic operations
  • One will be straining the CPU because it's written in electron

To the kernel scheduler these are just 14 opaque processes. It has no knowledge that they group into two different things, so it can't come up with something that's fair.

We need some metadata.

(Caveat: obviously most processes are idling, and I've ignored threads for the purposes of making a point; don't write in about it.)

It's hard to map things

Currently the only metadata for an "application" is on a window. To show a user-friendly name and icon in ksysguard (or any other system monitor) we have to fetch a list of all processes, fetch a list of all windows, and perform a mash-up, relying on arbitrary heuristics for handling parent PIDs, which is unstable and messy.

To give some different real world examples:

  • In plasma's task manager we show an audio indicator next to the relevant window; we do this by matching the PIDs of whatever is playing audio to the PID of a window. Easy for the simple case... however, as soon as we go multi-process we have to track the parent PID, and each "fix" just alternates between one bug and another.
  • With PID namespaces apps can't correctly report client PIDs anymore.
  • We lose information on what "app" we've spawned. We have bug reports where people have two different taskmanager entries for "Firefox" and "Firefox (nightly)" however once the process is spawned that information is lost - the application reports itself as one consistent name and our taskbar gets confused.

We need some metadata.

Solution!

This is a solved problem!

A modern sysadmin doesn't deal in processes, but in cgroups. The cgroup manager (typically systemd) spawns each service in its own cgroup; it uses cgroups to know what's running, and the kernel can see which things belong together.

On the desktop, flatpaks spawn themselves in cgroups so that they can use the relevant namespace features.

You're probably already using cgroups. As part of a cross-desktop effort we want to bring cgroups to the entire desktop.

Example

Before and after of our system monitor.

Ultimately the same data, but way easier to read.

Slices

Another key part of cgroup usage is the concept of slices. Cgroups form a hierarchical structure, with slices as logical places to split resource usage. We don't adjust resources globally; we adjust resources within our slice, which then provides information to the scheduler.

Conceptually you can imagine that we just adjust resources within our level of a tree. Then the kernel magically takes care of the rest.

More information on slices can be found in the excellent series World domination with cgroups.

Default slices

This means we can set up some predefined slices. Within the relevant user slice, these will consist of:

  • applications
  • the system (kwin/mutter, plasmashell)
  • background services (baloo, tracker)

Each of these slices can be given some default prioritisations and OOM settings out of the box.
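systemd's user instance already ships slices along these lines (app.slice, session.slice and background.slice). As a hedged example of the kind of default this unlocks, capping everything in the background slice is a one-liner:

    systemctl --user set-property background.slice CPUQuota=25%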

Dynamic resource shifting

Now that we are using slices, and only adjusting our relative weight within the slice, we can shift resource priority to the application owning the focused window.

This only has an effect if your system is running at full steam from multiple sources at once, but it can provide a slicker response with no drawback.

Why slice, doesn't nice suffice?

Nice is a single value, global across the entire system. Because of this, user processes can only be lowered, never raised, to avoid messing with the system. With slices we only adjust relative weight compared to services within our slice, so it's safe to give the user full control within it. Any adjustment to an application won't impact system services or other users.

It also doesn't conflict with nice values set explicitly by the application. If we give kdevelop a greater CPU weight, clang won't suddenly take over the whole computer when compiling.

Fixing things is just the tip of the iceberg

CGroup extra features

CGroups come with a lot of new features that aren't available at a per-process level.

We can:

  • Set limits so that a group can't use more than N% of the CPU
  • Gracefully close processes on logout
  • Disable networking
  • Set memory limits
  • Prevent forkbombs
  • Provide hints to the OOM killer, not just with a weight but with expected ranges that should be considered normal
  • Freeze groups of processes (which will be useful for Plasma Mobile)
    ...

All of this is easy for a user or system administrator to add. Using drop-ins, one can add a .service file to ~/.config/systemd/user.control/app-firefox@.service and manipulate any of these.
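A sketch of what such a file could contain (the values are made up for illustration):

    # ~/.config/systemd/user.control/app-firefox@.service
    [Service]
    # Hard cap at 80% of one CPU
    CPUQuota=80%
    # A cheap forkbomb guard
    TasksMax=256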

(Caveat: some of these features only work for applications spawned as new transient services, not the lite version using scopes that's currently merged in KDE/Gnome.)

Steps taken so far

Plasma 5.19 and recent Gnome releases now spawn applications into their own cgroups, but we're not yet surfacing the results we can get from this.

For KDE devs, providing the metadata is easy.

If spawning a new application from an existing application, be sure to use either ApplicationLauncherJob or CommandLauncherJob and set the respective service. Everything else is then handled automagically. You should be using these classes anyway for spawning new services.
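A minimal sketch of the developer side (KIO's launcher job; the desktop file name is illustrative):

    #include <KIO/ApplicationLauncherJob>
    #include <KService>

    void launchKate()
    {
        // Look up the application's .desktop file; this is the metadata
        // that ends up attached to the newly spawned cgroup.
        KService::Ptr service = KService::serviceByDesktopName(QStringLiteral("org.kde.kate"));
        auto *job = new KIO::ApplicationLauncherJob(service);
        job->start();
    }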

For users, you can spawn an application with kstart5 --application foo.desktop.

That change to the launching is relatively tiny, but getting to this point in Plasma wasn't easy - there were a lot of edge cases that messed up the grouping.

  • kinit, our zygote process, really meddled with keeping things grouped correctly
  • drkonqi, our crash handler and application restarter
  • DBus activation has no knowledge of the associated .desktop file when an application is DBus-activated (such as spectacle, our screenshot tool)
  • and many, many more papercuts throughout different launch paths

Also, to fully capitalise on slices we need to move all our background processes into managed services and slices. That is worthy of another (equally lengthy) blog post.

How you can help

It's been a battle to find these edge cases.
Whilst running your system, please run systemd-cgls and point out any applications (not background services yet) that are not in their appropriate cgroup.

What if I don't have an appropriate cgroup controller?

(e.g BSD users)

As we're just adding metadata, everything will continue to work exactly as it does now. All existing tools work exactly the same. Within our task manager we will still keep a process view (it's still useful regardless), and we won't put in any code that hard-depends on the cgroup metadata being present. We'll keep the existing heuristics for matching windows with external events; cgroup metadata will just be a strong influencing factor in that. Things won't get worse, but you won't be able to capitalise on the new features discussed here.

Plasma 6 porting work we need help doing now!

"6.0" is a word that brings a lot of excitement but also a lot of aprehension and for good reason. The reason our .0 releases sometimes struggle to match the quality isn't due to changes in the underlying libraries changing but how much we have to port away from the things that we've deprecated internally and have put off porting to.

Too many changes in one release become overwhelming, and bugs creep in without time to address them.

We want to be proactive in avoiding that.

At a recent Plasma sprint we went through some of our bigger porting targets that we want to finish completely in time for the 6.0 release, work we can actively start within the 5.x series, where things can be done more gradually and incrementally.

I've listed two big topics below.

DataEngines

History

DataEngines are a piece of tech from the KDE4 era. Effectively, the idea was to provide an abstract mechanism for exposing arbitrary data or plugins in a scriptable format usable by various language bindings. A valid goal at the time, but Qt5 provided a much more efficient mechanism for this, capitalising on the meta-object system and models.

DataEngines now mostly form an unnecessary layer of indirection that makes QML code hard to parse and suboptimal to run. They are also Plasma-specific, which doesn't help other QML consumers of the same data.

We kept them for Plasma 5, as there's some valid, perfectly working code exposed through them, and we have been slowly porting away.

Task

The task isn't to port all existing DataEngines. Some already have replacements under different names; some aren't widely used.

We want to find all existing applets that use DataEngines and make sure there are modern QML bindings we can port the respective applets to.

A list of tasks can be found at:
https://phabricator.kde.org/T13315

System settings modules

History

Systemsettings is powered by a multitude of KDE Configuration Modules (KCMs), each one a standalone plugin. There are nearly 100 modules, with dated code stretching back over 20 years.

KDE, and Plasma especially, is at a turning point between two toolkits. Some things are written in QWidgets and others use the newer QtQuick. We try to hide this from the user; everything should of course look seamless. Implementation details shouldn't affect the UI.

We've been slowly porting our system settings modules, and whilst some were rocky at first, the later results are looking really nice and polished, making use of emerging new patterns in our Kirigami toolkit.

Mixing two toolkits in the same process leads to a lot of complications that leak into the UI, as well as overhead. We want to make it an objective to see if we can port everything, leading to an overall simpler stack.

Task

The overall task is to find KCMs that haven't been ported yet and make the appropriate changes.

However, this isn't just a goal of blindly porting.

We want to take a step back, really polish the UX, tidy the code to have a good logic vs UI split, and revisit some modules that haven't been touched in a long time, putting effort into our underlying frameworks to make sure we have everything ready on the QtQuick side to do so.

A list of KCMs and their porting status can be found at:
https://phabricator.kde.org/tag/plasma_kcm_redesign/

Getting involved

The best way to get involved is to comment on the relevant Phabricator ticket, or swing by #plasma on IRC and let people know what you want to work on, and we'll help you get started.
Some of the system settings modules have mockups from the VDG, and we want to keep them in the loop.

Running kwin wayland on Nvidia

Note: If you are visiting from the future please refer to the wiki here which will be more up-to-date than this blog post.


I recently switched to running wayland full-time on all my machines, including my Nvidia box. It was good that I did, as it seems Qt had regressed; fortunately I was able to fix it just in time for the final Qt LTS.

I've also updated the wiki with the steps needed to run kwin wayland on Nvidia whilst I had the information fresh.

Steps

Pre-requisites

You need a very up-to-date Qt. Make sure you have Qt >= 5.15.0, or have 4bd13402f0293e85b8dfdf92254e250ac28094c7 cherry-picked.

Mode setting

Check your driver is running in modesetting mode:

cat /sys/module/nvidia_drm/parameters/modeset

It should print "Y".

If not, modify your kernel command line and add the parameter nvidia-drm.modeset=1. Search for "kernel parameters" in your distribution's documentation, though https://wiki.archlinux.org/index.php/Kernel_parameters is typically a good guide for any distro.

Tell kwin to run with EGLStreams.

You should set the environment variable KWIN_DRM_USE_EGL_STREAMS=1.

The easiest way is to add a file /etc/profile.d/kwin.sh with the line:
export KWIN_DRM_USE_EGL_STREAMS=1

Verify this is working after rebooting with the command line tool env.
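For example:

    env | grep KWIN_DRM_USE_EGL_STREAMS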

Login

Select "Plasma (wayland)" from your login manager
Enjoy your beautiful super-fast accelerated wayland desktop!

That's a lot of steps!

Obviously this number of manual steps is unacceptable, and I hope to reduce it over the next Plasma cycle.

From the user telemetry we get from Plasma users, we know a sizable number of X11 users run the proprietary Nvidia driver. Ultimately user experience comes first above all else, so it is important.

Clearing up some facts on Nvidia and wayland

I read a lot of misinformation about the Nvidia wayland situation on social media. I'll try and clarify some bits.

  • You cannot say "Nvidia doesn't support wayland". This doesn't make any sense. There's no relevant part of wayland for it to support or not support.

  • You can say "Nvidia doesn't support GBM", a system for passing buffers (window contents) between places. Despite the name Generic Buffer Manager, GBM is very closely tied to Mesa, which is the base of effectively all the open source Linux drivers.


  • You cannot say "Nvidia doesn't follow /the/ standard"; there are a few. EGLStreams actually is a fully defined open standard from Khronos, as much of a core extension as possible. It's used by multiple vendors, albeit mostly horrible Mali chips.

  • You can say it would be better if there was just one magic implementation that compositors and toolkits had to support. Maybe Vulkan (or more specifically WSI) will fix it 🙂


  • You cannot say that the experience of using wayland apps is worse on Nvidia. Practically it's just as fast, accelerated, with all the shiny effects.

  • You can say that the experience of using GL X11 apps on wayland is worse. As XWayland doesn't have EGLStreams support, data gets copied the slow way. If you're playing an X game or doing some Blender work, this is noticeable.


  • You cannot say that KDE devs won't fix it, as this post starts with me literally fixing a toolkit when an issue came up!

  • You can say that the Nvidia wayland experience has fewer testers. Though like all things wayland, this is something you can help with! Get involved, file bugs, come join us in #kwin on Freenode. Start your kwin wayland hacking journey!

Improving Plasma’s Rendering (Part 1/2)

Background:

Many parts of Plasma are powered by QtQuick, an easy-to-use API for rendering shapes, text, buttons and so on.
QtQuick contains a rendering engine powered by OpenGL, making full use of the graphics card and keeping our drawing super fast, super lightweight and in general amazing... when things work.

Handling Nvidia context loss events

When the proprietary nvidia driver comes out of suspend, or comes back from a different terminal, the pictures that it had stored get corrupted. Worse, as we show stray video memory, it can even leak data on the lock screen.

When this occurs it might look something like this:
Like something from the Tate Modern... a horrible mess.

With various text or icons getting distorted seemingly at random.

Fortunately Nvidia do have an API to emit a signal when this happens, allowing us to at least do something about it.

This handling has to be done inside every single OpenGL-powered application, which with the increasing popularity of QtQuick is a lot of places.

The new state

After over a year of gradual changes, all of Plasma now handles this event and recovers gracefully. Some of these changes are in Qt 5.13, but some more only just landed in Qt 5.14, literally this evening.

Some notes for other devs

A QtQuick application

Due to some complications, handling this feature has to be opt-in. Triggering the notification leads to some behavioural changes which, if you're not prepared for them, will freeze or crash your application.

To opt in, one must explicitly set the surface format used for the backing store via:
QSurfaceFormat::setOption(QSurfaceFormat::ResetNotification)

Within KDE code this can be done automagically with the helper function
KQuickAddons::QtQuickSettings::init(), called early in your main method.
This sets up the default surface format, as well as several other configurable QtQuick settings.
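Outside KDE code, the manual version looks something like this (a minimal sketch):

    #include <QGuiApplication>
    #include <QSurfaceFormat>

    int main(int argc, char *argv[])
    {
        // Must happen before the first GL context / window is created.
        QSurfaceFormat format = QSurfaceFormat::defaultFormat();
        format.setOption(QSurfaceFormat::ResetNotification);
        QSurfaceFormat::setDefaultFormat(format);

        QGuiApplication app(argc, argv);
        // ... set up your QML engine / windows here ...
        return app.exec();
    }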

Everything else is now handled within Qt: if we detect an event, the scene graph is invalidated, cleaned up and recreated.

However, if you create custom QQuickItems using the QSG classes, you must be really, really sure to handle resources correctly.

As a general rule, all GL calls should be tied to the lifespan of QSGNodes and not of the QQuickItem itself. Otherwise, items should connect to the window's sceneGraphInvalidated signal.
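For instance, an item holding GL resources outside its nodes can hook that signal once it has a window (a sketch; MyItem and releaseGlResources are hypothetical):

    #include <QQuickItem>
    #include <QQuickWindow>

    class MyItem : public QQuickItem
    {
        Q_OBJECT
    protected:
        void itemChange(ItemChange change, const ItemChangeData &data) override
        {
            if (change == ItemSceneChange && data.window) {
                // Direct connection: the signal is emitted on the render
                // thread, which is where the GL resources must be released.
                connect(data.window, &QQuickWindow::sceneGraphInvalidated,
                        this, &MyItem::releaseGlResources, Qt::DirectConnection);
            }
            QQuickItem::itemChange(change, data);
        }
    private:
        void releaseGlResources() { /* delete textures, buffers, ... */ }
    };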

Using QtOpenGL outside QtQuick

To detect a context loss, check for myQOpenGLContext->isValid() if a makeCurrent fails.
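In code, the detection looks something like this (names illustrative):

    if (!context->makeCurrent(surface)) {
        if (!context->isValid()) {
            // The context is lost: every texture, shader and buffer
            // created from it is gone. Tear everything down and build
            // a fresh QOpenGLContext before rendering again.
        }
    }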

In the event of a context loss one must discard all known textures, shaders, programs and vertex buffers (everything known to GL) and recreate the context.

One especially quirky aspect of this flag is that in the event of a context loss, glGetError will not clear itself but continues to report a context-loss error. Code trying to reset all GL errors will get stuck in a loop. This was the biggest battle in the seemingly never-ending series of upstream patches, and it is why the feature has to be opt-in.

In the case of a shared context, a reset notification is sent to all contexts, each of which should recreate its resources independently.

You can read more in the underlying GL spec.