Plasma 6.0 Alpha – What this means

What the Alpha means

The alpha release primarily focuses on preparing our software for a future release. It involves handling unreleased dependencies, version numbers, co-installation conflicts, and all the relevant bookkeeping work.

This release has been somewhat manic, with issues surfacing up to the last minute. However, that's precisely what this early release is for: resolving these issues now and gathering feedback on packaging to ensure a smoother transition to the beta phase.

Feature Freezes

The complete feature freeze for Plasma is scheduled for the day of the first beta, which is on November 29th. After that, bug fixing will be the sole focus for a period of three months leading up to the final release.

A soft freeze is set for the week before, on the 22nd, to accommodate any significant changes and ensure a seamless beta release.
This is mostly a case of doing a final round of landing straggling merge requests rather than developers starting anything new.

Should I run it?

Plasma 6 is in a pretty good state; I've been using it as a daily driver without issues for months.

The alpha release does have known issues: some are already fixed but unreleased, pending Qt 6.6.1; some have been fixed on our side since the alpha was tagged; and some we still need to follow up on, particularly in the more esoteric areas of Plasma.
If you're the sort of user who wants to help out Plasma, and you're happy to log into another desktop session if things are temporarily down, this release is for you.

As a user, I would recommend finding a distribution that provides 'git master' builds rather than any snapshot, as these pick up fixes more dynamically. A list can be found at

Pre-upgrade steps

Please take a backup of ~/.config/plasma* before upgrading, just in case you need to file bug reports about config migration from Plasma 5.
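A minimal sketch of such a backup (the destination directory is just an example, not a prescribed path):

```shell
# Copy every plasma* config file/dir into a dated backup directory.
backup_dir="$HOME/plasma5-config-backup-$(date +%Y%m%d)"
mkdir -p "$backup_dir"
# The glob may match nothing on a fresh system, hence the fallback.
cp -a ~/.config/plasma* "$backup_dir"/ 2>/dev/null || true
echo "Backed up Plasma config to $backup_dir"
```

If a migration bug does bite, you can restore the originals from there before filing a report.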

Can I get involved?

Absolutely! It's an exciting time for Plasma and KDE in general. There are numerous tasks you can dive into. Check out our onboarding wiki here:

QtWayland 6.6 Brings Robustness Through Compositor Handoffs

Every release has a killer feature. Qt 6.6 features the opposite - staying alive. This blog post describes work to make Qt clients more robust and seamlessly migrate between compositors, providing resistance against compositor crashes and more.


Right now if you restart pulseaudio your sound might cut out, restart NetworkManager and you lose your wifi, restart an X11 window manager and your decorations disappear.

But within a second it's all back to normal exactly where you left off with everything recovering fine.

This isn't true for display servers. If X11 restarts you're back at the login prompt. All drafts lost, games unsaved, work wasted.


For X11 this was unfixable; clients relied on memory stored by the X server, they made synchronous calls that were expected to return values, and multiple clients talked to multiple other clients.

This was a real problem in my early days of Linux: X11 would lock up frequently enough that most distributions had a shortcut key to restart the server and send you back to the login prompt.

It's less of an issue now as X11 has been in a lengthy period of feature freeze.


Wayland makes it possible to fix all this. Memory allocations are client side, all operations are async, and all protocols are designed so that the compositor always has complete control.

Yet the current user-facing state is considerably worse:

  • Compositors and display servers are now the same process, doubling the space for errors

  • Compositors are typically extensible with 3rd party plugins and scripts

  • The wayland security model means the compositor absorbs even more functions from global shortcuts to screencasting and input-method handling

  • The wayland ecosystem is not in a period of feature freeze, with wayland protocols constantly evolving to cover missing features and new ideas.

    Even if there were a perfect compositor:

  • 40% of kwin's crash bug reports have either upstream or downstream causes

  • The current compositor developer experience is limited, with developers having to log out, log back in, and restart their apps and development setup just to test their latest changes.


The solution for this? Instead of exiting when the compositor closes, simply...don't!

If we could connect to a new compositor, we would just need to send the right information to bring it in sync and notify the application code of any changes.

For Qt applications all this information is handled in the backend, in the Wayland Qt Platform Abstraction (QPA).

Qt already has to handle screens and input devices being removed, clipboards being revoked and drag and drops cancelled. Supporting a whole reset isn't introducing any new work, we just have to trigger all of these actions at once, then reconnect to the newly restored compositor and restore our contents.

Applications already have to support all of these events too, as well as handle callbacks to redraw buffers. There are no changes needed at the application-code level; it's all done as transparently as possible.

Handling OpenGL is a challenge; right now we don't have a way to keep that alive. Fortunately we previously put lots of effort into supporting GPU resets throughout the Qt stack. The QtWayland backend fakes to the applications that a GPU reset has occurred through the Qt abstractions.

For the majority of applications including all QtQuick users this happens automatically and flawlessly, building on the work that we put in previously.

Whilst the overall concepts might sound invasive, the final merge-request to support all of this for all Qt applications was fewer lines than supporting middle-click paste.

Almost no work needs doing on the compositor side. For a compositor there's no difference between a new client and a client that was running previously reconnecting. The only big change we made within Kwin is adding a helper process so the whole handoff can be seamless and race-free.


Path Forward

This post is about Qt, but the world is bigger than that. Not only does this technique work here, but we have pending patches for GTK, SDL and even XWayland, with key parts of the SDL support already merged but disabled.

The challenge for the other toolkits is that we can't use the same OpenGL trick as Qt. They lack GPU reset handling either at a toolkit level or within the applications.

Supporting OpenGL requires new infrastructure. We've tried from the start to get some infrastructure in place to allow this. It was clear that proposing library changes for client reconnection support would be an uphill battle whilst the approach was unproven in the wild.

After Qt6 is rolled out to a wider audience and shown to work reliably, we'll refocus efforts on pushing upstream changes to the other platforms.

Potential Perks

This isn't just about crashes. If support becomes mainstream there will be additional opportunities:

  • We can easily upgrade to new features without having to log out and back in. This is one of the reasons why kwin development has been happening at such a rapid pace in recent months. We're not having to test in fake nested sessions or waste time logging in and setting up between changes.

  • We can support multihead, running different compositors per group of outputs, and move seamlessly between them.

  • It's feasible to switch between compositors at runtime. With the application handling the reconnect logic, they can easily handle the case where compositor feature sets vary.

  • Checkpoint restore in userspace, being able to suspend your application to disk and then pick up where you left off like nothing happened. It could never work for X applications, but with wayland reconnect support we can make it work.


More information about the long-term goal can be found on the kwin wiki, or reach out to me directly if you want to add support.

Discuss this at

New ideas using Wayland Input Methods


What are Input Methods?

Input Methods are a way of allowing third-party programs to manipulate text in text input boxes in applications.

The two traditional uses of this are on-screen keyboards like one has on a mobile phone, and Chinese, Japanese and Korean (CJK) input where a keyboard doesn't map directly to the text we want to see on screen.

It's a lot more than merely injecting keys; the input method needs to have an awareness of the text cursor and the ability to manipulate whole words and sentences at a time. This is extremely important for the CJK way of typing, where multiple keyboard keys are used to compose a single character. It's also important for virtual keyboards that have advanced features like auto-completion and spell-checking.

X vs Wayland

On X11, the standard protocol (XIM) is hardly used; instead, we ended up with every input method creating a plugin for every toolkit. This sufficed, but it was not very scalable, didn't provide the best desktop interaction, and was messy to deploy.

The story with wayland is also far from perfect. With the compositor acting as a broker, we have a protocol from the plugins themselves (InputMethod) and protocols to clients (TextInput). Unfortunately there are at least 6 active versions of these protocols, which needs some work.

A recent meeting with other wayland devs has hopefully helped agree on a path forward for the TextInput side, with me taking on a lot of active tasks moving this forward. But we still need to sort out the InputMethod side.

I used Akademy to speak to people using CJK input and got very valuable feedback about the Korean language and some changes needed to make Fcitx work properly there.

If you have experience with Chinese or Japanese input on wayland, please do reach out.

A Playground of Input Method Ideas

I don't use CJK keyboard entry nor a virtual keyboard on a daily basis, but there are things I do use that could benefit from the InputMethod tech. Now that we have a standard, we can look at using this stack for more creative ideas. Doing this now can help work out what the requirements are for the protocols as we stretch them in new directions.

This post describes some ideas of how else we can use Input Methods to improve the Linux desktop experience.

Convenient Clipboard Connections

With a shortcut we can enter clipboard entries, previewing them in place. We're able to use the preedit text to display metadata on available entries without it being committed, providing a streamlined interface; or we could bring up a menu, or both.

Compared to the existing klipper integration we can open at the current caret position, and provide something a bit more streamlined. The preview also helps solve the frustrating problem of pasting only to find you've now got too much or too little whitespace where you hit paste.

Easy Emoji Entry

My attempt at understanding young people. Typing ':' and then a string launches an inbuilt emoji browser based on the typed string.

These days a lot of clients have in-built support, including all GTK4 text areas, so it might not be needed. Maybe the ship for this has sailed, or there could still be value in centralising it and having a common pattern.

Direct Diacritic Display

Building on an existing merge-request by Janet Blackquill, pressing and holding a key will bring up a prompt with relevant characters based on the held key.

The original merge request injected itself into the application, repurposing a hook meant for theming; this broke too many clients to be released as part of Plasma and was unfortunately reverted.

Taking that idea but using input methods should avoid those problems, whilst also solving the issue of key repeats sometimes being needed.

Timely Translation Tasks

This was solved by having a second text field open when activated, where the user can type in their native language, so the translations can be shown in the main text area. The current code utilises the command line tool "trans", which in turn makes web requests to proprietary web services, but we could look at other options if we turned this into a product.

Simply Speak

The one I'm most excited about. Working local dictation software that can integrate into every textfield out of the box.

I got best results using whisper-fast with a voice-activity filter on top. This was wrapped into a daemon with a DBus interface we can then use to start and stop on demand. It needs some work to be a shippable product.

Current State

All of the above are fully working examples, but everything was pieced together in a few evenings at Akademy and some time at the airport.

It's not something I would ship to users; the point right now is to be a playground, uncover issues throughout the stack, and try a few different things. That's why we use different techniques for bringing up the input methods and prompts within it rather than trying to be coherent.

That said, I have made an effort to separate the high-level application code from the wayland boilerplate so we can quickly iterate to InputMethodV3 to see what's needed, and turn this into a product as we start to come up with a long-term direction.

Running it for yourself

The repo exists at , and the readme has instructions on how to get set up.

What does it work with?

A form of TextInput is supported in almost all toolkits, including Electron, GTK, Qt, SDL and more. This means it's fairly universal across wayland clients.

On the compositor level, InputMethodV1, the only specification in upstream protocols, is supported by only Kwin and Weston, but as noted in the intro this is subject to change in the future.


The playground has exposed several pre-existing deficiencies, especially regarding placeholder text styling, input panel positioning and key repeats. We can take those forward into fixes that benefit the more important CJK input methods.

As well as being a playground, I think there are some ideas here worth exploring further to find a path onto the desktop; it's yet another reason to be excited about the path to Wayland.

Have more ideas? Let us know in the discussion at

Fixing Wayland: Xwayland screen casting

This blog introduces a new tool that makes it easy to stream wayland windows and screens to existing applications running under Xwayland that don't have native pipewire support, in an easy-to-use manner that retains full user control throughout.


On my Plasma wayland session right now if I use the screen share function in Discord, I'm presented with the following.

It doesn't work 🙁

I have no windows listed, even though I clearly have windows open. The full screen shares are blank.
The same will be true in MS Teams, Slack, Zoom as well as any other Xwayland apps that involve screencasts.

Linux enthusiasts - and by reading developer blogs you're definitely in this camp - will understand why this is, and probably know some technique to avoid it involving changing my setup or workflow in some way to work around things on a per-app basis.

For our wider userbase this isn't enough. Wayland is a technical detail, and any switch has to be as transparent as possible for as many apps as possible in all cases - including cases we haven't thought of.

Introducing XwaylandVideoBridge

With our new tool, written by Aleix Pol and myself, the workflow is as follows: I click on the share button. Immediately I'm presented with an option to choose which windows or screens I wish to make available to X.

This is now selectable in my existing application. The stream continues until I stop sharing within the application.

Left: a prompt to choose windows to stream.
Right: How it appears after selection


The bridge works via the same mechanisms as any "native wayland" video streaming tool: the XDG Desktop Portals. Data is fetched through PipeWire as requested through the portal, with explicit user consent.
Whilst the bridge does mean that any X application could eavesdrop on the shared window, the user still remains completely at the helm of which windows are shared to X applications and, most importantly, when.

We could go the route of trying to be completely seamless to the X client with N fake windows all forwarding content on demand, but I like the demonstration that it's possible to not break existing user applications without having to compromise on our lofty wayland goals.


Technically there's an overhead; pragmatically it uses no CPU and any latency is negligible compared to the cost of streaming.


Grab a flatpak from our gitlab builder, or build from source.

Note it requires a non-released libkpipewire, something the flatpak resolves.

Whilst only tested on a Plasma wayland session, this should be usable by any Wayland desktop with a working xdg-desktop-portal, screencasting and standard system tray.
(Edit March 24th: User testing has shown some mixed results on Gnome with colours, and it is completely not working on Sway)


Ensure our proxy is running in the background:

flatpak run org.kde.xwaylandvideobridge

optionally setting it to autostart. After that, everything should kick in automatically the next time an Xwayland application starts streaming.

How it works under the hood

The inspiration for this came from the debug tool to show pipewire streams in a window whilst we were working on Plasma remote desktop support. If we force that debug tool to run as an Xwayland client, it becomes visible to other Xwayland chat / streaming programs. We had 90% of the code already.

The only remaining step was some sneaky tricks to hide this X11 window from the user's view making it unfocussable, invisible and out of view. We then added detection for when we're being streamed by using the XRecord extension to monitor all clients redirecting the window we own.

It's an excellent example of X11 allowing you to do really, really stupid things, for novel and creative purposes.

Future Plans

This is only an initial alpha release. How we take it forward is not completely decided; it might remain a standalone tool moving to Flathub or distros, or we might propose including it in Plasma 6 by default. There's also the possibility that the Linux desktop reaches a point where this is redundant.

There's definitely some minor tweaks still to do on the exact workflow.

Please do give feedback and let us know in the thread!

Plasma 5.22 Beta testing day

Plasma 5.22 is now in beta. This gives us, KDE developers, one month of intense testing, bugfixing and polishing. During this time we need as many hands on deck to help find regressions, triage incoming reports and generally be on top of things as much as possible.

With that in mind, you are officially invited to our "Beta Review Day". Join us online and do QA with us together as a group.


Any Plasma user able to install the latest beta or run a live ISO with our beta on it and who wants to help.


Saturday 22nd May. A full timetable with areas of focus is available on the wiki:


Join us over videoconferencing at
There will be plenty of room available, and you can join with a camera and a microphone or just use the classic text chat.

What will we do?

• Introduce you to Bugzilla so you can support us filing or triaging bugs
• Assign short lists to experienced participants so you can validate and de-duplicate bugs
• Go through a defined list of all the new areas of Plasma to check for regressions
• Devs will be online so you can give us debug info for issues you have, and identify and fix things in real time

What should I prepare?

Set up a machine, real or virtual, with the latest Plasma beta desktop. Go here: and choose, download and install a distro that ships either git master or Plasma 5.21.90.

See you all soon!

Plasma and the systemd startup


Landing in master, Plasma has an optional new startup method to launch and manage all our KDE/Plasma services via a systemd user instance rather than the current boot scripts. This will be available in Plasma 5.21.

It is currently opt-in, off by default. I hope to make it the default where available after more testing and feedback, but it is important to stress that the current boot-up method will exist and be supported into the future. A lot of work was put into splitting and tidying, so the actual amount of duplication in the end result is quite small and manageable.

Our logos are weirdly similar...conspiracy?


Overlapping requirements

We have to start a lot of things in a given order. Systemd is naturally designed for handling this.

Whilst we already have a somewhat working system, it has issues. One classic problem is services that talk to other services needing to be not just spawned but fully initialised before the other can send a message. DBus activation solves a lot, but not quite enough.

For example, we have the real-world case of scripts run long before plasmashell trying to send notifications; we want the DBus daemon to know our notification server is activatable so that it will pause the dispatch of the message, but making it DBus-activatable can't guarantee that the dependencies are started in the correct order. This is currently solved with a genius horrific hack.
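For reference, DBus activation ties a bus name to a launch command with a small service file; the `SystemdService=` key is the stock mechanism for handing activation over to a systemd unit. A sketch (the file path, binary and unit name here are illustrative, not Plasma's actual files):

```ini
# Hypothetical ~/.local/share/dbus-1/services/org.freedesktop.Notifications.service
[D-BUS Service]
Name=org.freedesktop.Notifications
Exec=/usr/bin/plasmashell
# With a systemd boot, the bus daemon can delegate activation to the unit,
# letting systemd handle ordering and dependencies:
SystemdService=plasma-plasmashell.service
```

This is what lets the bus daemon queue the notification message until the server is actually up.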

But starting things up is only half the battle. We currently have a big issue with shutting things down. Whose responsibility is it to stop running services? A lot of things only exit because their wayland connection is swept away. When dealing with services that potentially restart during the session lifespan, this solution is far from trivial and the current situation is fundamentally broken.

Customisation / Sysadmin Familiarity

Most users and especially sysadmins already have to learn how to use their init system. Whether it's simply enabling a service on boot, or something much more complex users are exposed to it already. If not, there is good existing documentation and countless Stackoverflow posts to answer any question.

We had a meeting at Akademy two years ago about the use of systemd, and it was the sysadmins from Limux who clearly knew far more than any programmer about how it should all work. There is a lot more merit in using existing tools than just sharing code.

Another big motivating factor was the ability for customisation. The root of Plasma's startup is very hardcoded. What if you want to run krunner with a different environment variable set, have a script run every time plasmashell restarts, or show a UI after kwin is loaded but before plasmashell to perform some user setup? You can edit the code, but that's not easy and you're very much on your own.

Systemd provides that level of customisation, at both a distro and a user level, out of the box. From our POV, for free.
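As a concrete sketch of that customisation, a user-level drop-in could add an environment variable to krunner without touching any code (the unit name here is an assumption based on Plasma's plasma-*.service naming pattern):

```ini
# Hypothetical ~/.config/systemd/user/plasma-krunner.service.d/override.conf
[Service]
# Extra environment just for this one service:
Environment=QT_LOGGING_RULES=*.debug=true
```

A `systemctl --user daemon-reload` followed by restarting the service would pick the override up; the same drop-in mechanism covers distro-level tweaks under /etc.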

CGroups and resource limits

I've talked about the use of cgroups for applications for better control of resources, especially as more things become multi-process.

CGroups and slices provide several benefits over what we can do currently. Because resources are always relative to within the parent slice, we are able to boost resources as well as limit things. This will allow us to bump kwin and such ever so slightly without needing elevated cap privileges.

This also works the other way; cgroups have some extra scheduler features not otherwise available. We can not just set a weight on a process, but also absolute limits. This could be useful for file indexers and the like to minimise battery drain or capture runaway processes. CGroups are where all the new kernel features are.

Memory management is another factor: when we run out, the kernel comes in and kills some processes.
We can tag some services as being safer to OOM-kill than others, with a lot more granularity than a single oom_score adjustment. We can provide expected memory usages for a service, we can put default levels on entire slices at once, and it's all easily customisable by the user if their opinions don't match our upstream defaults.
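The directives below are standard systemd resource-control settings; the idea of applying them to an indexer-style service is my illustration, not a shipped Plasma default:

```ini
# Hypothetical drop-in for a background indexer service.
[Service]
CPUWeight=20          # small relative share within the parent slice
MemoryHigh=512M       # throttle reclaim above this
MemoryMax=1G          # hard cap; the service is killed beyond this
OOMScoreAdjust=500    # prefer killing this over interactive services
```

Because weights are relative within a slice, the same mechanism can also boost kwin slightly, as described above, rather than only limit things.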


Logging

The current state of logging is a mess. One of two things happens: either stderr from a service is simply lost, or logs go into one giant "xsession-errors" file. This file sits on a real file system, slowly expanding and expanding. It doesn't rotate; if it gets too big, your home folder becomes full and things explode. Each time you log in we override that log, so we lose any history.

Systemd provides queryable logging, with each unit, and each invocation of each unit, kept separate. It's already made my life a lot easier for the last few bugs I've been working on. When people on bugzilla have this too, it'll make everything easier.

The road to get here

Previous implementations

A plasma systemd boot has been tried before by Elias Probst. It worked well as a demo, and was used as inspiration for some parts. The key difference now is instead of trying to design around plasma, we're changing plasma to fit what we need and really put the focus on every last detail.

Refactor, Split, Refactor, Split, ...

Unintuitively, the first step of the port to systemd was a complete rewrite of the legacy path.

Plasma used to be managed by a tree of processes that would spawn, set up some environment variables, spawn the next part of the boot and continue running. We eventually get to ksmserver, which was a monolith that did many many things. It started several hardcoded processes like kwin, then all the autostart .desktop files, and then session restore. There were many reasons to break this up that I outlined when I started in 2018.

Since then we've been chopping out a tiny part every release, so that we could always be on top of any regressions that happened.

Once we'd refactored and rewritten some parts, we had a very clear and readable understanding of what actually happens in a Plasma boot and what really depends on what.

The biggest problem we have is that many things rely on environment variables or even xrdb databases being correct at the time of launching; several early parts of the boot (kcminit, kded phase0) adjust these, and later parts may rely on them for correct theming.

Another massive rewrite that landed (by Kai Uwe Broulik) was the detaching of kwin from ksmserver. kwin needs to know which session we're restoring so that restored windows can be placed in the right place; this previously came from an environment variable, which meant we needed ksmserver up and running to spawn kwin. We rewrote how that all works and gets communicated so the order can be agnostic, but still without any features getting lost.

Final patchset

Eventually we got to a point where ksmserver only deals with X11 session management. We have another quite tiny binary (plasma_session) that only deals with starting services in a specific order. And everything else is a completely separate, standalone, independent component. This has been released for a while. Even without the systemd boot, everything is in a much, much cleaner and better state.

The final patch to use systemd is then really quite boring and small. We just don't call the plasma_session binary, and instead start a systemd target.

All our core services ship their own .service files, which plasma-workspace requires for a full Plasma session.

This picture shows random circles and lines. I felt it helped break up the wall of text.

Autostart files

One of the key aspects of startup is handling autostart .desktop files in /etc/xdg/autostart or ~/.config/autostart. We need this to work as before.

Benjamin Berg (of Red Hat and Gnome) came up with a very elegant solution using a systemd generator. It goes through all the .desktop files parsing them and generates an appropriate systemd service automagically.

For application developers everything keeps working the same, but to an end user we can interact with them like native services. We get the best of both worlds.

Despite being a shared implementation, even KDE-specific things like X-KDE-AutostartCondition just work. Implementation-wise, the generator converts this into an ExecCondition= line which then calls into a plasma-shipped binary to check if it's allowed.
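The generator's output might look roughly like the unit below; the ExecCondition mechanism is as described above, but the unit, binary and .desktop names are illustrative assumptions, not the generator's literal output:

```ini
# Hypothetical generated unit for ~/.config/autostart/someapp.desktop
[Unit]
Description=Autostart application: someapp

[Service]
Type=exec
# Runs first; a non-zero exit skips the start without marking it failed,
# which is how X-KDE-AutostartCondition can be honoured:
ExecCondition=/usr/lib/some-autostart-condition-check someapp.desktop
ExecStart=/usr/bin/someapp
```

The application developer never sees any of this; the .desktop file remains the only artefact they ship.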

Current state

Is it faster?

A lot of the prep work over the past few years to get to this state has made absolutely massive differences. The splitting found bugs, dead code and potential optimisations that add up to magnitudes of difference.

As for switching over to systemd itself, it should be roughly the same. Ultimately we're triggering the exact same set of things in the exact same order with the exact same blocks, if we've done our job properly. Sorry to disappoint!

Is it finished?

The fundamentals are definitely at a point that I think is stable and working, but we haven't enabled all of the potential extra features we have available. I would welcome experienced people to give feedback and help out.


You must have the latest master of Plasma; it is not in the 5.20 beta.

Enable with:

kwriteconfig5 --file startkderc --group General --key systemdBoot true

As mentioned above, there are checks that you have systemd 246 or newer for the generator of existing autostart services to work.

To confirm the boot worked correctly you can run

systemctl --user status plasma-plasmashell.service

It should show as active.

This safety check of 246 can be skipped. You will lose auto-activation of apps using the classic .desktop file approach, but everything else still works. I won't mention how publicly to avoid false bug reports, but as a dev you can probably find out.

Dev setups - a caveat

If you build your own KDE and install into a non-standard prefix, instead of getting Plasma from your distribution, there is one new hurdle. The systemd user session starts earlier than most people set their custom prefixes, so naturally it can't find things in your new paths.

There are multiple solutions; the simplest is to re-run the ./install-sessions script in plasma-workspace, which contains a workaround for systemd to find services.

Wrap Up

A lot of the work done to get to this stage has been extremely beneficial to all users regardless of whether they end up actually using this. We fixed and cleaned so much along the way.

I strongly believe the benefits it offers are very real, and look forward to hearing feedback from users using it.
If you do have any issues please do reach out to me in the usual channels.

Plasma Beta Review Day

Plasma 5.20 is now in beta, which gives us one month of intense testing, bugfixing and polishing.

During this time we need as many hands on deck as possible to help with finding regressions, triaging incoming reports and generally being on top of as much as possible.

In order to make this process more accessible, more systematic and hopefully more fun we are going to run an official "Plasma Beta Review Day"


Any user of plasma able to install the latest beta or run a live ISO with our beta on it who wants to help.


Thursday the 24th September; midday to midnight CEST, with people welcome to turn up and drop out whenever.


Join us in the webconferencing channel: The webserver we used for Akademy. There will be a room available.
You can join with a camera and a microphone or just in the classic text chat.

What will this consist of?

  • Introductions to bugzilla for people who want support filing or triaging their first bugs
  • Being assigned short buglists to validate, de-duplicate, for those more experienced
  • Going through a defined list of all the new areas of Plasma to check for regressions
  • Devs being online so we can get debug info for issues you have so we can identify and fix things in real time

What should I prepare?

Ideally get yourself set up with a beta of the latest Plasma. Follow the links here: for an image running either master or Plasma 5.19.90

I hope to see you all soon!

Running PlasmaShell with Vulkan

About QtQuick and RHI

QtQuick, in one word, is amazing.

QtQuick, in slightly more words, is a scene graph implementation. At a developer level we create abstract "Items", which might be some text, a rectangle or a picture. This in turn gets transformed into a tree of nodes with geometry, "materials" and transforms. In turn this gets translated into a big long stream of OpenGL instructions which we send to the graphics card.

Qt 6 will see this officially change to sit on top of the "Render Hardware Interface" (RHI) stack which, instead of always producing OpenGL, supports Vulkan, Metal and Direct3D natively. The really clever part is that custom shaders (low-level fast drawing) are also abstracted: we write some GLSL once and generate the relevant shader for each API without having to duplicate the work.

This blog series gives a lot more detail:

Plasma code primarily interacts with Items and occasionally nodes, slightly above the level being abstracted.

Current State

Qt 5.15 ships with a tech preview of RHI and the Vulkan interface. I spent some time setting it up and exploring what we need to do to get our side fully working. With some packages installed, a few Plasma code fixes and some environment variables set, I have a fully working Vulkan plasmashell.
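For reference, the environment variables in question are Qt's scene-graph RHI switches; a minimal sketch of the setup (the plasmashell restart is left commented out, and the packages and code fixes mentioned above are still needed):

```shell
# Opt the QtQuick scene graph into the RHI path (Qt 5.15 tech preview)
export QSG_RHI=1               # enable the RHI rendering path
export QSG_RHI_BACKEND=vulkan  # select the Vulkan backend
export QSG_INFO=1              # log which backend actually got picked
# then restart the shell (not executed in this sketch):
#   plasmashell --replace &
```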


Unsurprisingly it looks the same, so a screenshot is very unexciting. I enabled the Mesa overlay as some sort of proof.
It shows 2 fps because plasmashell only updates when something has changed; in this case the text cursor blinking every 500 ms.

Despite it being a preview it is in a damn good state! Things are usable, and really quite snappy, especially notification popups.

What needs work

Some things need work on our side, in particular:

  • All of our custom shaders need porting to the updated shader language.
  • Taskbar thumbnails use low level GL code that needs one extra layer of implementation.
  • Use of QtQuickWidget in systemsettings.

It means some elements are invisible or don't render with the full graphical effects, or in the last case, crash.

But in the whole scheme of things, everything is in a very encouraging state.

What about KWin?

Whilst QtQuick is the cornerstone of plasmashell, systemsettings and so many applications, for historical reasons KWin made use of OpenGL before it was a mainstream part of Qt. Therefore this setup is mostly unrelated to KWin. Fortunately there's no reason these two have to be in sync.

Wrap up

This isn't usable for end users, and given it's only a tech preview upstream, this is not something we can ever expect to officially support within Plasma 5.

But we can start doing the prep work, and the current state is so promising that I think we can deliver a native Vulkan experience to users on day one of Plasma 6.

What we can learn from Plasma telemetry

Since Plasma 5.18, about 7 months ago, Plasma has shipped with a telemetry system. It is opt-in (i.e. off by default) and requires users to choose whether (and how much) data to send us.
No private or identifying information is sent, and everything is stored in line with our privacy policy.

Currently we have hit just shy of 100,000 updates!

We started off requesting very little information: versions, GPU info and some basic screen information. However, the library powering this is extremely powerful and capable of much more, which we can build on in the future to identify weak areas where we need to invest time and effort, as well as features or platforms that are under-utilised and could be dropped.

I have recently been trying to improve how we extract and visualise the data collected and draw some conclusions. I want to present some of the aggregated metrics.

What we can learn from Plasma statistics

Used Plasma Versions

To explain our numerical versioning additions:
.80 = git master before the next version, up until the beta
.90 = after the stable branch forks for release, up until the next .0 release


The biggest surprise is that the LTS is currently only used by around 5% of people, with 93% of reporting users on the latest stable release (5.19).

At the current rate it does make me question whether the LTS is worth it. Maybe the LTS will only gain traction when the next LTS distribution release ships, which could come later?

Obviously any decision will only be made as the result of a discussion with all stakeholders, but this is definitely raising questions.


Right now about 1.5% of people run master; I had expected those users to be the most keen to help with telemetry and to skew this figure further.
The bigger surprise is that the number of people running master fluctuates. Weekly users can go between 30 and 60. I had expected this to be constant.

It is comforting to see that betas do get more users, going up to 2.5% of our reporting userbase.

Interesting observations

There is one person who is still actively using the Plasma 5.18 beta. Not 5.18, but the beta for 5.18, which was 8 months ago.
I have so many questions.


800x600 resolution setups are still a thing we need to support, even if they are just VMs they're still used. This is very relevant as we often get commits blindly setting a minimum size hint of a window to be bigger because it "looks nicer". I now know I do still need to ensure in review that we don't break these setups.

We also see that ultrawide monitors are surprisingly unpopular despite clearly being the best monitor setup possible.

Graphic drivers

The graph is pretty self-explanatory: Intel has the most users, but the Nvidia proprietary driver comes in at roughly a quarter of the total.
That makes it an important target to support as best we can, even if it comes with its share of problems.

My personal setup doesn't match your conclusions!

The reason we want to use telemetry is to drive decisions with real data.

As a user you are more than welcome to choose whether to opt into the statistics; we understand privacy is important, which is why everything is opt-in only.
However, real decisions will ultimately be based on the data we have available, so if you want your use cases to be counted, please do consider submitting telemetry to us.
To enable telemetry, go to "System Settings" and select the "User Feedback" tab.

Bringing modern process management to the desktop

A desktop environment's sole role is to connect users to their applications. This includes everything from launching apps to actually displaying them, but also managing them and making sure they run fairly. Everyone is familiar with the concept of a "task manager" (like KSysGuard), but over time these tools haven't kept up with the way applications are developed or with the latest developments in Linux.

The problem

Managing running processes

There used to be a time when one PID == one "application". The kwrite process represents KWrite, the firefox process represents Firefox; easy. But this has changed. To pick an extreme example:
Discord in a flatpak is 13 processes!

It basically renders our task manager's process view unusable. All the names are random gibberish, and trying to kill the application or set nice levels becomes a guessing game. Parent trees can help, but they only get you so far.

It's unusable for me, it's probably unusable for any user and gets in the way of feeling in control of your computer.

We need some metadata.

Fair resource distribution

As mentioned above discord in a flatpak is 13 processes. Krita is one process.

  • One will be straining the CPU because it is a highly sophisticated application doing highly complicated graphic operations
  • One will be straining the CPU because it's written in electron

To the kernel scheduler, all of these are just 14 opaque processes. It has no knowledge that they are grouped into two different things, so it cannot come up with something that's fair.

We need some metadata.

(Caveat: obviously most processes are idling, and I've ignored threads for the purposes of making a point; don't write in about it.)

It's hard to map things

Currently the only metadata of the "application" is on a window. To show a user-friendly name and icon in KSysGuard (or any other system monitor) we have to fetch a list of all processes, fetch a list of all windows, and perform a mashup, coming up with arbitrary heuristics for handling parent PIDs, which is unstable and messy.

To give some different real world examples:

  • In Plasma's task manager we show an audio indicator next to the relevant window; we do this by matching the PIDs of what's playing audio to the PID of a window. Easy for the simple case; however, as soon as we go multi-process we have to track the parent PID, and each "fix" just alternates between one bug and another.
  • With PID namespaces apps can't correctly report client PIDs anymore.
  • We lose information on which "app" we've spawned. We have bug reports where people have two different task manager entries for "Firefox" and "Firefox (nightly)"; however, once the process is spawned that information is lost - the application reports itself under one consistent name and our taskbar gets confused.

We need some metadata.


This is a solved problem!

A modern sysadmin doesn't deal in processes but in cgroups. The cgroup manager (typically systemd) spawns each service as its own cgroup. It uses cgroups to know what's running, and the kernel can see which things belong together.

On the desktop flatpaks will spawn themselves in cgroups so that they can use the relevant namespace features.

You're probably already using cgroups. As part of a cross-desktop effort we want to bring cgroups to the entire desktop.


Before and after of our system monitor

Ultimately the same data, but way easier to read.


Another key part of cgroup usage is the concept of slices. Cgroups form a hierarchical structure, with slices as logical places to split resource usage. We don't adjust resources globally; we adjust resources within our slice, which then provides information to the scheduler.

Conceptually you can imagine that we just adjust resources within our level of a tree. Then the kernel magically takes care of the rest.

More information can be found on slices in this excellent series World domination with cgroups.

Default slices

This means we can set up some predefined slices. Within the relevant user slice, this will consist of:

  • applications
  • the system (kwin/mutter, plasmashell)
  • background services (baloo, tracker)

Each of these slices can be given some default prioritisations and OOM settings out of the box.
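Sketching the layout described above (the slice names follow the cross-desktop convention of session.slice, app.slice and background.slice, but treat the exact names here as illustrative):

```shell
# Illustrative per-user slice tree; echoed as a literal rather than
# queried from systemd so the sketch stands on its own.
tree='user-1000.slice
├─session.slice      (kwin/mutter, plasmashell)
├─app.slice          (regular applications)
└─background.slice   (baloo, tracker)'
echo "$tree"
```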

Dynamic resource shifting

Now that we are using slices, and only adjusting our relative weight within the slice, we can shift resource priority to the application owning the focused window.

This only has an effect if your system is running at full steam from multiple sources at once, but it can provide a slicker response with no drawback.
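As a hypothetical sketch of that shift (the unit name is made up for illustration, and the call is guarded so it is a no-op on systems without a user systemd instance):

```shell
# When a window gains focus, raise its unit's CPU weight within app.slice.
# --runtime keeps the change temporary; it resets when the unit restarts.
UNIT="app-org.kde.kdevelop-4242.scope"   # assumed unit name, for illustration
systemctl --user set-property --runtime "$UNIT" CPUWeight=200 2>/dev/null || true
echo "requested CPUWeight=200 for $UNIT"
```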

Why slice, doesn't nice suffice?

Nice is a single value, global across the entire system. Because of this, user processes can only be lowered, never raised, to avoid messing with the system. With slices we're only adjusting relative weight compared to services within our slice, so it's safe to give the user full control within their slice. Any adjustments to an application won't impact system services or other users.
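To make "relative weight" concrete, a toy calculation (the weights are made-up numbers): under full contention, each unit's CPU share within a slice is its weight divided by the sum of its siblings' weights.

```shell
# An app boosted to weight 300 next to a sibling at the default weight of
# 100 receives 300/400 of the CPU when both are saturating it.
w_app=300
w_other=100
total=$((w_app + w_other))
share=$((100 * w_app / total))
echo "boosted app share: ${share}%"   # → boosted app share: 75%
```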

It also doesn't conflict with nice values set by the application explicitly. If we set kdevelop to have greater CPU weight, clang won't suddenly take over the whole computer when compiling.

Fixing things is just the tip of the iceberg

CGroup extra features

Cgroups come with a lot of new features that aren't available at the per-process level.

We can:

  • Set limits so that a group can't use more than N% of the CPU
  • Gracefully close processes on logout
  • Disable networking
  • Set memory limits
  • Prevent forkbombs
  • Provide hints to the OOM killer, not just with a weight but with expected ranges that should be considered normal
  • Freeze groups of processes (which will be useful for Plasma Mobile)

All of this is easy for a user / system administrator to adjust. Using drop-ins, one can just add a .service file [example file link] to ~/.config/systemd/user.control/app-firefox@.service and manipulate any of these.
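A sketch of such a drop-in (written to a temporary directory here so the example stands alone; in real use the file would go under ~/.config/systemd/user.control/ as described above, and the limits are made-up values):

```shell
# Hypothetical resource limits for Firefox app units.
dir=$(mktemp -d)
cat > "$dir/app-firefox@.service" <<'EOF'
[Service]
# hard memory ceiling for the whole group
MemoryMax=2G
# never more than 80% of one CPU
CPUQuota=80%
EOF
cat "$dir/app-firefox@.service"
```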

(Caveat: some of these features only work for applications created as new transient services, not the lighter version using scopes that is currently merged in KDE/GNOME.)

Steps taken so far

Plasma 5.19 and recent GNOME releases now spawn applications into their respective cgroups, but we're not yet surfacing the results we can get from this.

For KDE devs, providing the metadata is easy.

If spawning a new application from an existing application, be sure to use either ApplicationLauncherJob or CommandLauncherJob and set the respective service. Everything else is then handled automagically. You should be using these classes anyway for spawning new services.

For users, you can spawn an application with kstart5 --application foo.desktop.

That change to the launching is relatively tiny, but getting to this point in Plasma wasn't easy - there were a lot of edge cases that messed up the grouping.

  • kinit, our zygote process, really meddled with keeping things grouped correctly
  • drkonqi, our crash handler and application restarter
  • dbus activation has no knowledge of the associated .desktop file if an application is DBus activated (such as spectacle our screenshot tool)
  • and many, many more papercuts throughout different launch paths

Also to fully capitalise on slices we need to move all our background processes into managed services and slices. This is worthy of another (equally lengthy) blog post.

How can you help?

It's been a battle to find these edge cases.
Whilst running your system, please run systemd-cgls and point out any applications (not background services yet) that are not in their appropriate cgroup.

What if I don't have an appropriate cgroup controller?

(e.g. BSD users)

As we're just adding metadata, everything will continue to work exactly as it does now, and all existing tools behave the same. Within our task manager we will still keep a process view (it's still useful regardless) and we won't put in any code that hard-depends on the cgroup metadata being present. We'll keep the existing heuristics for matching windows with external events; cgroup metadata will just be a strongly weighted factor in that. Things won't get worse, but we won't be able to capitalise on the new features discussed here.