Saturday 4 November 2023

EEVDF & the mainline linux kernel scheduler

 A number of people have already asked me my opinion on the development of EEVDF on top of CFS for the mainline kernel. My previous schedulers - staircase first, followed by BFS, and finally MuQSS - were all EEVDF designs, so in principle at least you can imagine I'm mildly intrigued and pleased with this direction. I think it's the best known way to tackle interactivity and responsiveness in a CPU process scheduler.

 Any qualms I have about it concern the reluctance to move processes/threads from one CPU to another in order to achieve the stated goal of running the process with the earliest eligible virtual deadline first. As processes tend to be kept relatively "sticky" to per-CPU runqueues for cache warmth and throughput reasons, this ends up working against the demands of scheduling for minimal latency first.
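
 The selection rule itself is simple enough to sketch in a few lines of C. The following is only an illustration of the idea rather than the mainline implementation, and the struct and function names are hypothetical: a task becomes eligible once its virtual eligible time has passed, and of the eligible tasks the one with the earliest virtual deadline runs next.

    /* Illustrative sketch of EEVDF task selection - not mainline code.
     * All names (struct task, veligible, vdeadline) are hypothetical. */
    struct task {
        unsigned long long veligible;   /* virtual time the task becomes eligible */
        unsigned long long vdeadline;   /* virtual deadline for its slice */
    };

    static struct task *pick_next_eevdf(struct task **tasks, int ntasks,
                                        unsigned long long now_vtime)
    {
        struct task *best = NULL;
        int i;

        for (i = 0; i < ntasks; i++) {
            struct task *t = tasks[i];

            if (t->veligible > now_vtime)
                continue;               /* not yet eligible */
            if (!best || t->vdeadline < best->vdeadline)
                best = t;               /* earliest eligible virtual deadline */
        }
        return best;                    /* NULL if nothing is eligible yet */
    }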

In my original BFS design there was only one runqueue for all CPUs, which is the optimal layout for a global EEVDF design, but it would eventually not have scaled in throughput to large numbers of CPUs. This led to the development of MuQSS, for which I moved to multiple runqueues, but I soon found that sharing runqueues for latency reasons was more important than worrying about the last bit of throughput. This is why it was possible to configure the degree to which runqueues were shared and so optimise primarily for either throughput or latency - the more sharing, the more latency-focused the scheduler behaved. Sharing runqueues between CPUs with a shared cache provided the best compromise at the time, though modern CPUs have far more cores and threads, all sharing various levels of cache.
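
 Purely as an illustration of that trade-off, the sketch below points every CPU that shares a last-level cache at the same runqueue. This is not MuQSS code; cpu_llc_id() and alloc_runqueue() are hypothetical helpers standing in for whatever topology information and allocation a real scheduler would use.

    /* Hypothetical sketch: one shared runqueue per last-level-cache domain.
     * CPUs with a common LLC trade a little throughput for lower latency
     * by pulling work from the same queue; unrelated CPUs keep their own. */
    #define MAX_CPUS 256

    struct runqueue;                              /* queue of runnable tasks */

    extern int cpu_llc_id(int cpu);               /* hypothetical: LLC domain of a CPU */
    extern struct runqueue *alloc_runqueue(void); /* hypothetical allocator */

    static struct runqueue *llc_rq[MAX_CPUS];     /* one runqueue per LLC domain */
    static struct runqueue *cpu_rq_map[MAX_CPUS]; /* the queue each CPU schedules from */

    static void assign_runqueues(int ncpus)
    {
        int cpu;

        for (cpu = 0; cpu < ncpus; cpu++) {
            int llc = cpu_llc_id(cpu);

            if (!llc_rq[llc])
                llc_rq[llc] = alloc_runqueue();
            cpu_rq_map[cpu] = llc_rq[llc];        /* cache siblings share one queue */
        }
    }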

Much like there are sorting algorithms which excel at different sizes (nothing beats insertion sort for up to ~16 elements), I expect runqueue sharing to exhibit a similar phenomenon, and that it would actually be disadvantageous to have many runqueues for small numbers of CPUs. My random prediction, based on older anecdotal observation, is that the threshold is also up to about 16 threads/cores per runqueue (provided they're all sharing at least some form of cache).
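
 To make the sorting analogy concrete, here is a hedged sketch in C: below a cutoff of roughly 16 elements plain insertion sort tends to win on constant factors, and only above that is it worth switching to an O(n log n) sort. The cutoff value and the qsort() fallback are illustrative only, not tuned numbers.

    #include <stdlib.h>

    #define SMALL_CUTOFF 16             /* illustrative threshold, not a tuned value */

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    static void hybrid_sort(int *a, size_t n)
    {
        if (n <= SMALL_CUTOFF) {
            /* insertion sort: unbeatable constant factors at small sizes */
            for (size_t i = 1; i < n; i++) {
                int key = a[i];
                size_t j = i;

                while (j > 0 && a[j - 1] > key) {
                    a[j] = a[j - 1];
                    j--;
                }
                a[j] = key;
            }
        } else {
            qsort(a, n, sizeof(*a), cmp_int);   /* any O(n log n) sort will do */
        }
    }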

As I've not looked at the mainline kernel code in depth in years, and not at all at this new EEVDF development, I cannot comment with any authority on the code or implementation at this stage, but it's certainly an admirable goal and I'm cautiously optimistic about it.

Wednesday 9 March 2022

lrzip version 0.651

 As often happens shortly after a substantial release, some minor issues were discovered and a small update is required, so here is 0.651. The main issue was potentially confusing locale-dependent output, which has been reverted. Hopefully it's been a short enough period since 0.650 that distros will not have adopted that one yet.

Get it here:

http://ck.kolivas.org/apps/lrzip/

or via git here:

https://github.com/ckolivas/lrzip

 What's new:

  • Remove redundant files
  • Revert locale dependent output
  • Add warnings for low memory and threads

-ck


Sunday 27 February 2022

lrzip version 0.650

A number of bug reports had accumulated since the last lrzip release, and since I regularly use lrzip I want to make sure it stays bug-free as far as I am aware, even if I'm not planning any new features for it. As some of the changes are potentially security fixes, I urge all users to update.

Get it here:

http://ck.kolivas.org/apps/lrzip/

or via git here:

https://github.com/ckolivas/lrzip 

Here's what's new:

  • Minor optimisations.
  • Exit status fixes.
  • Update and beautify information output.
  • Fix Android build.
  • Enable MD5 on Apple build.
  • Deprecate and remove liblrzip which was unused and at risk of bitrot.
  • Fix failures with compressing to STDOUT with inadequate memory.
  • Fix possible race conditions.
  • Fix memory leaks.
  • Fix -q to only hide progress.
  • Add -Q option for very quiet.
-ck

Tuesday 31 August 2021

5.14 and the future of MuQSS and -ck once again

 Having missed the update for the 5.13 kernel entirely, I thought I'd just skip ahead to merge up with 5.14 and started looking at and working on it today. The size of the changes is depressingly large, and whilst it's mostly trivial changes and features I wouldn't implement in MuQSS, I'm once again left wondering if I should be bothering to maintain this patch-set, as I've mentioned before on this blog.

 The size of my user-base seems to be diminishing with time, and I'm getting further and further out of touch with what's happening in the linux kernel space, with countless other things to preoccupy me in my spare time.

 As much as I still prefer running my own kernel on my hardware, I'm having trouble motivating myself after the last 18 months of world madness due to Covid19, and feel that I should, sadly, bring this patch-set to a graceful end. My first linux kernel patches stretch back 20 years, and with almost no passion left for working on them any more, I feel it may be long overdue.

 Unfortunately I also do not have faith that there is anyone I can reliably hand the code over to as a successor, as almost all the forks of my work I've seen have been prone to problems I've tried hard to avoid myself.

 There is always the possibility that the mainline linux kernel will become so bad that I'll be forced to create a new kernel of my own out of disgust, which is how I got here in the first place, but that looks very unlikely. Many of you will have anticipated this coming after my last motivation blog-post, but unless I can find the motivation to work on it again, or something comes up that gives me a meaningful reason to do so, I will sadly have to declare 5.12-ck the last of the MuQSS and -ck patches.

Final word: if you want to get the most out of the mainline kernel without trying to port MuQSS, then using at least the hrtimer patches from -ck and a 1000Hz tick should make a significant difference.
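
 For reference, the 1000Hz part of that is just standard mainline configuration (the hrtimer patches come from -ck itself). Assuming a typical x86 build, the relevant lines in the kernel .config look like this:

    CONFIG_HZ_1000=y
    CONFIG_HZ=1000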

-ck