Now, about this Spectre vulnerability which is going around. I'm reasonably certain Linus ain't gonna like the implications. And I'm going to tell you why.
Early on in the development of the Linux kernel, there was a certain well-publicised exchange between Linus Torvalds, the developer of the Linux kernel, and Andrew Tanenbaum, a noted computer scientist and the developer of MINIX, on which at least the first versions of Linux's kernel and file system relied. Tanenbaum advocated a microkernel design, on the isolation grounds of the time, while Torvalds went with a monolithic kernel, for reasons of performance and simplicity.
Linus won that war. Nowadays we all run monolithic kernels, or at best structured, hybrid kernels derived from microkernel work, with everything still running in a single trusted kernel environment. Perhaps the best-known example is Mac OS X, which derives via NeXTSTEP and OPENSTEP from the Mach microkernel. Yet it soon went the hybrid-kernel route, for performance reasons.
I believe the Spectre vulnerability might just tip that balance backwards. Because, you know, Linus himself wrote on LKML about how idiotic it was for a processor to do speculative execution over "protection domains".
Yet what is a protection domain, really? I'd argue it is a single piece of code, written by a single author, simply because there is no way for any author to truly trust another. Any other might turn, even amongst long-standing kernel developers.
Thus, the only way to be sure is to fully isolate every contributor's work from everyone else's, even within a kernel. Right down to the metal, so that even Spectre-class exploits cannot touch any piece of code contributed by another person.
Really the only way to guarantee that level of isolation between our new protection domains is to accept an isolation cost on par with the original microkernel designs, or more likely with some of the newer, more optimized ones. The cost would be significant, but if you really want to defend in full against the problems implicit in the Spectre paper, you ought to be willing to take that hit, and then some 30-50% more to boot. Because to defend fully against the problem, you're going to have to shoot most of your caching circuitry in the foot, hard, constantly, and by design.
The upside is that once you do that, systematically and no holds barred, a HURD-like pure microkernel construction suddenly isn't that much slower. It might even be faster at first, simply because it was designed against this precise performance limit from the very beginning.
Thus, I think Linus will be unhappy with this development, sooner or later. It might just be that this singular paper in security ends up reversing the outcome of his debate with Tanenbaum, which in my mind, and many others', was his singular architectural feat.
If so, the James Bond reference would be doubly apt; SPECTRE, after all, is mostly known for its surreptitious, longer-term work.