
From reading this question, I understand, for instance, why dynamic allocation or exceptions are not recommended in high-radiation environments, such as in space or in a nuclear power plant. Concerning templates, however, I don't see why. Could you explain it to me?

This answer, on the other hand, says that they are quite safe to use.

Note: I'm not talking about complex standard library stuff, but purpose-made custom templates.

Guillaume D
    My guess is that it's not because of the environment, but rather because of running the program on embedded systems with very limited resources. Templates tend to create "bloat", as templates could lead to code duplication for the different instantiations. – Some programmer dude Jun 12 '19 at 08:24
    The concerns about C++ on Mars are on page 34 of the Rover presentation, all unrelated to radiation. (The bottom half of the answer I think you're referring to is not about radiation concerns.) – molbdnilo Jun 12 '19 at 08:25
    Templates are just normal classes and functions in the end. Ignoring other reasons, like possible code bloat or long compile times, there should be no reason not to use them. – One Man Monkey Squad Jun 12 '19 at 08:29
    It has nothing to do with radiation or code size. Safety design guidelines usually try to reduce the complexity of the code (short functions, no indirect calls, only static memory allocation, and so on). Many of these guidelines were written at a time when LINT was the best thing you could do for code analysis, so not all of these rules still make sense. – user6556709 Jun 12 '19 at 08:34
    In theory you can use a restricted subset of C++ for these kinds of systems. In practice, you avoid C++ like the plague, simply because it is too complex and you can never trust C++ programmers to stick to the safe subset. Before you know it, there's template metaprogramming hell all over the program. In addition, many new features from C++11 and beyond, such as the behavior of `auto`, will blow your whole leg off. – Lundin Jun 12 '19 at 09:23
    You are conflating two issues. The question was about radioactive environments, and by way of an answer someone mentioned NASA's coding standards for space-hardened applications; those coding standards were intended to deal with issues other than radiation hardening. There are other, more general issues to be considered in using C++ (or any language) in embedded systems, especially those that may be difficult or impossible to update once deployed. The "safe to use" question is not specific to embedded systems; that is not to say it is unsafe, but there are consequences to be considered. – Clifford Jun 12 '19 at 16:36
  • Check the `this answer` link - it does not point to a specific answer. – Marc.2377 Jun 13 '19 at 02:16
    slightly related: [What makes Ada the language of choice for the ISS's safety-critical systems?](https://space.stackexchange.com/q/36538/12102) – uhoh Jun 13 '19 at 02:31

3 Answers


Notice that space-compatible (radiation-hardened, aeronautics-compliant) computing devices are very expensive, including their launch into space, since launch cost scales with weight, and a single space mission can cost perhaps a hundred million € or US$. Losing a mission because of software or computer concerns generally has a prohibitive cost, so it is unacceptable; this justifies costly development methods and procedures that you would not even dream of using to develop a mobile phone applet, and it recommends probabilistic reasoning and engineering approaches, since cosmic rays are still a somewhat "unusual" event. From a high-level point of view, a cosmic ray and the bit flip it produces can be considered noise in some abstract form of signal or input. You could look at this "random bit-flip" problem as a signal-to-noise-ratio problem; randomized algorithms may then provide a useful conceptual framework (notably at the meta level, that is, when analyzing your safety-critical source code or compiled binary, but also at critical-system run time, in some sophisticated kernel or thread scheduler), together with an information-theory viewpoint.

Why is C++ template use not recommended in space/radiated environments?

That recommendation is a generalization, to C++, of the MISRA C coding rules, of the Embedded C++ rules, and of the DO-178C recommendations, and it is related not to radiation but to embedded systems. Because of radiation and vibration constraints, the embedded hardware of any space rocket computer has to be very small (for economic and energy-consumption reasons, it is, in computing power, more like a Raspberry Pi than like a big x86 server system). Space-hardened chips cost 1000x as much as their civilian counterparts. And computing the WCET on space-embedded computers is still a technical challenge (e.g. because of CPU-cache-related issues). Hence, heap allocation is frowned upon in safety-critical embedded software-intensive systems (how would you handle out-of-memory conditions there? Or how would you prove that you have enough RAM for all real run-time cases?)

Remember that in the safety-critical software world, you must not only somehow "guarantee" or "promise", and certainly assess (often with some clever probabilistic reasoning), the quality of your own software, but also that of all the software tools used to build it (in particular your compiler and your linker; Boeing or Airbus won't change the version of the GCC cross-compiler used to compile their flight control software without prior written approval from, e.g., the FAA or DGAC). Most of your software tools need to be somehow approved or certified.

Be aware that, in practice, most (but certainly not all) C++ templates internally use the heap, and standard C++ containers certainly do. Writing templates which never use the heap is a difficult exercise. If you are capable of that, then you can use templates safely (assuming you trust your C++ compiler and its template-expansion machinery, which is the trickiest part of the C++ front end of most recent C++ compilers, such as GCC or Clang).
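As a sketch of what such a heap-free template can look like (the names here are hypothetical, not taken from any certified library), consider a fixed-capacity container whose storage lives entirely inside the object, so its worst-case memory footprint is known at compile time and "out of capacity" is an explicit return code rather than a `std::bad_alloc`:

```cpp
#include <array>
#include <cstddef>

// Sketch only: a fixed-capacity vector template with no heap allocation.
// All storage is embedded in the object, so worst-case RAM use is static.
template <typename T, std::size_t Capacity>
class StaticVector {
public:
    // Reports failure with a bool instead of throwing: the "out of memory"
    // case becomes an explicit, testable branch.
    bool push_back(const T& value) {
        if (size_ >= Capacity) return false;
        storage_[size_++] = value;
        return true;
    }
    std::size_t size() const { return size_; }
    const T& operator[](std::size_t i) const { return storage_[i]; }

private:
    std::array<T, Capacity> storage_{};  // in-place storage, no new/delete
    std::size_t size_ = 0;
};
```

Note that nothing here forces the instantiating type `T` to be heap-free itself, which is part of why such guarantees are hard to state (let alone prove) for templates in general.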

I guess that for similar (toolset-reliability) reasons, it is frowned upon to use many source-code-generation tools (doing some kind of metaprogramming, e.g. emitting C++ or C code). Observe, for example, that if you use bison (or RPCGEN) in some safety-critical software (compiled by make and gcc), you need to assess (and perhaps exhaustively test) not only gcc and make, but also bison. This is an engineering reason, not a scientific one. Notice also that some embedded systems may use randomized algorithms, in particular to deal cleverly with noisy input signals (perhaps even random bit flips due to rare-enough cosmic rays). Proving, testing, or analyzing (or just assessing) such random-based algorithms is a quite difficult topic.

Look also into Frama-Clang and CompCert and observe the following:

  • C++11 (and later) is a horribly complex programming language. It has no complete formal semantics. Only a few dozen people worldwide are expert enough in C++ (probably, most of them are on its standards committee). I am capable of coding in C++, but not of explaining all the subtle corner cases of move semantics or of the C++ memory model. Also, in practice, C++ requires many optimizations to be used efficiently.

  • It is very difficult to make an error-free C++ compiler, in particular because C++ practically requires tricky optimizations, and because of the complexity of the C++ specification. But current ones (like recent GCC or Clang) are in practice quite good, with few (but still some) residual compiler bugs. There is no CompCert++ for C++ yet, and making one would require several million € or US$ (but if you can collect such an amount of money, please contact me by email, e.g. at basile.starynkevitch@cea.fr, my work email). And the space software industry is extremely conservative.

  • It is difficult to make a good C or C++ heap memory allocator. Coding one is a matter of trade-offs. As a joke, consider adapting this C heap allocator to C++.

  • Proving safety properties (in particular the lack of race conditions or of undefined behavior such as a buffer overflow at run time) of template-related C++ code is still, in 2Q2019, slightly ahead of the state of the art in static program analysis of C++ code. My draft Bismon technical report (a draft H2020 deliverable, so please skip the pages for European bureaucrats) has several pages explaining this in more detail. Be aware of Rice's theorem.

  • A whole-system C++ embedded-software test could require a rocket launch (a la the Ariane 5 test flight 501), or at least complex and heavy experimentation in a lab. It is very expensive. Even testing a Mars rover on Earth costs a lot of money.
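As one concrete taste of those subtle corner cases (an illustrative sketch, not tied to any particular compiler): after a move, the source object is only guaranteed by the standard to be in a "valid but unspecified" state, so code that keeps using it compiles cleanly yet depends on the library implementation.

```cpp
#include <string>
#include <utility>

// Sketch of a move-semantics corner case: using a moved-from object.
std::string take(std::string&& s) { return std::move(s); }

bool demo() {
    std::string a = "flight-software";
    std::string b = take(std::move(a));
    // b is guaranteed to hold the old contents of a. But a is now only in
    // a "valid but unspecified state": calling a.size() is legal, and most
    // implementations leave a empty, yet none are required to - so any
    // code relying on that is silently non-portable.
    return b == "flight-software";
}
```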

Think of it: you are coding some safety-critical embedded software (e.g. for train braking, autonomous vehicles, autonomous drones, a big oil platform or oil refinery, missiles, etc.). You naively use some C++ standard container, e.g. some std::map<std::string,long>. What should happen on out-of-memory conditions? How do you "prove", or at least "convince", the people working in the organizations funding a 100M€ space rocket that your embedded software (including the compiler used to build it) is good enough? A decades-old rule was to forbid any kind of dynamic heap allocation.
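For contrast, a heap-free stand-in for that std::map<std::string,long> might look like the following sketch (the sizes and names are made up for illustration): the table is statically sized, and "full" is an ordinary return value that you must handle, not an exception.

```cpp
#include <cstddef>
#include <cstring>

// Sketch: a statically sized string->long table, no heap, no exceptions.
struct FixedTable {
    static const std::size_t kSlots  = 8;   // capacity fixed at compile time
    static const std::size_t kKeyLen = 16;  // max key length, incl. NUL

    char keys[kSlots][kKeyLen] = {};
    long values[kSlots] = {};
    std::size_t used = 0;

    // Returns false when the table is full or the key is too long:
    // the "out of memory" question now has an explicit, reviewable answer.
    bool insert(const char* key, long value) {
        if (used >= kSlots || std::strlen(key) >= kKeyLen) return false;
        std::strcpy(keys[used], key);
        values[used] = value;
        ++used;
        return true;
    }

    // Linear search; worst-case time is trivially bounded by kSlots.
    const long* find(const char* key) const {
        for (std::size_t i = 0; i < used; ++i)
            if (std::strcmp(keys[i], key) == 0) return &values[i];
        return nullptr;
    }
};
```

Every worst case (RAM, time, failure mode) of this structure can be stated in one sentence, which is exactly what the certification paperwork wants.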

I'm not talking about complex standard library stuff but purpose-made custom templates.

Even these are difficult to prove, or more generally to assess for quality (and you'll probably want to use your own allocator inside them). In space, code space is a strong constraint. So you would compile with, for example, g++ -Os -Wall or clang++ -Os -Wall. But how would you prove, or simply test, all the subtle optimizations done by -Os (and these are specific to your version of GCC or of Clang)? Your space-funding organization will ask you that, since any run-time bug in embedded C++ space software can crash the mission (read again about the Ariane 5 first-flight failure, coded in some dialect of Ada which had, at that time, a "better" and "safer" type system than C++17 today; but don't laugh too much at Europeans: the Boeing 737 MAX with its MCAS is a similar mess).


My personal recommendation (but please don't take it too seriously; in 2019 it is more a pun than anything else) would be to consider coding your space embedded software in Rust, because it is slightly safer than C++. Of course, you would have to spend 5 to 10 M€ (or MUS$) over 5 to 7 years to get a fine Rust compiler suitable for space computers (again, please contact me professionally if you are capable of spending that much on a free-software CompCert/Rust-like compiler). But that is just a matter of software engineering and software project management (read both The Mythical Man-Month and Bullshit Jobs for more; be also aware of the Dilbert principle: it applies as much to the space software industry, or the embedded compiler industry, as to anything else).

My strong and personal opinion is that the European Commission should fund (e.g. through Horizon Europe) a free-software CompCert++ (or, even better, CompCert/Rust) project (and such a project would need more than 5 years and more than 5 top-class PhD researchers). But, at the age of 60, I sadly know it is not going to happen (because the E.C. ideology, mostly inspired by German policies for obvious reasons, is still the illusion of the End of History, so H2020 and Horizon Europe are, in practice, mostly a way to implement tax optimizations for corporations in Europe through European tax havens), and I say that after several private discussions with several members of the CompCert project. I sadly expect DARPA or NASA to be much more likely than the E.C. to fund some future CompCert/Rust project.


NB. The European avionics industry (mostly Airbus) uses much more of a formal-methods approach than the North American one (Boeing). Hence some (not all) unit tests are avoided, being replaced by formal proofs of source code (perhaps with tools like Frama-C or Astrée; neither has been certified for C++, only for a subset of C forbidding dynamic memory allocation and several other features of C). This is permitted by DO-178C (not by its predecessor, DO-178B) and approved by the French regulator, DGAC (and, I guess, by other European regulators).

Also notice that many SIGPLAN conferences are indirectly related to the OP's question.

Basile Starynkevitch
    True for standard library templates and complex features (I once used lambdas in a project that doubled my code size), but I was thinking more of custom-made ones. If you make your own templates, you should know what you are doing, right? I mean, if you don't know what you are coding, that is a pretty big problem. – Guillaume D Jun 12 '19 at 08:42
  • But how do you prove, to the people paying 100M€ for a space mission, that your software is "bug-free"? – Basile Starynkevitch Jun 12 '19 at 08:43
  • Templates are just a way of writing code. For instance, if I make a template function that can be used with 2 classes, I [unit test](https://stackoverflow.com/questions/17079702/gtest-testing-template-class) the 2 uses, and a third use with another class, to be sure everything is correct. – Guillaume D Jun 12 '19 at 08:49
    This is still a bit tangentially related to templates. Could you elaborate more on the question's specific problem: templates? – Tarick Welling Jun 12 '19 at 08:50
    "since any run-time bug in embedded C++ space software can crash the mission (read again about Ariane 5 first flight failure," — that isn't an argument in favour of C in the embedded space, though. C++ has stronger type checking, which would have helped in this instance. – Tarick Welling Jun 12 '19 at 13:13
    AFAIK, that is not true. Because Ariane 5 was coded in some dialect of Ada, which, at the time, had a better type system than C++17 today – Basile Starynkevitch Jun 12 '19 at 14:20
    The [Ariane thing](https://en.wikipedia.org/wiki/Cluster_(spacecraft)#Launch_failure) has been much discussed. They used fairly bog-standard Ada, but turned off some unnecessary bounds checks to save CPU cycles. That worked great. Then they ported the same code to the later Ariane 5, which had different specs, and never reconsidered whether those checks should be turned back on or the bounds revised. It was a human error, and likely would have happened in C++ as well (since by default it has no bounds checks to start with), but it has nothing to do with templates or language complexity. – T.E.D. Jun 12 '19 at 18:30
  • Ariane was a management error. I gave a few references regarding management – Basile Starynkevitch Jun 12 '19 at 18:38
    For certain reasonable values of "error-free", it is not *very difficult* to make an error-free C++ compiler, it's *provably impossible* to make an error-free C++ compiler. Turing-completeness of the template metaprogramming system means that in order to accept all well-formed C++ programs while rejecting all ill-formed ones, you need to solve the Halting Problem. – Mark Jun 12 '19 at 19:52
  • It is still *very difficult* to make an efficient, *optimizing* C++ compiler – Basile Starynkevitch Jun 12 '19 at 19:53
    Unfortunately your answer really answers why not to use the STL, not why not to use developer-provided templates. – Joshua Jun 12 '19 at 20:32
    Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackoverflow.com/rooms/194844/discussion-on-answer-by-basile-starynkevitch-why-is-c-template-use-not-recomme). – Samuel Liew Jun 12 '19 at 22:51
    I find these arguments about the complexity of the C++ language unconvincing. If the language of choice were C, they would be valid. But I read somewhere that Ada is their preferred language, and it's also a complex language, I think comparable to C++ (although I admit that I've never actually used it, I only read the specifications back in the 80's when it was being developed). – Barmar Jun 12 '19 at 23:36
    I find it suspicious that your example of a C++ template was `std::map`, and then you argue against it for dynamic allocation reasons, not because it's a template. I guess you wanted to go into detail about dynamic alloc since the OP mentioned it, too, after covering templates for code-bloat and as part of the general complexity that makes verification maybe harder. It is possible to use templates safely *if* you think about what you're doing, but sure it's easy to get code bloat. – Peter Cordes Jun 13 '19 at 00:37
    @Barmar If it convinces the C++ standards committee, it should convince you. In C, there are a fairly limited number of areas of undefined behaviour, and they are explicitly undefined in the standard. (Nasal demons, etc..) In C++, the standards committee literally *stopped counting* because there were too many to track. – Graham Jun 13 '19 at 08:00
  • Undefined behavior is not a measure of complexity, IMHO. – Barmar Jun 13 '19 at 08:03
    Re: Rust on safety-critical systems: https://ferrous-systems.com/blog/sealed-rust-the-pitch/ – Sebastian Redl Jun 13 '19 at 08:11
    1. Terms like "horribly complex language" don't look good in a technical discussion/argument. 2. I don't see why it's such a big deal that a compiler is not verified/certified. You need to test your actual program either way, even if your compiler is formally proven. And a good test suite does not care whether a bug comes from the compiler or the programmer. – Violet Giraffe Jun 13 '19 at 12:33
  • @VioletGiraffe: that is your opinion, but Airbus is avoiding several (not all) unit tests through a [formal methods](https://en.wikipedia.org/wiki/Formal_methods) approach, and that is possible in DO-178C. I know that Boeing does things differently. IIRC, this is the major difference between the European and North American software safety approaches in avionics. Even if I might know more, I don't feel allowed to speak of it. However, I did hear talks by qualified Airbus and [DGAC](https://en.wikipedia.org/wiki/Directorate_General_for_Civil_Aviation_(France)) personnel explaining all that in great detail. – Basile Starynkevitch Jun 13 '19 at 12:59
  • @VioletGiraffe: If you know of any [formal semantics](https://en.wikipedia.org/wiki/Semantics_(computer_science)) - e.g. [denotational semantics](https://en.wikipedia.org/wiki/Denotational_semantics) or [axiomatic semantics](https://en.wikipedia.org/wiki/Axiomatic_semantics) or [operational semantics](https://en.wikipedia.org/wiki/Operational_semantics) - of a very large subset of C++17 (including the memory model & multi-threading aspects), please share it with us. Since I know of none, I stand by my position: C++17 is a *horribly* complex language. – Basile Starynkevitch Jun 13 '19 at 13:39
  • @VioletGiraffe: even a formal proof of *some* particular implementation of C++ [standard containers](https://en.cppreference.com/w/cpp/container) does interest me. AFAIK, such proofs are *really* incomplete, but I am curious if you know them better than I do. Please share some reference with us – Basile Starynkevitch Jun 13 '19 at 13:43
    How is all this related to templates? – Reuven Abliyev Jun 16 '19 at 07:54
  • @Reuven: That's also what I'd like to know. Most of this post and many of the comments talk about things completely unrelated to templates. – MikeMB Jun 18 '19 at 12:55
  • I addressed that by adding two paragraphs – Basile Starynkevitch Jun 18 '19 at 16:34
    @ReuvenAbliyev: IMO, these "displaced" comments are due to the fact that the OP's question is a pretty empty topic. Is there any connection between the use of templates and robustness to radiation? Or between camel casing and resistance to fire? :-) – Yves Daoust Jun 19 '19 at 07:14

The argument against the use of templates in safety code is that they are considered to increase the complexity of your code without real benefit. This argument is valid if you have bad tooling and a classic idea of safety. Take the following example:

template<class T> void fun(T t) {
    do_some_thing(t);
}

In the classic way of specifying a safety system, you have to provide a complete description of each and every function and structure of your code. That means you are not allowed to have any code without a specification, so you would have to give a complete description of the functionality of the template in its general form, which for obvious reasons is not possible. (That is, by the way, the same reason why function-like macros are also forbidden.) If you change the approach so that you describe all actual instantiations of the template instead, you overcome this limitation, but you need proper tooling to prove that you really described all of them.
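One way to make the set of instantiations enumerable in the first place (a sketch; `clamp_to` is a made-up example, not from any standard) is C++'s explicit instantiation syntax, which forces a reviewable list of the concrete functions that the specification and tests must cover:

```cpp
// Sketch: the template itself...
template <class T>
T clamp_to(T value, T lo, T hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

// ...and the complete, auditable set of instantiations the program uses.
// Each explicit instantiation below generates one ordinary function that
// can be specified, reviewed, and unit-tested like non-template code.
template int    clamp_to<int>(int, int, int);
template float  clamp_to<float>(float, float, float);
template double clamp_to<double>(double, double, double);
```

This does not by itself forbid other implicit instantiations elsewhere; proving that the list is exhaustive still needs tooling, which is exactly the point above.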

The second problem is this:

fun(b);

This line is not self-contained. You need to look up the type of b to know which function is actually called. Proper tooling which understands templates helps here, but in this case it is true that templates make the code harder to check manually.

Toby Speight
user6556709
    Agreed, but my answer suggested that before yours. And manual testing of embedded C++ software is really too expensive. You cannot afford many Ariane 5 test flights like its flight 501. – Basile Starynkevitch Jun 12 '19 at 09:25
    "The argumentation against the usage of templates in safety code is that they are considered to increase the complexity of your code without real benefit." No, that's the argument against using templates in embedded systems overall. The argument against using templates in safety code, is that there is no use whatsoever for templates in 100% deterministic code. In such systems, there's no generic programming anywhere. You can't use stuff like std::vector, because you will unlikely find a std lib compliant to safety standards. Or if you do, it will cost lots of cash. – Lundin Jun 12 '19 at 09:26
    @Lundin Generic programming in the embedded world is a thing, even down to the deep embedded stuff, for the same reason it has become a thing at other levels: well-tested algorithms are a nice thing. – user6556709 Jun 12 '19 at 09:35
  • @user6556709 Yes, you use drivers + HAL, but you don't use _type generic programming_. – Lundin Jun 12 '19 at 10:37
  • @user6556709 But if you change the data type, or change ranges within the data type, it's very easy for the algorithm to not behave correctly. Ariane 501 was lost to that exact cause. So if you change *anything*, then you no longer have a "well tested algorithm". – Graham Jun 13 '19 at 08:05
  • @Graham The algorithm is still well tested, as opposed to what you write from scratch. You always have to check whether you can use it, but that is a different story. – user6556709 Jun 13 '19 at 08:48
    @Lundin: Templates have nothing to do with deterministic or non-deterministic code. In the end, they are just a way to reuse code without dynamic dispatch(virtual functions or function pointers) and without copy-pasting code, while being a tad safer than macros. E.g. reusing the same sort algorithm to sort an array of ints and an array of shorts. And the fact that std::vector is unsuitable for safety critical real time code has nothing to do with it being a template. – MikeMB Jun 18 '19 at 13:05
  • @MikeMB Explain that to the C++ programmers, who must also let the same function handle sorting a `Foo` and a `Bar` too, despite no such types being relevant to the application itself. – Lundin Jun 18 '19 at 13:07
    Who does? This may be the case for the author of a general-purpose algorithm library, but when we are talking about safety-critical realtime code we have left the "general purpose" domain anyway, and the OP was explicitly talking about purpose-made custom templates. – MikeMB Jun 18 '19 at 13:38
    "That means you have to give a complete description of the functionality of the template in its general form" -- Isn't it sufficient to explicitly instantiate each template type that will be used, and then provide a complete description of the functionality of each type thus created? – odougs Sep 10 '20 at 23:01

This statement about templates being a cause of vulnerability seems completely surreal to me, for two main reasons:

  • templates are "compiled away", i.e. instantiated and code-generated like any other function or member, and there is no run-time behavior specific to them, just as if they never existed;

  • no construct in any language is inherently safe or vulnerable; if an ionizing particle changes a single bit of memory, be it in code or in data, anything is possible (from no noticeable problem occurring up to a processor crash). The way to shield a system against this is by adding hardware memory error detection/correction capabilities, not by modifying the code!
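For completeness on the second point: hardware ECC is sometimes complemented by software redundancy. A minimal sketch (illustrative only, not any flight-qualified scheme) of triple modular redundancy stores each value three times and reads back a per-bit majority vote, so a single flipped bit in any one copy is corrected transparently:

```cpp
#include <cstdint>

// Sketch of software triple modular redundancy (TMR) for one 32-bit word.
struct TmrWord {
    std::uint32_t a, b, c;  // three copies of the same value

    void write(std::uint32_t v) { a = b = c = v; }

    // Per-bit majority vote: a result bit is 1 iff it is 1 in >= 2 copies,
    // so corruption of any single copy cannot change the outcome.
    std::uint32_t read() const {
        return (a & b) | (a & c) | (b & c);
    }
};
```

Note that this costs 3x the storage plus voting time on every read, and it still cannot protect against two copies being hit, which is why it is an engineering trade-off rather than a cure.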

Yves Daoust
  • So you trust both the most complex part of the C++ compiler front-end, and the code defining the templates. How do you *exhaustively* test both of them? Of course, unrelated to any cosmic ray switching a bit – Basile Starynkevitch Jun 19 '19 at 14:21
  • BTW, this is more a comment (quite interesting one) than an answer – Basile Starynkevitch Jun 19 '19 at 14:22
  • @BasileStarynkevitch: no, this is a clear answer that templates have nothing to do with cosmic rays. Nor are loops, unsafe casts, lack of documentation and the age of the programmer. – Yves Daoust Jun 20 '19 at 06:36
  • I might disagree with the second point. I remember having read some academic papers claiming to detect bit changes in kernel code. I really forgot the details, because that topic does not interest me. BTW, Guillaume D.'s understanding of the relation between radiation-hardened embedded systems and dynamic allocation is too simplistic (and we both agree on that, I hope) – Basile Starynkevitch Jun 20 '19 at 07:51
  • @BasileStarynkevitch: this makes little sense. When vital parts of the kernel are corrupt, say the scheduler, the kernel doesn't work anymore. – Yves Daoust Jun 20 '19 at 08:00
  • These papers are *duplicating* the most critical code and checking its validity from time to time. Again, I forgot the details, because they don't interest me that much. And of course, they take a probabilistic approach (e.g. assume at most 1 *random* bit flip per second) – Basile Starynkevitch Jun 20 '19 at 08:07
  • I downvoted, because you ignore probabilistic approaches. It makes sense to design a system which fails only 1 out of a billion times every minute, assuming a bit-flip frequency of less than 1 *random* bit flip per second – Basile Starynkevitch Jun 20 '19 at 08:47
  • @BasileStarynkevitch: the meaning of a downvote is "this answer is not useful". – Yves Daoust Jun 20 '19 at 08:50
  • But it is not even an answer, it is a long and insightful comment about my answer – Basile Starynkevitch Jun 20 '19 at 08:51
  • @BasileStarynkevitch: not at all, it is addressing the OP. – Yves Daoust Jun 20 '19 at 08:51
  • Then you forgot clever probabilistic approaches (and that is a good enough reason to downvote). And these do exist (even if I don't understand them, because they are out of my area of expertise). Any good book on [randomized algorithms](https://en.wikipedia.org/wiki/Randomized_algorithm) would explain the relevant concepts and approaches. – Basile Starynkevitch Jun 20 '19 at 08:57
  • @BasileStarynkevitch: sorry, I didn't notice that the OP was focused on randomization. (By the way, randomization is an algorithmic technique to achieve good expected complexity and has nothing to do with kernel robustness.) – Yves Daoust Jun 20 '19 at 09:05
  • As I am suggesting, there is some indirect relation. But I don't have that much time to chat about it. Read some [SIGPLAN](http://www.sigplan.org/) related papers or conferences, and also [TACO](https://dl.acm.org/pub.cfm?id=J924), [TAAS](https://dl.acm.org/pub.cfm?id=J1010), ... – Basile Starynkevitch Jun 20 '19 at 09:09
  • ... and [JETC](https://dl.acm.org/pub.cfm?id=J967), [PACMPL](https://dl.acm.org/pub.cfm?id=J1568), [TCPS](https://dl.acm.org/pub.cfm?id=J1536), [TECS](https://dl.acm.org/pub.cfm?id=J840), [TOCS](https://dl.acm.org/pub.cfm?id=J774) – Basile Starynkevitch Jun 20 '19 at 09:15
  • Randomized algorithms are very well suited to handle signal (in the information theoretic sense, not the Unix sense) with noise, and the kernel robustness to e.g. random bit flips due to cosmic radiations is an instance of "signal with noise" handling – Basile Starynkevitch Jun 23 '19 at 12:08
  • @BasileStarynkevitch: you are misinformed, randomized algorithms are not suited to handle noise. And signal processing has absolutely nothing to do with kernel design. – Yves Daoust Jun 24 '19 at 07:58
  • @BasileStarynkevitch: by the way, you forgot to mention quantum computing, this is so trendy. – Yves Daoust Jun 24 '19 at 07:59
  • But not yet used in embedded computing. And I even heard a talk explaining that quantum computing has no practical application (outside of quantum chemistry simulation) before my expected death. The point being that practical quantum computers have very few qbits. And yes, signal processing (at the math level, not in coding) has indirect relation with OS kernel reliability techniques. – Basile Starynkevitch Jun 24 '19 at 08:06
  • @BasileStarynkevitch: yes but they require robustification means because a fraction of the computations are wrong. I am sure you will find an indirect way. – Yves Daoust Jun 24 '19 at 08:09
  • I am not interested at all in quantum computing. I'm leaving that to the next generation of software developer. At 60 years of age, I am too old for quantum computing. I never saw a quantum computer (in real life), and I don't even expect to see one – Basile Starynkevitch Jun 24 '19 at 08:10
    @BasileStarynkevitch: we are not discussing your personal interests, but the way to help the OP deal with radiation. – Yves Daoust Jun 24 '19 at 08:12
  • And I understand that quantum computing is not relevant for that goal, within the next 20 years. But statistical and probabilistic techniques (including randomized algorithms) are relevant. Both for static source code analysis of the program, and even for radiation-proofing of the kernel (e.g. by duplicating cleverly its scheduler code). Of course, all this is *indirect*, as you mention – Basile Starynkevitch Jun 24 '19 at 08:17
  • @BasileStarynkevitch: I said indirectly, don't you remember ? – Yves Daoust Jun 24 '19 at 08:18