@Robert thanks for your comments! I agree with you as far as I can tell.

- Yes, the purpose of the proposed assert is to ensure that the *benchmark* itself remains valid, i.e., that the individual settings of each solver make the comparison apples-to-apples. Many current benchmarks involving lobpcg are invalid in this sense: lobpcg produces garbage without failing. The proposed simple assert serves as a sanity check, forcing the contributor to tune the benchmark example and the parameters of the solvers under comparison.
- No, an accuracy assert in benchmarks is not a substitute for a similar (but preferably smaller-scale, i.e., faster) run in the unit tests. That is why I place the code generating the benchmark examples outside of the benchmarks themselves, so it is also callable from the unit tests. (Some of these examples might then deserve a public API, but that is a separate discussion that can be postponed.) Currently, there appears to be a disconnect between benchmarking and unit testing, at least for the eigenvalue solvers.
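
To illustrate the idea, here is a minimal sketch (all names here are hypothetical, not the actual PR code): a shared example generator defined outside the benchmark, so the same problem can be exercised from both the benchmark and the unit tests, followed by the proposed sanity-check assert that fails loudly if lobpcg silently returns garbage.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

def make_example(n=100, m=4, seed=0):
    """Hypothetical shared generator: diagonal problem with known
    eigenvalues 1..n, reusable by benchmarks and unit tests alike."""
    A = diags(np.arange(1, n + 1, dtype=float))
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, m))  # random initial block of m vectors
    return A, X

A, X = make_example()
# largest=False requests the smallest eigenvalues; for this problem the
# exact answers are 1, 2, 3, 4.
vals, vecs = lobpcg(A, X, largest=False, tol=1e-9, maxiter=200)
# The proposed sanity check: if the solver settings are mistuned and the
# result is garbage, the benchmark fails instead of timing a wrong answer.
assert np.allclose(np.sort(vals), np.arange(1, 5), atol=1e-4)
```

A smaller-scale call to the same `make_example` (e.g. `n=20`) could then back an actual unit test, closing the gap between the two.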