New GitHub issue #118686 from tim-one:<br>
<hr>
<pre>
# Bug report
### Bug description:
I'm seeing inconsistent failures in `test_int`'s `test_denial_of_service_prevented_str_to_int`. Typical:
```python
File "C:\Code\Python\Lib\test\test_int.py", line 738, in test_denial_of_service_prevented_str_to_int
self.assertLessEqual(sw_fail_extra_huge.seconds, sw_convert.seconds/2)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 0.015625 not less than or equal to 0.0078125
```
The displayed times are small enough that they're well within the range where a non-idle Windows box may, at times, give essentially no cycles to a process.
I expect the underlying cause is that various hard-coded constants in the test were picked when CPython's int<->str conversions, for "large" values, were much slower than they are now. They're no longer quadratic-time. str->int is `O(d**1.585)` now (inherited from the asymptotics of CPython's Karatsuba multiplication), and int->str the much better `O(d * messy expression involving logarithms)` (inherited from the asymptotics of `_decimal`'s fancier NTT multiplication) - where `d` is the number of digits (or bits - same thing to `O()`).
Comments like:
```python
# Ensuring that we chose a slow enough conversion to measure.
# It takes 0.1 seconds on a Zen based cloud VM in an opt build.
```
suggest a cure: don't use hard-coded constants. Instead take a starting guess at the number of digits and keep boosting it until a conversion actually does take 0.1 seconds.
Or don't bother timing it at all :wink:. It's already checking that "big" cases raise `ValueError` with "conversion" in the exception detail message, so it's essentially certain that `int()` failed before doing any real work.
### CPython versions tested on:
CPython main branch
### Operating systems tested on:
Windows
</pre>
<hr>
<a href="https://github.com/python/cpython/issues/118686">View on GitHub</a>
<p>Labels: type-bug, tests</p>
<p>Assignee: </p>