On 2021-05-08 01:43, Pablo Galindo Salgado wrote:
Some update on the numbers. We made a draft implementation to corroborate the numbers with some more realistic tests, and it turns out our original calculations were wrong. The actual increase in size is quite a bit bigger than previously advertised:

Using a bytes object to encode the final object and marshalling that to disk (so using uint8_t as the underlying type):
BEFORE:

❯ ./python -m compileall -r 1000 Lib > /dev/null
❯ du -h Lib -c --max-depth=0
70M     Lib
70M     total

AFTER:

❯ ./python -m compileall -r 1000 Lib > /dev/null
❯ du -h Lib -c --max-depth=0
76M     Lib
76M     total
So that's an increase of 8.56% over the original value. This is storing the start offset and end offset with no compression whatsoever.
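A minimal sketch of the encoding scheme described above, assuming one (start offset, end offset) pair per bytecode instruction, each stored as a uint8_t in a bytes object (this is an illustration, not the draft implementation; the clamping behaviour is a hypothetical choice):

```python
def pack_offsets(offsets):
    """Pack (start, end) column offsets into a bytes object, one uint8_t each.

    Columns wider than 255 are clamped here; a real encoding would need an
    escape value or sentinel for that case.
    """
    out = bytearray()
    for start, end in offsets:
        out.append(min(start, 255))
        out.append(min(end, 255))
    return bytes(out)

# One pair per instruction: two extra bytes each, which is where the
# ~8.5% growth of the compiled Lib tree comes from.
offsets = [(4, 9), (4, 21), (0, 22)]
packed = pack_offsets(offsets)
assert len(packed) == 2 * len(offsets)
```

At two bytes per instruction with no compression, the on-disk cost scales linearly with the total instruction count of the compiled tree.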
[snip]

I'm wondering if it's possible to compromise with one position that's not as complete but still gives a good hint. For example:

  File "test.py", line 6, in lel
    return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)
                               ^
TypeError: 'NoneType' object is not subscriptable

That at least tells you which subscript raised the exception. Another example:

Traceback (most recent call last):
  File "test.py", line 4, in <module>
    print(1 / x + 1 / y)
            ^
ZeroDivisionError: division by zero

as distinct from:

Traceback (most recent call last):
  File "test.py", line 4, in <module>
    print(1 / x + 1 / y)
                    ^
ZeroDivisionError: division by zero
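The single-position compromise above can be sketched as follows: given a source line and one column offset (both hypothetical inputs here; the real rendering lives in CPython's traceback machinery), emit a caret line pointing at the offending sub-expression:

```python
def caret_line(source_line, col_offset):
    """Return a line with '^' under column col_offset of source_line."""
    # Pad with spaces up to the offset, then mark the single position.
    return " " * col_offset + "^"

line = "print(1 / x + 1 / y)"
print(line)
print(caret_line(line, 8))   # caret under the first division
```

Storing one column per instruction instead of a (start, end) pair would halve the extra bytes, at the cost of marking a single point rather than the full range of the failing expression.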