[issue40539] Docs - difflib.SequenceMatcher quick_ratio and real_quick_ratio improved docs
New submission from Lewis Ball <lrjball@gmail.com>:

Currently the docs for `difflib.SequenceMatcher.quick_ratio()` just say 'Return an upper bound on ratio() relatively quickly', which doesn't give much of an idea of how that upper bound is calculated. `real_quick_ratio()` has similarly brief documentation. I'll raise a PR shortly to add a more verbose description to each of these methods, so that it is clear when each should be used. My current suggestions are:

quick_ratio:
    Return an upper bound on ratio() relatively quickly. This is the
    highest possible ratio() given these letters, regardless of their
    order.

real_quick_ratio:
    Return an upper bound on ratio() very quickly. This is the highest
    possible ratio() given the lengths of a and b, regardless of their
    letters, i.e. 2 * min(len(a), len(b)) / (len(a) + len(b)).

----------
assignee: docs@python
components: Documentation
messages: 368305
nosy: Lewis Ball, docs@python
priority: normal
severity: normal
status: open
title: Docs - difflib.SequenceMatcher quick_ratio and real_quick_ratio improved docs
type: enhancement
versions: Python 3.9

_______________________________________
Python tracker <report@bugs.python.org>
<https://bugs.python.org/issue40539>
_______________________________________
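[Editor's illustration of the distinction the proposed wording draws; the return values shown are those produced by the current CPython implementation, and other implementations may return different, looser bounds:]

>>> from difflib import SequenceMatcher
>>> sm = SequenceMatcher(None, "abcd", "bcde")
>>> sm.ratio()             # true similarity: 2 * 3 matching chars / 8 total
0.75
>>> sm.quick_ratio()       # bound from letter multisets: b, c, d in common
0.75
>>> sm.real_quick_ratio()  # bound from lengths alone: 2 * min(4, 4) / (4 + 4)
1.0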
Change by Lewis Ball <lrjball@gmail.com>:

----------
keywords: +patch
pull_requests: +19287
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19971
Tim Peters <tim@python.org> added the comment:

Thanks for the effort, but I'm rejecting this. The language deliberately defines nothing about how these are calculated. It defines how `.ratio()` is computed, but that's all. An implementation is free to do whatever it likes for the "quick" versions, provided only that they return upper bounds on `.ratio()`.

Indeed, it's perfectly fine if an implementation merely returns 1.0 for both, regardless of the arguments. If an implementation is cleverer than that, great, that's fine too - but it would be actively counterproductive to constrain them to be no _more_ clever than the current implementations.

----------
nosy: +tim.peters
resolution:  -> rejected
stage: patch review -> resolved
status: open -> closed
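[Editor's note: a minimal sketch of the point above, using a hypothetical subclass that is not from the thread. Since ratio() can never exceed 1.0, even these trivial overrides satisfy the documented contract:]

    from difflib import SequenceMatcher

    class LazyMatcher(SequenceMatcher):
        # Always-valid upper bounds: ratio() is at most 1.0 by definition,
        # so returning 1.0 unconditionally still honors the contract.
        def quick_ratio(self):
            return 1.0
        def real_quick_ratio(self):
            return 1.0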
participants (2)
- Lewis Ball
- Tim Peters