[DB-SIG] .rowcount issues with large affected row counts.
daniele.varrazzo at gmail.com
Mon Jul 3 09:08:14 EDT 2017
Please open a bug on the psycopg bug tracker,
specifying your platform (win/linux/other, 32/64 bit). Please also add
an idea of the number you are expecting to see (I think we should be
able to parse 2*10^9 with no problem; if not, it's a bug). If possible,
compile psycopg in debug mode (see
and report the debug line printed for a failing case (it should say
"_read_rowcount: PQcmdTuples...", looking at the link James has kindly
provided).
Regardless of the report, I'll look into parsing that value without
using 'atol()' for the next bugfix release.
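In the meantime, a Python-level workaround is possible: psycopg2 also exposes the raw command tag via cursor.statusmessage (e.g. "UPDATE 3000000000"), and parsing the count out of that string in Python sidesteps any C 'long' limit, since Python ints are arbitrary precision. A minimal sketch (the helper name is my own, and the tags below are simulated rather than read from a live cursor):

```python
def rows_from_status(statusmessage):
    """Extract the affected-row count from a command tag such as
    'UPDATE 3000000000' or 'INSERT 0 42'. Returns None when the tag
    carries no trailing count (e.g. 'BEGIN')."""
    parts = statusmessage.split()
    if parts and parts[-1].isdigit():
        return int(parts[-1])
    return None

# Simulated command tags; no database connection needed for the illustration.
print(rows_from_status("UPDATE 3000000000"))  # 3000000000, beyond a 32-bit long
print(rows_from_status("INSERT 0 42"))        # 42
print(rows_from_status("BEGIN"))              # None
```

With a real cursor one would call rows_from_status(cur.statusmessage) right after execute(), instead of trusting cur.rowcount for very large updates.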
On Thu, Jun 29, 2017 at 11:24 PM, Mark Sloan
<mark.a.sloan at gmail.com> wrote:
> Hi all,
> Kind of new to a lot of things here, so if I am way off please correct me.
> Using psycopg2 with postgres / greenplum / redshift, it's now pretty easy for
> a single query to have an affected row count higher than .rowcount
> seems to allow for.
> I am pretty sure libpq returns the affected row count as a string ("for historical
> reasons" according to the pg mailing threads), but when I have a large
> update statement (e.g. several billion rows affected) the .rowcount I get
> back isn't correct.
> Using the psql client, I can't reproduce the incorrect affected row
> count.
> any ideas or suggestions?
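The symptom described is consistent with the count string being parsed into a 32-bit signed C integer somewhere along the way (atol wraps where C long is 32 bits, e.g. on Windows). A self-contained illustration of that suspected wraparound, using ctypes to emulate a 32-bit signed long (this is an assumption about the failure mode, not psycopg2's confirmed code path):

```python
import ctypes

# libpq's PQcmdTuples hands the affected-row count back as text.
count_str = "3000000000"

# Forcing it through a 32-bit signed integer wraps the value, producing
# the kind of wrong .rowcount a caller might observe.
wrapped = ctypes.c_int32(int(count_str)).value
print(wrapped)  # -1294967296
```

On 64-bit Linux, C long is 64 bits and atol would parse 3*10^9 fine, which is presumably why Daniele asks for the platform and bitness in the bug report.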