Hi Python Developers,
I'm Brian Kuhl, and I've spent about 28 years working with embedded software. Since 2000 I've worked for Wind River, where I'm currently a manager of documentation and customer training in our Ottawa, Canada office. Throughout my career I've had an interest in the use of open source software in embedded systems, and I have ported many libraries to Wind River's flagship product, the VxWorks RTOS.
The safe and secure embedded market where VxWorks is the leader is evolving, and devices that used to be made up of multiple processors running multiple operating systems are now being consolidated. Device security and IoT trends mean a device must be more configurable and updateable, and a better host for portable code. In this evolving space our customers have asked us to add Python support to VxWorks.
Wind River would like CPython to officially support VxWorks. I've been asked by my colleagues to volunteer as a maintainer for CPython in order to support this effort. Wind River will also provide the buildbot clients required to verify continuing VxWorks support.
An intern and I were able to get the majority of the Python test suite passing with a proof of concept (POC) last year.
An engineering group in Beijing has taken that POC and is improving and cleaning up that effort now, with the hope of upstreaming their work.
My Chinese colleagues have suggested that the first step to gaining your support would be to submit a PEP detailing the changes.
If you agree, I will proceed with authoring a PEP.
Many thanks in advance for your responses,
Brian
P.S. I can be found on github (and many other places) as kuhlenough.
Hi Core Developers,
First thing: I am *not* a CPython committer. I think most of you who are
will be somewhat familiar with me though.
Second: I was a Director of the PSF for a long while, and continue to chair
some Working Groups. I've been mentioned in some PEPs. I have written a
lot of articles and have given a lot of talks about Python, including about
recent or pending PEPs and similar matters. I continue to work and train
around Python and open source (now with a focus on "data science", whatever
that is).
Third: I follow python-ideas and python-dev rather closely, and fairly
often contribute ideas to those lists.
Fourth: As I read PEP 8016, I cannot nominate myself to the Steering
Committee. That seems good and proper to me. But I believe I would be a
relevant and helpful member of the future Steering Committee if someone
wishes to nominate me and if the voters wish to elect me.
Thanks, David Mertz...
--
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons. Intellectual property is
to the 21st century what the slave trade was to the 16th.
Hi,
The "AMD64 Windows 8.1 Refleaks 3.x" buildbot (which hunts reference
leaks and memory leaks) had been failing on test_asyncio for a year:
https://bugs.python.org/issue32710
It was a leak of a single reference:
test_aiosend leaked [1, 1, 1] references, sum=3
I tried multiple times since January 2018 to understand the leak: it
didn't make any sense (well, like any bug at the beginning, no?). I
checked the complex asyncio code several times: all transports are
cleanly closed, the event loop is properly closed, etc. The code
looked correct.
After a long sleepless night... I still failed to understand the bug
:-( But I did succeed in getting a much shorter reproducer script.
Thanks to this script, I was able to run the test in a loop and get
100 reference leaks instead of a single leaked reference. Using
tracemalloc, I found the faulty line... but it still didn't make sense
to me. After several additional hours of debugging, I found that an
overlapped object wasn't released properly: an asynchronous WSASend().
The problem: when an overlapped WSASend() failed immediately, the
internal buffer was not released, even though it holds a reference to
the input byte string. **It means that an asyncio send() failure using
the ProactorEventLoop could cause a memory leak**... I'm very surprised
that nobody noticed such a **huge** memory leak previously!
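To make the shape of the bug concrete, here is a small Python model of it; the class and method names are illustrative only, since the real fix lives in the C `_overlapped` module:

```python
class FakeOverlapped:
    """Illustrative model of the leak, not the real C object."""

    def __init__(self):
        self.write_buf = None

    def send_buggy(self, buf, fail=False):
        self.write_buf = buf            # keep a reference to the caller's data
        if fail:
            # BUG: the error path forgets to drop the reference,
            # so `buf` stays alive as long as the overlapped object does.
            raise OSError("WSASend() failed immediately")

    def send_fixed(self, buf, fail=False):
        self.write_buf = buf
        if fail:
            self.write_buf = None       # release the buffer on failure
            raise OSError("WSASend() failed immediately")
```

The one-line difference between the two error paths is exactly the kind of bug a refleak buildbot catches and a functional test suite does not.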
The _overlapped bugfix:
https://github.com/python/cpython/commit/a234e148394c2c7419372ab65b773d53a5…
Finally, the "AMD64 Windows 8.1 Refleaks 3.x" buildbot is back to green!
https://buildbot.python.org/all/#/builders/80
It means that it will now be easier and faster to spot reference and
memory leak regressions specific to Windows (the Gentoo Refleaks
buildbot had already been green for several months!).
Since ProactorEventLoop became the default event loop on Windows in
Python 3.8, I hope that we have fixed all the most obvious bugs!
This story also shows that even a very tiny buildbot failure (a single
test method failing on a single very specific buildbot) can hide a
giant bug ;-) Sadly, we have to fix *all* buildbot failures to find
them... Happily, almost all buildbots are currently green.
Victor
--
Night gathers, and now my watch begins. It shall not end until my death.
Hi Team,
I have tried to insert more than 32K bytes of data into a CLOB column,
but I am getting this error:
Statement Execute Failed: [IBM][CLI Driver][DB2/LINUXX8664] SQL0102N The
string constant beginning with "\'<p>Hi good day team:<br /> We had been
requested to Add Anniel to all" is too long. SQLSTATE=54002 SQLCODE=-102
Could anyone please help me? Is there any way I can achieve this?
Please find my DDL statement:-
CREATE TABLE "DB2IDEV "."CHANGE" (
        "TICKET_ID" INTEGER NOT NULL ,
        "SHORTDESC" VARCHAR(5000 OCTETS) NOT NULL ,
        "LONGDESC" CLOB(2147483647 OCTETS) LOGGED NOT COMPACT ,
        "RELEASENUM" VARCHAR(12 OCTETS) WITH DEFAULT NULL ,
        "ISSOFTDELETED" INTEGER NOT NULL WITH DEFAULT 0 ,
        "CREATETIMESTAMP" TIMESTAMP NOT NULL WITH DEFAULT CURRENT TIMESTAMP ,
        "LASTUPDATETIMESTAMP" TIMESTAMP WITH DEFAULT NULL )
        IN "TABLESPACE"
        ORGANIZE BY ROW ;
conn = db.connect(
    "DATABASE=%s;HOSTNAME=%s;PORT=60001;PROTOCOL=TCPIP;UID=%s;PWD=%s;"
    % (DB2_DATABASE, DB2_HOSTNAME, DB2_UID, DB2_PWD),
    "", "")
cursor = conn.cursor()
sql = ("INSERT INTO Change "
       "(Ticket_ID,shortDesc,longDesc,releaseNum,isSoftDeleted,"
       "createTimeStamp,lastUpdateTimeStamp) "
       "VALUES ('296129','High Cisco Adaptive Security Appliance Remote Code "
       "Execution and Denial of Service Vulnerabilit','<Some long data around "
       "50KB>',NULL,'0','2018-02-07 02:11:50',NULL)")
cursor.execute(sql)
conn.commit()
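If I understand the SQL0102N error correctly, the 32K limit applies only to inline string constants, so binding the value as a parameter should avoid it. Here is a sketch using `?` parameter markers; it is untested against our database, and `build_insert` is just an illustrative helper:

```python
def build_insert():
    # '?' markers let the driver send the 50KB value as bound data,
    # so it is never parsed as a (length-limited) SQL string constant.
    sql = ("INSERT INTO Change "
           "(Ticket_ID, shortDesc, longDesc, releaseNum, isSoftDeleted, "
           "createTimeStamp, lastUpdateTimeStamp) "
           "VALUES (?, ?, ?, ?, ?, ?, ?)")
    long_desc = "x" * 50000  # stand-in for the real ~50KB CLOB data
    params = (296129,
              "High Cisco Adaptive Security Appliance Remote Code "
              "Execution and Denial of Service Vulnerabilit",
              long_desc, None, 0, "2018-02-07 02:11:50", None)
    return sql, params

sql, params = build_insert()
# with the connection and cursor from above:
#   cursor.execute(sql, params)
#   conn.commit()
```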
Thanks & Regards
*Aman Agrawal*
Dear Developers,
I have been working on a piece of code development that needs your sincere help (three code files attached here).
I use run_slurm.py with Python's subprocess module to submit multiple jobs to a Slurm cluster by invoking an sbatch file in a for loop, with the following goals:
1) create some environment variables for job_slurm.py to use in its simulations (the values of these environment variables change for each job in the for loop)
2) invoke submit_slurm.sh to submit an sbatch job that will run job_slurm.py
3) each job_slurm.py uses Python's multiprocessing.Pool to run parallelized simulations of the ./mod_bart.sh executable (each runs for a few hours) on a single compute node of the cluster, using all cores of that node
I get the following error every time; could you provide some insight into how our implementation falls short of the desired goal?
Exception in thread Thread-3:
Traceback (most recent call last):
File "/share/apps/python/2.7/lib/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "/share/apps/python/2.7/lib/python2.7/threading.py", line 505, in run
self.__target(*self.__args, **self.__kwargs)
File "/share/apps/python/2.7/lib/python2.7/multiprocessing/pool.py", line 347, in _handle_results
task = get()
TypeError: ('__init__() takes at least 3 arguments (1 given)', <class 'subprocess.CalledProcessError'>, ())
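My reading of the traceback (an assumption, not certain) is that a worker raised `subprocess.CalledProcessError`, which on Python 2.7 cannot be unpickled because its `__init__` requires arguments, so `Pool`'s result handler dies with exactly this `TypeError`. A common workaround is to convert it into a plain, picklable exception inside the worker; `run_job` below is a hypothetical example, not our actual code:

```python
import subprocess

def run_job(cmd):
    """Worker function for multiprocessing.Pool: any CalledProcessError
    is converted into a RuntimeError, which pickles cleanly across the
    Pool's result queue (CalledProcessError does not on Python 2.7)."""
    try:
        return subprocess.check_output(cmd, shell=True)
    except subprocess.CalledProcessError as exc:
        raise RuntimeError("command %r failed with exit code %d"
                           % (cmd, exc.returncode))
```

If this diagnosis is right, the workers in job_slurm.py would wrap their subprocess calls the same way before handing results back to the Pool.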
Hello,
As explained in
https://docs.python.org/3/reference/compound_stmts.html#the-try-statement ,
except E as N:
foo
is actually compiled as:
except E as N:
try:
foo
finally:
del N
But looking at the generated bytecode, it's worse than that:
16 STORE_NAME 1 (N)
18 POP_TOP
20 SETUP_FINALLY 8 (to 30)
4 22 LOAD_NAME 2 (foo)
24 POP_TOP
26 POP_BLOCK
28 LOAD_CONST 0 (None)
>> 30 LOAD_CONST 0 (None)
32 STORE_NAME 1 (N)
34 DELETE_NAME 1 (N)
36 END_FINALLY
It's clear that what happens there is that None is first stored to N,
just to delete it in the next step. Indeed, that's what is done in
compile.c:
https://github.com/python/cpython/blob/master/Python/compile.c#L2905
Storing None looks superfluous. For example, DELETE_FAST explicitly
stores NULL on delete.
https://github.com/python/cpython/blob/master/Python/ceval.c#L2249
Likewise, for DELETE_NAME/DELETE_GLOBAL, PyObject_DelItem() is
called which translates to:
m->mp_ass_subscript(o, key, (PyObject*)NULL);
So hopefully any compliant mapping implementation actually clears the
stored value rather than leaving it dangling.
The "store None" behavior can be traced back to the introduction of the
"del except target var" behavior in 2007:
https://github.com/python/cpython/commit/b940e113bf90ff71b0ef57414ea2beea9d…
There's no clear explanation of why it's done like that, so it's
probably an artifact of the initial implementation. Note that even
https://github.com/python/cpython/commit/520b7ae27e39d1c77ea74ccd1b184d7cb4…
which did quite a bit of refactoring of the "except" implementation,
and reformatted this code, otherwise left it in place.
P.S.
I actually wanted to argue that such an implementation is hardly ideal
at all. Consider:
----
e = "my precious data"
try:
1/0
except Exception as e:
pass
# Where's my *global* variable?
# Worse, my variable can be gone or not, depending on whether exception
# triggered or not.
print(e)
----
So, perhaps the change should be not removing the "e = None" part, but
conversely, removing the "del e" part.
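For completeness, the usual workaround for the implicit deletion (just an illustration, not part of my proposal) is to rebind the exception to a fresh name inside the handler:

```python
e = "my precious data"
saved = None
try:
    1/0
except Exception as exc:
    saved = exc      # `exc` is deleted when the handler ends;
                     # `saved` keeps the exception object alive
print(e)             # the global survives, since `exc` was a fresh name
print(repr(saved))
```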
--
Best regards,
Paul mailto:pmiscml@gmail.com