Hi Wes! Right now we don't push anything yet; we fetch everything into the Python runtime. But going forward, the current idea is to push as much computation to the database as possible (most of the time the database will do a better job than our engine). If we run on top of PySpark/Hadoop, I think we should be able to completely translate 100% of PythonQL into these jobs.
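To make the pushdown idea concrete, here is a toy sketch (not PythonQL's actual translator, just an illustration with the stdlib sqlite3 module and a made-up `orders` table): the same aggregate computed by pulling all rows into Python versus letting the database engine do the filter and sum.

```python
import sqlite3

# Hypothetical example data; not from PythonQL itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 10.0), ("bob", 5.0), ("alice", 7.5)])

# 1) Pull everything into the Python runtime, then compute locally.
rows = conn.execute("SELECT customer, amount FROM orders").fetchall()
local_total = sum(amount for customer, amount in rows
                  if customer == "alice")

# 2) Push the filter and the aggregate down to the database engine,
#    so only one number crosses the boundary.
(pushed_total,) = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer = 'alice'").fetchone()

assert local_total == pushed_total == 17.5
```

The translation work is exactly about turning form (1), which a naive comprehension implies, into form (2) wherever the backend supports it.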
On 1 Nov 2016, at 19:42, Wes Turner <firstname.lastname@example.org> wrote:

Cool! How do I determine how much computation is pushed to the data (instead of pulling all the data and running the computation on one local node)? ... https://en.wikipedia.org/wiki/Bulk_synchronous_parallel (MapReduce)

On Tuesday, November 1, 2016, Pavel Velikhov <email@example.com> wrote:

Hi Folks,
We have released PythonQL, a query language extension to Python (we have extended Python's comprehensions with a full-fledged query language,
drawing from the useful features of SQL, XQuery and JSONiq). Take a look at the project here: http://www.pythonql.org and let us know what you think!
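As a plain-Python point of comparison (this is ordinary Python, not PythonQL syntax; see the project site for that), here is the kind of SQL-style grouped aggregate that standard comprehensions can only express awkwardly, which is what the extended comprehension syntax is meant to clean up:

```python
from itertools import groupby

# Hypothetical example data.
sales = [("books", 12), ("music", 7), ("books", 3), ("music", 5)]

# A "GROUP BY category, SUM(amount)" query in plain Python: it needs a
# sort, groupby, and a nested comprehension just to express one query.
grouped = [
    (category, sum(amount for _, amount in items))
    for category, items in groupby(sorted(sales), key=lambda kv: kv[0])
]
```

A query language embedded in comprehensions can express grouping and aggregation declaratively instead of through this sort/groupby idiom.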
The way PythonQL currently works is that you mark PythonQL files with a special encoding, and the system runs a preprocessor on all such files. We have
an interactive interpreter and Jupyter support planned.
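For readers unfamiliar with the encoding trick: Python lets you register a custom codec, and a `# coding: ...` declaration then routes the source file through it before compilation. Below is a minimal toy sketch of that mechanism (an assumption about how such a preprocessor can be wired up; the real PythonQL codec name and rewriter differ, and the `toyql` codec and placeholder token here are made up):

```python
import codecs

def rewrite(source: str) -> str:
    # A real preprocessor would parse extended comprehensions here;
    # this toy version just rewrites one placeholder token.
    return source.replace("QUERY_HERE", "'rewritten by preprocessor'")

utf8 = codecs.lookup("utf-8")

def toy_decode(data, errors="strict"):
    text, consumed = utf8.decode(data, errors)
    return rewrite(text), consumed

def toy_search(name):
    # Codec search function: return our CodecInfo only for "toyql".
    if name == "toyql":
        return codecs.CodecInfo(encode=utf8.encode, decode=toy_decode,
                                name="toyql")
    return None

codecs.register(toy_search)

# A file starting with "# coding: toyql" would be decoded (and thus
# rewritten) through this codec; decoding by name shows the same effect:
code = codecs.decode(b"x = QUERY_HERE\n", "toyql")
namespace = {}
exec(code, namespace)
```

The appeal of this approach is that no changes to the interpreter are needed: the rewrite happens transparently whenever Python decodes the source file.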
Python-ideas mailing list
Code of Conduct: http://python.org/psf/codeofco