On Tuesday, November 1, 2016, Pavel Velikhov email@example.com wrote:
Right now we don’t push anything yet; we fetch everything into the Python runtime. But going forward, the idea is to push as much computation to the database as possible (most of the time the database will do a better job than our engine).
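To make the trade-off concrete, here is a small sketch (not PythonQL internals, just plain Python over an in-memory SQLite table) of the same predicate evaluated both ways: pushed down to the database versus pulled into the Python runtime.

```python
import sqlite3

# Illustrative sketch: one predicate, two execution strategies.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, float(i * 10)) for i in range(1000)])

# 1) Pushed down: the database evaluates the WHERE clause, and only
#    the matching rows cross into the Python runtime.
pushed = conn.execute(
    "SELECT id, total FROM orders WHERE total > 9900").fetchall()

# 2) Pulled up: every row is fetched, then filtered in Python --
#    the strategy PythonQL currently falls back on for everything.
pulled = [(i, t) for (i, t) in
          conn.execute("SELECT id, total FROM orders")
          if t > 9900]

assert pushed == pulled  # same answer, very different data movement
```

Both produce the same nine rows, but the pushed-down version moves nine rows across the boundary instead of a thousand.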
If we run on top of PySpark/Hadoop, I think we should be able to translate 100% of PythonQL into these jobs.
That would be great; and fast.
A few more links that may be of use (in addition to ops._ in alchemy.py):
- http://www.ibis-project.org/faq.html#ibis-and-spark-pyspark
- https://github.com/cloudera/ibis/tree/master/ibis/spark
- http://spark.apache.org/docs/latest/sql-programming-guide.html#datasets-and-...
- https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apac...
How do I determine how much computation is pushed to the database, instead of pulling all the data and running the computation on one local node? ... https://en.wikipedia.org/wiki/Bulk_synchronous_parallel (MapReduce)
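One generic way to answer this question for a SQL backend (not PythonQL-specific) is to ask the database itself for its execution plan; most engines expose an EXPLAIN facility. A minimal sketch with SQLite:

```python
import sqlite3

# Sketch: inspect the database's plan to see whether a predicate is
# evaluated inside the engine (here, via an index) or not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("CREATE INDEX ix ON t (x)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE x = 1").fetchall()
for row in plan:
    print(row)  # the plan text mentions the index ix being searched
```

If the plan shows the database searching the index for `x = 1`, the filter runs inside the engine; a plan that scans the full table and nothing else would mean the filtering work lands on the client.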
We have released PythonQL, a query language extension to Python: we have extended Python’s comprehensions with a full-fledged query language, drawing on the useful features of SQL, XQuery and JSONiq. Take a look at the project here: http://www.pythonql.org and let us know what you think!
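To see the gap those SQL-style features fill, here is the kind of query that is awkward in plain comprehensions (this is ordinary Python, not PythonQL syntax; see pythonql.org for the actual language):

```python
from itertools import groupby

# A SQL-style "SELECT category, SUM(amount) ... GROUP BY category"
# needs an explicit sort + groupby in ordinary Python:
sales = [("books", 10), ("games", 25), ("books", 5), ("games", 15)]

totals = {cat: sum(amount for _, amount in grp)
          for cat, grp in groupby(sorted(sales), key=lambda r: r[0])}

print(totals)  # {'books': 15, 'games': 40}
```

A query language with a first-class `group by` clause expresses this in one step, without the sort/groupby/lambda boilerplate.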
The way PythonQL currently works is that you mark PythonQL files with a special encoding declaration, and the system runs a preprocessor on all such files. We have an interactive interpreter and Jupyter support planned.
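The encoding trick relies on Python's codec machinery: a `# coding: ...` line makes the interpreter decode the source through a registered codec, which can rewrite the text before the parser sees it. A toy sketch of the hook (the codec name, the `QQ` marker, and the rewrite are made up for illustration; PythonQL's actual preprocessor is more involved):

```python
import codecs
import encodings

def transform(source: str) -> str:
    # Toy rewrite: replace a made-up marker. A real preprocessor would
    # translate extended query syntax into plain Python here.
    return source.replace("QQ", "[rewritten]")

def decode(data, errors="strict"):
    text, consumed = codecs.utf_8_decode(data, errors, True)
    return transform(text), consumed

def search(name):
    # Only answer for our hypothetical codec name.
    if name != "demo_preproc":
        return None
    utf8 = encodings.search_function("utf-8")
    return codecs.CodecInfo(
        name="demo_preproc",
        encode=utf8.encode,   # encoding path left unchanged
        decode=decode,        # decoding applies the rewrite
        streamreader=utf8.streamreader,
        streamwriter=utf8.streamwriter,
    )

codecs.register(search)

print(codecs.decode(b"x = QQ", "demo_preproc"))  # x = [rewritten]
```

With such a codec installed, any file whose coding line names it gets transformed transparently at import time, which is why no separate build step is needed.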
Best regards!
PythonQL team

_______________________________________________
Python-ideas mailing list
Pythonfirstname.lastname@example.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/