[IPython-dev] javascript / Python communication for image viewer

James Booth jabooth at gmail.com
Thu Aug 28 03:45:15 EDT 2014

Hey Matthew, Cyrille,

I’m developing a three.js-based web app for image/object annotation called landmarker.io (I work in computer vision/machine learning, getting high quality annotations of 3D and 2D data is often critical to training models). It’s located here

www.landmarker.io (tool, launches in demo by default)

https://github.com/menpo/landmarker.io (code)

I’m also one of the developers of Menpo (www.menpo.io), which is a Python package for building and testing deformable models. I’m going to be developing an IPython widget version of the landmarker.io tool, so we can correct annotations as we spot problems in the notebook.

I mention all this as:

1. I’m also interested in understanding how we can transfer large arrays efficiently to IPython widgets.

2. I have a little experience in how best to handle this in a traditional client/server model, but I don’t know how well this will translate to the comm interface.

Basically, landmarker.io expects to be able to talk to a RESTful interface, which is currently implemented by a separate Python server.
Originally, I sent meshes as large JSON objects, but this was pretty slow as:

1. It’s more work in JS to parse the JSON into an array

2. It’s more work to turn the parsed array into an ArrayBuffer, the most efficient JS type for numerical arrays (and the one WebGL loves).



This meant that even with a server on localhost, skipping through faces to annotate felt a little sluggish.
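To make the JSON overhead concrete, here is a quick sketch (the mesh size is invented, just for illustration) comparing a JSON encoding of some points with the raw float32 buffer for the same data:

```python
import json
import numpy as np

# Hypothetical mesh: 10,000 3D points as float32 (not the actual
# landmarker.io data, just an illustration of the size difference).
points = np.random.rand(10000, 3).astype(np.float32)

json_bytes = json.dumps(points.tolist()).encode("utf-8")
raw_bytes = points.tobytes()  # C-contiguous float32 buffer

print(len(raw_bytes))                    # 10000 * 3 * 4 = 120000 bytes
print(len(json_bytes) > len(raw_bytes))  # the JSON text is several times larger
```

And that is before counting the parse cost on the JS side, which is where most of the sluggishness came from.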

Instead, I now directly build a pure ArrayBuffer in Python on the server and ship that to the client.


(I’m a little lazy here and literally save the file to disk, then gzip it. It could be done in memory with StringIO, though; I do it this way as a caching step, so it’s going to disk anyway.)
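A minimal sketch of that caching step, assuming a simple points-then-trilist layout (the real landmarkerio-server code differs, and the function name here is invented):

```python
import gzip
import numpy as np

def cache_mesh(points, trilist, path):
    # Pack the mesh into one raw little-endian buffer: float32 points
    # followed by a uint32 triangle list, then gzip it straight to disk.
    # The client can reinterpret this as typed arrays with no parsing.
    buf = points.astype('<f4').tobytes() + trilist.astype('<u4').tobytes()
    with gzip.open(path, 'wb') as f:
        f.write(buf)
    return len(buf)  # uncompressed byte count, handy for sanity checks
```

Serving the gzipped file also means the web server can hand it to the browser with Content-Encoding: gzip and let the browser do the decompression for free.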

On the client side, I make an XMLHttpRequest and ask the server for an ArrayBuffer response.


For completeness: the client then parses the ArrayBuffer into typed arrays.


Because three.js supports ArrayBuffers directly, this is really fast: I just point three.js at the array and we are away. With this implementation, browsing through subjects to annotate with a server on localhost (akin to moving between slices in your brain scan, I imagine) is very fast.
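For reference, the client-side parse can be sketched in Python with np.frombuffer, which does the same zero-copy reinterpretation as the JS typed-array views; the layout here (float32 points followed by a uint32 triangle list) is an assumption for illustration, not the actual wire format:

```python
import numpy as np

def parse_buffer(buf, n_points):
    # Reinterpret the raw bytes as typed arrays in place, with no
    # per-element conversion - the analogue of constructing
    # Float32Array/Uint32Array views over an ArrayBuffer in JS.
    points = np.frombuffer(buf, dtype='<f4', count=n_points * 3)
    trilist = np.frombuffer(buf, dtype='<u4', offset=n_points * 3 * 4)
    return points.reshape(-1, 3), trilist.reshape(-1, 3)
```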

I’m afraid I don’t have much knowledge of the comm interface - is there a document I could be pointed to that lays out the protocol? Does it sound possible to send a pure array to JS in the way I’m doing in landmarkerio-server?
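I don’t know the comm message format in detail, but if comm payloads have to survive JSON serialisation, a base64 fallback for the slice scheme Matthew sketches below might look like this (the function and field names are invented for the sketch). The arithmetic is encouraging: 20 slices/s of 64 x 64 float32 is only about 320 KB/s raw, so even with base64’s ~33% overhead it may well be fast enough:

```python
import base64
import numpy as np

def slice_message(volume, axis, index):
    # Hypothetical 'here is your slice' payload: the raw float32 bytes
    # are base64-encoded so the slice can travel inside a JSON-
    # serialised comm message, to be decoded into a typed array in JS.
    sl = np.ascontiguousarray(np.take(volume, index, axis=axis), dtype='<f4')
    return {'axis': axis, 'index': index, 'shape': list(sl.shape),
            'data': base64.b64encode(sl.tobytes()).decode('ascii')}

# Rough budget for Matthew's numbers: 20 slices/s of 64 x 64 float32.
raw_rate = 20 * 64 * 64 * 4    # 327,680 raw bytes per second
b64_rate = raw_rate * 4 // 3   # ~437 KB/s after base64 overhead
```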




On Wed, Aug 27, 2014 at 9:53 PM, Cyrille Rossant
<cyrille.rossant at gmail.com> wrote:

> We have pretty similar requirements for Vispy, as we target fast
> visualization of big datasets using OpenGL in the IPython notebook. In
> particular, we can't use Canvas or SVG because it's just too slow for
> big data, so we have to use WebGL. This lets us leverage the GPU for
> visualization.
> I would be very interested in seeing the answers to your questions.
> Notably, is it possible to use binary sockets for transferring large
> amounts of binary data between Python and JavaScript?
> Cyrille
> 2014-08-27 20:52 GMT+01:00 Matthew Brett <matthew.brett at gmail.com>:
>> Guys / gals,
>> I want to ask for advice about writing a brain image display widget for
>> IPython.
>> I would like to make an IPython widget that can take an in-memory numpy array
>> and do an interactive display of orthogonal slices from the array.  The
>> display will look something like Papaya:
>> http://rii.uthscsa.edu/mango/papaya
>> where clicking or moving the mouse causes matching slices to be displayed
>> through the three axes of the numpy array (here a brain image).
>> Papaya is pure javascript, so I am assuming that it loads the whole array
>> (brain image) into a javascript variable and takes slices from that.
>> What I would like to do, is to be able to keep the whole 3D array only in
>> Python, and pass the slices as needed to a javascript viewer.
>> In my ignorance, I am not sure which approach to go for first.
>> Should I use the comm / widget interface for this?  In that case I guess the
>> procedure would be:
>> * mouse movement generates a 'need slice' message from javascript
>> * python kernel accepts 'need slice' message, takes slice from array, base64
>>   encodes into JSON, sends 'here is your slice' message back to javascript
>>   with the data
>> * javascript accepts message, unpacks base64'ed JSONed slice into variable and
>>   displays slice variable
>> Is that right?  Is there any chance that this will be fast enough for
>> satisfying interactive movement through the image, which would likely
>> require something like 20 slices per second, each of say 64 x 64
>> floating point values?
>> If not - is there something else I should look at instead?
>> Another question, for extra thanks - should I use a Canvas or an SVG
>> element to display the images?  The fabric.js README [1] seems to imply
>> that the Canvas element is faster for some interactive stuff; does
>> anyone have relevant experience to share?
>> Thanks a lot,
>> Matthew
>> [1] https://github.com/kangax/fabric.js/#history
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev