<br><br><div><span class="gmail_quote">On 1/11/07, <b class="gmail_sendername">Torgil Svensson</b> <<a href="mailto:email@example.com">firstname.lastname@example.org</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
On 1/11/07, Travis Oliphant <<a href="mailto:email@example.com">firstname.lastname@example.org</a>> wrote:<br>> Torgil Svensson wrote:<br>>> Example1: We have a very large amount of data with a compressed<br>>> internal representation
<br>>><br>>> Example2: We might want to generate data "on the fly" as it's needed<br>>><br>>> Example3: If module creators have to deal with different byte alignments,<br>>> contiguousness etc. it'll lead to lots of code duplication and
<br>>> unnecessary work<br>>><br>>> Is it possible to add a data access API to this PEP?<br>><br>> Could you give an example of what you mean? I have no problem with such<br>> a concept. I'm mainly interested in getting the NumPy memory model into
<br>> Python somehow. I know it's not the "only" way to think about memory,<br>> but it is a widely-used and useful way.<br><br>Sure. I'm not objecting to the memory model; what I mean is that data
<br>access between modules has a wider scope than just a memory model.<br>Maybe I'm completely out of scope here; I thought this was worth<br>considering for the inter-module data-sharing scope.</blockquote><div><br>
This is where separating the memory block from the API starts to show its advantages. OTOH, we should try to keep this all as simple and basic as possible. Trying to design for every potential use will lead to over-design; it is a fine line to walk.
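To make the separation concrete, here is a minimal sketch (pure Python, not from the PEP; all class and method names are hypothetical). The raw memory block carries no interpretation at all; the shape/stride access API is a separate layer, so two modules can share one block through independently constructed views:

```python
# Hypothetical sketch: keep the bare memory block separate from the
# access API layered on top of it.
import struct

class MemoryBlock:
    """A bare, shareable chunk of memory -- no interpretation attached."""
    def __init__(self, nbytes):
        self.data = bytearray(nbytes)

class ArrayView:
    """One possible access API: interprets a block as a 2-D float64 array."""
    ITEMSIZE = struct.calcsize('d')  # 8 bytes per double

    def __init__(self, block, shape):
        self.block = block
        self.shape = shape  # (rows, cols), row-major
        self.strides = (shape[1] * self.ITEMSIZE, self.ITEMSIZE)

    def _offset(self, i, j):
        # Byte offset of element (i, j) from the strides.
        return i * self.strides[0] + j * self.strides[1]

    def __getitem__(self, idx):
        i, j = idx
        return struct.unpack_from('d', self.block.data, self._offset(i, j))[0]

    def __setitem__(self, idx, value):
        i, j = idx
        struct.pack_into('d', self.block.data, self._offset(i, j), value)

# Two "modules" share the same block through independent views:
block = MemoryBlock(2 * 3 * 8)
a = ArrayView(block, (2, 3))
a[1, 2] = 42.0
b = ArrayView(block, (2, 3))  # separate view object, same memory
print(b[1, 2])                # 42.0
```

A different access API (compressed storage, on-the-fly generation, another stride layout) could wrap the same block without the block itself knowing anything about it, which is the advantage being pointed at above.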