Creating a sine wave with exponential decay
Hi everyone! Total NumPy newbie here.

I'd like to create an array with a million numbers that holds a sine wave with exponential decay on the amplitude. In other words, I want the value of each cell n to be sin(n) * 2 ** (n * factor).

What would be the most efficient way to do that? Someone suggested I do something like this:

    y = np.sin(x) * np.exp(newfactor * x)

But this would create two arrays, wouldn't it? Isn't that wasteful? Does NumPy provide an efficient way of doing that without creating a redundant array?

Thanks for your help,
Ram Rachum
Hi Ram,

No, NumPy doesn't have a way. In newer versions it probably won't create two arrays if all the dtypes match; it'll do some magic to reuse the existing ones, although it will still use multiple loops instead of just one.

You might want to look into NumExpr or Numba if you want an efficient implementation.
(Full disclosure: I work on Numba...)

Just to note, the NumPy implementation will allocate (and free) more than two arrays to compute that expression. It has to allocate the result array for each operation as Python executes, so the expression is equivalent to:

    s1 = newfactor * x
    s2 = np.exp(s1)
    s3 = np.sin(x)
    y = s3 * s2

However, memory allocation is still pretty fast compared to special math functions (exp and sin), which dominate that calculation. I find this expression takes around 20 milliseconds for a million elements on my older laptop, so the allocations might be negligible in your program's execution time unless you need to recreate this decaying exponential thousands of times. Tools like Numba or numexpr will fuse the loops so you only do one allocation, but they aren't necessary unless this becomes the bottleneck in your code.

If you are getting started with NumPy, I would suggest not worrying about these issues too much; focus on making good use of arrays, NumPy array functions, and array expressions in your code. If you have to write for loops (because there is no good way to do the operation with existing NumPy functions), I would reach for something like Numba, and if you want to speed up complex array expressions, both Numba and numexpr will do a good job.
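[Editor's note: if the allocations in Stanley's four-step expansion ever do matter, NumPy ufuncs accept an `out=` argument, so the four temporaries can be cut down to two persistent buffers with in-place operations. A sketch with made-up values for `x` and `newfactor`, not a recommendation for a first version:]

```python
import numpy as np

x = np.linspace(0.0, 100.0, 1_000_000)
newfactor = -0.05

# Naive version: allocates an array for each intermediate result.
y_naive = np.sin(x) * np.exp(newfactor * x)

# Reduced-allocation version: two buffers total, reused in place.
tmp = newfactor * x   # allocation 1
np.exp(tmp, out=tmp)  # exp written into the same buffer
y = np.sin(x)         # allocation 2
y *= tmp              # in-place multiply, no new array

assert np.allclose(y, y_naive)
```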
On Tue, 2019-07-23 at 13:38 -0500, Stanley Seibert wrote:
(Full disclosure: I work on Numba...)
Just to note, the NumPy implementation will allocate (and free) more than 2 arrays to compute that expression. It has to allocate the result array for each operation as Python executes. That expression is equivalent to:
That is mostly true, although, as Hameer mentioned, on many platforms (gcc compiler is needed, I think) a bit of magic happens: if an array is temporary, the operation is replaced with an in-place operation for most Python operator calls. For example, `abs(arr1 * arr2 / arr3 - arr4)` should create only a single new array and keep reusing it in many cases [0]. You would achieve similar things manually with `arr1 *= arr2`.

Another thing is that NumPy will cache some arrays, so the allocation cost itself may be avoided in many cases.

NumPy does no "loop fusing", i.e. each operation is finished before the next is started. In many cases, with simple math, loop fusing can give a very good speedup (which is where Numba or numexpr come in). Larger speedups are likely if you have large arrays and very simple math (e.g. addition). [1]

As Stanley noted, you probably should not worry too much about it. You have `exp`/`sin` in there, which are slow by nature. You can try, but it is likely that you simply cannot gain much speed there.

Best,

Sebastian

[0] It requires that the shapes all match and that the result arrays are obviously temporary.
[1] For small arrays, overheads may be avoided using tools such as Numba, which can help a lot as well. If you want to use multiple threads for a specific function, that may also be worth a look.
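[Editor's note: to make "loop fusing" concrete, NumPy runs one loop over the whole array per operation, while tools like Numba or numexpr compile the expression into a single pass. In pure Python that single pass would look like the following; it is orders of magnitude slower than NumPy and is shown only to illustrate what gets fused (the values of `x` and `newfactor` are made up):]

```python
import math
import numpy as np

x = np.linspace(0.0, 100.0, 10_000)
newfactor = -0.05

# One fused loop: each element is read once, all the math is applied,
# and the result is written once. No intermediate arrays exist.
y = np.empty_like(x)
for i, xi in enumerate(x):
    y[i] = math.sin(xi) * math.exp(newfactor * xi)

# Matches the multi-pass NumPy version.
assert np.allclose(y, np.sin(x) * np.exp(newfactor * x))
```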
(Disclosure: I have been a long-time maintainer of numexpr.)

An important thing to note is that, besides avoiding the creation of big temporaries, numexpr supports multithreading out of the box and can use Intel VML for optimal evaluation times on Intel CPUs. For choosing your best bet, there is no replacement for experimentation:

https://gist.github.com/FrancescAlted/203be8a44d02566f31dae11a22c179f3

I have no time now to check memory consumption, but you can expect numexpr and Numba to consume about the same amount of memory. Performance-wise things are quite different, but this is probably due to my inexperience with Numba; in particular, parallelism does not seem to work for this example in Numba 0.45, but I am not sure why.

Cheers!

Francesc
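[Editor's note: for reference, the numexpr call for this thread's expression is a one-liner. It requires the third-party `numexpr` package; the string is compiled once and evaluated in a single multithreaded pass, and variables are looked up by name in the calling frame. The values of `x` and `newfactor` below are made up.]

```python
import numpy as np
import numexpr as ne

x = np.linspace(0.0, 100.0, 1_000_000)
newfactor = -0.05

# Compiles and evaluates the whole expression in one fused pass;
# x and newfactor are picked up from the local namespace.
y = ne.evaluate("sin(x) * exp(newfactor * x)")
```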
Hi Francesc,

Those numbers are really eye-popping! But the formatting of the code as a string still bugs me a lot. Asking this as a totally naive user: do you know whether PEP 523 <https://www.python.org/dev/peps/pep-0523/> (adding a frame evaluation API) would allow numexpr to have a more Pythonic syntax? E.g.:

    with numexpr:
        y = np.sin(x) * np.exp(newfactor * x)

?

Juan.
Hi Juan,

Over time I have grown to appreciate the simplicity of strings for representing the expressions that numexpr is designed to tackle. Having said that, PEP 523 looks intriguing indeed. As always, PRs are welcome!

Francesc
Thanks for your answers and insight, everybody!
participants (6)
- Francesc Alted
- Hameer Abbasi
- Juan Nunez-Iglesias
- Ram Rachum
- Sebastian Berg
- Stanley Seibert