Re: each_channel decorator
Tony,

How about a third option, computing on each channel in LAB space?

    elif rgb_behavior == 'lab':
        lab = color.rgb2lab(image)
        c_new = [image_filter(c) for c in np.rollaxis(lab, -1)]
        out = np.rollaxis(np.array(c_new), 0, 3)
        out = color.lab2rgb(out)

I tried this with "edges", and it produced a pretty interesting image.

On Wednesday, October 17, 2012 8:48:58 PM UTC-5, Tony S Yu wrote:
On Wed, Oct 17, 2012 at 4:37 PM, Stéfan van der Walt <ste...@sun.ac.za> wrote:

On Wed, Oct 17, 2012 at 7:39 AM, Schönberger Johannes <hannessch...@gmail.com> wrote:
Sounds good. Do you have a paper or any reliable source about this? Just interested in this...
The question is: should we make this a decorator, or simply provide it as utility functions? I'm wondering, because we now have two different ways of handling color images: split them into their R, G, and B channels and filter each separately, or convert them to LAB and apply the filter there. How do we expose both of these sensible alternatives to the user?
We can have a *default*, so that all single-channel filters are done on the luminance layer, but then combine that with utility functions. E.g.
    @luminance_filter
    def gray_filter(image, ...):
        ...

    gray_filter(color_image, params)
        --> convert to LAB, filter on L, convert back

    filter_layers(color_image, gray_filter, params)
        --> filter each layer separately and combine
Alternatively, skip the decorators completely, and just provide two utility functions: filter_layers and filter_luminance.
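As a concrete reading of the two proposed utilities, they might look roughly like this (only the names `filter_layers` and `filter_luminance` come from the proposal; the bodies below are my assumptions, sketched with numpy and skimage.color):

```python
import numpy as np
from skimage import color

def filter_layers(image, image_filter, *args, **kwargs):
    # Apply a gray-scale filter to each channel separately, then re-stack.
    channels = [image_filter(image[..., i], *args, **kwargs)
                for i in range(image.shape[-1])]
    return np.stack(channels, axis=-1)

def filter_luminance(image, image_filter, *args, **kwargs):
    # Convert to LAB, filter only the lightness channel, convert back.
    lab = color.rgb2lab(image)
    lab[..., 0] = image_filter(lab[..., 0], *args, **kwargs)
    return color.lab2rgb(lab)
```

Note that in `filter_luminance` the filter sees the L channel on its native 0-100 scale, so filters that assume a [0, 1] range would need rescaling.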
I'm not really sure how a utility function would work. Wouldn't you need to provide at least two utility functions when converting to LAB:

    image, ab = prepare_rgb(image)
    # ... code to generate `filtered_image` from `image`
    filtered_image = finalize_rgb(filtered_image, ab)

Here, `ab` would be some dummy value if the input image is already gray. This is actually a bit cryptic, in order to prevent special-casing for RGB images; otherwise, you'd have to add some conditionals as well. And I don't even know how the layer-by-layer approach would work as a utility function.
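For illustration, the helper pair sketched above might be implemented like this (only the names `prepare_rgb` and `finalize_rgb` come from the message; the bodies are hypothetical):

```python
import numpy as np
from skimage import color

def prepare_rgb(image):
    # Split an RGB image into its lightness channel and its (a, b) channels.
    # Gray input passes through with a dummy `ab`, so callers need no branch.
    if image.ndim == 2:
        return image, None
    lab = color.rgb2lab(image)
    return lab[..., 0], lab[..., 1:]

def finalize_rgb(filtered, ab):
    # Recombine a filtered lightness channel with the stored color channels.
    if ab is None:
        return filtered
    return color.lab2rgb(np.dstack((filtered, ab)))
```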
Basically, that's all to say that I think a decorator makes a lot more sense in this case. In fact, you can handle the different behaviors (layer-by-layer vs. lightness channel) by introducing a parameter to either the decorator or the wrapped filter (or both---the decorator parameter would set the default, which you could override at runtime). Here's a mock-up adding a parameter to the filter:
https://gist.github.com/3909400
The edge filter example is a bit strange: if you filter by lightness (Oops, I called it "luminance" in the example), then you get weird results.
-Tony
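The mock-up in the gist adds the behavior switch as a filter parameter; a minimal decorator along those lines might look like this (the name `adapt_rgb` and the option strings are assumptions for illustration, not the gist's actual code):

```python
import functools
import numpy as np
from skimage import color

def adapt_rgb(image_filter):
    # Wrap a gray-scale filter so it also accepts RGB images.  The added
    # `rgb_behavior` keyword selects how color input is handled.
    @functools.wraps(image_filter)
    def wrapper(image, *args, rgb_behavior='each_channel', **kwargs):
        if image.ndim == 2:                 # already gray: pass straight through
            return image_filter(image, *args, **kwargs)
        if rgb_behavior == 'each_channel':  # filter R, G, B separately
            channels = [image_filter(image[..., i], *args, **kwargs)
                        for i in range(image.shape[-1])]
            return np.stack(channels, axis=-1)
        if rgb_behavior == 'lightness':     # filter only L in LAB space
            lab = color.rgb2lab(image)
            lab[..., 0] = image_filter(lab[..., 0], *args, **kwargs)
            return color.lab2rgb(lab)
        raise ValueError("unknown rgb_behavior: %r" % rgb_behavior)
    return wrapper
```

The decorator-parameter variant Tony mentions would simply make the default for `rgb_behavior` an argument of `adapt_rgb` itself.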
On Wed, Oct 17, 2012 at 11:01 PM, Steven Silvester <steven.silvester@gmail.com> wrote:
Tony,
How about a third option, computing on each channel in LAB space?
    elif rgb_behavior == 'lab':
        lab = color.rgb2lab(image)
        c_new = [image_filter(c) for c in np.rollaxis(lab, -1)]
        out = np.rollaxis(np.array(c_new), 0, 3)
        out = color.lab2rgb(out)
I tried this with "edges", and it produced a pretty interesting image.
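Steven's snippet, fleshed out into a self-contained sketch (the wrapper function name, and the use of `skimage.filters.sobel` and `data.astronaut` as the stand-in filter and test image, are my additions):

```python
import numpy as np
from skimage import color, data, filters

def lab_each_channel(image_filter, image):
    # Convert RGB to LAB, filter L, a, and b independently, convert back.
    lab = color.rgb2lab(image)
    c_new = [image_filter(c) for c in np.rollaxis(lab, -1)]
    out = np.rollaxis(np.array(c_new), 0, 3)
    return color.lab2rgb(out)

rgb = data.astronaut() / 255.0            # float RGB image in [0, 1]
edges = lab_each_channel(filters.sobel, rgb)
```

Filtering the a and b channels distorts the colors in ways the lightness-only variant does not, which is presumably why the result looks so different.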
Interesting! That's closer to what I would have expected when operating on the lightness channel (but still not quite what I expected). And it gives reasonable results with the smoothing operation. You've made me realize that I don't understand the LAB color space at all. :P

-Tony
--
participants (2): Steven Silvester, Tony Yu