Symlinks vs API -- question for developers

So I have a question for all the developers on this list. Philip thinks that using symlinks will drive adoption better than an API to access package data. I think an API will have better adoption than a symlink hack. But the real question is: what do the people who maintain packages think? Since Philip's given his reasoning, here's mine:

1) Philip says that with symlinks, distributions will likely have to submit patches to the build scripts to tag various files as belonging to certain categories. If you, as an upstream, are going to accept a patch to your build scripts to place files in a different place, wouldn't you also accept a patch to your source code to use a well-defined API to pull files from a different source? This is a distribution's bread and butter, and if there's a small, useful, well-liked, standard API for accessing data files, you will start receiving patches from distributions that want to help you help them.

2) Symlinks cannot be used universally. Although it might not be common to want an FHS-style install in an environment without symlink support, it isn't unheard of. At one time in the distant past I had to use cygwin, so I know that while this may be a corner case, it does exist.

3) The primary argument for symlinks is that symlinks are compatible with __file__. But this compatibility comes at a cost -- symlinks can't do anything extra. In a different subthread Philip argues that setuptools provides more than distutils, that that's why people switch, and that the next-generation tool needs to provide even more than setuptools. Symlinks cannot do that.

4) In contrast, an API can do more: it can deal with writable files. On Unix, persistent per-user storage would go in the user's home directory; on other OSes it would go somewhere else. This is abstractable using an API at runtime, but not using symlinks at install time.

5) Cross-package data. Using __file__ to detect file location is inherently unsuitable for crossing package boundaries. EggTranslations would not be able to use a symlink-based backend to do its work for this reason.

6) Zipped eggs. These require an API, so moving to symlinks is actually a regression.

7) Philip says that the reason pkg_resources does not see widespread adoption is that the developer cost of using an API is too high compared to __file__. I don't believe that the difference between __file__ and an API is that great. An example of using an API could be something like this:

Symlinks::

    import os
    icondirectory = os.path.join(os.path.basename(__file__), 'icons')

API::

    import pkgdata
    icondirectory = pkgdata.resource(pkg='setuptools',
                                     category='icon',
                                     resource='setuptools.png')

Instead, I think the data-handling portion of pkg_resources is not more widely adopted for these reasons:

* pkg_resources's package handling is painful for the not-infrequent corner cases. People who have encountered the problems with require() not overriding a default, or not selecting the proper version when multiple packages specify overlapping version ranges, already have a negative impression of the library before they even get to the data-handling portion.

* pkg_resources does too much: loading libraries by version really has nothing to do with loading data for use by a library. This is a drawback because people think of and promote pkg_resources as a way to enable easy_install rather than as a way to abstract data location.

* The only benefit (at least, the one promoted in the documentation) is to allow zipped eggs to work. Distributions have no reason to create zipped eggs, so they have no reason to submit patches upstream to support the pkg_resources API.

* Distributions, further, don't want to install all-in-one egg directories on the system. The pkg_resources API just gets in the way of doing things correctly in a distribution. I've had to patch code not to use pkg_resources when data is installed in the FHS-mandated areas. Far from encouraging distributions to send patches upstream to make modules use pkg_resources, this makes distributions actively discourage upstreams from using it.

* The API isn't flexible enough. EggTranslations places its data within the metadata store of eggs instead of within the data store, because the metadata can be read from outside the package in which it is included, while the package data can only be accessed from within the package.

8) To a distribution, symlinks are just a hack. We use them for things like PHP web apps when the web application is hardcoded to accept only one path for things (like writable state files being intermixed with the program code). Managing a symlink farm is not something distributions are going to get excited over, so adoption by distributions of this way of working with files won't happen until upstreams move on their own.

Further, since the install tool is being proposed as a separate project from the metadata that marks files, the expectation is that distributions are going to want to write an install tool that manages this symlink farm. For that to happen, you have to get distributions to be much more than simply neutral about the idea of symlinks; you have to have them enthused enough about symlinks that they are willing to spend time writing a tool to do it.

So once again, I think this boils down to these questions: if we have a small library whose sole purpose is to abstract a data store, so you can find out where a particular non-code file lives on the system, will you use it? If a distribution packager sends you a patch so the data files are marked correctly and the code can retrieve their location instead of hardcoding an offset against __file__, will you commit it?

-Toshio
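To make point 7 concrete, here is a minimal sketch of what such a pkgdata-style lookup could look like. Everything here is hypothetical: the module doesn't exist, and the category table and paths are illustrations only -- a real install tool would generate the mapping at install time for the platform in question.

```python
import os

# Hypothetical category-to-directory table. On an FHS system an
# install tool would write this at install time; on other platforms
# it would point somewhere else entirely.
_CATEGORY_ROOTS = {
    'icon': '/usr/share/icons',
    'locale': '/usr/share/locale',
}

def resource(pkg, category, resource):
    """Return the installed location of a package's data file."""
    # The caller never hardcodes an offset against __file__; the
    # table above is the single place that knows the layout.
    return os.path.join(_CATEGORY_ROOTS[category], pkg, resource)
```

With a per-platform table behind it, a call like resource(pkg='setuptools', category='icon', resource='setuptools.png') resolves to the right place whether the data landed in site-packages, /usr/share, or a per-user directory.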

On Oct 17, 2008, at 1:32 PM, Toshio Kuratomi wrote:
7) Philip says that the reason pkg_resources does not see widespread adoption is that the developer cost of using an API is too high compared to __file__. I don't believe that the difference between __file__ and an API is that great. An example of using an API could be something like this:
Symlinks::

    import os
    icondirectory = os.path.join(os.path.basename(__file__), 'icons')
s/basename/dirname/ I think.
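The substitution is easy to check with a literal path (a made-up location, just for illustration):

```python
import os

path = '/usr/lib/python/setuptools/__init__.py'

# What the original snippet computes: basename() keeps only the file
# name, so joining 'icons' onto it yields a bogus relative path.
wrong = os.path.join(os.path.basename(path), 'icons')   # '__init__.py/icons'

# What was intended: dirname() gives the containing directory.
right = os.path.join(os.path.dirname(path), 'icons')    # '/usr/lib/python/setuptools/icons'
```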
API::

    import pkgdata
    icondirectory = pkgdata.resource(pkg='setuptools',
                                     category='icon',
                                     resource='setuptools.png')
Having tried to be religious about using pkg_resources instead of __file__ in all my new code, I tend to agree that the API cost is not that high. I don't particularly like the verbosity of the names chosen, but I actually like not having to use the __file__ idiom.
* Distributions, further, don't want to install all-in-one egg directories on the system. The pkg_resources API just gets in the way of doing things correctly in a distribution. I've had to patch code to not use pkg_resources if data is installed in the FHS mandated areas. Far from encouraging distributions to send patches upstream to make modules use pkg_resources this makes distributions actively discourage upstreams from using it.
I hadn't thought of this, but yes, this is a serious negative.
So once again, I think this boils down to these questions: if we have a small library whose sole purpose is to abstract a data store so you can find out where a particular non-code file lives on this system will you use it?
I would. I apologize for not having followed the discussion that closely, but as an application developer, I would really like an API that hides all the location nonsense from me. As familiar as __file__ is, it's a fragile hack.

-Barry

2008/10/17 Barry Warsaw <barry@python.org>:
So once again, I think this boils down to these questions: if we have a small library whose sole purpose is to abstract a data store so you can find out where a particular non-code file lives on this system will you use it?
I would. I apologize for not having followed the discussion that closely, but as an application developer, I would really like an API that hides all the location nonsense from me. As familiar as __file__ is, it's a fragile hack.
I'd like an API, as well. It's probably the only truly cross-platform approach.

Having said that, a key question is: what precisely is needed here? Python 2.6 has pkgutil.get_data, which abstracts the idea of grabbing the content of a file. There's not much else that can realistically be supported by fully general PEP 302 style loaders, so you have to start either (1) requiring additional functionality from loaders, or (2) restricting usage to filesystems and losing the whole concept of the "loader protocol" which PEP 302 provided. (And that way lie incompatibilities with py2exe, which is very common on Windows, and with zipped eggs, as well as possibly other more obscure cases.)

I'd fully support the development of an API for data access, but I'd suggest that it should take the form of a PEP extending PEP 302, plus an implementation in core Python (pkgutil is the obvious place), rather than being restricted purely to an external module like setuptools. Of course, having an external implementation for use in older versions of Python which don't have the API in core would be fine, but the aim should be the core. (Otherwise, it's adding a dependency to projects that otherwise don't need it.)

Paul.
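For reference, the pkgutil.get_data call Paul mentions already goes through whatever PEP 302 loader imported the package, so the same two lines work for filesystem packages, zipped eggs, and anything else whose loader implements get_data(). The stdlib package used below is chosen only because it is guaranteed to exist:

```python
import pkgutil

# get_data(package, resource) imports the package, asks its loader
# for the named file relative to the package directory, and returns
# the raw bytes (or None if the loader doesn't support get_data()).
data = pkgutil.get_data('ctypes', 'util.py')
```

Note that the resource name is relative to the package, not an absolute path, which is exactly the abstraction being asked for here.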

Toshio Kuratomi wrote:
So I have a question for all the developers on this list. Philip thinks that using symlinks will drive adoption better than an API to access package data. I think an API will have better adoption than a symlink hack. But the real question is what do people who maintain packages think? Since Philip's given his reasoning, here's mine:
1) Philip says that with symlinks distributions will likely have to submit patches to the build scripts to tag various files as belonging to certain categories. If you, as an upstream are going to accept a patch to your build scripts to place files in a different place wouldn't you also accept a patch to your source code to use a well defined API to pull files from a different source? This is a distribution's bread and butter and if there's a small, useful, well-liked, standard API for accessing data files you will start receiving patches from distributions that want to help you help them.
Annotating my files is extremely unlikely to break code, so I am more likely to accept a patch that does that.
2) Symlinks cannot be used universally. Although it might not be common to want an FHS style install in such an environment, it isn't unheard of. At one time in the distant past I had to use cygwin so I know that while this may be a corner case, it does exist.
3) The primary argument for symlinks is that symlinks are compatible with __file__. But this compatibility comes at a cost -- symlinks can't do anything extra. In a different subthread Philip argues that setuptools provides more than distutils and that's why people switch and that the next generation tool needs to provide even more than setuptools. Symlinks cannot do that.
As a library writer I have no motivation to do any of this. New features do drive adoption more quickly than simple cleanup, but only features that would help me as a developer in some way (including making it easier to support users). A new API wouldn't help me, and might hurt, as it means more conventions to communicate to other developers. Also, I'd have to debug problems with the resource loading, which would be nothing but frustration. I hate platform issues, and moving files around just means there are more platform issues I'd be exposed to. Nothing platform-specific is of any interest to me as a developer -- unfortunately such problems come up often, but I don't want to go looking for new platform issues.
4) In contrast an API can do more: It can deal with writable files. On Unix, persistent, per user storage would go in the user's home directory, on other OS's it would go somewhere else. This is abstractable using an API at runtime but not using symlinks at install time.
Writable stuff is quite different, IMHO. An API for writable files might be useful, but there's no current conventions around it, and I would expect that API to be entirely different from a resource API.
5) cross package data. Using __file__ to detect file location is inherently not suitable for crossing package boundaries. Egg Translations would not be able to use a symlink based backend to do its work for this reason.
You'll need to explain further, as I am unclear on the problem with __file__ in this context. For instance, couldn't you symlink somepackage/translations/ to /usr/share/lang/somepackage?
6) zipped eggs. These require an API. So moving to symlinks is actually a regression.
True.
7) Philip says that the reason pkg_resources does not see widespread adoption is that the developer cost of using an API is too high compared to __file__. I don't believe that the difference between __file__ and an API is that great. An example of using an API could be something like this:
Symlinks::

    import os
    icondirectory = os.path.join(os.path.basename(__file__), 'icons')
API::

    import pkgdata
    icondirectory = pkgdata.resource(pkg='setuptools',
                                     category='icon',
                                     resource='setuptools.png')
Instead I think the data handling portion of pkg_resources is not more widely adopted for these reasons:
Just personally, it's entirely laziness on my part; I can't remember the signatures for the resource stuff, so I write what I most immediately remember. I think the Distro/package ambiguity also confuses me.
* pkg_resources's package handling is painful for the not-infrequent corner cases. So people who have encountered the problems with require() not overriding a default or not selecting the proper version when multiple packages specify overlapping version ranges already have a negative impression of the library before they even get to the data handling portion.
* pkg_resources does too much: loading libraries by version really has nothing to do with loading data for use by a library. This is a drawback because people think of and promote pkg_resources as a way to enable easy_install rather than a way to enable abstraction of data location.
* The only benefit (at least, being promoted in the documentation) is to allow zipped eggs to work. Distributions have no reason to create zipped eggs so they have no reason to submit patches to upstream to support the pkg_resources api.
* Distributions, further, don't want to install all-in-one egg directories on the system. The pkg_resources API just gets in the way of doing things correctly in a distribution. I've had to patch code to not use pkg_resources if data is installed in the FHS mandated areas. Far from encouraging distributions to send patches upstream to make modules use pkg_resources this makes distributions actively discourage upstreams from using it.
* The API isn't flexible enough. EggTranslations places its data within the metadata store of eggs instead of within the data store. This is because the metadata is able to be read outside of the package in which it is included while the package data can only be accessed from within the package.
8) To a distribution, symlinks are just a hack. We use them for things like php web apps when the web application is hardcoded to accept only one path for things (like the writable state files being intermixed with the program code). Managing a symlink farm is not something distributions are going to get excited over so adoption by distributions that this is the way to work with files won't happen until upstreams move on their own.
Further, since the install tool is being proposed as a separate project from the metadata to mark files, the expectation is that the distributions are going to want to write an install tool that manages this symlink farm. For that to happen, you have to get distributions to be much more than simply neutral about the idea of symlinks, you have to have them enthused enough about using symlinks that they are willing to spend time writing a tool to do it.
So once again, I think this boils down to these questions: if we have a small library whose sole purpose is to abstract a data store so you can find out where a particular non-code file lives on this system will you use it?
Realistically, no.
If a distribution packager sends you a patch so the data files are marked correctly and the code can retrieve their location instead of hardcoding an offset against __file__ will you commit it?
If it adds a dependency and an abstraction that isn't obvious, then no, I would not commit it. Just marking the files is fine, because it has no impact on other code. -- Ian Bicking : ianb@colorstudy.com : http://blog.ianbicking.org

Toshio Kuratomi wrote:
So once again, I think this boils down to these questions: if we have a small library whose sole purpose is to abstract a data store so you can find out where a particular non-code file lives on this system will you use it?
As part of the stdlib? Yes, I'd use it. -- Greg

At 10:32 AM 10/17/2008 -0700, Toshio Kuratomi wrote:
So I have a question for all the developers on this list. Philip thinks that using symlinks will drive adoption better than an API to access package data. I think an API will have better adoption than a symlink hack. But the real question is what do people who maintain packages think? Since Philip's given his reasoning, here's mine:
1) Philip says that with symlinks distributions will likely have to submit patches to the build scripts to tag various files as belonging to certain categories. If you, as an upstream are going to accept a patch to your build scripts to place files in a different place wouldn't you also accept a patch to your source code to use a well defined API to pull files from a different source? This is a distribution's bread and butter and if there's a small, useful, well-liked, standard API for accessing data files you will start receiving patches from distributions that want to help you help them.
I'll leave this to the developers, but please note that the real historical answer to this question is "no", or at least "not in the current release". Keep in mind that most "yeses" you get to this question will really mean, "when I can get around to understanding the API and testing it and have time to put it in a new release" -- while the "yeses" for adding spec metadata are more likely to mean, "yes, I'll check it in right now if it looks correct".
2) Symlinks cannot be used universally. Although it might not be common to want an FHS style install in such an environment, it isn't unheard of. At one time in the distant past I had to use cygwin so I know that while this may be a corner case, it does exist.
Cygwin does symlinks, actually.
3) The primary argument for symlinks is that symlinks are compatible with __file__. But this compatibility comes at a cost -- symlinks can't do anything extra. In a different subthread Philip argues that setuptools provides more than distutils and that's why people switch and that the next generation tool needs to provide even more than setuptools. Symlinks cannot do that.
I think Ian's already said this, but the API itself has to do something more, and so far nobody's proposed an API that does anything "more" than what setuptools does in this area, from the developer point of view. (Except for the request that such an API be in the stdlib and thus avoid an extra dependency... but that of course introduces yet another implementation delay, if it means a new release of Python.)
4) In contrast an API can do more: It can deal with writable files. On Unix, persistent, per user storage would go in the user's home directory, on other OS's it would go somewhere else. This is abstractable using an API at runtime but not using symlinks at install time.
This is all well and good, but it's actually quite orthogonal to most uses of __file__ and resources today.
5) cross package data. Using __file__ to detect file location is inherently not suitable for crossing package boundaries. Egg Translations would not be able to use a symlink based backend to do its work for this reason.
EggTranslations doesn't use __file__, it uses the API, so I don't see how this relates.
6) zipped eggs. These require an API. So moving to symlinks is actually a regression.
As I mentioned earlier, setuptools marks eggs that use __file__ as needing to be installed unzipped, so it's not a regression; it's simply providing the same level of compatibility that setuptools does. It's requiring the use of an API that's a regression wrt developer-side features.
7) Philip says that the reason pkg_resources does not see widespread adoption is that the developer cost of using an API is too high compared to __file__. I don't believe that the difference between __file__ and an API is that great.
It isn't; it's the *switching* cost that's high, and that's the cost that needs to be minimized in order to drive adoption quickly.
[snip]
I'll just note that the bullets I'm skipping are mostly irrelevant to the issue at hand: i.e., switching cost of using *any* API, AND switching cost for the developers who *are* using pkg_resources presently. Let's not forget that second group of people, because the fact they are using the API shows they are likely early adopters. Make it too hard for them to switch, and you might not have any early adopters left for the new thing. ;-)
* The API isn't flexible enough. EggTranslations places its data within the metadata store of eggs instead of within the data store. This is because the metadata is able to be read outside of the package in which it is included while the package data can only be accessed from within the package.
Actually, this is incorrect. EggTranslations' use of project-level data is so that it's not necessary to include a Python module in the egg, just to have a place to put the data. Access from other packages hasn't got anything to do with it.
8) To a distribution, symlinks are just a hack. We use them for things like php web apps when the web application is hardcoded to accept only one path for things (like the writable state files being intermixed with the program code). Managing a symlink farm is not something distributions are going to get excited over so adoption by distributions that this is the way to work with files won't happen until upstreams move on their own.
We need to distinguish between "providing the ability to have a low-cost transition" and "the recommended True Way". IOW, symlinks and an API are not mutually exclusive; I'm just pointing out that if an API is required, the transition of packages to the new standard will occur *only as quickly as the slowest upstream dependency*. If the developer of A depends on B, and B hasn't transitioned yet, then A can't transition.
Further, since the install tool is being proposed as a separate project from the metadata to mark files, the expectation is that the distributions are going to want to write an install tool that manages this symlink farm. For that to happen, you have to get distributions to be much more than simply neutral about the idea of symlinks, you have to have them enthused enough about using symlinks that they are willing to spend time writing a tool to do it.
Well, the question is whether they prefer to have a long, drawn out transition or not. Maybe they don't care about that part, but my assumption was that a replacement for setuptools/easy_install in this space was desired sooner rather than later. If that's the case, then making it possible for packages to transition without changing their runtime code is a must-have.
So once again, I think this boils down to these questions: if we have a small library whose sole purpose is to abstract a data store so you can find out where a particular non-code file lives on this system will you use it? If a distribution packager sends you a patch so the data files are marked correctly and the code can retrieve their location instead of hardcoding an offset against __file__ will you commit it?
I think the answer to both questions is "yes... eventually... if the API is in the stdlib for all Python versions I'm targeting and everybody else is doing it." Which is why *requiring* it for transition will prevent the distros from seeing benefits from a new standard for quite some time.

Conversely, if the patch for installation metadata is separated from patches to code, I would expect a *much* faster uptake of the metadata patches. And, once having accepted the metadata patch, a developer is actually more likely to take the second step willingly than if required to do both at once. (See "Influence" by Cialdini.)

To be 100% clear (I hope): I have no objection to an API. It's unequivocally a good idea, and *should* be part of BUILDS. *Requiring* it, on the other hand, is unequivocally a *bad* idea, if you want adoption sooner rather than later.

Now, if you want to establish a transition timetable for phasing out __file__ usage, deprecation, etc., based on when the API will be available in the stdlib etc., publicize and bless that schedule, etc... again, these are all good ideas. The ONLY thing I object to is requiring it up front from day 1, because then we're just shooting off a giant foot-gun wrt adoption.

Le dimanche 19 octobre 2008 à 19:28 -0400, Phillip J. Eby a écrit :
3) The primary argument for symlinks is that symlinks are compatible with __file__. But this compatibility comes at a cost -- symlinks can't do anything extra.
From my point of view, compatibility with __file__ is something that is not desired. We should discourage developers from using such a hack instead of continuing to support it. Yes, there is a cost to this migration. There is always a cost to developing properly, but it is a win in the long term.
I think Ian's already said this, but the API itself has to do something more, and so far nobody's proposed an API that does anything "more" than what setuptools does in this area, from the developer point of view.
Part of the solution is to introduce semantics into this API. Merely tagging files only allows moving them to a handful of usable locations (e.g. /usr/share/$package, /usr/share/doc/$package, /usr/share/pixmaps, /etc/$package). However, depending on the file type and what you want to use it for, a file may be useful in many more locations, and they will not depend on a mere tag.

For example, when you use gettext, the thing you express is “get translations for the current locale”, not “load the LC_MESSAGES/fr/blah.po file”. Similarly for GTK+, the API allows retrieving an icon at the size of a given widget, the file location being completely abstracted.

As such, the solution is probably to develop an API allowing access to data in an abstract way, and one that can integrate properly with existing tools relying on file locations.

Let’s take the icon example. You first need to define what an icon file is and where it goes. It could be defined in a file looking like this:

    <type name="icon">
      <attribute name="theme">
        <default>hicolor</default>
      </attribute>
      <attribute name="size" required="true">
        <allowed_values>
          <value>scalable</value>
          <value>16x16</value>
          <value>22x22</value>
          ...
        </allowed_values>
      </attribute>
      <attribute name="category">
        <default>apps</default>
      </attribute>
      <location system="unix">
        ${prefix}/icons/${theme}/${size}/${category}
      </location>
    </type>

Then, you need to flag your icons; this is what you already proposed, in a more sophisticated way:

    foo.png: icon size="48x48" category="apps"
    foo.svg: icon size="scalable" category="apps"

Finally, in your code:
* if you already use GTK+, you only have to retrieve the icon named "foo";
* otherwise, you need to write a helper function based on the API, which will allow writing things like this:

    location = builds.get_path("icon", "foo.png", category="apps", size="48x48")

I don’t think any other build system is going this far; which means it is not necessary to go this far, but it could be a way to get developers to abandon their hacks.
4) In contrast an API can do more: It can deal with writable files. On Unix, persistent, per user storage would go in the user's home directory, on other OS's it would go somewhere else. This is abstractable using an API at runtime but not using symlinks at install time.
This is all well and good, but it's actually quite orthogonal to most uses of __file__ and resources today.
Not entirely orthogonal, since it goes in the same direction: a common base to implement standard FHS and freedesktop.org locations. If all Python applications start using standard home and configuration directories, it’s a lot of burden that goes away for sysadmins and users.
5) cross package data. Using __file__ to detect file location is inherently not suitable for crossing package boundaries. Egg Translations would not be able to use a symlink based backend to do its work for this reason.
EggTranslations doesn't use __file__, it uses the API, so I don't see how this relates.
I think this was just an example. There will be many cases where you need to extend an existing package’s resources with your own. Using __file__ doesn’t allow that and will require that you pass full paths instead of using an abstract API to access the data.
It's requiring the use of an API that's a regression wrt developer-side features.
Generally, I call the deprecation of a hack in favor of a clean API an improvement, not a regression. But that may just be your own terminology.
7) Philip says that the reason pkg_resources does not see widespread adoption is that the developer cost of using an API is too high compared to __file__. I don't believe that the difference between __file__ and an API is that great.
It isn't; it's the *switching* cost that's high, and that's the cost that needs to be minimized in order to drive adoption quickly.
You shouldn’t forget another part of the issue. If the only thing the new specification allows is to make symbolic links to a few pre-defined locations without any flexibility, it is very unlikely that distributions will take the time to write any patches to it. Without anyone to drive this feature forward, it will not be adopted at all. It is not only a question of making the migration easy, but also on making people want to push this migration.
8) To a distribution, symlinks are just a hack. We use them for things like php web apps when the web application is hardcoded to accept only one path for things (like the writable state files being intermixed with the program code). Managing a symlink farm is not something distributions are going to get excited over so adoption by distributions that this is the way to work with files won't happen until upstreams move on their own.
We need to distinguish between "providing the ability to have a low-cost transition" and "the recommended True Way".
If no one makes the low-cost transition in the end, you are going to waste your time trying to set it up.
Further, since the install tool is being proposed as a separate project from the metadata to mark files, the expectation is that the distributions are going to want to write an install tool that manages this symlink farm. For that to happen, you have to get distributions to be much more than simply neutral about the idea of symlinks, you have to have them enthused enough about using symlinks that they are willing to spend time writing a tool to do it.
Well, the question is whether they prefer to have a long, drawn out transition or not. Maybe they don't care about that part, but my assumption was that a replacement for setuptools/easy_install in this space was desired sooner rather than later.
Currently I think we’d be more reluctant than enthusiastic about symlinks. We already maintain symlink farms to deal with the insanity of Python module locations, moving files to a place that is more neutral for the FHS (/usr/share/pysomething). The urgency is far behind us, since Python module developers didn’t see it at all at that time.
Now, if you want to establish a transition timetable for phasing out __file__ usage, deprecation, etc., based on when the API will be available in the stdlib etc., publicize and bless that schedule, etc... again, these are all good ideas.
The ONLY thing I object to is requiring it up front from day 1, because then we're just shooting off a giant foot-gun wrt adoption.
No one is asking that. BTW deprecating __file__ is far from easy, since there are valid uses for it, and it's not trivial to distinguish them.

Cheers,
--
 .''`.
: :' :   We are debian.org. Lower your prices, surrender your code.
`. `'    We will add your hardware and software distinctiveness to
 `-      our own. Resistance is futile.

Josselin Mouette wrote:
Then, you need to flag your icons; this is what you already proposed, in a more sophisticated way.
foo.png: icon size="48x48" category="apps"
foo.svg: icon size="scalable" category="apps"
This smells like overdesign to me. Unless you want to allow for the possibility of e.g. putting your 48x48 icons in one location and icons of other sizes in a different location -- which seems rather unlikely -- then distinguishing between resources at this level of detail is not something the API we're considering should be concerned with. IMO all we need is a single parameter: resource.read(modulename, kind, path) where 'kind' is one of a well-known set of tags that correspond to ways that *packagers* may want to organize resources. If applications want to categorize their resources in more detail, it's up to them to provide a mapping from their own categorization scheme to this one. -- Greg
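To make the shape of Greg's proposal concrete, here is a minimal sketch of a resource.read(modulename, kind, path) call. Everything in it is an assumption for illustration: the 'kind' vocabulary and the directory mapping would be defined by the spec (and by the distribution's install tool), not by a hardcoded table like this one.

```python
import os

# Hypothetical mapping from well-known 'kind' tags to install locations.
# A packager would control this table; applications would only use the tags.
KIND_DIRS = {
    "data": "share",   # read-only application data
    "config": "etc",   # configuration files
    "doc": "doc",      # documentation
}

def read(modulename, kind, path, _root="/"):
    """Return the bytes of the resource 'path' of the given kind,
    belonging to 'modulename'. The _root parameter exists only so the
    sketch is testable; a real implementation would consult install-time
    metadata instead."""
    full = os.path.join(_root, KIND_DIRS[kind], modulename, path)
    with open(full, "rb") as f:
        return f.read()
```

The point of the single 'kind' parameter is exactly what Greg says: the API only needs to know as many categories as packagers want distinct install locations for, and finer-grained schemes live in the application.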

Le mardi 21 octobre 2008 à 12:18 +1300, Greg Ewing a écrit :
Josselin Mouette wrote:
Then, you need to flag your icons; this is what you already proposed, in a more sophisticated way.
foo.png: icon size="48x48" category="apps"
foo.svg: icon size="scalable" category="apps"
This smells like overdesign to me. Unless you want to allow for the possibility of e.g. putting your 48x48 icons in one location and icons of other sizes in a different location -- which seems rather unlikely -- then distinguishing between resources at this level of detail is not something the API we're considering should be concerned with.
Unless you want to be able to follow existing standards. http://standards.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html... The same goes for localisation files, for which there is a similar directory scheme to follow.

Cheers,
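For concreteness, the icon-theme spec Josselin cites fixes an on-disk layout of theme/size/category directories, which is why he wants size and category in the metadata. A tiny sketch of the path scheme (the prefix argument is an assumption; in practice it comes from $XDG_DATA_DIRS):

```python
import os

def icon_path(prefix, theme, size, category, name):
    # Directory scheme from the freedesktop icon-theme spec:
    #   <prefix>/icons/<theme>/<size>/<category>/<name>
    return os.path.join(prefix, "icons", theme, size, category, name)

# e.g. icon_path("share", "hicolor", "48x48", "apps", "foo.png")
#      -> share/icons/hicolor/48x48/apps/foo.png
```

So the size/category tags aren't overdesign for their own sake — any install tool that wants to place icons where desktop environments will find them needs exactly those two pieces of information.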

Is there an easy way to make a multi-version egg from the Numpy source code (supplied from SourceForge)? Ideally I'd like to create an automated process that allows me to quickly make a Win32 egg any time a new release of Numpy comes out, so that my team can test it and adjust our project's dependencies as they see fit. Initially I'd like to make multi-version eggs for both Numpy 1.1 and 1.2 so that my team can get started on testing this library.

Sal
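For context on what a "multi-version" egg involves: the egg itself is built normally (`python setup.py bdist_egg`; since numpy's setup script is distutils-based, forcing setuptools to load first may be needed) and installed with easy_install's `-m`/`--multi-version` option, which keeps it off sys.path. The application then picks a version at runtime. A sketch of that runtime side — we require setuptools itself here only because it is certain to be present wherever pkg_resources is; for Sal's case it would be pkg_resources.require("numpy==1.2"):

```python
import pkg_resources

# With -m installs, nothing is importable until require() activates
# one installed version and puts it on sys.path.
dist = pkg_resources.require("setuptools")[0]
print(dist.project_name)
```

This is why multi-version installs suit side-by-side testing of 1.1 and 1.2: the choice moves from install time to each application's startup.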

Phillip J. Eby wrote:
I think Ian's already said this, but the API itself has to do something more, and so far nobody's proposed an API that does anything "more" than what setuptools does in this area, from the developer point of view. (Except for the request that such an API be in the stdlib and thus avoid an extra dependency... but that of course introduces yet another implementation delay, if it means a new release of Python.)
It's probably a bit easier than waiting for a release of Python -- if it's in a PEP, and will be in a release of Python, then the library will be blessed and people will pick it up much more quickly. Realistically most library developers would need to add the package as a requirement for some time, since they won't stop supporting older versions of Python that don't have that package. -- Ian Bicking : ianb@colorstudy.com : http://blog.ianbicking.org
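The dual-support situation Ian describes has a standard idiom: depend on the PyPI package, but prefer the stdlib copy when it exists. Shown here with json/simplejson, the classic example of the era; a stdlib resource API would be shimmed the same way:

```python
# Prefer the stdlib module, fall back to the externally-packaged one
# on older Pythons that predate its inclusion.
try:
    import json                  # in the stdlib since Python 2.6
except ImportError:
    import simplejson as json    # the PyPI backport for older versions
```

So stdlib inclusion speeds up blessing and eventual ubiquity, but as Ian notes, it doesn't remove the external dependency until developers drop the last pre-inclusion Python they support.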

On Oct 17, 2008, at 11:32 AM, Toshio Kuratomi wrote:
So once again, I think this boils down to these questions: if we have a small library whose sole purpose is to abstract a data store so you can find out where a particular non-code file lives on this system will you use it?
Well, since I'm already using pkg_resources then I'm obviously willing to use such an API. Whether I'm willing to switch to a different API from the one that I already have and that already works for all of my needs -- pkg_resources -- would be a different question. By the way, I don't understand that hack using "__file__" that was mentioned as an alternative, so from my perspective as a poor, dumb, working developer, that __file__ thing is more confusing than the pkg_resources API is. Regards, Zooko --- http://allmydata.org -- Tahoe, the Least-Authority Filesystem http://allmydata.com -- back up all your files for $10/month
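For readers as puzzled as Zooko: the "__file__ hack" under discussion is just path arithmetic relative to the module's source file. A minimal sketch — the helper name is made up, and callers would pass their own module's __file__ as the first argument:

```python
import os

def data_file(module_file, relpath):
    """Resolve a data file shipped next to the module's source.
    Callers pass __file__. This breaks when the package is installed
    as a zipped egg or its files are relocated; an API such as
    pkg_resources.resource_string("mypackage", relpath) keeps working
    in those cases because it goes through package metadata instead."""
    return os.path.join(os.path.dirname(os.path.abspath(module_file)), relpath)
```

The fragility in that docstring is precisely the earlier points 4 and 5: the hack hardwires "data lives next to the code", which is the assumption distributions want to break.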
participants (9)
- Barry Warsaw
- Fadhley Salim
- Greg Ewing
- Ian Bicking
- Josselin Mouette
- Paul Moore
- Phillip J. Eby
- Toshio Kuratomi
- zooko