I've removed mailman-users from the distribution. We shouldn't be using both lists at the same time in discussions, and this is a developers/design issue.
On 7/16/01 5:38 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Sure... I agree that it's possible for a standard to be irrelevant or just not meet the needs of the users for which it was written.
But I don't think that is the case here:
But -- you haven't dealt with the safety issue in any substantive way. If you can't build a standard that protects the user from abuse, I'd argue that the standard provides a false sense of security that is more destructive than not standardizing at all; because, as you noted, it'll tend to encourage developers to write to the standard, and not all of those writers will really understand the subtler issues involved. So if you can't make GET safe to automatically browse, even with the blackhats, I'd argue it's better to not create standards that'd encourage that -- or write the standard in such a way that these issues and limitations are very clear IN the standard.
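Just to make the danger concrete -- here's a quick sketch (my own illustration, with a made-up confirm URL, not anything mailman actually does) of why "GET is safe to auto-browse" breaks down the moment a site hangs an action off a GET link:

```python
import re

def extract_links(body):
    # Pull http/https URLs out of a message body, the way a naive
    # prefetching proxy or mail client might before auto-fetching them.
    return re.findall(r'https?://[^\s<>"]+', body)

message = """To confirm your unsubscription, visit:
http://lists.example.com/confirm?token=abc123
or ignore this message and nothing will happen."""

links = extract_links(message)
# A blind prefetcher would now GET every link -- including the
# confirm URL -- and fire the action with no human intent at all:
#
# for url in links:
#     urllib.request.urlopen(url)   # <-- side effect triggers here
```

The fetch loop is commented out for obvious reasons, but that's the whole failure mode: nothing in the URL itself tells the prefetcher which GETs are harmless and which ones unsubscribe you.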
(I would like to back up my "widely followed" claim by doing a survey of various popular sites, but don't have time today, and probably won't until Friday at the earliest :( Anyway, I am fairly confident that is the case.)
I would really like to see this; especially since, ignoring the larger issues with the standard, I'd like to see how people are doing this to make sure stuff that's done here (and stuff I have in the hopper) does it the best way. And that means following standards, as long as they make sense. But I still wouldn't auto-crawl an incoming data stream for links and pull them down automagically...
I'm bothered with the larger issues, almost to the point where the initial problem becomes irrelevant.
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
No, but it can cause actions you'll regret. You started this by bringing up one as a problem. Now, however, you're saying "well, that's no big deal".
Which is it? No big deal? Or a problem? And if we can trigger actions you might or might not like, you can bet it'll honk off others. And if we can trigger actions, so can others, and those won't necessarily be innocent ones. So I don't think you can ignore this issue by simply minimizing its importance. Either it's a problem or it isn't, and you can't start making judgment calls on individual cases and using those to imply that no case is serious. To me, that's what you've done -- no offense, Gerald, but it's coming across a bit like you're trying to duck the larger issue, while still pushing for mailman to 'fix' the problem you're trying to minimize.
If you think the docs on this subject at W3C are lacking, by all means let me know.
No, what I really was hoping for were examples of what W3 (or you) consider 'proper', to see how W3 thinks this ought to be done.
Somewhat related, W3C recently published a note called "Common User Agent Problems" <http://www.w3.org/TR/cuap>, and it was received quite well by the web community.
Off to go read....
I think there is a plan to write a similar one targeted towards site administrators, pointing out common mistakes and raising awareness about little-known but important RFC/spec details like this.
One of the best things I think W3 could do in these cases is not just to label things "good" or "bad", but to generate cookbooks of techniques, with explanations of why they're good or why they ought to be avoided. Especially for the subtleties of the standards that might not be intuitively obvious, or that come up in emerging technologies (like wireless) that the typical designer hasn't had time to worry about yet (or doesn't know to worry about). I love things like the "Perl Cookbook" of code fragments and examples, not just because it saves me reinventing the wheel, but because it gives me insight into how things ought to be done, at least in the eyes of my betters. And if you create a cookbook, people are a lot more likely to adopt the techniques, since they can borrow the existing code...
By "Mailman's fault" I meant that if mailman did this, it would be the part of the equation causing problems by not abiding by the HTTP spec. But this prefetching thing is just an example; the main point is that the protocol has this stuff built in for a reason, and there may be hundreds of other applications (current and future) that need it to be there.
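For what it's worth, here's roughly how I'd expect a spec-respecting version of that confirm page to look -- this is my own sketch, not mailman code, and the function names are invented: GET only renders the confirmation form, and the actual state change sits behind a POST, so anything honoring the HTTP spec's safe-method rule can never trigger it by prefetching.

```python
# Sketch: keeping GET side-effect free per the HTTP spec, and putting
# the non-idempotent action (unsubscribe) behind POST only.

CONFIRM_FORM = """<form method="post" action="/confirm">
  <input type="hidden" name="token" value="{token}">
  <input type="submit" value="Yes, unsubscribe me">
</form>"""

def handle_confirm(method, token, do_unsubscribe):
    if method == "GET":
        # Safe method: no state change, just show the form.
        return 200, CONFIRM_FORM.format(token=token)
    if method == "POST":
        do_unsubscribe(token)  # the actual action happens only here
        return 200, "Unsubscribed."
    return 405, "Method Not Allowed"
```

A prefetching cache that GETs the confirm URL gets a form back and nothing else happens; only a human pressing the button generates the POST.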
The standards also brought us, um, BLINK. Just because it's there or someone proposes it doesn't mean we ought to do it that way.
I'm not worried about abuse, myself;
You should be. Especially if you're building a standard that enables security problems and encourages programmers to write code that allows for those problems.
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here?
Which Internet standards *don't* depend on voluntary acceptance?
But there's a difference here -- we're talking about possible security issues, not just whether someone adopts a tag.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
95% of being a net.god is sounding persuasive and convincing people you know what you're talking about, even when you're making it up as you go along. (chuq von rospach, 1992)