[ANNOUNCE] Mailman 2.1 alpha 2
This is the official announcement for Mailman 2.1 alpha 2. Because it's an alpha, this announcement is only going out to the mailman-* mailing lists. Two warnings up front: you probably still should not use this version for production systems (but TIA for any and all testing you do with it!), and I've already had a couple of bug fixes from early adopters. 2.1a2 should still be useful, but you might want to keep an eye on cvs and the mailman-checkins list for updates.
I am only making the tarball available on SourceForge, so you'll need to go to http://sf.net/projects/mailman to grab it. You'll also need to upgrade to mimelib-0.4, so be sure to go to http://sf.net/projects/mimelib to grab and install that tarball first.
To view the on-line documentation, see
http://www.list.org/MM21/index.html
or
http://mailman.sf.net/MM21/index.html
Below is an excerpt from the NEWS file for all the changes since 2.1alpha1. There are a bunch of new features coming down the pike, and I hope to have an alpha3 out soon. I'm also planning on doing much more stress testing of this version with real list traffic, and I'm hoping we'll start to get more languages integrated into cvs.
Enjoy, -Barry
-------------------- snip snip --------------------

2.1 alpha 2 (11-Jul-2001)
- Building
o mimelib 0.4 is now required. Get it from
http://mimelib.sf.net. If you've installed an earlier
version of mimelib, you must upgrade.
o /usr/local/mailman is now the default installation
directory. Use configure's --prefix switch to change it
to the previous default (/home/mailman) or any other
installation directory of your choice.
- Security
o Better definition of authentication domains. The following
roles have been defined: user, list-admin, list-moderator,
creator, site-admin.
o There is now a separate role of "list moderator", which has
access to the pending requests (admindb) page, but not the
list configuration pages.
o Subscription confirmations can now be performed via email or
via URL. When a subscription request is received, a unique
(sha) confirm URL is generated in the confirmation message.
Simply visiting this URL completes the subscription process.
(A sketch of how such a URL might be minted follows this
section.)
o In a similar manner, removal requests (via web or email
command) no longer require the password. If the correct
password is given, the removal is performed immediately. If
no password is given, then a confirmation message is
generated.
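Both of the confirmation flows above hinge on that unique (sha)
cookie. Here is a minimal sketch of how such a confirm URL might
be minted; the hash inputs, the URL shape, and the function name
are assumptions for illustration, not Mailman's actual code:

    import hashlib
    import os
    import time

    def make_confirm_url(listname, address):
        # Hash hard-to-guess, per-request material.  Real code would
        # also persist the cookie alongside the pending request so
        # the confirm CGI can look it up later.
        seed = ('%s %s %.6f' % (listname, address, time.time())).encode()
        cookie = hashlib.sha1(seed + os.urandom(8)).hexdigest()
        return 'http://mysite/mailman/confirm/%s' % cookie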
- Internationalization
o More I18N patches. The basic infrastructure should now be
working correctly. Spanish templates and catalogs are
included, as are English, French, Hungarian, and Big5
templates.
o Cascading specializations and internationalization of
templates. Templates are now searched for in the following
order: list-specific location, domain-specific location,
site-wide location, global defaults. Each search location
is further qualified by the language being displayed. This
means that you only need to change the templates that
differ from the global defaults. (A sketch of this search
order follows this section.)
Templates renamed: admlogin.txt => admlogin.html
Templates added: private.html
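A minimal sketch of the template search order described in the
cascading item above; the directory layout and function name are
illustrative guesses, not Mailman's actual code:

    import os

    def find_template(name, listname, domain, lang,
                      prefix='/usr/local/mailman'):
        candidates = [
            # list-specific location
            os.path.join(prefix, 'lists', listname, lang, name),
            # domain-specific location
            os.path.join(prefix, 'templates', domain, lang, name),
            # site-wide location
            os.path.join(prefix, 'templates', 'site', lang, name),
            # global defaults
            os.path.join(prefix, 'templates', lang, name),
        ]
        for path in candidates:
            if os.path.exists(path):
                return path
        raise IOError('no template found for %s' % name)

The first hit wins, so you only ever copy (and edit) the one
template that should differ from the defaults.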
- Web UI
o Redesigned the user options page. It now sits behind an
authentication step so user options cannot be viewed without the
proper password. The other advantage is that the user's
password need not be entered on the options page to
unsubscribe or change option values. The login screen also
provides for password mail-back, and unsubscription w/
confirmation.
Other new features accessible from the user options page
include: ability to change email address (with confirmation)
both per-list and globally for all lists in a virtual domain;
global membership password changing; global mail delivery
disable/enable; ability to suppress password reminders both
per-list and globally; logout button.
[Note: the handle_opts cgi has gone away]
o Color schemes for non-template based web pages can be defined
via mm_cfg.
o Redesign of the membership management page. The page is now
split into three subcategories (Membership List, Mass
Subscription, and Mass Removal). The Membership List
subcategory now supports searching for member addresses by
regular expression, and if necessary, it groups member
addresses first alphabetically, and then by chunks.
Mass Subscription and Mass Removal now support file upload,
with one address per line.
o Hyperlinks from the logos in the footers have been removed.
The sponsors got too much "unsubscribe me!" spam from
desperate users of Mailman at other sites.
o New buttons on the digest admin page to send a digest
immediately (if it's non-empty), to start a new digest
volume with the next digest, and to select the interval with
which to automatically start a new digest volume (yearly,
monthly, quarterly, weekly, daily).
DEFAULT_DIGEST_VOLUME_FREQUENCY is a new configuration
variable, initially set to give a new digest volume monthly.
o Through-the-web list creation and removal, using a separate
site-wide authentication role called the "list creator and
destroyer" or simply "list creator". If the configuration
variable OWNERS_CAN_DELETE_THEIR_OWN_LISTS is set to 1 (by
default, it's 0), then list admins can delete their own
lists.
This feature requires an adaptor for the particular MTA
you're using. An adaptor for Postfix is included, as is a
dumb adaptor that just emails mailman@yoursite with the
necessary Sendmail style /etc/alias file changes. Some MTAs
like Exim can be configured to automatically recognize new
lists. The adaptor is selected via the MTA option in
mm_cfg.py.
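A sketch of what the selection might look like in mm_cfg.py.
'Postfix' matches the adaptor described above as included; the
other names are guesses, purely for illustration:

    # In $prefix/Mailman/mm_cfg.py
    MTA = 'Postfix'    # use the included Postfix adaptor
    # MTA = 'Manual'   # hypothetical name for the dumb adaptor that
    #                  # mails mailman@yoursite the alias file changes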
- Email UI
o In email commands, "join" is a synonym for
"subscribe". "remove" and "leave" are synonyms for
"unsubscribe". New robot addresses are support to make
subscribing and unsubscribing much easier:
mylist-join@mysite
mylist-leave@mysite
o Confirmation messages have a shortened Subject: header,
containing just the word "confirm" and the confirmation
cookie. This should help with MUAs that like to wrap long
Subject: lines, which breaks confirmation.
o Mailman now recognizes an Urgent: header, which, if it
contains the list moderator or list administrator password,
forces the message to be delivered immediately to all
members (i.e. both regular and digest members). The message
is also placed in the digest. If the password is incorrect,
the message will be bounced back to the sender.
- Performance
o Refinements to the new qrunner subsystem which preserves
FIFO order of messages.
o The qrunner is no longer started from cron. It is started
by a Un*x init-style script called bin/mailmanctl (see
below). cron/qrunner has been removed.
- Command line scripts
o bin/mailmanctl script added, which is used to start, stop,
and restart the qrunner daemon.
o bin/qrunner script added which allows a single sub-qrunner
to run once through its processing loop.
o bin/change_pw script added (eases mass changing of list
passwords).
o bin/update grows a -f switch to force an update.
o bin/newlang renamed to bin/addlang; bin/rmlang removed.
o bin/mmsitepass has grown a -c option to set the list
creator's password. The site-wide `create' web page is
linked to from the admin overview page.
o bin/newlist's -o option is removed. This script also grows
a way of spelling the creation of a list in a specific
virtual domain.
o The `auto' script has been removed.
o bin/dumpdb has grown -m/--marshal and -p/--pickle options.
o bin/list_admins can be used to print the owners of a mailing list.
o bin/genaliases regenerates from scratch the aliases and
aliases.db file for the Postfix MTA.
- Archiver
o New archiver date clobbering option, which allows dates to
be clobbered only if they are outrageously out-of-date
(default setting is 15 days on either side of the received
timestamp; see the sketch at the end of this section). New
configuration variables:
ARCHIVER_CLOBBER_DATE_POLICY
ARCHIVER_ALLOWABLE_SANE_DATE_SKEW
The archived copy of messages grows an X-List-Received-Date:
header indicating the time the message was received by
Mailman.
o PRIVATE_ARCHIVE_URL configuration variable is removed (this
can be calculated on the fly, and removing it actually makes
site configuration easier).
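A sketch of the date clobbering policy described above. The 15-day
default comes straight from the item; the function and the exact
interpretation of the policy constants are illustrative:

    # ARCHIVER_ALLOWABLE_SANE_DATE_SKEW, expressed here in seconds
    SKEW = 15 * 24 * 60 * 60

    def archive_date(claimed, received):
        # Pick the date to archive under (both Unix timestamps).
        if abs(claimed - received) > SKEW:
            return received    # outrageously out-of-date: clobber
        return claimed         # otherwise trust the Date: header

Either way, the received timestamp is what ends up in the new
X-List-Received-Date: header.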
- Miscellaneous
o Several new README's have been added.
o Most syslog entries for the qrunner have been redirected to
logs/error.
o On SIGHUP, qrunner will re-open all its log files and
restart all child processes. See "bin/mailmanctl restart".
- Patches and bug fixes
o SF patches and bug fixes applied: 420396, 424389, 227694,
426002, 401372 (partial), 401452.
o Fixes in 2.0.5 ported forward:
Fix a lock stagnation problem that can result when the
user hits the `stop' button on their browser during a
write operation that can take a long time (e.g. hitting
the membership management admin page).
o Fixes in 2.0.4 ported forward:
Python 2.1 compatibility release. There were a few
questionable constructs and uses of deprecated modules
that caused annoying warnings when used with Python 2.1.
This release quiets those warnings.
o Fixes in 2.0.3 ported forward:
Bug fix release. There was a small typo in 2.0.2 in
ListAdmin.py for approving an already subscribed member
(thanks Thomas!). Also, an update to the OpenWall
security workaround (contrib/securelinux_fix.py) was
included. Thanks to Marc Merlin.
On Fri, Jul 13, 2001 at 04:15:20PM -0400, Barry A. Warsaw wrote:
This is the official announcement for Mailman 2.1 alpha 2. [...]
To view the on-line documentation, see
http://www.list.org/MM21/index.html
2.1 alpha 2 (11-Jul-2001)
[ lots of extremely cool stuff deleted ]
o Subscription confirmations can now be performed via email or via URL. When a subscription is received, a unique (sha) confirm URL is generated in the confirmation message. Simply visiting this URL completes the subscription process.
This violates the HTTP protocol: visiting a URL (i.e., an HTTP GET) should not have side effects like confirming a subscription.
A few months ago I sent mail to mailman-developers with a suggestion for how to implement this in a compliant way without hindering usability:
http://mail.python.org/pipermail/mailman-developers/2001-January/003579.html
mid:20010103022646.A31881@impressive.net
I realize that a number of other sites misuse GET this way, but I think most of the large ones (e.g., Yahoo, online brokerages and banks, etc.) get it right, and I think Mailman should too.
Further reading on GET vs POST:
Forms: GET and POST
http://www.w3.org/Provider/Style/Input
Axioms of Web architecture: Identity, State and GET
http://www.w3.org/DesignIssues/Axioms#state
HTTP 1.1 section 9.1: Safe and Idempotent Methods
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1
HTML 4.01 section 17.13: Form submission
http://www.w3.org/TR/html4/interact/forms.html#h-17.13
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
On 7/13/01 1:43 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
This violates the HTTP protocol: visiting a URL (i.e., an HTTP GET) should not have side effects like confirming a subscription.
My first reaction was "say what?" but I went and read the w3 stuff before responding...
I realize that a number of other sites misuse GET this way, but I think most of the large ones (e.g., Yahoo, online brokerages and banks, etc.) get it right, and I think Mailman should too.
Because, frankly, I think w3 is wrong IN THIS CASE. That may make sense in a general case, especially in an HTTP-only situation, but in this case, where the URL is being carried in e-mail to confirm an action the user has (presumably) started, I think they're wrong. As long as the e-mail clearly delineates the action being taken, do what's easy for the user; and the user isn't going to want to go clicking through multiple links just to allow us to abide by the HTTP stuff.
But the key is this is a finalization of a distributed transaction, with e-mail distributing the token. Under other circumstances, I see W3's logic. Here, however, using a URL to bring up a page that says "click here to confirm" is only going to piss off Joe User, not make his life better.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
Some days you're the dog, some days you're the hydrant.
"CVR" == Chuq Von Rospach <chuqui@plaidworks.com> writes:
>> I realize that a number of other sites misuse GET this way, but
>> I think most of the large ones (e.g., Yahoo, online brokerages
>> and banks, etc.) get it right, and I think Mailman should too.
CVR> Because, frankly, I think w3 is wrong IN THIS CASE. That may
CVR> make sense in a general case, especially in an HTTP-only
CVR> situation, but in this case, where the URL is being carried
CVR> in e-mail to confirm an action the user has (presumably)
CVR> started, I think they're wrong. As long as the e-mail clearly
CVR> delineates the action being taken, do what's easy for the
CVR> user; and the user isn't going to want to go clicking through
CVR> multiple links just to allow us to abide by the HTTP stuff.
CVR> But the key is this is a finalization of a distributed
CVR> transaction, with e-mail distributing the token. Under other
CVR> circumstances, I see W3's logic. Here, however, using a URL
CVR> to bring up a page that says "click here to confirm" is only
CVR> going to piss off Joe User, not make his life better.
I agree with Chuq. The user isn't going to understand the distinction and is just going to be annoyed by having to do what, to them, seems like an extra unnecessary step.
-Barry
On Sat, Jul 14, 2001 at 12:55:04PM -0400, Barry A. Warsaw wrote:
"CVR" == Chuq Von Rospach <chuqui@plaidworks.com> writes:
>> I realize that a number of other sites misuse GET this way, but
>> I think most of the large ones (e.g., Yahoo, online brokerages
>> and banks, etc.) get it right, and I think Mailman should too.
[ ... ]
CVR> But the key is this is a finalization of a distributed
CVR> transaction, with e-mail distributing the token. Under other
CVR> circumstances, I see W3's logic. Here, however, using a URL
CVR> to bring up a page that says "click here to confirm" is only
CVR> going to piss off Joe User, not make his life better.
I agree with Chuq. The user isn't going to understand the distinction and is just going to be annoyed by having to do what, to them, seems like an extra unnecessary step.
After some careful consideration, as well as a chat with a few clueful colleagues, I have to disagree with you, Barry. The trick here is 'managing the expectations'. Having the message say something like
To confirm or remove your subscription request, visit <URL>
And then have that URL bring up a nice overview of what list you are subscribing to, the options you chose (regular-digest/mime-digest/etc), what email address you entered, and 'remove' and 'confirm' buttons. Frankly, it's always bothered me that you can't unconfirm a mailing list subscription, let alone not being able to see what you are subscribing to ;P
Extra credit if you make the URL (or something similar) also work if a subscription is held for approval, but without a 'confirm' button -- just a 'remove' one. Actually, the same kind of interface for a held message would be great, too :)
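For concreteness, a minimal sketch of the page Thomas describes:
the GET that fetches it changes nothing, and the state change
happens only on the POST. The URL path, field names, and function
name are illustrative assumptions, not Mailman code:

    from html import escape

    def confirm_page(listname, address, cookie):
        # Rendered in response to the GET; purely informational.
        # Nothing changes until one of the buttons below is POSTed.
        form = (
            '<p>Pending request: subscribe %s to the %s list.</p>\n'
            '<form method="POST" action="/mailman/confirm">\n'
            '  <input type="hidden" name="cookie" value="%s">\n'
            '  <input type="submit" name="confirm" value="Confirm">\n'
            '  <input type="submit" name="remove" value="Remove">\n'
            '</form>\n'
        )
        return form % (escape(address), escape(listname), escape(cookie))

The same handler could serve the held-for-approval case by simply
omitting the Confirm button.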
-- Thomas Wouters <thomas@xs4all.net>
Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
On Tue, Jul 17, 2001 at 10:34:22AM +0200, Thomas Wouters wrote:
After some careful consideration, as well as a chat with a few clueful colleagues, I have to disagree with you, Barry. The trick here is 'managing the expectations'. Having the message say something like
To confirm or remove your subscription request, visit <URL>
And then have that URL bring up a nice overview of what list you are subscribing to, the options you chose (regular-digest/mime-digest/etc), what email address you entered, and 'remove' and 'confirm' buttons. Frankly, it's always bothered me that you can't unconfirm a mailing list subscription, let alone not being able to see what you are subscribing to ;P
Extra credit if you make the URL (or something similar) also work if a subscription is held for approval, but without a 'confirm' button -- just a 'remove' one. Actually, the same kind of interface for a held message would be great, too :)
I agree strongly that this is the _RIGHT_ way to do it, and it complies with W3C standards as well. Opening the URL should not just go ahead and do something, and many people may be angry if it does. It should tell the user what it's offering to do, and then allow the user to make an informed decision about whether to do it or cancel it.
After all, it's only one extra click. Keep in mind that there are foolish people out there who use mail clients from evil software monopolists that can be configured to automatically preload URLs contained in mail, and some of them do so configure them. Then everyone on the list has to deal with, "How did I get on this list? Stop sending me mail!"
-- Linux Now! ..........Because friends don't let friends use Microsoft. phil stracchino -- the renaissance man -- mystic zen biker geek alaric@babcom.com halmayne@sourceforge.net 2000 CBR929RR, 1991 VFR750F3 (foully murdered), 1986 VF500F (sold)
On Fri, Jul 13, 2001 at 02:03:51PM -0700, Chuq Von Rospach wrote:
On 7/13/01 1:43 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
This violates the HTTP protocol: visiting a URL (i.e., an HTTP GET) should not have side effects like confirming a subscription.
My first reaction was "say what?" but I went and read the w3 stuff before responding...
Note that "the w3 stuff" includes the standards-track RFC 2616 (HTTP/1.1), the HTML 4.01 specification, and supplementary notes by the creator of the HTTP protocol (incl its GET and POST methods); they're not just a few random pages on w3.org :)
I realize that a number of other sites misuse GET this way, but I think most of the large ones (e.g., Yahoo, online brokerages and banks, etc.) get it right, and I think Mailman should too.
Because, frankly, I think w3 is wrong IN THIS CASE. That may make sense in a general case, especially in an HTTP-only situation, but in this case, where the URL is being carried in e-mail to confirm an action the user has (presumably) started, I think they're wrong.
A couple of the references I gave in my previous message included this specific example (confirming a subscription) as a case where POST should be used instead of GET, so I think it is quite clear that the specs do apply in this specific case.
Regarding "I think they're wrong", I respect your opinion, but the HTTP spec is the result of a decade of work on and experience with HTTP by full-time web protocol geeks...
As long as the e-mail clearly delineates the action being taken, do what's easy for the user; and the user isn't going to want to go clicking through multiple links just to allow us to abide by the HTTP stuff.
What if I have a smart system on my desktop PC that is configured to prefetch any URLs it sees in my incoming email, so there is zero latency when I go to read them later? (or so I can read them while offline, on a train or plane or something)
That would allow anyone in the world to sign me up for any mailman-backed mailing list, whether or not I even see the email confirmation requests. And that would be Mailman's fault, for misusing HTTP GETs.
But the key is this is a finalization of a distributed transaction, with e-mail distributing the token. Under other circumstances, I see W3's logic. Here, however, using a URL to bring up a page that says "click here to confirm" is only going to piss off Joe User, not make his life better.
I disagree: a large part of the reason for the distinction between GET and POST is a social/usability one: people should become accustomed to following hypertext links and clicking on URLs without any action being taken. (personally, I cut and paste URLs into my browser all the time without checking carefully what they appear to be first, so when I actually want to read what's there, I don't have to wait for the browser. Of course, I only do that because I don't have that prefetching thing set up... yet.)
btw, part of the reason I care about this is that I work for W3C and am currently evaluating mailman for use on lists.w3.org (to replace smartlist) and we're pretty fussy about complying with our own specs, for obvious reasons. But I'd be making this argument whether or not that were the case, since it's clearly the Right Thing to do imho...
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
On 7/14/01 4:32 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Regarding "I think they're wrong", I respect your opinion, but the HTTP spec is the result of a decade of work on and experience with HTTP by full-time web protocol geeks...
But web geeks are not necessarily user-interface or user-experience geeks. You can perfectly correctly build a nice system that's technically correct, but not right for the users.
What if I have a smart system on my desktop PC that is configured to prefetch any URLs it sees in my incoming email,
Given the number of viruses, spam with URLs and other garbage out there, I'd say you're foolish (bordering on stupid), but that's beside the point. It is an interesting issue, one I can't toss out easily; but I'm not convinced, either. I think it's something that needs to be hashed over, perhaps prototyped both ways so we can see the best way to tweak it.
Barry, what do you think about this instance? As the net moves more towards wireless, PDA, mobile-phone, stuff, could we be setting ourselves up for a later problem by ignoring this pre-cache issue Gerald's raised?
Gerald, does W3 have sample pages for "right" and "wrong" that can be looked at, or are we going to have to develop some? The more I think about this, the more I think it's a case where we ought to see if we can develop a prototype that follows the standards that we like, rather than toss it out at first glance. But if we can't come up with a system that we agree is 'easy enough', then we should go to what we're currently thinking.
That would allow anyone in the world to sign me up for any mailman-backed mailing list, whether or not I even see the email confirmation requests. And that would be Mailman's fault, for misusing HTTP GETs.
I disagree with this -- since as you say, any number of places already misuse GET, that usage is fairly common. A user who sets themselves up for it should know (or be warned by their mail client) that it has the possibility to trigger events. I'd say it's more a client issue than a Mailman issue.
And, thinking about it, since GET *can* do this, it's probably wrong for W3 to push for it not to be used that way, because if things like your pre-caching system come into common use, the dark side of the net will take advantage of it, the way early virus writers took advantage of mail clients with auto-exec of .EXE's being on by default. So aren't you setting yourself up for problems by having a technology that can do this, even if you deprecate it, because it sets a user expectation that's going to be broken by those wanting to take advantage of it? I've got a bad feeling about this -- your example seems to set you up for abuse by those looking for ways to abuse you, and that's a bigger issue than Mailman using it -- because if all of the 'white side' programs cooperate, it just encourages creation of things (like the pre-caching) that the dark side will take advantage of.
As long as GET is capable of being used this way, I'd be very careful about creating stuff that depends on "we don't want it used this way, so it won't be" -- it seems to open up avenues for attack.
Which is not a reason for mailman to ignore the standard -- but a bigger issue about whether this standard creates a perception that could come back and bite people. If people start creating services (like that pre-cache) that get tripped up by this, even if the white hats follow your advice, you're still at risk from the black hats. Isn't it better to acknowledge the capability and not create services that depend on "well behaved" systems?
Of course, I only do that because I don't have that prefetching thing set up... yet.)
At this point, I'd never turn on pre-fetching, since its safety depends entirely on voluntary cooperation, and you aren't in a position to police until after the fact. That's a Bad Thing in a big way.
btw, part of the reason I care about this is that I work for W3C and am currently evaluating mailman for use on lists.w3.org (to replace smartlist) and we're pretty fussy about complying with our own specs, for obvious reasons.
As you should be.
But I'd be making this argument whether or not that were the case, since it's clearly the Right Thing to do imho...
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here? Because the service you propose is unsafe unless you can guarantee everyone you talk to is compliant, and we know how likely that's going to be. That, to me, is a much bigger issue than whether or not Mailman complies, and in fact, I could make an argument that the standard isn't acceptable if it's going to be a basis for services that can cause harm but requires voluntary acceptance on the server side. By the time you figure out the server isn't complying, they've burnt down the barn and run off with the horse. That's bad.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
Someday, we'll look back on this, laugh nervously and change the subject.
On Sat, Jul 14, 2001 at 08:49:34PM -0700, Chuq Von Rospach wrote:
What if I have a smart system on my desktop PC that is configured to prefetch any URLs it sees in my incoming email,
Given the number of viruses, spam with URLs and other garbage out there, I'd say you're foolish (bordering on stupid), but that's beside the point. It is an interesting issue, one I can't toss out easily; but I'm not convinced, either. I think it's something that needs to be hashed over, perhaps prototyped both ways so we can see the best way to tweak it.
Barry, what do you think about this instance? As the net moves more towards wireless, PDA, mobile-phone, stuff, could we be setting ourselves up for a later problem by ignoring this pre-cache issue Gerald's raised?
What *I* think is that it's a special case, and any such pre-fetch system ought to, by default, *not* pre-fetch anything with GET parameters in it.
*All* GETs have side effects by definition: you get something different depending on what the parameters are.
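That default is easy to state in code. A minimal sketch, with the
function name and the query-string heuristic as assumptions (a real
prefetcher would need more care than this):

    from urllib.parse import urlparse

    def safe_to_prefetch(url):
        # Per the convention above: a bare document fetch is fair
        # game, but anything carrying GET parameters might trigger
        # an action, so leave it for the human to click.
        return urlparse(url).query == ''

    safe_to_prefetch('http://example.org/paper.html')       # True
    safe_to_prefetch('http://example.org/sub?confirm=abc')  # False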
That would allow anyone in the world to sign me up for any mailman-backed mailing list, whether or not I even see the email confirmation requests. And that would be Mailman's fault, for misusing HTTP GETs.
I disagree with this -- since as you say, any number of places already misuse GET, that usage is fairly common. A user who sets themselves up for it should know (or be warned by their mail client) that it has the possibility to trigger events. I'd say it's more a client issue than a Mailman issue.
Concur. And I base my opinion on 15 years of systems design experience, FWIW.
And, thinking about it, since GET *can* do this, it's probably wrong for W3 to push for it not to be used that way, because if things like your pre-caching system come into common use, the dark side of the net will take advantage of it, the way early virus writers took advantage of mail clients with auto-exec of .EXE's being on by default. So aren't you setting yourself up for problems by having a technology that can do this, even if you deprecate it, because it sets a user expectation that's going to be broken by those wanting to take advantage of it? I've got a bad feeling about this -- your example seems to set you up for abuse by those looking for ways to abuse you, and that's a bigger issue than Mailman using it -- because if all of the 'white side' programs cooperate, it just encourages creation of things (like the pre-caching) that the dark side will take advantage of.
Well put, young pilot.
Of course, I only do that because I don't have that prefetching thing set up... yet.)
At this point, I'd never turn on pre-fetching, since its safety depends entirely on voluntary cooperation, and you aren't in a position to police until after the fact. That's a Bad Thing in a big way.
Well, yeah, but you don't have a palmtop, either, Chuq, right? :-)
But I'd be making this argument whether or not that were the case, since it's clearly the Right Thing to do imho...
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here? Because the service you propose is unsafe unless you can guarantee everyone you talk to is compliant, and we know how likely that's going to be. That, to me, is a much bigger issue than whether or not Mailman complies, and in fact, I could make an argument that the standard isn't acceptable if it's going to be a basis for services that can cause harm but requires voluntary acceptance on the server side. By the time you figure out the server isn't complying, they've burnt down the barn and run off with the horse. That's bad.
I can't find a thing to argue with here; let's see what he comes up with...
Cheers, -- jra
Jay R. Ashworth jra@baylink.com Member of the Technical Staff Baylink RFC 2100 The Suncoast Freenet The Things I Think Tampa Bay, Florida http://baylink.pitas.com +1 727 804 5015
OS X: Because making Unix user-friendly was easier than debugging Windows -- Simon Slavin in a.f.c
On 7/15/01 8:16 AM, "Jay R. Ashworth" <jra@baylink.com> wrote:
What *I* think is that it's a special case, and any such pre-fetch system ought to, by default, *not* pre-fetch anything with GET parameters in it.
That was what I was thinking, too -- no matter what W3 says, building a tool that pre-fetches those by default is like Microsoft defaulting .EXE execution to yes, or sendmail defaulting to open relay like it did in 8.8 and before. Those are situations just waiting for someone to take advantage of it, and the whitehats won't be the someones.
Well put, young pilot.
Young? Young? Where's my walker? (as an irrelevant side note, Apple finally hired me an assistant, who was -- literally -- not potty trained when I used my first Unix system. Good kid. Well, man. He's no kid... But he's going to get tired of the "In the Good Old Days.." jokes...)
At this point, I'd never turn on pre-fetching, since its safety depends entirely on voluntary cooperation, and you aren't in a position to police until after the fact. That's a Bad Thing in a big way.
Well, yeah, but you don't have a palmtop, either, Chuq, right? :-)
I have a Handspring and my primary machine is a wireless laptop (a Titanium!). Do I need a palmtop?
Of course, none of this deals with whether Mailman should use GET or POST. That GET is inherently unsafe doesn't mean that it's therefore okay for Mailman to use it -- I still think we need to look at this further. It simply means, IMHO, that if we choose to not follow the W3 standard, it's fairly safe to do so.
And, editorial comment time, the subject line is a classic example of why subject line topic flags are the second worst damn thing you can do to a mailing list -- after coercing reply-to. How in the bloody heck is someone supposed to look at THAT and figure out whether they want to read the message? And my user studies have shown that subject line is the key determinant on whether a list message gets read.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
Shroedinger: We can never really be sure which side of the road the chicken is on. It's all a matter of chance. Like a game of dice.
Einstein, refuting Schroedinger: God does not play dice with chickens. Heisenburg: We can determine how fast the chicken travelled, or where it ended up, but we cannot determine why it did so.
On Sun, 15 Jul 2001, Jay R. Ashworth wrote:
What *I* think is that it's a special case, and any such pre-fetch system ought to, by default, *not* pre-fetch anything with GET parameters in it.
*All* GETs have side effects by definition: you get something different depending on what the parameters are.
Right--but they don't necessarily have side effects. For example, I think GETs are the right way to do the panes in these pages: http://www.wunderland.com/LooneyLabs/Chrononauts/lostids/view.html
And there's no reason all of those pages (database entries) shouldn't be pre-fetchable.
-Dale Newfield Dale@Newfield.org
On Sat, Jul 14, 2001 at 08:49:34PM -0700, Chuq Von Rospach wrote:
On 7/14/01 4:32 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Regarding "I think they're wrong", I respect your opinion, but the HTTP spec is the result of a decade of work on and experience with HTTP by full-time web protocol geeks...
But web geeks are not necessarily user-interface or user-experience geeks. You can perfectly correctly build a nice system that's technically correct, but not right for the users.
Sure... I agree that it's possible for a standard to be irrelevant or just not meet the needs of the users for which it was written.
But I don't think that is the case here: there really is a good reason for this distinction between get and post, and it is widely followed by popular sites *because* of the usability issues. (I would like to back up my "widely followed" claim by doing a survey of various popular sites, but don't have time today, and probably won't until Friday at the earliest :( Anyway, I am fairly confident that is the case.)
What if I have a smart system on my desktop PC that is configured to prefetch any URLs it sees in my incoming email,
Given the number of viruses, spam with URLs and other garbage out there, I'd say you're foolish (bordering on stupid), but that's beside the point.
Either that, or I value my time more than some stupid machine's :)
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
It is an interesting issue, one I can't toss out easily; but I'm not convinced, either. I think it's something that needs to be hashed over, perhaps prototyped both ways so we can see the best way to tweak it.
Barry, what do you think about this instance? As the net moves more towards wireless, PDA, mobile-phone, stuff, could we be setting ourselves up for a later problem by ignoring this pre-cache issue Gerald's raised?
Gerald, does W3 have sample pages for "right" and "wrong" that can be looked at, or are we going to have to develop some? The more I think about this, the more I think it's a case where we ought to see if we can develop a prototype that follows the standards that we like, rather than toss it out at first glance. But if we can't come up with a system that we agree is 'easy enough', then we should go to what we're currently thinking.
I included pointers to all the stuff I could think of in my first message in this thread, including a proposed implementation for mailman: http://mail.python.org/pipermail/mailman-developers/2001-January/003579.html
If you think the docs on this subject at W3C are lacking, by all means let me know.
Somewhat related, W3C recently published a note called "Common User Agent Problems" <http://www.w3.org/TR/cuap> and it was received quite well by the web community ("we need more of that kind of thing", etc.) I think there is a plan to write a similar one targeted towards site administrators, pointing out common mistakes and raising awareness about little-known but important RFC/spec details like this.
That would allow anyone in the world to sign me up for any mailman-backed mailing list, whether or not I even see the email confirmation requests. And that would be Mailman's fault, for misusing HTTP GETs.
I disagree with this -- since as you say, any number of places already misuse GET, that usage is fairly common. A user who sets themselves up for it should know (or be warned by their mail client) that it has the possibility to trigger events. I'd say it's more a client issue than a Mailman issue.
By "Mailman's fault" I meant that if mailman did this, it would be the part of the equation causing problems by not abiding by the HTTP spec. But this prefetching thing is just an example; the main point is that the protocol has this stuff built in for a reason, and there may be hundreds of other applications (current and future) that need it to be there.
And, thinking about it, since GET *can* do this, it's probably wrong for W3 to push for it not to be used that way, because if things like your pre-caching system come into common use, the dark side of the net will take advantage of it, the way early virus writers took advantage of mail clients with auto-exec of .EXE's being on by default. So aren't you setting yourself up for problems by having a technology that can do this, even if you deprecate it, because it sets a user expectation that's going to be broken by those wanting to take advantage of it? I've got a bad feeling about this -- your example seems to set you up for abuse by those looking for ways to abuse you, and that's a bigger issue than Mailman using it -- because if all of the 'white side' programs cooperate, it just encourages creation of things (like the pre-caching) that the dark side will take advantage of.
I'm not worried about abuse, myself; I would have set this up on my desktop system already if I had more time to hack it up...
But I'd be making this argument whether or not that were the case, since it's clearly the Right Thing to do imho...
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here?
Which Internet standards *don't* depend on voluntary acceptance?
The HTTP spec (for example) just provides information on how HTTP is meant to work. Then people who want to do the right thing have a place to look up what the Right Thing is, and if there are ever interoperability problems between different pieces of software, people can check the spec and say "hey, look, you're doing this wrong; see rfc nnnn, sec m.m; please fix." (and if that doesn't work, there's always http://www.rfc-ignorant.org/ ;)
Because the service you propose is unsafe unless you can guarantee everyone you talk to is compliant, and we know how likely that's going to be.
I disagree that it's unsafe; I don't especially want a bunch of spam in my http cache, but don't really care about it, either.
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
On 7/16/01 5:38 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
You're speaking both sides of the argument, Gerald, because this whole discussion started with "don't do it this way, or you could cause me to automatically be subscribed to a mailing list". Yet you're saying it's okay to fetch it, because evidently if you fetch it, nothing bad will happen.
So is it a good thing or a bad thing? You seem to be claiming both at the same time.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
I recommend the handyman's secret weapon: duct tape.
On Mon, Jul 16, 2001 at 05:47:24PM -0700, Chuq Von Rospach wrote:
On 7/16/01 5:38 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
You're speaking both sides of the argument, Gerald, because this whole discussion started with "don't do it this way, or you could cause me to automatically be subscribed to a mailing list". Yet you're saying it's okay to fetch it, because evidently if you fetch it, nothing bad will happen.
So is it a good thing or a bad thing? You seem to be claiming both at the same time.
I guess I should have written "or anything else really bad to happen"; I consider viruses and autoexecing executables to be much more dangerous than the occasional errant mailing list subscription.
I am prepared to deal with a few bogus subscriptions here and there (of course, I'll bug each of them to get fixed as I encounter them), but it would be really bad for mailman-backed lists to operate this way, especially as mailman continues to take over the world...
But once again, this prefetching business is just an example, please don't focus on that exclusively: what I am really asking is simply for Mailman to comply with the HTTP specs.
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
I've removed mailman-users from the distribution. We shouldn't be using both lists at the same time in discussions, and this is a developers/design issue.
On 7/16/01 5:38 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Sure... I agree that it's possible for a standard to be irrelevant or just not meet the needs of the users for which it was written.
But I don't think that is the case here:
But -- you haven't dealt with the safety issue in any substantive way. If you can't build a standard that protects the user from abuse, I'd argue that the standard provides a false sense of security that is more destructive than not standardizing at all; because, as you noted, it'll tend to encourage developers to write to the standard, and not all of those writers will really understand the subtler issues involved. So if you can't make GET safe to automatically browse, even with the blackhats, I'd argue it's better to not create standards that'd encourage that -- or write the standard in such a way that these issues and limitations are very clear IN the standard.
(I would like to back up my "widely followed" claim by doing a survey of various popular sites, but don't have time today, and probably won't until Friday at the earliest :( Anyway, I am fairly confident that is the case.)
I would really like to see this; especially since, ignoring the larger issues with the standard, I'd like to see how people are doing this to make sure stuff that's done here (and stuff I have in the hopper) does it the best way. And that means following standards, as long as they make sense. But I still wouldn't auto-crawl an incoming data stream for links and pull them down automagically...
I'm bothered with the larger issues, almost to the point where the initial problem becomes irrelevant.
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
No, but it can cause actions you'll regret. You started this by bringing up one as a problem. Now, however, you're saying "well, that's no big deal".
Which is it? No big deal? Or a problem? And if we can trigger actions you might or might not like, you can bet it'll honk off others. And if we can trigger actions, so can others, and those won't necessarily be innocent ones. So I don't think you can ignore this issue by simply minimizing its importance. Either it is, or it isn't, and you can't start making judgemental calls on individual cases and using that to imply that all cases are not serious. To me, that's what you've done -- no offense, Gerald, but it's coming across a bit like you're trying to duck the larger issue, while still pushing for mailman to 'fix' the problem you're trying to minimize.
If you think the docs on this subject at W3C are lacking, by all means let me know.
No, what I really was hoping for were examples of what W3 (or you) consider 'proper', to see how W3 thinks this ought to be done.
Somewhat related, W3C recently published a note called "Common User Agent Problems" <http://www.w3.org/TR/cuap> and it was received quite well by the web community
Off to go read....
I think there is a plan to write a similar one targeted towards site administrators, pointing out common mistakes and raising awareness about little-known but important RFC/spec details like this.
One of the best things I think W3 could do in these cases is not only to label things "good" or "bad", but to generate cookbooks of techniques, with explanations of why they're good, or why they ought to be avoided. Especially in the subtleties of the standards that might not be intuitively obvious, or which might be involved in emerging technologies (like wireless) that the typical designer hasn't had time to worry about yet (or doesn't know to worry about). I love things like the "Perl Cookbook" of code fragments and examples, not just because it saves me reinventing the wheel, but because it gives me insight into how things ought to be done, at least in the eyes of my betters. And if you create a cookbook, people are a lot more likely to adopt it, since they can borrow the existing code...
By "Mailman's fault" I meant that if mailman did this, it would be the part of the equation causing problems by not abiding by the HTTP spec. But this prefetching thing is just an example; the main point is that the protocol has this stuff built in for a reason, and there may be hundreds of other applications (current and future) that need it to be there.
The standards also brought us, um, BLINK. Just because it's there or someone proposes it doesn't mean we ought to do it that way.
I'm not worried about abuse, myself;
You should be. Especially if you're building a standard that enables security problems and encourages programmers to write code that allows for those problems.
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here?
Which Internet standards *don't* depend on voluntary acceptance?
But there's a difference here -- we're talking about possible security issues, not just whether someone adopts a tag.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
95% of being a net.god is sounding persuasive and convincing people you know what you're talking about, even when you're making it up as you go along. (chuq von rospach, 1992)
On Mon, Jul 16, 2001 at 09:04:09PM -0700, Chuq Von Rospach wrote:
I've removed mailman-users from the distribution. We shouldn't be using both lists at the same time in discussions, and this is a developers/design issue.
ok... I was wondering about that, thanks.
On 7/16/01 5:38 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Sure... I agree that it's possible for a standard to be irrelevant or just not meet the needs of the users for which it was written.
But I don't think that is the case here:
But -- you haven't dealt with the safety issue in any substantive way. If you can't build a standard that protects the user from abuse, I'd argue that the standard provides a false sense of security that is more destructive than not standardizing at all; because, as you noted, it'll tend to encourage developers to write to the standard, and not all of those writers will really understand the subtler issues involved. So if you can't make GET safe to automatically browse, even with the blackhats, I'd argue it's better to not create standards that'd encourage that -- or write the standard in such a way that these issues and limitations are very clear IN the standard.
I think the HTTP spec is fairly clear about most of this:
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in
their interactions over the Internet, and should be careful to allow
the user to be aware of any actions they might take which may have an
unexpected significance to themselves or others.
In particular, the convention has been established that the GET and
HEAD methods SHOULD NOT have the significance of taking an action
other than retrieval. These methods ought to be considered "safe".
This allows user agents to represent other methods, such as POST, PUT
and DELETE, in a special way, so that the user is made aware of the
fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not
generate side-effects as a result of performing a GET request; in
fact, some dynamic resources consider that a feature. The important
distinction here is that the user did not request the side-effects,
so therefore cannot be held accountable for them.
-- http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1
but once all that's been said, it's really up to the implementations to do the right thing.
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
No, but it can cause actions you'll regret. You started this by bringing up one as a problem. Now, however, you're saying "well, that's no big deal".
Which is it? No big deal? Or a problem? And if we can trigger actions you might or might not like, you can bet it'll honk off others. And if we can trigger actions, so can others, and those won't necessarily be innocent ones.
If it happens once in a while with an obscure site here and there, that's much less of a problem than if some popular software like Mailman is doing the wrong thing and sending out tens or hundreds of thousands of these messages every day. (in part because every time one of these is used, it helps legitimize the practice of 'click this URL to unsub', which is the wrong message to be sending to people.)
I don't expect all the incorrect implementations in the world to suddenly get fixed overnight, but I'm trying to get the ones I know about fixed.
If you think the docs on this subject at W3C are lacking, by all means let me know.
No, what I really was hoping for were examples of what W3 (or you) consider 'proper', to see how W3 thinks this ought to be done.
ok, I'll try to write something up on this sometime...
I think there is a plan to write a similar one targeted towards site administrators, pointing out common mistakes and raising awareness about little-known but important RFC/spec details like this.
One of the best things I think W3 could do in these cases is not only to label things "good" or "bad", but to generate cookbooks of techniques, with explanations of why they're good, or why they ought to be avoided. Especially in the subtleties of the standards that might not be intuitively obvious, or which might be involved in emerging technologies (like wireless) that the typical designer hasn't had time to worry about yet (or doesn't know to worry about).
I agree this kind of thing needs to be done, but I think it can usually be done quite well by third parties, in online courses and articles, printed books, etc. But like I said above, I think W3C will start doing a bit more of this than we have in the past; it's just a matter of finding the time...
By "Mailman's fault" I meant that if mailman did this, it would be the part of the equation causing problems by not abiding by the HTTP spec. But this prefetching thing is just an example; the main point is that the protocol has this stuff built in for a reason, and there may be hundreds of other applications (current and future) that need it to be there.
The standards also brought us, um, BLINK.
er... no, Netscape's programmers implemented BLINK one night after they had been drinking :)
It's not in any HTML standard ever published by W3C or the IETF.
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here?
Which Internet standards *don't* depend on voluntary acceptance?
But there's a difference here -- we're talking about possible security issues, not just whether someone adopts a tag.
A bad implementation of a spec can always cause security problems. This distinction between GET and POST in the HTTP protocol is specifically there to *prevent* problems: if I make a stock trade using an online brokerage and then hit my browser's "back" and "forward" buttons, I don't want the same transaction executed again! That's why brokerages, banks, and other quality sites use POST for such transactions, and browsers written to the spec will prompt the user for confirmation before rePOSTing a form.
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
On Tue, Jul 17, 2001 at 02:35:24AM -0400, Gerald Oskoboiny wrote:
# Naturally, it is not possible to ensure that the server does not
# generate side-effects as a result of performing a GET request; in
# fact, some dynamic resources consider that a feature. The important
# distinction here is that the user did not request the side-effects,
# so therefore cannot be held accountable for them.
#
# -- http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1
but once all that's been said, it's really up to the implementations to do the right thing.
It seems worth noting here that Zope makes this even worse: "procedure calls" won't necessarily be POSTs *or* GETs.
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
No, but it can cause actions you'll regret. You started this by bringing up one as a problem. Now, however, you're saying "well, that's no big deal".
Which is it? No big deal? Or a problem? And if we can trigger actions you might or might not like, you can bet it'll honk off others. And if we can trigger actions, so can others, and those won't necessarily be innocent ones.
If it happens once in a while with an obscure site here and there, that's much less of a problem than if some popular software like Mailman is doing the wrong thing and sending out tens or hundreds of thousands of these messages every day. (in part because every time one of these is used, it helps legitimize the practice of 'click this URL to unsub', which is the wrong message to be sending to people.)
Certainly.
I don't expect all the incorrect implementations in the world to suddenly get fixed overnight, but I'm trying to get the ones I know about fixed.
But the problem here, Gerald, is that Chuq and I are having the "should the standard actually say this in the real world" conversation, and you're assuming that it should. Chuq's asked you for your reasons for supporting this, having given his own... and so far, his outweigh yours, for *me*.
One of the best things I think W3 could do in these cases is not only to write "good" "bad", but generate cookbooks of techniques, with explanations of why they're good, or why they ought to be avoided. Especially in the subtlties of the standards that might not be intuitively obvious, or which might be involved in emerging technologies (like wireless) that the typical designer hasn't had time to worry about yet (or doesn't know to worry about).
I agree this kind of thing needs to be done, but I think it can usually be done quite well by third parties, in online courses and articles, printed books, etc. But like I said above, I think W3C will start doing a bit more of this than we have in the past; it's just a matter of finding the time...
Um... *finding the time*? HTTP underlies half the planet. Someone needs to explain that to the people who pay the bills of the folks on the committee.
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here?
Which Internet standards *don't* depend on voluntary acceptance?
But there's a difference here -- we're talking about possible security issues, not just whether someone adopts a tag.
A bad implementation of a spec can always cause security problems.
Precisely our point. Thank you. :-)
This distinction between GET and POST in the HTTP protocol is specifically there to *prevent* problems: if I make a stock trade using an online brokerage and then hit my browser's "back" and "forward" buttons, I don't want the same transaction executed again! That's why brokerages, banks, and other quality sites use POST for such transactions, and browsers written to the spec will prompt the user for confirmation before rePOSTing a form.
No, that's why anyone competent doing that sort of coding will put a Transaction Sequence Number cookie in a hidden field in the form, and bounce duplicate submissions. Depending on the browser for that is another of those false senses of security Chuq was talking about earlier.
Cheers, -- jra
Jay R. Ashworth jra@baylink.com Member of the Technical Staff Baylink RFC 2100 The Suncoast Freenet The Things I Think Tampa Bay, Florida http://baylink.pitas.com +1 727 804 5015
OS X: Because making Unix user-friendly was easier than debugging Windows -- Simon Slavin in a.f.c
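The transaction-sequence-number trick Jay describes might look something like this in Python. This is an illustrative sketch only: the in-memory dict stands in for the server-side session store a real site would use, and the form action and field names are made up.

import sha, time, random

pending_tokens = {}

def issue_form():
    # Generate a one-time transaction token (sha-based, much like
    # Mailman's confirm cookies) and embed it in a hidden field.
    token = sha.new('%s%s' % (time.time(), random.random())).hexdigest()
    pending_tokens[token] = 1
    return ('<form method="POST" action="/trade">\n'
            '  <input type="hidden" name="txn" value="%s">\n'
            '  <input type="submit" value="Confirm trade">\n'
            '</form>' % token)

def handle_post(form):
    # Redeem the token exactly once; a duplicate rePOST finds it
    # gone and is bounced instead of re-executing the trade.
    token = form.get('txn')
    if not pending_tokens.has_key(token):
        return 'Duplicate or stale submission ignored.'
    del pending_tokens[token]
    return 'Trade executed.'

The point is that idempotency is enforced server-side, rather than trusting the browser's rePOST confirmation dialog.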
On 7/16/01 11:35 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
I think the HTTP spec is fairly clear about most of this:
Gerald - I hope this is taken in a team-building way, since it's what I intend it to be.
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in [... Etc ...]
This section is written like most good Unix man pages. It speaks volumes to someone who already knows the answer, and is pretty much opaque to someone who doesn't.
If I were coming to it cold, researching the protocol, I'd leave it again without any real idea that there are significant problems I ought to be aware of. And even if I had some idea there were problems to worry about, I wouldn't have a clue what to worry about. Even after the discussion we've already had, I don't find this very illuminating.
Now, I've been doing internet stuff for 20 years, and HTTP/Web stuff since 1995, and even with my background, I find this doesn't really say anything to me about the issues I should be warned about here. It's very nice, but it speaks to the choir who already knows what it speaks of. To an outsider, it not only doesn't illuminate, it doesn't do a thing to tell me why the choir is a good thing to join, either. So if I'm doing my research, I read it through, and go off and do whatever I was planning on doing in the first place, without ever really knowing I just tripped over an iceberg.
but once all that's been said, it's really up to the implementations to do the right thing.
And the implementors are given basically no guidance on how, or even why. And very little what. Which makes it really hard for the implementor to get it right, and even harder for them to care -- what with deadlines and bosses and other things actively in their faces, this stuff just isn't going to get a lot of visibility with the people you need to be evangelizing.
If it happens once in a while with an obscure site here and there, that's much less of a problem than if some popular software like Mailman is doing the wrong thing
I'm sorry, but I consider this ducking the issue again. You're completely ignoring the white hat/black hat issue, and hiding behind "obscure" and "not a significant issue" and other rationalizations, while still trying to prove that Mailman is none of those, and therefore ought to consider this a crisis issue.
But since I've tried three times now to get you to deal with this double standard and gotten nowhere, I'll drop it. No sense beating a dead horse. You clearly don't want to deal with the issue, so I'll stop pushing. But I'm disappointed, to be honest about it.
I don't expect all the incorrect implementations in the world to suddenly get fixed overnight, but I'm trying to get the ones I know about fixed.
Which ignores the larger issue: the standard encourages implementations of tools that run into these problems without dealing with the problems themselves, leaving it up to the implementor to figure out how to "do the right thing" to avoid bogeymen that basically can't be known ahead of time. This all comes across to me like the person who decided that Microsoft's e-mail client would -- by default -- execute included .EXE files, because what's the worst that could happen?
I agree this kind of thing needs to be done, but I think it can usually be done quite well by third parties,
Which is a great way to duck responsibility, not get it done, and be able to blame someone else when something bad happens because this stuff didn't exist.
A bad implementation of a spec can always cause security problems.
But the problem here is that GET itself is the security problem, as the functionality currently exists, and you're trying to 'fix' the problem by creating a standard that says "don't do that".
THAT doesn't work. Because even if it works with all us whitehats, the blackhats are simply going to look at it with even more glee, since those side effects are even less expected by the end user when they kick in.
But enough. You and I simply don't see eye to eye here, and clearly won't, and I'll shut up now. You want to focus on the micro issue of Mailman, while I see that as a non-issue compared to the macro issues you are ignoring. And it looks like impasse.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
I recommend the handyman's secret weapon: duct tape.
On Tue, Jul 17, 2001 at 05:27:52PM -0700, Chuq Von Rospach wrote:
On 7/16/01 11:35 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
I think the HTTP spec is fairly clear about most of this:
Gerald - I hope this is taken in a team-building way, since it's what I intend it to be.
Sure. I am trying to be clear but apparently not succeeding :(
If it happens once in a while with an obscure site here and there, that's much less of a problem than if some popular software like Mailman is doing the wrong thing
I'm sorry, but I consider this ducking the issue again. You're completely ignoring the white hat/black hat issue, and hiding behind "obscure" and "not a significant issue" and other rationalizations, while still trying to prove that Mailman is none of those, and therefore ought to consider this a crisis issue.
But since I've tried three times now to get you to deal with this double standard and gotten nowhere, I'll drop it. No sense beating a dead horse. You clearly don't want to deal with the issue, so I'll stop pushing. But I'm disappointed, to be honest about it.
Hmm... I don't know how I managed to give the impression that I'm ducking this issue. It's clear that we are miscommunicating somehow.
I consider all violations of this part of the HTTP spec to be a problem, but I don't think the problem is yet widespread enough that we need to declare the HTTP spec irrelevant and just use GET and POST with no regard for their intended semantics.
I would like to try to get all the broken implementations that I know about fixed, but because my time is limited I tend to focus on the ones that I consider most important, currently Mailman.
Because I know there are broken implementations out there, I would not recommend that most people try prefetching any URLs they see in their incoming email, but I wouldn't mind trying it myself because I'm a web nerd and I want to find out about broken implementations anyway. (and arrange for them to be fixed)
Does any of that help clarify my position?
Many of your other comments seem to be about the quality and clarity of the HTTP spec, which I don't think needs to be discussed here, but I encourage you to take it up elsewhere if you like. (e.g. www-talk, http://www.w3.org/Mail/Lists#www-talk )
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
On 7/18/01 2:24 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Because I know there are broken implementations out there, I would not recommend that most people try prefetching any URLs they see in their incoming email,
But doesn't the standard encourage just that, because it puts a patina of "it's okay to do this" on a standard that's clearly not safe to do so with, while not really dealing (at least in any of the stuff I've seen) with why it's not really safe, at least in a way someone who isn't already familiar with the issues would catch?
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
I'm really easy to get along with once you people learn to worship me.
On Wed, Jul 18, 2001 at 10:02:54PM -0700, Chuq Von Rospach wrote:
On 7/18/01 2:24 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Because I know there are broken implementations out there, I would not recommend that most people try prefetching any URLs they see in their incoming email,
But doesn't the standard encourage just that, because it puts a patina of "it's okay to do this"
what is "this" exactly?
on a standard that's clearly not safe to do so with,
I disagree re "clearly not safe"; like I said in my previous message, "I don't think the problem is yet widespread enough that we need to declare the HTTP spec irrelevant and just use GET and POST with no regard for their intended semantics."
What should the spec say, you should not GET any URLs, any time, because doing so might trigger unexpected side effects in noncompliant implementations?
while not really dealing (at least in any of the stuff I've seen) with why it's not really safe, at least in a way someone who isn't already familiar with the issues would catch?
Once again this sounds like a comment on spec quality, and I suggest you take it up with the IETF HTTP working group or start a discussion on the www-talk list.
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
"Gerald" == Gerald Oskoboiny <gerald@impressive.net> writes:
Mind if I chime in? (Even though you posted an EOT, Chuq).
>> [...] I'd argue that the standard provides a false sense of
>> security [...]
[Sorry for the butchering, Chuq, but I want to emphasize the conflict that I see...]
Gerald> 9.1.1 Safe Methods
Gerald> [...]
Gerald> In particular, the convention has been established
Gerald> that the GET and HEAD methods SHOULD NOT have the
Gerald> significance of taking an action other than
Gerald> retrieval. These methods ought to be considered
Gerald> "safe". This allows user agents to represent other
Gerald> methods, such as POST, PUT and DELETE, in a special
Gerald> way, so that the user is made aware of the fact that a
Gerald> possibly unsafe action is being requested.
So people will have their browser mark these links in a special way (is any browser actually doing that?).
*That* is the false sense of security Chuq mentioned. (I think ;-)
"This link is a safe link, because my browser tells me so". A fat lot of good that will do them.
(BTW: anyone realize that it's "SHOULD NOT" and not "MUST NOT"? Read RFC2119 and you'll see how this is relevant to *this* mailman discussion (yes, I've seen Barry's BDFL pronouncement... I just like to argue and debate ;-))
Gerald> The important distinction here is that the user did
Gerald> not request the side-effects, so therefore cannot be
Gerald> held accountable for them.
Gerald> -- http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1
Yay! I didn't know the HTTP standards body wrote law. Or has RFC2616 been passed by the House? And been confirmed by the Senate? (Or is it the other way 'round?)
Gerald> but once all that's been said, it's really up to the
Gerald> implementations to do the right thing.
And then we'll have the old and beloved game of "passing the buck". "Me? But my browser showed me the link was safe!" "Us? We just implemented the standard!". "Us? Ooops..." ;-)
>> No, but it can cause actions you'll regret. You started this by
>> bringing up one as a problem. Now, however, you're saying
>> "well, that's no big deal".
As you can see above, Chuq, apparently RFC2616 contains new law saying that whatever that link did is "Not Your Fault" (The Browser Made Me Do It!) (Hey, Barry, I've got a song idea: "Not My Fault -- The Browser Made Me Do It (The RFC2616 Blues)" ;-)
Bye, J
PS: This is a resend, because the first mail out went to Chuq only... I screwed up the To header.
-- Jürgen A. Erhard (juergen.erhard@gmx.net, jae@users.sourceforge.net) My WebHome: http://members.tripod.com/Juergen_Erhard GIMP - Image Manipulation Program (http://www.gimp.org) Codito, ergo sum - I code, therefore I am -- Raster
I have a couple of questions and comments, and then I /really/ need to get some sleep, so I'll follow up with more tomorrow.
If state changing GETs break the standards, then why does e.g. Apache by default allow you to GET a cgi program? Apache is the most common web server (certainly on Mailman-friendly OSes) so I would think that it should adhere to the specs pretty closely.
Aren't the majority of cgi programs of a state-changing nature? Sure, you've got your odd search interface, but even a script like Mailman's private.py changes state: you get authenticated and a cookie gets dropped, and now your interactions are governed by a change in state.
Wouldn't it therefore make sense for Apache to in general disallow GETs to programs by default, with some enabling technique to allow specific state-neutral programs to be GETted?
I'll also mention that it seems to me that strict adherence to this rule would be pretty harmful to a platform like Zope, where urls are really encoded object access and execution commands (like RPC via urls).
sleepi-ly y'rs, -Barry
On Tue, 17 Jul 2001, Barry A. Warsaw wrote:
If state changing GETs break the standards, then why does e.g. Apache by default allow you to GET a cgi program?
A CGI program that has no side-effects and simply dynamically generates content wouldn't be a violation.
Wouldn't it therefore make sense for Apache to in general disallow GETs to programs by default, with some enabling technique to allow specific state-neutral programs to be GETted?
No, not to me, anyway.
I'll also mention that it seems to me that strict adherence to this rule would be pretty harmful to a platform like Zope, where urls are really encoded object access and execution commands (like RPC via urls).
Sounds like a bad choice.
ROGER B.A. KLORESE rogerk@QueerNet.ORG PO Box 14309 San Francisco, CA 94114 "Go without hate. But not without rage. Heal the world." -- Paul Monette
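To make the distinction concrete, a CGI like the following is a perfectly compliant GET target, since it only reads its input and generates content. (A minimal Python 2-era sketch using the stdlib cgi module; all names are illustrative.)

#!/usr/bin/env python
import cgi

def main():
    # Side-effect free: repeating this request (via a cache, a
    # prefetcher, or the Back button) is harmless, so GET is fine.
    form = cgi.FieldStorage()
    query = form.getvalue('q', '')
    print 'Content-Type: text/html'
    print
    print ('<html><body><p>Results for: %s</p></body></html>'
           % cgi.escape(query))

if __name__ == '__main__':
    main()

A script that subscribed, unsubscribed, or ordered something in response to the same request would be exactly the kind of state-changing resource the spec says belongs behind POST.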
(Chuq has suggested that we keep this thread on -developers, so this will be my last post to -users on the subject for now, I just wanted to respond to this here in case anyone else was curious about this stuff.)
On Tue, Jul 17, 2001 at 12:16:06AM -0400, Barry A. Warsaw wrote:
I have a couple of questions and comments, and then I /really/ need to get some sleep, so I'll follow up with more tomorrow.
If state changing GETs break the standards, then why does e.g. Apache by default allow you to GET a cgi program? Apache is the most common web server (certainly on Mailman-friendly OSes) so I would think that it should adhere to the specs pretty closely.
Aren't the majority of cgi programs of a state-changing nature?
I don't think so; TimBL addresses this in his writeup:
Forms: GET and POST
There is a very important distinction in society and in software, and
certainly on the Web, between reading and writing; between having no
effect and making changes; between observing and making a commitment.
This is fundamental in the Web and your web site must respect it.
Currently the line is often fuzzily drawn, which is very bad for many
reasons.
Forms can work in two ways corresponding to this distinction.
One way is to direct the user, like a link, to a new resource, but one
whose URI is constructed from the form's field values. This is
typically how a query form works. (It uses HTTP's GET method.) The
user makes no commitment. Once he or she has followed the link, he or
she can bookmark the result of the query. Following any link to that
same URI will perform the same query again. It is as though Web space
were populated by lots of virtual pages, one for the results of each
possible query to the server. There is no commitment by the user. The
operation can be undone simply by pressing the Back button on a
browser. The user can never be held responsible for anything which was
done using HTTP GET. If your website fills a shopping cart as a user
follows normal links, and sometimes users end up ordering too much or
too little as they use a web accelerator or a cache, then it is your
fault. You should have used the second way.
The second way a form can work is to take a command, or commitment,
from the user. This is done using HTTP POST or sometimes by sending an
email message. "Submit this order", and "unsubscribe from this list" are
classic examples. It is really important that the user understands
when a commitment or change is being made and when it isn't.
Hopefully, clients will help by changing the cursor to a special one
when a commitment is about to be made. Such an operation, like sending
an email, or signing and mailing a paper document, cannot be undone.
It is socially quite different. Browsers and servers and proxies which
understand HTTP treat it quite differently. You should never confuse
these two types of interaction on your web site, either way. If you
do, you break the web, and the web will break your site.
-- http://www.w3.org/Provider/Style/Input
Sure, you've got your odd search interface, but even a script like Mailman's private.py changes state: you get authenticated and a cookie gets dropped, and now your interactions are governed by a change in state.
private.py uses POST, no?
un: gerald> grep -i '<form' /home/mailman/Mailman/Cgi/private.py
<FORM METHOD=POST ACTION="%(basepath)s/">
(from the 2.0.5 codebase)
I'll also mention that it seems to me that strict adherence to this rule would be pretty harmful to a platform like Zope, where urls are really encoded object access and execution commands (like RPC via urls).
I haven't studied Zope, so I don't know about that, sorry.
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
I was pulled away on other work for most of the day, but I think I've caught up with the whole thread.
On the micro-issue of what Mailman's ttw confirmation should do, I am much more swayed by Thomas's observation that we can actually add useful value by providing a form that allows the user to confirm or discard his request. Given that I agree with everything Chuq et al have said about the inherent insecurity of GET, that seemed to me a more persuasive argument as it pertains narrowly to Mailman.
Unless someone wants to volunteer to do usability studies (for which I don't have the time), I propose to change confirm.py to POST a form, and to pull in the ability to cancel held postings and subscription requests. Good idea Thomas.
But I definitely appreciate the discussions Gerald initiated, and I'm glad he did that. Hopefully, Gerald can bring the very valid concerns raised here before the W3C and the standards authors. I think they're vitally important to where the web is going. The security and privacy of the web has a deservedly poor reputation, what with JavaScript and Java vulnerabilities (and the increasing number of sites that are simply unnavigable without them), client-side trojans, web bugs, hijacked ActiveX certificates, etc. I really wish browser vendors would err on the side of security and privacy rather than on convenience. Sucker the user in enough times, or sucker enough of them in, and the web will not be able to recover.
-Barry
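For reference, the flow Barry proposes might be sketched roughly like this. This is hypothetical code, not Mailman's actual confirm.py: the 'pending' store and the field names are made up. GET merely renders the form, the commitment happens only on POST, and a Cancel button provides the added value Thomas suggested.

def confirm_page(method, cookie, form, pending):
    # `pending` is a stand-in for Mailman's store of held requests.
    if method != 'POST':
        # Safe per RFC 2616 9.1.1: fetching this URL (even by a
        # prefetcher or cache) changes nothing.
        return ('<form method="POST" action="confirm/%s">\n'
                '  <input type="submit" name="action" value="Confirm">\n'
                '  <input type="submit" name="action" value="Cancel">\n'
                '</form>' % cookie)
    if form.get('action') == 'Confirm':
        pending.complete(cookie)    # finish the subscription
        return 'Request confirmed.'
    pending.discard(cookie)         # the new ability: explicit cancel
    return 'Request discarded.'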
On 7/17/01 6:08 PM, "Barry A. Warsaw" <barry@digicool.com> wrote:
Good idea Thomas.
Agreed. I'm all for it, also.
I really wish browser vendors would err on the side of security and privacy rather than on convenience.
They won't until their users do, and to be honest, the loud minority notwithstanding, that's what they are -- a loud minority. And to some degree the loud part does their cause a disservice, because they get written off by many people because they tend to be so strident, while Joe User simply doesn't care -- and nobody's done a good job trying to convince Joe User he should.
Sucker the user in enough times, or sucker enough of them in, and the web will not be able to recover.
Unfortunately, Barry, you're wrong. Don't believe me? Look at all of the spam that's nothing more than an electronic version of the same old scams that they've been trying to wipe out of the paper postal service for generations....
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
I'm really easy to get along with once you people learn to worship me.
On Tue, 17 Jul 2001 21:08:59 -0400 barry@digicool.com (Barry A. Warsaw) wrote:
This whole controversy might be my fault -- I don't know the pedigree of Barry's implementation, but I'd submitted a confirm-by-visiting-this-URL patch several months ago. Hey, what can I say? It was a quick hack. But since it was implemented, I don't think I've had one I-can't-follow-the-instructions-to-confirm exchange with a cluefully-challenged proto-subscriber; neither have I had a single complaint about misuse of GET.
The discussion has been thought-provoking. I'm not entirely swayed to the position that it's morally wrong to use GET in this case, where repeating the GET doesn't have any effect beyond what was caused by the first. But Barry has a good point that Thomas's idea adds value. IMHO it's a much better UI -- no matter how much text surrounds it, a URL in email isn't as clearly delineated as a big fat button in the middle of a web page. Having a "don't confirm" button as well is even better. And regardless, using POST instead of GET is a fairly simple change. From a pragmatic point of view, there's not much reason not to comply with the published standard, even if its justification is weak. (This is not to devalue the discussion and debate of the W3C's position.)
-les
On Tue, Jul 17, 2001 at 11:50:33PM -0700, Les Niles wrote:
And regardless, using POST instead of GET is a fairly simple change. From a pragmatic point of view, there's not much reason not to comply with the published standard, even if its justification is weak.
Actually, alas, this is the crux of the discussion. It is *not* a fairly simple change: you can't POST *from the middle of an email*, which was the desired implementation. If you use POST, the user *has to do another thing*.
There is much reason not to comply with the published standard: people are stupid. Shame, isn't it?
Cheers, -- jra
Jay R. Ashworth jra@baylink.com Member of the Technical Staff Baylink RFC 2100 The Suncoast Freenet The Things I Think Tampa Bay, Florida http://baylink.pitas.com +1 727 804 5015
OS X: Because making Unix user-friendly was easier than debugging Windows -- Simon Slavin in a.f.c
On Wed, 18 Jul 2001 09:33:37 -0400 "Jay R. Ashworth" <jra@baylink.com> wrote:
On Tue, Jul 17, 2001 at 11:50:33PM -0700, Les Niles wrote:
And regardless, using POST instead of GET is a fairly simple change. From a pragmatic point of view, there's not much reason not to comply with the published standard, even if its justification is weak.
Actually, alas, this is the crux of the discussion. It is *not* a fairly simple change: you can't POST *from the middle of an email*, which was the desired implementation. If you use POST, the user *has to do another thing*.
Well, yeah, but as I said, I think having the user do another thing actually makes for a much nicer UI. My "simple change" comment was intended to refer to the coding effort. (Hmm... seems like it would be possible to POST from a text/html section in the middle of an email... but I'm sure not going to suggest that. :)
There is much reason not to comply with the published standard: people are stupid. Shame, isn't it?
I'm not sure which stupidity you're talking about. Are you concerned that people will launch the link from the email but then not push the "confirm" (or "cancel") button?
-les
On Wed, Jul 18, 2001 at 07:56:46AM -0700, Les Niles wrote:
Actually, alas, this is the crux of the discussion. It is *not* a fairly simple change: you can't POST *from the middle of an email*, which was the desired implementation. If you use POST, the user *has to do another thing*.
Well, yeah, but as I said, I think having the user do another thing actually makes for a much nicer UI.
Alas, Chuq's assertion, which it would not surprise me in the least to see proved true, is that it does *not*; it gets you lots of stupid support questions.
My "simple change" comment
was intended to refer to the coding effort. (Hmm... seems like it would be possible to POST from a text/html section in the middle of an email... but I'm sure not going to suggest that. :)
:-) Small Matter Of Programming.
There is much reason not to comply with the published standard: people are stupid. Shame, isn't it?
I'm not sure which stupidity you're talking about. Are you concerned that people will launch the link from the email but then not push the "confirm" (or "cancel") button?
I personally wasn't concerned with that, but Chuq seems to be (unless I've misread him), and he has a *much* larger stable of experience on the topic.
Cheers, -- jra
Jay R. Ashworth jra@baylink.com Member of the Technical Staff Baylink RFC 2100 The Suncoast Freenet The Things I Think Tampa Bay, Florida http://baylink.pitas.com +1 727 804 5015
OS X: Because making Unix user-friendly was easier than debugging Windows -- Simon Slavin in a.f.c
On 7/18/01 8:36 AM, "Jay R. Ashworth" <jra@baylink.com> wrote:
I'm not sure which stupidity you're talking about. Are you concerned that people will launch the link from the email but then not push the "confirm" (or "cancel") button?
I personally wasn't concerned with that, but Chuq seems to be (unless I've misread him) and he as a *much* larger stable of experience on the topic.
In some cases, it's not as big an issue, but with my big server (the one that does the marketing stuff), there's a strong charter to make sure everyone gets on the list (but nobody who doesn't want to be there). Very big pressure to remove any hoop that needs to be jumped through or bar that needs to be jumped over, unless it absolutely has to be there.
So understanding the needs of the non-geek, and making things as simple as possible, are big issues for me. It doesn't hurt that I have an 80-year-old mom who loves her iMac, loves her e-mail, and is still fascinated AND intimidated by this stuff, to keep me honest. She's not stupid -- she's just not a geek. And working with her makes me ALWAYS try to understand how people like her see this stuff, because how she sees it is very different from how I see it.
If you don't have someone like that around to keep you honest, adopt one. Seriously.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
95% of being a net.god is sounding persuasive and convincing people you know what you're talking about, even when you're making it up as you go along. (chuq von rospach, 1992)
On Jul 14, 2001 at 19:32, Gerald Oskoboiny wrote:
our own specs, for obvious reasons. But I'd be making this argument whether or not that were the case, since it's clearly the Right Thing to do imho...
Yeah, like those List-* headers :-)
-- Satya. <URL:http://satya.virtualave.net/> US-bound grad students! For pre-apps, see <URL:http://quickapps.cjb.net/> It ALWAYS goes wrong, especially if it's mission critical!
I'd be happy with an admin-configurable option to do either traditional subscription confirmation or the HTTP method.
At 02:03 PM 7/13/2001 -0700, Chuq Von Rospach wrote:
On 7/13/01 1:43 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
This violates the HTTP protocol: visiting a URL (i.e., an HTTP GET) should not have side effects like confirming a subscription.
My first reaction was "say what?" but I went and read the w3 stuff before responding...
I realize that a number of other sites misuse GET this way, but I think most of the large ones (e.g., Yahoo, online brokerages and banks, etc.) get it right, and I think Mailman should too.
Because, frankly, I think the W3 is wrong IN THIS CASE. That may make sense in the general case, especially in an HTTP-only situation, but in this case, where the URL is being carried in e-mail to confirm an action the user has (presumably) started, I think they're wrong. As long as the e-mail clearly delineates the action being taken, do what's easy for the user; and the user isn't going to want to go clicking through multiple links just so we can abide by the HTTP stuff.
But the key is this is a finalization of a distributed transaction, with e-mail distributing the token. Under other circumstances, I see W3's logic. Here, however, using a URL to bring up a page that says "click here to confirm" is only going to piss off Joe User, not make his life better.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
Some days you're the dog, some days you're the hydrant.
Some time ago, we had a consultant write us some code for Mailman to implement a feature found in LISTSERV, so that we could stop using that list handler. The feature in question is the ability to put a limit on how many members can be subscribed to a list. The code was written under the GPL, and in the contract we stipulated that it be submitted to you, the Mailman maintainers, so that it could be included in future releases of Mailman. Since this announcement displays no trace of such functionality, I'd just like to ask if you have rejected the code, or if it for some reason never reached you?
Calle Dybedahl | UNIX-admin | Telenordia Internet | cdy@algonet.se
Dear Barry --
Thanks again for all your wonderful service and assistance to the 'net community. I was wondering if there is a way to back up the list (i.e., extract all the email addresses to a text file)? Or perhaps there is a specific file that already contains these?
Roy
"RH" == Roy Harvey <roy@lamrim.com> writes:
RH> Thanks again for all your wonderful service and assistance
RH> to the 'net community.
You're welcome!
RH> I was wondering if there is a way to back up the list (i.e.,
RH> extract all the email addresses to a text file)? Or perhaps
RH> there is a specific file that already contains these?
bin/list_members is as close as it gets right now. It does lose some information, although it can be used to preserve the digest vs. regular delivery distinction. It would be nice to have a script that extracted all the membership information for a list into a plain text file. That shouldn't be hard to write given the new MemberAdaptor interface in MM2.1.
Cheers, -Barry
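A rough sketch of what such a script might look like. This is hypothetical: the MemberAdaptor method names below are one reading of the MM2.1 interface and may not match the shipped code exactly.

#!/usr/bin/env python
import sys
from Mailman import MailList

def dump(listname):
    # Open the list read-only; lock=0 means we don't need write access.
    mlist = MailList.MailList(listname, lock=0)
    # One member per line: address, tab, delivery mode.
    for addr in mlist.getRegularMemberKeys():
        print '%s\tregular' % addr
    for addr in mlist.getDigestMemberKeys():
        print '%s\tdigest' % addr

if __name__ == '__main__':
    dump(sys.argv[1])

Invoked as, say, 'python dump_members.py mylist > mylist.txt' (a made-up name), it would capture each address and its delivery mode; the other adaptor getters could append real names and option flags the same way.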
Is there a Virus Filtering package that works with Mailman?
Tom Eagle
At 05:18 PM 3/19/02 -0600, you wrote:
Is there a Virus Filtering package that works with Mailman?
That would more properly be an MTA issue, not an MLM issue.
Depending on what you're running, there are various vendors who have plugins that can scan. Trend has several unixy variants, for instance, although they're not cheap. We use Mirapoint's solution: an MD300 message director sitting in front of our main mail servers, scrubbing all inbound and outbound mail with Trend's engine. (Soon to be Sophos's engine, since Mirapoint, thanks to their agreements, was able to underbid Trend on every contract for a combined hardware and software solution, charging less than Trend would for just the software...)
None of them are cheap, though.