This is the official announcement for Mailman 2.1 alpha 2. Because it's an alpha, this announcement is only going out to the mailman-* mailing lists. Two warnings: you probably should still not use this version for production systems (but TIA for any and all testing you do with it!), and I've already had a couple of bug fixes from early adopters. 2.1a2 should still be useful, but you might want to keep an eye on cvs and the mailman-checkins list for updates.
I am only making the tarball available on SourceForge, so you'll need to go to http://sf.net/projects/mailman to grab it. You'll also need to upgrade to mimelib-0.4, so be sure to go to http://sf.net/projects/mimelib to grab and install that tarball first.
To view the on-line documentation, see
http://www.list.org/MM21/index.html
or
http://mailman.sf.net/MM21/index.html
Below is an excerpt from the NEWS file for all the changes since 2.1alpha1. There are a bunch of new features coming down the pike, and I hope to have an alpha3 out soon. I'm also planning on doing much more stress testing of this version with real list traffic, and I'm hoping we'll start to get more languages integrated into cvs.
Enjoy, -Barry
-------------------- snip snip --------------------
2.1 alpha 2 (11-Jul-2001)
- Building
o mimelib 0.4 is now required. Get it from
http://mimelib.sf.net. If you've installed an earlier
version of mimelib, you must upgrade.
o /usr/local/mailman is now the default installation
directory. Use configure's --prefix switch to change it
back to the default (/home/mailman) or any other
installation directory of your choice.
- Security
o Better definition of authentication domains. The following
roles have been defined: user, list-admin, list-moderator,
creator, site-admin.
o There is now a separate role of "list moderator", which has
access to the pending requests (admindb) page, but not the
list configuration pages.
o Subscription confirmations can now be performed via email or
via URL. When a subscription is received, a unique (sha)
confirm URL is generated in the confirmation message.
Simply visiting this URL completes the subscription process.
o In a similar manner, removal requests (via web or email
command) no longer require the password. If the correct
password is given, the removal is performed immediately. If
no password is given, then a confirmation message is
generated.
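To make the confirmation mechanism concrete, the sha-based confirm cookie might be generated along these lines (a minimal sketch with hypothetical helper names and inputs; this is not Mailman's actual code, which also has to persist the pending request):

```python
import hashlib
import time

def make_confirm_cookie(listname, address, secret):
    # Hash the list name, the subscriber address, a site secret, and
    # the current time into a hard-to-guess confirmation cookie (sha,
    # as described above).  The exact inputs are an assumption, not
    # Mailman's real recipe.
    data = '%s:%s:%s:%.6f' % (listname, address, secret, time.time())
    return hashlib.sha1(data.encode('utf-8')).hexdigest()

def make_confirm_url(base_url, cookie):
    # Visiting this URL (or mailing the cookie back) completes the
    # subscription, per the behavior described above.
    return '%s/confirm/%s' % (base_url.rstrip('/'), cookie)
```

The cookie is mailed to the subscriber, so only someone who can read that mailbox can complete the request.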
- Internationalization
o More I18N patches. The basic infrastructure should now be
working correctly. Spanish templates and catalogs are
included, as are English, French, Hungarian, and Big5
templates.
o Cascading specializations and internationalization of
templates. Templates are now searched for in the following
order: list-specific location, domain-specific location,
site-wide location, global defaults. Each search location
is further qualified by the language being displayed. This
means that you only need to change the templates that are
different from the global defaults.
Templates renamed: admlogin.txt => admlogin.html
Templates added: private.html
- Web UI
o Redesigned the user options page. It now sits behind an
authentication screen, so user options cannot be viewed without the
proper password. The other advantage is that the user's
password need not be entered on the options page to
unsubscribe or change option values. The login screen also
provides for password mail-back, and unsubscription w/
confirmation.
Other new features accessible from the user options page
include: ability to change email address (with confirmation)
both per-list and globally for all lists on a virtual domain;
global membership password changing; global mail delivery
disable/enable; ability to suppress password reminders both
per-list and globally; logout button.
[Note: the handle_opts cgi has gone away]
o Color schemes for non-template based web pages can be defined
via mm_cfg.
o Redesign of the membership management page. The page is now
split into three subcategories (Membership List, Mass
Subscription, and Mass Removal). The Membership List
subcategory now supports searching for member addresses by
regular expression, and if necessary, it groups member
addresses first alphabetically, and then by chunks.
Mass Subscription and Mass Removal now support file upload,
with one address per line.
o Hyperlinks from the logos in the footers have been removed.
The sponsors got too much "unsubscribe me!" spam from
desperate users of Mailman at other sites.
o New buttons on the digest admin page to send a digest
immediately (if it's non-empty), to start a new digest
volume with the next digest, and to select the interval with
which to automatically start a new digest volume (yearly,
monthly, quarterly, weekly, daily).
DEFAULT_DIGEST_VOLUME_FREQUENCY is a new configuration
variable, initially set to give a new digest volume monthly.
o Through-the-web list creation and removal, using a separate
site-wide authentication role called the "list creator and
destroyer" or simply "list creator". If the configuration
variable OWNERS_CAN_DELETE_THEIR_OWN_LISTS is set to 1 (by
default, it's 0), then list admins can delete their own
lists.
This feature requires an adaptor for the particular MTA
you're using. An adaptor for Postfix is included, as is a
dumb adaptor that just emails mailman@yoursite with the
necessary Sendmail style /etc/alias file changes. Some MTAs
like Exim can be configured to automatically recognize new
lists. The adaptor is selected via the MTA option in
mm_cfg.py.
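In mm_cfg.py this looks something like the following (the variable names are the ones mentioned above; the values and comments are illustrative):

```python
# mm_cfg.py fragment -- list creation/removal settings described above.

# Select the MTA adaptor.  'Postfix' ships with Mailman; a dumb
# adaptor that just emails the necessary alias-file changes also
# exists, and MTAs like Exim that auto-recognize new lists need no
# adaptor at all.
MTA = 'Postfix'

# 0 by default: only the site-wide "list creator" role may remove
# lists.  Set to 1 to let list admins delete their own lists.
OWNERS_CAN_DELETE_THEIR_OWN_LISTS = 0
```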
- Email UI
o In email commands, "join" is a synonym for
"subscribe". "remove" and "leave" are synonyms for
"unsubscribe". New robot addresses are support to make
subscribing and unsubscribing much easier:
mylist-join@mysite
mylist-leave@mysite
o Confirmation messages have a shortened Subject: header,
containing just the word "confirm" and the confirmation
cookie. This should help for MUAs that like to wrap long
Subject: lines, messing up confirmation.
o Mailman now recognizes an Urgent: header, which, if it
contains the list moderator or list administrator password,
forces the message to be delivered immediately to all
members (i.e. both regular and digest members). The message
is also placed in the digest. If the password is incorrect,
the message will be bounced back to the sender.
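The decision logic for the Urgent: header reads roughly like this (a hypothetical helper, not Mailman's actual implementation):

```python
def urgent_delivery(headers, moderator_pw, admin_pw):
    # Sketch of the Urgent: handling described above.  Returns True
    # when the message should go immediately to all members (regular
    # and digest alike; it is also placed in the digest), False for
    # normal delivery, and raises when the password is wrong, which
    # corresponds to bouncing the message back to the sender.
    password = headers.get('Urgent')
    if password is None:
        return False
    if password in (moderator_pw, admin_pw):
        return True
    raise ValueError('bad Urgent: password; bounce to sender')
```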
- Performance
o Refinements to the new qrunner subsystem which preserves
FIFO order of messages.
o The qrunner is no longer started from cron. It is started
by a Un*x init-style script called bin/mailmanctl (see
below). cron/qrunner has been removed.
- Command line scripts
o bin/mailmanctl script added, which is used to start, stop,
and restart the qrunner daemon.
o bin/qrunner script added which allows a single sub-qrunner
to run once through its processing loop.
o bin/change_pw script added (eases mass changing of list
passwords).
o bin/update grows a -f switch to force an update.
o bin/newlang renamed to bin/addlang; bin/rmlang removed.
o bin/mmsitepass has grown a -c option to set the list
creator's password. The site-wide `create' web page is
linked to from the admin overview page.
o bin/newlist's -o option is removed. This script also grows
a way of spelling the creation of a list in a specific
virtual domain.
o The `auto' script has been removed.
o bin/dumpdb has grown -m/--marshal and -p/--pickle options.
o bin/list_admins can be used to print the owners of a mailing list.
o bin/genaliases regenerates from scratch the aliases and
aliases.db file for the Postfix MTA.
- Archiver
o New archiver date clobbering option, which allows dates to
be clobbered only if they are outrageously out-of-date
(default setting is 15 days on either side of received
timestamp). New configuration variables:
ARCHIVER_CLOBBER_DATE_POLICY
ARCHIVER_ALLOWABLE_SANE_DATE_SKEW
The archived copy of messages grows an X-List-Received-Date:
header indicating the time the message was received by
Mailman.
o PRIVATE_ARCHIVE_URL configuration variable is removed (this
can be calculated on the fly, and removing it actually makes
site configuration easier).
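The date-clobbering policy above amounts to something like this (a sketch: the configuration variables named are the real ones, but the logic shown is an assumption):

```python
from datetime import datetime, timedelta

# Mirrors ARCHIVER_ALLOWABLE_SANE_DATE_SKEW's default of 15 days on
# either side of the received timestamp.
ALLOWABLE_SKEW = timedelta(days=15)

def archive_date(msg_date, received_date, skew=ALLOWABLE_SKEW):
    # Keep the message's own Date: unless it is more than `skew` away
    # (either side) from when Mailman received it; in that case use
    # the received time instead.  The received time is also recorded
    # in the X-List-Received-Date: header of the archived copy.
    if abs(msg_date - received_date) > skew:
        return received_date
    return msg_date
```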
- Miscellaneous
o Several new README's have been added.
o Most syslog entries for the qrunner have been redirected to
logs/error.
o On SIGHUP, qrunner will re-open all its log files and
restart all child processes. See "bin/mailmanctl restart".
- Patches and bug fixes
o SF patches and bug fixes applied: 420396, 424389, 227694,
426002, 401372 (partial), 401452.
o Fixes in 2.0.5 ported forward:
Fix a lock stagnation problem that can result when the
user hits the `stop' button on their browser during a
write operation that can take a long time (e.g. hitting
the membership management admin page).
o Fixes in 2.0.4 ported forward:
Python 2.1 compatibility release. There were a few
questionable constructs and uses of deprecated modules
that caused annoying warnings when used with Python 2.1.
This release quiets those warnings.
o Fixes in 2.0.3 ported forward:
Bug fix release. There was a small typo in 2.0.2 in
ListAdmin.py for approving an already subscribed member
(thanks Thomas!). Also, an update to the OpenWall
security workaround (contrib/securelinux_fix.py) was
included. Thanks to Marc Merlin.
On Fri, Jul 13, 2001 at 04:15:20PM -0400, Barry A. Warsaw wrote:
This is the official announcement for Mailman 2.1 alpha 2. [...]
To view the on-line documentation, see
http://www.list.org/MM21/index.html
2.1 alpha 2 (11-Jul-2001)
[ lots of extremely cool stuff deleted ]
o Subscription confirmations can now be performed via email or via URL. When a subscription is received, a unique (sha) confirm URL is generated in the confirmation message. Simply visiting this URL completes the subscription process.
This violates the HTTP protocol: visiting a URL (i.e., an HTTP GET) should not have side effects like confirming a subscription.
A few months ago I sent mail to mailman-developers with a suggestion for how to implement this in a compliant way without hindering usability:
http://mail.python.org/pipermail/mailman-developers/2001-January/003579.html
mid:20010103022646.A31881@impressive.net
I realize that a number of other sites misuse GET this way, but I think most of the large ones (e.g., Yahoo, online brokerages and banks, etc.) get it right, and I think Mailman should too.
Further reading on GET vs POST:
Forms: GET and POST
http://www.w3.org/Provider/Style/Input
Axioms of Web architecture: Identity, State and GET
http://www.w3.org/DesignIssues/Axioms#state
HTTP 1.1 section 9.1: Safe and Idempotent Methods
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1
HTML 4.01 section 17.13: Form submission
http://www.w3.org/TR/html4/interact/forms.html#h-17.13
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
On 7/13/01 1:43 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
This violates the HTTP protocol: visiting a URL (i.e., an HTTP GET) should not have side effects like confirming a subscription.
My first reaction was "say what?" but I went and read the w3 stuff before responding...
I realize that a number of other sites misuse GET this way, but I think most of the large ones (e.g., Yahoo, online brokerages and banks, etc.) get it right, and I think Mailman should too.
Because, frankly, I think w3 is wrong IN THIS CASE. That may make sense in a general case, especially on an HTTP only situation, but in this case, where the URL is being carried in e-mail to confirm an action the user has (presumably) started, I think they're wrong. As long as the e-mail clearly delineates the action being taken, do what's easy for the user; and the user isn't going to want to go clicking through multiple links just to allow us to abide to the HTTP stuff.
But the key is this is a finalization of a distributed transaction, with e-mail distributing the token. Under other circumstances, I see W3's logic. Here, however, using a URL to bring up a page that says "click here to confirm" is only going to piss off Joe User, not make his life better.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
Some days you're the dog, some days you're the hydrant.
"CVR" == Chuq Von Rospach <chuqui@plaidworks.com> writes:
>> I realize that a number of other sites misuse GET this way, but
>> I think most of the large ones (e.g., Yahoo, online brokerages
>> and banks, etc.) get it right, and I think Mailman should too.
CVR> Because, frankly, I think w3 is wrong IN THIS CASE. That may
CVR> make sense in a general case, especially on an HTTP only
CVR> situation, but in this case, where the URL is being carried
CVR> in e-mail to confirm an action the user has (presumably)
CVR> started, I think they're wrong. As long as the e-mail clearly
CVR> delineates the action being taken, do what's easy for the
CVR> user; and the user isn't going to want to go clicking through
CVR> multiple links just to allow us to abide to the HTTP stuff.
CVR> But the key is this is a finalization of a distributed
CVR> transaction, with e-mail distributing the token. Under other
CVR> circumstances, I see W3's logic. Here, however, using a URL
CVR> to bring up a page that says "click here to confirm" is only
CVR> going to piss off Joe User, not make his life better.
I agree with Chuq. The user isn't going to understand the distinction and is just going to be annoyed by having to do what, to them, seems like an extra unnecessary step.
-Barry
On Sat, Jul 14, 2001 at 12:55:04PM -0400, Barry A. Warsaw wrote:
"CVR" == Chuq Von Rospach <chuqui@plaidworks.com> writes:
>> I realize that a number of other sites misuse GET this way, but
>> I think most of the large ones (e.g., Yahoo, online brokerages
>> and banks, etc.) get it right, and I think Mailman should too.
[ ... ]
CVR> But the key is this is a finalization of a distributed
CVR> transaction, with e-mail distributing the token. Under other
CVR> circumstances, I see W3's logic. Here, however, using a URL
CVR> to bring up a page that says "click here to confirm" is only
CVR> going to piss off Joe User, not make his life better.
I agree with Chuq. The user isn't going to understand the distinction and is just going to be annoyed by having to do what, to them, seems like an extra unnecessary step.
After some careful consideration, as well as a chat with a few clueful colleagues, I have to disagree with you, Barry. The trick here is 'managing the expectations'. Having the message say something like
To confirm or remove your subscription request, visit <URL>
And then have that URL bring up a nice overview of what list you are subscribing to, the options you chose (regular-digest/mime-digest/etc), what email address you entered, and 'remove' and 'confirm' buttons. Frankly, it's always bothered me that you can't unconfirm a mailinglist subscription, let alone not being able to see what you are subscribing to ;P
Extra credit if you make the URL (or something similar) also work if a subscription is held for approval, but without a 'confirm' button -- just a 'remove' one. Actually, the same kind of interface for a held message would be great, too :)
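The page Thomas describes also satisfies Gerald's objection: the mailed URL is a plain GET that only *displays* the pending request, and the state change happens via a POST from one of the buttons. A minimal sketch of such a page (illustrative HTML and action URL, not Mailman's actual templates):

```python
def confirm_page(cookie, listname, address, delivery='regular'):
    # Render an overview of the pending subscription with 'Confirm'
    # and 'Remove' buttons.  Only the POST submission changes state;
    # fetching this page is side-effect free, so prefetchers and
    # cache robots can't accidentally confirm anything.
    return '''<html><body>
<p>Pending subscription to %(listname)s for %(address)s
(%(delivery)s delivery).</p>
<form method="POST" action="/mailman/confirm">
<input type="hidden" name="cookie" value="%(cookie)s">
<input type="submit" name="confirm" value="Confirm">
<input type="submit" name="remove" value="Remove">
</form>
</body></html>''' % {'cookie': cookie, 'listname': listname,
                     'address': address, 'delivery': delivery}
```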
-- Thomas Wouters <thomas@xs4all.net>
Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
On Tue, Jul 17, 2001 at 10:34:22AM +0200, Thomas Wouters wrote:
After some careful consideration, as well as a chat with a few clueful colleagues, I have to disagree with you, Barry. The trick here is 'managing the expectations'. Having the message say something like
To confirm or remove your subscription request, visit <URL>
And then have that URL bring up a nice overview of what list you are subscribing to, the options you chose (regular-digest/mime-digest/etc), what email address you entered, and 'remove' and 'confirm' buttons. Frankly, it's always bothered me that you can't unconfirm a mailinglist subscription, let alone not being able to see what you are subscribing to ;P
Extra credit if you make the URL (or something similar) also work if a subscription is held for approval, but without a 'confirm' button -- just a 'remove' one. Actually, the same kind of interface for a held message would be great, too :)
I agree strongly that this is the _RIGHT_ way to do it, and it complies with W3C standards as well. Opening the URL should not just go ahead and do something, and many people may be angry if it does. It should tell the user what it's offering to do, and then allow the user to make an informed decision about whether to do it or cancel it.
After all, it's only one extra click. Keep in mind that there are foolish people out there who use mail clients from evil software monopolists that can be configured to automatically preload URLs contained in mail, and some of them do so configure them. Then everyone on the list has to deal with, "How did I get on this list? Stop sending me mail!"
-- Linux Now! ..........Because friends don't let friends use Microsoft. phil stracchino -- the renaissance man -- mystic zen biker geek alaric@babcom.com halmayne@sourceforge.net 2000 CBR929RR, 1991 VFR750F3 (foully murdered), 1986 VF500F (sold)
On Fri, Jul 13, 2001 at 02:03:51PM -0700, Chuq Von Rospach wrote:
On 7/13/01 1:43 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
This violates the HTTP protocol: visiting a URL (i.e., an HTTP GET) should not have side effects like confirming a subscription.
My first reaction was "say what?" but I went and read the w3 stuff before responding...
Note that "the w3 stuff" includes the standards-track RFC 2616 (HTTP/1.1), the HTML 4.01 specification, and supplementary notes by the creator of the HTTP protocol (incl its GET and POST methods); they're not just a few random pages on w3.org :)
I realize that a number of other sites misuse GET this way, but I think most of the large ones (e.g., Yahoo, online brokerages and banks, etc.) get it right, and I think Mailman should too.
Because, frankly, I think w3 is wrong IN THIS CASE. That may make sense in a general case, especially on an HTTP only situation, but in this case, where the URL is being carried in e-mail to confirm an action the user has (presumably) started, I think they're wrong.
A couple of the references I gave in my previous message included this specific example (confirming a subscription) as a case where POST should be used instead of GET, so I think it is quite clear that the specs do apply in this specific case.
Regarding "I think they're wrong", I respect your opinion, but the HTTP spec is the result of a decade of work on and experience with HTTP by full-time web protocol geeks...
As long as the e-mail clearly delineates the action being taken, do what's easy for the user; and the user isn't going to want to go clicking through multiple links just to allow us to abide to the HTTP stuff.
What if I have a smart system on my desktop PC that is configured to prefetch any URLs it sees in my incoming email, so there is zero latency when I go to read them later? (or so I can read them while offline, on a train or plane or something)
That would allow anyone in the world to sign me up for any mailman-backed mailing list, whether or not I even see the email confirmation requests. And that would be Mailman's fault, for misusing HTTP GETs.
But the key is this is a finalization of a distributed transaction, with e-mail distributing the token. Under other circumstances, I see W3's logic. Here, however, using a URL to bring up a page that says "click here to confirm" is only going to piss off Joe User, not make his life better.
I disagree: a large part of the reason for the distinction between GET and POST is a social/usability one: people should become accustomed to following hypertext links and clicking on URLs without any action being taken. (personally, I cut and paste URLs into my browser all the time without checking carefully what they appear to be first, so when I actually want to read what's there, I don't have to wait for the browser. Of course, I only do that because I don't have that prefetching thing set up... yet.)
btw, part of the reason I care about this is that I work for W3C and am currently evaluating mailman for use on lists.w3.org (to replace smartlist) and we're pretty fussy about complying with our own specs, for obvious reasons. But I'd be making this argument whether or not that were the case, since it's clearly the Right Thing to do imho...
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
On 7/14/01 4:32 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Regarding "I think they're wrong", I respect your opinion, but the HTTP spec is the result of a decade of work on and experience with HTTP by full-time web protocol geeks...
But web geeks are not necessarily user-interface or user-experience geeks. You can perfectly correctly build a nice system that's technically correct, but not right for the users.
What if I have a smart system on my desktop PC that is configured to prefetch any URLs it sees in my incoming email,
Given the number of viruses, spam with URLs and other garbage out there, I'd say you're foolish (bordering on stupid), but that's beside the point. It is an interesting issue, one I can't toss out easily; but I'm not convinced, either. I think it's something that needs to be hashed over, perhaps prototyped both ways so we can see the best way to tweak it.
Barry, what do you think about this instance? As the net moves more towards wireless, PDA, mobile-phone, stuff, could we be setting ourselves up for a later problem by ignoring this pre-cache issue Gerald's raised?
Gerald, does W3 have sample pages for "right" and "wrong" that can be looked at, or are we going to have to develop some? The more I think about this, the more I think it's case where we ought to see if we can develop a prototype that follows the standards that we like, rather than toss it out at first glance. But if we can't come up with a system that we agree is 'easy enough', then we should go to what we're currently thinking.
That would allow anyone in the world to sign me up for any mailman-backed mailing list, whether or not I even see the email confirmation requests. And that would be Mailman's fault, for misusing HTTP GETs.
I disagree with this -- since as you say, any number of places already misuse GET, that usage is fairly common. A user who sets themselves up for it should know (or be warned by their mail client) that it has the possibility to trigger events. I'd say it's more a client issue than a Mailman issue.
And, thinking about it, since GET *can* do this, it's probably wrong for W3 to push for it not to be used that way, because if things like your pre-caching system come into common use, the dark side of the net will take advantage of it, the way early virus writers took advantage of mail clients with auto-exec of .EXE's being on by default. So aren't you setting yourself up for problems by having a technology that can do this, even if you deprecate it, because it sets a user expectation that's going to be broken by those wanting to take advantage of it? I've got a bad feeling about this -- your example seems to set yourself up for abuse by those looking for ways to abuse you, and that's a bigger issue than Mailman using it -- because if all of the 'white side' programs cooperate, it just encourages creation of things (like the pre-caching) that the dark side will take advantage of.
As long as GET is capable of being used this way, I'd be very careful about creating stuff that depends on "we don't want it used this way, so it won't be" -- it seems to open up avenues for attack.
Which is not a reason for mailman to ignore the standard -- but a bigger issue about whether this standard creates a perception that could come back and bite people. If people start creating services (like that pre-cache) that get tripped up by this, even if the white hats follow your advice, you're still at risk from the black hats. Isn't it better to acknowledge the capability and not create services that depend on "well behaved" systems?
Of course, I only do that because I don't have that prefetching thing set up... yet.)
At this point, I'd never turn on pre-fetching, since its safety depends entirely on voluntary cooperation, and you aren't in a position to police until after the fact. That's a Bad Thing in a big way.
btw, part of the reason I care about this is that I work for W3C and am currently evaluating mailman for use on lists.w3.org (to replace smartlist) and we're pretty fussy about complying with our own specs, for obvious reasons.
As you should be.
But I'd be making this argument whether or not that were the case, since it's clearly the Right Thing to do imho...
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here? Because the service you propose is unsafe unless you can guarantee everyone you talk to is compliant, and we know how likely that's going to be. That, to me, is a much bigger issue than whether or not Mailman complies, and in fact, I could make an argument that the standard isn't acceptable if it's going to be a basis for services that can cause harm but requires voluntary acceptance on the server side. By the time you figure out the server isn't complying, they've burnt down the barn and run off with the horse. That's bad.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
Someday, we'll look back on this, laugh nervously and change the subject.
On Sat, Jul 14, 2001 at 08:49:34PM -0700, Chuq Von Rospach wrote:
What if I have a smart system on my desktop PC that is configured to prefetch any URLs it sees in my incoming email,
Given the number of viruses, spam with URLs and other garbage out there, I'd say you're foolish (bordering on stupid), but that's beside the point. It is an interesting issue, one I can't toss out easily; but I'm not convinced, either. I think it's something that needs to be hashed over, perhaps prototyped both ways so we can see the best way to tweak it.
Barry, what do you think about this instance? As the net moves more towards wireless, PDA, mobile-phone, stuff, could we be setting ourselves up for a later problem by ignoring this pre-cache issue Gerald's raised?
What *I* think is that it's a special case, and any such pre-fetch system ought to, by default, *not* pre-fetch anything with GET parameters in it.
*All* GETs have side effects by definition: you get something different depending on what the parameters are.
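Jay's "skip anything with GET parameters" default could be as simple as the following (a heuristic sketch; as the thread notes, action URLs without query strings would still slip through):

```python
from urllib.parse import urlparse

def safe_to_prefetch(url):
    # Skip any URL carrying GET parameters, since those are the ones
    # most likely to trigger a server-side action.  A heuristic only:
    # it errs toward not prefetching anything non-HTTP either.
    parts = urlparse(url)
    return parts.scheme in ('http', 'https') and not parts.query
```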
That would allow anyone in the world to sign me up for any mailman-backed mailing list, whether or not I even see the email confirmation requests. And that would be Mailman's fault, for misusing HTTP GETs.
I disagree with this -- since as you say, any number of places already misuse GET, that usage is fairly common. A user who sets themselves up for it should know (or be warned by their mail client) that it has the possibility to trigger events. I'd say it's more a client issue than a Mailman issue.
Concur. And I base my opinion on 15 years of systems design experience, FWIW.
And, thinking about it, since GET *can* do this, it's probably wrong for W3 to push for it not to be used that way, because if things like your pre-caching system come into common use, the dark side of the net will take advantage of it, the way early virus writers took advantage of mail clients with auto-exec of .EXE's being on by default. So aren't you setting yourself up for problems by having a technology that can do this, even if you deprecate it, because it sets a user expectation that's going to be broken by those wanting to take advantage of it? I've got a bad feeling about this -- your example seems to set yourself up for abuse by those looking for ways to abuse you, and that's a bigger issue than Mailman using it -- because if all of the 'white side' programs cooperate, it just encourages creation of things (like the pre-caching) that the dark side will take advantage of.
Well put, young pilot.
Of course, I only do that because I don't have that prefetching thing set up... yet.)
At this point, I'd never turn on pre-fetching, since its safety depends entirely on voluntary cooperation, and you aren't in a position to police until after the fact. That's a Bad Thing in a big way.
Well, yeah, but you don't have a palmtop, either, Chuq, right? :-)
But I'd be making this argument whether or not that were the case, since it's clearly the Right Thing to do imho...
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here? Because the service you propose is unsafe unless you can guarantee everyone you talk to is compliant, and we know how likely that's going to be. That, to me, is a much bigger issue than whether or not Mailman complies, and in fact, I could make an argument that the standard isn't acceptable if it's going to be a basis for services that can cause harm but requires voluntary acceptance on the server side. By the time you figure out the server isn't complying, they've burnt down the barn and run off with the horse. That's bad.
I can't find a thing to argue with here; let's see what he comes up with...
Cheers, -- jra
Jay R. Ashworth jra@baylink.com Member of the Technical Staff Baylink RFC 2100 The Suncoast Freenet The Things I Think Tampa Bay, Florida http://baylink.pitas.com +1 727 804 5015
OS X: Because making Unix user-friendly was easier than debugging Windows -- Simon Slavin in a.f.c
On 7/15/01 8:16 AM, "Jay R. Ashworth" <jra@baylink.com> wrote:
What *I* think is that it's a special case, and any such pre-fetch system ought to, by default, *not* pre-fetch anything with GET parameters in it.
That was what I was thinking, too -- no matter what W3 says, building a tool that pre-fetches those by default is like Microsoft defaulting .EXE execution to yes, or sendmail defaulting to open relay like it did in 8.8 and before. Those are situations just waiting for someone to take advantage of it, and the whitehats won't be the someones.
Well put, young pilot.
Young? Young? Where's my walker? (as an irrelevant side note, Apple finally hired me an assistant, who was -- literally -- not potty trained when I used my first Unix system. Good kid. Well, man. He's no kid... But he's going to get tired of the "In the Good Old Days.." jokes...)
At this point, I'd never turn on pre-fetching, since its safety depends entirely on voluntary cooperation, and you aren't in a position to police until after the fact. That's a Bad Thing in a big way.
Well, yeah, but you don't have a palmtop, either, Chuq, right? :-)
I have a Handspring and my primary machine is a wireless laptop (a Titanium!). Do I need a palmtop?
Of course, none of this deals with whether Mailman should use GET or POST. That GET is inherently unsafe doesn't mean that it's therefore okay for Mailman to use it -- I still think we need to look at this further. It simply means, IMHO, that if we choose not to follow the W3 standard, it's fairly safe to do so.
And, editorial comment time, the subject line is a classic example of why subject line topic flags are the second worst damn thing you can do to a mailing list -- after coercing reply-to. How in the bloody heck is someone supposed to look at THAT and figure out whether they want to read the message? And my user studies have shown that subject line is the key determinant on whether a list message gets read.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com> [<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>] Yes, yes, I've finally finished my home page. Lucky you.
Schroedinger: We can never really be sure which side of the road the chicken is on. It's all a matter of chance. Like a game of dice.
Einstein, refuting Schroedinger: God does not play dice with chickens.
Heisenberg: We can determine how fast the chicken travelled, or where it ended up, but we cannot determine why it did so.
On Sat, Jul 14, 2001 at 08:49:34PM -0700, Chuq Von Rospach wrote:
On 7/14/01 4:32 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Regarding "I think they're wrong", I respect your opinion, but the HTTP spec is the result of a decade of work on and experience with HTTP by full-time web protocol geeks...
But web geeks are not necessarily user-interface or user-experience geeks. You can perfectly correctly build a nice system that's technically correct, but not right for the users.
Sure... I agree that it's possible for a standard to be irrelevant or just not meet the needs of the users for which it was written.
But I don't think that is the case here: there really is a good reason for this distinction between get and post, and it is widely followed by popular sites *because* of the usability issues. (I would like to back up my "widely followed" claim by doing a survey of various popular sites, but don't have time today, and probably won't until Friday at the earliest :( Anyway, I am fairly confident that is the case.)
What if I have a smart system on my desktop PC that is configured to prefetch any URLs it sees in my incoming email,
Given the number of viruses, spam with URLs and other garbage out there, I'd say you're foolish (bordering on stupid), but that's beside the point.
Either that, or I value my time more than that of some stupid machine's :)
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
It is an interesting issue, one I can't toss out easily; but I'm not convinced, either. I think it's something that needs to be hashed over, perhaps prototyped both ways so we can see the best way to tweak it.
Barry, what do you think about this instance? As the net moves more towards wireless, PDA, and mobile-phone stuff, could we be setting ourselves up for a later problem by ignoring this pre-cache issue Gerald's raised?
Gerald, does W3 have sample pages for "right" and "wrong" that can be looked at, or are we going to have to develop some? The more I think about this, the more I think it's a case where we ought to see if we can develop a prototype that follows the standards that we like, rather than toss it out at first glance. But if we can't come up with a system that we agree is 'easy enough', then we should go to what we're currently thinking.
I included pointers to all the stuff I could think of in my first message in this thread, including a proposed implementation for mailman: http://mail.python.org/pipermail/mailman-developers/2001-January/003579.html
If you think the docs on this subject at W3C are lacking, by all means let me know.
Somewhat related, W3C recently published a note called "Common User Agent Problems" <http://www.w3.org/TR/cuap> and it was received quite well by the web community ("we need more of that kind of thing", etc.) I think there is a plan to write a similar one targeted towards site administrators, pointing out common mistakes and raising awareness about little-known but important RFC/spec details like this.
That would allow anyone in the world to sign me up for any mailman-backed mailing list, whether or not I even see the email confirmation requests. And that would be Mailman's fault, for misusing HTTP GETs.
I disagree with this -- since, as you say, any number of places already misuse GET, that usage is fairly common. A user who sets themselves up for it should know (or be warned by their mail client) that it has the possibility to trigger events. I'd say it's more a client issue than a Mailman issue.
By "Mailman's fault" I meant that if mailman did this, it would be the part of the equation causing problems by not abiding by the HTTP spec. But this prefetching thing is just an example; the main point is that the protocol has this stuff built in for a reason, and there may be hundreds of other applications (current and future) that need it to be there.
And, thinking about it, since GET *can* do this, it's probably wrong for W3 to push for it not to be used that way, because if things like your pre-caching system come into common use, the dark side of the net will take advantage of it, the way early virus writers took advantage of mail clients with auto-exec of .EXEs turned on by default. So aren't you setting yourself up for problems by having a technology that can do this, even if you deprecate it, because it sets a user expectation that's going to be broken by those wanting to take advantage of it? I've got a bad feeling about this -- your example seems to set you up for abuse by those looking for ways to abuse you, and that's a bigger issue than Mailman using it -- because if all of the 'white side' programs cooperate, it just encourages creation of things (like the pre-caching) that the dark side will take advantage of.
I'm not worried about abuse, myself; I would have set this up on my desktop system already if I had more time to hack it up...
But I'd be making this argument whether or not that were the case, since it's clearly the Right Thing to do imho...
And the more I think about it, the more it's an interesting point -- but on more than one level. Has W3C considered the implications of defining a standard that depends on voluntary acceptance here?
Which Internet standards *don't* depend on voluntary acceptance?
The HTTP spec (for example) just provides information on how HTTP is meant to work. Then people who want to do the right thing have a place to look up what the Right Thing is, and if there are ever interoperability problems between different pieces of software, people can check the spec and say "hey, look, you're doing this wrong; see rfc nnnn, sec m.m; please fix." (and if that doesn't work, there's always http://www.rfc-ignorant.org/ ;)
Because the service you propose is unsafe unless you can guarantee everyone you talk to is compliant, and we know how likely that's going to be.
I disagree that it's unsafe; I don't especially want a bunch of spam in my http cache, but don't really care about it, either.
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
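[Editor's note: the two-step flow Gerald proposes -- a safe, idempotent GET that merely renders a confirmation form, and a POST that actually commits the subscription -- can be sketched in a few lines of Python. The names here (confirm_page, do_confirm, PENDING) are illustrative, not Mailman's actual API.]

```python
# A sketch of the GET/POST split under discussion. Hypothetical names,
# not Mailman's real code: PENDING maps emailed confirmation tokens to
# the addresses awaiting confirmation.

PENDING = {"abc123": "user@example.com"}
SUBSCRIBED = set()

def confirm_page(token):
    """GET handler: safe and idempotent -- renders a form, changes nothing.

    A prefetching proxy or mail client that fetches the emailed URL only
    ever sees this page; no subscription happens.
    """
    if token not in PENDING:
        return "Unknown or expired token."
    return ('<form method="POST" action="/confirm">'
            '<input type="hidden" name="token" value="%s">'
            '<input type="submit" value="Confirm subscription">'
            '</form>' % token)

def do_confirm(token):
    """POST handler: the commitment -- state actually changes here."""
    addr = PENDING.pop(token, None)
    if addr is None:
        return "Unknown or expired token."
    SUBSCRIBED.add(addr)
    return "Subscribed %s." % addr
```

The cost Chuq objects to is visible here too: the user gets one extra click (the submit button) in exchange for GET staying side-effect-free.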
On 7/16/01 5:38 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
You're speaking both sides of the argument, Gerald, because this whole discussion started with "don't do it this way, or you could cause me to automatically be subscribed to a mailing list". Yet you're saying it's okay to fetch it, because evidently if you fetch it, nothing bad will happen.
So is it a good thing or a bad thing? You seem to be claiming both at the same time.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com>
[<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>]
Yes, yes, I've finally finished my home page. Lucky you.
I recommend the handyman's secret weapon: duct tape.
On Mon, Jul 16, 2001 at 05:47:24PM -0700, Chuq Von Rospach wrote:
On 7/16/01 5:38 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
Fetching a URL into my local http cache doesn't cause a virus to be executed or anything else bad to happen, and I wouldn't use software where that kind of thing would be possible anyway.
You're speaking both sides of the argument, Gerald, because this whole discussion started with "don't do it this way, or you could cause me to automatically be subscribed to a mailing list". Yet you're saying it's okay to fetch it, because evidently if you fetch it, nothing bad will happen.
So is it a good thing or a bad thing? You seem to be claiming both at the same time.
I guess I should have written "or anything else really bad to happen"; I consider viruses and autoexecing executables to be much more dangerous than the occasional errant mailing list subscription.
I am prepared to deal with a few bogus subscriptions here and there (of course, I'll bug each of them to get fixed as I encounter them), but it would be really bad for mailman-backed lists to operate this way, especially as mailman continues to take over the world...
But once again, this prefetching business is just an example, please don't focus on that exclusively: what I am really asking is simply for Mailman to comply with the HTTP specs.
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
I have a couple of questions and comments, and then I /really/ need to get some sleep, so I'll follow up with more tomorrow.
If state changing GETs break the standards, then why does e.g. Apache by default allow you to GET a cgi program? Apache is the most common web server (certainly on Mailman-friendly OSes) so I would think that it should adhere to the specs pretty closely.
Aren't the majority of cgi programs of a state-changing nature? Sure, you've got your odd search interface, but even a script like Mailman's private.py changes state: you get authenticated and a cookie gets dropped, and now your interactions are governed by a change in state.
Wouldn't it therefore make sense for Apache to in general disallow GETs to programs by default, with some enabling technique to allow specific state-neutral programs to be GETted?
I'll also mention that it seems to me that strict adherence to this rule would be pretty harmful to a platform like Zope, where urls are really encoded object access and execution commands (like RPC via urls).
sleepi-ly y'rs, -Barry
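[Editor's note: since Apache can't know which CGI scripts are state-neutral, the enforcement Barry asks about has to live in the script itself. A minimal sketch of such a per-script guard, using the standard CGI REQUEST_METHOD environment variable -- the function name require_post is made up for illustration:]

```python
import os

def require_post():
    """Return True if this CGI request is allowed to change state.

    A state-changing script calls this first and bails out on GET/HEAD
    with a 405, emitting CGI-style response headers. This is the
    per-script version of "disallow GETs to programs": the web server
    can't tell a search form from a subscribe button, so the script
    has to enforce the distinction.
    """
    method = os.environ.get("REQUEST_METHOD", "GET").upper()
    if method == "POST":
        return True
    print("Status: 405 Method Not Allowed")
    print("Allow: POST")
    print("Content-Type: text/plain")
    print()
    print("This operation changes state; please submit the form (POST).")
    return False
```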
On Tue, 17 Jul 2001, Barry A. Warsaw wrote:
If state changing GETs break the standards, then why does e.g. Apache by default allow you to GET a cgi program?
A CGI program that has no side-effects and simply dynamically generates content wouldn't be a violation.
Wouldn't it therefore make sense for Apache to in general disallow GETs to programs by default, with some enabling technique to allow specific state-neutral programs to be GETted?
No, not to me, anyway.
I'll also mention that it seems to me that strict adherence to this rule would be pretty harmful to a platform like Zope, where urls are really encoded object access and execution commands (like RPC via urls).
Sounds like a bad choice.
ROGER B.A. KLORESE rogerk@QueerNet.ORG PO Box 14309 San Francisco, CA 94114 "Go without hate. But not without rage. Heal the world." -- Paul Monette
(Chuq has suggested that we keep this thread on -developers, so this will be my last post to -users on the subject for now, I just wanted to respond to this here in case anyone else was curious about this stuff.)
On Tue, Jul 17, 2001 at 12:16:06AM -0400, Barry A. Warsaw wrote:
I have a couple of questions and comments, and then I /really/ need to get some sleep, so I'll follow up with more tomorrow.
If state changing GETs break the standards, then why does e.g. Apache by default allow you to GET a cgi program? Apache is the most common web server (certainly on Mailman-friendly OSes) so I would think that it should adhere to the specs pretty closely.
Aren't the majority of cgi programs of a state-changing nature?
I don't think so; TimBL addresses this in his writeup:
Forms: GET and POST
There is a very important distinction in society and in software, and
certainly on the Web, between reading and writing; between having no
effect and making changes; between observing and making a commitment.
This is fundamental in the Web and your web site must respect it.
Currently the line is often fuzzily drawn, which is very bad for many
reasons.
Form can work in two ways corresponding to this distinction.
One way is to direct the user, like a link, to a new resource, but one
whose URI is constructed from the form's field values. This is
typically how a query form works. (It uses HTTP's GET method.) The
user makes no commitment. Once he or she has followed the link, he or
she can bookmark the result of the query. Following any link to that
same URI will perform the same query again. It is as though Web space
were populated by lots of virtual pages, one for the results of each
possible query to the server. There is no commitment by the user. The
operation can be undone simply by pressing the Back button on a
browser. The user can never be held responsible for anything which was
done using HTTP GET. If your website fills a shopping cart as a user
follows normal links, and sometimes users end up ordering too much or
too little as they use a web accelerator or a cache, then it is your
fault. You should have used the second way.
The second way a form can work is to take a command, or commitment,
from the user. This is done using HTTP POST or sometimes by sending an
message. "Submit this order", and "unsubscribe from this list" are
classic examples. It is really important that the user understands
when a commitment or change is being made and when it isn't.
Hopefully, clients will help by changing the cursor to a special one
when a commitment is about to be made. Such an operation, like sending
an email, or signing and mailing a paper document, cannot be undone.
It is socially quite different. Browsers and servers and proxies which
understand HTTP treat it quite differently. You should never confuse
these two types of interaction on your web site, either way. If you
do, you break the web, and the web will break your site.
-- http://www.w3.org/Provider/Style/Input
Sure, you've got your odd search interface, but even a script like Mailman's private.py changes state: you get authenticated and a cookie gets dropped, and now your interactions are governed by a change in state.
private.py uses POST, no?
un: gerald> grep -i '<form' /home/mailman/Mailman/Cgi/private.py
<FORM METHOD=POST ACTION="%(basepath)s/">
(from the 2.0.5 codebase)
I'll also mention that it seems to me that strict adherence to this rule would be pretty harmful to a platform like Zope, where urls are really encoded object access and execution commands (like RPC via urls).
I haven't studied Zope, so I don't know about that, sorry.
-- Gerald Oskoboiny <gerald@impressive.net> http://impressive.net/people/gerald/
Greetings Folks,
I currently run eight lists with Mailman.
Starting a couple of hours ago I could no longer get in to manage ONE of the lists. When I say get in, I mean that I can not access the Web interface for that particular list. The server simply churns and churns and churns but nothing ever comes up on the screen.
When I run list_lists on the command line all eight lists are shown. I can also run other commands related to the list from the command line. I just can not view the admin interface in my browser on this ONE list. I have tried it on multiple machines too.
I can get to all of the other lists. I have rebooted my server and I am not sure what else to try.
Any ideas?
Torresen Marine Diesel Direct wrote:
Starting a couple of hours ago I could no longer get in to manage ONE of the lists. When I say get in, I mean that I can not access the Web interface for that particular list. The server simply churns and churns and churns but nothing ever comes up on the screen.
That list has a stale lock file. Look in the /locks directory and clear out whatever's no longer needed.
-- W | I haven't lost my mind; it's backed up on tape somewhere.
Ashley M. Kirchner <mailto:ashley@pcraft.com> . 303.442.6410 x130
IT Director / SysAdmin / WebSmith . 800.441.3873 x130
Photo Craft Laboratories, Inc. . 3550 Arapahoe Ave. #6
http://www.pcraft.com . Boulder, CO 80303, U.S.A.
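[Editor's note: before deleting anything from the locks directory, it helps to see which lock files are actually stale. A small sketch -- the function name and the age threshold are arbitrary, and the locks path varies with your install prefix; inspect the output and make sure no qrunner or CGI process is still holding a lock before removing it:]

```python
import os
import time

def stale_locks(lockdir, max_age_hours=10):
    """Return lock files in `lockdir` whose mtime is older than the cutoff.

    Illustrative helper, not part of Mailman: it only *lists* candidates
    for removal; deleting them is left as a deliberate manual step.
    """
    cutoff = time.time() - max_age_hours * 3600
    stale = []
    for name in os.listdir(lockdir):
        path = os.path.join(lockdir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            stale.append(path)
    return sorted(stale)
```

Typical use: run it against your install's locks directory (e.g. the /home/mailman/locks path seen elsewhere in this thread), review the list, then remove the files for the stuck list by hand.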
I'd be happy with an admin-configurable option to either do traditional subscription confirmation or the http method.
At 02:03 PM 7/13/2001 -0700, Chuq Von Rospach wrote:
On 7/13/01 1:43 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
This violates the HTTP protocol: visiting a URL (i.e., an HTTP GET) should not have side effects like confirming a subscription.
My first reaction was "say what?" but I went and read the w3 stuff before responding...
I realize that a number of other sites misuse GET this way, but I think most of the large ones (e.g., Yahoo, online brokerages and banks, etc.) get it right, and I think Mailman should too.
Because, frankly, I think w3 is wrong IN THIS CASE. That may make sense in a general case, especially in an HTTP-only situation, but in this case, where the URL is being carried in e-mail to confirm an action the user has (presumably) started, I think they're wrong. As long as the e-mail clearly delineates the action being taken, do what's easy for the user; and the user isn't going to want to go clicking through multiple links just to allow us to abide by the HTTP stuff.
But the key is this is a finalization of a distributed transaction, with e-mail distributing the token. Under other circumstances, I see W3's logic. Here, however, using a URL to bring up a page that says "click here to confirm" is only going to piss off Joe User, not make his life better.
-- Chuq Von Rospach, Internet Gnome <http://www.chuqui.com>
[<chuqui@plaidworks.com> = <me@chuqui.com> = <chuq@apple.com>]
Yes, yes, I've finally finished my home page. Lucky you.
Some days you're the dog, some days you're the hydrant.
Mailman-Users maillist - Mailman-Users@python.org http://mail.python.org/mailman/listinfo/mailman-users
Forgive my ignorance on this one but I've got to ask:
I run mailman separately for each virtual domain I have by setting up a separate installation each time.
-----BEGIN Original Message-----
o The qrunner is no longer started from cron. It is started
by a Un*x init-style script called bin/mailmanctl (see
below). cron/qrunner has been removed.
o bin/mailmanctl script added, which is used to start, stop,
and restart the qrunner daemon.
-----END Original Message-----
Since it's no longer started from cron, will it be able to have multiple daemons running at the same time?
"CE" == Cal Evans <cal@calevans.com> writes:
CE> Forgive my ignorance on this one but I've got to ask:
CE> I run mailman separately for each virtual domain I have by
CE> setting up a separate installation each time.
CE> -----BEGIN Original Message-----
| o The qrunner is no longer started from cron. It is started
| by a Un*x init-style script called bin/mailmanctl (see
| below). cron/qrunner has been removed.
| o bin/mailmanctl script added, which is used to start, stop,
| and restart the qrunner daemon.
CE> -----END Original Message-----
CE> Since It's no longer started from the cron, will it be able to
CE> have multiple daemons running at the same time?
It should be. -Barry
participants (12)
- Ashley M. Kirchner
- barry@digicool.com
- barry@zope.com
- Cal Evans
- Chuq Von Rospach
- Forrest Aldrich
- Gerald Oskoboiny
- Jay R. Ashworth
- Phil Stracchino
- Roger B.A. Klorese
- Thomas Wouters
- Torresen Marine Diesel Direct