Re: [Mailman-Developers] Architecture for extra profile info
Whoa! Perhaps I don't understand oAuth. I thought that oAuth (and persona, kerberos, etc.) were protocols whereby one system (the provider) furnishes credentials for a second system (the client) to some third system (the consumer).
By configuration, the consumer trusts that the provider has verified the client's identity and furnished appropriate credentials. Thus, when the client presents credentials in an interaction with the consumer, the consumer provides services on the basis of the credentials.
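To make those three roles concrete, here is a toy sketch in Python (not a real OAuth implementation -- the class and method names are made up purely for illustration):

    import secrets

    # Toy illustration of the three roles (not a real OAuth implementation):
    # the provider issues a credential to the client, the client presents it
    # to the consumer, and the consumer checks it with the provider that it
    # has been configured to trust.
    class Provider:
        def __init__(self):
            self._issued = {}
        def issue_token(self, identity):       # provider -> client
            token = secrets.token_urlsafe(16)
            self._issued[token] = identity
            return token
        def introspect(self, token):           # consumer -> provider
            return self._issued.get(token)

    class Consumer:
        def __init__(self, provider):
            self.provider = provider           # trust established by configuration
        def serve(self, token):
            identity = self.provider.introspect(token)
            if identity is None:
                raise PermissionError("unknown credential")
            return "serving %s" % identity

    provider = Provider()
    token = provider.issue_token("client@example.com")  # client obtains a credential
    print(Consumer(provider).serve(token))               # client presents it to the consumer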
If we assume that we distribute the MM implementation to include more than the two (core and web UI) systems by having, for example, a user manager, there might be an argument for passing around such credentials.
However, the design that has been followed thus far does not have the client communicating directly with the consumer system. Instead, the web UI interacts as an agent for the client. In this model, we have implicitly trusted the agent to properly represent the client and also screen client requests in accordance with system policy. Thus, although we need some level of authentication of the agent, there is no need for third party credentials such as those implemented in oAuth.
A connection on "localhost" is a form of credential. Trusting that the OS design restricts access to the connection, and trusting the applications running on that host provides a level of identification and trust.
There is no reason why alternate channels cannot be substituted as long as a means of identification (such as shared secrets) is utilized. In those cases, the security of the communication channel and the trustworthiness of the agent system need to be considered. However, in a logical sense, the interaction is the same as one using the localhost channel.
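For instance, the agent-to-core call might look like this, assuming the core's REST interface listens only on localhost and is protected by a shared secret via Basic auth (the port, path and credentials below are illustrative; in practice they come from the site's configuration):

    import requests

    CORE_API = "http://localhost:8001/3.0"      # illustrative; localhost-only socket
    SHARED_SECRET = ("restadmin", "changeme")   # illustrative shared secret

    def core_get(path):
        """Agent (web UI) side: identify ourselves to the core with the shared
        secret; the localhost-only binding and OS access control do the rest."""
        resp = requests.get(CORE_API + path, auth=SHARED_SECRET, timeout=10)
        resp.raise_for_status()
        return resp.json()

    # e.g. core_get("/lists") to enumerate lists on behalf of the client.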
On Apr 18, 2013, at 4:27 AM, Florian Fuchs <flo.fuchs@gmail.com> wrote:
2013/4/18 Stephen J. Turnbull <stephen@xemacs.org>:
Florian Fuchs writes:
- It should implement an oAuth provider.
I don't see this. Mailman is an auth consumer. The only people Mailman can provide auth for are the site admins. Everybody else is more or less untrustworthy.
I can see that there are applications where it would be useful to have an auth provider bundled with Mailman, but I think implementing it is somebody else's job.
This could be used for API authentication and to log into Postorius/Hyperkitty
I think generic auth provider is overkill for these purposes, and a trap for anybody who thinks we know enough about crypto/security to do this stuff well.
I agree it's probably not easy. And, yes, maybe we need someone to help us with that.
But maybe we can take a moment to think about the usefulness of such a feature and the possibilities this might open up, rather than dismissing the use of a certain technology right off the bat. If we're unsure we can implement this in a secure way, we can still say no to this. Also, who says such a feature would be enabled by default? We can add it as an "experts only" thing and leave it up to the admins to make sure they use it in a secure environment.
I remember several discussions during PyCon(s) and on IRC where scenarios of different mailman instances talking to each other came up. Of course that doesn't mean implementing a generic oAuth provider is the only answer to this. If there are better and easier solutions, fine.
Luckily, going beyond localhost isn't something we need for the prototype that Terri suggested be built for GSoC.
Florian
Richard Wackerbarth writes:
Whoa! Perhaps I don't understand oAuth. I thought that oAuth (and persona, kerberos, etc.) were protocols whereby one system (the provider) furnishes credentials for a second system (the client) to some third system (the consumer).
That's correct.
If we assume that we distribute the MM implementation to include more than the two (core and web UI) systems by having, for example, a user manager, there might be an argument for passing around such credentials.
But the design does provide a user manager, and the "extra profile info" is in fact intended to be a user manager external to the core.
Thus, although we need some level of authentication of the agent, there is no need for third party credentials such as those implemented in oAuth.
The point is that in many cases we would like to dispense with the agent authentication process altogether, and let a third party manage that. This is perfectly acceptable in the case of open subscription lists where we simply want to ensure that only the subscriber can change their subscriptions. For example, a person subscribing a Gmail account could use that account's credentials rather than creating new ones inside of Mailman -- ones which we trust only because the person demonstrates, in a roundabout way, that they can access that mailbox. OAuth allows us to make that check directly in real time.
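As a sketch of that direct check (the userinfo URL and field name are placeholders -- each provider documents its own endpoint -- and obtaining the access token through the provider's OAuth flow is omitted):

    import requests

    USERINFO_URL = "https://provider.example/oauth2/userinfo"   # placeholder

    def address_confirmed_via_oauth(access_token, subscription_address):
        """Ask the provider, in real time, which mailbox this token belongs to,
        instead of mailing a confirmation token and waiting for it to come back."""
        resp = requests.get(USERINFO_URL,
                            headers={"Authorization": "Bearer " + access_token},
                            timeout=10)
        resp.raise_for_status()
        return resp.json().get("email", "").lower() == subscription_address.lower()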
I suppose that what Florian is thinking is that some owners want *closed* subscription processes, and therefore want to control the authentication process themselves. I agree that that is a valid and likely (if unusual) use case. I just think it's better for such users to go find a provider implementation themselves, rather than offer them something that I know *I* can't properly design or review, and I haven't seen credentials from anyone else on the team showing that they can do it, either.
There is no reason why alternate channels [to a connection from localhost authorized by the OS] cannot be substituted as long as a means of identification (such as shared secrets) is utilized.
Sure, but didn't you notice the elephant in the room as you swept it under the rug? The implementation of "alternate channels" matters *a lot*, and it's not trivial.
On Apr 18, 2013, at 11:42 AM, "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
Richard Wackerbarth writes:
There is no reason why alternate channels [to a connection from localhost authorized by the OS] cannot be substituted as long as a means of identification (such as shared secrets) is utilized.
Sure, but didn't you notice the elephant in the room as you swept it under the rug? The implementation of "alternate channels" matters *a lot*, and it's not trivial.
Just because something is important or non-trivial to implement properly does not imply that it is difficult for us to utilize it. Rather than developing our own, we can, and should, leverage the efforts of "the professionals" and use the tools that they provide (such as https and oAuth, etc.).
Certainly, the proper administration of each, and every, host is an essential element to prevent access "on the coat tails" of the trusted agents. But that also applies to the "localhost" implementation.
Richard Wackerbarth writes:
On Apr 18, 2013, at 11:42 AM, "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
Richard Wackerbarth writes:
There is no reason why alternate channels [to a connection from localhost authorized by the OS] cannot be substituted as long as a means of identification (such as shared secrets) is utilized.
Sure, but didn't you notice the elephant in the room as you swept it under the rug? The implementation of "alternate channels" matters *a lot*, and it's not trivial.
Just because something is important or non-trivial to implement properly does not imply that it is difficult for us to utilize it. Rather than developing our own, we can, and should, leverage the efforts of "the professionals" and use the tools that they provide (such as https and oAuth, etc.).
Certainly, the proper administration of each, and every, host is an essential element to prevent access "on the coat tails" of the trusted agents. But that also applies to the "localhost" implementation.
I don't understand what you're advocating, your comments are way too general.
My position is that secure authentication and authorization is a hard problem, and we should avoid doing that as much as possible (partly because as far as I know none of us are experts). No channels that few sites will use, ditto OAuth providers. Concentrate on a couple of channels with specific, well-understood, universal (or at least very common) use cases.
The channels I have in mind are (1) shell access, (2) Basic Auth over HTTPS for people who need to control access fairly tightly, and (3) OAuth and/or Persona clients allowing authentication by any of a number of public providers for user (especially subscriber) convenience. I'm not wedded to any of those (except (1), for obvious reasons), but I don't think it's a good idea to extend the list if we can avoid it.
On Apr 18, 2013, at 12:21 PM, "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
Richard Wackerbarth writes:
On Apr 18, 2013, at 11:42 AM, "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
Richard Wackerbarth writes:
There is no reason why alternate channels [to a connection from localhost authorized by the OS] cannot be substituted as long as a means of identification (such as shared secrets) is utilized.
Sure, but didn't you notice the elephant in the room as you swept it under the rug? The implementation of "alternate channels" matters *a lot*, and it's not trivial.
Just because something is important or non-trivial to implement properly does not imply that it is difficult for us to utilize it. Rather than developing our own, we can, and should, leverage the efforts of "the professionals" and use the tools that they provide (such as https and oAuth, etc.).
Certainly, the proper administration of each, and every, host is an essential element to prevent access "on the coat tails" of the trusted agents. But that also applies to the "localhost" implementation.
I don't understand what you're advocating, your comments are way too general.
My position is that secure authentication and authorization is a hard problem, and we should avoid doing that as much as possible (partly because as far as I know none of us are experts). No channels that few sites will use, ditto OAuth providers. Concentrate on a couple of channels with specific, well-understood, universal (or at least very common) use cases.
The channels I have in mind are (1) shell access, (2) Basic Auth over HTTPS for people who need to control access fairly tightly, and (3) OAuth and/or Persona clients allowing authentication by any of a number of public providers for user (especially subscriber) convenience. I'm not wedded to any of those (except (1), for obvious reasons), but I don't think it's a good idea to extend the list if we can avoid it.
Perhaps I didn't understand you. I thought that you were advocating the omission of any channels other than "shell" and "localhost". I was trying to point out that HTTPS, oAuth, etc. should be equally viable (and they don't REQUIRE that the components reside on the same host).
Richard Wackerbarth writes:
Perhaps I didn't understand you. I thought that you were advocating the omission of any channels other than "shell" and "localhost".
I'm saying that we should make appropriate Mailman components be OAuth clients (subject to site policy, per component), but try to avoid providing *any* authentication ourselves: localhost relies on existing shell access mechanisms via the OS or SSH, etc.; HTTP Basic auth relies on Apache or another webserver; OAuth we'll have to build in, but the actual authentication is done by 3rd-party providers. I suppose we'll have to provide moderation-by-password-in-headers and the traditional triple-handshake-by-mail for backward compatibility.
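Concretely, on the application side the Basic auth channel amounts to trusting the identity the webserver hands us after it has terminated HTTPS and checked the credentials -- a minimal WSGI-side sketch, with the function name made up:

    def authenticated_user(environ):
        """Apache (or another front-end server) has already terminated HTTPS
        and verified the Basic auth credentials; we only consume the identity
        it passes along in the standard REMOTE_USER variable."""
        user = environ.get("REMOTE_USER")
        if not user:
            # The front-end server is configured to reject unauthenticated
            # requests, so reaching this point indicates a misconfiguration.
            raise PermissionError("no authenticated user supplied by the webserver")
        return user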
Ah - I missed a very important channel: secure mail via OpenPGP. We need something like that (but again, the actual auth/auth process is done by GPG or PGP, we just rely on the token (signature) provided as a valid identification of a user).
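A sketch of consuming that token, assuming the third-party python-gnupg wrapper (the function name is made up; GPG itself does all of the verification):

    import gnupg   # third-party "python-gnupg" package; GPG does the real work

    def signer_of(signed_message_text, gnupghome=None):
        """Return the fingerprint of a valid signature on a clearsigned
        message, or None.  We only consume the result GPG hands back."""
        gpg = gnupg.GPG(gnupghome=gnupghome)
        verified = gpg.verify(signed_message_text)
        return verified.fingerprint if verified.valid else None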
I was trying to point out that HTTPS, oAuth, etc. should be equally viable (and they don't REQUIRE that the components reside on the same host).
I don't think anybody is opposed to exploring distributed architecture, and that implies securing inter-component communications. The question is how much of the security architecture we should provide ourselves. I advocate restricting that to the bare minimum, by which I mean "we don't *do* anything we don't *need* to do ourselves", not "we don't reimplement functionality that is available in Python packages or 3rd-party libraries we can wrap".
On Apr 18, 2013, at 11:42 AM, "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
Richard Wackerbarth writes:
Whoa! Perhaps I don't understand oAuth. I thought that oAuth (and persona, kerberos, etc.) were protocols whereby one system (the provider) furnishes credentials for a second system (the client) to some third system (the consumer).
That's correct.
If we assume that we distribute the MM implementation to include more than the two (core and web UI) systems by having, for example, a user manager, there might be an argument for passing around such credentials.
But the design does provide a user manager, and the "extra profile info" is in fact intended to be a user manager external to the core.
Thus, although we need some level of authentication of the agent, there is no need for third party credentials such as those implemented in oAuth.
The point is that in many cases we would like to dispense with the agent authentication process altogether, and let a third party manage that. This is perfectly acceptable in the case of open subscription lists where we simply want to ensure that only the subscriber can change their subscriptions. For example, a person subscribing a Gmail account could use that account's credentials rather than creating new ones inside of Mailman -- ones which we trust only because the person demonstrates, in a roundabout way, that they can access that mailbox. OAuth allows us to make that check directly in real time.
I have no problem with, and actually encourage, that we act as a consumer of oAuth credentials. However, the issue here is whether we should be a provider of oAuth credentials (which might then be presented to some outside, totally unrelated, entity).
Richard Wackerbarth writes:
I have no problem with, and actually encourage, that we act as a consumer of oAuth credentials.
+1
However, the issue here is whether we should be a provider of oAuth credentials (which might then be presented to some outside, totally unrelated, entity).
-1
2013/4/18 Richard Wackerbarth <rkw@dataplex.net>:
Whoa! Perhaps I don't understand oAuth. I thought that oAuth (and persona, kerberos, etc.) were protocols whereby one system (the provider) furnishes credentials for a second system (the client) to some third system (the consumer).
By configuration, the consumer trusts that the provider has verified the client's identity and furnished appropriate credentials. Thus, when the client presents credentials in an interaction with the consumer, the consumer provides services on the basis of the credentials.
If we assume that we distribute the MM implementation to include more than the two (core and web UI) systems by having, for example, a user manager, there might be an argument for passing around such credentials.
I was primarily thinking about the (future) authenticated REST API in Postorius. In that scenario a third party web app that we don't know would request an access token from the user profile store (provider) as well as a user's email address. It would then use that access token for requests to the Postorius API (consumer).
If the instance of the user store does not act as a provider, we would either:
- effectively require every api user to have an account with some other oauth provider, or
- use some other authentication method.
The first option is odd in my opinion. Mailman is free software. And there are not too many oauth providers that match that philosophy. As for the second, it would be totally great if we found something that doesn't require a user to provide his credentials to said 3rd party app.
I don't know. Maybe I'm missing something. And of course I agree with Stephen: If it's too hairy in terms of security, we should not do it.
Anyway, we're talking about something that is absolutely not needed for what we want to achieve *right now*, which is a profile data store that Postorius/HK/etc can access from localhost (or maybe from an internal network. Or IP-restricted through SSL. Or... ?).
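Just for the record, here is a sketch of the third-party scenario a few paragraphs up. Every URL, endpoint and parameter name below is hypothetical, since neither the profile store's token endpoint nor an authenticated Postorius API exists yet:

    import requests

    # Hypothetical: neither this token endpoint nor this Postorius API exists
    # today; names are placeholders only.
    PROVIDER_TOKEN_URL = "https://profiles.example.org/oauth/token"
    POSTORIUS_API = "https://lists.example.org/api"

    def fetch_subscriptions(client_id, client_secret, auth_code, redirect_uri):
        """Third-party app: trade an authorization code for an access token
        issued by the profile store, then call the Postorius API with it."""
        token_resp = requests.post(PROVIDER_TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "client_id": client_id,
            "client_secret": client_secret,
            "redirect_uri": redirect_uri,
        }, timeout=10)
        token_resp.raise_for_status()
        access_token = token_resp.json()["access_token"]

        api_resp = requests.get(POSTORIUS_API + "/subscriptions",
                                headers={"Authorization": "Bearer " + access_token},
                                timeout=10)
        api_resp.raise_for_status()
        return api_resp.json()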
Florian
Participants (3): Florian Fuchs, Richard Wackerbarth, Stephen J. Turnbull